SBA and FEMA have independent authorities for providing disaster assistance. State governors’ requests for SBA’s assistance are directed to the SBA Administrator through SBA’s regional offices. Under the Small Business Act, the Administrator is authorized to make or guarantee loans to victims of sudden physical disaster. The loans are made to repair or replace damaged property. In fiscal 1994, SBA obligated about $4.2 billion for disaster assistance. Under the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5121 and following), a state governor may request the President to declare that an “emergency” or “major disaster” exists in the state. The scope of authorized assistance for emergencies is smaller than that for major disasters. The act provides that requests for declarations (and therefore federal assistance) shall be based on a finding that the incident “is of such severity and magnitude that effective response is beyond the capabilities of the State and the affected local governments and that federal assistance is necessary.” FEMA gathers and analyzes information and recommends to the President whether or not federal assistance is warranted. In the event of a presidential declaration, FEMA directly supplies some assistance and coordinates the overall federal effort. The types of assistance provided include money (grants and loans), equipment, supplies, housing, and personnel. FEMA’s public assistance grants help state and local governments and eligible private nonprofit organizations to fund repairs to damaged public facilities and address health and safety threats. Individual assistance grants to individuals and families to help them recover from the effects of disaster-related damage include housing and unemployment assistance. In fiscal 1994, FEMA obligated about $5.4 billion for disaster assistance. 
Neither SBA’s nor FEMA’s disaster declaration policies and procedures differ with respect to whether the affected area is considered rural or urban. Both agencies employ a process of assessing postdisaster conditions and using a set of factors, or criteria, to determine whether or not to grant assistance. Neither agency’s factors include any measure of population density. (App. I shows the steps in each agency’s declaration process.) SBA’s declaration process and criteria are published in the Code of Federal Regulations. The criteria provide that assistance from SBA may be provided if, as a result of disaster-related damage to a county, (1) at least 25 homes or businesses have sustained uninsured losses of at least 40 percent of their replacement value or (2) at least three businesses have sustained uninsured losses of at least 40 percent of their replacement value and, as a direct result of the disaster, at least 25 percent of the workforce in the community would be unemployed for at least 90 days. To determine the extent of the damage, SBA, state, and local officials jointly assess conditions in the affected counties following a governor’s request. SBA’s policy is to suspend action on the requests it receives if the governor has requested a presidential declaration that includes individual assistance. SBA does not act on such requests until the President has made a decision on the governor’s request. In hearings before the 103rd Congress, the incoming SBA Administrator noted that all disaster declaration requests to SBA are handled in the same manner. Also, SBA’s Associate Administrator for Disaster Assistance stated that SBA treats all of the requests from states the same, whether the disaster area is rural or urban. The Stafford Act establishes the disaster declaration process. The act does not prescribe specific criteria to guide FEMA’s recommendation or the President’s decision. 
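SBA’s criterion reduces to a two-pronged test. A minimal sketch, with the thresholds taken from the criteria as described above (the function and parameter names are ours, not SBA’s):

```python
def sba_criteria_met(units_40pct_uninsured: int,
                     businesses_40pct_uninsured: int,
                     workforce_unemployed_90days_pct: float) -> bool:
    """Illustrative check of the two-pronged SBA criterion described above.

    A county may qualify if (1) at least 25 homes or businesses sustained
    uninsured losses of at least 40 percent of replacement value, or
    (2) at least 3 businesses sustained such losses AND at least 25 percent
    of the community's workforce would be unemployed for at least 90 days.
    """
    physical_damage_test = units_40pct_uninsured >= 25
    economic_injury_test = (businesses_40pct_uninsured >= 3
                            and workforce_unemployed_90days_pct >= 25.0)
    return physical_damage_test or economic_injury_test

# A county with 30 badly damaged homes qualifies on the first prong alone.
print(sba_criteria_met(30, 0, 0.0))
```

Note that the second prong requires both the business-loss and the unemployment conditions; either condition alone is not sufficient.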
As a prerequisite to federal disaster assistance under the act, a governor must take “appropriate response action” and provide information on the nature and amount of state and local resources committed to alleviating the results of the disaster. (FEMA may conduct a preliminary damage assessment, along with state, local, and/or other federal officials, before the governor requests assistance.) The President then decides whether federal assistance is needed to supplement state and local resources. The Stafford Act does not identify criteria for evaluating governors’ requests. According to the Chief, Program Policy Branch, Response and Recovery Directorate, FEMA generally considers some or all of the following factors in making a recommendation to the President:
- The number of homes destroyed or sustaining major damage.
- The number of homes sustaining minor damage.
- The extent to which the damage is concentrated or dispersed.
- The estimated cost of repairing the damage.
- The demographics of the affected areas (e.g., income levels, unemployment, and concentrations of the elderly).
- The extent to which the damage is covered by insurance.
- The extent to which the disaster area is traumatized.
- The extent of disaster-related unemployment.
- The level of assistance available from other federal agencies (e.g., SBA’s home and business loans).
- The state and local governments’ capabilities for dealing with the disaster.
- The level of assistance available from voluntary organizations (e.g., the American Red Cross).
- The availability of rental housing.
- The extent of health and safety problems.
- The extent of damage to facilities providing essential services (e.g., medical, utilities, police, etc.).
While these factors do not explicitly take into account the urban/rural status of an affected area, they include factors that could vary with measures of population density. 
For example, the number of homes destroyed or sustaining major damage might be expected to be larger in more densely populated areas than in less densely populated areas. According to the Branch Chief, these factors serve as guidelines for FEMA staff who evaluate disaster declaration requests. Staff are encouraged to apply the factors consistently, but there is no formula for applying them quantitatively. FEMA officials stated that FEMA relies most heavily on how the assessment of a state’s capability compares with the costs entailed by the disaster. However, they acknowledged that “capability” is not precisely defined and that determining a state’s capability is subjective. The flexibility and generally subjective nature of FEMA’s criteria have raised questions about the consistency and clarity of the disaster declaration process. FEMA’s Inspector General reported in 1994 that (1) neither a governor’s findings nor FEMA’s analysis of capability is supported by standard factual data or related to published criteria and (2) FEMA’s process does not always ensure equity in disaster decisions because the agency does not always review requests for declarations in the context of previous declarations. We previously reported that disclosing the process for evaluating requests would help state and local governments determine the circumstances that warrant federal assistance. Several attempts have been made to address these concerns, and FEMA is currently negotiating a partnership agreement with each state, designed in part to clarify the conditions under which FEMA’s assistance will be available. The disaster declaration process can be divided into two intervals: (1) the time between the disaster’s “incident date” and the gubernatorial request and (2) the time between the gubernatorial request and a declaration decision. The latter interval covers the period when federal agencies are actually processing disaster declaration requests. 
In addition, FEMA and SBA frequently help assess damages and/or advise state emergency personnel before a governor requests assistance. During calendar 1993 and 1994, FEMA received 120 gubernatorial requests for presidential declarations covering 2,157 counties. As shown in figure 1, the median number of days that elapsed during both intervals was greater for rural and very rural counties than for urban and very urban counties. During calendar 1993 and 1994, SBA received 73 requests covering 179 counties. Figure 2 shows that the number of days that elapsed during both intervals was generally greater for rural and very rural counties than for urban and very urban counties. Governors’ requests for SBA’s assistance may be made directly to SBA or may be included in a request for a presidential declaration. SBA’s policy is to suspend action on the latter type of requests until a presidential declaration decision is made. For the requests that were made directly to SBA during calendar 1993 and 1994 (covering 124 counties), the median number of days between the gubernatorial request and SBA’s decision was 7, or 34 days less than for requests that had been included in a request for a presidential declaration. For all requests made to FEMA and SBA, in addition to computing the medians, we also computed the mean number of days for each interval. The results showed the same general pattern: The mean times tended to be longer for rural and very rural counties. (See app. II for more details on the processing times for disaster declaration requests.) As noted above, neither FEMA’s nor SBA’s factors for assessing requests for disaster declarations and helping determine whether or not to grant assistance include any measure of population density. 
Therefore, while the data show a general pattern of smaller median and mean elapsed times as county population density increases, they should not be interpreted as demonstrating that population density determines the length of elapsed time from request to declaration. As shown in figure 3, a greater proportion of requests for very rural counties resulted in presidential disaster declarations than did requests for counties in the other categories. SBA, on the other hand, denied a greater proportion of requests for rural and very rural counties than for urban and very urban counties. Similar to the data on elapsed time, the data on the proportion of requests approved and denied should not be interpreted as demonstrating that population density determines the approval/denial decision. FEMA and SBA officials stated that many factors can affect the time that elapses during the declaration process. For example, according to the FEMA Branch Chief, it generally takes longer to travel to remote areas to assess damages. We reviewed selected requests for disaster declarations by the President and SBA: (1) requests for which the time between the incident date and the gubernatorial request was notably shorter or longer than the median and (2) additional requests for which the time between the gubernatorial request and the declaration decision was notably shorter or longer than the median. The results are summarized below and detailed in appendix III. One factor affecting the length of time between a disaster incident and a gubernatorial request for a declaration was how quickly a preliminary damage assessment could be made. 
For example, when a severe winter storm struck 44 counties in Virginia in 1994, preliminary damage assessments were not made immediately because (1) federal, state, and local emergency personnel were still responding to a severe winter storm that had struck the same counties less than 1 month previously, (2) it was difficult to differentiate the damages from the two storms, and (3) the storm made travel to some areas difficult. Furthermore, the situation was not life threatening. The governor waited until the damage assessments were completed for most of the affected counties before asking for a declaration. Conversely, a preliminary damage assessment was completed more quickly than usual following a 1993 earthquake in a very rural Oregon county. The speed with which the assessment was completed contributed to the governor’s requesting a disaster declaration from SBA in less-than-average time. Another factor affecting the timing of requests to SBA is whether or not the governor first requests a presidential declaration for the same disaster incident. For example, following 1994 floods and tornadoes in North Carolina, the governor first requested a presidential disaster declaration. The same day that FEMA denied this request, the governor requested a declaration from SBA—more than 4 weeks after the disaster incident. The factors that affected the length of time between a gubernatorial request and a declaration decision included (1) the extent of documentation in the governor’s request and (2) in the case of requests for SBA’s assistance, whether or not the request was included in a request for a presidential declaration. Gubernatorial requests that are well documented generally can be processed more quickly, while missing documentation can contribute to delays, as in the following example. South Dakota experienced severe storms and flooding from March through July 1994, and in June the governor requested a disaster declaration from SBA. 
SBA required additional documentation showing that the incident was “sudden” (the Small Business Act does not authorize assistance for “gradual” incidents), lengthening the time required before reaching a decision. Conversely, state officials credited the clarity of SBA’s criteria for the agency’s relatively quick decision on a request for assistance following a 1994 flood in a very urban Pennsylvania county. Because the criteria were clear, the governor could clearly address them in the disaster declaration request. Also contributing to SBA’s quick declaration was that the damage occurred in a concentrated area, and it is easier to evaluate damage when it is concentrated than when it is more widely dispersed. The time that elapses between a governor’s request and SBA’s decision can be affected by whether or not the governor has requested a presidential declaration for the same disaster incident. As noted above, SBA’s policy is to suspend action on requests it receives if the governor has requested a presidential declaration that includes individual assistance. We provided a draft of this report for comment to the FEMA Director and received comments from the Associate Director, Response and Recovery Directorate. We also provided a draft to the SBA Administrator and received comments from the Associate Administrator for Disaster Assistance. (SBA’s written comments and our responses are in app. IV.) FEMA generally concurred with the information presented in the draft report. FEMA suggested minor revisions to clarify our description of the disaster declaration process, and we incorporated those changes as appropriate. (App. V contains FEMA’s written comments.) The SBA Associate Administrator stated that the draft report contained four points that could be misleading without further explanation. 
First, he suggested that we clarify that the time that elapses between a request for SBA’s assistance and SBA’s decision may be influenced by whether the request is made directly to SBA or is included in a request for a presidential declaration. To address this point, we added clarifying language as well as tables II.4 and II.5 (see app. II), which show the elapsed time for those requests included in a request for a presidential declaration and those made directly to SBA, respectively. SBA’s second point was that because gubernatorial requests are made on a state basis, analyzing response time to disasters on a county-by-county basis could “skew” the overall results. SBA suggested using the state as the unit of analysis. Our unit of analysis was the county because that enabled us to better distinguish between “rural” and “urban” areas. Accordingly, our analysis treats each county in a request equally, and our computed median times reflect the effects of the number of counties in each population density category and the length of time that elapsed between the request and SBA’s decision. We do not believe that using states as the unit of analysis would allow us to distinguish between the experiences of rural and urban areas. The third point SBA raised was that for the gubernatorial request date, we used the date of the governor’s request for assistance rather than the actual receipt of the governor’s letter by SBA. SBA provided a sample of requests showing, on average, a 2-day difference between the date of the governor’s letter and the actual date SBA received the request. We used the date of the letter to ensure consistency with FEMA’s data set. We included SBA’s sample in the final report. SBA’s fourth comment was that the report should clarify that in those cases in which SBA denied the governor’s request for a declaration, the requests were denied because they did not meet the agency’s criteria. 
Because our draft report stated that SBA relies on its criteria to determine eligibility for a declaration, we did not revise the final report. To respond to your request, we reviewed relevant legislation and FEMA’s and SBA’s regulations for requests for disaster assistance and interviewed cognizant officials at each agency. Using copies of gubernatorial requests for disaster declarations and other documents from both FEMA and SBA and automated data provided by FEMA, we compiled a database to analyze the timing of events and the proportion of requests approved for each category of county. For the case studies, we interviewed federal headquarters and regional office personnel and state emergency management officials and obtained relevant documentation. We performed our work between February and July 1995 in accordance with generally accepted government auditing standards. (See app. VI for further details on our scope and methodology.) We are sending copies of this report to the Administrator, SBA; the Director, FEMA; appropriate congressional committees; and other interested parties. Should you or your staff have any questions, you can reach me at (202) 512-7631. Major contributors to this report are listed in appendix VII.

Although the two agencies operate under separate authorities, the Federal Emergency Management Agency (FEMA) and the U.S. Small Business Administration (SBA) follow similar processes for federal disaster declarations. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 5121 and following) authorizes the President to declare that an emergency or major disaster exists in a state, if requested by the governor of the state, and to make federal assistance available to supplement state and local resources. Figure I.1 shows the steps generally involved in the disaster declaration process. 
The preliminary damage assessment (PDA) is a mechanism used to determine the impact and magnitude of damage and the resulting unmet needs of individuals, businesses, the public sector, and the community as a whole. Information collected is used by the state in preparing the governor’s request and by FEMA in making a recommendation to the President about whether and what type(s) of assistance is warranted. A presidential declaration authorizes federal assistance (which may be public assistance, individual assistance, or both) in the affected state(s) after a governor’s request. The federal assistance authorized by the President includes assistance from other federal agencies, including SBA. The President delegates authority to FEMA to determine which counties within the state will receive assistance and the type(s) of assistance to be provided. At any point after the governor’s initial request letter, the governor may request that additional counties be made eligible and/or that additional types of assistance be provided, as part of the same disaster declaration. These requests are submitted to FEMA’s regional offices. The process generally proceeds as follows:
- FEMA, other federal, state, and local government personnel conduct an on-site preliminary damage assessment (PDA).
- FEMA regional personnel summarize the information collected during the PDA and send a summary to FEMA headquarters for further analysis.
- The governor requests assistance from the President, certifying that the severity of the disaster is beyond state and local capability.
- The FEMA Director recommends a declaration action to the President based on the analysis.
- The President determines whether to grant or deny the gubernatorial request. The governor may appeal the decision.

Under the provisions of the Small Business Act, the SBA Administrator is authorized to make or guarantee loans to victims of sudden physical disaster if requested by the governor of a state. Figure I.2 shows the steps involved in the SBA declaration process. 
The steps in SBA’s process are as follows:
- The governor requests assistance from the SBA Administrator, or the governor includes SBA assistance in a request for a presidential disaster declaration; if the presidential disaster declaration is turned down, FEMA refers the request to SBA.
- SBA regional and area office staff and state and local representatives assess damage.
- SBA staff analyze whether declaration criteria have been met and recommend a declaration action to the SBA Administrator.
- The SBA Administrator decides whether the request for a declaration should be granted. SBA has no formal appeals process.

With some exceptions, the times taken for governors to request a presidential or SBA disaster declaration and the times taken for the President or SBA Administrator to reach a decision were longer for rural and very rural counties. Consequently, the overall times from disaster incidents to decisions on federal aid were longer for these counties than for urban or very urban counties. To determine the amount of time that the declaration process takes for requests to the President through FEMA, we reviewed all counties for which emergency and/or major disaster declarations were requested in calendar 1993 and 1994. These totaled 2,874 counties, divided as follows: (1) 2,157 counties that were included in original gubernatorial requests (“initial” counties) and (2) 717 counties that governors subsequently asked FEMA to make eligible for assistance (“add-on” counties). Because FEMA does not maintain centralized records of add-on counties that were turned down, the add-on counties for which we obtained information are limited to those that were declared eligible for federal assistance. The total time that elapses between a disaster incident and a decision on federal aid depends on how quickly a governor asks for assistance, as well as how quickly federal officials act on the request. 
Accordingly, for each initial county we computed the numbers of days that elapsed from the disaster incident to the gubernatorial request; from the gubernatorial request to the date that a declaration decision was made; and the total time that elapsed from incident to declaration decision. As shown in table II.1, the number of days that elapsed both before and after gubernatorial requests generally tended to be shorter for initial counties as county population density increased. (Table II.1 covers 157 very rural, 827 rural, 837 urban, and 336 very urban counties, for a total of 2,157.) As noted above, for the “add-on” counties, FEMA does not maintain centralized information; therefore, we were able to compute only the total times elapsed between disaster incidents and declaration decisions. As table II.2 shows, while there is somewhat more variation, the overall pattern was the same as for initial counties: The number of days that elapsed generally tended to be smaller as county population density increased. (Table II.2 covers 107 very rural, 393 rural, 161 urban, and 56 very urban counties, for a total of 717.) A greater proportion of add-on counties fell into the “very rural” or “rural” categories than did initial counties. According to FEMA officials, it typically takes longer to obtain accurate damage reports from more remote areas where the extent of damages may not be apparent as quickly. For requests to SBA, we reviewed those counties for which physical disaster loan requests had been made during calendar 1993 and 1994; these totaled 179. Similar to our treatment of presidential declarations, for each county we computed the number of days that elapsed from the disaster incident to the gubernatorial request; from the gubernatorial request to the date that a declaration decision was made; and the total time that elapsed from incident to declaration decision. In addition, we separately analyzed requests to SBA that (1) were referred by FEMA and (2) were made directly by governors. 
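The per-county interval arithmetic can be sketched in a few lines of code (the records below are hypothetical, loosely patterned on the cases in appendix III; the field layout and function name are ours):

```python
from datetime import date
from statistics import median

# Hypothetical per-county records: (density category, incident date,
# gubernatorial request date, declaration decision date).
counties = [
    ("very rural", date(1993, 9, 20), date(1993, 10, 1), date(1993, 10, 15)),
    ("urban",      date(1994, 7, 14), date(1994, 8, 3),  date(1994, 8, 8)),
    ("urban",      date(1994, 7, 14), date(1994, 8, 3),  date(1994, 8, 9)),
]

def elapsed_by_category(records):
    """Median days per density category for each of the three intervals:
    incident-to-request, request-to-decision, and incident-to-decision."""
    groups = {}
    for category, incident, request, decision in records:
        pre = (request - incident).days    # incident -> gubernatorial request
        post = (decision - request).days   # request -> declaration decision
        groups.setdefault(category, []).append((pre, post, pre + post))
    return {category: tuple(median(col) for col in zip(*intervals))
            for category, intervals in groups.items()}

print(elapsed_by_category(counties))
```

Mean times can be computed the same way by substituting statistics.mean for statistics.median.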
Governors may explicitly request SBA’s assistance as part of their request to FEMA for a presidential disaster declaration. If the request is granted, then SBA may provide assistance without a separate SBA declaration. If such requests for presidential declarations are turned down, then FEMA refers the requests to SBA. FEMA does not refer to SBA turned-down requests in which a governor has not explicitly requested SBA assistance; therefore, in those cases, the governor must request a declaration directly from SBA. Table II.3 shows that for all requests for SBA disaster assistance, the number of days that elapsed before and after a gubernatorial request generally tended to be smaller as county population density increased. (Table II.3 covers 16 very rural, 52 rural, 66 urban, and 45 very urban counties, for a total of 179.) Tables II.4 and II.5 present the time elapsed for SBA disaster assistance requests (1) referred by FEMA and (2) made directly to SBA, respectively. The tables show a similar overall pattern: The number of days that elapsed before and after a gubernatorial request generally tended to be smaller as county population density increased, whether the requests were referred by FEMA or made directly to SBA. The tables also show that the median and mean processing times were somewhat greater for requests referred by FEMA. (Table II.4, for requests referred by FEMA, covers 1 very rural, 30 rural, 13 urban, and 11 very urban counties, for a total of 55; table II.5, for requests made directly to SBA, covers 15 very rural, 22 rural, 53 urban, and 34 very urban counties, for a total of 124.) To identify factors that may influence the time that elapses during the disaster declaration process, we reviewed selected cases of disaster declaration requests at FEMA and SBA. (We defined a “case” as a request to either the President or the SBA Administrator for a declaration in a state for one disaster incident; details on our selection criteria are in app. V.) 
We selected a total of eight cases, four dealing with the time period between disaster incident and gubernatorial request and four dealing with the time period between gubernatorial request and declaration decision. These eight cases represent seven disaster incidents (one incident resulted in a request for a presidential disaster declaration that was turned down, and a subsequent request for a declaration from the SBA Administrator). The following provides descriptive information about each case, citing factors that may have affected the time that elapsed in each case. However, because the cases were not randomly selected, they should not be viewed as representative of all disaster declaration requests. The cases we reviewed suggest that a principal factor affecting the length of time between a disaster event and a gubernatorial request for a declaration was how quickly preliminary damage assessments (PDA) could be made. A number of variables can affect the speed with which PDAs are conducted. A second factor that affected requests to SBA was whether or not the governor had requested a presidential disaster declaration for the same disaster event. Virginia experienced a severe winter ice storm on March 1-5, 1994. The situation was not considered life threatening. The governor requested a presidential declaration of a major disaster on March 15. The governor asked for public assistance for 44 counties, about half of which were rural. The storm came on the heels of an earlier (February 8-12) severe winter storm that struck generally the same areas of Virginia. (The President had declared a major disaster following the earlier storm.) Damage survey teams composed of federal, state, and local personnel were still on the scene responding to the earlier storm when the later storm hit. This situation created a dilemma: The survey teams are generally composed of the same personnel who conduct PDAs and often, in rural areas, include volunteers. 
One option was for the survey teams to interrupt their work to conduct PDAs for the second storm. However, federal and state officials determined that it would be more efficient to concurrently conduct surveys for the first storm and PDAs for the second storm. Therefore, the principal factor contributing to a prolonged period between the incident date and the gubernatorial request was that the PDAs were not conducted immediately following the storm. Officials thought it was prudent to delay the assessments since the request was for “cleaning up” rather than saving lives and because a delay would allow more efficient use of the limited and exhausted human resources available. Other factors contributed to the length of time between the incident and the governor’s request:
- The governor did not submit the request until most of the PDAs were complete. Frequently, governors will request a declaration once it is established that at least some counties are eligible and that the state lacks the capability to respond to the incident. Federal and state officials noted that for this storm, waiting until most of the PDAs had been completed enabled them to obtain a clearer understanding of the extent of the damage, and as a result, FEMA was able to process the request with few questions about eligibility.
- The PDAs were more difficult to conduct because it was hard to differentiate the damages caused by the two storms.
- The nature of the storm—ice accompanied by 3 inches of snow, winds of 50-60 miles per hour, and subsequent flooding—made travel to conduct the assessments challenging.
As a result of the second storm, the President declared a major disaster for Virginia, and FEMA determined that 33 of the 44 counties requested by the governor would be designated as eligible for public assistance grants. 
Utah experienced a severe winter storm from January 2 through January 11, 1993, that affected five counties—Salt Lake County, another very urban county, one urban county, and two very rural counties. Up until January 8, when the storm intensified, the state and local governments were able to respond to the storm. The governor declared a state of emergency on January 11 after a record-breaking snowfall and on January 16 requested a presidential emergency declaration. The PDA indicated that snow debris removal was the most significant need. The request was denied on the grounds that the situation was not deemed to be beyond the combined capabilities of the state and local governments. Utah unsuccessfully appealed the denial. One factor contributing to our computed longer-than-average time is the incident date recorded by FEMA. FEMA’s records show the incident date as January 2; however, a more accurate incident date might be later because the storm intensified starting January 8. The event period was never fully defined because the request was denied. A second potential factor was the nature of the incident. The request was unique because it was the first snow-removal request FEMA had received in nearly 15 years. FEMA’s “snow” policy had become inactive, and confusion prevailed over matters such as determining which costs were eligible. A third factor was that the record-breaking snow levels made communications between the localities and the state difficult, delaying the state’s ability to obtain critical information on the extent of storm damage. North Carolina experienced damage caused by floods and tornadoes on August 16 and 17, 1994. SBA and FEMA personnel as well as state and local officials conducted a PDA of several counties on August 22-24. The governor requested a presidential declaration for 14 counties, mostly urban, on August 30; the request was denied on September 21. 
On that same day—more than 4 weeks after the incident—the governor requested assistance from SBA. On September 27, the SBA Administrator declared 2 of the 14 counties eligible for SBA assistance. Another eight counties were eligible for assistance because they were contiguous to the two declared counties. Much of the time that elapsed between the disaster incident and the governor’s request to SBA can be attributed to processing the presidential request. While SBA made a decision 6 days after receiving the request, over 6 weeks had already elapsed before the governor requested SBA assistance. Klamath County, Oregon (a very rural county), experienced an earthquake on September 20, 1993. On September 27-29, FEMA and SBA jointly conducted a damage assessment to determine the extent of the damage. On September 30, the governor requested an SBA disaster declaration for the county. On October 1, the governor requested a presidential major disaster declaration for public assistance. According to state officials, a principal factor that may have contributed to a gubernatorial request in less-than-average time for very rural counties was the prompt damage assessment, which was conducted before the governor’s official request for assistance. On the basis of the cases we reviewed, factors that affected the length of time between a gubernatorial request and a declaration decision included (1) the extent of documentation of the damage in the governor’s request and (2) whether the damage occurred in a concentrated or more widely dispersed area. South Dakota experienced severe storms and flooding from March through July 1994. On June 6, the governor requested a presidential major disaster declaration primarily to repair road and bridge damage for 15 counties (11 very rural and 4 rural counties). The President made the declaration on June 21, and FEMA designated all 15 counties. The governor subsequently requested that FEMA designate an additional six counties, and FEMA did so. 
FEMA officials explained that because the land in South Dakota is flat, flood waters subside more slowly than in less flat areas, so flooding incidents there—as in this case—often last longer; the same is true of North Dakota, where the topography is similar. As a result, waiting for the flood waters to subside so that the extent of damage could be determined took longer than it would have in less flat areas. Also, the Dakotas are sparsely populated and contain relatively more public facilities that are less expensive to repair and replace than those in more densely populated areas; for example, gravel roads, which are common in the Dakotas, cost less to repair than highways. In addition, the overriding factor that FEMA employs in determining eligibility for a disaster declaration is state and local capability. Because the road and bridge repair costs from this disaster appeared to be relatively low, FEMA, in determining state and local government capability, scrutinized the request more closely than it would a request involving more obvious and expensive damage. FEMA officials noted that although the costs may be lower, the impact is not necessarily less. In addition to the request to the President for public assistance, the governor requested an SBA declaration for assistance to households and businesses in the same 15 counties as requested in the presidential request. The gubernatorial request was made on June 17; on July 14, the SBA Administrator declared two counties eligible for SBA assistance. (Six other counties, because they were contiguous to the two declared counties, became eligible for assistance.) SBA's records indicate that the gubernatorial request did not establish that the flooding was a "sudden" physical event. Because the Small Business Act prohibits providing assistance for gradual events, SBA required additional information and adequate documentation that the event was sudden, lengthening the declaration process.
Also, SBA did not receive the gubernatorial request letter until 6 days after the date of the letter. Philadelphia County, Pennsylvania (a very urban county), experienced widespread urban flooding due to thunderstorms and heavy rain on July 14, 1994. On August 3, the governor requested SBA assistance. On August 4, SBA personnel along with state and local officials conducted a survey to determine the extent of the damage caused by the flooding. On August 8, the SBA Administrator declared the county eligible for SBA assistance. State officials suggested that SBA’s processing of disaster declaration requests is expedited because SBA provides clearly detailed criteria and instructions for evaluating whether the criteria have been met. They noted that by closely following the instructions and carefully addressing the criteria in the gubernatorial request, the declaration is usually forthcoming. Also, SBA officials explained that the flooding occurred in a concentrated area. It is easier to evaluate the extent of damage when it occurs in a geographically concentrated area than when the damage is more widely dispersed. Therefore, SBA was able to quickly assess the damage. From March 13 through March 17, 1993, Tennessee (as well as 16 eastern and mid-Atlantic states and the District of Columbia) experienced a severe winter storm with excess snowfall. The states and the District requested that the President declare an emergency and requested public assistance for cleaning up the storm-related damages. The time from the governor’s request to the presidential decision was shorter than average for all 18 requests received. FEMA officials explained that the overriding factor in considering the 18 declaration requests was the storm’s “crippling” impact. The requirement to conduct PDAs was waived, and the decision to provide emergency assistance was made more quickly than usual. 
FEMA policy is to waive PDAs (i.e., expedite processing) only for disasters of the greatest magnitude, such as Hurricane Andrew. FEMA expedited the processing of gubernatorial requests for this storm. In addition to waiving the PDA requirement, FEMA provided a draft request letter to the affected states and drafted a snow-removal policy. The states and the District requested public assistance only. The following are GAO’s comments on the Small Business Administration’s letter dated July 31, 1995. 1. SBA’s first comment was that for requests for SBA disaster assistance, our data on median elapsed times did not distinguish between (1) requests that are made directly to SBA and (2) requests that are referred by FEMA in cases in which requests for a presidential disaster declaration have been turned down. Our draft report stated that SBA’s policy is to suspend action on requests it receives if the governor has requested a presidential declaration that includes individual assistance and that SBA does not act on such requests until the President has made a decision on the request for a presidential declaration. To respond to SBA’s comment, we disaggregated requests for SBA assistance between (1) requests that are made directly to SBA and (2) requests that are referred by FEMA. We then computed the median and mean elapsed times for each group and included these data in appendix II. Among other things, the data show that for the requests that were made directly to SBA during 1993 and 1994, the median number of days between the gubernatorial request and SBA’s decision was 7, or 34 days fewer than the median of 41 days for requests referred by FEMA. We added language to the letter noting this distinction. 2. SBA’s second comment was that we used counties rather than states as the units of analysis and that governors’ requests are on a “state basis.” We used counties because doing so enabled a somewhat better distinction between “rural” and “urban” areas. 
Our report noted that (1) in cases of requests for a presidential disaster declaration, FEMA determines which counties within a declared state will receive assistance and (2) in the absence of a presidential declaration, the SBA Administrator may declare that counties struck by disasters are eligible to receive some types of SBA assistance. Our analysis treats each county in a request equally. Our computed median times reflect the effects of the number of counties in each population density category and the length of time that elapsed between the request and SBA’s decision. We do not believe that using states as the unit of analysis would allow us to distinguish between the experiences of rural and urban areas. 3. SBA’s third comment concerned our definition of “date of governor’s request” for SBA assistance. We used the dates that appeared on the governors’ letters to ensure a data set consistent with FEMA’s records. Our draft report noted that according to SBA officials, a gubernatorial request letter may not necessarily be mailed on the date of the letter and that any delays in mailing would help account for the time lapse between a gubernatorial request and an SBA decision, as shown by our analysis. We added the figures cited in SBA’s comments to the final report. 4. SBA’s fourth comment was that the report should clarify that any counties that were denied SBA assistance did not meet the agency’s published criteria. Our draft report stated that SBA uses criteria, published in the Code of Federal Regulations, to determine whether or not a county is eligible for disaster assistance; therefore, we do not believe any change is necessary on the basis of this comment. To determine if FEMA’s and SBA’s disaster declaration policies and procedures differ for requests for rural, as compared with urban, areas, we reviewed the Robert T. Stafford Disaster Relief and Emergency Assistance Act (42 U.S.C. 
5121 and following), the Small Business Act, and FEMA's and SBA's procedures as outlined in the Code of Federal Regulations. We also interviewed FEMA and SBA officials responsible for administering the disaster declaration process and obtained and reviewed guidance used in evaluating disaster declaration requests at each agency. To compare the length of time each agency took to respond to requests for rural and urban areas and to compare the proportion of requests for rural areas that were granted with the corresponding proportion for urban areas, we developed a database using FEMA's and SBA's disaster declaration request records and Bureau of the Census' county and state information. We did not verify the accuracy of these records. The database included all counties that were included in requests for "emergency" or "major disaster" declarations under the Stafford Act, or for SBA physical disaster assistance, received during calendar 1993 and 1994. We counted each county each time it was included in a request. We limited the scope of our review to calendar 1993 and 1994 because those were the years for which the best information was available. FEMA and SBA officials told us that these years were not atypical. We used county population and land area data from the 1990 U.S. census to compute a measure of population density for each county. We then used the Bureau of the Census' population density categories to classify each of the counties as very rural, rural, urban, or very urban, as shown in table VI.1, which lists the Census Bureau population density thresholds (persons/sq. mile) for each category. Using FEMA's and SBA's records, we included for each county the dates of (1) the disaster incident, (2) the gubernatorial request, and (3) the decision by the President or SBA Administrator. For presidential declaration requests: We obtained the disaster incident dates from FEMA's automated information system and notices published in the Federal Register.
In cases in which an incident spanned more than 1 day, we used the first day of the period. We used the dates on the governors' request letters for the gubernatorial request dates. For the decision date, we used the date of the declaration as published in the Federal Register if the request was granted. Requests that are turned down do not result in a Federal Register notice. When all counties included in requests were turned down, we obtained the date from FEMA's automated information system. However, FEMA does not centrally maintain records of add-on counties that were denied eligibility for assistance. Therefore, we excluded those counties from our timing calculations. We disaggregated requests for SBA assistance between (1) requests that were made directly to SBA and (2) requests that were referred by FEMA. For all requests, we obtained the disaster incident dates and SBA decision dates from SBA's files. For requests made directly to SBA, we defined the "gubernatorial request date" as the date of the governor's letter. For requests referred by FEMA, we used the FEMA "turn-down" date—the date that FEMA announced that a governor's request for a presidential declaration had been denied—as the date of the request to SBA. We used this information to compute the median and mean number of days that elapsed, for each county, between the disaster incident and the gubernatorial request, and between the gubernatorial request and the decision. We also included information on whether each county was granted assistance or was turned down. For the "add-on" counties, FEMA does not maintain centralized records showing the gubernatorial request dates; therefore, we were able to compute only the total time elapsed from disaster incident to a declaration decision.
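The density classification and timing computations described above can be sketched as follows. This is an illustrative sketch only: the density thresholds are assumptions (table VI.1's actual cutoffs are not reproduced here), and the county records and field names are hypothetical.

```python
from statistics import median, mean

# Hypothetical density thresholds (persons/sq. mile); the actual
# Census Bureau cutoffs appear in table VI.1 and are assumed here.
def density_category(population, land_area_sq_mi):
    density = population / land_area_sq_mi
    if density < 10:
        return "very rural"
    elif density < 50:
        return "rural"
    elif density < 1000:
        return "urban"
    return "very urban"

# Each record represents one county, counted once for each request it
# appeared in; dates are expressed as day numbers for simplicity.
counties = [
    {"pop": 6_000, "area": 1_200, "incident": 0, "request": 12, "decision": 25},
    {"pop": 900_000, "area": 800, "incident": 0, "request": 4, "decision": 11},
    {"pop": 40_000, "area": 1_000, "incident": 0, "request": 9, "decision": 20},
]

# Elapsed days for the two intervals (incident-to-request and
# request-to-decision), grouped by density category.
by_category = {}
for c in counties:
    cat = density_category(c["pop"], c["area"])
    by_category.setdefault(cat, []).append(
        (c["request"] - c["incident"], c["decision"] - c["request"])
    )

for cat, spans in sorted(by_category.items()):
    to_request = [s[0] for s in spans]
    to_decision = [s[1] for s in spans]
    print(cat,
          median(to_request), mean(to_request),
          median(to_decision), mean(to_decision))
```

With real data, each list in `by_category` would hold many county-request records, and the per-category medians and means would correspond to the figures reported in appendix II.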
For each county population density category, we computed the median and mean numbers of days for each of the two time periods, the proportion of requests that resulted in assistance being granted, and the proportion that were turned down. To identify factors that influence the length of time taken for the disaster declaration process at each agency, we selected as case studies the cases that took approximately 25 percent more than the median time between the disaster incident and gubernatorial request and the cases that took approximately 25 percent less. Similarly, at each agency we selected the cases that took approximately 25 percent more than the median time between gubernatorial request and a disaster declaration decision and the cases that took approximately 25 percent less. We judgmentally selected cases that were diverse, differing by such characteristics as type and size of disaster, geographic location, type of declaration requested (presidential emergency, presidential major disaster, or SBA), and outcome (request granted or turned down). We interviewed relevant agency and state officials to identify the factors in each case that appeared to affect the time intervals. We used the cases to illustrate what happened in a few instances to speed up or slow down the disaster declaration process. The cases are not necessarily representative of all disaster declaration requests, and they should not be interpreted as explaining all variation in elapsed times among requests.
Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs (GAO/T-RCED-95-140, Mar. 16, 1995).
GAO Work on Disaster Assistance (GAO/RCED-94-293R, Aug. 31, 1994).
Los Angeles Earthquake: Opinions of Officials on Federal Impediments to Rebuilding (GAO/RCED-94-193, June 17, 1994).
Federal Disaster Insurance: Goals Are Good, but Insurance Programs Would Expose the Federal Government to Large Potential Losses (GAO/T-GGD-94-153, May 26, 1994).
Disaster Management: Improving the Nation's Response to Catastrophic Disasters (GAO/RCED-93-186, July 23, 1993).
Disaster Assistance: DOD's Support for Hurricanes Andrew and Iniki and Typhoon Omar (GAO/NSIAD-93-180, June 18, 1993).
Rural Disaster Assistance (GAO/RCED-93-170R, June 14, 1993).
Disaster Relief Fund: Actions Still Needed to Prevent Recurrence of Funding Shortfall (GAO/RCED-93-60, Feb. 3, 1993).
Disaster Assistance: Timeliness and Other Issues Involving the Major Disaster Declaration Process (GAO/RCED-89-138, May 25, 1989).
Pursuant to a congressional request, GAO provided information on the federal disaster declaration process, focusing on: (1) whether the Federal Emergency Management Agency's (FEMA) and the Small Business Administration's (SBA) disaster declaration policies differ for rural and urban areas; (2) the length of time taken to respond to disaster declaration requests for rural and urban areas; (3) the proportion of requests granted for rural areas, as compared with the corresponding proportion for urban areas; and (4) factors that influence disaster declaration processing time. GAO found that: (1) neither FEMA's nor SBA's disaster declaration policies differ with respect to whether the affected area is rural or urban; (2) both agencies use criteria such as measures of damage to homes, businesses, and public facilities to assess requests for disaster declarations and to help determine whether or not to grant assistance; (3) neither agency's criteria include a measure of population density; (4) for requests received in calendar 1993 and 1994, the time that elapsed between the governors' requests and the declaration decisions by the President or SBA was longer for rural and very rural counties than for urban or very urban counties; (5) for example, the median processing time for requests to FEMA for very rural counties was 11 days, and for very urban counties, it was 7 days; (6) similarly, the time that elapsed between the occurrence of a "disaster incident" and the governor's request for a disaster declaration was longest for very rural counties and shortest for very urban counties (medians of 10 days and 4 days, respectively, for requests made to the President); (7) in disasters declared by the President, FEMA made a greater proportion of very rural counties (93 percent) eligible for assistance than any other type of county; (8) in contrast, SBA declared a greater proportion of urban and very urban counties (58 percent and 70 percent, respectively) eligible for assistance than rural and very rural counties; (9) in the cases GAO reviewed, various factors affected the time required for the declaration process; one factor affecting the length of time between a disaster incident and a gubernatorial request for a declaration was how quickly damage assessments could be made; and (10) among the factors that affected the length of time between a gubernatorial request and a declaration decision was the extent to which the damage was documented in the governor's request.
WHO, in conjunction with the United States and other governments, has developed an international strategy for forestalling the onset of an influenza pandemic. Elements of this strategy include restricting the movement of people in and out of the affected area, isolation of ill persons, and school closures. Antivirals are also an important element of this strategy. Studies suggest that using antiviral drugs, along with other interventions, to treat infections and prevent illness might contain a pandemic at the site of the outbreak or at least slow its international spread, thus gaining time to put emergency measures in place and begin producing matched vaccines that would be effective in preventing individuals from being infected with the strain of influenza causing the pandemic. Influenza, also called “the flu,” is caused by a virus that primarily attacks the upper respiratory tract—the nose and throat—and sometimes the lungs. Influenza is characterized by cough, fever, headache, and other symptoms and is more severe than some viral respiratory infections, such as the common cold. In almost every year a seasonal influenza virus causes acute respiratory disease in epidemic proportions somewhere in the world. Most people who contract seasonal influenza recover completely in 1 to 2 weeks, but some develop serious and potentially life- threatening medical complications, such as pneumonia. Most healthy adults may infect others 1 day before getting symptoms and up to 5 days after they first develop symptoms. Some young children and people with weakened immune systems may be contagious for more than a week. WHO estimates that seasonal influenza affects about 5 to 15 percent of the world’s population each year, causing 3 to 5 million cases of severe illness worldwide including 250,000 to 500,000 deaths. There are three types of influenza viruses: A, B, and C. However, only influenza A viruses cause pandemics. 
Influenza A viruses are further categorized into subtypes according to differences in the “HA” and “NA” proteins that are on the outer surface of the virus. These influenza A subtypes are further characterized into strains. Influenza strains mutate, or genetically change, over time. As strains mutate, new strains of influenza viruses appear and may replace older, circulating strains. When a new strain of human influenza virus emerges, immunity that may have developed after a previous infection or vaccination may not provide protection against the new strain. Small mutations in the influenza virus are the reason why someone who has previously been infected with influenza can still be susceptible to seasonal or common influenza. More substantial changes in the influenza virus can result in the emergence of a pandemic influenza subtype. Pandemic human influenza is a virulent influenza that causes a global outbreak, or pandemic, of serious illness. It occurs when an existing strain of the influenza virus is replaced by a new influenza A strain to which humans have no immunity, resulting in widespread morbidity and mortality. According to WHO, pandemic influenza can spread to all parts of the world very quickly, usually in less than a year, and can sicken more than a quarter of the global population. Three conditions must be met before an influenza pandemic begins: (1) a new influenza virus subtype that has not previously, or at least recently, circulated in humans must emerge, (2) the virus must be capable of causing disease in humans, and (3) the virus must be capable of sustained human-to-human transmission. The H5N1 virus currently meets the first two of these three conditions but not the third. The current H5N1 pandemic influenza threat stems from an unprecedented outbreak of H5N1 influenza that first appeared in birds in southeastern China and Hong Kong in 1996 and 1997 and was first detected in humans in Hong Kong in 1997. 
The virus reappeared in late 2003 and early 2004 and has since spread in bird populations across parts of Asia, Europe, and Africa, with limited infections in humans. From December 1, 2003, to December 11, 2007, H5N1 was detected in animals in 60 countries. According to WHO, the geographical spread of H5N1 in animals in 2006 was the fastest and most extensive of any pathogenic avian influenza virus recorded to date. From January 1, 2003, through December 12, 2007, WHO reported 338 confirmed human cases, including 208 human deaths from the H5N1 virus in a total of 12 countries—a case fatality rate of 62 percent. Scientists and public health officials agree that the spread of the H5N1 virus in birds and the occurrence of infections in humans have increased the risk that this disease may change through adaptive mutation or reassortment into a form that is easily transmissible among humans, resulting in an influenza pandemic. HHS stated that little is known about how to control a pandemic and that it is important to distinguish between seasonal influenza and pandemic influenza. Current knowledge about how antiviral drugs and influenza vaccines perform is largely drawn from experience with seasonal influenza. HHS stated that how antivirals and vaccines will perform against a pandemic influenza virus cannot be predicted, but as there are currently no better options, the agency has made plans for their use in response to a pandemic. Vaccines are considered the first line of defense against influenza to prevent infection and control the spread of the disease. Vaccines stimulate immune responses which include causing the body to produce neutralizing antibodies to provide protective immunity to a particular virus strain. After vaccination, the body takes about 2 weeks to produce protective antibodies for that strain. 
For the one FDA-licensed H5N1 vaccine, two doses administered about 4 weeks apart would be required to provide what is believed to be an adequate immune response based on past experience with seasonal influenza vaccines. When a vaccinated person is exposed to the specific virus proteins in the vaccine, antibodies develop in response that will help either to prevent infection or reduce the severity of the illness caused by infection. To be most effective, an influenza vaccine needs to closely match the circulating influenza strain. However, because influenza viruses undergo minor but continuous genetic changes from year to year, a matched vaccine cannot be developed until the circulating strain has been identified. Generally, the purpose of vaccination is to prevent infection; however, in the event of a pandemic, the purpose could be broadened to include decreasing mortality or morbidity. The impact of such a change could be to increase vaccine availability since a vaccine that is not fully matched to the virus might be available more quickly and still help reduce mortality and morbidity. In the case of vaccines for seasonal influenza, WHO, CDC, FDA, health officials around the world, and vaccine manufacturers participate in a system that develops and produces vaccines targeted to the influenza strains most likely to be in circulation during the next influenza season. This system collects and analyzes circulating influenza viruses, uses the information to determine the three human strains most likely to circulate in the upcoming year, and formulates and distributes virus reference strains to vaccine manufacturers, who produce seed viruses to manufacture influenza vaccines. Influenza vaccine is produced in a complex process that involves growing viruses in millions of fertilized chicken eggs. Seasonal vaccine production generally takes 6 or more months after virus strains have been selected. 
The same general system would be used in the event of a pandemic to manufacture a vaccine targeted to the influenza strain causing it. Influenza vaccines can be categorized into three types: seasonal, pre-pandemic, and pandemic. As discussed in table 1, seasonal vaccines protect against annual (i.e., seasonal) influenza strains. Pre-pandemic vaccines are formulated to match strains of influenza viruses that have had limited circulation in humans but have pandemic potential. However, they are not matched or targeted to the specific pandemic strain that may eventually emerge. Pandemic vaccines are formulated to match a pandemic strain that has already emerged. Influenza vaccines are made either from inactivated (i.e., killed) viruses or from live viruses that have been attenuated (i.e., weakened). Generally, inactivated influenza vaccines are made from parts of the influenza virus rather than the whole virus. Globally, influenza vaccine production is largely a private-sector activity and vaccine manufacturing is concentrated in Europe and North America, with approximately 90 percent of worldwide production capacity located in these areas. However, there are manufacturers throughout the world, including in Australia, China, and Japan. Some manufacturers have production facilities in more than one country. In some cases, more than one manufacturer may be producing vaccine for distribution in a particular country. For example, there were four manufacturers producing five vaccines for the 2006-2007 influenza season in the United States. In the event of a pandemic, manufacturers would switch production from seasonal to pandemic vaccine, and would use the same facilities to produce the pandemic vaccine as they had used to produce seasonal vaccine. Pre-pandemic vaccines are currently produced only during the 3- to 4-month period when manufacturers are not producing seasonal vaccine.
Antiviral drugs are also used against seasonal influenza in humans to reduce symptoms and complications and could be used in the event of a pandemic. Antivirals can be used to both prevent illness and treat those who are already infected by killing or suppressing the replication of the influenza virus. Antivirals are not reformulated to match a specific influenza strain and could be used from the early phase of an influenza pandemic. As shown in table 2, two classes of antiviral drugs are currently available for the prevention and treatment of influenza, and two types of drugs within each class have been approved. Amantadine and rimantadine belong to the older class, adamantanes. Tamiflu and Relenza belong to the newer class, neuraminidase inhibitors. Amantadine is given as a capsule, syrup, or tablet, while rimantadine is administered as a syrup or tablet. Tamiflu can be administered as either a capsule or liquid. Relenza is a powder that must be inhaled using a special device. According to CDC, antivirals are about 70 to 90 percent effective for preventing illness in healthy adults; that is, they are about as effective as vaccines, when the vaccine and circulating virus strains are well matched, in preventing illness among healthy adults. For maximal effectiveness in preventing infection, the antiviral must be taken throughout the entire period of a community outbreak. According to current research involving seasonal influenza, if taken within 2 days of the onset of symptoms, these drugs can shorten the duration of the illness by 1 or 2 days, alleviate symptoms, reduce complications and serious illness, and may make someone with influenza less contagious to others. However, it is unknown if antivirals will perform the same for pandemic influenza as they do for seasonal influenza. In addition, influenza virus strains can become resistant to one or more of these drugs, and so they may not always be effective for prevention or treatment. 
WHO has stated that the neuraminidase inhibitors are preferred for prevention and treatment of influenza because there is lower risk for adverse events (compared historically to adamantanes), less evidence of drug resistance, and greater therapeutic value associated with these particular antivirals. Of the two currently available neuraminidase inhibitors, WHO strongly recommends the use of Tamiflu. Tamiflu is generally less expensive and easier to ship than Relenza and, because it is given as a capsule or liquid, it is easier to administer. Pharmaceutical manufacturers are currently producing both brand name and generic versions of antivirals approved for preventing and treating influenza. Tamiflu is produced by Roche, a health care company that sells products throughout the world. Relenza is manufactured by GlaxoSmithKline, another health care company that sells products worldwide. Neither drug is patent protected in all countries, so generic drug manufacturers may produce these drugs where they are not under patent protection. Both amantadine and rimantadine are no longer under patent protection and, consequently, the number of manufacturers that can produce these drugs worldwide is not limited by patent restrictions. HHS, along with the Departments of Agriculture, Defense, and State and the U.S. Agency for International Development, carries out U.S. international animal and pandemic influenza assistance programs. The Department of State leads the federal government's international engagement on influenza and coordinates U.S. international assistance activities through an interagency working group. The Homeland Security Council is monitoring U.S. efforts to improve domestic and international preparedness. HHS provides technical assistance and financing to improve human disease detection and response capacity. In fiscal year 2006, HHS received total appropriations of $5.683 billion specifically available for pandemic-influenza-related purposes.
Of this amount, HHS allocated approximately $3.2 billion to vaccines, $1.1 billion to antivirals, and $179 million to international collaboration, with the remainder going to such areas as state and local preparedness and risk communications. The U.S. Agency for International Development provides technical assistance, equipment, and financing for both animal and human health-related activities. In addition, the Department of Agriculture provides technical assistance and conducts training and research programs, and the Department of Defense stockpiles protective equipment. WHO, in conjunction with the United States and other governments, has developed an international strategy on how to contain an emerging pandemic virus at the site of the outbreak, whether it is H5N1 or another influenza virus with pandemic potential. Containment is a key element of the broad U.S. National Strategy for Pandemic Influenza. The public health community has generally not attempted to contain an initial outbreak of a pandemic-potential strain or to eradicate it while it is still confined to a limited area. WHO has noted that the success of the strategy in halting a pandemic or delaying its spread cannot be assured. However, WHO has stated that given the potential health, economic, and social damage a pandemic can produce, forestalling a pandemic must be tried. Further, WHO notes that should early containment fail, once a certain level of spread of the pandemic virus is reached, no interventions are expected to halt international spread, and the public health response will need to shift to the reduction of morbidity and mortality. The international containment strategy is based on studies suggesting that efforts, centered on using antiviral drugs to prevent infection as well as treat cases, might contain a pandemic at the site of the outbreak or at least slow its international spread, thus gaining time to put emergency measures in place and develop vaccines.
Such a strategy includes the creation of a geographically defined containment zone. According to WHO, the containment zone would be created around the known cases, and widespread antiviral and nonpharmaceutical countermeasures would be used within it. The containment zone should be large enough so that all known persons infected by the pandemic virus are located within the zone as well as many of the people in frequent contact with them. Rapid detection and reliable reporting of outbreaks, immediate availability of necessary antivirals for large numbers of people, and the restriction of the movement of people in and out of the affected area (or containment zone) are components of the strategy. Other elements of the strategy include isolation of ill persons, voluntary quarantine of people in contact with these persons, school closures, and cancellation of mass gatherings. These measures are meant to reduce the opportunities for additional human-to-human transmission to occur.

Disease surveillance in animals and humans has a critical role in the success of the international strategy to forestall the onset of a pandemic. The Director of CDC has stated that for optimal response, an emerging influenza pandemic outbreak anywhere in the world must be recognized within 1 to 2 weeks and then be investigated and confirmed within days. Infectious disease surveillance activities include detecting and reporting cases of disease, analyzing and confirming this information to identify possible outbreaks or longer-term trends, exchanging information related to cases of infectious disease, and applying the information to inform public health decision-making. HHS officials have noted that as outbreaks of animal influenza viruses spread and affect people, collaboration between animal and human influenza surveillance systems is needed. Additionally, WHO has stated that early detection of animal diseases that might be transmissible to humans leads to quicker actions to reduce threats to humans.
Alerts of animal outbreaks can provide early warning so that human surveillance can be enhanced and preventive action taken. When effective, surveillance can facilitate (1) timely action to control disease outbreaks, (2) informed allocation of resources to meet changing disease conditions and other public health programs, and (3) adjustment of disease control programs to make them more effective.

Diagnostic tests are an important component of identifying pandemic influenza and putting measures in place to forestall its spread. Diagnostic tests for a range of viruses help assess patients for the presence of H5N1, other emerging influenza viruses, and seasonal influenza. Quick and accurate diagnosis of influenza is essential to early treatment. In addition, accurate, rapid diagnosis enables timely implementation of containment and treatment procedures, and will be critical in identifying the beginning of a pandemic and possibly slowing the spread of the disease. Rapid diagnosis allows more time for equipment and personnel to be mobilized to aid in pandemic response.

As part of the WHO Global Influenza Surveillance Network, individual countries, including the United States, collect and analyze influenza virus samples and submit selected samples to WHO Collaborating Centres for further analysis. These samples allow WHO to perform a number of influenza-related public health activities, including determining whether the virus has acquired human genes or made other changes; tracking the evolution of the virus and its geographic spread; updating diagnostic tests and reagents; identifying potential vaccine strains; and testing to determine whether the virus remains susceptible to antivirals. The success of this network is dependent upon the participation of its members. According to HHS, the network has functioned efficiently in the past for the detection and characterization of newly emergent influenza viruses of epidemic potential.
The use of antivirals and vaccines to forestall the onset of a pandemic would likely be constrained by their uncertain effectiveness and limited availability. Weaknesses within the international influenza surveillance system impede the detection of new strains, which could limit the ability to promptly administer or develop effective antivirals and vaccines to treat and prevent infection before it spreads. The delayed use of antivirals and the emergence of antiviral resistance in influenza strains could limit their effectiveness. A targeted vaccine cannot be manufactured until the pandemic strain has emerged and been identified. The availability of antivirals and vaccines is constrained by existing limitations in their production, distribution, and administration. Current antiviral production capacity is inadequate to produce the quantity of antivirals that WHO estimates will be needed to contain a pandemic. Vaccines targeted to match a pandemic strain are unlikely to be available for prevention of disease at the onset of a pandemic as, according to HHS officials, they would not become available until 20 to 23 weeks following detection of a pandemic. Moreover, most countries do not possess the capacity to distribute and administer these antivirals and vaccines quickly enough to forestall a pandemic.

Antiviral and vaccine effectiveness depends upon their timely application. To achieve timely application, health authorities must be able to detect the virus strain quickly through surveillance efforts and use this information to administer or develop effective antivirals and vaccines. However, weaknesses within the global influenza surveillance system could limit the effectiveness of antivirals and vaccines in treating and preventing cases of infection. In addition, limited support for clinical trials could hinder efforts to improve understanding of the use of antivirals and vaccines against a pandemic strain.
An influenza surveillance system that can promptly detect outbreaks would facilitate the timely use of antivirals. The effectiveness of antivirals in containing an initial influenza outbreak of a new strain depends in part on the timely use of the appropriate drug. Experience with seasonal influenza indicates that antivirals are most effective for treatment if started within 48 hours of the onset of symptoms; therefore, rapid detection of human outbreaks of potential pandemic strains is necessary. If an individual is diagnosed too late, antivirals may not be effective. WHO has noted that a critical problem is the tendency of human H5N1 cases to be detected late in the course of the illness. Antivirals used for prevention should be started either before exposure or as soon as possible after initial exposure. International surveillance is also required to monitor strain evolution for the development of vaccines targeted to a potential pandemic strain or the actual pandemic strain. A well-matched vaccine cannot be ensured until the pandemic virus strain has been identified. According to HHS officials, 20 to 23 weeks are currently required from the detection of a pandemic before a well-matched vaccine can be developed. Consequently, well-matched vaccines are likely to play little or no role in efforts to stop or contain a pandemic, at least in its initial phases. However, an effective surveillance system is necessary to develop a safe and effective pandemic vaccine as soon as possible so that a vaccine is available for later stages of the pandemic. Another concern is that influenza strains can be resistant to antivirals, rendering them ineffective in treating or preventing infection. Monitoring strain evolution to determine susceptibility or emergence of antiviral resistance is one element of assessing the likelihood that a particular antiviral will be effective.
The effectiveness of an antiviral against one strain of seasonal influenza does not mean that it will be effective against an H5N1 strain or another potential pandemic strain. While both classes of antivirals, adamantanes and neuraminidase inhibitors, could potentially be used against a pandemic strain, experts caution against the use of adamantanes without prior indication that the emerging strain is susceptible to them. For example, CDC recommends against the use of adamantanes to treat or prevent currently circulating influenza because strains resistant to adamantanes have emerged. Similarly, WHO recommends only neuraminidase inhibitors be used to respond to H5N1 outbreaks unless neuraminidase inhibitors are not available or local surveillance data show that the H5N1 virus is known or likely to be susceptible to the adamantanes. A high proportion of H5N1 strains circulating in Indonesia, Thailand, and Vietnam have been resistant to adamantanes. Like adamantanes, the effectiveness of neuraminidase inhibitors against potential pandemic strains could also be constrained by the emergence of antiviral-resistant strains of the virus. In Vietnam, a study identified H5N1 strains resistant to Tamiflu, a neuraminidase inhibitor, and a few seasonal influenza viruses (less than 0.5 percent) have been resistant to Tamiflu. Another study examined the effectiveness of Tamiflu and Relenza, another neuraminidase inhibitor, against H5N1 viruses. The researchers found that there was little variation in the effectiveness of Relenza against all H5N1 viruses studied but that there was variation in the effectiveness of Tamiflu. For example, they reported that one group of H5N1 viruses was 15- to 30-fold less sensitive to Tamiflu than was another group of H5N1 viruses.

According to the U.S. National Strategy for Pandemic Influenza Implementation Plan, international capacity for influenza surveillance still has many weaknesses, including limited influenza sample collection and sharing.
Surveillance requires the collection and sharing of virus samples and the genetic sequencing of these samples from both infected humans and animals to monitor if and how a strain is mutating. According to WHO, global influenza surveillance in humans is weak in some parts of the world, particularly in developing countries. Surveillance systems in many of the countries where H5N1 influenza is of greatest concern are inadequate, particularly in rural areas where many cases have occurred. WHO has noted that to increase the likelihood of successfully forestalling the onset of a pandemic, surveillance in affected countries needs to improve, particularly concerning the capacity to detect clusters of cases closely related in time and place. Such clusters could provide the first signal that the virus has begun to spread more easily among humans. If early signals are not identified, the opportunity for preemptive action will be missed. In addition, some countries experiencing H5N1 influenza outbreaks (e.g., Indonesia) have at times not promptly shared human virus samples with the international community, thus further weakening international surveillance efforts. Similarly, a surveillance network to monitor influenza in animals faces weaknesses. Global animal influenza surveillance can help provide early recognition of viruses with the potential for causing human influenza. Surveillance in animals may indicate how an influenza virus is spreading and evolving. WHO has recommended combining the detection of new outbreaks in animals with active searches for human cases. However, influenza surveillance in animals has weaknesses. For example, definitions of what constitutes an outbreak vary between countries and may be reported as a single infected farm, an affected village, or an affected province. In addition, only the number of outbreaks may be reported rather than more specific information. Moreover, animal disease surveillance is completely lacking in some countries. 
For example, Djibouti and Uganda have no capacity to collect, transport, and diagnose animal influenza samples. Just as with human influenza samples, there are concerns that animal samples have also not always been shared promptly, or for every outbreak. According to WHO, few countries have the necessary expertise and facilities to diagnose H5N1. As a result, countries lacking these facilities must wait until collected samples of a strain are tested by laboratories outside the country, possibly delaying both timely diagnosis and antiviral administration. Therefore, laboratories must have the necessary information, guidance, and materials to allow them to recognize, store, and safely transport H5N1 samples to more specialized laboratories in other countries. In a previous report, we noted, for example, that Indonesia and Nigeria both had limited capacity to collect, diagnose, or transport human influenza samples.

Currently, there is not a good way to quickly and easily determine whether a patient has H5N1 or a more common type of influenza. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza. The amount of time required to attain results from diagnostic tests varies from minutes to several days, with accuracy often being the trade-off for rapid results. Existing point-of-care tests can provide results rapidly and determine if the patient is infected with seasonal influenza viruses A or B but cannot identify avian influenza H5N1. A viral culture test can provide specific information on circulating strains and subtypes of influenza viruses in 2 to 10 days but may require longer for more detailed analysis. In addition, the need to conduct viral culture tests in laboratories with enhanced safety levels can also restrict their usefulness.
HHS recommends an H5 polymerase chain reaction test, which can be done without the specialized laboratory facilities required by viral culture tests, for the diagnosis of H5N1 influenza. This test is FDA-approved and is used by public health laboratories throughout the United States and in many parts of the world.

Limited support for clinical trials could hinder efforts to improve understanding of the use of antivirals and vaccines against a pandemic strain. Clinical trials improve the understanding of effectiveness, timing of administration, duration of treatment, optimal dosage, safety, and the balance of risks and benefits of antivirals and vaccines. Improved understanding gained through clinical trials would assist with updating international guidance on antiviral use. The current estimates on the effectiveness of antivirals in a pandemic are largely based on their use in treating and preventing influenza illness caused by seasonal influenza strains circulating at the times the studies were performed. However, the viral characteristics of a pandemic strain may be different. Similarly, clinical trials are an essential step in vaccine development and are used for testing the safety and effectiveness of vaccines. For instance, clinical trials could test for the optimal dosage of vaccines developed against a potential pandemic strain. However, few governments are assisting vaccine manufacturers with funding and technical support for clinical trials.

The availability of antivirals and vaccines is constrained by limited production, distribution, and administration. Vaccine manufacturers’ liability concerns might also limit their willingness to manufacture these drugs and make them available in certain countries. Current antiviral production is inadequate to produce the quantity of antivirals that WHO estimates would be needed to contain a pandemic.
While WHO has not set a target for national antiviral stockpiles, it stated in 2007 that it is unlikely that sufficient quantities of antivirals will be available in any country at the onset of a pandemic. WHO estimates that the quantity of antivirals required to forestall a pandemic would be enough treatment courses for 25 percent of the population in the outbreak containment zone. In addition, there would need to be enough preventive courses to last 20 days for the remaining 75 percent of the population in the containment zone. While Roche, the primary manufacturer of Tamiflu, has expanded production, it has stated that the demand for Tamiflu will need to further increase before there are any new increases in production.

While vaccination is considered to be the best defense against influenza, it is unlikely that a vaccine targeted to the pandemic strain will be available in time to forestall the onset of a pandemic. HHS has reported that 20 to 23 weeks are currently required from the start of a pandemic to the availability of a well-matched vaccine; WHO expects that once a pandemic strain emerges, it is likely that it will spread globally within approximately 3 months. Figure 1 shows how WHO, its Collaborating Centres around the world, and pharmaceutical manufacturers would proceed to develop and produce vaccines designed to protect against a newly emerged pandemic strain, and how long it would take for the vaccines to become available.

Some health authorities have suggested that increased seasonal vaccination could play a limited role in forestalling the emergence of a pandemic by reducing the opportunities for human and animal influenza strains to combine and form a pandemic strain, but the limited availability of seasonal vaccine would likely constrain any such role. Seasonal vaccine would not prevent individuals from becoming infected with animal influenza.
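A back-of-the-envelope sketch can make the WHO containment-zone antiviral estimate concrete. The zone population of 500,000 below is purely hypothetical, and the capsule counts assume standard adult Tamiflu dosing (two capsules a day for 5 days of treatment, one a day for prevention), which the report does not itself specify:

```python
# Illustrative quantities implied by the WHO containment estimate:
# treatment courses for 25 percent of the zone population, plus
# 20 days of preventive courses for the remaining 75 percent.
ZONE_POPULATION = 500_000            # hypothetical containment zone
TREATMENT_CAPSULES = 2 * 5           # one treatment course: 2 capsules/day for 5 days
PREVENTION_DAYS = 20                 # prevention: 1 capsule per person per day

treated = int(ZONE_POPULATION * 0.25)
protected = ZONE_POPULATION - treated

total_capsules = treated * TREATMENT_CAPSULES + protected * PREVENTION_DAYS

print(f"treatment courses needed:  {treated:,}")        # 125,000
print(f"preventive courses needed: {protected:,}")      # 375,000
print(f"total capsules required:   {total_capsules:,}") # 8,750,000
```

Even for this modest hypothetical zone, the requirement runs to millions of capsules, which illustrates why WHO considers sufficient antiviral quantities unlikely to be on hand in any country at the onset of a pandemic.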
However, in the case of an H5N1 strain, promoting seasonal vaccination prior to the emergence of a pandemic strain, particularly among health care workers and others in contact with human cases of H5N1 infection and infected poultry, could reduce the likelihood of H5N1 and seasonal influenza coinfection in humans. Experts fear that such coinfection could lead to the emergence of a reassorted influenza strain that has the transmissibility of the human seasonal strain and the virulence of the H5N1 strain, thus resulting in a pandemic. However, large-scale global seasonal influenza vaccination would be difficult to implement because of the lack of influenza vaccination programs in many countries. Additionally, seasonal vaccination of humans would not prevent influenza reassortment within animals. According to WHO, current annual global production capacity for trivalent seasonal vaccines is approximately 565 million doses; these doses would only be enough to vaccinate about 9 percent of the world’s population of 6.6 billion people. WHO has also stated that the current demand for and supply of seasonal influenza vaccine is approximately equal. Thus, without additional production, either by current manufacturers scaling up their production or by increasing the number of manufacturers, the supply of seasonal vaccine would not be able to meet the increased demand that would stem from the promotion of seasonal vaccination. In fact, due to limitations in vaccine production capacity, even countries with existing seasonal vaccine programs have experienced shortages. For example, the United States experienced vaccine shortages as recently as the 2004-2005 influenza season due to production problems experienced by one manufacturer.
This limited vaccine production capacity would also limit the availability of a pandemic vaccine in the event of a pandemic since the processes used to manufacture seasonal and pandemic vaccines are similar and the manufacturing would take place in the same facilities. If a monovalent vaccine (that is, a vaccine that contains only one influenza strain) were produced for a pandemic strain, experts estimate that approximately three times the number of trivalent doses could be produced. Consequently, if annual production capacity is sufficient to produce 565 million doses of trivalent vaccine, 1.695 billion doses of monovalent vaccine could be produced each year. However, the actual number of doses that could be produced would depend on a number of factors including how well the virus strain grows in eggs and the dosage required. For instance, if a dose larger than 15 micrograms—the dose required for current seasonal vaccine—was needed, fewer doses could be produced. Testing on a sanofi pasteur H5N1 vaccine approved by FDA in April 2007 indicates that a single 15 microgram dose would not be sufficient to confer immunity. Instead, the testing indicated that 45 percent of individuals who received two 90 microgram doses of this vaccine—or twelve times as much—developed an immune response expected to reduce the risk of getting influenza. If this dosage were required during a pandemic, instead of having the capability to vaccinate 1.695 billion people, only 141,250,000 (one-twelfth as many) could be vaccinated. This would likely be well below global demand, given a global population of 6.6 billion people.

The location of vaccine manufacturing facilities could also limit the role that vaccines would play in forestalling an influenza pandemic. Experts fear that the concentration in a few countries of vaccine production capacity could, in the event of a pandemic, lead to vaccine shortages in countries without domestic manufacturing capacity.
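The production and dosage figures discussed above can be traced step by step. All inputs come from the report (565 million trivalent doses, a roughly threefold monovalent yield, and a 15 microgram seasonal dose versus two 90 microgram pandemic doses); the only added assumption is one trivalent dose per person per season for the seasonal coverage figure:

```python
# Trace of the vaccine-supply arithmetic: seasonal coverage, monovalent
# scaling, and the effect of the larger pandemic dose on people covered.
world_population = 6_600_000_000
trivalent_doses = 565_000_000          # annual global capacity (WHO)

# Seasonal coverage, assuming one trivalent dose per person per season.
seasonal_coverage = trivalent_doses / world_population
print(f"seasonal coverage: {seasonal_coverage:.1%}")   # 8.6%, i.e., about 9 percent

# Monovalent pandemic vaccine: roughly three times the trivalent dose count.
monovalent_doses = trivalent_doses * 3                 # 1,695,000,000

# Two 90 microgram doses use 12 times the antigen of one 15 microgram dose.
antigen_scale = (2 * 90) // 15                         # 12
people_vaccinated = monovalent_doses // antigen_scale

print(f"monovalent doses per year: {monovalent_doses:,}")             # 1,695,000,000
print(f"people vaccinated at pandemic dosage: {people_vaccinated:,}") # 141,250,000
```

The calculation shows why the dosage requirement, not just the dose count, drives the shortfall: a twelvefold increase in antigen per person cuts the number of people who can be vaccinated from about 1.7 billion to about 141 million.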
According to WHO, 90 percent of vaccine production capacity is concentrated in Europe and North America. Currently, only one manufacturer’s seasonal influenza vaccine production facilities are located entirely within the United States. There is concern among experts that countries without domestic manufacturing capacity would not have access to vaccines in the event of a pandemic if the countries with domestic manufacturing capacity prohibited the export of vaccine until their own needs were met. Many countries experiencing H5N1 influenza outbreaks, such as Cambodia and Indonesia, do not have domestic manufacturers that produce influenza vaccines, and according to WHO, would require financial and technical support from the international community to create a domestic pharmaceutical infrastructure.

Limited global, national, and local-level distribution and administration capacity could restrict the availability of antivirals at the site of outbreaks for use in forestalling the onset of a pandemic. Distribution and administration capacities require plans, delivery networks, facilities suitable for administering the drugs, trained personnel, and funding to get antivirals to where they are needed and administer them promptly. As discussed earlier, experience with seasonal influenza indicates that antivirals are most effective in treating influenza if they are taken within 48 hours of the onset of symptoms. This requires an efficient distribution network to get the drugs to where they are needed. Antiviral distribution networks are poor or nonexistent in some countries. We previously reported that as of October/November 2005, 10 of 17 countries reviewed did not have distribution plans for the release of antiviral stockpiles and there was insufficient information available to reach conclusions for 4 others.
Studies of national pandemic preparedness plans in Europe and the Asia-Pacific region found that most did not adequately address how antivirals would be transported to locations where they are needed and how they would be administered to individuals. Thirteen of the 21 European plans had guidance on priority groups for treatment with antivirals, but none described the process by which individuals belonging to priority groups would be identified. Most of the plans in Asia-Pacific countries did not identify such priority groups. The timely administration of antivirals would also likely be constrained by a scarcity of trained professionals and by packaging and instructions printed in languages foreign to those administering the drugs. In addition, countries that depend upon outside sources to provide antivirals might not have these drugs available in time to contain an outbreak. Many countries do not have national stockpiles of antivirals and are dependent on outside sources to provide these drugs for distribution in the event of an outbreak. Antiviral stockpiling is expensive, and it may not be feasible for many countries to establish their own national stockpiles.

Similarly, the availability of vaccines could be affected by limitations in countries’ capacity for distributing and administering vaccines. For example, a lack of supplementary medical supplies (such as syringes) could impede the administration of vaccines. Countries’ experience with seasonal vaccination programs indicates potential problems in the event of a pandemic. IFPMA has noted that many developing countries have insufficient health care systems to deliver vaccines. Most countries have little seasonal influenza vaccine distribution infrastructure and lack financial and human resources to implement national seasonal influenza vaccination programs.
In 2005, WHO reported that about 50 of the 193 countries in the world, mainly industrialized countries and some countries undergoing rapid economic development, offer influenza vaccination to nationally defined high-risk groups. However, even in industrialized nations such as the United States, vaccine distribution and administration issues arise. For example, during the vaccine shortage in the 2004-2005 influenza season, CDC developed a plan to allocate the available vaccine among states. However, the formula for determining each state’s allocation was imperfect, resulting in some states having more vaccine than needed to cover demand and other states having too little.

Manufacturers’ concerns regarding product liability in individual countries could also hinder the global availability of vaccines. Experts and vaccine manufacturers have said that the lack of liability protection may hinder manufacturers’ willingness to manufacture and distribute vaccines in countries where they might be held liable for adverse effects resulting from their administration. Concerns regarding potential liability for the vaccines could hinder efforts by WHO to get companies to donate vaccines to countries where they are not licensed. Industry representatives have stated that manufacturers would need advance assurance that governments would provide liability protection.

The United States, its international partners, and the pharmaceutical industry are investing substantial resources in efforts to address the uncertain effectiveness and limited availability of antivirals and vaccines.
Efforts to make effective antivirals and vaccines more available include (1) improving disease surveillance on an international scale in order to monitor the evolution of influenza strains and the effectiveness of antivirals and vaccines against those strains, (2) increasing global demand for antivirals and vaccines to encourage production and spur research and development, and (3) increasing global distribution and administration capacity. However, some of these efforts face funding and logistical limitations and will take several years to complete.

The U.S. government and its international partners are supporting efforts to increase the effectiveness of antivirals and vaccines by improving influenza surveillance. International surveillance is required for monitoring strain evolution in humans and animals to detect the emergence of new influenza strains and evaluate the continued effectiveness of antivirals and vaccines as the virus evolves. Governments, international organizations, manufacturers, and scientists have initiatives under way to improve international surveillance by improving disease surveillance in humans, creating animal surveillance networks, improving animal and human sample sharing and analysis, increasing international collaboration in monitoring influenza strains, and improving diagnostic capabilities.

WHO’s revised International Health Regulations seek to improve worldwide disease surveillance in humans. The revised Regulations, which were adopted in May 2005 and became effective on June 15, 2007, require that member states report all events that constitute a public health emergency of international concern, such as those caused by new and reemerging diseases with epidemic potential like H5N1 influenza. The Regulations set out the basic public health capacities a country must develop, strengthen, and maintain to detect, report, and respond to public health risks and potential public health emergencies of international concern.
For example, at the national level a country is required to be able to assess all reports of urgent events within 48 hours. Each country must assess its ability to meet the core surveillance capacities by June 2009 and has until June 2012 to develop these capacities.

Among activities to improve influenza surveillance in animals, in May 2005 the World Organisation for Animal Health (OIE) and the Food and Agriculture Organization of the United Nations (FAO) created the OIE/FAO Network of Expertise on Avian Influenza (OFFLU), an international veterinary counterpart to WHO’s human Global Influenza Surveillance Network. OFFLU supports international efforts to monitor and control H5N1 in poultry and other bird species through the collection and sharing of influenza virus samples from infected animals. Increased animal surveillance could speed the diagnosis and reporting of novel influenza strains. One of OFFLU’s goals is to put influenza sequences in the public domain for the benefit of research and development, and OFFLU is actively supported in this endeavor by the U.S. government. Influenza sequencing reveals the complete genetic blueprints of influenza viruses, information that is used to develop vaccines and to monitor the emergence of antiviral-resistant influenza strains. Additionally, sequencing provides information that might indicate that a virus has changed in such a way as to become more transmissible among humans. OFFLU collects animal influenza samples and shares them with NIH for sequencing and with CDC for antigenic analysis. NIH sequences the samples, funds the costs of sequencing, and then makes the completed sequences available in the public domain. Through its Influenza Genome Sequencing project, NIH makes available to the entire scientific community over the Internet the genetic sequences of human and animal influenza viruses. As of December 13, 2007, 2,807 human and animal influenza viruses had been completely sequenced.
In addition to this project, CDC and NIH have provided materials to countries affected by H5N1 to test animal influenza virus strains. In the event of a pandemic, CDC and NIH have also offered these countries assistance in sequencing influenza viruses. An additional effort to improve surveillance through sample sharing is the Global Initiative on Sharing Avian Influenza Data, formed in August 2006 by a group of scientists from over 40 countries. Genetic sequence data collected through this initiative will be deposited in a publicly available database and then, after a specified period of time, will be released automatically to publicly funded databases participating in the International Sequence Database Collaboration or in other publicly available databases. This initiative will work to overcome restrictions that have previously prevented influenza information sharing, with the hope that more shared information will help researchers understand how viruses spread, evolve, and potentially lead to a pandemic. This initiative is open to all scientists, provided they agree to share their own data, credit the use of others’ data, analyze findings jointly, and publish results collaboratively.

In addition to OFFLU, a surveillance system for animal diseases that are transmissible to humans has been established, and many countries have improved their surveillance of animal diseases. FAO, OIE, and WHO launched the Global Early Warning and Response System for Major Animal Diseases, including Zoonoses (GLEWS) in July 2006 to improve the early warning and response capacity of the three organizations to animal diseases, including those that can spread to humans. GLEWS is the first joint early warning and response system conceived with the aim of predicting and responding to such diseases.
WHO has stated that from a public health perspective, early warnings of animal outbreaks that have a known potential to spread to humans will enable the initiation of control measures that can prevent human morbidity and mortality. The United Nations System Influenza Coordinator and the World Bank have reported that many countries have improved their animal disease surveillance systems. They noted that better disease surveillance systems, along with improved laboratory capacity and increased access to epidemiological expertise, account for improved detection of H5N1 and other influenza viruses. In April 2007, NIH announced that it was awarding $23 million per year for 7 years to establish six Centers of Excellence for Influenza Research and Surveillance. The mission of the centers is to expand NIH’s influenza research program, both in the United States and internationally, to determine how these viruses cause disease as well as how the human immune system responds to them. Specific activities include expanding animal influenza surveillance and studying how pandemic viruses emerge. Governments, including the U.S. government, and manufacturers are undertaking efforts to increase international collaboration to monitor the evolution of influenza strains. Through a collaborative global network, CDC’s WHO Collaborating Centre is monitoring the H5N1 virus to track its geographic spread and to identify and analyze changes in the virus. CDC is providing funds for the shipment of influenza samples to WHO Collaborating Centres for analysis. As part of its surveillance role, CDC conducts antiviral susceptibility testing on seasonal and novel influenza viruses and has been able to identify changes in the sequence of H5N1 virus samples that could affect their susceptibility to existing antiviral medications. For example, in January 2007 CDC testing found an H5N1 virus sample from Egypt with reduced susceptibility to Tamiflu. 
FDA, CDC and other WHO Collaborating Centres, other WHO laboratories, and national regulatory authorities have also used information on H5N1 strain evolution to recommend representative strains for use in the development of pre-pandemic vaccines and to develop H5N1 reference viruses which are shared with manufacturers. Manufacturers are also supporting the independent Neuraminidase Inhibitor Susceptibility Network, which includes government officials and works in collaboration with WHO to monitor influenza viruses for any signs of strains that have developed resistance to this class of antivirals. Concerns regarding the failure of certain countries to share human and animal influenza samples and the availability of vaccines developed from these samples have led to efforts to promote sample sharing. In February 2007, Indonesia announced that it would no longer share H5N1 samples with WHO because the resulting vaccines produced by private companies were unlikely to be available to developing countries such as Indonesia. At times, the Indonesian government has also expressed a desire for royalties from any invention derived from an influenza sample isolated within its borders. In March 2007, WHO said that an agreement had been reached and that Indonesia would resume sharing H5N1 samples immediately. However, sample sharing did not resume until May 2007 when, at the World Health Assembly meeting, 17 developing countries introduced a resolution demanding equitable access to vaccines made from H5N1 samples the countries provide. At that time, Indonesia provided three samples from two patients to WHO. Later at that meeting, the World Health Assembly requested that WHO formulate mechanisms and guidelines aimed at ensuring the fair and equitable distribution of pandemic influenza vaccines at affordable prices in the event of a pandemic. 
Following this, in June 2007 the health ministers of the Asia-Pacific Economic Cooperation stated they planned to share influenza virus specimens in a timely manner. However, HHS officials told us that concerns remain. In July 2007, HHS reported to us that Indonesia had not shared any seasonal or H5N1 influenza samples since those it sent to WHO in May 2007. HHS also noted that the Asia-Pacific Economic Cooperation agreement is not being followed. A WHO official has also expressed concern. In August 2007, he stated that by not sharing virus samples, Indonesia is endangering the world’s health as well as its own. Also in August 2007, Indonesian health officials stated that the country will continue to withhold H5N1 samples at least until a new virus-sharing agreement is developed at an international meeting in November 2007. Later in August, Indonesia sent two samples to CDC for testing, although concerns remain about whether Indonesia will share or continue to withhold samples in the future. At the November 2007 meeting, no agreement on sample sharing was reached. Indonesia advocated an accord stating that for every virus sample sent out of a country, there should be an agreement specifying that the sample be used only for diagnostic purposes. Commercial use of the virus would require permission of the country that provided the sample. Improved understanding of influenza viruses could improve surveillance and, in turn, vaccine development. Scientists at NIH, along with a collaborator at Emory University, have identified mutations that would help a strain of the H5N1 virus spread easily from person to person. This knowledge could contribute to better surveillance of naturally occurring influenza outbreaks because efforts could be focused on identifying viruses with mutations that lead to increased transmissibility among humans. This could permit the development of vaccines prior to a pandemic, and possibly help contain a pandemic at its outset.
WHO and CDC are undertaking a number of activities in order to improve diagnostic capability worldwide. WHO reported providing equipment and training to staff working within national laboratories and is providing experts to give hands-on support. At the regional level, it reported enhancing the laboratory network with the facilities and expertise to analyze H5 samples so that every country has access to a regional H5 laboratory. This H5 laboratory network has provided support to countries in shipping samples and providing confirmation of suspected H5N1 cases. According to WHO, four laboratories in Africa have been upgraded so that they can conduct H5 diagnosis. For the long term, WHO is working to build and strengthen local H5 diagnostic capability. In addition, CDC officials stated that among its activities the agency provides financial and technical assistance to 35 countries, WHO, and WHO regional offices in order to improve influenza laboratory diagnostic capability. CDC is also providing training for laboratory workers and epidemiologists in order to expand laboratory diagnostic capabilities and develop rapid response teams that could quickly detect, report, and control outbreaks caused by novel influenza viruses. CDC officials have also provided laboratory support and diagnostic reagents to countries investigating H5N1 outbreaks. Research is being conducted to improve rapid diagnostic tests for influenza. In order to forestall a pandemic, it is critical to be able to identify people with H5N1 quickly. A reliable, rapid diagnostic test is needed for epidemiological assessments, traveler screening, and clinical care. Currently, rapid tests cannot distinguish between strains and subtypes of influenza viruses.
To address this shortcoming, in December 2006, CDC awarded four companies a total of $11.4 million in contracts to develop new viral diagnostic tests with quicker and more reliable results that could be used at, for example, a patient’s bedside or a port of entry (see table 3). CDC hopes for FDA approval and commercialization of these products in 2 to 3 years. In addition, tests designed for large reference and public health laboratories are also being developed. In February 2006, FDA approved a test developed by CDC that identifies H5, but not the specific strain, within 4 hours once testing begins. Previously, such identification would have taken 2 to 3 days. If the virus is identified as H5, tests are then conducted to identify the strain. FDA has shared this technology with WHO and its Collaborating Centres. Research is also under way to improve other types of diagnostic tests for influenza. For example, using funding from NIH, scientists at the University of Colorado at Boulder and CDC have developed a test that is based on a single influenza virus gene that could allow scientists to quickly identify influenza viruses, including H5N1. This test offers several advantages over available tests, including being based on a gene that, unlike hemagglutinin and neuraminidase, does not mutate constantly. Consequently, the researchers believe that this test will be more useful than other tests because it will provide accurate results even if the hemagglutinin and neuraminidase genes mutate. However, WHO has cautioned that the availability of such tests is at least 4 years away. Efforts to expand seasonal vaccination and build national stockpiles of antivirals and pre-pandemic vaccines are under way to encourage increased demand for these drugs. Demand for seasonal influenza treatment drives global production capacity for antivirals and all types of influenza vaccines.
Increasing demand through government support provides incentives for manufacturers to develop more effective antivirals and vaccines. While the primary benefit of increased seasonal vaccination would be the enhanced protection against seasonal influenza, WHO has stated that increased demand for seasonal vaccines would spur manufacturers to increase their vaccine manufacturing capacity. One of WHO’s goals is to increase seasonal vaccine coverage in countries that already use seasonal vaccine to 75 percent of target populations by 2010, which would require an increase in global vaccine production to 560 million doses to cover use in these countries alone. Some countries with seasonal influenza vaccination programs had increased their use of seasonal vaccines prior to WHO setting vaccination goals, thus providing incentives for manufacturers to increase overall vaccine production capacity. In October 2007, WHO stated that seasonal influenza vaccine capacity is expected to rise to 1 billion doses annually in 2010, provided sufficient demand exists. Increased demand for antivirals and pre-pandemic vaccines also stems from orders placed by countries to build national stockpiles. According to Roche, as of April 2007, more than 80 countries had ordered Tamiflu for their own national antiviral stockpiles. Some countries, including Australia, France, and the United States, are also ordering Relenza to supplement their Tamiflu stockpiles. The United States had 36.6 million neuraminidase inhibitor treatment courses in its federal stockpiles as of August 6, 2007, consisting of 30.8 million treatment courses of Tamiflu and 5.8 million treatment courses of Relenza. It also had 3.6 million treatment courses of rimantadine, an adamantane, on hand, for a total stockpile of 40.2 million antiviral treatment courses. Approximately 100,000 additional Tamiflu treatment courses and 700,000 additional Relenza treatment courses are currently on order for the stockpile. The U.S.
goal at the national level is to have a federal stockpile of 50 million antiviral treatment courses. In addition, states and other entities had stockpiles totaling 12.9 million treatment courses of Tamiflu and 1.6 million treatment courses of Relenza as of August 6, 2007. Similarly, Australia, Japan, the United States, and countries in Europe have been establishing stockpiles of pre-pandemic vaccines. For example, in 2005, sanofi pasteur agreed to produce 1.4 million doses of H5N1 pre-pandemic vaccine for France’s stockpile. It is also providing H5N1 pre-pandemic vaccines for national stockpiling in the United States and Italy. In addition, GlaxoSmithKline Biologicals and Novartis Vaccines and Diagnostics are also producing H5N1 pre-pandemic vaccine for the U.S. national stockpile. The United States has stockpiled enough H5N1 pre-pandemic vaccine to cover about 7 million people. The United States’ goal is to have a pre-pandemic vaccine stockpile of treatment courses for 20 million persons. However, developing countries may not be able to build such antiviral and vaccine stockpiles. Antiviral manufacturers have expanded their production capabilities. Roche expanded its Tamiflu production so that it could produce 400 million treatment courses of Tamiflu by the end of 2006. Roche noted that this represents an approximately 15-fold increase over its production capacity of 27 million treatment courses in 2004. In April 2007, Roche stated that its production capacity now exceeded government and corporate orders for Tamiflu received to date. To increase capacity, Roche expanded production from one facility to eight Roche sites, including the United States, where 80 million Tamiflu treatment courses can now be produced. In addition, Roche now has 19 external manufacturing partners that perform particular functions in the manufacturing process.
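The stockpile and production figures cited above are internally consistent; the following is a simple arithmetic check of the numbers taken from the text (treatment courses in millions), not part of the report's own analysis:

```python
# U.S. federal antiviral stockpile, as of August 6, 2007 (figures from the text).
tamiflu = 30.8      # oseltamivir treatment courses, millions
relenza = 5.8       # zanamivir treatment courses, millions
rimantadine = 3.6   # adamantane treatment courses, millions

neuraminidase_total = tamiflu + relenza          # neuraminidase inhibitor courses
stockpile_total = neuraminidase_total + rimantadine

print(f"Neuraminidase inhibitor courses: {neuraminidase_total:.1f} million")
print(f"Total federal stockpile: {stockpile_total:.1f} million (goal: 50 million)")

# Roche's reported Tamiflu capacity expansion: 27 million courses (2004)
# to 400 million (end of 2006), described as an approximately 15-fold increase.
expansion_factor = 400 / 27
print(f"Roche capacity increase: about {expansion_factor:.0f}-fold")
```

The totals match the figures reported to GAO: 36.6 million neuraminidase inhibitor courses, 40.2 million courses overall, and a roughly 15-fold expansion in Roche's production capacity.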
Roche has also granted sublicenses to selected drug companies in China and India to allow them to produce Tamiflu in its generic form, oseltamivir, which will increase the amount of that antiviral available globally. In Africa, Roche granted a sublicense to a South African company allowing it to produce oseltamivir to increase production and speed up availability of the drug for use against a pandemic strain in Africa. GlaxoSmithKline, the manufacturer of Relenza, is undertaking efforts to boost Relenza production. While less than 1 million treatment courses of Relenza were produced in 2005, GlaxoSmithKline stated in May 2006 that it planned to increase production capacity in its existing facilities in North America, Europe, and Australia. It increased production to 15 million treatment courses in 2006 and plans to produce 40 million treatment courses in 2007. GlaxoSmithKline also stated that it is willing to license other manufacturers to produce Relenza in its generic form, zanamivir. In September 2006, GlaxoSmithKline announced a licensing agreement with a Chinese drug company to produce the antiviral and sell it in China, Indonesia, Thailand, Vietnam, and other developing countries. Governments and manufacturers are also working to increase the global production of vaccines by helping to build production facilities and supplying the technology and resources necessary to produce influenza vaccines. In September 2006, WHO stated that worldwide vaccine production capacity is expected to increase by 280 million trivalent doses in the next 2 to 3 years. The U.S. government has offered assistance to countries trying to create the infrastructure necessary for vaccine production. For example, HHS has provided countries with reagents, the chemicals required to assess vaccine effectiveness, and training for testing vaccines. It also works with countries to help them develop their own reagents and tests for use in clinical trials and other research. 
WHO’s Global pandemic influenza action plan to increase vaccine supply, dated September 2006, proposes building new production plants in both developing and industrialized countries as one means to increase production capacity. In October 2006, HHS announced a grant of $10 million to WHO to support influenza vaccine development and manufacturing infrastructure in other countries, while Japan has contributed $8 million. In April 2007, WHO announced that it was awarding grants to six countries to help them develop the capacity to make influenza vaccine. Two of the projects will be in Latin America and four in Asia. Three of the Asian countries receiving grants—Indonesia, Thailand, and Vietnam—have had cases of persons infected with H5N1 influenza. Manufacturers have also committed substantial funds to increase their own vaccine production capacity. Additionally, sanofi pasteur signed a technology transfer arrangement with the governments of Thailand, Mexico, and Brazil. In June 2007, HHS awarded a $77.4 million contract to sanofi pasteur and a $55.1 million contract to MedImmune to renovate existing vaccine manufacturing facilities in the United States and to provide warm-base operations for manufacturing pandemic influenza vaccines. In warm-base operations, a facility does not shut down. HHS stated that these changes will increase production capacity and permit year-round production of pre-pandemic influenza vaccines for the national stockpile, which is currently limited to 3 months per year. In July 2007, sanofi pasteur announced that it had completed construction of a new influenza vaccine manufacturing facility in the United States. It also noted that it was expanding its influenza vaccine manufacturing capacity in France. Increased demand through government support has provided incentives for manufacturers to develop more effective antivirals and vaccines. Manufacturers are conducting research on new antivirals and on improving the use of existing antivirals. 
Manufacturers are also working to improve the effectiveness of vaccines to combat pandemic influenza through such activities as developing pre-pandemic vaccines, examining cell-based production technology, studying substances that can be added to vaccines to improve effectiveness, and conducting research on vaccines that would provide protection against multiple influenza strains. These studies could help define the additional research and resources that might be needed to assess the safety, effectiveness, risks and benefits, and appropriate use of proposed new products or new uses of existing products.

Research on and Development of Antivirals

Development of new antivirals is particularly important due to concern over the emergence of antiviral-resistant influenza strains that could render existing antivirals ineffective. Manufacturers are developing and testing new antivirals, and the U.S. government is providing support to manufacturers that are developing new antivirals. In 2005, HHS announced plans to spend $400 million to develop new antiviral drugs. In January 2007, HHS awarded a 4-year, $103 million contract to BioCryst Pharmaceuticals, Inc., to support development of a new antiviral, peramivir. Sankyo Co., Ltd., of Japan and Biota Holdings Limited of Australia are working together to develop new antivirals called long-acting neuraminidase inhibitors. These companies have received a $5.6 million grant from HHS to accelerate the development of these antivirals. In addition to developing new antivirals, governments and manufacturers are exploring ways in which existing antivirals could be used to treat influenza more effectively and efficiently. For example, researchers are examining the potential use of antiviral combination therapy, which would entail the use of more than one antiviral to treat an influenza infection.
Combination antiviral therapy may be more effective and could reduce the likelihood that an antiviral-resistant strain might emerge because, for example, a strain is less likely to develop resistance to both antivirals simultaneously. Researchers are also examining the use of antivirals with other types of pharmaceuticals. NIH, the Department of Defense, and the Department of Veterans Affairs are collaborating on a study to determine if Tamiflu used in combination with the drug probenecid can stretch the supply of Tamiflu. The aim of these studies is to determine whether the combination of these drugs results in Tamiflu remaining in the body longer, thus reducing the amount of Tamiflu that an individual would need to take and effectively increasing the supply of the drug. NIH has also provided funding to the South East Asian Influenza Clinical Research Network to improve understanding and clinical management of influenza through clinical research, as well as to increase clinical research capacity in participating countries (Indonesia, Thailand, and Vietnam). One ongoing study will compare the safety and effectiveness of standard- and high-dose Tamiflu in treating animal and severe seasonal influenza in hospitalized children and adults. Planned studies include the evaluation of the safety and tolerability of the long-term use of Tamiflu and Relenza to prevent influenza in health care workers and a study of the safety and effectiveness of using intravenous Relenza for the treatment of H5N1 infection in adults and children.

Research on and Development of Pre-Pandemic Vaccines

Manufacturers, sometimes with the assistance of governments, are working to develop pre-pandemic vaccines. These vaccines might provide some protection against a pandemic strain and also give manufacturers experience in producing effective vaccines for a potential pandemic strain.
The United States has been the primary government sponsor of these efforts, although other countries have sponsored some studies; other studies have been conducted without government support. The United States has supported studies of vaccines developed by Baxter International, Inc., MedImmune, Novartis, and sanofi pasteur. WHO has reported two ways in which a pre-pandemic vaccine could be used. First, such a vaccine could be used to protect selected populations at risk of being infected by viruses currently circulating among poultry. Second, it could be used to immunize general populations or selected groups (e.g., health care workers) against a potential pandemic strain. However, WHO points out that the pandemic virus may be quite different from the strain people are immunized against, and therefore the pre-pandemic vaccine may not be protective. Pre-pandemic vaccine might also be used as part of a “prime-boost” series in which two doses of vaccine based on different strains would be given. The first vaccine would be a pre-pandemic vaccine that would prime the immune system for a second vaccine. The second vaccine would match the pandemic strain. It is hoped that together the two doses would result in immunity. However, the data needed to support such an approach have not been fully developed. In April 2007, FDA licensed the first pre-pandemic vaccine for human use in the United States against H5N1 based on the results of a clinical trial conducted by NIH. FDA approved the vaccine for the immunization of persons 18 to 64 years of age at increased risk of exposure to the H5N1 influenza subtype. The vaccine, manufactured by sanofi pasteur, will not be marketed commercially. Instead, the vaccine has been purchased by the federal government for inclusion in the U.S. stockpile for distribution if needed. However, NIH’s clinical trial showed limitations of the vaccine.
First, in previously unexposed populations, two 90-microgram doses are needed to elicit the levels of immune response usually thought to be adequate to provide protection, instead of the single 15-microgram dose of seasonal influenza vaccine that is needed for protection against a seasonal influenza strain. Second, even with this larger dosing regimen, vaccination results in an immune response thought to be protective in only 45 percent of those receiving the vaccine. Studies of seasonal vaccines in healthy persons have demonstrated that effectiveness against well-matched strains is 70 to 90 percent. In addition, experts have noted that such a high vaccine dose could result in an unusually high rate of adverse reactions. NIH, along with other federal agencies, sanofi pasteur, and other manufacturers, continues to work on the development of vaccines that will stimulate enhanced immune response at lower doses of vaccine. GlaxoSmithKline and Novartis have both announced that they have submitted pre-pandemic H5N1 vaccines for approval in Europe.

Research on and Development of Cell-Based Production Technology

To speed development and production of new technologies for influenza vaccines, the U.S. government and manufacturers are pursuing the development of cell-based vaccine production technology as an alternative to current egg-based production. Egg-based vaccine production cannot be scaled up quickly, and egg supplies can be compromised in the event of an influenza outbreak. According to HHS, cell-based technology could be scaled up quickly because cells can be frozen in advance and large volumes grown quickly, thus providing surge capacity in the event of a pandemic. In April 2005, HHS awarded a 5-year, $97 million contract to sanofi pasteur for development of a cell-based influenza vaccine.
Subsequently, HHS awarded more than $1 billion in contracts to accelerate development and production of cell-based production technologies for influenza vaccines within the United States. (See table 4.) HHS officials told us that this funding provided companies with the incentive to invest in this technology. In the past, companies did not want to invest in cell-based production technologies because doing so would not increase efficiency, as both cell- and egg-based production would yield similar amounts of vaccine. FDA issued draft guidance in September 2006 to assist manufacturers in developing cell-based vaccines. Companies in other countries are making similar, although smaller, investments, usually without government support. Progress has already been made on the development of cell-based influenza vaccines. For example, Solvay Pharmaceuticals received authorization to market its cell-culture influenza vaccine in the Netherlands in 2001. However, this vaccine has not yet been marketed. In June 2007, the European Union approved a cell-culture-derived seasonal influenza vaccine manufactured by Novartis. The company has stated that it expects to submit an application for approval to market the vaccine in the United States in 2008.

Research on and Development of Adjuvants

HHS has awarded contracts to manufacturers to research and develop influenza vaccines that use adjuvants. An adjuvant is a substance added to a vaccine to improve its effectiveness so that less vaccine is needed to provide protection. By decreasing the amount of vaccine needed per person while still providing the same level of protection, adjuvants can stretch the vaccine supply. Adjuvants have been used in other vaccines, but not in influenza vaccines.
GlaxoSmithKline, Novartis, and sanofi pasteur have announced study results showing that adjuvanted influenza vaccines produced possible protective immunity at lower doses than did nonadjuvanted vaccines. For example, Novartis has reported that its adjuvanted vaccine produced a strong immune response against H5N1, H5N3, and H9N2, but that its vaccine without adjuvant produced a poor response. In January 2007, HHS announced that it had awarded contracts totaling $132.5 million to three vaccine manufacturers for the development of H5N1 vaccines using an adjuvant. (See table 5.) In addition to potentially stretching the vaccine supply, there is evidence that when adjuvants are added to a vaccine, that vaccine might also provide protection against strains to which it is not fully matched. Research by Novartis demonstrated that its H5N3 vaccine generated a better immune response against H5N1 strains with an adjuvant than without it. Similarly, a GlaxoSmithKline vaccine with adjuvant provided protection against two diverse H5N1 influenza strains.

Research on and Development of Universal Vaccines and Other Vaccines That Protect Against Multiple Influenza Strains

Current efforts to develop a universal influenza vaccine are intended to address constraints on both the effectiveness and availability of vaccines. A universal vaccine would protect against multiple virus strains. Availability of universal influenza vaccines would eliminate the current process of reformulating seasonal influenza vaccines each year. Consequently, if vaccines effective against pandemic influenza could be available when a pandemic strain emerged, there would not be a 20- to 23-week period between identification of the pandemic strain and the ability to produce an effective vaccine. The recent threat of a human pandemic arising from H5N1 has spurred new funding for manufacturers currently attempting to develop universal influenza vaccines.
In October 2005, a consortium of companies and universities announced that it had received a 2-year, $1.4 million grant from the European Union to support the Universal Vaccine project. The aim of this project is to develop an easily administered nasal vaccine that provides lifelong protection against influenza. Manufacturers such as Merck and Cytos Biotechnology are also working to develop a universal vaccine. NIH is working to bring universal vaccine candidates through the pre-clinical development stage. Despite the recent increase in funding, experts caution that a completely universal influenza vaccine is years away. Therefore, some researchers and manufacturers are developing live attenuated vaccines that might protect against a matched strain as well as the mutated strains that typically emerge from year to year. These live attenuated vaccines would not be completely universal, but they are easier to develop than universal vaccines and may provide broader protection than current vaccines that match a specific influenza strain. For example, in studies in children, MedImmune’s current seasonal FluMist vaccine, which is a live attenuated vaccine, proved effective against the H3N2 strain to which it was fully matched as well as against mismatched H3N2 strains. However, this may not be the case in adults. Evidence suggests that a live attenuated vaccine was less effective in protecting against mismatched strains in healthy adults than was the inactivated trivalent vaccine. In September 2005, HHS announced that it would work with MedImmune to develop at least one vaccine for each of the 16 identified hemagglutinin influenza proteins. According to experts, it is not clear whether live attenuated virus vaccines matched only for the hemagglutinin protein (e.g., H5 or H7) would work as well against a pandemic strain as would a vaccine matched to the particular strain.
However, as in the case of pre-pandemic vaccines, even if limited in their effect, these vaccines might help reduce mortality during a pandemic while a fully matched vaccine is developed. HHS officials noted that the protection offered by live attenuated vaccines against multiple strains of different subtypes has yet to be established. It has also been noted that even if an acceptable live attenuated H5N1 vaccine is developed, it could not be used as a pre-pandemic vaccine. There is concern that it could reassort with a circulating seasonal influenza virus and thereby increase its transmissibility among humans. Similarly, research has been conducted using vaccines made from whole influenza virus rather than just parts of the virus. Studies from two vaccine manufacturers, Baxter and Biken, have independently suggested that whole virus vaccines provide protection against multiple strains of the H5N1 virus and require a smaller dose than do vaccines made from parts of the virus. Consequently, the use of whole virus vaccine might not only increase the number of influenza strains against which one is protected, but also increase the number of doses available. Increasing the global availability of antivirals and vaccines includes improving the global capacity for their distribution and administration. These efforts also include establishing global and regional antiviral stockpiles and addressing restrictions that different national regulations place on drug manufacture and approval. WHO, countries, and pharmaceutical manufacturers have established global and regional antiviral stockpiles to enhance the availability and quick distribution of antivirals to the site of outbreaks. In August 2005, Roche donated 3 million treatment courses of Tamiflu to WHO for a global stockpile to contain or slow the spread of a pandemic at its origin.
According to Roche officials, the size of the stockpile was based on studies that indicated that 3 million treatment courses would be sufficient to stop the spread of a pandemic strain at its source. Roche will be responsible for the delivery of Tamiflu from these stockpiles to the international airport closest to the outbreak, where it will transfer the Tamiflu to WHO. It will then be the responsibility of the affected countries to distribute the donated antivirals within their country to contain outbreaks. Subsequently, in January 2006 Roche announced the donation of an additional 2 million treatment courses to WHO for the establishment of regional stockpiles. In March 2007, WHO stated that these drugs are for the use of countries currently experiencing human outbreaks of animal influenza. Supplies from this second donation have already been sent to those countries. Additionally, some countries have taken the lead in funding regional stockpiles. Japan has provided 500,000 treatment courses of Tamiflu for a regional stockpile for Asia. Japan is also funding the delivery of antivirals from that regional stockpile to the capitals of affected Asian nations. Discussions are under way for HHS to assist in this antiviral stockpiling. For example, there are discussions about sharing antivirals from the United States stockpile, but these drugs could be recalled for domestic use if outbreaks could not be contained or if an outbreak occurred in North America. In May 2006, HHS sent a stockpile of approximately 260,000 treatment courses of Tamiflu to Asia to be pre-positioned for international containment efforts in the event of a pandemic influenza outbreak in that region. The United Nations System Influenza Coordinator and the World Bank reported in December 2007 that individual countries have also purchased or are planning to purchase antivirals but that coverage in many countries remains limited. 
Sixty-eight percent of countries worldwide have purchased antivirals and an additional 22 percent plan to purchase them. However, the agencies also note that 36 percent of countries report that their supply of antivirals covers less than 1 percent of their population while another 37 percent report that their antiviral supply covers from 1 to 20 percent of their populations. Individual countries and WHO are also establishing pre-pandemic influenza vaccine stockpiles. Several industrialized countries, including the United States, have established pre-pandemic influenza vaccine stockpiles to vaccinate critical workforce and primary health care workers at the onset of a pandemic. WHO is working to establish a pre-pandemic vaccine stockpile. Such a stockpile could help to alleviate developing countries’ concerns about their lack of access to H5N1 vaccines developed using virus samples provided by them. In April 2007, a WHO expert committee wrote that there is sufficient scientific support for creating a stockpile of H5N1 vaccine for use in countries without influenza vaccine production capacity or the ability to purchase stockpiles of H5N1 vaccines. The committee noted that there is some evidence that current H5N1 vaccines produce a protective immune response against other H5N1 viruses as well. Following this, in May 2007, the World Health Assembly passed a resolution requesting WHO to establish an international stockpile of vaccines for H5N1 or other influenza viruses of pandemic potential. In June 2007, GlaxoSmithKline announced that it would contribute 50 million doses of its H5N1 vaccine to the stockpile, enough to vaccinate 25 million people. Also in June, WHO stated that three additional companies had indicated their willingness to make some of their H5N1 vaccine available for the stockpile. 
In an effort to facilitate access to various vaccines, FDA and its international counterparts, in collaboration with WHO, are developing a standard set of data requirements to support the licensure of pandemic and pre-pandemic vaccines. Each country has its own requirements for the development and licensure of vaccines for human use (which include testing in clinical trials). If demand were to surge as might happen in the event of a pandemic, the time needed to go through the regulatory process to gain approval for a new vaccine could constrain its availability. FDA and its international counterparts, in conjunction with WHO, participate in international working groups that examine regulations for the development and manufacturing of influenza vaccines. Some governments are also exploring other avenues to speed up their domestic regulatory process to enhance pandemic preparedness. Currently, FDA’s goal is to complete the review of a “standard” application in the United States for vaccine licensure within 10 months. However, the goal for review of a “priority” license application is 6 months. Priority reviews are given to those vaccines that have the potential for providing significant preventive, diagnostic, or therapeutic advancement as compared to existing treatments for a serious or life-threatening disease. In addition, FDA has processes intended to shorten the time needed for commercial development and FDA review in certain circumstances. For example, because it can take many years to determine whether a drug provides real improvement for patients—such as living longer or feeling better—FDA has a process known as “accelerated approval.” Under accelerated approval, applications are reviewed using a substitute measurement of effectiveness that is considered likely to predict patient benefit. Similarly, the European Union is pursuing approval for pre-pandemic vaccines as a mechanism to expedite approval for a pandemic vaccine. 
Prior to the onset of a pandemic, these pre-pandemic vaccines undergo safety and effectiveness testing and are submitted for approval. In the event of a pandemic, this approved pre-pandemic vaccine would then be reformulated to match the pandemic virus and expedited approval for the reformulated vaccine would be sought. Because the application would only pertain to a variation on the earlier, approved, pre-pandemic vaccine, regulatory approval is expected to be faster. Both GlaxoSmithKline and Novartis have had pre-pandemic vaccines approved by the European Union under this mechanism. In addition, GlaxoSmithKline, Novartis, and sanofi pasteur have submitted additional vaccines for approval under this process. While efforts are under way to alleviate constraints upon the effectiveness and availability of antivirals and vaccines, certain efforts face limitations and will take several years to complete. The strengthening of animal and human surveillance systems is vital to increasing the effectiveness of antivirals and vaccines. However, according to OFFLU officials, that network lacks sufficient funding to hire staff needed to analyze influenza strains. Officials fear that without this staff, scientists might not continue to submit samples to OFFLU—which are analyzed and presented in public databases—out of concern that they would not be analyzed. FAO, OIE, and WHO have stated that greater support of OFFLU is required in order for it to fulfill its functions. Experts have noted that public access to databases that contain influenza sequence information is vital to understanding the spread and evolution of influenza viruses and, therefore, to the research and development of influenza treatments. International support for clinical trials—necessary for developing and evaluating the effectiveness of antivirals and vaccines—is largely provided by only four countries: the United States, Australia, Japan, and the United Kingdom. 
The United States supports clinical trials for antivirals and vaccines being developed by global manufacturers, but experts state that more widespread and consistent international support is needed. Clinical trials are also required to test effectiveness and cross-protection provided by pre-pandemic vaccines. The United States, Australia, Japan, and the United Kingdom have provided the most support for pandemic vaccine development. However, only the United States provides substantial support to both domestic and international manufacturers for such trials. According to IFPMA representatives, the United States’ efforts are the primary governmental source for funding clinical trials for these vaccines. Increasing demand for vaccines is likely to continue to pose difficulties because a number of countries will still need to balance concerns about a potential pandemic against other existing public health concerns. Low demand for vaccines to treat seasonal influenza is due in part to the low priority placed on seasonal influenza by many countries. As discussed earlier, current global demand for seasonal influenza vaccines is lower than global need, which is the amount required to cover individuals under medical guidelines for influenza vaccination. Manufacturers have been reluctant to invest in the development and production of vaccines due to this low demand and disincentives such as low profits. One reason for low demand is that seasonal influenza programs compete with many other public health priorities for limited budgets in developing countries. For example, Indonesia, one of the countries experiencing human H5N1 outbreaks, is also dealing with other diseases as well as the aftermath of a tsunami, volcanic eruptions, and earthquakes. Some developing countries are willing to implement seasonal influenza vaccination programs but require outside funding to do so. 
One objective of WHO’s Global pandemic influenza action plan to increase vaccine supply is to increase seasonal vaccine use. According to WHO, a minimum of $300 million is required to do this. Similarly, efforts to increase vaccine production capacity can also be problematic. Citing Vietnam as an example, NIH officials told us that countries may have been too overwhelmed with H5N1 outbreaks to accept offers of assistance to develop vaccine production infrastructure. Although efforts are under way to increase antiviral and vaccine manufacturers’ production capacity by building new facilities, these new facilities are not expected to be ready to produce antivirals and vaccines for several years. According to manufacturers, it will take at least 5 years to build new vaccine manufacturing facilities and receive regulatory approval. WHO stated that it will take the six countries that received grants to develop vaccine production capacity at least 3 to 5 years to begin producing vaccine. Additionally, Roche granted sublicenses to selected drug companies in developing countries for the production of generic versions of Tamiflu. However, these agreements will not immediately alleviate any shortages due to the complicated production process for Tamiflu. Roche has estimated that it would take 2 to 3 years for a new facility to produce Tamiflu on a large scale. It has also stated that, even with all the materials necessary for production available, it takes 6 to 8 months to produce Tamiflu. Similarly, GlaxoSmithKline has stated that it would take a minimum of 6 to 9 months to increase production capacity for Relenza. WHO has stated that it is unlikely that sufficient quantities of antivirals will be available in any country at the onset of a pandemic. Further, in November 2006, Roche stated that because of high demand and long manufacturing lead times for Tamiflu, it is highly unlikely that it would be able to fill large Tamiflu orders on short notice. 
Demonstrating the importance of demand in driving production capacity, Roche announced in April 2007 that it planned to reduce Tamiflu production because production now exceeded demand for the drug. Roche officials stated that if demand were to increase, it would take 4 months to return the production level to 400 million treatment courses annually. Although global, regional, and national stockpiles of antivirals are being established, little progress has been made in improving the capacity for distributing the stockpiled antivirals to the site of outbreaks around the world, particularly within developing countries. According to Asian Development Bank officials, the logistics of distributing antivirals in the event of a pandemic would be of greater concern than the limited supply of antivirals. Although the cost and logistics of transporting antivirals from a stockpile to a country’s capital are addressed to some extent by WHO and those countries and manufacturers that have donated antivirals, issues of transportation from the capital to a province or distant region remain unaddressed by many national governments. WHO has established protocols for countries to request Tamiflu for containment purposes from its global stockpile. Roche, the donor of the WHO global stockpile, will deliver the drugs to the international airport nearest the crisis and hand them over to WHO. National authorities would then be responsible for the storage, transportation, and administration of those drugs within their borders. To do so effectively, governments must have plans in place prior to an outbreak as well as adequate resources to implement them. The U.S. government has assisted countries in developing such plans. In June 2007, WHO officials reported that over 178 countries have drafted or finalized their preparedness plans. However, WHO has noted that not all plans have incorporated its rapid containment protocol. 
HHS, the Department of State, and WHO provided comments on a draft of this report. The comments from HHS and the Department of State are reproduced in appendixes I and II. WHO provided comments via e-mail and stated that the report was comprehensive and useful. In its comments, HHS said that we had lost the larger context of all efforts with respect to pandemic preparedness and that HHS’s antiviral and vaccine strategies and implementation plans are not captured. Expressing concern about our focus on antivirals and vaccines, HHS said that these are only one piece of the agency’s broader scope of work on this topic. It cautioned that the use of antivirals and vaccines in response to a pandemic is part of a larger, integrated whole, so viewing them outside of the broader context is likely to raise other questions and issues. HHS said that we were assuming that antivirals and vaccines are the only tools necessary to forestall a pandemic. HHS commented on the uncertainty of success associated with measures such as stockpiling antivirals, stating that one must not assume that establishing antiviral stockpiles has solved the problem. HHS noted that if a potential pandemic is identified early, efforts at containment should and will be attempted. It further commented that if containment fails, the effort may still have the effect of slowing the pandemic’s rate of spread while if it succeeds, a pandemic may be at least temporarily averted. In its comments, HHS stated that as a preventive health measure only a vaccine will have the capability of dramatically changing the course of an influenza pandemic. We do not agree that we have lost the larger context of efforts to prepare for a pandemic. This work was done in response to a congressional request that we study the role that antivirals and vaccines could play in forestalling a pandemic. 
While this was the focus of this engagement specifically, we are well aware that antivirals and vaccines are just two of many possible measures that could be taken in response to an influenza pandemic. As HHS notes in its comments, we have issued other reports on various aspects of pandemic preparedness (see the Related GAO Products section of this report) and we have ongoing work on numerous other aspects of this issue. We stated on page 2 of the draft report provided to HHS for comment that antivirals and vaccines may play a role in forestalling a pandemic, but we did not suggest that they were the only available response measures. Nonetheless we have added language to the report to make it clearer still that antivirals and vaccines are just two of a variety of available countermeasures that are being contemplated by WHO, HHS, and other agencies charged with the responsibility of protecting the public in the event of a pandemic. However, a discussion of the full range of possible responses to an influenza pandemic currently being contemplated by HHS and other organizations and agencies is beyond the scope of this report. HHS also expressed concern with our discussion of “forestalling” a pandemic, suggesting that the premise that a pandemic can be forestalled is not one widely held by the public health or scientific community and is misguided and misleading. They said that few believe that a developing pandemic can be stopped in its tracks. In elaborating on this point, HHS suggested that the concept that antivirals and possibly vaccines might be used to stop an incipient pandemic or to slow the spread should be explicitly stated instead of using the word forestalled in a way that is very likely to be misinterpreted. They noted that, theoretically, the only way to truly forestall a human pandemic would be to eliminate the avian reservoir from which a future pandemic is likely to emerge. 
In the draft of this report provided to HHS for comment, we defined the word “forestalling” to mean “preventing or at least delaying.” We use this term to suggest that, while preventing a pandemic would be the desired result of any response effort, delaying the pandemic would perhaps be the more likely yet still desired result. It is not clear why the level of concern expressed by HHS in its comments on our use of the word “forestall” is being raised at this time. We issued a report in June 2007 that discussed efforts to forestall an influenza pandemic, including the word “forestall” in the report title, on which HHS provided written comments. At that time, HHS expressed no concern with the term. In addition, WHO has used the word in describing its efforts to respond to a pandemic and our definition is consistent with WHO’s use of the term. Moreover, we fail to see significant differences in the meaning of the word “forestall” as compared to other terms and concepts contained in other parts of HHS’s comments on this report and other public comments. For example, in its comments, HHS discussed a goal to “minimize the impact” of a pandemic. They expressed the desire that we explicitly state the concept of “stopping or slowing the spread of” an incipient pandemic, rather than using the word forestall, defined as “prevent or delay,” which HHS believes is likely to be misinterpreted by the readership. Later in its comments, HHS stated that a pandemic may be “temporarily averted” or slowed. HHS stated in its letter that a vaccine could “dramatically change the course” of a pandemic. While we believe that these concepts are consistent with our use of the term forestall when describing efforts to respond to a pandemic in such a way as to avert, slow, mitigate, or change the course of a pandemic, we have revised the report and defined forestall to mean containing, delaying, or minimizing the impact of the onset of a pandemic. 
We have also added discussion to the report to further clarify our use of the term, making it clear that success in these efforts is uncertain and that it is unlikely that a pandemic can be entirely prevented. HHS provided other general comments on the structure and organization of the report. HHS expressed concern that we have provided an inadequate amount of information about influenza diagnostic tests. They said that our emphasis on OFFLU is out of proportion to the role it plays in recognizing when a new virus with pandemic potential has begun to spread in humans. They further suggested that we do not adequately distinguish between seasonal, pre-pandemic, and pandemic vaccines. Finally, HHS stated that some information in the draft is out of date and that they corrected many factual errors in their technical comments. We have evaluated HHS’s other comments on the structure and organization of the report. Both their general and technical comments suggested that we have overemphasized some issues while underemphasizing others. They also touched upon areas where HHS does not believe that we adequately distinguished between various aspects of influenza response; for example, HHS commented that we do not adequately distinguish between seasonal and pandemic influenza but also noted that much of what is believed to be true about pandemic influenza is based upon experience with seasonal influenza. We made changes to the report where HHS’s general and technical comments could enhance clarity and completeness. However, this report was intended to describe the challenges and limitations of efforts to respond to an impending pandemic using antivirals and vaccines; it was not intended to capture a complete inventory of the most current scientific knowledge and developments regarding these two countermeasures. 
In some cases, rapid scientific advances may have outpaced the timing of this report such as, for example, the initiation of a new area of research not specifically identified in the report. In other cases, there is no consensus on the appropriate use and likely results of various medical countermeasures, including different types of antivirals and vaccines. Further, while we updated the report to reflect changes that occurred while the draft report was with the agencies for comment, we disagree with HHS that the draft contained many factual errors. In its comments, HHS updated the information in the report in several areas, provided additional information on some points, and suggested different areas of emphasis in others. HHS suggested different wording in several instances that would have made our description of certain concepts extremely technical and not easily understood by persons not expert in the field. In those instances, we often chose not to make the change suggested by HHS. There were few instances of corrections of facts. Moreover, in a meeting discussing the draft, an HHS official was complimentary of the accuracy and completeness of the report. The Department of State suggested in its comments that the report should be restructured to separate discussion of antivirals and vaccines. The comments state that these medical countermeasures are very different from each other in their application, utility, and the challenges the U.S. government faces in development and production of sufficient quantities. They further commented that while the report extensively discusses the challenges in production of adequate quantities of medical countermeasures, it does not give adequate consideration to the challenges of establishing protocols that would guide the international community in the use of whatever vaccines and antivirals are available. They also suggested that our use of the word “forestall” is somewhat ambiguous and should be clarified. 
While we did not separate the discussion of antivirals and vaccines as the Department of State suggested, we revised the draft to ensure that discussions of antivirals and vaccines are clearly distinguished from one another. While antivirals and vaccines are very different from each other, we believe that the issues involved in identifying where they are needed, manufacturing sufficient quantities, shipping them to where they are needed, and administering them safely are similar enough to merit discussing them together. We agree that the issue of establishing protocols to guide the international community in the use of antivirals and vaccines is an important one. However, this issue was discussed in the draft report and the Department of State did not articulate in its comments what information needs to be added. The Department of State’s concerns about our use of the word “forestall” are unclear. In making this comment, the Department of State suggests that we refer to the North American Plan for Avian and Pandemic Influenza, approved by the United States, Canada, and Mexico. The comments quote the plan, which states that “The North American Plan will enhance collaboration in order to … prevent or slow the entry of a novel (pandemic) strain of human influenza to North America.” We fail to see the meaningful difference between the words, “prevent or slow” in the plan and “prevent or delay,” which is the meaning of the word “forestall.” However, as stated earlier, we added discussion to the report to clarify our use of the word “forestall.” We incorporated technical comments provided by HHS, the Department of State, and WHO, as appropriate throughout the report. As arranged with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days after its issue date. 
At that time, we will send copies of this report to the Secretary of Health and Human Services, the Secretary of State, the Commissioner of the U.S. Food and Drug Administration, the Director of the Centers for Disease Control and Prevention, the Director of the National Institutes of Health, the Director of the Office of Global Health Affairs, the Special Representative on Avian and Pandemic Influenza at the U.S. Department of State, and to interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Marcia Crosse at (202) 512-7114 or crossem@gao.gov or David Gootnick at (202) 512-3149 or gootnickd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. In addition to the contacts above, Thomas Conahan, Assistant Director; Celia Thomas, Assistant Director; Robert Copeland; Etana Finkler; David Fox; Cathy Hamann; R. Gifford Howland; Michael McAtee; Jasleen Modi; Syeda Uddin; and George Bogart made key contributions to this report.

Related GAO Products

Influenza Vaccine: Issues Related to Production, Distribution, and Public Health Messages. GAO-08-27. Washington, D.C.: October 31, 2007.
Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007.
Influenza Pandemic: Further Efforts are Needed to Ensure Clearer Federal Leadership Roles and an Effective National Strategy. GAO-07-781. Washington, D.C.: August 14, 2007.
Influenza Pandemic: DOD Combatant Commands’ Preparedness Efforts Could Benefit from More Clearly Defined Roles, Resources, and Risk Mitigation. GAO-07-696. Washington, D.C.: June 20, 2007.
Influenza Pandemic: Efforts to Forestall Onset Are Under Way; Identifying Countries at Greatest Risk Entails Challenges. GAO-07-604. Washington, D.C.: June 20, 2007.
Avian Influenza: USDA Has Taken Important Steps to Prepare for Outbreaks, but Better Planning Could Improve Response. GAO-07-652. Washington, D.C.: June 11, 2007.
The Federal Workforce: Additional Steps Needed to Take Advantage of Federal Executive Boards’ Ability to Contribute to Emergency Operations. GAO-07-515. Washington, D.C.: May 4, 2007.
Financial Market Preparedness: Significant Progress Has Been Made, but Pandemic Planning and Other Challenges Remain. GAO-07-399. Washington, D.C.: March 29, 2007.
Influenza Pandemic: DOD Has Taken Important Actions to Prepare, but Accountability, Funding, and Communications Need to be Clearer and Focused Departmentwide. GAO-06-1042. Washington, D.C.: September 21, 2006.
Influenza Pandemic: Applying Lessons Learned from the 2004-05 Influenza Vaccine Shortage. GAO-06-221T. Washington, D.C.: November 4, 2005.
Influenza Vaccine: Shortages in 2004–05 Season Underscore Need for Better Preparation. GAO-05-984. Washington, D.C.: September 30, 2005.
Influenza Pandemic: Challenges in Preparedness and Response. GAO-05-863T. Washington, D.C.: June 30, 2005.
Influenza Pandemic: Challenges Remain in Preparedness. GAO-05-760T. Washington, D.C.: May 26, 2005.
Flu Vaccine: Recent Supply Shortages Underscore Ongoing Challenges. GAO-05-177T. Washington, D.C.: November 18, 2004.
Emerging Infectious Diseases: Review of State and Federal Disease Surveillance Efforts. GAO-04-877. Washington, D.C.: September 30, 2004.
Emerging Infectious Diseases: Asian SARS Outbreak Challenged International and National Responses. GAO-04-564. Washington, D.C.: April 28, 2004.
Global Health: Challenges in Improving Infectious Disease Surveillance Systems. GAO-01-722. Washington, D.C.: August 31, 2001.
Flu Vaccine: Supply Problems Heighten Need to Ensure Access for High-Risk People. GAO-01-624. Washington, D.C.: May 15, 2001.
Influenza Pandemic: Plan Needed for Federal and State Response. GAO-01-4. Washington, D.C.: October 27, 2000.

Pandemic influenza poses a threat to public health at a time when the United Nations' World Health Organization (WHO) has said that infectious diseases are spreading faster than at any time in history. The last major influenza pandemic occurred from 1918 to 1919. Estimates of deaths worldwide if a similar pandemic were to occur have ranged between 30 million and 384 million people. Individual countries and international organizations have developed and begun to implement a strategy for forestalling (that is, containing, delaying, or minimizing the impact of) the onset of a pandemic. Antivirals and vaccines may help forestall a pandemic. GAO was asked to examine (1) constraints upon the use of antivirals and vaccines to forestall a pandemic and (2) efforts under way to overcome these constraints. GAO reviewed documents and consulted with officials of the Departments of State and Health and Human Services (HHS), international organizations, and pharmaceutical manufacturers. WHO commented that the report was comprehensive and useful. HHS stressed that vaccines and antivirals must be viewed in a larger context. State and HHS commented that the term "forestall" is ambiguous and misleading. However, GAO has used the word in a way that is consistent with WHO's use of the term. The use of antivirals and vaccines, two elements of the international strategy to forestall a pandemic, could be constrained by their uncertain effectiveness and limited availability. To use antivirals effectively, health authorities must be able to detect a pandemic influenza strain quickly through surveillance and diagnostic efforts and use this information to administer antivirals. 
The effectiveness of antivirals could be limited both by their use more than 48 hours after the onset of symptoms and by the emergence of strains resistant to antivirals. Unlike antivirals, vaccines are formulated to target a specific influenza strain in advance of infection. The effectiveness of vaccines in forestalling a pandemic could be limited because such a targeted pandemic vaccine cannot be developed until that strain has been identified. Due to the time required to identify the virus and develop and manufacture a pandemic vaccine (20 to 23 weeks, according to HHS), such vaccines are likely to play little or no role in efforts to forestall a pandemic in its initial phases. The availability of antivirals and vaccines in a pandemic could be inadequate due to limited production, distribution, and administration capacity. WHO has stated that it is unlikely that sufficient quantities of antivirals will be available in any country at the onset of a pandemic. The distribution and administration capacity for antivirals and vaccines is limited in some countries by poor or nonexistent delivery plans and networks, a lack of facilities for administering the drugs, and small numbers of personnel trained to administer them. The United States, its international partners, and the pharmaceutical industry are investing substantial resources to address constraints on the availability and effectiveness of antivirals and vaccines. Efforts are under way to improve influenza surveillance, including diagnostic capabilities, so that outbreaks can be quickly detected. Increased demand and government support have led manufacturers to increase research into more effective antivirals and vaccines. Manufacturers are developing new antivirals to combat influenza. New methods for developing vaccines are being studied in order to reduce the amount of vaccine that is needed and to increase the number of strains against which it is effective. 
Pre-pandemic vaccines, which are formulated to target influenza strains that have the potential to cause a pandemic, are being developed. However, these vaccines may or may not be effective against the pandemic strain that ultimately emerges. To overcome limitations on the availability of antivirals and vaccines, manufacturers are working to increase production at existing facilities and build new facilities. To address constraints on the distribution and administration of antivirals, stockpiles are being established to allow faster delivery of antivirals to countries experiencing outbreaks. WHO is also working to establish stockpiles of pre-pandemic vaccines. However, these efforts also face limitations. For example, increasing production capacity of vaccines and antivirals will take several years as new facilities are built and necessary materials acquired. |
For the 2020 Census, the Bureau is significantly changing how it intends to conduct the census, in part by re-engineering key census-taking methods and infrastructure, and making use of new IT applications and systems. The CEDCAP program, which began in October 2014, is intended to provide data collection and processing solutions (including systems, interfaces, platforms, and environments) to support the Bureau’s multiple surveys throughout the survey life cycle (including survey design; instrument development; sample design and implementation; data collection; and data editing, imputation, and estimation). In October 2015, the Bureau estimated that, with its new approach, it expects to be able to conduct the 2020 Census for a life-cycle cost of $12.5 billion, which would be a reduction of about $5.2 billion from its estimate of what it would cost to repeat the design and methods of the 2010 Census. However, in June 2016, we reported that this $12.5 billion cost estimate was not reliable and did not adequately account for risks that could affect the 2020 Census costs. In November 2015, the Bureau issued a 2020 Census Operational Plan, which is intended to outline the design decisions that are to drive how the 2020 Decennial Census will be conducted—and which are expected to dramatically change the Bureau’s approach to conducting the 2020 Decennial Census. The plan identified 350 redesign decisions that the Bureau had either made or was planning to make through 2018. In August 2016, we reported that the Bureau had determined that about 51 percent of the design decisions were either IT-related or partially IT-related (84 IT-related and 94 partially IT-related). As of October 2016, the Bureau reported that it had made 68 IT-related and 62 partially IT-related design decisions. 
For example, the Bureau had decided that individuals/households are to be able to respond to the census on the Internet from a computer, mobile device, or other devices that access the Internet; that it intends to award a contract to provide mobile phones and the accompanying service to enumerators; and that it plans to use a hybrid cloud solution where it is feasible. However, the Bureau acknowledged that it still needed to make 16 IT-related and 32 partially IT-related design decisions, including on (1) the uses of cloud-based solutions, such as whether it plans to use a cloud service provider to support a tool for assigning, controlling, tracking, and managing enumerators’ caseloads in the field; (2) the tools and test materials to be used during integration testing; and (3) the expected scale of the system workload for those respondents that do not use the Bureau-provided Census identification. To inform these design decisions, the Bureau held several major operational tests, including the 2014 Census test, which was conducted in the Maryland and Washington, D.C., areas to evaluate new methods for conducting self-response and non-response follow-up; the 2015 Census test in Arizona, which evaluated, among other things, (1) the use of a field operations management system to automate data collection operations and provide real-time data, (2) the ability to reduce the non-response follow-up workload using information previously provided to the government, and (3) the use of personally owned mobile devices by the field-based enumerators who go door to door to collect census data; the 2015 Optimizing Self-Response test in Savannah, Georgia, and the surrounding area, which was intended to explore methods of encouraging households to respond using the Internet, such as by using advertising and outreach to motivate respondents, and enabling households to respond without a Bureau-issued identification number; and the 2016 Census tests in Harris County, Texas, and
Los Angeles, California, which evaluated, among other things, the efficiency of non-response follow-up using contractor-provided mobile devices. Looking forward, the Bureau has plans for two additional operational tests: (1) the 2017 Census test—a nationwide sample of how individuals respond to Census questions using paper, the Internet, or the phone—in order to evaluate key new IT components, such as the Internet self-response system and the use of a cloud-based infrastructure; and (2) the 2018 end-to-end test, scheduled from August 2017 through December 2018, which, as previously mentioned, is to test all key systems and operations to ensure readiness for the 2020 Census. The 2020 Decennial Census operations are dependent on about 50 IT systems that are currently being developed or are already in production. Eleven of these systems are expected to be provided as CEDCAP enterprise systems, which have the potential to offer numerous benefits to the Bureau’s multiple survey programs, such as enabling an Internet response option; automating the assignment, control, and tracking of the caseloads of the field-based enumerators; and enabling a mobile data collection tool for field work. More details on each of the CEDCAP projects can be found in our June 2016 testimony and our August 2016 report. Our August 2016 report noted that the projects were at varying stages of planning and design, and none were in the implementation/deployment stage. The Bureau had previously developed several pilot systems to provide and test different capabilities, but in May 2016, decided that it would acquire six of the capabilities from a vendor, using a commercial off-the-shelf IT platform, rather than continue to develop the capabilities in-house. This project is called the Enterprise Censuses and Surveys Enabling (ECASE) initiative. 
The capabilities that ECASE is to provide include key functionality intended to significantly redesign the 2020 Census and achieve efficiency gains, such as enabling an Internet response option and an operational control system that automates the assignment, tracking, and management of enumerators’ case work. The Bureau does not have a firm estimate for the cost of the CEDCAP projects. In 2013, the CEDCAP program office estimated that the program would cost about $548 million from 2015 to 2020. More recently, in July 2015, an independent cost estimate for CEDCAP projected the projects to cost about $1.14 billion from 2015 to 2020. However, this July 2015 estimate was developed before the Bureau decided to purchase rather than continue to build six of the CEDCAP capabilities. As noted in our prior reports, the Bureau’s past efforts to acquire and implement new approaches and systems have not always gone as planned. As one example, during the 2010 Census, the Bureau planned to use handheld mobile devices to support field data collection for the census, including following up with non-respondents. However, due to significant problems identified during testing of the devices, as well as cost overruns and schedule slippages, the Bureau decided not to use the handheld devices for nonresponse follow-up. Instead, it reverted to paper-based processing, which increased the cost of the 2010 Census by up to $3 billion and significantly increased the risk of not completing the Census on time. Due in part to these technology issues the Bureau was facing, we designated the 2010 Census a high-risk area in March 2008. Further, we testified in November 2015 that key IT decisions needed to be made soon because the Bureau was less than 2 years away from end-to-end testing of all systems and operations to ensure readiness for the 2020 Census, leaving limited time to implement the systems. 
We emphasized that the Bureau had deferred key IT-related decisions, and that it was running out of time to develop, acquire, and implement the systems it will need to deliver the redesigned operations. The Bureau is not alone in facing challenges in acquiring IT systems—it is a systemic issue that plagues the federal government. Although the executive branch has undertaken numerous initiatives to better manage the more than $80 billion that is annually invested in IT, we have a significant body of work that has found that federal IT investments too frequently fail or incur cost overruns and schedule slippages while contributing little to mission-related outcomes. We have previously testified that the federal government has spent billions of dollars on failed IT investments, such as the Department of Defense’s Expeditionary Combat Support System, which was canceled in December 2012, after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds; the Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program, which was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011 due to challenges in managing the program; and the National Oceanic and Atmospheric Administration, Department of Defense, and the National Aeronautics and Space Administration’s National Polar-orbiting Operational Environmental Satellite System, which was a tri-agency weather satellite program that was terminated in February 2010 after having spent 16 years and almost $5 billion on the program, when a presidential task force decided to disband the system. Our work has shown that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. 
In many instances, agencies have not consistently applied best practices that are critical to successfully acquiring IT investments, such as (1) program staff having the necessary knowledge and skills; (2) program staff prioritizing requirements; (3) end users participating in the testing of system functionality prior to end user acceptance testing; (4) government and contractor staff being stable and consistent; and (5) program officials maintaining regular communication with the prime contractor. Due to the challenges of acquiring IT across the federal government, we added improving the management of IT acquisitions and operations as a key area in our 2015 High-Risk Report. As part of this new area, we also identified CEDCAP as one of nine programs across the federal government in need of the most attention. In August 2016, we reported that the CEDCAP and 2020 Census programs were intended to be on parallel implementation tracks and had major interdependencies; however, the interdependencies between these two programs had not always been effectively managed. Importantly, CEDCAP relies on 2020 Census to be one of the biggest consumers of its enterprise systems, and 2020 Census relies heavily on CEDCAP to deliver key systems to support its redesign. Nevertheless, while both programs had taken a number of steps to coordinate, such as holding weekly schedule coordination meetings and participating in each other’s risk review board meetings, they lacked processes for effectively integrating their schedule dependencies, integrating the management of interrelated risks, and managing requirements. Specifically: The CEDCAP and 2020 Census programs did not have an effective process for integrating schedule dependencies. 
Best practices identified in our Schedule Assessment Guide call for dependencies between two programs to be automatically linked and dynamically responsive to change, or handled through a defined repeatable process if manual reconciliation cannot be avoided. We reported that the CEDCAP and 2020 Census programs had both established master schedules that contain thousands of milestones and tens of thousands of activities and had identified major milestones within each program that were intended to align with each other. However, the CEDCAP and 2020 Census programs maintained their master schedules using different software where dependencies between the two programs were not automatically linked, were not dynamically responsive to change, and were not handled through a defined repeatable process. Instead, the Bureau’s practice of maintaining separate dependency schedules, which must be manually reconciled, had proven to be ineffective and had contributed to the misalignment between the programs’ schedules. We concluded in our report that, without an effective process for ensuring alignment between the two programs, the Bureau faces increased risk that capabilities for carrying out the 2020 Census will not be delivered as intended. Thus, we recommended that the Bureau define, document, and implement a repeatable process to establish complete alignment between the CEDCAP and 2020 Census programs by, for example, maintaining a single dependency schedule. The Bureau agreed with this recommendation and indicated it would be taking actions to address it. CEDCAP and 2020 Census did not have an integrated list of risks facing both programs. We reported that the two programs had taken steps to collaborate on identifying and mitigating risks, such as having processes in place for identifying and mitigating risks that affect their respective programs. 
However, we found that these programs did not have an integrated list of risks (referred to as a risk register) with agreed-upon roles and responsibilities for tracking them, as called for by best practices identified by GAO for collaboration and leading practices in risk management. This decentralized approach introduced two key challenges: (1) there were inconsistencies in tracking and managing interdependent risks, and (2) tracking risks in two different registers could result in redundant efforts and potentially conflicting mitigation efforts. To address this, we recommended that the Bureau establish a comprehensive and integrated list of all interdependent risks facing the CEDCAP and 2020 Census programs, and clearly identify roles and responsibilities for managing this list. The Bureau agreed with this recommendation and indicated it would take actions to address it. Among other requirements management challenges, we reported that although the Bureau had drafted a process for managing requirements between CEDCAP and 2020 Census programs, the process had not yet been finalized. As a result, the Bureau had developed three system releases without having a fully documented and institutionalized process for collecting those requirements. In July 2016, Bureau officials stated that, due to the recent selection of a commercial vendor to deliver many of the CEDCAP capabilities, they did not plan to finalize this process until January 2017. We made three recommendations to the Bureau to strengthen its requirements management processes. The Bureau agreed with these recommendations and reported that it planned to take actions to address them. 
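The integrated risk register described above can be illustrated with a small sketch. This is a hypothetical example, not the Bureau's actual schema: the field names, risk IDs, and "joint" ownership convention are invented for illustration. It simply shows how merging the two programs' registers makes interdependent risks, and who is responsible for each, explicit rather than tracked redundantly in two places.

```python
# Hypothetical sketch of an integrated risk register for two programs.
# Field names and risk IDs are illustrative, not the Bureau's actual data.

def merge_risk_registers(cedcap_risks, census_risks):
    """Combine two risk registers; a risk appearing in both is marked
    interdependent and assigned a single joint owner so mitigation
    efforts are not duplicated or in conflict."""
    merged = {}
    for source, risks in (("CEDCAP", cedcap_risks), ("2020 Census", census_risks)):
        for risk in risks:
            entry = merged.setdefault(risk["id"], {
                "id": risk["id"],
                "description": risk["description"],
                "tracked_by": [],
            })
            entry["tracked_by"].append(source)
    for entry in merged.values():
        entry["interdependent"] = len(entry["tracked_by"]) > 1
        # Interdependent risks get one jointly accountable owner;
        # program-specific risks stay with their originating program.
        entry["owner"] = "joint" if entry["interdependent"] else entry["tracked_by"][0]
    return list(merged.values())

cedcap = [
    {"id": "R1", "description": "Vendor platform delivered late"},
    {"id": "R2", "description": "Internet response system not scalable"},
]
census = [
    {"id": "R2", "description": "Internet response system not scalable"},
    {"id": "R3", "description": "Enumerator workload system unready for test"},
]

register = merge_risk_registers(cedcap, census)
```

A single register of this shape avoids the two challenges noted above: each interdependent risk appears once with agreed-upon ownership, so tracking is consistent and mitigation is not redundant.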
While the Bureau plans to extensively use IT systems to support the 2020 Census redesign in an effort to realize potentially significant efficiency gains and cost savings, we reported that this redesign introduces critical information security challenges related to the following: minimizing the threat of phishing aimed at stealing personal information, which could target 2020 Census respondents, as well as Census employees and contractors; ensuring that individuals gain only limited and appropriate access; adequately protecting approximately 300,000 mobile devices; ensuring adequate control of security performance requirements in a cloud environment, such as those related to data reliability, preservation, privacy, and access rights; adequately considering information security when making decisions about the IT solutions and infrastructure supporting the 2020 Census; making certain that key IT positions are filled and have appropriate information security knowledge and expertise; ensuring that contingency and incident response plans are in place that encompass all of the IT systems to be used to support the 2020 Census; adequately training Bureau employees, including its massive temporary workforce, in information security awareness; making certain that security assessments are completed in a timely manner and that risks are at an acceptable level; and properly configuring and patching systems supporting the 2020 Census. For example, the introduction of an option for households to respond using the Internet puts respondents more at risk for phishing attacks. In addition, because the Bureau plans to provide its enumerators with mobile devices to collect information from households that did not self-respond to the survey, it is important that the Bureau ensures that these devices are adequately protected. More details on each of these challenges can be found in our recently issued report. 
In early 2016, the Bureau’s acting Chief Information Officer and its Chief Information Security Officer acknowledged these challenges and described the Bureau’s plans to address them. For example, the Bureau has developed a risk management framework, intended to ensure that proper security controls are in place and provide authorizing officials with details on residual risks and progress to address those risks. To minimize the risk of phishing, Bureau officials noted that they plan to contract with a company to monitor the Internet for fraudulent sites pretending to be those of the Census Bureau. Continued focus on these considerable challenges will be important as the Bureau begins to develop and/or acquire systems and implement the 2020 design. Looking forward, there is uncertainty as to whether the Census Bureau will be ready for the 2018 end-to-end test. We have ongoing work for this committee that is evaluating the significant challenges the Bureau faces in developing, testing, and integrating systems prior to the 2018 test. Among other things, we plan to address the following key questions: Is the Bureau sufficiently prepared to complete the development, testing, and integration of all of the systems and infrastructure in time for the end-to-end test? There are less than 9 months before the 2018 test is scheduled to begin, but a great deal of development work remains to be completed and the Bureau is still developing the plans and schedules leading up to the 2018 test. For example, as of October 2016, only 3 of the 50 systems (6 percent) had been delivered. The other 47 systems that the Bureau plans to use during the 2018 end-to-end test were in various forms of development, including: 22 systems (or 44 percent) that were expected to be delivered by the time the 2018 end-to-end test begins; 15 systems (or 30 percent) that were expected to be delivered after the 2018 end-to-end test begins; and 10 systems (or 20 percent) that did not have firm delivery dates. 
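The delivery-status breakdown above is simple arithmetic over the 50 systems; as a minimal check, the percentages cited can be reproduced from the October 2016 counts (the category labels here are shorthand for the descriptions in the text):

```python
# Delivery status of the 50 systems planned for the 2018 end-to-end test,
# using the October 2016 counts cited above; each percentage is that
# category's share of the 50-system total.
counts = {
    "delivered": 3,
    "expected before the test begins": 22,
    "expected after the test begins": 15,
    "no firm delivery date": 10,
}
total = sum(counts.values())  # 50 systems in all
shares = {status: round(100 * n / total) for status, n in counts.items()}
# delivered -> 6 percent; before -> 44; after -> 30; no firm date -> 20
```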
Figure 1 depicts the percentage of systems that have been delivered, are scheduled before and after August 1, 2017, and that have not yet been firmly scheduled for delivery. In addition, the Bureau has not identified the entire infrastructure (i.e., cloud solutions and/or data centers) that it plans to use for the 2018 test or 2020 operations and, as of October 2016, it did not yet have a time frame for the implementation of the infrastructure. Is the Bureau effectively managing its significant contractor support? The Bureau is relying on contractor support in many key areas, including the technical integration of all of the key systems and infrastructure, and the development of many of the data collection systems. For example, in August 2016, the Bureau awarded a contract for the technical integration of the 2020 Census systems and infrastructure, to include an evaluation of the systems and infrastructure, development of the infrastructure (e.g., cloud or data center) to meet the Bureau’s scalability and performance needs, integration of all of the systems, and support for testing activities. However, key dates for this work have not yet been finalized. In addition, the Bureau is relying on other contractors to develop a number of key systems, such as (1) development of the IT platform that will be used to collect data from a majority of respondents—through the use of the Internet, telephone, and non-response follow-up activities; (2) procurement of the mobile devices and cellular service to be used for non-response follow-up; and (3) development of the IT infrastructure in the field offices. The 2020 Census will be the first time that the Bureau uses a technical integrator in this manner; collects data nationwide via the Internet; and relies on mobile devices for non-response follow-up. 
A greater reliance on contractors for these key components of the 2020 Census requires the Bureau to focus on sound management and oversight of the key contracts, projects, and systems. Does the Bureau have back-up plans in case key systems are not ready in time for the 2018 test? The 2017 Census Test (with a Census Day of April 1, 2017) will be the first time that the Bureau has an opportunity to test various IT systems and infrastructure in operation, including the Internet response system and the system to be used for phone responses. However, because the Bureau is revising its plans for the 2017 test, it has not yet determined whether or how it will test other systems and features prior to the end-to-end test, such as the mobile devices that enumerators will use to record and upload household information and whether these systems can handle a nationwide scope. Uncertainty about what will be included in the 2017 test has the potential to add risk to the 2018 end-to-end test, and it will be important for the Bureau to make plans in case key systems are not ready in time for the 2018 test. Can the Bureau adequately secure the systems and data, and respond to breaches should they occur? As described previously, the Bureau faces significant challenges in securing systems and data, and tight time frames can exacerbate those challenges. Because many of the systems to be used in the 2018 end-to-end test are not yet fully developed, the Bureau has not finalized all of the controls to be implemented, completed an assessment of those controls, developed plans to remediate any control weaknesses, and determined whether there is time to fully remediate any weaknesses before the system test begins. We are continuing to evaluate these and other important areas related to the Bureau’s efforts to ensure its systems are ready for the 2020 Decennial Census. 
In summary, the CEDCAP program has the potential to offer numerous benefits to the Bureau’s multiple survey programs, including the 2020 Census program. While the Bureau had taken steps to implement CEDCAP projects, considerable work remains for its production systems to be in place to support the 2020 Census end-to-end system integration test—which is to occur in less than a year. Given the numerous and critical dependencies between the CEDCAP and 2020 Census programs, their parallel implementation tracks, and the 2020 Census’ immovable deadline, it is imperative that the interdependencies between these programs be effectively managed. Implementation of our recommendations to, among other things, use a repeatable process to establish complete alignment between the programs; establish an integrated list of all interdependent risks facing the programs; and strengthen the programs’ processes for requirements management would help align the programs and better ensure that the efficiency and effectiveness goals of the 2020 Census redesign are achieved. Additionally, while the large-scale technological changes for the 2020 Decennial Census introduce great potential for efficiency and effectiveness gains, they also introduce many information security challenges, including educating the public to offset inevitable phishing scams. Continued focus on these considerable security challenges will be important as the Bureau begins to develop and/or acquire systems and implement the 2020 Census design. In our ongoing work for this committee, we plan to address key questions about the Bureau’s ability to develop, integrate, test, and secure the IT systems and infrastructure in time for the end-to-end test. Given the short window of time before the test begins, it is important that the Bureau continue to focus its attention on implementing and securing the systems that will collect and store the personal information of millions of American people. 
Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. If you have any questions concerning this statement, please contact David A. Powner at (202) 512-9286 or pownerd@gao.gov. GAO staff who made key contributions to this testimony are Carol Harris (Director), Colleen Phillips (Assistant Director), Shannin G. O’Neill (Assistant Director), Kate Sharkey (Analyst in Charge), Andrew Beggs, Chris Businsky, Juana Collymore, Becca Eyler, Lee McCracken, Andrea Starosciak, Jeanne Sung, and Umesh Thakkar. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The U.S. Census Bureau (a component of the Department of Commerce) plans to significantly change the methods and technology it uses to count the population with the 2020 Decennial Census, such as by offering an option for households to respond to the survey via the Internet. The Bureau's redesign of the Census program relies on the acquisition and development of many new and modified systems. Several of the key systems are to be provided by an enterprise-wide initiative called CEDCAP. This statement summarizes the report GAO issued in August 2016 on the challenges the Bureau faces in managing the interdependencies between the 2020 Census and CEDCAP programs, as well as challenges it faces in ensuring the security and integrity of Bureau systems and data. 
GAO also updated key information based on its ongoing work for this committee by, among other things, reviewing the updated 2020 Operational Plan and systems lists provided by the Bureau, and by interviewing agency officials. The U.S. Census Bureau's (Bureau) 2020 Decennial Census program is heavily dependent upon the Census Enterprise Data Collection and Processing (CEDCAP) program to deliver the key systems needed to support the 2020 redesign. CEDCAP is a complex modernization program intended to deliver a system-of-systems for the Bureau's survey data collection and processing functions. In August 2016, GAO reported that while the two programs had taken steps to coordinate their schedules, risks, and requirements, they lacked effective processes for managing interdependencies. Officials acknowledged weaknesses in managing interdependencies and reported that they were taking steps to address them. Until these interdependencies are managed more effectively, the Bureau will be limited in its ability to meet milestones, mitigate major risks, and ensure that requirements are appropriately identified. While the large-scale technological changes for the 2020 Decennial Census introduce great potential for efficiency and effectiveness gains, they also introduce many information security challenges. For example, the introduction of an option for households to respond using the Internet puts respondents more at risk for phishing attacks (requests for information from authentic-looking, but fake, e-mails and websites). The Bureau had begun efforts to address a number of these challenges; as it begins implementing this decennial census' design, continued focus on these considerable security challenges will be critical. Looking forward, there is uncertainty as to whether the Census Bureau will be ready for the 2018 end-to-end test, set to begin in August 2017. 
GAO has ongoing work for this committee that is evaluating the significant challenges the Bureau faces in developing, testing, integrating, and securing systems prior to the 2018 test. For example, of the 50 systems to be included in the end-to-end test, half of them are to be delivered after the start of the test or lack a firm delivery date (see figure). In addition, key dates for the integration of the systems have not yet been defined. Given the short window of time before the test is to begin, it is important that the Bureau continue to focus its attention on implementing and securing the data collection systems that are to collect and store the personal information of millions of American people. In its August report, GAO made eight recommendations to the Department of Commerce. The recommendations addressed, among other things, deficiencies in the Bureau's management of interdependencies related to schedule, risk, and requirements. The department agreed with all eight recommendations and indicated that it would be taking actions to address them. |
IRIS was created in 1985 to help EPA develop consensus opinions within the agency about the health effects of chronic exposure to chemicals. Its importance has increased over time as EPA program offices and the states have increasingly relied on IRIS information in making environmental protection decisions. Currently, the IRIS database contains assessments of more than 540 chemicals. According to EPA, national and international users access the IRIS database approximately 9 million times a year. EPA’s Assistant Administrator for the Office of Research and Development has described IRIS as the premier national and international source for qualitative and quantitative chemical risk information; other federal agencies have noted that IRIS data are widely accepted by all levels of government across the country for application of public health policy, providing benefits such as uniform, standardized methods for toxicology testing and risk assessment, as well as uniform toxicity values. Similarly, a private-sector risk assessment expert has stated that the IRIS database has become the most important source of regulatory toxicity values for use across EPA’s programs and is also widely used across state programs and internationally. Historically and currently, the focus of IRIS toxicity assessments has been on the potential health effects of long-term (chronic) exposure to chemicals. According to OMB, EPA is the only federal agency that develops qualitative and quantitative assessments of both cancer and noncancer risks of exposure to chemicals, and EPA does so largely under the IRIS program. Other federal agencies develop quantitative estimates of noncancer effects or qualitative cancer assessments of exposure to chemicals in the environment. While these latter assessments provide information on the effects of long-term exposures to chemicals, they provide only qualitative assessments of cancer risks (known human carcinogen, likely human carcinogen, etc.) 
and not quantitative estimates of cancer potency, which are required to conduct quantitative risk assessments. EPA’s IRIS assessment process has undergone a number of formal and informal changes during the past several years. While the process used to develop IRIS chemical assessments includes numerous individual steps or activities, major assessment steps include (1) a review of the scientific literature; (2) preparation of a draft IRIS assessment; (3) internal EPA reviews of draft assessments; (4) two OMB/interagency reviews, managed by OMB, that provide input from OMB as well as from other federal agencies, including those that may be affected by the IRIS assessments if they lead to regulatory or other actions; (5) an independent peer review conducted by a panel of experts; and (6) the completion of a final assessment that is posted to the IRIS Web site. Unlike many other EPA programs that have statutory requirements, including specific time frames for completing mandated tasks, the IRIS program is not subject to statutory requirements or time frames. In contrast, the Department of Health and Human Services’ Agency for Toxic Substances and Disease Registry (ATSDR), which develops quantitative estimates of the noncancer effects of exposures to chemicals in the environment, is statutorily required to complete its assessments within certain time frames. The IRIS database is at serious risk of becoming obsolete because the agency has not been able to routinely complete timely, credible assessments or decrease a backlog of 70 ongoing assessments. Specifically, although EPA has taken important steps to improve the IRIS program and productivity since 2000 and has developed a number of draft assessments for external review, its efforts to finalize the assessments have been thwarted by a combination of factors including the imposition of external requirements, the growing complexity and scope of risk assessments, and certain EPA management decisions. 
In addition, the changes to the IRIS assessment process that EPA was considering at the time of our review would have added to the already unacceptable level of delays in completing IRIS assessments and further limited the credibility of the assessments. EPA has taken a number of steps to help ensure that IRIS contains current, credible chemical risk information; to address its backlog of ongoing assessments; and to respond to new OMB requirements. However, to date, these changes—including increasing funding, centralizing staff conducting assessments, and revising the assessment process—have not enabled EPA to routinely complete credible IRIS assessments or decrease the backlog. That is, although EPA sent 32 draft assessments for external review in fiscal years 2006 and 2007, the agency finalized only 4 IRIS assessments during this time (see fig. 2). Several key factors have contributed to EPA’s inability to achieve a level of productivity that is needed to sustain the IRIS program and database: new OMB-required reviews of IRIS assessments by OMB and other federal agencies; the growing complexity and scope of risk assessments; certain EPA management decisions and issues, including delaying completion of some assessments to await new research or to develop enhanced analyses of uncertainty in the assessments; and the compounding effect of delays. Regarding the last factor, even a single delay in the assessment process can lead to the need to essentially repeat the assessment process to take into account changes in science and methodologies. A variety of delays have impacted the majority of the 70 assessments being conducted as of December 2007—48 had been in process for more than 5 years, and 12 of those for more than 9 years. These time frames are problematic because of the substantial rework such cases often require to take into account changing science and methodologies before they can be completed. 
For example, EPA’s assessment of the cancer risks stemming from exposure to naphthalene—a chemical used in jet fuel and in the production of widely used commercial products such as moth balls, dyes, insecticides, and plasticizers—was nearing completion in 2006. However, prior to finalizing this assessment, which had been ongoing for over 4 years, EPA decided that the existing noncancer assessment had become outdated and essentially restarted the assessment to include both cancer and noncancer effects. As a result, 6 years after the naphthalene assessment began, it is now back at the drafting stage. The assessment now will need to reflect relevant research completed since the draft underwent initial external peer review in 2004, and it will have to undergo all of the IRIS assessment steps again, including the additional internal and external reviews that are now required (see app. I). Further, because EPA staff time continues to be dedicated to completing assessments in the backlog, EPA’s ability to both keep the more than 540 existing assessments up to date and initiate new assessments is limited. Importantly, EPA program offices and state and local entities have requested assessments of hundreds of chemicals not yet in IRIS, and EPA data as of 2003 indicated that the assessments of 287 chemicals in the database may be outdated—that is, new information could change the risk estimates currently in IRIS or enable EPA to develop additional risk estimates for chemicals in the database (for example, developing a cancer potency estimate for assessments with only noncancer estimates). In addition, because EPA’s 2003 data are now more than 4 years old, it is likely that additional assessments have become outdated. The consequences of not having current, credible IRIS information can be significant.
EPA’s inability to complete its assessment of formaldehyde, which the agency initiated in 1997 to update information already in IRIS on the chemical, has had a significant impact on EPA’s air toxics program. Although in 2003 and 2004, the National Cancer Institute and the National Institute for Occupational Safety and Health (NIOSH) had released updates to major epidemiological studies of industrial workers that showed a relationship between formaldehyde and certain cancers, including leukemia, EPA did not move forward to finalize an IRIS assessment incorporating these important data. Instead, EPA opted to await the results of another update to the National Cancer Institute study. While this additional research was originally estimated to take, at most, 18 months to complete, at the time of our report (more than 3 years later) the update was not complete. In the absence of this information, EPA’s Office of Air and Radiation decided to use risk information developed by an industry-funded organization—the CIIT Centers for Health Research—for a national emissions standard. This decision was a factor in EPA exempting certain facilities with formaldehyde emissions from the national emissions standard. The CIIT risk estimate indicates a potency about 2,400 times lower than the estimate in IRIS that was being re-evaluated and that did not yet consider the 2003 and 2004 National Cancer Institute and NIOSH epidemiological studies. According to an EPA official, an IRIS cancer risk factor based on the 2003 and 2004 National Cancer Institute and NIOSH studies would likely be close to the current IRIS assessment, which EPA has been re-evaluating since 1997. The discrepancy between these two risk estimates raises concerns about whether the public health is adequately protected in the absence of current IRIS information. For example, in 1999, EPA published a national assessment that provided information about the types and amounts of air toxics to which people are exposed.
The assessment, which also used the CIIT risk estimate for formaldehyde, concluded, for example, that formaldehyde did not contribute significantly to the overall cancer risk in the state of New Jersey. However, in carrying out its own risk assessment on formaldehyde, the New Jersey Department of Environmental Protection opted to use the risk information that is currently in IRIS (dating back to 1991) and found that the contribution from formaldehyde to overall cancer risk in New Jersey is quite significant, second only to diesel particulate matter. (App. I provides additional information on EPA’s IRIS assessment for formaldehyde.) One of the factors that has contributed to EPA’s inability to complete assessments in a timely manner—the new OMB-directed OMB/interagency review process—also limits the credibility of the assessments because it lacks transparency. Specifically, neither the comments nor the changes EPA makes to the scientific IRIS assessments in response to the comments made by OMB and other federal agencies, including those whose workload and resource levels could be affected by the assessments, are disclosed. In addition, the OMB/interagency reviews have hindered EPA’s ability to independently manage its IRIS assessments. For example, without communicating its rationale for doing so, OMB directed EPA to terminate five IRIS assessments that for the first time addressed acute, rather than chronic, exposure—even though EPA initiated this type of assessment to help it implement the Clean Air Act. For our March 2008 report, we reviewed the additional assessment process changes EPA was planning and concluded that they would likely exacerbate delays in completing IRIS assessments and further affect their credibility.
Specifically, despite the OMB/interagency review process that OMB required EPA to incorporate into the IRIS assessment process in 2005, certain federal agencies continued to believe they should have greater and more formal roles in EPA’s development of IRIS assessments. Consequently, EPA had been working for several years to establish a formal IRIS assessment process that would further expand the role of federal agencies in the process—including agencies such as DOD, which could be affected by the outcome of IRIS assessments. For example, some of these agencies and their contractors could face increased cleanup costs and other legal liabilities if EPA issued an IRIS assessment for a chemical that resulted in a decision to regulate the chemical to protect the public. In addition, the agencies could be required to, for example, redesign systems and processes to eliminate hazardous materials; develop material substitutes; and improve personal protective clothing, equipment, and procedures. Under the changes that EPA was planning at the time of our review, these potentially affected agencies would have the opportunity to be involved, or provide some form of input, at almost every step of EPA’s IRIS assessment process. Most significantly, the changes would have provided federal agencies, including those facing potential regulatory liability, with several opportunities during the IRIS assessment process to subject particular chemicals of interest to additional process steps. 
These additional process steps, which would have lengthened assessment times considerably, include the following:

- giving federal agencies and the public 45 days to identify additional information on a chemical for EPA’s consideration in its assessment or to correct any errors on an additional assessment draft that would provide qualitative information;
- giving potentially affected federal agencies 30 days to review the public comments EPA received and initiate a meeting with EPA if they want to discuss a particular set of comments;
- allowing potentially affected federal agencies to have assessments suspended for up to 18 months to fill a data gap or eliminate an uncertainty factor that EPA plans to use in its assessment; and
- allowing other federal agencies to weigh in on (1) the level of independent peer review that would be sought (that is, whether the peer reviews would be conducted by EPA Science Advisory Board panels, National Academies’ panels, or panels organized by an EPA contractor); (2) the areas of scientific expertise needed on the panel; and (3) the scope of the peer reviews and the specific issues they would address.

EPA estimated that assessments that undergo these additional process steps would take up to 6 years to complete. While it is important to ensure that assessments consider the best science, EPA has acknowledged that waiting for new data can result in substantial harm to human health, safety, and the environment. Further, although coordination with other federal agencies about IRIS assessments could enhance their quality, increasing the role of agencies that may be affected by IRIS assessments in the process itself reduces the credibility of the assessments if that expanded role is not transparent.
In this regard, while EPA’s proposed changes would have allowed for including federal agencies’ comments in the public record, the implementation of this proposal was delayed for a year, in part, because of OMB’s view that agencies’ comments about IRIS assessments represent internal executive branch communications that may not be made public—a view that is inconsistent with the principle of sound science, which relies on, among other things, transparency. (App. II and III provide flow charts of the IRIS process that was in place at the time of our review and EPA’s draft proposed process being considered at the time of our review, respectively). To address the productivity and credibility issues we identified, we recommended that the EPA Administrator require the Office of Research and Development to re-evaluate its draft proposed changes to the IRIS assessment process in light of the issues raised in our report and ensure that any revised process, among other things, clearly defines and documents an IRIS assessment process that will enable the agency to develop the timely chemical risk information it needs to effectively conduct its mission. One of our recommendations—that EPA provide at least 2 years’ notice of IRIS assessments that are planned—would, among other things, provide an efficient alternative to suspending assessments while waiting for new research because interested parties would have the opportunity to conduct research before assessments are started. In addition, we recommended that the EPA Administrator take steps to better ensure that EPA has the ability to develop transparent, credible IRIS assessments—an ability that relies in large part on EPA’s independence in conducting these important assessments. 
Actions that are key to this ability include ensuring that EPA can (1) determine the types of assessments it needs to support EPA programs, (2) define the appropriate role of external federal agencies in EPA’s IRIS assessment process, and (3) manage an interagency review process in a manner that enhances the quality, transparency, timeliness, and credibility of IRIS assessments. In its February 21, 2008, letter providing comments on our draft report, EPA said it would consider each of our recommendations in light of the new IRIS process the agency was developing. On April 10, 2008, EPA issued a revised IRIS assessment process, effective immediately. Overall, EPA’s revised process is not responsive to the recommendations made in our March 2008 report—it is largely the same as the draft proposed process we evaluated in that report (see app. III and IV). Moreover, changes EPA did incorporate into the final process are likely to further exacerbate the productivity and credibility issues we identified in our report. We recommended that EPA ensure that, among other things, any revised process clearly defines and documents a streamlined IRIS assessment process that can be conducted within time frames that minimize the need for wasteful rework. As discussed in our report, when assessments take longer than 2 years, they can become subject to substantial delays stemming from the need to redo key analyses to take into account changing science and assessment methodologies. However, EPA’s revised process institutionalizes a process that the agency estimates will take up to 6 years to complete. Further, the estimated time frames do not factor in the time for peer reviews conducted by the National Academies, which can take 2 years to plan and complete. EPA typically uses reviews by the National Academies for highly controversial chemicals or complex assessments.
Therefore, assessments of key chemicals of concern to public health that are reviewed by the National Academies are likely to take at least 8 years to complete. These time frames must also be considered in light of OMB’s view that health assessment values in IRIS are out of date if they are more than 10 years old and if new scientific information exists that could change the health assessment values. Thus, EPA’s new process institutionalizes time frames that could essentially require the agency to start assessment updates as soon as 2 years after assessments are finalized in order to keep the IRIS database current. Such time frames are not consistent with our recommendation that EPA develop, clearly define, and document a streamlined IRIS process that can be conducted within time frames that minimize the need for wasteful rework. Further, the agency would need a significant increase in resources to support such an assessment cycle. In addition, EPA had previously emphasized that, in suspending assessments to allow agencies to fill in data gaps, it would allow no more than 18 months to complete the studies and have them peer reviewed. However, under the new process, EPA states only that it “generally” will allow no more than 18 months to complete the studies and have them peer reviewed. As we concluded in our report, we believe the ability to suspend assessments for up to 18 months would add to the already unacceptable level of delays in completing IRIS assessments. Further, we and several agency officials with whom we spoke believe that the time needed to plan, conduct, and complete research that would address significant data gaps, and have it peer reviewed, would likely exceed 18 months. Therefore, the less rigid time frame EPA included in its new process could result in additional delays. Finally, the new process expands the scope of one of the additional steps that initially was to apply only to chemicals of particular interest to federal agencies.
Specifically, under the draft process we reviewed, EPA would have provided an additional review and comment opportunity for federal agencies and the public for what EPA officials said would be a small group of chemicals. However, under EPA’s new process, this additional step has been added to the assessment process for all chemicals and, therefore, will add time to the already lengthy assessments of all chemicals. We also recommended that the EPA Administrator take steps to better ensure that EPA has the ability to develop transparent, credible IRIS assessments—an ability that relies in large part on EPA’s independence in conducting these important assessments. Contrary to our recommendation, EPA has formalized a revised IRIS process that is selectively, rather than fully, transparent, limiting the credibility of the assessments. Specifically, while the draft process we reviewed provided that comments on IRIS assessments from OMB and other federal agencies would be part of the public record, under the recently implemented process, comments from federal agencies are expressly defined as “deliberative” and will not be included in the public record. Given the importance and sensitivity of IRIS assessments, we believe it is critical that input from all parties, particularly agencies that may be affected by the outcome of IRIS assessments, be publicly available. However, under EPA’s new process, input from some IRIS assessment reviewers—representatives of federal agencies, including those facing potential regulatory liability, and private stakeholders associated with these agencies—will continue to receive less public scrutiny than comments from all others. In commenting on a draft of our March 2008 report, and in a recent congressional hearing, EPA’s Assistant Administrator, Office of Research and Development, stated that the IRIS process is transparent because all final IRIS assessments must undergo public and external peer review. 
However, as we stated in our report, the presence of transparency at a later stage of IRIS assessment development does not explain or excuse its absence earlier. Under the new process, neither peer reviewers nor the public is privy to the changes EPA makes in response to the comments OMB and other federal agencies provide to EPA at several stages in the assessment process—changes to draft assessments or to the questions EPA poses to the peer review panels. Importantly, the first IRIS assessment draft that is released to peer reviewers and to the public includes the undisclosed input from federal agencies potentially subject to regulation and therefore with an interest in minimizing the impacts of IRIS assessments on their budgets and operations. In addition, EPA’s revised process does not provide EPA with sufficient independence in developing IRIS assessments to ensure they are credible and transparent. We made several recommendations aimed at restoring EPA’s independence. For example, we recommended that the EPA Administrator ensure that EPA has the ability to, among other things, define the appropriate role of external federal agencies in the IRIS assessment process and determine when interagency issues have been appropriately addressed. However, under the newly implemented IRIS assessment process, OMB continues to inform EPA when EPA has adequately addressed OMB’s and interagency comments. This determination must be made both before EPA can provide draft assessments to external peer reviewers and to the public and before EPA can finalize and post assessments on the IRIS database. While EPA officials state that ultimately IRIS assessments reflect EPA decisions, the new process does not support this assertion given the clearances EPA needs to receive from OMB to move forward at key stages. In fact, we believe the new IRIS assessment process may elevate the goal of reaching interagency agreement above achieving IRIS program objectives.
Further, as discussed above, because the negotiations over OMB/interagency comments are not disclosed, whether EPA is entirely responsible for the content of information on IRIS is open to question. In our report, we also emphasized the importance of ensuring that IRIS assessments be based solely on science issues and not policy concerns. However, under the new IRIS assessment process, EPA has further introduced policy considerations into the IRIS assessment process. That is, the newly implemented IRIS assessment process broadens EPA’s characterization of IRIS assessments from “the agency’s scientific positions on human health effects that may result from exposure to environmental contaminants” to “the agency’s science and science policy positions” on such effects. EPA’s new, broader characterization of IRIS raises concerns about the agency’s stated intent to ensure that scientific assessments are appropriately based on the best available science and that they are not inappropriately impacted by policy issues and considerations. For example, in discussing science and science policy at a recent Senate hearing, EPA’s Assistant Administrator of Research and Development described science policy considerations as including decisions about filling knowledge gaps (e.g., whether and to what extent to use default assumptions) and assessing weight-of-the-evidence approaches to make scientific inferences or assumptions. We believe that these are scientific decisions that should reflect the best judgment of EPA scientists who are evaluating the data, using the detailed risk assessment guidance the agency has developed for such purposes. We have concerns about the manner and extent to which other federal agencies, including those that may be affected by the outcome of assessments, are involved in these decisions as well as the lack of transparency of their input. 
As we highlighted earlier, under the National Academies’ risk assessment and risk management paradigm, policy considerations are relevant in the risk management phase—which occurs after the risk assessment phase that encompasses IRIS assessments. The National Academies recently addressed this issue as follows: “The committee believes that risk assessors and risk managers should talk with each other; that is, a ‘conceptual distinction’ does not mean establishing a wall between risk assessors and risk managers. Indeed they should have constant interaction. However, the dialogue should not bias or otherwise color the risk assessment conducted, and the activities should remain distinct; that is, risk assessors should not be performing risk management activities.” The new IRIS assessment process that EPA implemented in April 2008 will not allow the agency to routinely and timely complete credible assessments. In fact, it will exacerbate the problems we identified in our March 2008 report and sought to address with our recommendations—all of which were aimed at preserving the viability of this critical database, which is integral to EPA’s mission of protecting the public and the environment from exposure to toxic chemicals. Specifically, under the new process, assessment time frames will be significantly lengthened, and the lack of transparency will further limit the credibility of the assessments because input from OMB and other agencies at all stages of the IRIS assessment process is now expressly defined as deliberative and therefore not subject to public disclosure. The position of the Assistant Administrator, Office of Research and Development, that the IRIS process is transparent because all final IRIS assessments must undergo public and external peer review is unconvincing. 
Transparency at a later stage of the IRIS assessment process—after OMB and other federal agencies have had multiple opportunities to influence the content of the assessment without any disclosure of their input—does not compensate for its absence earlier. We continue to believe that to effectively maintain IRIS, EPA must streamline its lengthy assessment process and adopt transparency practices that provide assurance that IRIS assessments are appropriately based on the best available science and that they are not inappropriately biased by policy issues and considerations. As discussed in our April 29, 2008, testimony before the Senate Environment and Public Works Committee, we believe that the Congress should consider requiring EPA to suspend implementation of its new IRIS assessment process and develop a streamlined process that is transparent and otherwise responsive to our recommendations aimed at improving the timeliness and credibility of IRIS assessments. For example, suspending assessments to obtain additional research is inefficient; alternatively, with longer-term planning, EPA could provide agencies and the public with more advance notice of assessments, enabling them to complete relevant research before IRIS assessments are started. In addition, as discussed in our April 2008 testimony, the Congress should consider requiring EPA to obtain and be responsive to input from the Congress and the public before finalizing a revised IRIS assessment process. We note that while EPA and OMB initially had planned for EPA to release a draft revised IRIS assessment process to the public, hold a public meeting to discuss EPA’s proposed changes, and seek and incorporate public input before finalizing the process, EPA released its new assessment process without obtaining public input and made it effective immediately.
This was inconsistent with assertions made in OMB’s letter commenting on our draft report, which emphasized that EPA had not completed the development of the IRIS assessment process and stated: “Indeed, the process will not be complete until EPA circulates its draft to the public for comments and then releases a final product that is responsive to those comments.” Finally, if EPA is not able to take the steps we have recommended to effectively maintain this critical program, other approaches, including statutory requirements, may need to be explored. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For further information about this testimony, please contact John B. Stephenson at (202) 512-3841 or stephensonj@gao.gov. Contact points for our Congressional Relations and Public Affairs Offices may be found on the last page of this statement. Contributors to this testimony include Christine Fishkin (Assistant Director), Laura Gatz, Richard P. Johnson, and Nancy Crothers. Some key IRIS assessments have been in progress for a number of years, in part because of delays stemming from one or more of the key factors we identified that have hindered EPA’s productivity. Examples include the following: Naphthalene. EPA started the IRIS assessment of cancer risks stemming from the inhalation of naphthalene in 2002. Naphthalene is used in jet fuel and in the production of widely used commercial products such as moth balls, dyes, insecticides, and plasticizers.
According to a presentation delivered at the 2007 annual meeting of the Society for Risk Analysis by an Army Corps of Engineers toxicologist, “The changing naphthalene regulatory environment includes a draft EPA risk assessment that if/when finalized, will change naphthalene’s status from ‘possible’ to ‘likely’ human carcinogen.” Thus, according to this presentation, one potential impact of this IRIS assessment on DOD is that DOD would need to provide many employees exposed to naphthalene with equipment measuring their exposure to the chemical. In addition, because many military bases are contaminated with naphthalene, a component of jet fuel (approximately 1 percent to 3 percent) used by all DOD services, DOD could face extensive cleanup costs. By 2004, 2 years after starting the assessment, EPA had drafted a chemical assessment that had completed internal peer reviews and was about to be sent to an external peer review committee. Once it returned from external review, the next step, at that time, would have been a formal review by EPA’s IRIS Agency Review Committee. If approved, the assessment would have been completed and released. However, in part because of concerns raised by DOD, OMB asked to review the assessment and conducted an interagency review of the draft. In their 2004 reviews of the draft IRIS assessment, both OMB and DOD raised a number of concerns about the assessment and suggested to EPA that it be suspended until additional research could be completed to address what they considered to be significant uncertainties associated with the assessment. Although all of the issues raised by OMB and DOD were not resolved, EPA continued with its assessment by submitting the draft for external peer review, which was completed in September 2004. 
However, according to EPA, OMB continued to object to the draft IRIS assessment and directed EPA to convene an additional expert review panel on genotoxicity to obtain recommendations about short-term tests that OMB thought could be done quickly. According to EPA, this added 6 months to the process, and the panel, which met in April 2005, concluded that the research that OMB was proposing could not be conducted in the short term. Nonetheless, EPA officials said that the second expert panel review did not eliminate OMB’s concerns regarding the assessment, a situation they described as a stalemate. In September 2006, however, EPA decided to proceed with developing the assessment. By this time, the naphthalene assessment had been in progress for over 4 years; EPA decided that the IRIS noncancer assessment, issued in 1998, was outdated and needed to be revisited. Thus, EPA expanded the IRIS naphthalene assessment to include both noncancer and cancer assessments. As a result, 6 years after the naphthalene assessment began, it is now back at the drafting stage. The assessment now will need to reflect relevant research completed since the draft underwent initial external peer review in 2004, and it will have to undergo all of the IRIS assessment steps again, including additional internal and external reviews that are now required. This series of delays has limited EPA’s ability to conduct its mission. For example, the Office of Air and Radiation has identified the naphthalene assessment as one of its highest-priority needs for its air toxics program. In addition, the Office of Solid Waste and Emergency Response considers the naphthalene assessment a high priority for the Superfund program—naphthalene has been found in at least 654 of Superfund’s current or former National Priorities List sites.
Although EPA currently estimates that it will complete the assessment in 2009, meeting this revised estimate will be challenging, given all of the steps that are yet to be completed and the extensive external scrutiny to which it will continue to be subjected. Royal Demolition Explosive. This chemical, also called RDX or hexahydro-1,3,5-trinitrotriazine, is a highly powerful explosive used by the U.S. military in thousands of munitions. Currently classified by EPA as a possible human carcinogen, this chemical is known to leach from soil to groundwater. Royal Demolition Explosive can cause seizures in humans and animals when large amounts are inhaled or ingested, but the effects of long-term, low-level exposure on the nervous system are unknown. As is the case with naphthalene, the IRIS assessment could potentially require DOD to undertake a number of actions, including steps to protect its employees from the effects of this chemical and to clean up many contaminated sites. Although EPA started an IRIS assessment of Royal Demolition Explosive in 2000, it has made minimal progress on the assessment because EPA agreed to a request by DOD to wait for the results of DOD-sponsored research on this chemical. In 2007, EPA began to actively work on this assessment, although some of the DOD-sponsored research is still outstanding. Formaldehyde. EPA began an IRIS assessment of formaldehyde in 1997 because the existing assessment was determined to be outdated. Formaldehyde is a colorless, flammable, strong-smelling gas used to manufacture building materials, such as pressed wood products, and used in many household products, including paper, pharmaceuticals, and leather goods. While EPA currently classifies formaldehyde as a probable human carcinogen, the International Agency for Research on Cancer (IARC), part of the World Health Organization, classifies formaldehyde as a known human carcinogen. 
Since 1986, studies of industrial workers have suggested that formaldehyde exposure is associated with nasopharyngeal cancer, and possibly with leukemia. For example, in 2003 and 2004, the National Cancer Institute (NCI) and the National Institute for Occupational Safety and Health (NIOSH) released epidemiological studies following up on earlier studies tracking about 26,000 and 11,000 industrial workers, respectively, exposed to formaldehyde; the updates showed exposure to formaldehyde might also cause leukemia in humans, in addition to the cancer types previously identified. According to NCI officials, the key findings in their follow-up study were an increase in leukemia deaths and, more significantly, an exposure/response relationship between formaldehyde and leukemia—as exposure increased, the incidence of leukemia also rose. As with the earlier study, NCI found more cases of a rare form of cancer, nasopharyngeal cancer, than would usually be expected. The studies from NCI and NIOSH were published in 2003 and 2004, around the time that EPA was still drafting its IRIS assessment. In November 2004, the Chairman of the Senate Environment and Public Works Committee requested that EPA delay completion of its IRIS assessment until an update to the just-released NCI study could be conducted, indicating that the effort would take, at most, 18 months. EPA agreed to wait—and more than 3 years later, the NCI update is not yet complete. As of December 2007, NCI estimates that the study will be completed in two stages, one in mid-2008 and the second one later that year. An NCI official said that the additional leukemia deaths identified in the update provide “greater power” to detect associations between exposure to formaldehyde and cancer. EPA’s inability to complete in a timely manner the IRIS assessment it started more than 10 years ago has had a significant impact on EPA’s air toxics program. 
Specifically, when EPA promulgated a national emissions standard for hazardous air pollutants covering facilities in the plywood and composite wood industries in 2004, EPA’s Office of Air and Radiation took the unusual step of not using the existing IRIS estimate but rather decided to use a cancer risk estimate developed by an industry-funded organization, the CIIT Centers for Health Research (formerly, the Chemical Industry Institute of Toxicology) that had been used by the Canadian health protection agency. The IRIS cancer risk factor had been subject to criticism because it was last revised in 1991 and was based on data from the 1980s. In its final rule, EPA stated that “the dose-response value in IRIS is based on a 1987 study, and no longer represents the best available science in the peer-reviewed literature.” The CIIT quantitative cancer risk estimate that EPA used in its health risk assessment in the plywood and composite wood national emissions standard indicates a potency about 2,400 times lower than the estimate in IRIS that was being re-evaluated and that did not yet consider the 2003 and 2004 NCI and NIOSH epidemiological studies. According to an EPA official, an IRIS cancer risk factor based on the 2003 and 2004 NCI and NIOSH studies would likely be close to the current IRIS assessment, which EPA has been attempting to update since 1997. The decision to use the CIIT assessment in the plywood national emissions standard was controversial, and officials in EPA’s National Center for Environmental Assessment said the center identified numerous problems with the CIIT estimate. Nonetheless, the Office of Air and Radiation used the CIIT value, and that decision was a factor in EPA exempting certain facilities with formaldehyde emissions from the national emissions standard. 
In June 2007, a federal appellate court struck down the rule, holding that EPA’s decision to exempt certain facilities that EPA asserted presented a low health risk exceeded the agency’s authority under the Clean Air Act. Further, the continued delays of the IRIS assessment of formaldehyde— currently estimated to be completed in 2010 but after almost 11 years still in the draft development stage—will impact the quality of other EPA regulatory actions, including other air toxics rules and requirements. Trichloroethylene. Also known as TCE, this chemical is a solvent widely used as a degreasing agent in industrial and manufacturing settings; it is a common environmental contaminant in air, soil, surface water, and groundwater. TCE has been linked to cancer, including childhood cancer, and other significant health hazards, such as birth defects. TCE is the most frequently reported organic contaminant in groundwater, and contaminated drinking water has been found at Camp Lejeune, a large Marine Corps base in North Carolina. TCE has also been found at Superfund sites and at many industrial and government facilities, including aircraft and spacecraft manufacturing operations. In 1995, the International Agency for Research on Cancer classified TCE as a probable human carcinogen, and in 2000, the Department of Health and Human Services’ National Toxicology Program concluded that it is reasonably anticipated to be a human carcinogen. Because of questions raised by peer reviewers about the IRIS cancer assessment for TCE, EPA withdrew it from IRIS in 1989 but did not initiate a new TCE cancer assessment until 1998. In 2001, EPA issued a draft IRIS assessment for TCE that proposed a range of toxicity values indicating a higher potency than in the prior IRIS values and characterizing TCE as “highly likely to produce cancer in humans.” The draft assessment, which became controversial, was peer reviewed by EPA’s Scientific Advisory Board and released for public comment. 
A number of scientific issues were raised during the course of these reviews, including how EPA had applied emerging risk assessment methods—such as assessing cumulative effects (of TCE and its metabolites) and using a physiologically based pharmacokinetic model— and the uncertainty associated with the new methods themselves. To help address these issues, EPA, DOD, DOE, and NASA sponsored a National Academies review to provide guidance. The National Academies report, which was issued in 2006, concluded that the weight of evidence of cancer and other health risks from TCE exposure had strengthened since 2001 and recommended that the risk assessment be finalized with currently available data so that risk management decisions could be made expeditiously. The report specifically noted that while some additional information would allow for more precise estimates of risk, this information was not necessary for developing a credible risk assessment. Nonetheless, 10 years after EPA started its IRIS assessment, the TCE assessment is back at the draft development stage. EPA estimates this assessment will be finalized in 2010. More in line with the National Academies’ recommendation to act expeditiously, five senators introduced a bill in August 2007 that, among other things, would require EPA to both establish IRIS values for TCE and issue final drinking water standards for this contaminant within 18 months. Tetrachloroethylene. EPA started an IRIS assessment of tetrachloroethylene—also called perchloroethylene or “perc”—in 1998. Tetrachloroethylene is a manufactured chemical widely used for dry cleaning of fabrics, metal degreasing, and making some consumer products and other chemicals. Tetrachloroethylene is a widespread groundwater contaminant, and the Department of Health and Human Services’ National Toxicology Program has determined that it is reasonably anticipated to be a carcinogen. 
The IRIS database currently contains a 1988 noncancer assessment based on oral exposure that will be updated in the ongoing assessment. Importantly, the ongoing assessment will also provide a noncancer inhalation risk and a cancer assessment. The IRIS agency review of the draft assessment was completed in February 2005, the draft assessment was sent to OMB for OMB/interagency review in September 2005, and the OMB/interagency review was completed in March 2006. EPA decided to have the next step, external peer review, conducted by the National Academies—the peer review choice reserved for chemical assessments that are particularly significant or controversial. EPA contracted with the National Academies for a review by an expert panel, and the review was scheduled to start in June 2006 and be completed in 15 months. However, as of December 2007, the draft assessment had not yet been provided to the National Academies. After verbally agreeing with both the noncancer and cancer assessments following briefings on the assessments, the Assistant Administrator, Office of Research and Development, subsequently requested that additional uncertainty analyses—including some quantitative analyses—be conducted and included in the assessment before the draft was released to the National Academies for peer review. As discussed in our March 2008 report on IRIS (GAO-08-440), quantitative uncertainty analysis is a risk assessment tool that is currently being developed, and although the agency is working on developing policies and procedures for uncertainty analysis, such guidance currently does not exist. The draft tetrachloroethylene assessment has been delayed since early 2006 as EPA staff have gone back and forth with the Assistant Administrator trying to reach agreement on key issues such as whether a linear or nonlinear model is most appropriate for the cancer assessment and how uncertainty should be qualitatively and quantitatively characterized. 
EPA officials and staff noted that some of the most experienced staff are being used for these efforts, limiting their ability to work on other IRIS assessments. In addition, the significant delay has impacted the planned National Academies peer review because the current contract, which has already been extended once, cannot be extended beyond December 2008. The peer review was initially estimated to take 15 months. As a result, a new contract and the appointment of another panel may be required. Dioxin. The dioxin assessment is an example of an IRIS assessment that has been, and will likely continue to be, a political as well as a scientific issue. Often the byproducts of combustion and other industrial processes, complex mixtures of dioxins enter the food chain and human diet through emissions into the air that settle on soil, plants, and water. EPA’s initial dioxin assessment, published in 1985, focused on the dioxin TCDD (2,3,7,8-tetrachlorodibenzo-p-dioxin) because animal studies in the 1970s showed it to be the most potent cancer-causing chemical studied to date. Several years later, EPA decided to conduct a reassessment of dioxin because of major advances that had occurred in the scientific understanding of dioxin toxicity and significant new studies on dioxins’ potential adverse health effects. Initially started in 1991, this assessment has involved repeated literature searches and peer reviews. For example, a draft of the updated assessment was reviewed by a scientific peer review panel in 1995, and three panels reviewed key segments of later versions of the draft in 1997 and 2000. In 2002, EPA officials said that the assessment would conclude that dioxin may adversely affect human health at lower exposure levels than had previously been thought and that most exposure to dioxins occurs from eating such American dietary staples as meats, fish, and dairy products, which contain minute traces of dioxins. 
These foods contain dioxins because animals eat plants and commercial feed and drink water contaminated with dioxins, which then accumulate in animals’ fatty tissue. It is clear that EPA’s dioxin risk assessment could have a potentially significant impact on consumers and on the food and agriculture industries. As EPA moved closer to finalizing the assessment, in 2003 the agency was directed in a congressional appropriations conference committee report to not issue the assessment until it had been reviewed by the National Academies. The National Academies provided EPA with a report in July 2006. In developing a response to the report, which the agency is currently doing, EPA must include new studies and risk assessment approaches that did not exist when the assessment was drafted. EPA officials said the assessment will be subject to the IRIS review process once its response to the National Academies’ report is drafted. As of 2008, EPA has been developing the dioxin assessment, which has potentially significant health implications for all Americans, for 17 years. 
[Figure: flowchart of EPA’s revised IRIS assessment process, with decision points including whether development of a draft qualitative assessment is mission critical, whether the chemical is mission critical, and whether there is interest in conducting research to close data gaps. Darker shaded boxes are additional steps, under EPA’s planned changes to its assessment process, and indicate steps where EPA has provided additional opportunity for input from potentially affected federal agencies for mission-critical chemicals. Lighter shaded boxes with dotted lines indicate steps where EPA has provided additional opportunity for input from potentially affected federal agencies for all chemicals. White boxes with heavy lines indicate steps where potentially affected federal agencies already had an opportunity for input.] 
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Environmental Protection Agency's (EPA) Integrated Risk Information System (IRIS) contains EPA's scientific position on the potential human health effects of exposure to more than 540 chemicals. Toxicity assessments in the IRIS database constitute the first two critical steps of the risk assessment process, which in turn, provides the foundation for risk management decisions. Thus, IRIS is a critical component of EPA's capacity to support scientifically sound environmental decisions, policies, and regulations. This testimony discusses (1) highlights of GAO's March 2008 report, Chemical Assessments: Low Productivity and New Interagency Review Process Limit the Usefulness and Credibility of EPA's Integrated Risk Information System, and (2) key aspects of EPA's revised IRIS assessment process, released on April 10, 2008. 
For the March 2008 report, GAO reviewed and analyzed EPA data and interviewed officials at relevant agencies, including the Office of Management and Budget (OMB). For this testimony, GAO supplemented the prior audit work with a review of EPA's revised IRIS assessment process announced on April 10, 2008. In its March 2008 report, GAO concluded that the IRIS database is at serious risk of becoming obsolete because EPA has not been able to routinely complete timely, credible assessments or decrease its backlog of 70 ongoing assessments--a total of 4 were completed in fiscal years 2006 and 2007. In addition, recent assessment process changes, as well as other changes EPA was considering at the time of GAO's review, further reduce the timeliness and credibility of IRIS assessments. EPA's efforts to finalize assessments have been thwarted by a combination of factors, including two new OMB-required reviews of IRIS assessments by OMB and other federal agencies; EPA management decisions, such as delaying some assessments to await new research; and the compounding effect of delays-even one delay can have a domino effect, requiring the process to essentially be repeated. The two new OMB/interagency reviews of draft assessments involve other federal agencies in EPA's IRIS assessment process in a manner that limits the credibility of IRIS assessments and hinders EPA's ability to manage them. For example, the OMB/interagency reviews lack transparency, and OMB required EPA to terminate five assessments EPA had initiated to help it implement the Clean Air Act. The changes to the IRIS assessment process that EPA was considering, but had not yet issued at the time of our review, would have added to the already unacceptable level of delays in completing IRIS assessments and further limited the credibility of the assessments. On April 10, 2008, EPA issued its revised IRIS assessment process, effective immediately. 
In its February 2008 comments on GAO's draft report, EPA said it would consider the report's recommendations, which were aimed at streamlining the process and better ensuring that EPA has the ability to develop transparent, credible assessments. However, EPA's new process is largely the same as the draft GAO evaluated, and some key changes are likely to further exacerbate the productivity and credibility concerns GAO identified. For example, while the draft process would have made comments from other federal agencies on IRIS assessments part of the public record, EPA's new process expressly defines such comments as "deliberative" and excludes them from the public record. GAO continues to believe that it is critical that input from all parties--particularly agencies that may be affected by the outcome of IRIS assessments--be publicly available. In addition, the estimated time frames under the new process, especially for chemicals of key concern, will likely perpetuate the cycle of delays to which the majority of ongoing assessments have been subject. Instead of significantly streamlining the process, which GAO recommended, EPA has institutionalized a process that from the outset is estimated to take 6 to 8 years to complete. This is problematic because of the substantial rework such cases often require to take into account changing science and methodologies. Since EPA's new process is not responsive to GAO's recommendations, the viability of this critical database has been further jeopardized. |
The Army started fielding the M1 Abrams tank (the Army’s main battle tank) in the early 1980s. Table 1 shows that, as of October 1995, there were about 7,600 M1s (in various configurations) in active and reserve Army and Marine Corps units and war reserve and prepositioned storage sites. Since the initial fielding, the M1 has undergone several modernization and enhancement upgrades. The M1 tank was not designed with a depot overhaul maintenance strategy. The maintenance strategy envisioned that maintenance would be performed at the organizational, direct support, and general support levels. Tank items that could not be repaired at those maintenance levels would be sent to the depot for repair. It was never planned for the entire tank to be completely overhauled, unless the tank was involved in an accident, suffered battle damage, or experienced some other catastrophic failure. How much maintenance would be performed and where it would be performed was influenced by the Department of Defense’s decision to change repair parts funding. Beginning in 1992, Army units had to use their operation and maintenance funds to buy repair parts and major components. Prior to this, units did not pay for major components, such as engines or transmissions. These items were “free issue” to units and there was little incentive to repair them. It was easier and cheaper to order a new engine or transmission from the supply system. Concerns have been raised that under the new system, commanders might defer maintenance to conserve unit operation and maintenance funds. We used the Status of Resources and Training System (SORTS) report to assess the readiness of M1 tanks. SORTS uses C-rating designations to denote degrees of readiness: C-1 is the highest readiness rating and C-5 is the lowest. 
Our analysis of the SORTS data as of March 1995 showed that over 94 percent of the units with M1 tanks reported that their tanks were C-3 (can accomplish the majority of the assigned wartime missions) or higher and that about 56 percent of the units reported that their tanks were C-1 (can accomplish all of the assigned wartime missions). Table 2 shows the distribution of C-ratings. Discussions with officials at three Army divisions that have 834 M1 tanks confirmed that they were not experiencing any major readiness-related maintenance or supply problems with their tanks. The officials were confident that they could deploy as required and carry out their assigned missions. The M1 tanks at NTC and the M1 tanks that were in prepositioned storage were also reported to be in a high state of readiness (as shown in table 3). NTC is authorized 122 M1 tanks (2 battalions) for training. These tanks are operated at a higher tempo than tanks in a typical tactical unit. For example, each tank averages about 2,300 miles a year, compared with the Army-wide average of about 630 miles a year. The NTC M1 tank fleet averages about 8,400 miles, compared with the Army-wide average of about 3,500 miles. As a result of the high operating tempo, the NTC M1 tanks have experienced many more maintenance problems than the tanks in the tactical units. However, according to NTC officials, the tanks have not missed any training days due to the maintenance problems. The officials said that they are always able to provide the training unit with the required number of tanks because only one of the two tank battalions is being used at a time. Another factor that has enabled NTC to meet its training requirements is that its tanks are cycled through the Anniston Army Depot under the Army’s inspection and repair only as needed (IRON) program. Under the IRON program, the tanks are inspected and those components and systems that do not meet the minimum operating characteristics are repaired or replaced. 
For example, if an engine does not meet its 1,350 horsepower characteristic, repairs are performed. Anniston officials told us that the NTC tanks generally need a lot of work when they arrive. They said, however, that the tanks’ condition is about what could be expected considering the tanks’ high usage rate. NTC officials and officials from a unit that was training at NTC at the time of our visit said that the condition of the tanks and the maintenance problems had not detracted from the realism of the training. Unit officials also said that the condition of the NTC tanks may not be as good as the condition of the tanks at their home station, but this added to the training realism because, in a wartime situation, tanks will have maintenance problems and personnel need to know how to deal with them. Some Army officials have expressed concern that the change in repair parts funding could lead unit commanders to delay maintenance because they may not have the funds to buy the needed repair parts. In prior reports, we stated that this is generally not the case. With few exceptions, the lack of funds to buy repair parts has not been a problem. In fact, we have reported that units often transfer funds intended for repair parts and maintenance to other operation and maintenance purposes. None of the officials we spoke with at three Army divisions cited the lack of operation and maintenance funds to buy repair parts as a problem. The commanders said that the shortages they experienced were not caused by a lack of repair parts funds, but rather by a lack of repair parts in the supply system. During our visits to the three divisions and NTC, we compiled a list of repair parts that were in short supply at the units and determined their supply position at the wholesale level inventory control points. The results of our analysis are shown in table 4. 
The problems being experienced with the M1 tank’s rear engine module are illustrative of the type of problems the Army faces with the other parts shortages. As of December 7, 1995, there were only eight serviceable M1 tank rear engine modules in the supply system, and all eight modules were in prepositioned war reserve. At the same time, there were backorders for 75 modules, of which 53 were high priority backorders. According to Army officials, there are sufficient rear engine modules in the supply system, but most of the modules are unserviceable because of a shortage of repair parts to fix the modules. The officials attribute the shortage of repair parts to (1) insufficient demand forecasting due to Bosnia operations, (2) implementation of an engine service life extension program before the needed repair parts were in the system, (3) worsening condition of returns from the field (the returned items require extensive repairs), and (4) a reduced number of qualified part suppliers in the industrial base. Some Army officials in the maintenance community believe that an M1A1 overhaul program is needed because of the fleet’s age and because there is no new tank production planned. The officials acknowledge that reported readiness rates are high. However, they are concerned that there may be latent deficiencies in the tanks that are not detected during readiness inspections and that these deficiencies could affect the tanks’ operational capabilities during a conflict. To address the potential latent deficiencies, the officials proposed a joint proof of principle test program with General Dynamics (the M1 manufacturer) to essentially overhaul the M1A1 tanks. The proposed joint effort is referred to as the AIM XXI program, and the officials believe that it would produce a better-than-original M1A1 tank that would enhance training, be more reliable, and have sustained go-to-war capability. 
Additionally, the officials believe that the program would reduce the tank’s life-cycle operating and support costs. Under the AIM XXI proof of principle test, the Army would bring 17 M1A1 tanks to the Anniston Army Depot and completely rebuild and update them with the latest modifications. The estimated cost of this effort is $559,000 per tank, about $9.5 million total. The Army Materiel Systems Analysis Activity (AMSAA) would compare certain operational characteristics, for a 9-month period, of the AIM XXI tanks with IRON tanks and with tanks that had not received any depot level maintenance. On the basis of evaluation of the test data, the Army would decide whether to expand the AIM XXI program. Appendix I shows the scope of work under these two programs. AIM XXI program officials estimate that over a 20-year life cycle, the program for the 17 tanks would result in operating and support cost savings of about $28.8 million, compared with the IRON program. However, if the investment cost differential is considered, the overall savings for 20 years is reduced to about $24.4 million, about $1.2 million a year. AMSAA officials who have responsibility for validating the estimated savings told us that they could not project cost savings for an AIM XXI program beyond the proof of principle because any projected savings would not be data driven. They said that they believe the AIM XXI program would result in some operating and support savings, but they were unsure how much. The officials also said that they would be in a better position to estimate the savings after the proof of principle test was completed and the operational characteristics of the AIM XXI, IRON, and nondepot maintenance tanks are compared and evaluated. AMSAA and depot officials also told us that the savings calculations were based on certain assumptions on tank mileage and repair and maintenance costs that may not be representative of the M1 tank fleet. 
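As a check on the figures cited above, the arithmetic is straightforward to reproduce. The sketch below uses only the per-tank cost, fleet size, and savings totals stated in the report (the $9.5 million and $1.2 million figures are rounded), and the implied investment cost differential is simply the gap between the gross and net savings estimates:

```python
# Reconstructing the AIM XXI proof-of-principle cost figures cited in the
# report. All dollar inputs are the report's own numbers; nothing here is
# new data, and the "implied" differential is inferred by subtraction.

aim_cost_per_tank = 559_000   # estimated AIM XXI rebuild cost per tank ($)
tanks = 17                    # M1A1 tanks in the proof-of-principle test
life_cycle_years = 20

# Total investment for the 17-tank test: the report rounds this to ~$9.5M.
total_investment = aim_cost_per_tank * tanks

# Program officials' estimate: $28.8M in 20-year operating and support
# savings versus IRON, falling to $24.4M once the investment cost
# differential is considered.
gross_os_savings = 28_800_000
net_savings = 24_400_000
implied_cost_differential = gross_os_savings - net_savings

# Annualized net savings: the report rounds this to ~$1.2M a year.
annual_net_savings = net_savings / life_cycle_years

print(f"AIM XXI investment for 17 tanks: ${total_investment:,}")
print(f"Implied investment differential:  ${implied_cost_differential:,}")
print(f"Annual net savings:               ${annual_net_savings:,.0f}")
```

AMSAA's caveat in the surrounding text applies equally here: this arithmetic is only as reliable as the mileage and repair-cost assumptions behind the $28.8 million estimate.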
AMSAA officials said that the mileage (1,500) used to compute the annual operating and support cost was not typical of the usage in an operating unit, which averages about 630 miles a year. Consequently, the estimated savings between AIM XXI and IRON tanks would be much less and this, in turn, would reduce the life-cycle savings. Depot officials also told us that the direct IRON program costs had been reduced to $196,000 a tank for fiscal year 1996, compared with the $266,000 used in the analysis. This reduction would lower the investment cost for the 17 IRON tanks to about $3.9 million. AIM XXI program officials told us that one of the difficulties they are facing is that there is no empirical data showing that there are latent deficiencies in the tanks as a result of not having a depot overhaul program. Additionally, the Army does not have a predictive readiness system to demonstrate that, if the tanks are not overhauled, they will not be able to maintain a high rate of operational readiness. The officials also told us that if the test data proved what they expected and the AIM XXI program was approved, they would like to begin inducting an average of 66 M1A1 tanks into the depot beginning in fiscal year 1998 and continue the program for 20 years. The concern raised by Army officials and unit commanders about the AIM XXI program centered on the impact the program could have on the M1A2 modernization effort. The officials said that in today’s budget environment the funds for the AIM XXI program would probably come from some existing program as it was unlikely that the Army would receive additional budget authority for the program. They said that while it would be nice to have overhauled M1A1 tanks, they would much rather have M1A2 tanks. Therefore, if the AIM XXI program would result in M1A2 fielding delays, they would opt for the M1A2 tanks. 
Anniston officials said that because General Dynamics is involved in both the AIM XXI and M1A2 programs and both programs could be performed in the same facilities, the M1A2 unit cost should be reduced. However, they were not able to estimate the extent of the cost reduction. Anniston officials also told us that in the absence of the AIM XXI program or some other heavy armor work, the depot could lose as much as 50 percent of its heavy armor repair capability and the lost capability would be difficult to replace in a surge situation. They said that when the IRON program is completed in fiscal year 1996, the depot’s workload will consist primarily of component repair. The officials also said that, in their opinion, the AIM XXI program would not only increase the availability, reliability, and fightability of the M1 tank fleet but also protect industrial base core capabilities that would be needed in time of conflict. To determine the readiness of the M1 tank fleet, we reviewed data from the Army’s readiness reporting system along with readiness reports from three Army divisions and the NTC, which we visited during our review. We also interviewed brigade and battalion officials at the three divisions and officials at NTC to obtain their views on the operating condition of their M1 tanks and the tanks’ ability to perform assigned missions. At NTC, we focused on the maintenance of the tank fleet and on training realism. We also obtained the views of contractor personnel who maintain the M1A1 tank fleet. To determine whether the change in repair parts funding had affected the units’ ability to maintain the M1 tank, we interviewed Army division officials at the three divisions. We also identified parts that were in short supply and that were (in the opinion of division officials) affecting the divisions’ maintenance capabilities. 
We then obtained the supply position of these items at the wholesale level and discussed the reasons for the shortages with wholesale level supply management officials. We interviewed Army and contractor officials and reviewed documentation relating to the proposed AIM XXI overhaul program for the M1 tank fleet. We obtained the officials' views on the need for such a program, along with their proposals to test and implement the overhaul effort. We also reviewed the effect of the proposed overhaul program on future tank repair workload at the maintenance depot by examining depot workload statistics and forecasts and obtaining the views of depot officials. Our review was conducted at the Office of the Project Manager, Abrams Tank System, and the Army Tank-Automotive and Armaments Command, Warren, Michigan; Army Materiel Command, Alexandria, Virginia; Office of the Deputy Chief of Staff for Logistics, Pentagon, Washington, D.C.; National Training Center, Fort Irwin, California; Anniston Army Depot, Anniston, Alabama; 1st Infantry Division (Mechanized), Fort Riley, Kansas; 1st Cavalry Division and the 2nd Armored Division, Fort Hood, Texas; and Army Materiel Systems Analysis Activity, Aberdeen Proving Grounds, Maryland. The Department orally commented that it fully concurred with our draft report. We conducted our review from August 1995 to February 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Army; the Director of the Office of Management and Budget; and the Chairmen of the House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, House and Senate Committees on Appropriations, and Senate Committee on Armed Services. Please contact me on (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix II. James S. Moores Darryl S. 
Meador | Pursuant to a legislative requirement, GAO reviewed the absence of a procurement program to modernize the M1 tank fleet beyond the upgrade of existing tanks and to address new tank threats, focusing on: (1) whether the current readiness level of the M1 tank is adequate to meet its war-fighting requirements; (2) whether the operating condition of the tanks at the National Training Center (NTC) is adequate to meet training requirements; (3) whether the change in repair parts funding has adversely affected unit maintenance; and (4) the status of the Army's proposed M1 tank overhaul program, referred to as the Abrams Integrated Management XXI (AIM XXI) program. 
GAO found that: (1) as of March 31, 1995, over 94 percent of the active and reserve Army units reported that their M1 tanks were ready to perform the majority of the assigned wartime missions, and about 56 percent of the units reported that their M1 tanks were ready to perform all of their assigned wartime missions; (2) because of the high operating tempo of the training tanks, the M1 tanks at NTC are experiencing more maintenance problems than tanks in active Army units; (3) however, in spite of the maintenance problems, NTC has fielded the required number of tanks to meet all of its training requirements; (4) on average, the NTC M1 fleet maintained an operational readiness rate of about 82 percent for the 8-month period that ended December 1995; (5) commanders at three Army divisions that have 834 M1 tanks told GAO that the change in repair parts funding had not caused them to alter their maintenance approach; (6) the commanders cited some instances in which they had experienced repair parts shortages; (7) however, they emphasized that lack of funds to buy the parts was not the reason for the shortages; (8) the parts were generally not available in the supply system; (9) notwithstanding, some Army officials have proposed a M1 overhaul program, at a cost of $559,000 a tank, because they were concerned that latent deficiencies that do not show up during routine readiness inspections could show up during wartime and affect the tanks' performance; (10) other Army officials, however, are resistant to the overhaul program because of concerns that the program would take funds away from the ongoing M1A2 upgrade program; (11) the Army does not maintain data that show the extent, if any, of the latent deficiencies, nor does the Army have a predictive readiness system that would show what would happen to operational readiness if there were no depot overhaul program; and (12) at the time GAO completed its review, the Army had not made a decision concerning the proposed 
overhaul program. |
Established in 1956, DI is an insurance program that provides monthly cash benefits to workers who are unable to work because of severe long-term disability. Workers who have worked long enough and recently enough are insured for coverage under the DI program. To meet the definition of disability under the DI program, an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity (SGA). Individuals are considered to be engaged in SGA if they have countable earnings above a certain dollar level. Once a person is on the disability rolls, benefits continue until (1) the beneficiary dies, (2) the beneficiary becomes eligible for Social Security retirement benefits at full retirement age, (3) SSA determines that the beneficiary is no longer eligible for benefits because his or her earned income exceeds the SGA level, or (4) SSA decides that the beneficiary's medical condition has improved to the point that he or she is no longer considered disabled. In 2002, SSA paid about $60 billion in DI cash benefits to 5.5 million disabled workers, with average monthly benefits amounting to $834 per person. In addition to receiving cash assistance, beneficiaries automatically qualify for Medicare after 24 months of DI entitlement. During the 1970s, as the number of disability awards increased significantly and resulted in substantial cost increases for the DI program, the Congress became concerned about the growth of the DI program and program rules that provided disincentives to returning to work. To encourage DI beneficiaries to return to work—and, potentially, to leave the benefit rolls—the Congress has, over the years, enacted legislation providing various work incentives. 
Such incentives include a trial work period, during which beneficiaries may earn any amount for 9 months within a 60-month period and still receive full cash and medical benefits, and continued Medicare coverage, which allows beneficiaries to maintain Medicare eligibility for at least 39 months following a trial work period as long as medical disability continues. In an effort to further address these issues, the Congress, in 1980, required SSA to conduct demonstration projects to evaluate the effectiveness of policy alternatives that could encourage DI beneficiaries to reenter the workforce. A key aspect of this demonstration authority is SSA's ability to waive DI and Medicare program rules to the extent needed in conducting these projects. The legislation granting DI demonstration authority also identified a variety of policy alternatives for SSA to consider testing, including (1) alternative ways of treating DI beneficiaries' work-related activity, such as methods allowing for a reduction in benefits based on earnings, and (2) modifications in other rules, such as the trial work period and Medicare eligibility waiting period, that may serve as obstacles to DI beneficiaries returning to work. In addition, this legislation identified several requirements pertaining to the design and evaluation of DI demonstration projects. In particular, these projects were required to be of sufficient scope and carried out on a wide enough scale to permit a thorough evaluation of the policy alternatives studied, such that the results would be generally applicable to the operation of the DI program. The law additionally required SSA to submit reports to the Congress announcing the initiation of DI demonstration projects, as well as periodic reports describing the status of these projects and a final report on all projects carried out under the demonstration authority. 
SSA was directed to make recommendations, when appropriate, for legislative or administrative changes in its reports to the Congress. Another important aspect of SSA’s DI demonstration authority is that unlike other SSA research activities, which are funded through congressional appropriations, these projects can be paid for with DI Trust Fund and Old-Age and Survivors Insurance Trust Fund monies. Therefore, SSA is not required to obtain congressional approval for DI demonstration expenditures, although it is required to receive approval from the Office of Management and Budget for an annual apportionment of Trust Funds for these demonstrations. SSA’s DI demonstration authority has always been granted on a temporary basis and therefore has been subject to periodic review and renewal by the Congress. After initially granting this authority for a 5-year period, the Congress subsequently renewed it several times, in 1986, 1989, 1994, 1999, and 2004. The renewal of SSA’s authority has sometimes been delayed so that SSA has, on several occasions, gone without DI demonstration authority. For example, after its demonstration authority expired in June 1996, SSA was not again granted DI demonstration authority until December 1999. Most recently, the Congress extended this demonstration authority through December 2005. In addition to granting this general DI demonstration authority, the Congress may enact legislative mandates for SSA to conduct specific DI demonstration projects. For example, the Ticket to Work and Work Incentives Improvement Act of 1999 required SSA to conduct a demonstration to assess the effectiveness of a benefit offset program under which DI benefits are reduced by $1 for every $2 in earnings (above a certain level) by a beneficiary. SSA’s authority to conduct this demonstration is similar in some respects to the authority it has under its general DI demonstration statute. 
For instance, the statute allows waiver of DI and Medicare program provisions to carry out this benefit offset demonstration. However, some differences exist between the two authorities. In particular, the benefit offset demonstration authority provides a more detailed and comprehensive list of demonstration objectives for SSA to fulfill than does SSA’s general authority. For example, the benefit offset demonstration authority lists six “matters to be determined,” which include assessments of project costs; savings to the Trust Funds; and project effects on employment outcomes such as wages, occupations, benefits, and hours worked. Regardless of the authority under which they are carried out, demonstration projects examining the impact of social programs are inherently complex and difficult to conduct. Measuring outcomes, ensuring the consistency and quality of data collected at various sites, establishing a causal connection between outcomes and program activities, and separating out the influence of extraneous factors raise formidable technical and logistical problems. Thus, these projects generally require a planned study and considerable time and expense. Adding to these complexities are other administrative or statutory requirements affecting SSA’s DI demonstrations. For example, SSA’s policy is that its demonstration projects must not make those who participate in the project worse off, which could limit the specific types of policy alternatives the agency can study or the methods used to study such alternatives. Although the legislation granting DI demonstration authority does not prescribe the use of particular methodological approaches, SSA has repeatedly recognized that the law’s general requirements for demonstration evaluations require SSA to conduct these projects in a rigorous manner that provides the agency with a reliable basis for making policy recommendations. 
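The $1-for-$2 offset rule that this mandated demonstration tests can be expressed as simple arithmetic. The sketch below is illustrative only: the $800 earnings threshold is an assumed placeholder (the statute says only "above a certain level"), while the $834 benefit figure is the 2002 average monthly benefit cited earlier in this report.

```python
def offset_benefit(full_benefit, monthly_earnings, earnings_threshold):
    """Reduce the DI benefit by $1 for every $2 of earnings above the
    threshold (illustrative sketch; actual program rules are more detailed)."""
    countable = max(0, monthly_earnings - earnings_threshold)
    reduction = countable / 2  # $1 offset per $2 earned above the threshold
    return max(0, full_benefit - reduction)

# Hypothetical case: $834 benefit, $1,000 in earnings, assumed $800 threshold
print(offset_benefit(834, 1000, 800))  # prints 734.0 (834 - 200/2)
```

Under this structure a beneficiary's combined income always rises with earnings, in contrast to the current SGA rule, under which benefits are cut off entirely once earnings exceed the SGA level.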
Rigorous methods are required to estimate the net impact of a tested disability policy option because many other factors, such as the economy, can influence whether a beneficiary returns to work. In an August 2002 report to the SSA Commissioner, an SSA advisory panel stated that it is widely agreed that experimental designs, “when feasible from operational and budgetary perspectives and when they can be conducted without serious threats to their validity, are the best methodology for determining the effects of changes in government programs.” In addition, SSA officials and other researchers have noted the advantages of experimental designs in providing policymakers with more clear-cut results that are less subject to debate than results derived from other methods. However, when experimental designs are not feasible or desirable, the use of quasi-experimental designs offers a reasonably rigorous evaluation alternative that may, under certain circumstances, offer advantages over experimental designs. Other factors may also limit the rigor of DI demonstrations, including insufficient sample sizes, inconsistency in demonstration design or implementation across multiple project sites, and deficiencies in data collection. Such design, implementation, and evaluation weaknesses may hamper SSA’s use of project results as a basis for making policy recommendations because they limit the agency’s ability to (1) control for factors external to the demonstration, (2) generalize demonstration results to a wider population of DI beneficiaries, and (3) isolate the effects of specific policy interventions from the overall effects produced by a demonstration. The Office of Program Development and Research (OPDR) is the entity within SSA that develops and implements demonstration projects for the DI and Supplemental Security Income (SSI) Programs. 
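The advantage of experimental designs noted above rests on a simple statistical point: with random assignment, external factors average out across the treatment and control groups, so the net impact of a tested policy can be estimated as a plain difference in outcome rates. The following minimal simulation sketch illustrates this; all numbers are fabricated for illustration and do not represent SSA data.

```python
import random

random.seed(1)

# Hypothetical beneficiaries, each with a baseline propensity to return
# to work that reflects unobserved external factors (health, economy, etc.)
population = [random.random() for _ in range(10000)]

# Random assignment splits those external factors evenly across groups
random.shuffle(population)
treatment, control = population[:5000], population[5000:]

TREATMENT_BOOST = 0.05  # assumed true effect of the tested policy

def worked(propensity, treated):
    """1 if the simulated beneficiary returns to work, 0 otherwise."""
    p = propensity + (TREATMENT_BOOST if treated else 0)
    return 1 if random.random() < p else 0

t_rate = sum(worked(p, True) for p in treatment) / len(treatment)
c_rate = sum(worked(p, False) for p in control) / len(control)

# With randomization, the raw difference in means estimates the net impact
estimate = t_rate - c_rate
print(round(estimate, 3))  # should land close to the assumed 0.05 effect
```

Without random assignment (for example, if more motivated beneficiaries self-selected into the treatment group), the same difference in means would confound the policy's effect with those external factors, which is the quasi-experimental problem the text describes.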
OPDR program and research staff—sometimes with the assistance of outside research organizations—identifies the broad outlines and requirements of disability program demonstration projects, including the basic objectives, scope, and methodological standards for these projects. SSA then issues formal notices requesting public or private sector organizations to submit offers to conduct the demonstration projects, which may include development of a detailed design plan, provision of technical support, collection of project data, or evaluation of project results. On the basis of SSA’s review of submitted proposals and bids, the agency may enter into grants, cooperative agreements, or contractual arrangements with one or more organizations to carry out demonstration projects. For example, a single demonstration may involve cooperative agreements with states to design and implement projects as well as contracts with one or more research institutions to provide technical assistance to the states and evaluate demonstration results. Project managers in OPDR have the primary responsibility for overseeing demonstration projects to ensure that they meet SSA’s technical and programmatic requirements. OPDR collaborates with SSA’s Office of Acquisition and Grants in issuing formal project notices and solicitations and, subsequently, in overseeing grant or contract performance. SSA has not used its demonstration authority to extensively evaluate a wide range of DI policy areas dealing with return to work. Until very recently, SSA has focused its demonstration efforts primarily on a relatively narrow set of policy issues dealing with the provision of vocational rehabilitation and employment services, despite being given the authority to assess a much broader range of policy alternatives. 
Even in the area of vocational rehabilitation and employment issues, SSA's use of DI demonstration authority has not been comprehensive and, therefore, did not extensively address key policy issues that the agency is currently grappling with under its Ticket to Work program. SSA's recently initiated or proposed demonstrations have begun to address a broader range of policy issues. However, the agency has no systematic processes or mechanisms for ensuring that it is adequately identifying and prioritizing those issues that could best be addressed through use of its demonstration authority. The DI demonstration projects that SSA has conducted since 1980 have not extensively addressed a wide range of return-to-work policy issues. Since first being granted DI demonstration authority 24 years ago, SSA has used this authority to complete four projects, with another project nearing completion. Total costs for these projects amount to at least $107 million, of which about $42 million was paid for from the Old-Age and Survivors Insurance and Disability Insurance (OASDI) Trust Funds. The legislation granting DI demonstration authority to SSA provided the agency with an opportunity to examine a broad set of return-to-work policy alternatives and even identified some specific alternatives for SSA to consider studying, including (1) reducing, rather than terminating, benefits based on earnings; (2) lengthening the trial work period; (3) decreasing the 24-month waiting period for Medicare benefits; (4) altering program administration; (5) earlier referral of beneficiaries for rehabilitation; and (6) using employers and others to stimulate new forms of vocational rehabilitation. The projects SSA has conducted thus far have focused predominantly on the latter category of issues involving vocational rehabilitation and have focused to a lesser extent—or not at all—on other key policy issues affecting return to work (see table 1). 
More specifically, examination of policy alternatives dealing with the provision of vocational rehabilitation and employment services has been the primary objective of four of the five completed or nearly completed demonstrations. Although two of these projects also examined other DI return-to-work policy issues—such as the possible effects of changes in program work incentives and alterations in the provision of medical benefits—they did so to only a limited extent. None of the projects looked at other potentially significant DI policy issues, such as the possibility of changing SSA’s benefit structure to allow for a reduction in benefits, rather than a complete cutoff of benefits, based on earnings. Furthermore, SSA has not used its DI demonstration authority to comprehensively examine issues involving vocational rehabilitation, including key policy issues with which the agency is currently grappling. For example, SSA did not extensively test key elements of what eventually became the Ticket to Work program. Although the ticket program was not formally proposed by SSA in a legislative package until 1997, as early as 1989, in an annual report to the Congress on SSA’s demonstration activities, SSA noted that among its ideas for improving SSA’s ability to assist beneficiaries in returning to work was a voucher program that could be used to pay for vocational rehabilitation services from private providers. SSA told the Congress that such a program, as well as other possible policy changes, would need to be thoroughly tested as a prerequisite to developing a new nationwide program. However, only one project completed under SSA’s DI demonstration authority—Project Referral System for Vocational Rehabilitation Providers (Project RSVP), initiated in 1997—addressed an issue directly relevant to the ticket program, namely, the use of a contractor to perform certain administrative functions for an expanded vocational rehabilitation referral and reimbursement program. 
But our review of project documentation and our discussions with SSA officials indicate that Project RSVP was more of an effort to make an operational change in the way SSA managed its vocational rehabilitation program than a study to evaluate the advantages and disadvantages of such a change. In fact, we could not identify any end product or final results for this project. SSA also made another attempt, ultimately unsuccessful, to directly address issues related to establishment of a ticket program. In the Omnibus Budget Reconciliation Act of 1990, the Congress mandated that SSA use its DI demonstration authority to assess the advantages and disadvantages of permitting DI beneficiaries to select from among both public and private vocational rehabilitation providers. But in January 1993, SSA reported to the Congress that it would be unable to conduct this demonstration because of an insufficient number of providers willing to participate in the project. SSA explained that the performance-based reimbursement provisions of the proposed project appeared to be the reason why providers were reluctant to participate. Despite the Congress’ expressed interest in these issues, SSA did not attempt to identify alternative ways to carry out such a demonstration. In particular, given that SSA remained very interested in the expanded use of private rehabilitation providers for the DI program, the difficulties encountered in recruiting providers for the demonstration should have highlighted the need for SSA to further study the issue of provider reimbursement before proceeding with any policy initiatives in this area. SSA’s current Deputy Commissioner for Disability and Income Security Programs told us that if SSA had used its demonstration authority to study these types of issues in the 1990s, SSA might have been able to identify and possibly resolve these issues then rather than struggling to do so now. 
In addition, such information could have been helpful in the Congress’ consideration of the ticket legislation’s merits as it deliberated whether to enact this program. In contrast to the completed and nearly completed demonstration projects, SSA’s more recent projects, which are generally in the early planning or proposal stages, represent a much more wide-ranging set of demonstrations (see table 2). For example, the projects, as currently described, will deal with a variety of issues such as early provision of cash and medical benefits and a change in the benefit payment structure to allow a benefit offset for beneficiaries earning above the SGA level. This more comprehensive approach to demonstrations is due in part to legislative changes. For example, the Ticket to Work Act mandated that SSA conduct a benefit offset demonstration and also permitted SSA, for the first time, to conduct demonstrations involving DI applicants, thereby allowing SSA to test ideas such as early provision of cash and medical benefits and vocational rehabilitation services to individuals who have not yet entered the disability rolls. In addition, SSA has recently placed a high priority on conducting disability demonstration projects that examine the key issues affecting beneficiaries’ return to work. This priority was reflected in the SSA Commissioner’s September 25, 2003, testimony before the House Committee on Ways and Means, Subcommittee on Social Security, in which she announced several new demonstrations as part of a broader strategy to improve the DI and SSI programs. SSA estimates that these recently proposed and initiated projects will cost about $357 million, $293 million of which will be paid for from the OASDI Trust Funds. Despite SSA’s recent broadening of the scope of its projects, the agency does not have in place any systematic processes for identifying and assessing potential issues that could be well suited for study under SSA’s demonstration project authority. 
Therefore, there is no assurance that the agency will, in future demonstration efforts, maintain its current focus on a broad array of return-to-work policy issues. Our discussions with SSA officials and review of a study examining earlier demonstration efforts indicate that the agency's agenda for demonstration projects is subject to significant change over time resulting, in part, from changes in executive branch and SSA leadership and senior management. The effects of such changes may include termination of projects or significant delays and modifications in their planning and implementation. For example, in its 1994 report examining SSA's Research Demonstration Program (RDP), the agency's Inspector General noted that changes in SSA leadership had disrupted the accomplishment of RDP objectives. The disability research and advisory officials we spoke with also indicated that SSA's project priorities and decisions are significantly influenced by larger political and organizational changes, which may prevent SSA from focusing on long-term research objectives. One advisory official noted that these difficulties in long-term planning have occurred despite the fact that the Congress—in making SSA an independent agency and establishing a 6-year term for the SSA Commissioner—intended that SSA would be better able to engage in the type of long-range planning required to address its program needs. SSA's approach for identifying and prioritizing demonstrations has varied through the years. Soon after being granted DI demonstration authority in 1980, SSA developed a detailed demonstration research plan to directly address the policy issues identified in SSA's authorizing legislation. 
However, our discussions with SSA officials and review of internal agency documents indicate that the plan was never acted upon because of competing organizational priorities and concerns over the potential cost of the demonstrations and possible technical limitations, such as the adequacy of systems support. Consequently, by the time its DI demonstration authority was due to expire in 1985, SSA had not used it to conduct any demonstrations. In the second half of the 1980s, after its demonstration authority was renewed, SSA changed course. Partly on the basis of solicitation of ideas from the public, SSA identified priority areas dealing mostly with vocational rehabilitation and employment services issues for which it would issue grants to public and private organizations to conduct demonstrations. The specific priority areas identified changed from year to year as SSA attempted to stimulate, test, and coordinate effective approaches toward employment assistance. In its required 1991 annual report to the Congress on its DI demonstration activities, SSA said that it was proceeding with broader testing of key elements of a comprehensive employment and rehabilitation system. But our review of agency documents and discussions with SSA officials indicate that SSA has not developed a formal, comprehensive, long-term agenda for conducting DI demonstration projects. Senior SSA officials told us that the agency's current demonstration project decisions are, to some extent, based on discussions with outside research, advocacy, and other groups. But SSA has no formal mechanisms and requirements in place to ensure that the agency obtains such input and to decide how such input should be factored in with other considerations in determining the agency's demonstration priorities. The need for explicit planning concerning SSA research, including demonstrations, has been identified in past reviews of SSA's disability programs. 
For example, in 1998, the Social Security Advisory Board (SSAB) noted the need for SSA to develop a comprehensive, long-range research and program evaluation plan for DI and SSI that would guide the agency's research and define priorities. SSAB also said that SSA's research plan should reflect broad consultation with the Congress, other agencies, SSAB, and others and recommended the establishment of a permanent research advisory panel to advise in the development of a long-range plan. In a 1996 report on SSA's disability programs, the National Academy of Social Insurance noted the "dearth of rigorous research on the disability benefit programs" since the 1980s and said that SSA needed a comprehensive, long-range research program to address this deficiency. In addition, officials from disability research, advisory, and advocacy groups told us that they believe the establishment of a formal research agenda or an advisory panel with regard to demonstration projects would be helpful in ensuring that SSA adequately identifies its demonstration priorities and maintains its commitment to these priorities even in the face of political or administrative changes. SSA's demonstration projects have had little influence on the agency's and the Congress' consideration of DI policy issues. This is due, in part, to methodological limitations that have prevented SSA from producing project results that are useful for reliably assessing DI policy alternatives. In addition, SSA lacks a formal process for fully considering the potential policy implications of its demonstration results. Furthermore, SSA's reports on demonstration projects have not fully apprised the Congress of project results and their policy implications. 
The demonstration projects SSA has conducted under its DI demonstration authority have generally not been designed, implemented, or evaluated in a rigorous enough manner to allow the agency to reliably assess the advantages and disadvantages of specific policy alternatives. While SSA’s major DI demonstrations have varied significantly in their methodological rigor, all of them have experienced at least some significant methodological limitations. For example, SSA’s first major DI demonstration, the Research Demonstration Program, was characterized by a number of fundamental design and evaluation flaws such as the limited scope and small sample sizes of the RDP projects and the limited use of control groups. In its 1994 report on the RDP, the Department of Health and Human Services’ (HHS) Inspector General noted that because of such limitations, “grantees were unable to conduct research that SSA deemed necessary for definitive tests of alternatives to help beneficiaries obtain work.” In addition, SSA did not develop a plan for evaluating the overall RDP results as part of its initial project design. In a required 1994 annual report to the Congress on its demonstration activities, SSA acknowledged that the lack of a rigorous project design and the omission of a strong evaluation component limited the ways in which the project results could be generalized. But SSA also described a number of “observations” that resulted from the RDP and noted that this project had helped to identify the agency’s future demonstration priorities. However, given the significant limitations of the RDP, it is unlikely that its results could have provided a reliable basis for effectively establishing such priorities. In its next major DI demonstration effort, Project Network, which was initiated as the RDP projects were being completed, SSA avoided many of the major shortcomings of the RDP. 
For example, Project Network was rigorously designed, using an experimental approach based on the random assignment of beneficiaries to treatment and control groups. As a result, this project produced some reasonably clear results, which SSA thoroughly evaluated in an effort to assess the overall impact of the tested policy alternatives. Despite its generally rigorous design, Project Network also had some limitations that may have, to some extent, limited its usefulness for policy consideration. For example, in examining the effects of a case management approach for providing vocational rehabilitation services, Project Network used four different service delivery models. Although the Project Network evaluation provided information on the overall effects of a case management approach, it did not provide a basis for reliably assessing and comparing the separate effects of the four models even though such an assessment may have provided useful information for policy considerations. In addition, Project Network did not produce results that could be generalized to the larger population of beneficiaries, which, in turn, limited SSA's ability to assess whether the tested policy should be implemented on a nationwide basis. As was the case with Project Network, SSA has made a significant effort under its State Partnership Initiative (SPI) demonstration to avoid some of the problems encountered under the RDP. For example, SSA contracted with two research institutions to design an evaluation plan for the demonstration and to provide assistance with technical issues and data collection to the various states conducting this demonstration. Our discussions with SSA and contractor officials who have been involved in this demonstration as well as our own review of SPI project documents indicate that the efforts of the contractors appear to have introduced a certain degree of rigor in the design, implementation, and, potentially, evaluation of this demonstration. 
For example, SSA’s contractors have indicated that the SPI “core evaluation” will likely produce useful results regarding the effects on beneficiary employment of the overall package of policy alternatives tested under the demonstration. But despite these efforts, the SPI design also has a number of limitations that could substantially reduce the usefulness of its results for evaluating the effects of the demonstration’s individual policy alternatives. For example, SSA gave each of the 12 participating states significant discretion in designing and conducting projects, which resulted in 12 distinct state projects. Each project tested different combinations of policy alternatives, applied different research methods to study these alternatives, and used varying approaches to select beneficiaries for participation in the project. SSA officials told us that such differences across projects make it unlikely that SPI will produce final results that allow for reliable evaluations of specific policy alternatives on a national level. SSA and one of its SPI contractors have also noted other potential limitations in the design and implementation of SPI, such as problems with the quality of states’ data collection, that may detract from SSA’s ability to evaluate specific policy alternatives. SSA officials currently responsible for planning and conducting DI demonstrations acknowledged that the agency’s past demonstrations have generally not provided useful information for policy making largely because of the limited rigor with which these projects were conducted. However, they emphasized that the agency has, over the past couple of years, placed a new emphasis on ensuring that DI demonstrations are rigorously designed so that the results can be used to effectively evaluate specific policy options and develop recommendations. 
In particular, the officials noted the importance of using, whenever feasible, an experimental approach in its demonstration projects and of ensuring that demonstration results can be generalized to the larger population of DI beneficiaries. The officials also emphasized the need for SSA to hire additional staff with the expertise needed to carry out methodologically rigorous demonstration projects. Aside from the SPI demonstration, all of SSA’s other current DI demonstrations are in the early design phase or have been proposed only recently. Therefore, we were not able to assess the methodological rigor of these projects. However, our review of SSA’s request for proposal (RFP) for its Benefit Offset demonstration indicates that SSA is making a serious effort to comprehensively and rigorously study this policy issue. For example, SSA has proposed using an experimental design with random assignment to treatment and control groups. Nevertheless, the scope and complexity of SSA’s proposal suggest that this will be a very challenging project for SSA to carry out successfully, and that the agency will need to ensure that its project design avoids some of the pitfalls that have limited the usefulness of past demonstrations, such as insufficient sample size and lack of uniformity in tested interventions across sites. SSA does not have procedures or processes in place to ensure that project results—regardless of any limitations that they may have—are fully considered by senior officials within the agency for their policy implications or their implications for future SSA research and demonstrations. Without such processes, projects that begin with the support of senior managers under one administration may not receive adequate attention from a new group of senior managers under a future administration. 
Our discussions with current and former SSA officials and with officials from disability research, advocacy, and advisory organizations indicate that such shifting priorities have been the norm for SSA’s DI demonstration projects. For example, several of these officials told us that when Project Network was completed in 1999, its results were not formally reviewed and considered by senior SSA managers, in part because of the changes in presidential administrations and in senior agency leadership that had occurred since the start of the project. Officials from one of the groups we spoke with told us that SSA’s consideration of project results could be improved by the establishment of a panel to review project results and explore their policy implications. An additional factor that could limit SSA’s consideration of demonstration results is the lack of an adequate historical record—reflecting the outcomes and the problems or issues encountered—of the various projects that the agency has conducted under its demonstration authority. SSA has not maintained a formal record of its disability demonstration project activities and results, so basic information on these projects—such as project notices, design documents, and evaluation documents—is in some cases no longer available. As a result, information on some projects can be obtained only by relying on the recollection of SSA employees who were involved when the study was conducted. While formal document retention requirements may not dictate that SSA maintain such information, several SSA officials told us that the agency would benefit from an institutional record of demonstration activity. According to these officials, such a record would constitute a body of knowledge that the agency should be building to improve DI return-to-work policies. This becomes even more important in light of the expected retirement of a large percentage of SSA staff during this decade.
In addition to having shortcomings in its consideration of demonstration results, SSA has not sufficiently communicated the status and results of its demonstration projects to the Congress. Although SSA has been required to issue various reports to the Congress regarding its DI demonstration projects, it has not always produced such reports. For example, although SSA was required to submit final reports on the use of its demonstration authority in 1985, 1990, 1993, and 1996, the only final report that SSA submitted was in 1996. In addition, SSA did not submit annual reports on its demonstration activities in 7 of the 16 years in which these reports were required. Furthermore, when these reports have been produced, they have not provided all of the information needed to fully inform the Congress of demonstration activities and results. For example, our review of these reports indicates that they have frequently lacked key information, such as a discussion of a project’s potential policy implications, its limitations, and the costs of conducting the project. By allowing SSA to waive program provisions and use Old-Age, Survivors, and Disability Insurance (OASDI) Trust Fund dollars, SSA’s DI demonstration authority provides the agency with a special, and potentially very valuable, means of studying policy alternatives to improve the agency’s return-to-work programs. SSA has spent tens of millions of dollars from the OASDI Trust Funds to conduct these projects—in addition to tens of millions of dollars from SSA’s general appropriations—and expects to spend hundreds of millions more within the next 10 years. While these amounts may be small as a percentage of the total Trust Funds, they nevertheless represent a substantial use of increasingly limited federal resources. After having this authority for more than two decades, SSA has yet to use it to propose or assess major policy options that could result in savings to the Trust Funds.
Because SSA’s use of its DI demonstration authority has yet to achieve the Congress’ intended results—and because SSA is permitted to draw on increasingly limited Trust Funds to conduct these demonstrations—we believe it is important for the Congress to maintain close oversight of SSA’s use of this authority. We also believe that such oversight would be a greater challenge if the Congress were to grant this demonstration authority on a permanent basis. As the DI Trust Fund approaches exhaustion, the need for programmatic improvements becomes increasingly pressing. As part of a broader effort to address this need, SSA has recently initiated or proposed a number of DI demonstration projects that, according to SSA officials, are geared toward producing useful and methodologically sound results. Such results could provide an important basis for SSA to address some of the long-standing issues that have led GAO to identify federal disability programs as a high-risk area. However, the challenges SSA has historically faced in conducting demonstration projects and the potential for changing priorities to adversely affect long-range research plans suggest that, in the long run, SSA may be unable to fulfill these demonstration goals. This is especially likely if SSA continues its informal approach to prioritizing and planning demonstrations and assessing their results. Without more formal mechanisms for establishing its commitment to effective and thorough DI demonstrations—including the submission of regular reports to the Congress on the results and implications of its demonstration projects—SSA will be unable to ensure that the extensive amount of time, effort, and funding devoted to these demonstrations is well spent.
To help ensure the effectiveness of SSA’s DI demonstration projects, we recommend that the Commissioner of Social Security take the following actions:

- Develop a formal agenda reflecting the agency’s long-term plans and priorities for conducting DI demonstration projects. In establishing this agenda, SSA should consult broadly with key internal and external stakeholders, including SSA advisory groups, disability researchers, and the Congress.

- Establish an expert panel to review and provide regular input on the design and implementation of demonstration projects from the early stages of a project through its final evaluation. Such a panel should include SSA’s key research personnel as well as outside disability experts and researchers. SSA should establish guidelines to ensure that its project plans and activities adequately address the issues or concerns raised by the panel or provide a clear rationale for not addressing such issues.

- Establish formal processes to ensure that, at the conclusion of each demonstration project, SSA fully considers and assesses the policy implications of its demonstration results and clearly communicates SSA’s assessment to the Congress. Such processes should ensure that SSA consults sufficiently with internal and external experts in its review of demonstration project results and that SSA issues a report to the Congress clearly identifying (1) major project outcomes, (2) major project limitations, (3) total project costs, (4) any policy options or recommendations, (5) expected costs and benefits of proposed options or recommendations, and (6) any further research or other actions needed to clarify or support the project’s results. Another key aspect of such formal processes should be a requirement that SSA maintain a comprehensive record of DI demonstration projects.
This record would help SSA in establishing an empirically based body of knowledge regarding possible return-to-work strategies and in deriving the full value of its substantial investments in demonstration projects. To facilitate close congressional oversight and provide greater assurance that SSA will make effective use of its DI demonstration authority, the Congress should consider the following actions:

- Continue to provide DI demonstration authority to SSA on a temporary basis but allow SSA to complete all projects that have been initiated prior to expiration of this authority. This would provide SSA with greater certainty and stability in its efforts to plan and conduct demonstration projects while preserving the Congress’ ability to periodically reassess and reconsider SSA’s overall use of DI demonstration authority.

- Require that SSA periodically provide a comprehensive report to the Congress summarizing the results and policy implications of all of its DI demonstration projects. The due date for this report could either coincide with the expiration of SSA’s DI demonstration authority or, if this authority is made permanent or extended for a period greater than 5 years, be set for every 5 years. Such reports could serve as a basis for the Congress’ assessment of SSA’s use of its demonstration authority and its consideration of whether this authority should be renewed.

- Establish reporting requirements that more clearly specify what SSA is expected to communicate to the Congress in its annual reports on DI demonstrations. Among such requirements could be a description of all projects that the SSA Commissioner is considering conducting or is conducting some preliminary work on. For each demonstration project that the agency is planning or conducting, SSA should provide clear information on the project’s specific objectives, potential costs, key milestone dates (e.g., actual or expected dates for the RFP, award of contracts or grants, start of project operations, completion of operations, completion of analysis, and final report), potential obstacles to project completion, and the types of policy alternatives that SSA might consider pursuing depending on the results of the demonstration. This would provide the Congress with a more complete understanding of the direction and progress of SSA in its efforts to fulfill its DI demonstration requirements.

- More clearly specify the methodological and evaluation requirements for DI demonstrations to better ensure that such projects are designed in the most rigorous manner possible and that their results are useful for answering specific policy questions and for making, where appropriate, well-supported policy recommendations. Such requirements should not be entirely prescriptive, given the need for SSA to have sufficient flexibility for choosing the right methodological approach based on the specific circumstances and objectives of a particular demonstration project. However, the requirements could call for SSA to choose, to the extent practical and feasible, the most rigorous methods possible in conducting these demonstrations. Whatever methods are ultimately selected, SSA should be sure that the methods used will allow for a reliable assessment of the potential effect on the DI program of the individual policy alternatives being studied. Finally, SSA’s legislative requirements could be revised to include a more explicit list of project objectives—such as assessments of specific employment outcomes, costs and benefits, and Trust Fund savings—similar to the language that was included under Sections 302(b)(1) and (b)(2) of the Ticket to Work and Work Incentives Improvement Act.
In commenting on a draft of this report, SSA agreed with our recommendations. SSA agreed that in the past it has not used its demonstration authority to extensively evaluate DI policy but noted that its recently initiated or proposed demonstrations will play a vital role in testing program and policy changes. SSA also agreed that the use of experts in developing demonstration projects is very useful and commented that it has used the expertise of particular individuals on an ad hoc basis and plans to continue to use the advice and recommendations of experts in the development of future demonstrations. Finally, SSA agreed that a central source of information regarding the results and policy implications of disability demonstrations needs to be established and stated that it planned to fully analyze the results of demonstration projects to inform DI policy decisions. SSA’s comments appear in appendix II. Copies of this report are being sent to the Commissioner of SSA, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III.

To address the mandated objectives, we reviewed legislation authorizing the Social Security Administration (SSA) to conduct Disability Insurance (DI) demonstration projects, congressional reports related to this legislation, and SSA regulations governing DI demonstration activities. We also examined internal SSA memorandums and planning documents discussing proposals to conduct demonstration projects and the nature, purpose, requirements, and distinguishing features of SSA’s demonstration authority.
We interviewed a wide range of current and former SSA officials who have had involvement in or responsibility for conducting disability program demonstration projects, including officials from the Office of Disability and Income Security Programs (ODISP) and two offices operating under ODISP—the Office of Program Development and Research and the Office of Employment Support Programs—as well as officials from the Office of the Chief Actuary, the Office of Acquisition and Grants, the Office of Budget, the Office of Strategic Management, and the Office of Research, Evaluation, and Statistics. We also interviewed officials from disability research, advisory, and advocacy organizations. In addition, we examined other reviews of SSA’s disability demonstration and research programs, including prior GAO and Inspector General reports and reports from disability research and advisory groups. We also reviewed SSA budget documents identifying agency spending on disability program demonstrations and SSA testimony describing agency priorities related to the DI program in general and demonstration projects in particular. In addition, we examined SSA’s strategic plan, annual performance plans, and annual accountability reports. To obtain detailed information on SSA’s DI demonstration projects, we reviewed various documents related to SSA’s design, implementation, and evaluation of demonstration projects including agency reports to the Congress; public notifications of demonstration projects issued in the Federal Register; contract, grant, and cooperative agreement solicitation and award notices issued in the Federal Register or in the Commerce Business Daily; and project reports submitted to SSA by grantees or contractors, including project design and evaluation documents. 
We used information from these sources to identify key characteristics and outcomes of each project, including its broad goals, specific study objectives, types of program waivers applied, methodology, actual or expected costs, funding sources, major project milestones including actual or expected initiation and completion dates, project duration, involvement of outside contractors and grantees, key project strengths and limitations, and final project results, including any recommendations that may have been made. The type and extent of information we obtained for each demonstration project varied widely, in large part because SSA has not maintained comprehensive documentation on its prior demonstrations. In addition, documentation on SSA’s more recent demonstrations was very limited given that these projects are in the early planning and design stages. To provide a broader context for understanding SSA’s use of its demonstration authority, we reviewed other federal agencies’ legislative authorities for conducting demonstration and research activities. We also examined reports from GAO and other organizations that evaluated demonstration and research projects conducted by other federal agencies or that identified key evaluation and methodological issues related to such projects. We performed our work at SSA headquarters in Baltimore, Maryland, and at various locations in Washington, D.C. We conducted our work between October 2003 and August 2004 in accordance with generally accepted government auditing standards. The following individuals also made important contributions to this report: Jacquelyn D. Stewart, Erin M. Godtland, Corinna A. Nicolaou, Daniel A. Schwimer, Ronald La Due Lake, Michele C. Fejfar. 
Since 1980, the Congress has required the Social Security Administration (SSA) to conduct demonstration projects to test the effectiveness of possible program changes that could encourage individuals to return to work and decrease their dependence on Disability Insurance (DI) benefits. To conduct these demonstrations, the Congress authorized SSA, on a temporary basis, to waive certain DI and Medicare program rules and to use Social Security Trust Funds. The Congress required GAO to review SSA's use of its DI demonstration authority and to make a recommendation as to whether this authority should be made permanent. SSA has not used its demonstration authority to extensively evaluate a wide range of DI policy areas dealing with return to work. Despite being given the authority to assess a broad range of policy alternatives, SSA has, until very recently, focused its demonstration efforts mostly on a relatively narrow set of policy issues--those dealing with the provision of vocational rehabilitation and employment services. SSA's recently proposed or initiated demonstrations have begun to address a broader range of policy issues, such as provisions to reduce, rather than terminate, benefits based on earnings above a certain level. However, the agency has no systematic processes or mechanisms for ensuring that it is adequately identifying and prioritizing those issues that could best be addressed through use of its demonstration authority. For example, the agency has not developed a formal demonstration research agenda explicitly identifying its broad vision for using its DI demonstration authority and explaining how ongoing or proposed demonstration projects support achievement of the agency's goals and objectives. SSA's demonstration projects have had little impact on the agency's and the Congress' consideration of DI policy issues.
This is due, in part, to methodological limitations that have prevented SSA from producing project results that are useful for reliably assessing DI policy alternatives. In addition, SSA has not established a formal process for ensuring that its demonstration results are fully considered for potential policy implications. For example, SSA does not maintain a comprehensive record of its demonstration results that could be used to build a body of knowledge for informing policy decisions and planning future research. Furthermore, SSA's reporting of demonstration project results has been insufficient in ensuring that the Congress is fully apprised of these results and their policy implications.
On November 19, 2002, pursuant to ATSA, TSA began a 2-year pilot program at 5 airports using private screening companies to screen passengers and checked baggage. In 2004, at the completion of the pilot program, and in accordance with ATSA, TSA established the SPP, whereby any airport authority, whether involved in the pilot or not, could request a transition from federal screeners to private, contracted screeners. All of the 5 pilot airports that applied were approved to continue as part of the SPP, and since its establishment, 21 additional airport applications have been accepted by the SPP. In March 2012, TSA revised the SPP application to reflect requirements of the FAA Modernization Act, enacted in February 2012. Among other provisions, the act provides the following:

- Not later than 120 days after the date of receipt of an SPP application submitted by an airport operator, the TSA Administrator must approve or deny the application.

- The TSA Administrator shall approve an application if approval would not (1) compromise security, (2) detrimentally affect the cost-efficiency of the screening of passengers or property at the airport, or (3) detrimentally affect the effectiveness of the screening of passengers or property at the airport.

- Within 60 days of a denial, TSA must provide the airport operator, as well as the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Homeland Security of the House of Representatives, a written report that sets forth the findings that served as the basis of the denial, the results of any cost or security analysis conducted in considering the application, and recommendations on how the airport operator can address the reasons for denial.

All commercial airports are eligible to apply to the SPP. To apply, an airport operator must complete the SPP application and submit it to the SPP Program Management Office (PMO), as well as to the Federal Security Director (FSD) for its airport.
Figure 1 illustrates the SPP application process. Although TSA provides all airports with the opportunity to apply for participation in the SPP, authority to approve or deny an application rests with the TSA Administrator. According to TSA officials, in addition to the cost-efficiency and effectiveness considerations mandated by the FAA Modernization Act, many other factors are weighed in considering an airport’s application for SPP participation. For example, the potential impacts of any upcoming projects at the airport are considered. Once an airport is approved for SPP participation and a private screening contractor has been selected by TSA, the contract screening workforce assumes responsibility for screening passengers and their property and is required to adhere to the same security regulations, standard operating procedures, and other TSA security requirements followed by federal screeners at non-SPP airports. TSA has developed guidance to assist airport operators in completing their SPP applications, as we recommended in December 2012. Specifically, in December 2012, we reported that TSA had developed some resources to assist SPP applicants but had not provided guidance on its application and approval process to assist airports. As it was originally implemented in 2004, the SPP application process required only that an interested airport operator submit an application stating its intention to opt out of federal screening as well as its reasons for wanting to do so. In 2011, TSA revised its SPP application to reflect the “clear and substantial advantage” standard announced by the Administrator in January 2011. Specifically, TSA requested that the applicant explain how private screening at the airport would provide a clear and substantial advantage to TSA’s security operations.
At that time, TSA did not provide written guidance to airports to assist them in understanding what would constitute a “clear and substantial advantage to TSA security operations” or TSA’s basis for determining whether an airport had met that standard. As previously noted, in March 2012 TSA again revised the SPP application in accordance with provisions of the FAA Modernization Act, which became law in February 2012. Among other things, the revised application no longer included the “clear and substantial advantage” question, but instead included questions that requested applicants to discuss how participating in the SPP would not compromise security at the airport and to identify potential areas where cost savings or efficiencies may be realized. In December 2012, we reported that while TSA provided general instructions for filling out the SPP application as well as responses to frequently asked questions (FAQ), the agency had not issued guidance to assist airports with completing the revised application or explained to airports how it would evaluate applications given the changes brought about by the FAA Modernization Act. For example, neither the application instructions nor the FAQs addressed TSA’s SPP application evaluation process or its basis for determining whether an airport’s entry into the SPP would compromise security or affect cost-efficiency and effectiveness. Further, in December 2012, we found that airport operators who completed the applications generally stated that they faced difficulties in doing so and that additional guidance would have been helpful. For example, one operator stated that he needed cost information to help demonstrate that his airport’s participation in the SPP would not detrimentally affect the cost-efficiency of the screening of passengers or property at the airport and that he believed not presenting this information would be detrimental to his airport’s application. 
However, TSA officials at the time said that airports do not need to provide this information to TSA because, as part of the application evaluation process, TSA conducts a detailed cost analysis using historical cost data from SPP and non-SPP airports. The absence of cost and other information in an individual airport’s application, TSA officials noted, would not materially affect the TSA Administrator’s decision on an SPP application. Therefore, we reported in December 2012 that while TSA had approved all applications submitted since enactment of the FAA Modernization Act, it was hard to determine how many more airports, if any, would have applied to the program had TSA provided application guidance and information to improve transparency of the SPP application process. Specifically, we reported that in the absence of such application guidance and information, it may be difficult for airport officials to evaluate whether their airports are good candidates for the SPP or determine what criteria TSA uses to accept and approve airports’ SPP applications. We concluded that clear guidance for applying to the SPP could improve the transparency of the application process and help ensure that the existing application process is implemented in a consistent and uniform manner. Thus, we recommended that TSA develop guidance that clearly (1) states the criteria and process that TSA is using to assess whether participation in the SPP would compromise security or detrimentally affect the cost-efficiency or the effectiveness of the screening of passengers or property at the airport, (2) states how TSA will obtain and analyze cost information regarding screening cost-efficiency and effectiveness and the implications of not responding to the related application questions, and (3) provides specific examples of additional information airports should consider providing to TSA to help assess an airport’s suitability for the SPP.
TSA concurred with our recommendation and, in January 2014, we reported that TSA had taken actions to address it. Specifically, TSA updated its SPP website in December 2012 by providing (1) general guidance to assist airports with completing the SPP application and (2) a description of the criteria and process the agency will use to assess airports’ applications to participate in the SPP. While the guidance states that TSA has no specific expectations of the information an airport could provide that may be pertinent to its application, it provides some examples of information TSA has found useful and that airports could consider providing to TSA to help assess their suitability for the program. Further, the guidance, in combination with the description of the SPP application evaluation process, outlines how TSA plans to analyze and use cost information regarding screening cost-efficiency and effectiveness. The guidance also states that providing cost information is optional and that not providing such information will not affect the application decision. As we reported in January 2014, these actions address the intent of our recommendation. In our December 2012 report, we analyzed screener performance data for four measures and found that there were differences in performance between SPP and non-SPP airports, and those differences could not be exclusively attributed to the use of either federal or private screeners. The four measures we selected to compare screener performance at SPP and non-SPP airports were Threat Image Projection (TIP) detection rates; recertification pass rates; Aviation Security Assessment Program (ASAP) test results; and Presence, Advisement, Communication, and Execution (PACE) evaluation results (see table 1). 
For each of these four measures, we compared the performance of each of the 16 airports then participating in the SPP with the average performance for each airport’s category (X, I, II, III, or IV), as well as the national performance averages for all airports for fiscal years 2009 through 2011. As we reported in December 2012, on the basis of our analyses, we found that, generally, screeners at certain SPP airports performed slightly above the airport category and national averages for some measures, while others performed slightly below. For example, at SPP airports, screeners performed above their respective airport category averages for recertification pass rates in the majority of instances, while at the majority of SPP airports that took PACE evaluations in 2011, screeners performed below their airport category averages. For TIP detection rates, screeners at SPP airports performed above their respective airport category averages in about half of the instances. However, we also reported in December 2012 that the differences we observed in private and federal screener performance cannot be entirely attributed to the type of screeners at an airport, because, according to TSA officials and other subject matter experts, many factors, some of which cannot be controlled for, affect screener performance. These factors include, but are not limited to, checkpoint layout, airline schedules, seasonal changes in travel volume, and type of traveler. We also reported in December 2012 that TSA collects data on several other performance measures but, for various reasons, the data cannot be used to compare private and federal screener performance for the purposes of our review. For example, passenger wait time data could not be used because we found that TSA’s policy for collecting wait times changed during the time period of our analyses and that these data were not collected in a consistent manner across all airports. 
We also considered reviewing human capital measures such as attrition, absenteeism, and injury rates, but did not analyze these data because TSA’s Office of Human Capital does not collect these data for SPP airports. We reported that while the contractors collect and report this information to the SPP PMO, TSA does not validate the accuracy of the self-reported data nor does it require contractors to use the same human capital measures as TSA, and accordingly, differences may exist in how the metrics are defined and how the data are collected. Therefore, we found that TSA could not guarantee that a comparison of SPP and non-SPP airports on these human capital metrics would be a fair comparison. Moreover, in December 2012, we found that while TSA monitored screener performance at all airports, the agency did not monitor private screener performance separately from federal screener performance or conduct regular reviews comparing the performance of SPP and non-SPP airports. Beginning in April 2012, TSA introduced a new set of performance measures to assess screener performance at all airports (both SPP and non-SPP) in its Office of Security Operations Executive Scorecard (the Scorecard). Officials told us at the time of our December 2012 review that they provided the Scorecard to FSDs every 2 weeks to assist the FSDs with tracking performance against stated goals and with determining how performance of the airports under their jurisdiction compared with national averages. According to TSA, the 10 measures used in the Scorecard were selected based on input from FSDs and regional directors on the performance measures that most adequately reflected screener and airport performance. Performance measures in the Scorecard included the TIP detection rate and the number of negative and positive customer contacts made to the TSA Contact Center through e-mails or phone calls per 100,000 passengers screened, among others. 
We also reported in December 2012 that TSA had conducted or commissioned prior reports comparing the cost and performance of SPP and non-SPP airports. For example, in 2004 and 2007, TSA commissioned reports prepared by private consultants, while in 2008 the agency issued its own report comparing the performance of SPP and non-SPP airports. Generally, these reports found that SPP airports performed at a level equal to or better than non-SPP airports. However, TSA officials stated at the time that they did not plan to conduct similar analyses in the future; instead, they were using across-the-board mechanisms, such as the Scorecard, to assess the performance of both private and federal screeners across all commercial airports. We also found that, in addition to using the Scorecard, TSA conducted monthly contractor performance management reviews (PMR) at each SPP airport to assess the contractor’s performance against the standards set in each SPP contract. The PMRs included 10 performance measures, including some of the same measures included in the Scorecard, such as TIP detection rates and recertification pass rates, for which TSA establishes acceptable quality levels of performance. Failure to meet the acceptable quality levels of performance could result in corrective actions or termination of the contract. However, in December 2012, we found that the Scorecard and PMR did not provide a complete picture of screener performance at SPP airports because, while both mechanisms provided a snapshot of private screener performance at each SPP airport, this information was not summarized for the SPP as a whole or across years, which made it difficult to identify changes in performance. 
Further, neither the Scorecard nor the PMR provided information on performance in prior years or controlled for variables that TSA officials explained to us were important when comparing private and federal screener performance, such as the type of X-ray machine used for TIP detection rates. We concluded that monitoring private screener performance in comparison with federal screener performance was consistent with the statutory requirement that TSA enter into a contract with a private screening company only if the Administrator determines and certifies to Congress that the level of screening services and protection provided at an airport under a contract will be equal to or greater than the level that would be provided at the airport by federal government personnel. Therefore, we recommended that TSA develop a mechanism to regularly monitor private versus federal screener performance, which would better position the agency to know whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. TSA concurred with the recommendation and has taken actions to address it. Specifically, in January 2013, TSA issued its first SPP Annual Report. The report highlights the accomplishments of the SPP during fiscal year 2012 and provides an overview and discussion of private versus federal screener cost and performance. The report also describes the criteria TSA used to select certain performance measures and reasons why other measures were not selected for its comparison of private and federal screener performance. The report compares the performance of SPP airports with the average performance of airports in their respective category, as well as the average performance for all airports, for three performance measures: TIP detection rates, recertification pass rates, and PACE evaluation results. 
Further, in September 2013, the TSA Assistant Administrator for Security Operations signed an operations directive that provides internal guidance for preparing the SPP Annual Report, including the requirement that the SPP PMO must annually verify that the level of screening services and protection provided at SPP airports is equal to or greater than the level that would be provided by federal screeners. We believe that these actions address the intent of our recommendation and should better position TSA to determine whether the level of screening services and protection provided at SPP airports continues to be equal to or greater than the level provided at non-SPP airports. Further, these actions could also assist TSA in identifying performance changes that could lead to improvements in the program and inform decision making regarding potential expansion of the SPP. TSA has faced challenges in accurately comparing the costs of screening services at SPP and non-SPP airports. In 2007, TSA estimated that SPP airports would cost about 17 percent more to operate than airports using federal screeners. In our January 2009 report we noted strengths in the methodology’s design, but also identified seven limitations in TSA’s methodology that could affect the accuracy and reliability of cost comparisons, and its usefulness in informing future management decisions. We recommended that if TSA planned to rely on its comparison of cost and performance of SPP and non-SPP airports for future decision making, the agency should update its analysis to address the limitations we identified. TSA generally concurred with our findings and recommendation. In March 2011, TSA provided us with an update on the status of its efforts to address the limitations we cited in our report, as well as a revised comparison of costs for screening operations at SPP and non-SPP airports. 
This revised cost comparison generally addressed three of the seven limitations and provided TSA with a more reasonable basis for comparing the screening cost at SPP and non-SPP airports. In the update, TSA estimated that SPP airports would cost 3 percent more to operate in 2011 than airports using federal screeners. In March 2011, we found that TSA had also taken actions that partially addressed the four remaining limitations related to cost, but needed to take additional actions or provide additional documentation. In July 2014, TSA officials stated they are continuing to make additional changes to the cost estimation methodology and we are continuing to monitor TSA’s progress in this area through ongoing work. Chairman Hudson, Ranking Member Richmond, and members of the subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For questions about this statement, please contact Jennifer Grover at (202) 512-7141 or GroverJ@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement include Glenn Davis (Assistant Director), Charles Bausell, Kevin Heinz, Susan Hsu, Tyler Kent, Stanley Kostyla, and Thomas Lombardi. Key contributors for the previous work that this testimony is based on are listed in the products. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
DOD acquires many different products developed by commercial companies to enable the warfighter to protect our country. For the purposes of this report, we describe the three types of products that DOD acquires as the following: 1. Products that are commercially available, such as computers and software. DOD acquires these products from a variety of suppliers. 2. Commercial products that are further developed by companies for DOD use based on those currently available in the marketplace, such as adding avionics equipment to an unmanned aerial vehicle. These products are the focus of this report because they are often produced by companies that do not work regularly with DOD (non-traditional companies). 3. Products developed exclusively for military use (military-unique), such as tanks, fighter jets, and submarines with military capabilities that do not have a commercial application. Since 2011, we have found that when DOD acquires these products, it typically does so from companies such as The Boeing Company, Lockheed Martin Corporation, Northrop Grumman Corporation, Raytheon Company, General Dynamics Corporation, the General Electric Company, BAE Systems PLC, and Rockwell Collins, Inc. We consider these companies traditional companies because they have consistently worked with DOD to develop military-unique products. Before DOD acquires a product, it conducts market research to determine which of the three product types is most suitable for its particular need. DOD contracting officers then follow the Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) to procure the product. As shown in figure 1, the degree to which the product is commercially available and the risks associated with developing or producing the product influence the type of contract used and the contract’s terms and conditions. 
DOD may use commercial item acquisition procedures under FAR Part 12 to procure commercially available products and negotiated contract procedures under FAR Part 15 for military-unique products. Negotiated contracts for military-unique products generally contain more government-specific terms and conditions than commercial item acquisitions, in part because of the risk DOD takes to fund the development of these products. There can be a great deal of variation in the contract type for commercial products that are further developed for DOD’s use, as well as the number of contract terms and conditions that would apply. If DOD determines, through its commercial item determination process, that a product is a commercial item, it must acquire the product using FAR Part 12 procedures instead of FAR Part 15 procedures. Alternatively, DOD might find the desired product is not a commercial item, but nonetheless requires a relatively low risk development effort. In that case, DOD can negotiate a fixed-price-incentive contract under FAR Part 15. Congress also provides Other Transaction Authority (OTA) that allows DOD to enter into agreements with companies to complete research and development and prototype projects. OTAs are flexible agreements that typically include very few required terms and conditions and instead allow the parties to negotiate terms and conditions specific to the project. This flexibility can help agencies attract and partner with entities that have not done business with federal agencies due to concerns about standard government requirements. However, OTAs are not procurement contracts. DOD would still follow the FAR or another express authority to procure products successfully developed through an OTA. DOD has long played a large role in influencing innovation in the United States through its research and development investments. 
Among other things, DOD funds basic research performed by universities, as well as applied research and development performed by companies. Several studies and agency documents highlight how DOD’s funding has led to technological advances that enable the development of military products as well as commercial products. For example, the Defense Advanced Research Projects Agency supported the development of a communications network in the 1970s to facilitate information sharing. This network is considered the foundation of the modern internet. In the 1950s, the Air Force and the Defense Advanced Research Projects Agency funded research on speech recognition and artificial intelligence that enabled the development of the Cognitive Assistant that Learns and Organizes. In the 1990s and 2000s, commercial companies started leveraging this research to develop commercial technologies like Siri, the iPhone assistant. The Army has funded research that led to the development of powerful, lightweight lithium batteries, which are used in a variety of military products, such as night vision equipment. Today, lithium batteries are widely used in consumer electronics products and electric vehicles. Based on DOD and National Science Foundation research and development data, DOD’s influence on the type of technologies developed by U.S. companies began to diminish as companies significantly increased the amounts they invested in research and development. As shown in figure 2 below, DOD spent about $69 billion on research and development in 1987, while U.S. companies spent about $114 billion. In 2013, DOD spent about $75 billion, while companies spent about $341 billion. Between 1987 and 2013, companies’ investments grew by approximately 200 percent. This growth was fueled, in part, by significant investments in the information, pharmaceuticals, and computer and electronics sectors. 
In its 2016 Annual Industrial Capabilities Report to Congress, DOD acknowledged that the department benefits when there is an influx of new companies with new technologies competing for business opportunities. The report further stated that DOD must take advantage of the rapid evolution of emerging commercial technologies that, when integrated with military systems and novel concepts of operations, could be a source of battlefield advantage. In order to take greater advantage of newly developed technologies coming out of the commercial sector, the report acknowledged that the department should leverage innovation created by non-traditional companies. However, available industry data, as well as DOD studies, indicate that it may be difficult for the department to attract non-traditional companies to sell or further develop their products for DOD’s own use. As shown in table 1, one reason for this is that DOD is not a significant customer for top innovative companies. In 2016, for example, Apple earned $216 billion in sales, of which about $70,000 came from contracts directly with DOD. Amazon earned $136 billion in sales, with about $275,000 coming from contracts directly with DOD. Google and Facebook did not earn any revenue through direct sales to DOD. According to company representatives that we spoke to, DOD’s acquisition environment presents unique challenges to non-traditional companies that they otherwise do not experience in private industry. The acquisition environment is driven by laws that provide transparency and fairness, regulations that promote specific socio-economic goals, and DOD’s approach for implementing those laws and regulations. For the most part, the selected 12 companies we spoke with expressed frustration with the complexity of DOD’s acquisition process; the time, cost, and risk associated with competing for and executing a contract; and interacting with DOD’s contracting workforce. 
Table 2 highlights six key areas of DOD’s acquisition environment that create challenges for non-traditional companies, according to these companies. Together, these challenges create an environment wherein the selected non-traditional companies told us that their resources might be better spent pursuing commercial business where the cost to compete is lower and selection decisions are faster. Two of the 12 non-traditional companies in our review are currently not pursuing business with DOD as a result of these challenges. The non-traditional companies we spoke with identified several challenges related to the complexity of DOD’s acquisition process that made it difficult for them to do business with the department. One particular challenge is the difficulty companies had in identifying the right avenue to develop on-going or longer-term business arrangements with DOD. Several non-traditional company officials said that DOD acquisition program managers wanted to obtain their product but could not do so because DOD did not have a validated requirement for it. As a result, these non-traditional companies had to find alternative paths to sell their products to DOD. Some companies spent several years demonstrating their products to other organizations within DOD before establishing a viable business arrangement with one of these organizations. In some cases, multiple DOD decision-makers throughout the department weighed in, some of whom had no purchasing authority. This slowed down the process even more. Company officials said that in the commercial market they are used to communicating directly with people who have the authority to (1) discuss their needs, (2) gauge whether the company’s product could satisfy those needs, and (3) award a contract within months. Non-traditional companies we spoke to also raised concerns about the lengthy process for obtaining security clearances. 
Some company officials told us that DOD required their company representatives to obtain security clearances prior to DOD discussions on technology needs. Company officials also noted that the process for obtaining personnel security clearances, which is shared between DOD and the Office of Personnel Management, can take over a year to complete. One company even said it took 5 years to obtain a facility clearance from DOD. Software companies identified the time and cost associated with obtaining multiple software certifications, which they said are required by DOD prior to competing for business, as an additional challenge they face when entering the defense market. This includes providing documentation to obtain the Federal Risk and Authorization Management Program (FedRAMP) certification, which is managed by the General Services Administration, and the FedRAMP Plus certification, which is managed by the Defense Information Systems Agency. These certifications provide, respectively, a government-wide and a DOD-specific standardized approach to security assessment and authorization for cloud products and services. Officials from one large non-traditional software company said it has spent at least $40 million so far to obtain FedRAMP certifications for 50 products and it has taken on average 18 months to obtain the certifications. The company has also been working for almost 2 years to obtain DOD’s FedRAMP Plus certification for these products. A company official stated that DOD continues to add more requirements that sometimes conflict with the FedRAMP requirements or, at a minimum, add additional controls and create ambiguity. 
The official also said that, “as a company that provides services to numerous customers, it is unmanageable to comply with different rules and requirements for different agencies.” An official from one small non-traditional company we spoke to stated that they have invested over $100,000 and well over a year in pursuing FedRAMP certification even though there is no guarantee that they will win a contract. In addition, he estimated that once certified, monthly costs to maintain the certification would range from $10,000 to $20,000. The official also described the certification process as “a series of checklists that do not necessarily make products safer or more secure.” However, he said that they must obtain these software certifications because the federal government will not talk to companies without them. For example, the company has had to answer and provide data or documentation for a standard list of nearly 100 questions that the General Services Administration developed for companies to obtain FedRAMP certification. Some of the small non-traditional companies we spoke to expressed frustrations with DOD’s funding process, including the effect budgetary delays from continuing resolutions and sequestration have had on DOD’s ability to award contracts. One official said that doing business with any company or organization that has an unstable budget environment creates additional risk and could cause them to go out of business or lose investors. Their experiences with DOD, in some cases, drove them away from the defense market and back to pursuing business with more financially stable entities. It takes 2 years for major acquisition programs to receive funding through DOD’s budget process, which dates back to the 1960s. Adding to the challenges of this process, as shown in figure 3, DOD has started each fiscal year since 2010 operating under a continuing resolution. 
In general, continuing resolutions prohibit new activities and projects for which appropriations, funds, or other authority was not available in the previous fiscal year. As an example of the impact this environment can have on DOD’s ability to contract with a non-traditional company, after demonstrating its product for nearly 4 years, one company that produces augmented reality products was provided funding by the Army to support additional engineering and development activities. However, the Army program subsequently lost funding due to sequestration. As a result of these difficulties experienced in the past, the company is no longer actively pursuing business in the defense market, according to a company representative. Non-traditional companies that we spoke to stated that DOD’s contracting timelines are significantly longer than what they experience in the commercial market, and there is a potential for a bid protest when competing for DOD work that could further delay contract award. One official said that their investors would prefer that they pursue business in the commercial market where contracts are awarded more quickly. DOD’s contracting process can be very lengthy, depending on the dollar value of the contract. For example, in January 2017, the Army Contracting Command established standard contracting timelines that ranged from 55 days (about 2 months) for contracts valued at less than $25,000 to 700 days (about 24 months) for contracts valued at over $1 billion. In general, the timelines, as shown in table 3 below, increase as the dollar value of the contracts increases, and competitively awarded contracts generally take longer to award than non-competitive contracts. Data collected by the Air Force show that in fiscal year 2016 it took an average of nearly 13 months from the time a request for proposal was issued until an award decision was made for 52 sole source contracts valued between $50 million and $500 million. 
Figure 4 shows the activities that contributed to this timeframe. The Air Force study found that companies spent on average nearly 5 months putting together their initial proposal and another 1.5 months revising the proposal based on DOD feedback that the proposal did not meet certain DFARS requirements. For example, a contractor may have received subcontractor proposals and included them in its proposal. However, the contractor may not have completed the required commerciality and price reasonableness analysis of the subcontractor proposals, which should have been reflected in the initial proposal to the Air Force. According to an Air Force official, there was a significant amount of back and forth between the Air Force and companies to make sure proposals adequately responded to the requirements in a solicitation. Once an adequate proposal was received, the Air Force, Defense Contract Management Agency, and Defense Contract Audit Agency then reviewed and evaluated the technical and financial aspects of proposals over the next 4 months. The Air Force spent the final months negotiating with companies and awarding a contract. Non-traditional company officials that we spoke to said they are accustomed to contracting timeframes that are much shorter, ranging from a few weeks up to about 6 months when working with commercial companies. In addition, they said that the time and resources they invest in developing a proposal for commercial companies is significantly less than for a DOD proposal. For example, one of the 12 companies we spoke to conducted a cost comparison study and found that it took 25 full-time employees, 12 months, and millions of dollars to prepare a proposal for a DOD contract. In contrast, the study found that the company used 3 part-time employees, 2 months, and only thousands of dollars to prepare a proposal for a similar commercial contract. 
A company official explained that a lot of time and resources were spent developing detailed schedules that outline the engineering resources over the life of a project so that DOD could evaluate whether the company had the appropriate resources to complete the work. The official said the company had no plans to monitor how it performs against the detailed schedules and only prepared them for the purpose of submitting a DOD proposal. He said that they were not required to provide this type of detailed information for commercial proposals. Concerns raised by the non-traditional companies we spoke to regarding the length of time it could take to win a DOD contract were also raised by three traditional companies. One of the companies shared a study that it conducted in 2016 that showed that it took on average over 12 months from the time the Air Force issued a request for proposal until a contract was awarded for 60 proposals the company submitted that were valued between $50 million and $500 million. Company officials also provided data that showed one of its large business units had experienced contract cycle times as long as 3 to 4 years from the time DOD released a request for proposal until an award decision was made. The Director of Defense Pricing noted that DOD’s contracting process typically takes longer than the commercial industry process because DOD has to be transparent in its dealings, ensure competition wherever possible, and protect the interests of the taxpayers. Most of the 12 non-traditional companies that we spoke to said they had commercial products that the department was clearly interested in obtaining. However, after discussions with DOD, they chose not to develop these products for DOD’s use because doing so might trigger a large number of contract terms and conditions that would be expensive to implement. 
Like other federal agencies, DOD includes standard terms and conditions in its contracts that are unique to the government and that some companies we spoke to believe would add significant cost or add little value to the transaction. For example, under the FAR, when DOD awards certain cost-type contracts, companies are required to establish a government-unique cost accounting system to disclose actual cost accounting practices and to follow disclosed and established cost accounting practices consistently. DOD and other federal entities also require companies to comply with socio-economic obligations, such as those for equal employment opportunity, small business set-asides, labor standards for government contractors, and a drug-free workplace. They could also require companies to use American-made materials in their products, provide whistle-blower protections, safeguard their information systems, and comply with regulations related to cyber incident reporting. One non-traditional company conducted a study that determined it would take at least 15-18 months and cost millions of dollars to establish a government-unique cost accounting system. According to a company official, accepting DOD cost-type contracts with this requirement would mean that their engineers would have to log hours specific to the projects they are working on at any given time. The official explained that this additional step would not only add to their workloads, but create inefficiencies that might inhibit communication and undermine innovation, which he said, “is the very ethos of this company.” As a result, the official stated that the company has decided not to compete for DOD cost-type contracts that require a government-unique cost accounting system. The company official also said that the company’s contracts and agreements with DOD and another government agency have included anywhere from 27 to 69 terms and conditions. 
While this is significantly fewer than the roughly 200 terms and conditions the company official estimated would have been included in a cost-type contract, it is much more than the 12 that are typically included in contracts with commercial companies. Company officials pointed out that each additional clause adds cost and burden to the company, and they are concerned that they could incur tremendous liability if the prescribed clauses are not strictly followed. Traditional companies we spoke to confirmed the difficulties, as well as the costs, associated with implementing government-unique contract clauses. For example, one traditional company we spoke to stated that it must expend resources to track changes to the FAR in order to stay in compliance with government contracting regulations. Company officials said that they review an average of 100 new regulatory actions for applicability each month. Further, they typically direct all clauses to individual suppliers because it is difficult for the prime contractor to determine which ones would apply. We found, for example, that legislation regarding whistle-blower protections has changed several times since 2009 and that different rules apply depending on which federal agency awarded the contract, whether the agency was participating in a whistle-blower pilot program, or whether contracts were funded by the American Recovery and Reinvestment Act. This example demonstrates how companies with multiple contracts may have to comply with different whistle-blower protections simultaneously. In addition, the implementation of the regulations themselves is costly. For example, while officials from this same company acknowledged the need for cyber security, they estimated that complying with new DOD cyber security regulations would cost the company $100 million. They stated that these types of requirements contribute to the 12 to 14 percent price differential between their commercial and DOD products. 
Officials from another traditional company that we spoke to said there are also costs associated with ensuring that its suppliers comply with these clauses, and these costs contribute to the company’s lower rate of return on its defense business (7 to 10 percent profit) versus its commercial business (15 to 18 percent profit). These officials also stated that one of its suppliers turned down a $20 million performance-based logistics contract because it could no longer effectively manage the large amount of federal requirements included in contract clauses. The traditional companies we spoke with stated that in most cases they separate their commercial and defense business units to ensure that overhead costs that support their DOD business do not extend to their commercial business and make their products less competitive in the commercial space. For instance, officials from one traditional company stated that it has taken great care to keep a primarily commercial business unit separate and apart from its primarily DOD business unit, including supply chain, sourcing, engineering, sales, and related support functions. The Director of Defense Pricing indicated that DOD has heard similar concerns about the cost of compliance raised by traditional companies and said that the department has been trying to substantiate data with the companies for several years, in order to determine what actions may be necessary to address these concerns. Non-traditional companies we spoke to raised concerns about the possibility of losing their intellectual property rights when further developing their products for DOD’s use. According to an Air Force handbook related to the acquisition of technical data and software, DOD seeks access to technical data and computer software rights to enhance competition and sustain each system and its subsystems over their life cycle. Examples of technical data include product specifications, engineering drawings, and operating or maintenance manuals. 
Examples of computer software include source code, algorithms, and associated software design documentation. According to DOD acquisition policy, DOD ordinarily acquires only the technical data, computer software, and the associated data rights essential to meeting its needs. For example, in the case of noncommercial items:

If the contractor developed an item or computer software exclusively with government funds, the contractor retains the copyright over the technical data pertaining to the item or the computer software, but the government acquires “unlimited rights” to use the data or software without restriction.

If the contractor developed an item or computer software with mixed funding, then the government normally acquires “government purpose rights.”

If the contractor developed the item or computer software completely at private expense, then DOD usually acquires only “limited rights” (for data) or “restricted rights” (for software).

Both non-traditional and traditional companies we included in this review consider intellectual property, including technical data and software rights, to be essential to a company’s survival. As one official we spoke with explained, intellectual property is the “life-blood” of their company. It is what distinguishes a company in the marketplace and is an integral part of the value placed on a company. Companies try to protect their intellectual property so that others do not copy it, and for that reason many of the companies we spoke to believe it is too risky to further develop their commercial products for DOD’s needs. 
Based on our review of documents, we found that in one recent court case, the Court of Federal Claims awarded a company expectation damages for lost profits after the government “repeatedly breached the Cooperative Research and Development Agreement by releasing the plaintiff’s proprietary information to unauthorized recipients, including its competitors.” Non-traditional companies we spoke to prefer to sell their commercial products to DOD so there is no negotiation between them and DOD as to the rights DOD will take in technical data or software. Even then, problems still occur. For example, an official from a non-traditional software company said his staff spends a great deal of time educating contracting officers on DOD’s software rights under the company’s software license agreement. He said that DOD acquisition officials are “stuck in research and development mode” and believe DOD should have greater software rights even though DOD did not contribute any money to the development of the software. Another non-traditional company official said that DOD shared a demonstration copy of his company’s software with the prime contractor who then tried to integrate the software into its own system. Although the prime contractor was unsuccessful in this endeavor, the official said that the prime contractor was competing against the company for DOD’s business. This official said the company is no longer doing business with DOD. Traditional companies we spoke to confirmed the non-traditional companies’ concerns. One official at a traditional company said that DOD is putting increased pressure on companies to grant unlimited technical data and software rights or government purpose rights rather than limited or restricted rights. 
For example, in a 2013 Army request for proposals, the program was pushing for an open systems architecture approach, and companies were told that one evaluation criterion would be the extent of the data rights they were willing to grant DOD, with more rights evaluated favorably. This was problematic for the company because the intellectual property used to build the components was developed at private expense. Officials from another traditional company said that a prime contractor it was working with expected the company to offer unlimited rights to its software to increase their chances of winning a contract. In this example, the agency’s request for proposals allowed offerors to propose their own technical solutions, but it also provided that, as part of the technical evaluation, offerors would be assessed a weakness where data rights assertions did not allow the agency to procure, maintain, and modify the hardware and software in a competitive environment. The company understood that to be competitive for award with this evaluation scheme, it had to provide at least government purpose rights to its software and technical data, as well as provide the source code for its software, regardless of whether they were commercial or had been developed at private expense. According to company officials, they were willing to negotiate with the prime contractor to some extent in order to help it win the contract, but the company was not going to offer government purpose or unlimited rights in commercial data or software that it developed at private expense, or turn over software source code. The prime contractor told the company that its unwillingness to turn over the information was hurting its proposal. In the end, the prime contractor was not selected for this contract. 
Non-traditional companies that we spoke to generally described DOD’s contracting workforce as inexperienced, especially when procuring software services, such as access to the cloud, and performing market research to determine the types of products that could meet DOD’s needs and to make commercial item and price reasonableness determinations. Non-traditional and traditional companies that we spoke to provided several examples of their interactions with DOD’s contracting workforce. For example, officials from two non-traditional software companies said that DOD contracting officers they interacted with were inexperienced in how to buy cloud services. One company official said that contracting officers tried to use a firm-fixed-price contract to buy cloud services. While it may make sense to use a fixed-price type contract for acquiring hardware, such as laptops and printers, the official said that it is much more difficult to use a fixed-price contract for cloud services. Commercial cloud service providers price their services based on the amount of services a customer uses every month, which could vary based on changing needs. In addition, in response to a DOD request for information, officials from a non-traditional company that provides data integration and analytics products stated that DOD issued a request for proposals to develop a military-unique solution for a requirement that could be met with existing commercial products. Based on our review of documents, we found that the company eventually protested DOD’s procurement on these grounds and the U.S. Court of Federal Claims agreed with the company. The court issued a permanent injunction ordering the DOD component to satisfy the requirements of 10 U.S.C. § 2377, which requires DOD to determine whether commercial items exist that can satisfy its needs, in whole or in part. 
Company officials attributed DOD’s initial decision to seek a military-unique solution, in part, to an inexperienced and risk-averse workforce. Traditional companies also pointed out other areas of market research where the contracting workforce is inexperienced, which could contribute to the lengthy processes that non-traditional companies face. All three traditional defense companies we met with stated that DOD contracting officials were requesting significantly more documentation than in the past to make determinations of commerciality and price reasonableness, partly because some contracting officials are inexperienced in these processes. One company, for example, spent an average of 220 hours (28 days) in 2008 to complete commercial item determination documentation for components on one military system, while in 2014 the average number of hours increased to 1,105 hours (138 days) for the same system. The companies also stated that, at times, DOD contracting officers are not following the FAR for establishing price reasonableness by first performing market research, such as comparing offers to published market prices or conducting an independent government cost estimate, before asking the company for additional cost data. Some company officials stated that they have spent considerable time and money tracking down the information DOD has requested. Some company officials said DOD’s desire to obtain data related to the costs the company incurred to develop the product, rather than the market price customers are paying for the product, has also had an impact on companies’ suppliers, with some of them refusing to provide this information to DOD and others refusing to do business with DOD anymore. In a prior report related to market research, in which we examined 28 contracts, we found that the market research conducted by selected federal agencies, including DOD, varied. 
The agencies tended to conduct more robust market research for the 12 higher-dollar contracts than for the 16 lower-dollar contracts we reviewed. We recommended that DOD clearly document the basic elements of the market research it conducted. Overall, DOD and military service senior acquisition officials were aware of these concerns and, in the case of market research, are interacting with commercial companies to identify ways DOD can improve its capabilities. One senior contracting official noted that very few people outside the companies that provide cloud, analytics, and certain types of software understand these products. Several acquisition and contracting officials said that many of the concerns raised by companies may be due, in part, to the large number of new contracting officers DOD has hired since 2008. Statistics collected by DOD’s Human Capital Initiatives Office show that the department increased the size of the contracting workforce by almost 5,000 positions over the past 8 years, from 25,680 personnel at the end of fiscal year 2008 to 30,669 at the end of fiscal year 2016, a 19 percent increase. As shown in figure 5, the influx of new personnel has helped DOD address concerns about having a disproportionate number of staff that were ready to retire compared to new staff that were being hired and trained to take their place. However, with the influx of new staff comes a degree of inexperience. DOD has established curriculum and experience requirements that contracting officers must meet in order to advance in their careers. For new staff, this includes classes on contract planning, execution, management, and pricing. Following a proficiency assessment in 2010, however, contracting leaders thought it was necessary for the Defense Acquisition University to add a 4-week research-intensive fundamentals course that provides new hires practical experience using the FAR and DFARS. 
Contracting leaders emphasized that it is important for contracting officers to master not only the “what,” but also the “how”: being able to use critical thinking and sound judgment when applying knowledge. Congress and DOD recognize that changes to laws, regulations, and DOD’s implementation practices are needed to address the challenges cited by companies and are taking steps to address them. The fiscal years 2016 and 2017 National Defense Authorization Acts, for example, contain several provisions aimed at eliminating some contract terms and conditions that are burdensome to non-traditional companies. DOD is in the process of implementing some of the provisions. DOD has also taken actions to attract non-traditional companies by establishing industry outreach offices in high-tech areas across the country and piloting new, streamlined ways of doing business with these companies within a desired completion period of 60 days. Between April 2015 and March 2017, the offices facilitated 25 arrangements using OTAs between companies and DOD organizations worth $48.4 million. The military services are also examining ways to reduce the time it takes to award contracts. Because these initiatives are just getting underway, it is too soon to determine whether they will address the challenges faced by non-traditional companies. The Fiscal Year 2016 and 2017 National Defense Authorization Acts include provisions for DOD to address aspects of its acquisition environment that create challenges for companies. These include addressing some of the complexities associated with DOD’s acquisition processes; eliminating or reducing the burden of some contract terms and conditions; clarifying intellectual property rights policies; and addressing contracting workforce concerns. Table 4 highlights some of the new requirements. DOD has started to implement some of these legislative provisions. 
For example, in June 2016, the Defense Contract Management Agency established a Commercial Item Group to assist DOD contracting officers with complex determinations. The group, which had about 53 personnel in January 2017, also provides training on assessing whether a product qualifies as a commercial item and offers assistance to DOD contracting officers for conducting market research and analyzing the reasonableness of a contractor’s prices. According to Defense Contract Management Agency statistics, from October 1, 2016, to January 6, 2017, the Commercial Item Group was averaging 7 days to deliver a recommendation of commerciality. Of the items the group reviewed, it recommended 93 percent to be commercial. DOD is also working with several large commercial companies to enter into advance agreements that DOD officials believe will significantly reduce the time associated with determining the commerciality of an item and the fair and reasonable price of such items. In addition, DOD established an 18-person advisory panel of current and former DOD executives, referred to as the 809 Panel, to identify opportunities to streamline the acquisition process. The National Defense Authorization Act identified two duties for the panel. First, the panel is expected to review the acquisition regulations applicable to DOD with a view toward streamlining and improving the efficiency and effectiveness of the defense acquisition process and maintaining the defense technology advantage. Second, the panel is expected to make any recommendation for the amendment or repeal of regulations it considers necessary to:

Establish and administer appropriate buyer and seller relationships in the procurement system.

Improve the functioning of the acquisition system.

Ensure the continuing financial and ethical integrity of defense procurement programs.

Protect the best interests of DOD.

Eliminate any regulations that are unnecessary for the purposes described. 
According to the panel’s May 2017 interim report, the panel has established nine working groups that are focused on a variety of topics, including barriers to entry in the DOD market, cost accounting standards, budget issues, commercial buying practices, and streamlining regulations. The panel’s Executive Director stated that interim reports with recommended legislative changes will be issued by each working group as it completes its work. The panel will then issue a final report in 2018. DOD has established offices in high-tech areas of the country to build relationships and identify promising technologies developed by commercial technology providers or non-traditional companies and to help facilitate business agreements between these companies and DOD organizations. Known as Defense Innovation Unit Experimental (DIUx), the new outreach effort is part of DOD’s Defense Innovation Initiative that is focused on pursuing innovative ways to sustain and advance emerging technology capabilities. The initial DIUx office was announced in April 2015 and was opened in Silicon Valley in August 2015. For the first year, the office had no funding or authority to award contracts according to the director of DIUx at that time. Instead, office staff met with these companies to learn about their products and then helped facilitate meetings between the companies and interested DOD organizations. The former director stated that commercial companies they worked with became frustrated that DIUx could not help them overcome challenges with identifying and obtaining DOD business. In May 2016, the former Secretary of Defense appointed new leadership for DIUx and allocated funding and delegated contract award authority to the organization. DOD is now referring to the new effort as DIUx 2.0. The revamped office reports directly to the Office of the Secretary of Defense and was provided $20 million in research, development, test and evaluation funding. 
The office is using OTAs to enter into agreements with industry for prototyping projects. The office solicits proposals through an online Commercial Solutions Opening, which is similar to a broad agency announcement, and then, with the assistance of contracting experts from the Army Contracting Command-New Jersey, awards OTAs to prototype commercial technology. The statutory authority behind the Commercial Solutions Opening process, which is illustrated in figure 6 below, allows DIUx to mirror the contracting practices that commercial companies normally use. This is intended to enable DIUx to design projects and negotiate payment milestones, terms and conditions, and intellectual property rights within a desired completion period of 60 days. According to the DIUx Commercial Solutions Opening How-To Guide, the process begins with DIUx posting technology areas of interest on its website. Interested companies submit a short brief online describing the proposed technology and information about the company. DIUx evaluates the briefs and, if it is interested in learning more, may invite companies to pitch their products in person and then submit a full proposal. After a merit-based evaluation, DIUx officials select proposals to pursue, negotiate the terms and conditions of proposed projects, and, through the Army Contracting Command-New Jersey, award OTAs. DIUx generally uses a combination of price analysis methods, such as a company price list or previous government or commercial contract prices, to determine whether a price is acceptable. Due to the volume of companies submitting proposals, DIUx has decided to prioritize its selections to the following five research and development areas: artificial intelligence and machine learning, autonomy, human systems, information technology, and space. As of March 31, 2017, DIUx had awarded 25 agreements for a total value of $48.4 million. 
According to a DIUx official, prior to the fiscal year 2017 continuing resolution, DIUx awarded agreements in an average of 59 days. Due to funding constraints during the continuing resolution, DIUx’s average increased to 121 days. DIUx is now working to reduce that average back to 60 days. The director stated that DIUx’s most recent agreement, which started while under the continuing resolution, was awarded in 75 days. Projects funded include high-speed unmanned aircraft, network security detection, automated text analysis, and communication devices. For example, DIUx partnered with the Air National Guard to award an agreement with a non-traditional company to adapt a wireless, hands- and ears-free, commercially available device as a communicator for warfighters. The Air National Guard was looking for a solution to replace existing communication tools, which add weight to a warfighter’s load, occupy their hands, and restrict visibility. The military services have also initiated efforts to streamline or standardize their contracting processes, addressing one of the major challenges identified by non-traditional companies. For example, the Naval Sea Systems Command conducts analyses of its award cycle times and has undertaken initiatives to streamline them. In addition, the Air Force has focused its efforts on reducing the time it takes to award contracts for sole-source acquisitions valued between $50 million and $500 million. Between fiscal years 2014 and 2016, it reduced the time needed to award a contract from 16.1 months to 12.8 months, or by 20 percent. 
According to an Air Force official, the Air Force initiated several efforts to improve the contract cycle times, including (1) early coordination between companies and Air Force contracting officials, the Defense Contract Management Agency, and Defense Contract Audit Agency to help companies improve their proposals and reduce the amount of re-writing; and (2) an emphasis on training engineers who help evaluate the technical details of proposals. Previously, the Air Force found that the engineers had technical knowledge about technologies or products, but were not as knowledgeable or familiar with how to document their evaluations to aid contracting officers during the contract negotiation process. Air Force officials stated that the Air Force’s goal is to further reduce the contracting timeframe to less than 11 months in fiscal year 2017 by ensuring all contracting offices are following best practices and collecting additional lessons learned. In October 2016, the Assistant Secretary of the Army for Acquisition, Logistics and Technology issued a memorandum directing improvements to the Army’s contracting processes by eliminating redundant layers of management and oversight, improving accountability and transparency, and improving the contracting workforce and workload. For example, the memorandum stated that there are over 350 documents that potentially need to be included in a contract file, many of which are redundant. This inefficiency results in time spent on non-value-added activities instead of negotiating good business deals and conducting adequate post-award administration. The memorandum also states that more robust source selection guidance, sharing of best practices, and enhanced training may help drive more streamlined practices, reduced timelines, and better outcomes. 
The Army Contracting Command expects contracting officers and contract specialists to track their ability to meet various acquisition milestones and to communicate closely and often with their customers when establishing and adjusting milestones. According to an Army Contracting Command official, Army leadership plans to identify trends and areas of opportunity where contracting activities can be streamlined. One effort already directed by the Assistant Secretary of the Army for Acquisition, Logistics and Technology is to improve the customer’s ability to prepare a complete contract request package because inadequate or missing contract request documents significantly impact the contracting process, causing rework and delays in contract award timelines. In June 2017, we issued a report that examined the Army’s contracting operations and found that top Army leaders focus their contracting reviews on efforts to obligate funds before they expire, competition rates, and small business participation. Leaders have not consistently evaluated the efficiency and effectiveness of the Army’s contracting operations. While Army leaders, including successive Assistant Secretaries of the Army for Acquisition, Logistics, and Technology, have acknowledged a need for improvements in contracting since 2012 and have taken positive intermittent steps to do so, the leaders did not sustain the efforts or—alternatively—provide a rationale for not doing so. Among other things, we recommended that the Secretary of the Army establish and implement metrics to evaluate the timeliness of contract awards and to document the rationale for key decisions. DOD concurred with the recommendations. We are not making recommendations in this report. We sent a draft of this report to DOD for advance review and comment. In response, DOD informed us that it had no comments on the report. 
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Air Force, Army, and Navy; the 15 companies we selected to prepare this report; and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. This report describes (1) key challenges identified by non-traditional companies when trying to do business with the Department of Defense (DOD) and (2) actions DOD is taking to address them. For the purposes of this report, we define non-traditional companies as those that do not typically sell or develop products for DOD. We analyzed DOD and industry research and development spending from 1987 to 2013 to describe changes in spending over time. We obtained data on DOD research and development outlays from the White House Office of Management and Budget Summary of Outlays for the Conduct of Research and Development: 1949-2017. We obtained information on industry research and development spending from the National Science Foundation Survey of Industrial Research and Development and the National Science Foundation and U.S. Census Bureau Business Research and Development and Innovation Survey. To adjust for inflation, we converted then-year dollars to 2017 dollars using the research and development deflator in the National Defense Budget Estimates for 2017 (Green Book). Private sector investment could include funding from DOD. To identify the challenges that non-traditional companies face when trying to do business with DOD, we first conducted a literature review. 
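The inflation adjustment described above — converting then-year dollars to constant 2017 dollars with an annual deflator — can be sketched in a few lines. The deflator values below are illustrative placeholders, not the actual Green Book research and development deflators.

```python
# Sketch of the then-year to constant-2017-dollar conversion described above.
# Deflator values are illustrative placeholders, NOT the actual National
# Defense Budget Estimates (Green Book) R&D deflators.
RD_DEFLATOR_2017_BASE = {
    1987: 0.452,  # hypothetical price level relative to 2017 (= 1.000)
    2000: 0.681,  # hypothetical
    2013: 0.941,  # hypothetical
    2017: 1.000,
}

def to_constant_2017_dollars(then_year_amount: float, year: int) -> float:
    """Convert a then-year dollar amount to constant 2017 dollars by
    dividing by the deflator for that year (2017 base = 1.0)."""
    return then_year_amount / RD_DEFLATOR_2017_BASE[year]

# $100 million in 1987 then-year dollars, expressed in 2017 dollars
print(round(to_constant_2017_dollars(100.0, 1987), 1))
```

Because the base-year deflator is 1.0, dividing each year's spending by its deflator expresses all years in the same purchasing-power units, which is what allows the 1987-2013 spending trends to be compared directly.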
Our literature review included previous reports from GAO, think tanks such as the Brookings Institution and the RAND Corporation, and the Defense Business Board, as well as testimonies delivered at congressional hearings. We then reviewed the documentation to identify challenges and to help inform interview questions posed to company representatives from 12 non-traditional companies to learn about their experiences in pursuing DOD business. We selected these 12 companies based on several factors, including the extent to which the company had conducted business with DOD, company size, and the types of technologies the company had developed. Specifically, we selected companies that had few or no contracts with DOD from fiscal year 2010 through 2016 based on the number of contracts awarded to the company in the Federal Procurement Data System-Next Generation system. We selected large companies that are on the Fortune 500 list and smaller companies that are not on that list to ensure that we considered various perspectives on the challenges faced. We also considered whether companies were developing products in key technology areas identified in the Defense Innovation Initiative, including data analytics, cybersecurity, autonomous vehicles, and space launch vehicles. We used industry reports and information from company websites to identify companies developing relevant technologies. To aid in our company selection, we conducted interviews with technology and industry experts at various think tanks and venture capital firms, along with DOD officials. We also reviewed industry lists of top small innovative companies, such as Fortune's "These Big Data Companies are Ones to Watch" and Fast Company's "The World's Top 10 Most Innovative Companies in Robotics." As a result of our research and these discussions, we selected 12 innovative companies to include in our review. Five companies asked to remain anonymous.
The other companies include:
Cylance, Inc., a cybersecurity company
DreamHammer Products LLC, a drone management platform
MotionDSP, Inc., a video software company
Liquid Robotics, Inc., an autonomous vehicle company
Amazon Web Services, Inc., a data analytics company
Microsoft Corp., a data analytics company
Palantir Technologies, a data analytics company
With each of the companies, we interviewed senior representatives who were knowledgeable about their business in defense and commercial markets. We asked company officials to discuss the similarities and differences in selling their products to DOD and commercial customers. Companies provided specific examples of contracts or experiences they have had with DOD and commercial companies to illustrate similarities, differences, and challenges. Where possible, companies provided relevant documentation to support their examples. We analyzed the interview responses and supporting documentation and identified over 20 challenges. We then grouped these into six overarching challenges that nearly all of the non-traditional companies said they faced when trying to do business with DOD. The statements expressed by participants represent the perspective of these companies and cannot be generalized because we used a non-probability method to select companies for the sample. We also obtained information on the challenges by reviewing DOD studies, as well as through discussions with senior representatives from three traditional companies (The Boeing Company, Honeywell International, Inc., and another company that asked not to be identified). The traditional companies provided quantitative information about the challenges and also identified potential challenges that non-traditional companies could face based on their own experiences. We provided company representatives an opportunity to review a summary of the challenges section of this report and incorporated their comments, as appropriate.
In addition, we spoke to knowledgeable acquisition and contracting officials within the Office of the Secretary of Defense and the military services. Among others, DOD officials included two senior acquisition executives, the Director of Defense Procurement and Acquisition Policy, the Director of Defense Pricing, acquisition officials from nine program executive offices, contracting officials from six program executive offices, and officials from the Office of Small Business Programs, the Strategic Capabilities Office, the Defense Contract Management Agency Cost and Pricing Center, and the Defense Innovation Unit Experimental (DIUx). To determine DOD's efforts to address the challenges described by non-traditional companies, we first examined the National Defense Authorization Acts for Fiscal Years 2016 and 2017 and identified several provisions that may address the six overarching challenges identified by non-traditional companies. We obtained status documentation or updates from various DOD organizations related to their efforts to implement the provisions. Second, we collected and reviewed documentation on new DOD-wide efforts aimed at addressing specific challenges, including DIUx. This organization is DOD's primary effort to identify promising technologies developed by non-traditional companies and then to help facilitate business deals between those companies and DOD organizations. Third, we met with senior DOD personnel and acquisition professionals from across the three military service departments and the Office of the Secretary of Defense to identify service-specific initiatives focused on addressing some of the cited challenges and collected pertinent documentation on these efforts. We conducted this performance audit from June 2015 to July 2017 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Cheryl Andrew (Assistant Director), Sameena Ismailjee (Analyst in Charge), Emily Bond, Kurt Gurka, Joe Hackett, Jeff Hartnett, Alexandra Stone, Michelle Vaughn, Nate Vaught, and Robin Wilson made key contributions to this report.

Private industry investments in research and development have significantly outpaced DOD's own spending in this area over the past three decades. Recognizing that this situation is likely to continue, Congress has passed legislation aimed at enabling DOD to leverage technologies made by companies that do not typically do business with it, referred to in this report as non-traditional companies. A Senate report included a provision for GAO to review DOD efforts to attract non-traditional companies that could potentially develop their commercial products for DOD's use. This report describes (1) key challenges identified by non-traditional companies when trying to do business with DOD and (2) actions DOD is taking to address them. To perform this work, GAO conducted interviews with 12 non-traditional companies. Companies were selected based on size, the amount of business they had with DOD, and the type of technology they produce. GAO discussed the nature of the challenges identified with the companies. In addition, GAO obtained information from DOD on steps it is taking to mitigate identified challenges through document reviews and interviews with military service and Office of the Secretary of Defense officials.
According to representatives from 12 innovative companies that do not typically do business with the Department of Defense (DOD), there are several challenges that deter them from selling their products and services to DOD or further developing their products and services for military use. These challenges can be grouped into the six areas shown in the table below. According to these company representatives, collectively these challenges have created an environment where companies either choose not to pursue DOD business or believe that their resources could be better spent pursuing commercial business, where the cost to compete is lower and selection decisions are made faster. For example, 1 of the 12 companies GAO spoke with conducted a cost comparison study and found that it took 25 full-time employees, 12 months, and millions of dollars to prepare a proposal for a DOD contract. In contrast, the study found that the company used 3 part-time employees, 2 months, and only thousands of dollars to prepare a proposal for a commercial contract for a similar product. DOD is taking steps to implement some of the requirements that Congress mandated in recent legislation to address some of these challenges, as well as implementing other innovative solutions. For example, as required by Congress, DOD established an advisory panel to identify opportunities to streamline the acquisition process, including recommending regulations that should be eliminated. The panel, which consists of 18 current and former DOD executives, expects to issue a final report in 2018. Each of the military services also has efforts underway to shorten their contracting process. In addition, DOD established an innovation unit in April 2015 to reach out to companies that do not typically do business with the department and facilitate business agreements within a desired period of 60 days using the process below.
Because many of the steps and initiatives that DOD is undertaking are in the early stages of implementation, it is too early to determine whether they will address all of the challenges identified by companies that normally do not do business with the department. Although GAO is not making recommendations, DOD reviewed a draft of this report and had no comments.
NCLBA required the Secretary of the Interior to develop a definition of AYP for BIE schools but also allowed tribal groups to waive all or part of BIE's definition of AYP and propose an alternative. After a process of negotiated rulemaking, Interior issued regulations specifying that each BIE school must adopt the standards, assessments, and definition of AYP of the state in which the school is located. BIE has used agreements, or MOUs, with the states to delineate the terms of accessing state assessments and scoring arrangements. Tribal groups may submit an alternative proposal but are obligated to use the state's definition, content standards, and assessments until the alternative is approved by the Secretaries of the Interior and Education. Tribal groups are obligated to develop alternative definitions of AYP if states do not give them access to their assessments. However, the regulations do not delineate how to determine whether a school has achieved AYP in those cases in which schools cannot access state assessments and have not developed an alternative. Under BIE regulations, a tribal group that requires assistance in developing an alternative must submit a written request to BIE. Then, within given time frames, BIE must acknowledge receipt of the request for technical assistance and identify a point of contact to work with the tribal group. In providing such assistance to tribal groups, BIE has access to federal funds designated to assist with assessment-related activities. BIE determined that, for school year 2006-07, just under one-third of the 174 schools had made AYP, two-thirds had not, and 4 schools were held harmless, with no AYP determinations made. Under NCLBA, schools that fail to meet AYP for 2 consecutive years must implement specific types of remedial actions, although the requirements for BIE schools vary from those for public Title I schools (see table 1).
For a BIE-operated school, implementation of required remedial actions is the responsibility of BIE, whereas for schools that are tribally operated through contracts or grants, implementation of remedial actions is the responsibility of the tribal group. Almost all of the BIE schools adopted their state's definition of AYP, content standards, and assessments, but as of April 2008, BIE had signed MOUs ensuring access to state assessments with only 11 of the 23 states in which BIE schools are located. In addition, BIE experienced some challenges in applying the state definitions to determine whether the 174 schools had met AYP. Because BIE schools generally use state definitions of AYP, BIE officials must apply 23 different state definitions. BIE officials told us that the AYP determinations were made by applying the criteria filed with Education by the relevant state, except in California and Florida, where BIE schools did not administer the state assessment, and in Arizona and North Carolina, where there was a data constraint. The process is complex: some states assess students in additional areas, such as testing students in both reading and language arts, and the statistical formulas for calculating AYP also vary among states. Some states' formulas include multiple confidence bands while other states use none. Similarly, annual measurable objectives, alternate AYP indicators, and formulas for calculating graduation rates also vary across states. BIE officials told us that, for several reasons, schools were not always notified of their AYP status prior to the beginning of the subsequent school year. As of December 2007, 93 of the 174 schools had been notified of their AYP status for school year 2006-07. By March 2008, the number of schools notified had increased to 146. BIE officials told us that the delay in notification was prolonged due to staffing issues, as well as schools and states missing deadlines to report assessment data.
For example, BIE officials told us that it had been hard to collect attendance data and graduation data needed to make AYP determinations; however, they stated that these data will be more readily available in their new student information system—the Native American Student Information System. In addition, BIE officials told us that four schools, two in California and two in Florida, were not administering the state assessments for reasons that are discussed in the next section. These schools were continuing to administer the standardized tests they had used in prior years. Officials from all four schools told us that their schools had adopted the academic content standards of their respective states. BIE uses MOUs with states to delineate the terms of BIE-funded schools’ access to the states’ assessment systems; however, it had not completed MOUs with 12 of the 23 states, including 5 we visited—Arizona, California, Florida, Mississippi, and New Mexico. The 12 states without signed MOUs enroll about two-thirds of the students in BIE schools, but BIE officials told us that they did not actively pursue MOUs with these states, in part because most states were allowing BIE schools to access state assessments and scoring arrangements without such agreements. The MOUs generally specify responsibilities for the state and BIE. For example, states may be responsible for including BIE schools in relevant training, informing BIE of changes to the state’s definition of AYP, and scoring the BIE assessments. The MOUs also delineate responsibilities of BIE such as ensuring that staff are properly trained and that the assessments are administered according to state protocols. However, California state officials told us they had neither signed an MOU nor given BIE access to the state assessments because they feared a breach in test security. 
They noted that such a breach in security could undermine the validity of the test, which the state had invested millions of dollars to develop. California officials stated that several entities, including private schools, had requested permission to administer the test and that their approach was to administer the test only in public schools in California. State officials were willing to make an exception to allow BIE schools to administer the assessment, but requested a $1 million bond as security. BIE and Education officials told us that they were trying to work with the state to resolve the issue. Education officials told us that they were hopeful that a solution, such as having BIE students assessed at public schools, could be worked out. Officials in other states also told us that they have delayed or rescinded MOUs because tribal groups indicated that they had not been consulted about the terms of the agreements (see table 2). For example, state officials in Washington told us that when they received the request to sign the MOU, they contacted tribal groups and realized that the tribal groups had been informed of the MOU but not consulted regarding its details. After consulting with tribal groups, Washington state officials modified the proposed MOU and signed it. In addition, BIE does not currently have a valid MOU with New Mexico because the Governor of New Mexico suspended the state's MOU with BIE shortly after signing it, in part because tribal groups indicated that they had not been consulted about the terms of the MOU. As of March 2008, three tribal groups—the Navajo Nation, OSEC, and Miccosukee—had formally notified BIE of their intent to develop alternatives to state definitions of AYP. These tribal groups represent BIE-funded schools in five states and include about 44 percent of BIE students (see table 3). The tribal groups began the process of developing alternatives at different times, but all were still in the early stages of doing so.
Officials from the Navajo Nation, with BIE schools in three states, have requested technical assistance for developing an alternative definition of AYP, citing the desire to include cultural components in the standards and assessments and to compare the progress of Navajo students across states. In October 2007, Navajo officials requested technical assistance from BIE to develop an alternative "Navajo specific" measure that would influence AYP determinations, regardless of the state in which the school was located. OSEC, a consortium of tribal groups in South Dakota, seeks to develop an alternative to improve student performance in its schools, to define the graduation rate to include 6 years rather than 4, and to replace the attendance component of the state's definition of AYP with a language and culture component. OSEC has submitted a proposal to BIE officials that provides a framework for developing academic content standards for math, reading, and science—the subject areas that must be covered in a state assessment—as well as developing an assessment. OSEC officials consulted with BIE officials regarding the proposal, and BIE has since forwarded the proposal to Education for review. Education officials met with officials from BIE and OSEC in November 2007 to evaluate OSEC's needs and offer technical assistance. Education officials told us that they have a consultant who could help OSEC ensure that the new standards and assessments meet Education's guidelines. Officials from the Miccosukee Tribe have informed BIE that they did not want to implement the Florida assessment system because they thought it was flawed and inferior to the standardized test they were already using. They also told us that because attendance at the Miccosukee School was not compulsory, they rejected the use of attendance as an additional AYP indicator.
After having met with Education officials and a consultant, the Miccosukee told us that they were considering various options in their development of an alternative assessment, including augmenting the current test, called the Terra Nova, or developing a new assessment based on a modified version of Florida's academic content standards. Officials also told us that they were working on developing standards for Miccosukee culture and language to serve as the basis for an assessment that would serve as the additional AYP indicator in lieu of attendance for their students in third through eighth grade. Most remaining tribal groups have not pursued alternatives for various reasons, including the desire to maintain compatibility with public schools in their state and the potential challenges and resources required to develop alternatives. Officials representing BIE schools in California, Mississippi, and Washington told us that it was important that their schools be compatible with the local public schools. Officials from the BIE schools in Mississippi wanted to ensure that their students received the same diploma as other children in the state. Further, school officials and BIE education line officers (ELOs) identified several potential challenges that tribal groups might encounter in their efforts to develop alternative standards or assessments, including a lack of expertise, funding, and time (see table 4). According to ELOs and school and Education officials, the specialized knowledge needed to develop an alternative definition of AYP is generally beyond the capacity of tribal groups. With regard to financing the development of alternatives, Education officials stated that developing standards and assessments could cost tens of millions of dollars—financial resources that some tribal representatives and BIE officials told us are generally not available among many tribal groups.
Education officials and ELOs also agreed that developing alternatives requires an extensive time commitment that may not be sustainable given changes in leadership. Most tribal groups, ELOs, and school officials we spoke with said they had received little guidance about the process BIE uses to help tribal groups develop alternatives, and some expressed frustration with the pace and quality of communication with BIE. Officials representing the two tribal groups and one consortium that have formally requested technical assistance stated they were uncertain about the BIE process for applying for an alternative. Likewise, we found school officials were also unsure of BIE's process for applying for an alternative. For example, officials from the two BIE schools in California said they had no knowledge of the BIE process to assist tribal governing bodies and school boards to develop alternatives. About half of the ELOs, despite being the first point of contact, told us they did not have enough information to accurately describe the process a tribal group would use to waive the Secretary of the Interior's definition and pursue development of an alternative definition of AYP. This may be at least partly due to turnover among ELOs. Eight of the 21 ELOs said they had been in their current position for 12 months or less, while 7 had been in their current position from 1 to 3 years. During our interviews, almost all of the ELOs (19 of 21) told us that they had not received any information from BIE officials on their role in providing technical assistance to tribes in developing content standards, assessments, or definitions of AYP. In addition, although BIE receives funds from Education that could be used to assist tribal groups with the development of alternatives, all 21 of BIE's ELOs told us they had not been instructed that BIE funds were available for this purpose.
Some school officials and tribal groups we interviewed reported slow responses to requests for assistance and a lack of communication from BIE in other cases. For example, OSEC's written request for technical assistance in developing an alternative definition of AYP was not acted upon for 8 months. In another case, the Miccosukee's written request to waive the state assessment and develop an alternative went unanswered by BIE from October 2006 to June 2007. BIE officials, in acknowledging their slow response to the tribal groups' requests for technical assistance, stated that tribal groups' written requests were not always clear about what they wanted from BIE or had not adhered to the regulation that requires that a waiver request be submitted by either a tribal governing body or school board. School officials we interviewed reported frustration with BIE's failure to initiate communication when necessary. For example, officials from one of the BIE schools in California stated that, although BIE officials were aware that the state had not given the schools access to the state assessment, BIE had not communicated with or offered any type of assistance to the schools. To address tribal groups' requests for technical assistance, BIE assigned a staff person as the primary BIE contact for tribal groups that are requesting technical assistance or seeking to develop alternatives. However, this staff person has several other key responsibilities, including applying 23 state AYP definitions to calculate the AYP status of BIE schools. In response to the requests, BIE and Education officials have recently offered technical assistance to those tribal groups that are seeking to develop alternatives.
For example, officials from BIE and Education met with the Miccosukee and OSEC in November 2007 to assess the type of technical assistance needed in order for the tribal groups to pursue development of their alternatives. Likewise, officials from BIE and Education also met with representatives of the Navajo Nation in March 2008 to assess their technical assistance needs as they continue to pursue development of an alternative. Education officials told us they have also sent a contractor to assist tribal groups as they pursue the development of alternative assessments. Specifically, in South Dakota, the Education contractor is charged with working with the OSEC consortium to identify the actions needed to ensure that its alternative assessment will comply with NCLBA regulations. As of February 2008, according to BIE officials, none of the funds provided by Education to BIE under the NCLBA provision supporting assessment-related expenses had been spent to provide technical assistance to tribal groups seeking to develop alternatives. According to BIE, all of these funds had been obligated, primarily for improvements to BIE's student information and tracking systems and other assessment-related uses, including professional development. BIE officials stated that none of these funds had been spent on technical assistance, as no fundable requests had been received from the tribal groups developing alternatives. However, the officials stated that they expected to spend some funds to provide technical assistance in the near future. Our June report recommended that, in order to improve support for tribal governments and school boards in their adoption of definitions of AYP, the Secretary of the Interior should direct BIE to: Coordinate with relevant tribal groups in pursuing negotiation of MOUs with states that lack them, seeking facilitation from Education when necessary and appropriate.
In close coordination with Education, provide prompt assistance to tribal groups in defining assessment options, especially in instances in which tribal groups are not accessing state assessments. Such assistance could include delineating options—such as using an already established assessment, augmenting an assessment, or incorporating cultural components as an additional academic indicator—and their associated costs.

Provide guidelines and training on the process for seeking and approving alternatives to all tribal governments, tribal school boards, and education line offices.

Establish internal response time frames and processes to ensure more timely responses to all correspondence with tribal groups as well as proactive communication with tribal groups and Education to resolve issues related to waivers, requests for technical assistance, and development of alternative definitions of AYP.

In written comments, the Department of the Interior agreed with our recommendations and indicated it had initiated steps to implement them. In preparation for this testimony, we requested an update on BIE's actions. With regard to our recommendation about completing MOUs, BIE officials told us that they are in the process of working out the language for a memorandum of agreement with California state officials. BIE officials told us that the agreement will include language to assure the state that the assessment will be secured and properly administered. In addition, BIE officials told us that the three tribal groups seeking alternatives were working closely with a contractor, and BIE intended to release some funding to them in late September 2008. With regard to the recommendation to provide guidelines and training on the process for pursuing alternative assessments, BIE officials told us that they have taken a preliminary step by developing a presentation that should be available to attendees of the National Indian Education Conference in October 2008.
Finally, they stated that the contractor that they have hired is also working with them to establish a process that will include internal time frames to ensure more timely communication with tribal groups. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Subcommittee may have. For further information regarding this testimony, please contact me at (202) 512-7215. Betty Ward-Zukerman, Nagla'a El-Hodiri, and Kris Trueblood made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The No Child Left Behind Act (NCLBA) requires states and the Department of the Interior's Bureau of Indian Education (BIE) to define and determine whether schools are making adequate yearly progress (AYP) toward the goal of 100 percent academic proficiency. To address tribes' needs for cultural preservation, NCLBA allows tribal groups to waive all or part of BIE's definition of AYP and propose an alternative, with technical assistance from BIE and the Department of Education, if requested. GAO is providing information on the extent of (1) BIE schools' adoption of BIE's definition of AYP; (2) tribal groups' pursuit of alternatives and their reasons, as well as reasons for not pursuing alternatives; and (3) federal assistance to tribal groups pursuing alternatives. To prepare this testimony, GAO relied primarily on information from a recent GAO report, GAO-08-679, and contacted BIE officials for updates on actions taken in response to GAO's prior recommendations.
Although almost all of the 174 BIE schools have officially adopted BIE's definition of AYP--the definition of AYP of the state where the school is located--BIE had not yet completed memoranda of understanding (MOU) to delineate BIE and state responsibilities concerning BIE schools' access to the states' assessment systems for 12 of the 23 states with BIE schools. Without MOUs, states could change their policies regarding BIE schools' access to assessments and scoring services. Officials from the Navajo Nation, the Oceti Sakowin Education Consortium, and the Miccosukee Tribe have begun to develop alternatives to state AYP definitions, in part to make standards and assessments reflect their culture. These three tribal groups, representing about 44 percent of the 48,000 BIE students, have requested technical assistance in developing their alternatives. Other tribal officials cited a desire to maintain compatibility with public schools and/or challenges, such as a lack of expertise, as reasons not to pursue alternatives. The three tribal groups pursuing alternatives reported a lack of federal guidance and communication, although they have recently received some initial technical assistance from BIE and Education officials. These tribal groups reported receiving little guidance from BIE and difficulties in communicating with BIE, and BIE did not always have internal response timelines or meet the ones it had. Moreover, BIE education line officers--the primary points of contact for information on the alternative provision--generally indicated that they had received no guidance or training on the provision. During the course of GAO's prior review, BIE and Education officials began offering technical assistance to the tribal groups working to develop alternatives.
In response to GAO's recommendations in its June 2008 report that the Secretary of the Interior increase support, including technical assistance, guidance, training, and communication for tribal groups in their implementation of the provision for developing alternatives, BIE has taken several steps. In particular, BIE officials told GAO that they are in the process of working out the language for a memorandum of agreement with California state officials. In addition, BIE officials told GAO that the three tribal groups seeking alternatives were working closely with a contractor to develop proposals. With regard to the recommendation to provide guidelines and training on the process for pursuing alternative assessments, BIE officials told GAO that they have taken steps to develop a presentation on the process that they anticipated would be available in October 2008. |
Federal and tribal CHS programs in each of IHS’s 12 areas pay for services from external providers if services are not available directly through IHS-funded facilities, if patients meet certain requirements, and if funds are available. IHS uses three primary methods—base funding, annual adjustments, and program increases—to allocate CHS funds to the area offices. IHS administers contract health services through 12 IHS area offices, which include all or part of 35 states where many American Indian and Alaska Natives reside. (See fig. 1.) IHS uses CHS funds to pay for services from a variety of health care providers, including hospital- and office-based providers. IHS, among other things, sets program policy for and allocates CHS program funds to the area offices. The area offices distribute funds to individual federally operated and tribally operated CHS programs that purchase contract care services from outside providers. There can be multiple individual CHS programs within an area. Tribes currently administer 177 of the 243 (73 percent) individual CHS programs and receive about 54 percent of IHS’s funding for CHS. In addition to receiving federal funding through IHS, the tribes may provide supplemental funds to the CHS programs they administer. Patients must meet certain eligibility, administrative, and medical priority requirements to have their services paid for by the CHS program. Generally, to be eligible to receive services through the CHS program, patients must reside on a reservation or within a reservation’s federally established CHS Delivery Areas and be members of a tribe or tribes located on that reservation or maintain close economic and social ties with that tribe or tribes. In addition, if there are alternate health care resources available to a patient, such as Medicaid and Medicare, these resources must pay for services first because the CHS program is generally the payer of last resort. 
If a patient has met these requirements, a program committee (often including medical staff), which is part of the local CHS program, evaluates the medical necessity of the service. IHS has established four broad medical priority levels of health care services eligible for payment, and each area office is required to establish priorities that are consistent with these medical priority levels. Because IHS typically does not have enough funds to pay for all CHS services requested, federal CHS programs pay first for emergency and acutely urgent medical care to the extent funds are available. They may then pay for all or only some of the lower-priority services, funds permitting. Tribal CHS programs must use medical priorities when making funding decisions, but unlike federal CHS programs, they may develop a system that differs from the set of priorities established by IHS. There are two primary paths through which patients may have their care paid for by the federal CHS program. First, a patient may obtain a referral from a provider at an IHS-funded health care facility to receive services from an external provider. That referral is submitted to the CHS program for review. If the patient meets the requirements and the CHS program has funding available, the services in the referral are approved by the CHS program and a purchase order is issued to the external provider and sent to IHS’s fiscal intermediary. Once the patient receives the services from the external provider, that provider obtains payment for the services in the approved referral by sending a claim to IHS’s fiscal intermediary. Second, in the case of an emergency, the patient may seek care from an external provider without first obtaining a referral. Once that care is provided, the external provider must send the patient’s medical records and a claim for payment to the CHS program. 
At that time, the CHS program will determine if the patient met the necessary program requirements and if CHS funding is available for a purchase order to be issued and sent to the fiscal intermediary. As in the earlier instance, the provider obtains payment by submitting a claim to IHS’s fiscal intermediary. In addition to funds appropriated annually for CHS, IHS also distributes funds to individual CHS programs from the Indian Health Care Improvement Fund, designed to reduce disparities and resource deficiencies at the local level as measured by IHS’s Federal Disparity Index. However, because these funds may be used to pay for either contract care or direct care services, it is possible that they may not finance contract care services in some programs. Further, this fund is small compared to both CHS and direct care funding. For example, in fiscal year 2010, funds distributed from the Indian Health Care Improvement Fund equaled about 6 percent of the CHS funding level, or about 2 percent of the funding level for direct care services. IHS has reported on a number of data limitations related to the current formula used to distribute funds from the Indian Health Care Improvement Fund. IHS uses three primary methods—base funding, annual adjustments, and program increases—to determine the allocation of CHS funds to the IHS area offices, which then distribute the funds to individual CHS programs. (See fig. 2.) IHS uses these methods sequentially. Base funding is the amount of CHS funds equal to the total amount of all CHS funds that each area received in the prior fiscal year. When appropriations for CHS are higher than the amount needed for base funding, IHS uses national measurements of population growth and inflation to determine annual funding adjustments. Each IHS area office receives the same percentage increase for the annual adjustments. 
Since 2001, in years when IHS has received additional funding for what it refers to as “program increases,” IHS has used the CHS Allocation Formula to determine how to allocate those increases to the 12 area offices. According to IHS officials, IHS established the CHS Allocation Formula in part to ensure that American Indians and Alaska Natives had equitable access to contract health funds. The Allocation Formula is based on a combination of factors, including variations in the number of people using health care services, geographic differences in the costs of purchasing health care services, and access to IHS or tribally operated hospitals. Most CHS funding, which IHS refers to as “base funding,” is allocated based on past funding history. Each year, each of the 12 IHS area offices receives an allocation of base funding equal to the total amount of all CHS funds they received the previous fiscal year. According to IHS, base funding is intended to maintain existing levels of patient care services in all areas. Because of adjustments or funding increases that are received in most years, a new level of base funding is created in those years. IHS officials have told us they do not know the exact origins of the base funding policy, but that it dates back to the 1930s, when the health programs were under the Bureau of Indian Affairs. In 1954, Congress transferred responsibility for the maintenance and operation of hospitals and health facilities for Indians from the Bureau of Indian Affairs in the Department of the Interior to what is now IHS in HHS. When appropriations for CHS are above the previous fiscal year’s level, IHS allocates each area office an additional amount to adjust for overall population growth and inflation. The population growth funding adjustment is based on national population increases determined by the U.S. 
Census Bureau with annual adjustments made for changes based on state birth and death data provided by the National Center for Health Statistics. The inflation adjustment is based on the prevailing Bureau of Labor Statistics’ Consumer Price Index for medical costs. IHS gives each area the same percentage increase to its base funding regardless of any population growth or cost-of-living differences among areas. In most years, IHS receives increases in CHS funding that are large enough that the agency can allocate at least some funding for annual adjustments, even if not the full amount. The funding adjustments for population growth and inflation provided to the area offices are incorporated into the next year’s base funding. In fiscal year 2009, each individual CHS program received a 1.5 percent adjustment for population growth and a 3.8 percent adjustment for inflation. In fiscal year 2010, those adjustments were 1.5 percent and 3.3 percent, respectively. The CHS Allocation Formula, used for program increases, begins with each program’s active user population, a count of users who received at least one IHS-funded service during the preceding 3 years. This active user population is then used as a multiplier for the cost adjustment and access to care factors. The cost adjustment factor provides an adjustment to account for geographic differences in the costs of purchasing health care services. It is based on a price index derived from the American Chamber of Commerce Researchers Association Regional Cost of Living index, which provides regional comparative costs for inpatient and outpatient services. The price index for each CHS program is multiplied by the active user population for each program to determine the value of the cost adjustment factor. The access to care factor provides an additional increase only for those individual CHS programs that do not have access to an IHS or tribally operated hospital. 
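As a rough numeric illustration of the uniform annual adjustment described above (not IHS's accounting system; the area names and dollar amounts are invented, and the percentages are the fiscal year 2009 figures from the report):

```python
# Sketch of the annual adjustment step: every area's base funding receives
# the same percentage increases for population growth and inflation, and
# the adjusted amount becomes the next year's base funding.

def apply_annual_adjustment(base_funding, population_growth, inflation):
    """Return next year's base funding per area after uniform adjustments."""
    factor = 1 + population_growth + inflation
    return {area: amount * factor for area, amount in base_funding.items()}

# Hypothetical base funding (dollars) for two areas.
prior_base = {"Area A": 50_000_000, "Area B": 20_000_000}

# Fiscal year 2009 adjustments: 1.5% population growth, 3.8% inflation.
new_base = apply_annual_adjustment(prior_base, 0.015, 0.038)
```

Because the same factor is applied everywhere, each area's share of total funding is unchanged, which is why, as the report notes, annual adjustments do not affect funding variations among areas.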
IHS area officials determine if individual CHS programs meet two qualifying criteria for this factor: (1) the individual CHS program has no IHS or tribally operated hospital with an average daily patient load of five or more, and (2) the individual CHS program does not have an established referral pattern to an IHS or tribally operated hospital within the area. These additional funds are allocated to each program where there is no access to an IHS or tribally operated hospital in an amount proportional to the cost adjustment factor. To allocate the program increase funding, IHS first designates 75 percent of the funds for increases based on the cost adjustment factor at each individual CHS program and 25 percent of the funds for the increases based on the access to care factor at each individual CHS program. IHS then totals the program increases for the individual CHS programs and allocates that total amount to the IHS area offices. Program increases allocated using the CHS Allocation Formula become part of the area offices’ base funding for the next fiscal year. IHS used the CHS Allocation Formula to allocate program increases in fiscal years 2001, 2002, and 2008 through 2010. In each of those years, IHS informed the IHS area offices of the total amounts of program increase funds to be allocated to the offices and the dollar values that IHS calculated under that formula for each individual CHS program in their areas. To specifically address health care needs in local communities, IHS permits area offices, in consultation with the tribes, to distribute program increase funds to local CHS programs using criteria other than the CHS Allocation Formula. Because these adjustments are made at the individual CHS program level, they do not affect future base funding which is determined at the area level. Funds allocated to the IHS area offices through base funding, annual adjustments, and program increases have increased substantially over the past 10 years. 
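The 75/25 split described above can be sketched as follows, under the simplifying reading that both portions are distributed in proportion to each program's cost adjustment factor (price index times active users), with the 25 percent access-to-care portion shared only among programs lacking hospital access. The program names, price indexes, and user counts are hypothetical, and this is a simplified reading of the formula, not IHS's implementation:

```python
# Simplified sketch of the CHS Allocation Formula for program increases:
# 75% of new funds are distributed in proportion to each program's cost
# adjustment factor; 25% are distributed, in proportion to that same
# factor, only among programs without access to an IHS or tribally
# operated hospital.

def allocate_program_increase(programs, increase):
    """programs: dicts with name, price_index, active_users, has_hospital_access."""
    cost_factor = {p["name"]: p["price_index"] * p["active_users"] for p in programs}
    total_cost = sum(cost_factor.values())
    no_access = {p["name"] for p in programs if not p["has_hospital_access"]}
    total_no_access = sum(cost_factor[n] for n in no_access)

    shares = {}
    for name, factor in cost_factor.items():
        amount = 0.75 * increase * factor / total_cost
        if name in no_access:
            amount += 0.25 * increase * factor / total_no_access
        shares[name] = amount
    return shares

programs = [
    {"name": "P1", "price_index": 1.0, "active_users": 10_000, "has_hospital_access": True},
    {"name": "P2", "price_index": 1.2, "active_users": 5_000, "has_hospital_access": False},
]
shares = allocate_program_increase(programs, 1_000_000)
# The shares sum to the full increase; only P2 receives the access-to-care portion.
```

In this sketch, the per-program amounts would then be totaled by area, consistent with the report's statement that IHS sums the individual program increases and allocates that total to the area offices.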
In fiscal year 2001, area offices received just over $386 million; in fiscal year 2010, they received just over $715 million in CHS funds. (See fig. 3.) IHS’s allocation of CHS funds has varied widely across IHS area offices, and IHS’s method of allocating CHS funds has maintained those funding differences. Moreover, the CHS Allocation Formula for determining program increases uses imprecise counts of CHS users. CHS funding varied widely across IHS area offices in fiscal year 2010. Total CHS funding for fiscal year 2010 ranged across the 12 area offices from nearly $17 million to more than $95 million. There were also substantial ranges in base funding, annual adjustments, and the program increase. For fiscal year 2010, base funding ranged from nearly $15 million to nearly $76 million, annual adjustments ranged from less than $1 million to more than $3 million, and the program increases ranged from around $1.5 million to more than $16 million across the area offices. (See table 1.) Because total funding may reflect variations in the size of the population of IHS areas, we also examined per capita funding for fiscal year 2010 using IHS’s count of active users from the most recent year for which data were available. Per capita CHS funding for fiscal year 2010 varied widely, ranging across the area offices from $299 to $801. In addition, per capita CHS funding was sometimes not related to areas’ dependence on CHS for the provision of IHS-funded inpatient services. For example, California received a level of per capita funding that was in the lower half of the range for all areas, while American Indians and Alaska Natives in that area rely entirely on CHS for their IHS-funded inpatient services because there are no IHS or tribally operated hospitals. Similarly, the Bemidji area depends almost entirely on CHS for its IHS-funded inpatient services, yet received levels of per capita CHS funding that were in the lower half of the range of CHS funding for all areas. 
Because CHS funds are used to purchase services not accessible or available through the direct care program, we compared patterns of funding for the direct care program and the CHS program across areas. On average, areas were allocated about three times as much in per capita direct care funding as they were in per capita CHS funding. We also found that, in general, the areas that were allocated higher amounts of per capita direct care funding were also allocated higher amounts of per capita CHS funding, and those areas that were allocated lower amounts of per capita direct care funding were also allocated lower amounts of per capita CHS funding. The notable exceptions were Alaska, which was allocated much more in per capita direct care funding than average, and Portland and Tucson, which were allocated much less in per capita direct care funding than average. Alaska was allocated per capita direct care funding ($3,340) that was about six times more than its per capita CHS funding ($548) and was the highest per capita direct care funding of all the areas, nearly double that of the area with the second highest per capita funding (Nashville, $1,869). Direct care funding for Alaska reflects the unique health care challenges that Alaska faces due to its remoteness and vast distances, which result in some of the highest costs for health care services in the United States. In contrast, the lower per capita direct care allocations to Tucson and Portland were somewhat offset by relatively higher levels of per capita CHS funding. Tucson was allocated the lowest per capita direct care funding ($1,324) but it received the third highest per capita CHS funding ($664). Similarly, Portland’s per capita direct care funding ($1,566) was relatively low, but its per capita CHS funding ($799) was the second highest. 
In addition to variation in funding across IHS area offices, variation in funding may exist among individual CHS programs within area offices of which IHS headquarters is not aware. Some IHS area offices use methods other than the CHS Allocation Formula to distribute CHS program increases, and IHS does not require the area offices to report these variations to headquarters. As a result, IHS may not be able to appropriately oversee agency operations. According to Standards for Internal Control in the Federal Government, agency managers should establish appropriate and clear policies and procedures for internal reporting relationships that effectively provide managers with the information they need to carry out their job responsibilities. The standards further state that an agency must have reliable and timely communications relating to internal events to run and control its operations. IHS allows area offices, in consultation with the tribes, to distribute program increase funds to local CHS programs using different criteria than the CHS Allocation Formula to meet health care needs in local communities, but does not require that the areas inform IHS headquarters. By not requiring area offices to report to IHS headquarters about deviations in funding, IHS is not meeting internal control standards. For example, IHS headquarters officials identified two area offices that have used alternate methods to distribute CHS program increases to local CHS programs. We identified a third area that used alternative methods that IHS was not aware of, specifically using the count of actual CHS users at each individual CHS program. The allocation pattern of per capita CHS funds has been generally maintained over the 10-year period that we examined. Those areas that had the highest and the lowest levels of per capita CHS funding in fiscal year 2001 generally also had the highest and lowest levels of per capita CHS funding in fiscal year 2010. (See fig. 4.) 
Base funding, which is based solely on funding from the prior year, accounts for the great majority of CHS funds and therefore maintains any funding variations. For example, in fiscal year 2010, the year in which IHS received its largest program increase, base funding accounted for 82 percent of total CHS funds allocated to IHS area offices. (See fig. 5 for the allocation of funds in fiscal year 2010.) Annual adjustments for population growth and inflation are made as a percentage of base funding that is the same for all areas and therefore do not affect funding variations. Further, program increase funds allocated through the CHS Allocation Formula are not large enough to alter funding variations because they have been a relatively small proportion of the CHS funds that area offices receive. For example, in fiscal year 2010, CHS Allocation Formula funds amounted to about 14 percent of total CHS funding. Therefore, any variations in the original base funding amounts allocated to the areas are perpetuated since the occasional program increases are not sufficiently large to be able to close that gap. The CHS Allocation Formula IHS uses to allocate CHS program increases to IHS area offices is largely dependent on an estimate of active users that is imprecise, even though IHS considers population estimates to be a critical factor in allocating CHS funds. In 2010, IHS’s Data/Technical Workgroup noted that the active user population is not a precise measure of American Indians and Alaska Natives eligible for CHS services. The CHS Allocation Formula allocates funds based on counts of all users who had at least one direct care or contract care inpatient stay, or obtained at least one outpatient, ambulatory, or dental service during the preceding 3-year period. The active user estimates that IHS used to allocate program increases therefore included an unknown proportion of patients who had not received contract health services, but rather had received only direct care services. 
IHS has acknowledged that its method of counting active users for the CHS Allocation Formula does not measure the number of people who actually received CHS services, nor does it measure the number of people who are eligible for CHS services. Because the active user population is used to determine program increases, any inaccuracies in that number potentially could contribute to variation not linked to actual use of CHS services. While IHS has an information technology system that could produce actual counts of CHS users, IHS officials do not believe that the data in the system are complete or that areas collect these data in the same way. This system contains separate tabulations of users of direct care services, contract care services, and dental care services. However, IHS officials told us that they do not provide guidance to area offices on how to record data on active CHS user counts. Nevertheless, officials from one area told us that one of their statisticians separated out the CHS users from the active user population count identified by IHS for 2 recent years and found that the CHS user count is about half of the active user population count. Without accurate data, it is not possible for IHS to know if the proportion of actual CHS users is consistent across areas. IHS has taken few steps to evaluate the funding variations within the CHS program. In addition, IHS’s ability to address funding variations is limited by statute. IHS has taken few steps to evaluate the funding variations within the CHS program. IHS officials told us that they have not evaluated the effectiveness of base funding and the CHS Allocation Formula in meeting the health care needs of American Indians and Alaska Natives across the IHS areas and they do not plan to do so with respect to the determination of base funding amounts. 
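To illustrate the distinction the report draws between all active users and actual CHS users, here is a minimal sketch of tallying the two counts from service records. The record layout (patient identifier, service type) is invented for the example; IHS's actual systems differ:

```python
# Sketch: distinguish users who actually received contract health services
# from the broader active user count, which also includes patients who
# received only direct care or dental services. Records are hypothetical.

records = [
    ("patient-1", "direct"),
    ("patient-1", "contract"),
    ("patient-2", "direct"),
    ("patient-3", "dental"),
    ("patient-4", "contract"),
]

# Any IHS-funded service counts toward the active user population.
active_users = {pid for pid, _ in records}
# Only contract care services count toward the actual CHS user count.
chs_users = {pid for pid, kind in records if kind == "contract"}
```

In this toy data, only 2 of the 4 active users received CHS services, in line with the roughly one-half proportion that one area's statistician reported finding in recent years.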
Without such assessments, IHS cannot determine the extent to which the current variation in CHS funding reflects variation in health care needs. According to Standards for Internal Control in the Federal Government, agency managers should compare actual performance to planned or expected results throughout the organization and analyze significant differences. Further, the standards specify that activities need to be established to monitor performance measures and indicators. IHS has not developed policies and procedures in the Indian Health Manual for its headquarters and field staff employees on how to conduct assessments of the CHS program funding methodologies, nor has it included goals, measures, and time frames for assessing the CHS program funding allocation performance within areas, which would potentially help IHS and the area offices identify and allocate CHS program funds to areas and local CHS programs with the greatest need. In March 2010, the Director of IHS formed the Director’s Workgroup on Improving CHS to review tribal input to improve the CHS program, to evaluate the existing formula for allocating program increases using the CHS Allocation Formula, and to recommend improvements in the way CHS business operations are conducted. The workgroup members agreed that their recommendations would apply only to program increases and not to base funding. In February 2011, the Director of IHS reported that she concurred with the four recommendations made by the workgroup in October 2010. The workgroup recommended that a technical subcommittee be created and charged with calculating the current CHS need and estimates of future CHS need. Such information would be essential to understanding the variation in CHS funding. However, we previously reported that IHS data on denials and deferrals that IHS used to estimate program need are incomplete and inconsistent. 
The workgroup recommended convening 12 Area Work Sessions to review and make recommendations about current CHS policies and procedures, which would then be used to revise the CHS chapter of the Indian Health Manual, specifically relating to issues of evaluating the cost of care and communication of CHS program requirements, among others. These sessions have been completed and the workgroup is developing a summary report. The workgroup recommended that an evaluation of the current CHS Allocation Formula be postponed until at least fiscal year 2013. The workgroup members said that the CHS program had only begun receiving substantial increases in fiscal years 2009 and 2010, and the full impact of these increases needed to be reviewed before making recommendations to change the formula. In contrast, we found that IHS has used the formula to allocate program increases, at least in part, in 5 years since 2001. Members of the workgroup we interviewed told us that outcome measures for the evaluation have not yet been defined. As part of this recommendation, they also suggested that a subcommittee be created to review the CHS Allocation Formula for equity across areas. An IHS representative to the workgroup told us that the recommendations of the subcommittee will not be considered by the full committee until the review of equity is complete. The workgroup recommended that the inpatient and outpatient components of the Consumer Price Index be used for any new CHS program increases that IHS may receive for fiscal year 2013 and beyond. Members of the 2010 Director’s Workgroup we spoke with expressed concern that the CHS Allocation Formula does not differentiate between large and small hospitals when determining the access to care factor, although the workgroup did not make a recommendation concerning this issue. 
Specifically, programs with access to small hospitals with minimal services do not receive an adjustment for access to care, and are therefore treated similarly to programs with access to large medical centers where a range of specialty care services may be available. As a result, the CHS Allocation Formula does not equitably compensate for limitations in hospital access. When the CHS Allocation Formula was created in 2001, its developers noted that the access to care factor should be refined to better reflect the complexities of the IHS system of health care. IHS has neither refined nor made any change to the way that access to care is defined. Federal law restricts IHS’s ability to reallocate funding should the agency desire to do so. Specifically, IHS officials identified two statutory provisions that limit IHS’s ability to adjust funding allocations. The Indian Self-Determination and Education Assistance Act currently prohibits reductions in funding for certain tribally operated programs, including some CHS programs, except for limited circumstances. In addition, the Indian Health Care Improvement Act imposes a congressional reporting requirement for proposed reductions of 5 percent or more in base funding for any recurring program, project, or activity of a service unit. IHS officials told us that no such proposal to reallocate base funding has been transmitted to the Congress. IHS officials have told us that areas and tribes have resisted changes to the current funding allocation methods, particularly base funding, as consistent funding allows the areas and tribes to plan and manage their resources. However, minutes from a 2010 session of the Director’s workgroup show that not all tribes agree with the CHS Allocation Formula and that some workgroup members said that the current CHS Allocation Formula was not sufficiently equitable. Concerns about IHS’s funding methods are longstanding. 
For example, in 1982, we concluded that IHS’s practice of funding programs based on the previous year’s funding level caused funding inequities and that IHS did not distribute funds to the neediest programs in fiscal year 1981. There are wide variations in CHS funding across the 12 IHS areas, and these variations are largely maintained by IHS’s long-standing use of the base funding methodology. IHS officials are unable to link variations in funding levels to any assessment of health care need. As we have reported in the past and found once again in this evaluation, IHS’s continued use of the base funding methodology undermines the equitable allocation of IHS funding to meet the health care needs of American Indians and Alaska Natives. Program increases for the CHS program over the years have not significantly altered variations across the areas, primarily because they are too small to have a strong impact on overall funding. Funds from the Indian Health Care Improvement Fund, designed to reduce funding disparities, also have had little impact because they are relatively small and not targeted solely for the CHS program. Further, federal law restricts IHS’s ability to reallocate funding, principally by prohibiting reductions for certain tribally operated CHS programs, which account for more than half of total CHS funding. IHS also may be unaware of additional variation in funding across individual CHS programs because it does not require that area offices notify IHS headquarters when they choose different funding methodologies than those suggested by headquarters. 
IHS can improve the equity of how it allocates program increase funds to areas through improvements in its implementation of the CHS Allocation Formula, primarily by using counts of actual CHS users rather than by using the current method of estimating the number of overall IHS users, which now includes patients who never used a CHS service, and by refining the access to care factor to account for differences in available health care services at IHS and tribally operated facilities. However, because of the predominant influence of base funding and the relatively small contribution of program increases to overall CHS funding, it would take many years to achieve funding equity just by revising the methods for distributing CHS program increase funds. In order to ensure an equitable allocation of CHS program funds, the Congress should consider requiring IHS to develop and use a new method to allocate all CHS program funds to account for variations across areas that would replace the existing base funding, annual adjustment, and program increase methodologies, notwithstanding any restrictions currently in federal law. To make IHS’s allocation of CHS program funds more equitable, we recommend that the Secretary of Health and Human Services direct the Director of the Indian Health Service to take the following three actions for any future allocation of CHS funds: require IHS to use actual counts of CHS users, rather than all IHS users, in any formula for allocating CHS funds that relies on the number of active users; require IHS to use variations in levels of available hospital services, rather than just the existence of a qualifying hospital, in any formula for allocating CHS funds that contains a hospital access component; and develop written policies and procedures to require area offices to notify IHS when changes are made to the allocations of funds to CHS programs. HHS reviewed a draft of this report and provided written comments, which are reprinted in appendix I. 
In its comments, HHS concurred with two of our recommendations and did not concur with one recommendation. HHS did not comment on our general findings or our conclusion that IHS’s use of the base funding methodology has led to long-standing inequities in the distribution of CHS funds. HHS concurred with our recommendation that IHS use variations in levels of available hospital services to allocate CHS funds. HHS noted that the IHS Director’s Workgroup on Improving CHS will review the formula and make recommendations in fiscal year 2013. HHS also concurred with our recommendation to develop written policies to require area offices to notify IHS when changes are made in the allocations of funds to CHS programs. HHS noted that guidance requiring areas to report these changes to IHS headquarters will be added to the CHS manual; however, the agency did not specify a date for doing so. HHS did not concur with our recommendation that it should require IHS to use actual counts of CHS users, rather than all IHS users, in any formula for allocating CHS funds that relies on the number of active users. HHS stated that IHS’s combined count of all users of IHS direct care services and CHS users is intended to reflect the health care needs of those eligible for CHS services. However, as we reported, IHS’s own Data/Technical Workgroup found that the current IHS active user count does not measure the number of people who are eligible for CHS services, in part because not all users of IHS direct care services are eligible for CHS services. Further, as HHS acknowledged in its comments, the current count of active users also does not reflect those who actually received CHS services. Because CHS program increases are intended to reflect variations in the numbers of CHS users among areas, we continue to believe that IHS should use counts of actual CHS users in determining program increases. 
We are sending copies of this report to the Secretary of Health and Human Services, Director of the Indian Health Service, appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Martin T. Gahart (Assistant Director), George Bogart, Carolyn Feis Korman, and Laurie Pachter made key contributions to this report. | IHS, an agency in the Department of Health and Human Services (HHS), provides health care to American Indians and Alaska Natives. When care at an IHS-funded facility is unavailable, IHS's CHS program pays for care from non-IHS providers if the patient meets certain requirements and funding is available. The Patient Protection and Affordable Care Act requires GAO to study the administration of the CHS program, including a focus on the allocation of funds. IHS uses three primary methods to determine the allocation of CHS funds to the 12 IHS geographic area offices: base funding, which accounts for most of the allocation; annual adjustments; and program increases, which are provided to expand the CHS program. GAO examined (1) the extent to which IHS's allocation of CHS funding varied across IHS areas, and (2) what steps IHS has taken to address funding variation within the CHS program. GAO analyzed IHS funding data, reviewed agency documents, and interviewed IHS and area office officials. The Indian Health Service's (IHS) allocation of contract health services (CHS) funds varied widely across the 12 IHS geographic areas. 
In fiscal year 2010, CHS funding ranged from nearly $17 million in one area to more than $95 million in another area. Per capita CHS funding for fiscal year 2010 also varied widely, ranging across the areas from $299 to $801, and was sometimes not related to the areas' dependence on CHS inpatient services, as determined by the availability of IHS-funded hospitals. The allocation pattern of per capita CHS funds has been generally maintained from fiscal year 2001 through fiscal year 2010. This is due to the reliance on base funding, which incorporates all CHS funding from the prior year to establish a new base each year and accounts for the majority of funding. In fiscal year 2010, when CHS had its largest program increase and base funding was the smallest proportion of funding for any year, base funding still accounted for 82 percent of total CHS funds allocated to areas. Further, allocations of program increase funds are largely dependent on an estimate of CHS service users that is imprecise. IHS counts all users who obtained at least one service either funded by CHS or provided directly from an IHS-funded facility during the preceding 3-year period. This count therefore includes an unknown number of individuals who received IHS direct care only and who had not received contract health services. IHS has taken few steps to evaluate funding variation within the CHS program, and IHS's ability to address funding variations is limited by statute. IHS officials told GAO that the agency has not evaluated the effectiveness of base funding and the CHS Allocation Formula. Without such assessments, IHS cannot determine the extent to which the current variation in CHS funding accurately reflects variation in health care needs. 
While IHS has formed a workgroup to evaluate the existing formula for allocating program increases, the workgroup recommended, and the Director of IHS concurred, that the CHS Allocation Formula for distributing program increases would not be evaluated until at least 2013. The workgroup members maintained that the CHS program had only begun receiving substantial increases in fiscal years 2009 and 2010, and the full impact of these increases needed to be reviewed before making recommendations to change the formula. However, GAO found that IHS has used the formula to allocate program increases, at least in part, in 5 years since 2001. GAO also concluded that, because of the predominant influence of base funding and the relatively small contribution of program increases to overall CHS funding, it would take many years to achieve funding equity just by revising the methods for distributing CHS program increase funds. Further, federal law restricts IHS's ability to reallocate funding, specifically limiting reductions in funding for certain tribally operated programs, including some CHS programs, and imposing a congressional reporting requirement for proposed reductions in base funding of 5 percent or more. According to IHS officials, no such IHS proposal to reallocate base funding has ever been transmitted to the Congress. GAO suggests that Congress consider requiring IHS to develop and use a new method to allocate all CHS program funds to account for variations across areas, notwithstanding any restrictions now in federal law. GAO also recommends, among other things, that IHS use actual counts of CHS users in methods for allocating CHS funds. HHS concurred with two of GAO's recommendations, but did not concur with the recommendation to use actual counts of CHS users. GAO believes that its recommendation would provide a more accurate count of CHS users. |
The Department of Education’s mission is to promote student achievement and preparation for global competitiveness by fostering educational excellence and ensuring equal access. Toward this end, Education distributes federal grant funds to applicants throughout the nation to improve access to and the quality of education. Education supplements and complements the efforts of states, local school systems, the private sector, public and private nonprofit educational research institutions, other community-based organizations, parents, and students. Education has seven principal offices that administer grants to entities that provide education or education-related services to students (see fig. 1). The principal offices focus on specific areas of education, such as special education, elementary and secondary education, and postsecondary education. Within each of these principal offices, there are individual program offices responsible for one or more specific grant programs. For example, the Office of Special Education Programs and the Rehabilitation Services Administration are program offices in the principal Office of Special Education and Rehabilitative Services. Program offices have directors, supervisors, and program specialists responsible for the everyday administration of grants in the department. Department-wide offices—the Risk Management Service, Office of Chief Financial Officer, Office of General Counsel, and Budget Service—provide technical assistance and guidance to the principal and program offices. Education describes its grant management processes as a “cradle-to-grave” strategy. As shown in figure 2, this strategy includes phases for pre-award, award, post-award, and close-out. Monitoring to ensure administrative, financial, and performance compliance occurs primarily during the post- award phase, after the grantee has successfully applied for and been awarded a grant from Education. 
In Education, grant monitoring is the responsibility of each program office. Each program office has the flexibility to tailor its monitoring to its respective grant programs. For example, program offices that oversee formula grants to state agencies generally conduct on-site monitoring on a 3-to-5-year cycle but can make more frequent visits if necessary. For discretionary grants, site visits to the recipients are generally less frequent, in part because of the relatively small size of the awards and the relatively large number of discretionary grant awards made by the department. Relatively more desk-top monitoring is used in monitoring discretionary grants. In general, recipients of grants from Education must: conform to the approved grant application and approved revisions; adhere to laws, regulations, conditions of the grant, and certifications; share progress on established performance measures; and manage federal funds according to federal cash management requirements. Education’s grant monitoring practices and procedures require that program office staff undertake numerous activities to monitor grantees for compliance with administrative, financial, and performance regulations and requirements to protect against fraud, waste, and abuse of federal resources. These activities include on-site visits and desk reviews of grantees, review of annual reports submitted by grantees, and evaluation of grant projects with respect to performance. For example, some financial monitoring activities that program office staff perform include reviewing reports generated in the Grant Administration and Payment System, Education’s primary information system and tool for financial oversight, and available audit reports. In cases where technical assistance and normal monitoring do not improve grantee performance, special grant conditions may be imposed on the grantee such as requiring the grantee to obtain prior approval for certain expenditures. 
Findings of material noncompliance are reported to other offices in Education, such as the Office of General Counsel, while findings of potential illegal activity involving fraud, waste, and abuse are reported to the Office of Inspector General for further action. Continuous monitoring of grantees offers program office staff the opportunity to provide customized technical assistance, appropriate feedback, and follow-up to help grantees improve in areas of need, identify project strengths, and recognize significant achievements. Education allows individual program offices to develop their own procedures for assessing grantee risk. While the department has not yet provided department-wide guidance on grantee risk assessment, the Risk Management Service (RMS) is planning to introduce several new efforts designed to help in this area. In the absence of a department-wide strategy for risk assessment procedures, we found that in the program offices we visited, the procedures for assessing grantee risk varied in rigor, with some offices using a variety of indicators or data elements to measure relative risk, while others had no formal grantee risk assessment process in place. Federal guidance directs that management should identify internal and external risks that may prevent an organization from meeting its objectives. In 2007, the department’s Grants Pilot Project Team recommended the establishment of a coordinated, comprehensive, and department-wide approach to risk-based grant monitoring for discretionary and formula grants. The Secretary created RMS in October 2007 to work with all components of the department to ensure that each office has effective procedures in place to assess and mitigate risk among its grantees. Specifically, RMS is to develop tools to assess grantee risk for use throughout the department and train department staff to use the tools. 
RMS has not yet issued department-wide guidance on assessing grantee risk, and key guidance, such as the updated discretionary grant handbook, does not provide information on how to develop a risk-based approach to monitoring grants. Program officials said that such guidance would be a valuable tool for program offices in developing their own risk assessment procedures. RMS is currently testing software it developed that would assist Education staff in evaluating grantee risk. For example, the new software collects financial information from Dun and Bradstreet, among other sources, and uses that information to calculate a score reflecting the financial stability of grantees. The software will also help with risk assessment by providing other information from agency and outside sources, along with relevant findings from grantee audits. However, RMS does not have a timetable for using this new software throughout the department. RMS has worked closely with the Office of Postsecondary Education (OPE) to develop risk assessment procedures, and hopes to work with officials in the other principal offices for this purpose. RMS helped OPE develop an index that ranks the potential risk level of grantees based on such risk criteria as net operating results, status with an accrediting agency, enrollment trends, and ability to manage federal funds. RMS also provides OPE with monthly analyses of grantees' level of financial risk. In the meantime, some program offices have developed risk assessment procedures on their own. We observed a wide range of risk assessment procedures that varied in rigor among the program offices we reviewed. In discussing how program offices assess grantees' financial risk, we noted there was an indicator that program offices routinely reviewed to assess a grantee's financial risk: the rate at which a grantee draws down grant funds, known as the drawdown rate. 
Although staff we met with in all of the offices that disburse funds through periodic drawdowns reported checking the drawdown rate, the frequency varied from office to office, with some checking the rate monthly and others checking it quarterly. Further, the drawdown rate is limited as an indicator of the soundness of a grantee's financial management practices because it only shows when the grantee is using funds and does not show what the funds are used for. In addition to monitoring the grantees' drawdown rates, staff in three of the offices we met with, OPE, Student Achievement and School Accountability (SASA), and the Office of Special Education Programs (OSEP), described using more rigorous risk assessment procedures than those developed in the other offices we visited. As discussed above, OPE worked with RMS to develop new risk assessment procedures. Staff in SASA have recently moved to risk-based monitoring procedures; they use a risk assessment procedure that incorporates an extensive list of risk indicators and numerous sources of information to determine an individual grantee's level of risk and whether grantees are meeting performance expectations. These indicators include program performance data, the grantee staff's level of experience, the size of the grant and the population served, and issues raised by the Office of Inspector General. Once the program specialists in that office collect and analyze the information, they tailor monitoring and technical assistance accordingly. The staff routinely track major compliance or performance issues, and also conduct staff briefings before site visits to share information the office has developed about the grantee and issues that may arise, and afterwards, to discuss findings and possible corrective action plans. 
Specialists in OSEP reported that they categorize grantees based on such factors as audit findings, data indicating how well the grantee is accomplishing its objectives, and special conditions attached to the grant. Based on this information, they categorize the level of risk for the grantee and the level of monitoring and technical assistance the grantee requires. For example, a grantee that has high staff turnover or recurring problems in external audits, or is unable to meet its performance expectations, would be monitored more closely and receive more technical assistance than a grantee with experienced staff that is consistently meeting the program’s administrative, financial, and performance requirements. Program specialists from OPE, Parental Options and Information, and the Continuation and Professional Grants Division told us that their risk assessment process begins when they first contact grantees and provide expectations for reporting performance and financial information. This early contact gives these program specialists a sense of the level of experience the grantee has in managing federal grants. In developing those assessments, the program specialists and supervisors said they can tailor their monitoring to provide additional technical assistance, for example, or reach out more frequently than they might otherwise to grantees that appear likely to have compliance problems. While some of Education’s program offices are making progress assessing and managing grantee risk as discussed above, staff in three other program offices described significant limitations of the risk assessment process in place for their grant programs: Program specialists in one office told us that experienced program specialists rely on their skills and experience to determine what to look for. However, without a formalized risk assessment process, they said a new hire might miss key issues while monitoring a grantee. 
They added that a more formal risk assessment process would be helpful. The program specialists in another office said they are not able to review all possible risk indicators that may pertain to grantees or do more in-depth risk assessments because of competing demands on their time. For example, one program specialist described a situation in which grant funds were improperly paid out to a grantee because the program specialist did not have time to check whether the grantee was in good standing. He added that the office was able to recover the funds, but he was concerned this could happen again and result in losses. One program office director told us that the large number of grantees makes it impossible to conduct routine risk assessments of them all. Program specialists from that office told us that because of their heavy workload—ranging from 85 to 260 grants per specialist—they did not have enough time to review all the grantees identified as being at risk. Directors and supervisors in the program offices we visited noted that while their staff generally have the expertise needed to perform their monitoring duties, limited financial expertise and training hinder effective monitoring of grantees' compliance with financial requirements. In many of the program offices we visited, program specialists monitor grantees for compliance with administrative, financial, and performance requirements. In most program offices, staff we spoke with—including directors, supervisors, and specialists—said that the program specialists have limited financial knowledge and lack the skills needed for conducting financial reviews and ensuring grantees' financial compliance. While monitoring protocols aid in reviewing compliance with basic financial requirements, the ability to verify or evaluate what grantees report about their use of funds is limited by a lack of expertise. 
For example, using findings from consolidated audit reports on grantees’ financial statements is an activity that aids in identifying monitoring issues, but staff in some program offices have difficulty accessing these reports or are not able to determine how to use the report findings to identify areas that need closer monitoring. Some program office staff said that training on performing financial reviews is needed to help fill the gap in this skill area. Education has identified the need for more financial review capability in its grant program staff through a skills assessment inventory it has conducted for the last several years. However, Education has not fully developed a strategy to enhance the financial review capacity of its grant program staff. According to federal control standards, as part of their management responsibilities, agencies should have standards or criteria for hiring qualified people. Program office directors and supervisors told us their staffs generally have the background and expertise needed to monitor grantees, but directors, supervisors, and staff in eight of the program offices we reviewed said their program specialists generally did not have a sufficient level of financial knowledge or skills needed to review grantee compliance in that area. Several noted that some of their program specialists previously worked in state or local education systems or have strong backgrounds in education programs, and that they seek individuals with these backgrounds or experience specific to their programs. However, program specialists from three of the groups we met with—that administer about 47 percent of Education’s total grant funding—told us specifically that they, as a group, did not possess the needed knowledge or skills for reviewing grantee financial compliance and that this hindered their offices’ ability to adequately monitor grantees. 
The director in one of those offices also expressed doubt that, in general, his staff have the ability to conduct more in-depth financial reviews of grantees beyond reviewing drawdown activity reports. Program offices took different steps to try to ensure proper financial reviews. Five of the 12 offices we reviewed addressed their need for financial expertise by designating staff to perform financial compliance reviews. However, the directors or supervisors from three of these offices said that more of their program specialist staff will need to be trained in financial monitoring as their office's workload increases and as individuals with fiscal expertise retire. Program office staff can work with the department-wide offices that provide technical assistance and guidance (see fig. 1) on financial compliance issues. Six offices obtained assistance in conducting financial monitoring from other offices such as RMS and the Office of Chief Financial Officer (OCFO). However, these arrangements sometimes had limits. One program office found that OCFO's ability to assist was limited by its staff's lack of program knowledge. RMS is responsible for providing principal and program offices with advice and assistance on issues concerning grant administration, but officials in RMS told us their offers of assistance to program offices are often met with skepticism or resistance. The director of RMS has also concluded that he would need additional staff to provide support to the program offices, including development of financial monitoring standards and training. In addition to designating staff to perform financial compliance reviews or obtaining assistance in conducting financial monitoring from other Education offices, four of the offices retained contractors to assist with monitoring activities, including participating in site visits. 
However, staff in two of these offices found that the contractor personnel also lacked sufficient knowledge and skills to conduct financial compliance monitoring. Another office terminated its contract because the contractor was not meeting the office’s standards for preparing site visit reports. Some of these program offices that used contractors did so to complement their own staffs with additional resources in order to meet their monitoring needs. One director told us he used contractors because more money was available for contractors than for hiring or training staff. Education has not assessed the effectiveness of using contractors to conduct fiscal monitoring. An official in the Office of the Secretary told us he was not aware of any attempts at such an analysis. Most of the program offices we reviewed use written tools or protocols that typically give instruction for monitoring compliance, including compliance with financial requirements, but these protocols generally do not provide instruction or guidance on verifying or evaluating information obtained during the review. The group of program specialists we met with in one office told us they usually have to rely on grantees’ self-reporting about their use of funds. These specialists said they do not have the background or skills needed to corroborate what the grantees are reporting, and the office’s protocols we reviewed do not provide further guidance or instruction on corroborating or evaluating information obtained. The director in this office also acknowledged his staff’s lack of financial skills and said he would like to see the department develop some better tools for assuring that grantees are complying with financial requirements and using their funds consistently with their plans. While Education has begun an effort to inventory the skills of its grant monitoring staff, it has not yet developed a training program or other strategy to fill gaps in their financial monitoring capacity. 
Some program office staff noted that training specifically in financial monitoring is needed and would help improve skills. Also, the Grants Pilot Project Team concluded that initial training efforts in financial monitoring had not yet yielded long-term and sustained improvements or a critical mass of better trained staff. To identify where financial monitoring training is needed, the department has been conducting an inventory of the skills of its program specialist staff for the last several years. Supervisory staff in the program offices are asked to identify the skills needs of each person reporting to them, including financial compliance knowledge. The results of the inventory are to be used to design training in financial compliance and determine how much training is needed in each office. Supervisors are encouraged to meet with their staff, discuss the skills needs, and inform those individuals about available training. However, under this program, financial management or analysis skills are not competencies on which all program specialist staff are assessed. Only about 10 percent of staff with grant administration responsibilities are assessed on a financial skills competency. Based on the most recent year for which inventory results are available, 25 of the individuals assessed on financial skills were identified as needing training on financial compliance. Financial monitoring is currently available as a module in classes on grant monitoring, but the course material is limited to use of the Grant Administration and Payment System and grantees’ use of tools such as fund carryovers and budget transfers. In addition, there are two courses on understanding the role of audits in grantee compliance, but they have been offered only once in 2007 and once in 2008 with an enrollment limit of 30. Similarly, Education also offers a course on basic accounting theory and principles for any department staff without prior accounting training. 
One of its goals is to provide knowledge for monitoring use of funds by grantees. The course most recently has been offered three times in 2009, and department officials estimate about 23 program office staff with grant monitoring responsibilities have successfully completed or are currently taking it. Education is planning to make changes in the financial compliance components of its grant administration training, but these efforts are just beginning and management has not committed to a time frame for full implementation. RMS is developing a class on grants management that will focus on financial and administrative requirements and compliance. Preliminary materials we reviewed indicate it will cover such topics as grantee cash management and payment systems, cost principles for grantees, and financial reporting. Two program offices have expressed interest in registering their staffs to take this class when it becomes available. RMS is also planning to develop a curriculum for newly hired grant administration staff that would be offered through “just-in-time” modules, one of which would focus on financial compliance. According to the RMS official responsible for developing these courses, though, the development is being delayed while he implements similar courses for grantees and their subrecipients. He also noted that his own instructor staff have limited financial knowledge, which could impose a constraint on the success of the new training. Education staff responsible for grant monitoring generally do not have access to relevant information on how well grantees comply with the requirements of other Education grants and whether their performance with respect to those grants meets expectations. Because many grantees receive multiple grants from Education in a given year, program specialists said this type of information sharing could help program specialists carry out their grant monitoring responsibilities more effectively (see fig. 3). 
Additionally, the program offices responsible for grant monitoring lack a systematic means to share information on promising practices for conducting grant monitoring. Program office managers and staff said it would be helpful to have information on ways to improve or enhance current monitoring practices. Program office management and staff identified challenges in accessing information relevant to grant monitoring. Program management and specialists acknowledged that while it might be useful to share information with other principal or program offices, there is no formal mechanism to do so. One program office director, for example, said such exchanges might provide other program offices information about systemic management or personnel problems state grantees are experiencing, since these types of problems could affect a wide variety of the department's grant programs. A supervisor in another office observed that if his staff find questionable issues with a grantee, they do not have a systematic method of reporting it to other program or principal offices that may also be funding that same grantee. Moreover, program staff within the same office said they do not always have access to information about a grantee's performance. In one of our discussion groups with program specialists, one participant was aware a grantee was having performance issues, but another participant who also monitors the same grantee under a different grant was unaware of these performance issues. Had he known about these issues, he would have intensified his monitoring of that grantee. The program specialists in this office said they do not know who works with which grantees, and there is no formal process within the program office to share information. Managers in several program offices said they have databases or other repositories housing grant monitoring information, including findings and past program performance, but these databases are not available to staff outside their program office. 
In those offices, information is available for internal program office use but is not typically shared with or accessible to monitoring staff in other program offices. For example, one principal office has a database of monitoring findings and recommendations from the last 6 years, but we were told it was for use only in that principal office. Program office management and staff noted that Education has a shared computer drive where program offices may store current and historical information on grantees' performance in folders, but access to their information on the drive is typically restricted to their own staff. The shared drive was never intended for sharing information among different principal offices. Staff in one program office would need to be given access to browse files saved in another program office's folder on the shared drive. However, even with access to the folders on the shared drive, the information may not be easily searchable, according to some program staff. One program officer explained to us that in his office, he and his colleagues can use this drive as a repository for all documents related to their work. However, they have to notify each other of what is available on the drive. One notable exception to the limitations in information sharing involves the department-wide team headed by RMS that is responsible for coordinating the monitoring of designated high-risk grantees. The designation is assigned when the results of audits or other monitoring activities show the grantee has significant deficiencies and is not meeting program, financial, or administrative requirements. Currently there are 17 grantees with this designation. This team meets weekly and includes representatives from the program offices, OCFO, and other department-wide offices. 
At these meetings, the team shares information about the high-risk grantees, monitoring issues involving those grantees, progress made in addressing corrective action plans, and other issues that may arise. As noted previously, RMS is nearing completion of an information-sharing tool for the department that would provide program specialists with relevant information related to all grantees department-wide. However, RMS does not have a targeted implementation date for the information-sharing tool.

In addition to the limited accessibility of information on grantees, we found there is limited information on promising practices in grant monitoring. In 2007, the department's Grants Pilot Project Team recommended disseminating promising practices for grant monitoring to all program offices. However, the program office directors and other staff we interviewed were generally not aware of a formal mechanism for sharing promising practices and desired a more formal approach. While management staff in some offices said they share information on promising practices through informal contacts and networks, managers in four of the offices we visited said that a means to share such information more systematically would be helpful to all offices and a good way to improve grant monitoring practices.

Since it created RMS in October 2007, Education has made progress in developing a risk-based approach to monitoring its more than 18,000 grantees. While allowing individual program offices to develop their own procedures might make sense given the range of programs and missions of the various offices, not all of the program offices have developed procedures for assessing grantee risk. RMS is developing training, software, and technical assistance that the program offices can use to aid in the development of their own risk assessment procedures; however, many of these efforts are in the planning stages and do not have an implementation timeline.
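To make concrete what a grantee risk assessment procedure of the kind discussed above might look like, the following is a minimal, hypothetical sketch. The factors, weights, and site-visit threshold are illustrative assumptions only, not Education's or RMS's actual methodology (which, as the report notes, varies by program office).

```python
# Hypothetical grantee risk-scoring sketch. Factors and weights are
# invented for illustration; they are not Education's actual criteria.

def risk_score(grantee):
    """Weight a few plausible risk indicators into a single score."""
    score = 0
    score += 3 * grantee["open_audit_findings"]    # unresolved audit findings
    score += 2 * grantee["late_reports"]           # late financial/program reports
    score += 5 if grantee["high_risk_designated"] else 0
    return score

def triage(grantees, site_visit_threshold=6):
    """Rank grantees by descending risk and flag those warranting closer review."""
    ranked = sorted(grantees, key=risk_score, reverse=True)
    return [(g["name"], risk_score(g), risk_score(g) >= site_visit_threshold)
            for g in ranked]

grantees = [
    {"name": "A", "open_audit_findings": 2, "late_reports": 1, "high_risk_designated": False},
    {"name": "B", "open_audit_findings": 0, "late_reports": 0, "high_risk_designated": False},
    {"name": "C", "open_audit_findings": 1, "late_reports": 0, "high_risk_designated": True},
]
# A and C each score 8 and are flagged; B scores 0 and is not.
results = triage(grantees)
```

A real tool would of course draw these indicators from audit databases and monitoring records rather than hand-entered dictionaries; the point is only that a shared scoring rubric lets every program office rank its grantees on a common scale.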
In order to better target grant monitoring and ensure that monitoring staff have the knowledge and information that would help them focus their monitoring efforts, we recommend that the Secretary of Education take the following three actions:

Develop department-wide guidance on risk assessment, continue efforts to develop new grantee risk assessment tools that can be implemented department-wide, and work with the program offices to ensure these tools are implemented.

Implement a strategy to ensure each program office has staff with sufficient financial monitoring expertise to conduct or assist other program specialists in conducting financial compliance reviews. This could include proceeding with plans for enhanced financial training and also assessing options such as using dedicated staff or contractors to conduct grantee financial reviews.

Develop an easily accessible mechanism for sharing information across all offices about grantees' past and present performance, and an accessible forum for sharing promising practices in grant monitoring, to ensure all program offices are able to effectively and efficiently perform all of their duties and responsibilities.

We provided a draft of this report to the Secretary of Education. Education's comments are presented in appendix II. Education generally agreed with our recommendations and said it is taking various steps to address them. However, Education believed the draft report should provide a more complete and accurate picture of its overall monitoring efforts and submitted technical comments intended to provide a more complete and accurate analysis of its monitoring practices. We believe our draft report provided an accurate portrayal of Education's monitoring and presented appropriate evidence for our conclusions and recommendations. We reviewed Education's technical comments and incorporated them when appropriate.

We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other major contributors to this report are listed in appendix III.

Each pie chart in figure 4 represents a principal office in Education that awards grants to state and local educational agencies, institutions of higher education, and other eligible entities. The chart for each office is sized according to its share of the total amount of Education's grant funding in that office based on fiscal year 2008 appropriations. The program offices we included in our review, and their percentage of principal office grant funds, are shown by the labeled slices. The unlabeled slices represent the program offices that were not included in our review. [Figure 4 slice labels include Academic Improvement and Teacher Quality Programs, $1.9 bil.; Student Achievement and School Accountability, $848 mil.; Office of Special Education Programs, $280 mil.; and additional slices of $15.2 bil., $721 mil., and $684 mil.]

In addition to the individual named above, other GAO staff who made key contributions to this report are Bill Keller, Assistant Director; Joel Marus, Analyst-in-Charge; Travis Hill; Jill Yost; Charles Willson; Kate Van Gelder; Luann Moy; Walter Vance; Jim Rebbe; and James Bennett.

The Department of Education (Education) awards about $45 billion in grants each year to school districts, states, and other entities. In addition, the American Recovery and Reinvestment Act of 2009 provided an additional $97 billion in grant funding.
In a series of reports from 2002 to 2009, Education's Inspector General cited a number of grantees for failing to comply with financial and programmatic requirements of their grant agreements. GAO was asked to determine: (1) what progress Education has made in implementing a risk-based approach to grant monitoring, (2) to what extent Education's program offices have the expertise necessary to monitor grantees' compliance with grant program requirements, and (3) to what extent information is shared and used within Education to ensure the effectiveness of grant monitoring. To do this, GAO reviewed agency documentation related to Education's internal controls and interviewed senior Education officials and staff in 12 of the 34 offices that monitor grants.

In October 2006, Education began to look at ways to improve the efficiency and effectiveness of the department's grant management processes; in particular, it sought ways to more effectively monitor its grants after they were made. In 2007, Education created the Risk Management Service (RMS) to work with all components of the department to ensure that each office has an effective risk management strategy in place. Effective monitoring protocols and tools based on accepted control standards are key to ensuring that waste, fraud, and abuse are not overlooked and program funds are being spent appropriately. Such tools include identifying the nature and extent of grantee risks and managing those risks, having skilled staff to oversee grantees to ensure they are using sound financial practices and meeting program objectives and requirements, and using and sharing information about grantees throughout the organization.

Our review of Education's current grant monitoring processes and controls found that it: (1) Has made uneven progress in implementing a department-wide, risk-based approach to grant monitoring.
Education has not disseminated department-wide guidance on grantee risk assessment, but it has planned some new efforts in this area. In the absence of guidance on a department-wide risk assessment strategy, individual program offices have developed their own strategies for assessing and managing risk that vary in rigor. (2) Has limited financial expertise and training, hindering effective monitoring of grantees' compliance with financial requirements. Education has monitoring tools that aid in reviewing basic financial compliance, but the lack of staff expertise limits the ability to probe more deeply into grantees' use of funds. (3) Lacks a systematic means of sharing information on grantees and promising practices in grant monitoring throughout the department. These shortcomings can lead to weaknesses in program implementation that ultimately result in failure to effectively serve the students, parents, teachers, and administrators those programs were designed to help.
The Missile Defense Agency’s mission is to develop an integrated and layered BMDS to defend the United States, its deployed forces, allies, and friends. The BMDS is expected to be capable of engaging all ranges of enemy ballistic missiles in all phases of flight. This is a challenging expectation, requiring a complex combination of defensive components— space-based sensors, surveillance and tracking radars, advanced interceptors, and a battle management, command, control, and communications component—that work together as an integrated system. A typical scenario to engage an intercontinental ballistic missile (ICBM) would unfold as follows: Infrared sensors aboard early-warning satellites detect the hot plume of a missile launch and alert the command authority of a possible attack. Upon receiving the alert, land- or sea-based radars are directed to track the various objects released from the missile and, if so designed, to identify the warhead from among spent rocket motors, decoys, and debris. When the trajectory of the missile’s warhead has been adequately established, an interceptor—consisting of a kill vehicle mounted atop a booster—is launched to engage the threat. The interceptor boosts itself toward a predicted intercept point and releases the kill vehicle. The kill vehicle uses its onboard sensors and divert thrusters to detect, identify, and steer itself into the warhead. With a combined closing speed on the order of 10 kilometers per second (22,000 miles per hour), the warhead is destroyed above the atmosphere through a “hit to kill” collision with the kill vehicle. To develop a system capable of carrying out such an engagement, MDA, until December 2007, executed an acquisition strategy in which the development of missile defense capabilities was organized in 2-year increments known as blocks. Each block was intended to provide the BMDS with capabilities that enhanced the development and overall performance of the system. 
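As a quick sanity check on the closing-speed figure quoted in the engagement scenario above, the 10 kilometers per second can be converted to miles per hour as follows. This is only a unit-conversion sketch; the 10 km/s figure comes from the text, and only the standard conversion constants are added here.

```python
# Verify that 10 km/s is consistent with the "22,000 miles per hour"
# quoted for the warhead/kill-vehicle closing speed.
METERS_PER_MILE = 1609.344
SECONDS_PER_HOUR = 3600

closing_speed_km_s = 10.0
closing_speed_mph = closing_speed_km_s * 1000 / METERS_PER_MILE * SECONDS_PER_HOUR
# roughly 22,369 mph, consistent with the rounded 22,000 mph in the text
```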
The first 2-year block—Block 2004—fielded a limited initial capability that included early versions of the GMD, Aegis BMD, Patriot Advanced Capability-3, and C2BMC elements. The agency's second 2-year block—Block 2006—culminated on December 31, 2007, and fielded additional BMDS assets. Block 2006 also continued the evolution of Block 2004 by providing improved GMD interceptors, enhanced Aegis BMD missiles, upgraded Aegis BMD ships, a Forward-Based X-Band Transportable radar, and enhancements to C2BMC software. On December 7, 2007, MDA's Director approved a new block construct that will be the basis for all future development and fielding. Table 1 provides a brief description of all elements currently being developed by MDA.

MDA made progress in developing and fielding the BMDS during 2007. Additional assets were fielded and/or upgraded, several tests met planned objectives, and other development activities were conducted. On the other hand, fewer assets were fielded than originally planned, the cost of the block increased, some flight tests were deferred, and the performance of fielded assets could not be fully evaluated. During Block 2006, MDA increased its inventory of BMDS assets while enhancing the system's performance. The agency fielded 14 additional Ground-based interceptors, 12 Aegis BMD missiles designed to engage more advanced threats, 4 new Aegis BMD destroyers, 1 new Aegis BMD cruiser, as well as 8 C2BMC Web browsers and 1 C2BMC suite. In addition, MDA upgraded half of its Aegis BMD ship fleet, successfully conducted four Aegis BMD and two GMD intercept tests, and completed a number of ground tests to demonstrate the capability of BMDS components.
Considering assets fielded during Blocks 2004 and 2006, MDA, by December 31, 2007, had cumulatively fielded a total of 24 Ground-based interceptors, 2 upgraded early-warning radars, an upgraded Cobra Dane surveillance radar, 1 Sea-based X-band radar, 2 Forward-Based X-Band Transportable radars, 21 Aegis BMD missiles, 14 Aegis BMD destroyers, and 3 Aegis BMD cruisers. In addition, MDA had fielded 6 C2BMC suites; 46 warfighter enterprise workstations with situational awareness; BMDS planner and sensor management capabilities; 31 C2BMC Web browsers, 13 with laptop planners; and redundant communications node equipment to connect BMDS elements worldwide. In March 2005, MDA submitted to Congress the number of assets it planned to field during Block 2006. However, increasing costs, technical challenges, and schedule delays prompted the agency to reduce the quantity of planned assets. Consequently, in March 2006, shortly after submitting its fiscal year 2007 budget, MDA notified Congress that it was revising its Block 2006 Fielded Configuration Baseline. Although MDA did not meet its original block fielding goals, it was able in nearly all instances to meet or exceed its revised goals. Of the four elements delivering assets during Block 2006, one—Sensors—was able to meet its original goal. However, two elements—GMD and C2BMC—were able to exceed their revised fielding goals. Table 2 depicts the goals and the number of assets fielded. Although GMD did not meet its original goal of fielding up to 15 interceptors and partially upgrading the Thule early warning radar, the element was able to surpass its revised goal of fielding 12 interceptors. By December 31, 2007, the GMD element fielded 14 interceptors—2 more than planned. To achieve its revised goal, the element’s prime contractor added a manufacturing shift during 2007 and extended the number of hours that certain shifts’ personnel worked. 
These actions allowed the contractor to more than double its interceptor emplacement rate. Last year, we reported that MDA delayed the partial upgrade of the Thule early-warning radar—one of GMD's original goals—until a full upgrade could be accomplished. According to DOD, the full upgrade of Thule is the most economical option and it meets DOD's desire to retain a single configuration of upgraded early warning radars. The Thule early warning radar upgrade is being accomplished by two separate contract awards. Raytheon was awarded a contract in April 2006 to develop and install prime mission equipment, while Boeing was expected to receive a contract in January 2008 to integrate the equipment into the BMDS ground communication network.

In March 2005, MDA included three C2BMC suites as part of its fielding goal for Block 2006. These suites were to be fielded at U.S. European Command, U.S. Central Command, and another location that was to be identified later. Faced with a $30 million reduction in C2BMC's fiscal year 2006 budget, MDA in March 2006 revised this goal to replace the 3 suites with 3 less expensive Web browsers. However, by the end of Block 2006, MDA found an innovative way to increase combatant commands' situational awareness and planning capability. In 2005, the C2BMC program conducted a network load analysis and concluded that situational awareness and planning capability—equivalent to that provided by a suite—could be gained by combining Web browsers and planners. To prove that this approach would work, MDA fielded 4 Web browsers and one planner at the U.S. European Command. MDA learned that this combination of hardware, fielded in the quantities needed to meet a command's needs and connected to an existing server, provided the situational awareness and planning capability of a suite at less cost. MDA extended this approach by fielding one Web browser and one planner at four other locations—U.S. Forces Japan; U.S. Forces Korea; the Commander of U.S.
Strategic Command; and the Commander of the Space and Missile Defense Command. In addition, MDA fielded one suite at U.S. Pacific Command. The Aegis BMD element was able to meet its revised block goals for only one of its two components. The program upgraded all planned ships, but fielded three fewer Aegis BMD Standard Missile-3s (SM-3) than planned. The program did not meet its revised missile goal because three U.S. missiles were delayed into 2008 to accommodate an unanticipated requirement to deliver three missiles to Japan. Figure 1 below depicts the location of current BMDS assets.

MDA's Block 2006 program of work culminated with higher than anticipated costs. In March 2007, we reported that MDA's cost goal for Block 2006 increased by approximately $1 billion because of greater than expected GMD operations and sustainment costs and technical problems. During fiscal year 2007, some prime contractors performing work for the BMDS overran their budgeted costs. To stay within its revised budget, MDA was forced to reduce the amount of work it expected to accomplish during the block. The full cost of the block cannot be determined because of the deferral of work from one block to another. In addition, some MDA prime contractors too often employ a planning methodology that has the potential to obscure the time and money that will be needed to produce the outcomes intended. If the work does not yield the intended results, MDA could incur additional future costs.

While MDA struggled to contain costs during Block 2006, the agency awarded two contractors a large percentage of available fee for performance in cost and/or program management although the contractor-reported data showed declining cost and schedule performance. Both award fee plans for these contractors direct that cost and schedule performance be considered as factors in making the evaluation.
While these factors are important, MDA’s award fee plans provide for the consideration of many other factors in making award fee determinations. To determine if contractors are executing the work planned within the funds and time budgeted, each BMDS program office requires its prime contractor to provide monthly Earned Value Management reports detailing cost and schedule performance. If more work was completed than scheduled and the cost of the work performed was less than budgeted, the contractor reports a positive schedule and cost variance. However, if the contractor was unable to complete all of the work scheduled and needed more funds to complete the work than budgeted, the contractor reports a negative schedule and cost variance. Of course, the results can be mixed. That is, the contractor may have completed more work than scheduled but at a cost that exceeded the budget. As shown in table 3 below, the contractors for the nine BMDS elements collectively overran their fiscal year 2007 budgets by approximately $166 million. We estimate that at completion, the cumulative overrun in the contracts could be between about $1.3 billion and $1.9 billion. Our predictions of final contract costs were developed using formulas accepted within the cost community and were based on the assumption that the contractor will continue to perform in the future as it has in the past. It should also be noted that some contracts include more than Block 2006 work. For example, the STSS contract includes work being accomplished in anticipation of future blocks. Our analysis is presented in table 3 below. Appendix II provides further details on the cost and schedule performance of the contractors outlined in the table. Technical problems and software issues caused several BMDS elements to overrun their fiscal year 2007 budgeted costs. 
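The variance arithmetic described above, and the kind of estimate-at-completion projection accepted within the cost community, can be sketched as follows. The dollar figures are hypothetical and are not actual BMDS contract data; the formulas assume, as our analysis did, that the contractor continues to perform in the future as it has in the past.

```python
# Minimal sketch of the earned value arithmetic described above.
# Dollar figures (in $M) are hypothetical, not actual BMDS contract data.

def cost_variance(bcwp, acwp):
    """Budgeted cost of work performed minus actual cost of work performed.
    Negative means the completed work cost more than budgeted (an overrun)."""
    return bcwp - acwp

def schedule_variance(bcwp, bcws):
    """Budgeted cost of work performed minus budgeted cost of work scheduled.
    Negative means less work was completed than planned."""
    return bcwp - bcws

def estimate_at_completion(bac, bcwp, acwp):
    """A common cost-community projection: budget at completion divided by
    the cumulative cost performance index (CPI = BCWP / ACWP). Assumes
    past performance continues."""
    return bac / (bcwp / acwp)

# Hypothetical contract: $500M total budget; to date, $200M of work earned,
# $220M actually spent, and $210M of work scheduled.
cv = cost_variance(200.0, 220.0)                   # -20.0: overrunning cost
sv = schedule_variance(200.0, 210.0)               # -10.0: behind schedule
eac = estimate_at_completion(500.0, 200.0, 220.0)  # about 550: a roughly $50M overrun projected
```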
In addition, 4 of the 10 contracts we reviewed contained some kind of replanning activity during fiscal year 2007 and the ABL contract was partially rebaselined. Contractors may replan when they conclude that the current plan for completing the effort remaining on the contract is unrealistic. A replan can include reallocating the remaining budget over the rest of the work, realigning the schedule within the contractually defined milestones, and setting either cost or schedule variances to zero or setting both to zero. A rebaseline is similar, but it may also add additional time and/or funding for the remaining work. The ABL contractor was overrunning both its fiscal year 2007 budget and schedule early in the year. Although by year’s end it appears that the contractor recovered, the contractor would have continued to overrun both its budget and its schedule if most of the contract had not been rebaselined. The contractor realized cost and schedule growth as it worked to solve software integration problems in the Beam Control/Fire Control component and dealt with a low-power laser needed for flight tests that was not putting enough energy on the target. After encountering these problems, the ABL contractor did not have sufficient schedule or budget to complete the remaining contract work. Therefore, in May 2007, the program allowed the contractor to rebaseline all of the remaining work devoted to developing, integrating, flight testing, and delivering the ABL prototype. The rebaselining effort added about $253 million to the contract and extended the contract’s period of performance by more than a year. The THAAD prime contractor’s cost overrun of $91.1 million was primarily caused by technical problems related to the element’s missile, launcher, radar, and test components. 
Missile component cost overruns were caused by higher than anticipated costs in hardware fabrication, assembly, and support touch labor for structures, propulsion, and other subassembly components. Additionally, design issues with the launcher’s missile round pallet and the electronics assembly that controls the launcher caused the contractor to experience higher than anticipated labor and material costs. The radar component ended the fiscal year with a negative cost variance as more staff was required than planned to resolve hardware design issues in the radar’s prime power unit. The contractor also experienced negative cost variances with the system test component because the Launch and Test Support Equipment required additional set-up time at the flight test range. The STSS contractor’s $67.7 million fiscal year 2007 cost variance is primarily attributed to problems that occurred during thermal vacuum testing of the first satellite. Since the satellites are legacy hardware built under a former program, there are no spares available for testing. As a result, the contractor needed to handle the parts carefully to avoid damage to the hardware, increasing the time devoted to the test. Further test delays occurred when a number of interface issues surfaced during testing and when the cause of component problems could not be easily traced to their source. The program office believes that the cost variance would have been less if design engineers had been available during testing. Because engineers were not present to quickly identify the cause of component problems, a time-consuming analysis of each problem was needed. In March 2007, we reported that a full accounting of Block 2006 costs was not possible because MDA has the flexibility to redefine block outcomes. 
That is, MDA can delay the delivery of assets or other work activities from block to block and count the work as a cost of the block during which the work is performed, even though the work does not benefit that block. For example, MDA deferred some Block 2004 work until Block 2006 so that it could use the funds appropriated for that work to cover unexpected cost increases caused by technical problems recognized during development, testing, and production. With the deferral of the work, its cost was no longer counted as a Block 2004 cost, but as a Block 2006 cost. As a result, Block 2004's cost was understated and Block 2006's cost is overstated. Because MDA did not track the cost of the deferred work, the agency could not make an adjustment that would have matched the cost with the correct block. The cost of Block 2006 was further blurred as MDA found it necessary to defer some Block 2006 work until a future block. For example, when the STSS contractor overran its fiscal year 2007 budget because of testing problems, the program did not have sufficient funds to launch the demonstration satellites in 2007 as planned. The work is now scheduled for 2008. The consequence of deferring Block 2004 work to Block 2006 and Block 2006 work to 2008 is that the full cost of Block 2006 cannot be determined.

Some MDA prime contractors too often employ a planning methodology that has the potential to obscure the time and money that will be needed to produce the outcomes intended. Contractors typically divide the total work of a contract into small efforts in order to define them more clearly and to ensure proper oversight. Work may be planned in categories including (1) level of effort (LOE)—work that contains tasks of a general or supportive nature that do not produce a definite end product—or (2) discrete work—work that has a definable end product or event.
Level of effort work assumes that if the staff assigned to the effort spend the planned length of time, they will attain the outcome expected. According to earned value experts and the National Defense Industrial Association, while it is appropriate to plan such tasks as supervision or contract administration as LOE, it is not appropriate to plan tasks that are intended to result in a product, such as a study or a software build, as LOE because contractors do not report schedule variances for LOE work. Therefore, when contractors incorrectly plan discrete work as LOE, reports that are meant to allow the government to assess contractor cost and schedule performance may be positive, but the government may not have full insight into the contractor’s progress. The greater the percentage of LOE, the weaker the link between inputs (time and money) and outcomes (end products), which is the essence of earned value analysis. Essentially, depending on the magnitude of LOE, schedule variances at the bottom line can be understated. The significant amount of BMDS work being tracked by LOE may have limited our assessment of the contractors’ performance. That is, the contractor’s performance may appear to be more positive than it would be if work had been correctly planned. In such cases, the government may have to expend additional time and money to achieve the outcomes desired. MDA Earned Value Management officials agreed that some BMDS prime contractors incorrectly planned discrete work as LOE, but the agency is taking steps to remedy this situation so that they can better monitor the contractors’ performance. While it is not possible to state with certainty how much work a contractor should plan as LOE, experts within the government cost community, such as Defense Contract Management Agency officials, agree that LOE levels over 20 percent warrant investigation. According to MDA, many of its prime contractors plan a much larger percentage than 20 percent of their work as LOE. 
Table 4 presents the percentage of work in each BMDS prime contract that is categorized as LOE. The Aegis BMD SM-3, MKV, ABL, and C2BMC contractors planned more than half of certain work as LOE. In several instances, MDA Earned Value Management officials and program office reviewers agreed that some of the LOE work could be redefined into discrete work packages. For example, from January through December 2007, the C2BMC contractor planned 73 percent of its work as LOE. This included activities such as software development and integration and test activities that result in two definable products—software packages and tests. At the direction of the C2BMC Program Office, the C2BMC contractor redefined some contract work, including software development and integration and test activities, as discrete, reducing the amount of LOE on the contract to 52 percent.

The Aegis BMD element also reported a high percentage of LOE for its Standard Missile-3 contract, particularly considering that its products—individual missiles—are quite discrete. In August 2007, the element reported that the contractor had planned 73 percent of the contract work as LOE. The portion of the work that contained this amount of LOE was completed in March 2007 with an underrun of $7.2 million. Although the contractor reported an underrun for this work upon its completion, the high percentage of LOE may have, over the contract period, distorted the contractor's actual cost and schedule performance. Similarly, it is important to note that the amount of LOE for the SM-3 work that is currently ongoing is considerably less. Program officials told us that prior to the commencement of this segment of work, the MDA Earned Value Management Group and program officials recommended that the program minimize the amount of LOE on its contracts. Currently, only 18 percent of the SM-3 contract is considered LOE.
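How a high LOE share can understate a bottom-line schedule variance can be illustrated with a small sketch. The numbers are hypothetical: the 73 percent split echoes the share reported for the C2BMC contract, but the assumed 10 percent slippage on discrete work is invented for illustration.

```python
# Illustration of how LOE planning masks schedule slippage. LOE work earns
# exactly what is scheduled, so it can never report a schedule variance;
# only discrete work can. Figures in $M, hypothetical.

def bottom_line_sv(work_packages):
    """Sum schedule variances across work packages. For LOE packages,
    earned value equals scheduled value by definition, so SV is zero."""
    sv = 0.0
    for wp in work_packages:
        if wp["loe"]:
            sv += 0.0                        # LOE: BCWP == BCWS by definition
        else:
            sv += wp["bcwp"] - wp["bcws"]    # discrete: real progress vs. plan
    return sv

# The same $100M of work planned two ways, with the performed (non-LOE)
# portion 10 percent behind plan in both cases.
fully_discrete = [{"loe": False, "bcws": 100.0, "bcwp": 90.0}]
mostly_loe = [
    {"loe": True,  "bcws": 73.0, "bcwp": 73.0},   # 73% planned as LOE
    {"loe": False, "bcws": 27.0, "bcwp": 24.3},   # discrete portion, 10% behind
]

sv_all_discrete = bottom_line_sv(fully_discrete)  # -10.0: the full slip is visible
sv_mostly_loe = bottom_line_sv(mostly_loe)        # about -2.7: most of the slip is hidden
```

The larger the LOE share, the weaker the link between the reported variance and the actual state of the work, which is why cost-community reviewers treat high LOE percentages as a flag for further investigation.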
MDA uses award fees to encourage its contractors to perform in an innovative, efficient, and effective way in areas considered important to the development of the BMDS. Because award fees are intended to motivate contractor performance for work that is neither feasible nor effective to measure objectively, award fee criteria and evaluations tend to be subjective. Each element's contract has an award fee plan that identifies the performance areas to be evaluated and the methodology by which those areas will be assessed. An award fee evaluation board—made up of MDA personnel, program officials, and officials from key organizations knowledgeable about the award fee evaluation areas—judges the contractor's performance against specified criteria in the award fee plan. The board then recommends to a fee-determining official the amount of fee to be paid. MDA's Director is the fee-determining official for all BMDS prime contracts that we assessed.

During fiscal year 2007, MDA awarded approximately 95 percent, or $606 million, of available award fee to its prime contractors. While the cost, schedule, and technical performance of several contractors appeared to be aligned with their award fee, two contractors were rated as performing very well in the cost and/or program management elements and received commensurate fees even though earned value management data showed that their cost and schedule performance was declining. On the other hand, MDA did not award any fee to the THAAD contractor for its management of contract cost during a time when earned value data showed steadily increasing costs. Although DOD guidance discourages the use of earned value performance metrics in award fee criteria, MDA includes this as a factor in several of its award fee plans. The agency considers many factors in rating contractors' performance and making award fee determinations, including consideration of earned value data that shows cost, schedule, and technical trends.
In addition, MDA has begun to revise its award fee policy to align agency practices more closely with DOD’s current policy that better links performance with award fees. The ABL and Aegis BMD weapon system contractors received a large percentage of the 2007 award fee available to them for the cost and/or program management element. MDA rated the ABL contractor’s performance in cost and program management elements as “very good,” awarding the contractor 88 percent of the fee available in these performance areas. According to the award fee plan, one of several factors that is considered in rating the contractor’s performance as very good is whether earned value data indicates that there are few unfavorable cost, schedule, and/or technical variances or trends. During the February 2006 to January 2007 award fee period, earned value data shows that the contractor overran its budget by more than $57 million and did not complete $11 million of planned work. Similarly, the Aegis BMD weapon system contractor was to be rated as to how effectively it managed its contract’s cost. The award fee plan for this contractor also directs that earned value be one of the factors considered in making such an evaluation. During the fee period that ran from October 2006 through March 2007, MDA rated the contractor’s cost management performance as outstanding and awarded 100 percent of the available fee. Earned value data during this time period indicates that the contractor overran its budget by more than $6 million. MDA did not provide us with more detailed information as to other factors that may have influenced its decision as to the amount of fee awarded to the ABL and Aegis BMD Weapon System contractors. MDA recognizes that there is not always a good link between the agency’s intentions for award fees and the amount of fee being earned by its contractors. 
In an effort to rectify this problem, the agency released a revised award fee policy in February 2007 to ensure its compliance with recent DOD policies that are intended to address award fee issues throughout the Department. Specifically, MDA’s policy directs that every contract’s award fee plan include: criteria for each element of the award fee that are specific enough to enable the agency to evaluate contractor performance and to determine how much fee the contractor can earn for that element—the criteria are to clearly define the performance that the government expects from the contractor for the applicable award fee period, and the criteria for any one element must be distinguishable from the criteria for other elements of the award fee; an emphasis on rewarding results rather than effort or activity; and an incentive to meet or exceed agency requirements. Additionally, MDA’s policy calls for using the Award Fee Advisory Board not only to make award fee recommendations to the fee-determining official, but also to report biannually to MDA’s Director as to whether award fee recommendations are consistent with DOD’s Contractor Performance Assessment Report—a report that provides a record, both positive and negative, on a given contract for a specific period of time. Appendix II of this report provides additional information on BMDS prime contracts and award fees. During 2007, several BMDS programs experienced setbacks in their test schedules. The Aegis BMD, THAAD, ABL, STSS, and C2BMC elements experienced test delays, but all were able to achieve their primary test objectives. GMD, on the other hand, experienced a schedule delay caused by an in-flight target anomaly that prevented full accomplishment of one major 2007 test objective. The remaining three elements—MKV, KEI, and Sensors—were able to execute all scheduled activities as planned.
The Aegis BMD, THAAD, C2BMC, ABL, and STSS elements continued to achieve important test objectives in 2007, although some tests were delayed. Aegis BMD proved its capability against more advanced threats, while THAAD proved that it could intercept both inside and outside of the atmosphere. C2BMC completed a number of software and system-level tests. The ABL and STSS programs saw delays in important ground tests, but ABL was able to begin flight testing its beam control/fire control component using a low-power laser in 2007 and STSS completed thermal vacuum testing of both satellites by the end of the year. However, the delays in the ABL and STSS programs may hold up their incorporation into the BMDS during future blocks. Although the Aegis BMD program encountered some test delays, it was able to achieve all fiscal year 2007 test objectives. In December 2006, the program stopped a test after a crew member changed the ship’s doctrine parameters just prior to target launch, preventing the ship’s fire control system from conducting the planned engagement. During this test event, the weapon system failed to recognize the test target as a threat, which prevented the SM-3 missile from launching. Also, according to program officials, the system did not provide a warning message, which contributed to the mission being aborted prematurely and prevented the Aegis BMD program from meeting its test objectives. However, 4 months later, the same flight test event was successfully completed and all test objectives were met. During that event, the program was able to demonstrate that the Aegis BMD could simultaneously track and intercept a ballistic missile and an anti-ship cruise missile. In June 2007, the program successfully completed its first flight test utilizing an Aegis BMD destroyer to intercept a separating target, and in November, the program conducted its first test that engaged two ballistic missile targets simultaneously.
During the last test, Aegis missiles onboard an Aegis BMD cruiser successfully intercepted two short-range non-separating targets and achieved all primary test objectives outlined for this event. The THAAD program expected to complete four flight tests prior to the end of fiscal year 2007 but was only able to complete three. Two tests successfully resulted in intercepts of short-range ballistic missiles at different levels of the atmosphere. The third test successfully demonstrated component capability in a high-pressure environment and was the lowest-altitude interceptor verification test to date. However, the fourth test was delayed, initially because of limited target availability caused by late modifications to the target hardware configuration. Additionally, during pre-flight testing, the contractor found debris in the interceptor. This caused the interceptor to be returned to the factory for problem investigation. While the problem was corrected and the interceptor was returned to the test range in only 7 days, the test was rescheduled because the test range was not available before the end of fiscal year 2007. During fiscal year 2007, the C2BMC program completed BMDS-level ground and flight tests, successfully achieving its test objectives of verifying the capabilities and readiness of a new software configuration. The software is designed to provide the BMDS with improved defense planning capability, including better accuracy and speed; a new operational network; and additional user displays. Because of the integral nature of the C2BMC product, problems encountered in some elements’ test schedules have a cascading effect on C2BMC’s test schedule. Even though this limited C2BMC testing, a review of the integrated and distributed ground test data resulted in the decision to field the software in December 2007. ABL achieved most of its test objectives during fiscal year 2007, but experienced delays during Block 2006 that deferred future BMDS program decisions.
The program experienced a number of technical problems during fiscal year 2006 that pushed some planned activities into fiscal year 2007. One such activity was the execution of the program’s first of four key knowledge points—a ground test to demonstrate ABL’s ability to acquire and track a target while performing atmospheric compensation. The test was conducted in December 2006, 3½ months later than planned. At the culmination of the test, program officials noted two problems. First, the system’s beam control/fire control software was not integrated as anticipated. In addition, the energy that the low-power laser placed on the target during the test was not optimal. According to program officials, both of these issues were resolved before the system began flight testing the full beam control/fire control component in February 2007. However, the delays caused the program to further postpone a key lethality demonstration—a demonstration in which the ABL will attempt to shoot down a short-range ballistic missile—until the last quarter of fiscal year 2009. This demonstration is important to the program because it is the point at which MDA will decide the program’s future. Although the ABL program experienced some setbacks with its first key knowledge point, it was able to meet all objectives for each subsequent knowledge point. In addition to the first knowledge point, the program planned to demonstrate three additional knowledge points during fiscal year 2007. The second knowledge point was contingent upon completion of the first. To demonstrate the achievement of the two knowledge points, the contractor performed a flight test that showed the low-power laser was integrated and the beam control/fire control functioned sufficiently to perform target tracking and atmospheric compensation against an airborne target board.
The third knowledge point was completed 3 months ahead of the planned 2007 schedule and demonstrated that ABL’s optical subsystem was adequate to support its high-power laser system. The fourth knowledge point—the completion of a series of flight tests to demonstrate the performance of the low-power laser system in flight—was completed in August 2007. Delays in the STSS test program, along with funding shortages, postponed the planned 2007 launch of the program’s demonstration satellites. The STSS program is integrating two demonstration satellites with sensor payloads from legacy hardware developed under a former program. The use of legacy hardware has complicated the test program because spares needed for testing are not available. In order to preserve the condition of the legacy components, the program must exercise caution in handling the components to prevent damage, which has caused delays in testing. Additionally, a thermal vacuum test on the first space vehicle, to assess the ability of the satellite to operate in the cold vacuum of space, took twice as long as scheduled, due to a number of interface issues. Although the program was able to complete the integration and test of both demonstration satellites in 2007—major objectives for the program—funds were not available to launch the satellites as planned. Program officials believe that the satellites could be launched as early as April 2008 and as late as July 2008, 1 year later than originally scheduled. According to the program office, there is no margin in the 2008 budget, so any unexpected issues could put the 2008 launch date at risk. The delays in launching the STSS demonstration satellites do not impact MDA’s Block 2006 fielding plans as the satellites are intended to demonstrate a surveillance and tracking capability and do not provide any operational capability during the block.
However, the delay in launching the demonstration satellites is causing a delay in MDA’s ability to initiate development of an operational constellation, which may delay a BMDS global midcourse tracking capability. Despite delays in hardware and software testing and integration, other parts of the STSS program have proceeded according to schedule. Lessons learned from the thermal vacuum test for the first satellite’s sensor payload facilitated the completion of thermal vacuum testing of the second satellite’s payload in November 2007. Additionally, command and control capabilities of the ground segment were demonstrated and the second part of the acceptance test of STSS ground components was completed in September 2007. A target anomaly prevented the GMD element from achieving all 2007 objectives. The GMD program planned to conduct three flight tests—two intercept attempts and one radar characterization test—but was only able to conduct the radar test and one intercept test. The radar characterization test was conducted in March 2007. The target was launched from Vandenberg Air Force Base and was successfully tracked by the SBX radar and the radars of two Aegis BMD ships. During the test, officials indicated the SBX exhibited some anomalous behavior, yet was able to collect target tracking data and successfully transmit the information to the C2BMC element and the GMD fire control system at DOD’s Missile Defense Integration and Operations Center. No live interceptor was launched. However, an intercept solution was generated and simulated interceptor missiles were “launched” from Fort Greely, Alaska. To address the anomalous behavior, MDA adjusted software and performance parameters of the SBX radar. In May 2007, the program attempted an intercept test, but a key component of the target malfunctioned. For that reason, the weapon system did not release the Ground-based interceptor and program officials declared the flight test a “no test” event.
To date, program officials have not determined the root cause of the malfunction. In September 2007, the program successfully conducted a re-test and achieved an intercept of the target using target tracking data provided by the Beale upgraded early warning radar. MDA test officials told us that aging target inventory could have contributed to the target anomaly. The officials explained that some targets in MDA’s inventory are more than 40 years old and their reliability is relatively low. Target officials told us that they are taking preventive actions to avoid similar anomalies in the future. The time needed to complete the first 2007 intercept delayed GMD’s second planned intercept attempt until at least the second quarter of fiscal year 2008. The delayed test was to have determined whether the SBX radar could provide data in “real time” that could be used by the GMD fire control component to develop a weapon task plan. Although the weapon task plan was not developed in real time during 2007, GMD was able to demonstrate that the SBX radar could plan an engagement when the target was live but the interceptor was simulated. During 2007, the KEI program redefined its development efforts and focused on near-term objectives. Also, the MKV program redefined its strategy to acquire multiple kill capability. Once redefined, these programs conducted all planned activities as scheduled and each was able to meet all planned objectives. In addition, the Sensors program successfully completed all planned tests. In June 2007, MDA directed the KEI program to focus on two near-term objectives—the development of its booster and its 2008 booster flight test. Some work, such as development of the fire control and communications and mobile launcher, was deferred into the future. 
During fiscal year 2007, the KEI program conducted all planned test activities, including booster static fire tests that demonstrated the rocket motor’s performance in induced environments and wind tunnel tests that gathered data to validate aerodynamic models for the booster flight controls. MKV officials redefined their acquisition strategy by employing a parallel path to develop multiple kill vehicles for the GMD and KEI interceptors and the Aegis BMD SM-3 missile. MDA initiated the MKV program in 2004 with Lockheed Martin. In 2007, the MKV program added Raytheon as a second payload provider. According to program officials, the two payload providers may use different technologies and design approaches, but both adhere to the agency’s goal of delivering common, modular MKV payloads for integration with all BMDS midcourse interceptors. In fiscal year 2007, Lockheed Martin successfully conducted static fire tests of its Divert Attitude Control System as planned. Additionally, Raytheon, funded with excess KEI funds made available when that program was replanned, began concept development. Raytheon did not have any major test activities scheduled for the fiscal year. During 2007, the Sensors program focused on testing FBX-T radars that were permanently emplaced and newly produced. After the first FBX-T was moved from its temporary location in Japan to its permanent location in Shariki, Japan, various ground tests and simulations were conducted to ensure its interoperability with the BMDS. The program also delivered a second FBX-T to Vandenberg Air Force Base, where its tracking capability is being tested against targets of opportunity. According to program officials, a decision has not been made as to where the second FBX-T radar will be permanently located. As we reported in March 2007, MDA altered its original Block 2006 performance goals commensurate with the agency’s reductions in the delivery of fielded assets. 
However, insufficient data exists to fully assess whether MDA achieved its revised performance goals. The performance of some fielded assets is also questionable because parts identified by auditors in MDA’s Office of Quality, Safety, and Mission Assurance as less reliable or inappropriate for use in space have not yet been replaced. In addition, tests of the GMD element have not included target suite dynamic features and intercept geometries representative of the operational environment in which GMD will perform its mission, and BMDS tests allow only a partial assessment of the system’s effectiveness, suitability, and survivability. MDA uses a combination of simulations and flight tests to determine whether performance goals are met. Models and simulations are needed to predict performance because the cost of tests prevents the agency from conducting sufficient testing to compute statistical probabilities of performance. The models and simulations that project BMDS capability against intercontinental ballistic missiles present several problems. First, the models and simulations that predict performance of the GMD element have not been accredited by an independent agency. According to the Office of the Director, Operational Test and Evaluation, without accredited models GMD’s performance cannot be predicted with respect to (1) variations in threat parameters that lie within the bounds of intelligence estimates, (2) stressing ground-based interceptor fly-outs and exoatmospheric kill vehicle engagements, and (3) variations in natural environments that lie within meteorological norms. Second, too few flight tests have been completed to ensure the accuracy of the models’ and simulations’ predictions. Since 2002, MDA has completed only two end-to-end tests of engagement sequences that the GMD element might carry out.
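The report's point that too few flight tests limit statistical confidence can be roughly quantified with a standard one-sided binomial lower bound. The sketch below is illustrative only: it treats each end-to-end test as an independent pass/fail trial, uses the textbook Clopper-Pearson closed form for the all-successes case, and adopts a 95 percent confidence level as an assumption; none of this is MDA's or DOT&E's stated methodology.

```python
def reliability_lower_bound(successful_tests, confidence=0.95):
    """One-sided lower confidence bound on intercept probability when every
    one of `successful_tests` end-to-end tests succeeds (the closed-form
    all-successes case of the Clopper-Pearson interval)."""
    alpha = 1.0 - confidence
    return alpha ** (1.0 / successful_tests)

# With only two successful end-to-end tests, the demonstrated reliability
# floor is low; it rises slowly as more tests are flown.
for n in (2, 5, 10, 20):
    bound = reliability_lower_bound(n)
    print(f"{n:2d} successful tests -> demonstrated reliability >= {bound:.2f}")
```

Under these illustrative assumptions, two successful tests demonstrate an intercept probability of only about 0.22 at 95 percent confidence, which is consistent with test officials' view that more flight tests are needed before GMD's ability to repeatedly intercept ICBMs can be asserted with high confidence.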
While these tests provide some evidence that the element can work as intended, MDA must test other engagement sequences, which would include other GMD assets that have not yet participated in an end-to-end flight test. For example, MDA has not yet used the Sea-based X-band radar as the primary sensor in an end-to-end test. Additionally, officials in the Office of the Director, Operational Test and Evaluation told us that MDA needs more flight tests to have a high level of confidence that GMD can repeatedly intercept incoming ICBMs. Further testing is also needed to demonstrate that Aegis BMD can provide real-time, long-range surveillance and tracking data for the GMD element. In March 2006, we reported that the cancellation of a GMD flight test prevented MDA from exercising Aegis BMD’s long-range surveillance and tracking capability in a manner consistent with an actual defensive mission. Program officials informed us that the Aegis BMD is capable of performing this function and has demonstrated its ability to surveil and track ICBMs in several exercises. However, MDA has not yet shown that Aegis BMD can communicate this data to GMD during a live intercept engagement and that GMD can use the data to prepare a weapon task plan for actual—rather than simulated—interceptors. Officials in the Office of the Director for Operational Test and Evaluation told us that having Aegis BMD perform long-range surveillance and tracking during a live engagement would provide the data needed to more accurately gauge performance. Similarly, MDA has not yet proved that the FBX-T radar can provide real-time, long-range surveillance and tracking data for the GMD element. On several occasions, MDA has shown that the FBX-T can acquire and track targets of opportunity, but the radar’s data has not yet been used to develop a weapon system task plan for a GMD intercept engagement.
Because the radar’s permanent location in Japan does not allow MDA to conduct tests in which the FBX-T is GMD’s primary fire control radar, the Director, Operational Test and Evaluation, recommended in 2006 that, before a second FBX-T is emplaced at its permanent location, MDA test the radar’s capability to act as GMD’s primary sensor in an intercept test. Confidence in the performance of the BMDS is also reduced because of unresolved GMD technical and quality issues. The GMD element has experienced the same anomaly during each of its flight tests since 2001. This anomaly has not yet prevented the program from achieving any of its primary test objectives, but to date neither its source nor solution has been clearly identified or defined. Program officials plan to continue their assessment of test data to identify the anomaly’s root cause and have implemented design changes to mitigate the effects and reduce risks associated with the anomaly. The reliability of emplaced GMD interceptors raises further questions about the performance of the BMDS. Quality issues discovered by auditors in MDA’s Office of Quality, Safety, and Mission Assurance nearly 3 years ago have not yet been rectified in all fielded interceptors. According to the auditors, inadequate mission assurance and quality control procedures may have allowed less reliable parts or parts inappropriate for use in space to be incorporated into the manufacturing process, thereby limiting the reliability and performance of some fielded assets. The program has strengthened its quality control processes and is taking several steps to mitigate similar risks in the future. These steps include analyzing failed components, implementing corrective action with vendors, and analyzing system operational data to determine which parts are affecting weapon system availability.
MDA has begun to replace the questionable parts in the manufacturing process and to purchase the parts that it plans to replace in fielded interceptors. However, it will not complete the retrofit effort until 2012. Additionally, test officials told us that although the end-to-end GMD test conducted during 2007 demonstrated that for a single engagement sequence military operators could successfully engage a target, the target represented a relatively unsophisticated threat because it lacked specific target suite dynamic features and intercept geometry. Other aspects of the test were more realistic—such as closing velocity and fly-out range—but these were relatively unchallenging. While the test parameters may be acceptable in a developmental test, they are not fully representative of an operational environment and do not provide high confidence that GMD will perform well operationally. Finally, because BMDS assets are being fielded based on developmental tests, which are not always representative of the operational environment, operational test officials have limited test data to determine whether all BMDS elements/components being fielded are effective and suitable for and survivable on the battlefield. MDA has added operational test objectives to its developmental test program, but many of the objectives are aimed at proving that military personnel can operate the equipment. In addition, limited flight test data is available for characterizing the BMDS’ capability against intercontinental ballistic missiles. Up until 2007, the overall lack of data limited the Office of the Director of Operational Test and Evaluation, in annual assessments, to commenting on the operational realism of tests and recommending other tests needed to characterize system effectiveness and suitability. 
In 2007, tests provided sufficient information to partially quantify the effectiveness and suitability of the BMDS' midcourse capability (Aegis BMD and GMD) and to fully characterize a limited portion of the BMDS' terminal capability (PAC-3). However, according to the Office of the Director of Operational Test and Evaluation, further testing that incorporates realistic operational objectives and verification, validation, and accreditation of models and simulations will be needed before the performance, suitability, and survivability of the BMDS can be fully characterized. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development of the BMDS. While the flexibility has enabled MDA to be agile in decision making and to field an initial capability relatively quickly, it has diluted transparency into MDA’s acquisition processes, making it difficult to conduct oversight and hold the agency accountable for its planned outcomes and costs. As we reported in 2007, MDA operates with considerable autonomy to change goals and plans, which makes it difficult to reconcile outcomes with original expectations and to determine the actual cost of each block and of individual operational assets. In the past year, MDA has begun implementing two initiatives—a new block construct and a new executive board—to improve transparency, accountability, and oversight. These initiatives represent improvements over current practices, although they provide for less oversight than statutes provide for other major defense acquisition programs. In addition, Congress has directed that MDA’s budget materials, after 2009, request funds using the appropriation categories of research, development, test, and evaluation; procurement; operations and maintenance; and military construction, which should promote accountability for and transparency of the BMDS. In 2007, MDA redefined its block construct to better communicate its plans and goals to Congress.
The agency’s new construct is based on fielding capabilities that address particular threats as opposed to the biennial time periods that were the agency’s past approach to development and fielding. MDA’s new block construct makes many positive changes. These include establishing unit cost for selected block assets, including in a block only those elements or components that will be fielded during the block, and abandoning the practice of deferring work from block to block. Table 5 illustrates MDA’s new block construct for fielding the BMDS. MDA’s new block construct provides a means for comparing the expected and actual unit cost of assets included in a block. As we noted in our fiscal year 2006 report, MDA’s past block structure did not estimate unit costs for assets considered part of a given block or categorize block costs in a manner that allowed calculations of expected or actual unit costs. For example, the expected cost of Block 2006 GMD interceptors emplaced for operational use was not separated from other GMD costs. Even if MDA had categorized the interceptors’ cost, it would have been difficult to determine the exact cost of these interceptors because MDA acquires and assembles components into interceptors over several blocks and it has been difficult to track the cost of components to a specific group of interceptors. Under the new block construct, MDA expects to develop unit costs for selected block assets—such as THAAD interceptors—and request an independent verification of that unit cost from DOD’s Cost Analysis Improvement Group. MDA will also track the actual unit cost of the assets and report significant cost growth to Congress. However, MDA has not yet determined for which assets a unit cost will be developed and how much a unit cost must increase before that increase is reported to Congress. The new construct also makes it clearer as to which assets should be included in a block. 
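The unit cost tracking that the new block construct introduces amounts to simple arithmetic: compare an asset's actual unit cost against its baselined estimate and report growth beyond some threshold. The sketch below is hypothetical throughout: the $9.0 million baseline, the $10.8 million actual cost, and the 15 percent reporting threshold are all invented for illustration, since the report notes that MDA had not yet decided which assets get unit costs or how much growth triggers a report to Congress.

```python
def unit_cost_growth_pct(baseline_unit_cost, current_unit_cost):
    """Percent growth of an asset's unit cost over its baselined estimate."""
    return (current_unit_cost - baseline_unit_cost) / baseline_unit_cost * 100.0

# Hypothetical example: an interceptor baselined at $9.0M per unit whose
# tracked actual unit cost has risen to $10.8M. The 15 percent reporting
# threshold is illustrative only; the report notes MDA had not yet set one.
REPORTING_THRESHOLD_PCT = 15.0
growth = unit_cost_growth_pct(9.0, 10.8)
must_report = growth >= REPORTING_THRESHOLD_PCT
print(f"unit cost growth: {growth:.1f}%  report to Congress: {must_report}")
```

The design point is that this comparison is only possible because the new construct both estimates a unit cost for selected block assets up front and tracks the actual unit cost afterward, neither of which the prior block structure supported.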
Under the agency’s prior block construct, assets included in a given block were sometimes not planned for delivery until a later block. For example, as we reported in March 2007, MDA included costs for ABL and STSS as part of its Block 2006 cost goal although those elements did not field or plan to field assets during Block 2006. Agency officials told us those elements were included in the block because they believed the elements could offer some emergency capability during the block timeframe. Finally, the new block construct should improve the transparency of each block’s actual cost. Under its prior construct, MDA deferred work from one block to another; but it did not track the cost of the deferred work so that it could be attributed to the block that it benefited. For example, MDA deferred some work needed to characterize and verify the Block 2004 capability until Block 2006 and counted the cost of those activities as a cost of Block 2006. By doing so, it understated the cost of Block 2004 and overstated the cost of Block 2006. Because MDA did not track the cost of the deferred work, the agency was unable to adjust the cost of either block to accurately capture the cost of each. MDA officials told us that under its new block construct, MDA will no longer transfer work, along with its cost, to a future block. Rather, a block of work will not be considered complete until all work that benefits a block has been completed and its cost has been properly attributed to that block. Although improvements are inherent in MDA’s new block construct, the new construct will not dispel all transparency and accountability concerns. MDA has not yet estimated the full cost of a block. Also, MDA has not addressed whether it will transfer assets produced during a block to a military service for production and operation at the block’s completion, or whether MDA will continue its practice of concurrently developing and fielding BMDS elements and components. 
According to its fiscal year 2009 budget submission, MDA does not plan to initially develop a full cost estimate for any BMDS block. Instead, when a firm commitment can be made to Congress for a block of capability, MDA will develop a budget baseline for the block. This budget will include anticipated funding for each block activity that is planned for the 6 years included in DOD’s Future Years Defense Plan. MDA officials told us that if the budget for a baselined block changes, MDA plans to report and explain those variations to Congress. At some future date, MDA does expect to develop a full cost estimate for each committed block and is in discussions with DOD’s Cost Analysis Improvement Group on having the group verify each estimate; but documents do not yet include a timeline for estimating block cost or having that estimate verified. For accountability, other DOD programs are required to provide the full cost of developing and producing their weapon system before system development and demonstration can begin. Until the cost of a block of BMDS capability is fully known, it will be difficult for decision makers to compare the value of investing in a block of BMDS capability to the value of investing in other DOD programs or to determine whether the block of capability that is being initiated will be affordable over the long term. The new block construct does not address whether the assets included in a block will be transferred at the block’s completion to a military service for production and operation. Officials representing multiple DOD organizations recognize that the transfer criteria established in 2002 are neither complete nor clear given the BMDS’s complexity. Without clear transfer criteria, MDA has transferred the management of only one element—the Patriot Advanced Capability-3—to the military for production and operation. 
Joint Staff officials told us that for all other elements, MDA and the military services have been negotiating the transition of responsibilities for the sustainment of fielded elements—a task that has proven arduous and time consuming. Although MDA documents show that under its new block construct the agency should be ready at the end of each block to deliver BMDS components that are fully mission-capable, MDA officials could not tell us when MDA’s Director will recommend that management of components, including production responsibilities, be transferred to the military. MDA officials maintain that even though a particular configuration of a weapon could be fully mission-capable, that configuration may never be produced because it could be replaced by a new configuration. Yet, by the block’s end, a transfer plan for the fully mission-capable configuration will have been drafted, developmental ground and flight tests will be complete, elements and components will be certified for operations, and doctrine, organization, training, materiel, leadership, personnel, and facilities are expected to be in place. Another issue not addressed under MDA’s new block construct is whether the concurrent development and fielding of BMDS elements and/or components will continue. Fully developing a component or element and demonstrating its capability prior to production increases the likelihood that the product will perform as designed and can be produced at the cost estimated. To field an initial capability quickly, MDA accepted the risk of concurrent development and fielding during Block 2004. For example, by the end of Block 2004, the agency realized that the performance of some Ground-based interceptors could be degraded because the interceptors included inappropriate or potentially unreliable parts. MDA has begun the process of retrofitting these interceptors, but work will not be completed until 2012.
Meanwhile there is a risk that some interceptors might not perform as designed. MDA also continued to accept this risk during Block 2006 as it fielded assets before they were fully tested. MDA has not addressed whether it will accept similar performance risks under its new block construct or whether it will fully develop and demonstrate all elements/components prior to fielding. In March 2007, the Deputy Secretary of Defense established a Missile Defense Executive Board (MDEB) to recommend and oversee implementation of strategic policies and plans, program priorities, and investment options for protecting the United States and its allies from missile attacks. The MDEB was also to replace existing groups and structures, such as the Missile Defense Support Group (MDSG). However, while it has some oversight responsibilities, the MDEB was not established to provide full oversight of the BMDS program and it would likely be unable to carry out this mission even if tasked to do so. The MDEB will not receive some information that the Defense Acquisition Board relies upon to make program recommendations, and in other cases, MDA does not plan to seek the MDEB’s approval before deciding on a course of action. In addition, there are parts of the BMDS program for which there will be no baseline against which progress can be measured, which makes oversight difficult. According to its charter, the MDEB is vested with more responsibility than its predecessor, the MDSG. When the MDSG was chartered in 2002, it was to provide constructive advice to MDA’s Director. However, the Director was not required to follow the advice of the group. According to a DOD official, although the MDSG met many times initially, it did not meet after June 2005. This led, in 2007, to the formation of the MDEB. This board’s mission is to review and make recommendations on MDA’s comprehensive acquisition strategy to the Deputy Secretary of Defense. 
It is also to provide the Under Secretary of Defense, Acquisition, Technology and Logistics, with a recommended strategic program plan and a feasible funding strategy based on "business case" analysis that considers the best approach to fielding integrated missile defense capabilities in support of joint MDA and warfighter objectives. The MDEB will be assisted by four standing committees. These committees, which are chaired by senior-level officials from the Office of the Secretary of Defense and the Joint Staff, could play an important oversight role as they are expected to make recommendations to the MDEB, which in turn will recommend courses of action to the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD AT&L) and the Director, MDA, as appropriate. The following table identifies the chair of each standing committee as well as key committee functions. The MDEB will not have access to all information normally available to oversight bodies. For other major defense acquisition programs, the Defense Acquisition Board must approve the program's progress through the acquisition cycle. Further, before a program can enter the System Development and Demonstration phase of the cycle, statute requires that certain information be developed. This information is then provided to the Defense Acquisition Board. However, in 2002, the Secretary of Defense allowed MDA to defer application of the defense acquisition system, which among other things requires programs to follow a defined acquisition cycle and obtain approval before advancing from one phase of the cycle to another. Because MDA does not follow this cycle, it does not enter System Development and Demonstration and it does not trigger the statutes requiring the development of information that the Defense Acquisition Board uses to inform its decisions.
For example, most major defense acquisition programs are required by statute to obtain an independent verification of program cost prior to beginning system development and demonstration, and/or production and deployment. Independent life-cycle cost estimates provide confidence that a program is executable within its estimated cost, given other DOD-wide budget demands. Although MDA plans to develop unit costs for selected block assets and request that DOD's Cost Analysis Improvement Group verify the unit costs, the agency does not initially plan to develop a block cost estimate and, therefore, cannot seek an independent verification of that cost. In addition, even when MDA estimates block costs, the agency will not be required to obtain an independent verification of that cost, because, as noted earlier, the BMDS program operates outside of DOD's acquisition cycle. Although not required, MDA officials told us that they have initiated discussions with the Cost Analysis Improvement Group on independent verifications of block cost estimates. Statute also requires an independent verification of a system's suitability for and effectiveness on the battlefield before a program can proceed beyond low-rate initial production. After the test is completed, the Director for Operational Test and Evaluation assesses whether the test was adequate to support an evaluation of the system's suitability and effectiveness for the battlefield, whether the test showed the system to be acceptable, and whether any limitations in suitability and effectiveness were noted. However, a comparable assessment of the BMDS assets being produced for fielding will not be available to the MDEB.
As noted earlier, the limited amount of testing completed, which has been primarily developmental in nature, and the lack of verified, validated, and accredited models and simulations prevent the Director of Operational Test and Evaluation from fully assessing the effectiveness, suitability, and survivability of the BMDS in annual assessments. MDA will also make some decisions without approval from the MDEB or any higher-level DOD official. Although the charter of the MDEB includes the mission to make recommendations to MDA and the Under Secretary of Defense for AT&L on investment options, program priorities, and MDA's strategy for developing and fielding an operational missile defense capability, the MDEB will not have the opportunity to review and recommend changes to BMDS blocks. According to a briefing on the business rules and processes for MDA's new block structure, the decision to initiate a new block of BMDS capability will be made by MDA's Director. Also, cost, schedule, and performance parameters will be established by MDA when technologies that the block depends upon are mature, a credible cost estimate can be developed, funding is available, and the threat is both imminent and severe. The Director will inform the MDEB as well as Congress when a new block is initiated, but he will not seek the approval of either. Finally, there will be parts of the BMDS program that will be difficult for the MDEB to oversee because of the nature of the work being performed. MDA plans to place any program that is developing technology in a category known as Capability Development. These programs, such as ABL, KEI, and MKV, will not have a firm cost, schedule, or performance baseline. This is generally true for technology development programs in DOD because they are in a period of discovery, which makes schedule and cost difficult to estimate.
On the other hand, the scale of the technology development in BMDS is unusually large, ranging from $2 billion to about $5 billion a year—eventually comprising nearly half of MDA's budget by fiscal year 2012. The MDEB will have access to the budgets planned for these programs over the next 5 or 6 years, each program's focus, and whether the technology is meeting short-term key events or knowledge points. But without some kind of baseline for matching progress with cost, the MDEB will not know how much more time or money will be needed to complete technology maturation. MDA's experience with the ABL program provides a good example of the difficulty in estimating the cost and schedule of technology development. In 1996, the ABL program believed that all ABL technology could be demonstrated by 2001 at a cost of about $1 billion. However, MDA now projects that this technology will not be demonstrated until 2009 and its cost has grown to over $5 billion. While the uncertainties of technology development must be recognized, some organizations suggest ways to establish a baseline appropriate for such efforts. For example, the Air Force Research Laboratory suggested a methodology to estimate a technology's cost once analytical and laboratory studies physically validate analytical predictions of separate elements of the technology. In an effort to further improve oversight, the Joint Requirements Oversight Council proposed a plan to transition the BMDS into standard DOD processes. In August 2007, the Vice Chairman of the Joint Chiefs of Staff and Joint Requirements Oversight Council Chairman requested that the Deputy Secretary of Defense approve a proposal to return MDA to the Joint Capabilities Integration and Development System process and direct the Joint Requirements Oversight Council to validate BMDS capabilities. The Vice Chairman believed that the council should exercise oversight of MDA in order to improve Department-wide capability integration.
More specifically, he noted the following:
- In 2002, the Secretary of Defense exempted the BMDS program from the traditional requirements generation process to expedite fielding the system as soon as practicable. Now that an initial capability for homeland defense has been deployed, there is no longer the same need for the flexibility provided by the requirements exemption.
- The current process, with MDA exempted, does not allow the Joint Requirements Oversight Council to provide appropriate military advice or to validate missile defense capabilities.
- Without this change, there is increasing potential that MDA-fielded systems will not be synchronized with other air and missile defense capabilities being developed.
- The current process hinders the military departments' ability to plan and program resources for fielding and sustainment of MDA-developed systems.
In responding to the proposal, the Acting Under Secretary of Defense for AT&L recommended that the Deputy Secretary of Defense delay his approval of the Joint Staff's proposal until the MDEB could review the proposal and provide a recommendation. However, he agreed that more Joint Requirements Oversight Council involvement was necessary for the BMDS, although he was not sure that returning BMDS to standard DOD processes was the appropriate solution to the agency's oversight issues. Instead, he noted that the Deputy Secretary of Defense recently established the MDEB to recommend and oversee the implementation of strategic policies and plans, program priorities, and investment options for the BMDS. He stated that since the MDEB is tasked with determining the best means of managing the BMDS throughout its life cycle, it should consider the Joint Staff's proposal. In an effort to improve the transparency of MDA's acquisition processes, Congress has directed that MDA's budget materials delineate between funds needed for research, development, test, and evaluation; procurement; operations and maintenance; and military construction.
Using procurement funds will mean that MDA generally will be required to adhere to congressional policy that assets be fully funded in the year of their purchase, rather than incrementally funded over several years. The Congressional Research Service reported in 2006 that "incremental funding fell out of favor because opponents believed it could make the total procurement costs of weapons and equipment more difficult for Congress to understand and track, create a potential for DOD to start procurement of an item without necessarily stating its total cost to Congress, permit one Congress to ‘tie the hands’ of future Congresses, and increase weapon procurement costs by exposing weapons under construction to uneconomic start-up and stop costs." Our analysis of MDA-developed costs, which are presented in table 7, also shows that incremental funding is usually more expensive than full funding, in part, because inflation decreases the buying power of the dollar each year. The National Defense Authorization Act for Fiscal Year 2008 directed MDA to submit a plan to transition from using research and development funds exclusively to using procurement, operations and maintenance, military construction, and research and development funds by March 1, 2008. However, it allowed MDA to continue to use research and development funds in fiscal year 2009 to incrementally fund previously approved missile defense assets. The act also directed that beginning in fiscal year 2009, the MDA budget request include, in addition to RDT&E funds, military construction funds and procurement funds for some long lead items such as those required for the third and fourth THAAD fire units and Aegis BMD SM-3 Block 1A missiles.
MDA did not request long lead funding for either THAAD or SM-3 missiles in its fiscal year 2009 budget because MDA has slipped the schedule for procuring fire units 3 and 4 by one year, and the National Defense Authorization Act for Fiscal Year 2008 was not signed in time to allow MDA to adjust its budget request for SM-3 missiles. Congress also provided MDA with the authority to use procurement funds for fiscal years 2009 and 2010 to field its BMDS capabilities on an incremental funding basis, without any requirement for full funding. Congress has granted similar authority to other DOD programs. In the conference report accompanying the Fiscal Year 2008 National Defense Authorization Act, the conferees indicated that if MDA wishes to use incremental funding after fiscal year 2010, DOD must request additional authority for a specific program or capability. Conferees cautioned DOD that additional authority will be considered on a limited case-by-case basis and that future missile defense programs will be funded in a manner more consistent with other DOD acquisition programs. Since 2002, MDA has been granted the flexibility to incrementally fund the fielding of its operational assets with research and development funds. In some cases, the agency spreads the cost of assets across 5 to 7 budget years. After reviewing the agency's incremental funding plan for future procurements of THAAD fire units and Aegis BMD missiles, we analyzed the effect of fully funding these assets using present value techniques and found that the agency could save about $125 million by fully funding these assets and purchasing them in an economical manner. Our analysis is provided in table 7. In addition, more detailed analysis is available in appendix III. According to our analysis, fully funding the THAAD and Aegis BMD assets will, in all instances, save MDA money.
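The arithmetic behind such comparisons can be sketched simply: paying for an asset in the first year avoids the inflation escalation applied to increments deferred to later years. The escalation rate, base cost, and payment shares below are hypothetical illustrations, not the figures behind table 7.

```python
# Illustrative sketch (hypothetical figures): then-year cost of fully
# funding an asset up front versus incrementally funding it over
# several years, with each deferred increment escalated for inflation.

def then_year_cost(base_cost, shares, escalation=0.025):
    """Cost in then-year dollars when fractions `shares` of the
    base-year cost are paid in successive years, each escalated by
    the assumed annual `escalation` rate."""
    return sum(base_cost * share * (1 + escalation) ** year
               for year, share in enumerate(shares))

base = 100.0  # base-year cost of one asset, $ millions (hypothetical)
full = then_year_cost(base, [1.0])             # pay everything up front
incremental = then_year_cost(base, [0.2] * 5)  # spread evenly over 5 years

print(f"full funding:        {full:.1f}")
print(f"incremental funding: {incremental:.1f}")
print(f"premium paid:        {incremental - full:.1f}")
```

Under these assumed numbers the incremental approach costs about 5 percent more in then-year dollars, which is the general effect the present value analysis quantifies for the THAAD and Aegis BMD procurements.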
For example, full funding would save the THAAD program approximately $104 million and the Aegis BMD program nearly $22 million. In addition, by providing funds upfront, the contractors should be able to arrange production in the most efficient manner. By the end of Block 2006, MDA posted a number of accomplishments for the BMDS, including fielding more assets, conducting several successful tests, and progressing with developmental efforts. As a result, fielded capability has increased. On the other hand, some problems continue that make it difficult to assess how well the BMDS is progressing relative to the funds it has received and the goals it has set for those funds. First, under the proposed block construct, MDA plans to develop a firm baseline for each block and have it independently reviewed. However, MDA has not yet developed estimates for full block costs, so the initial baseline incorporates the budget for each block only through DOD's Future Years Defense Plan. Second, while MDA expects to estimate unit costs and track increases, it is unclear what criteria will be used for reporting variances to Congress. Third, while MDA has persuaded some contractors to lower the portion of work planned as level of effort, a substantial amount of work remains so planned. Fourth, while it may not be reasonable to expect the same level of accountability for technology development efforts as for the development and production of systems, the high level of investment MDA plans to make in technology development—up to half of its budget—warrants some mechanism for reconciling the cost of these efforts with their progress. Finally, MDA fields assets before development testing is complete and without conducting operational testing. We have previously recommended that MDA return to its original non-concurrent, knowledge-based approach to developing, testing, and fielding assets.
Short of that, the developmental testing that is done provides the primary basis for the Director of Operational Test and Evaluation to assess whether a block of BMDS capability is suitable and effective for the battlefield. So far, BMDS testing has not yielded sufficient data to make a full assessment. To build on efforts to improve the transparency, accountability, and oversight of the missile defense program, we recommend that the Secretary of Defense direct:
- MDA to develop a full cost for each block and request an independent verification of that cost;
- MDA to clarify the criteria that it will use for reporting unit cost variances to Congress;
- MDA to examine a contractor's planning efforts when 20 percent or more of a contract's work is proposed as level of effort;
- MDA to investigate ways of developing a baseline or some other standard against which the progress of technology programs may be assessed; and
- MDA and the Director of Operational Test and Evaluation to agree on criteria and incorporate corresponding scope into developmental tests that will allow a determination of whether a block of BMDS capability is suitable and effective for fielding.
DOD provided written comments on a draft of this report. These comments are reprinted in appendix I. DOD also provided technical comments, which we incorporated as appropriate. DOD concurred with three of our five recommendations—developing a full cost estimate for each block and requesting an independent verification of that cost, clarifying criteria for reporting unit cost variances to Congress, and examining contractors' planning efforts when 20 percent or more of a contract's work is proposed as level of effort. The Department indicated that MDA has already taken steps to develop new cost models aligned with its new block structure and met with DOD's Cost Analysis Improvement Group to initiate the planning process for the independent verifications of MDA's cost estimates.
The cost estimates will extend until block completion and will not be limited by a 6-year Future Years Defense Plan window. MDA is also working to establish criteria for reporting unit cost variances and to incorporate them into an MDA directive. Finally, MDA has made a review of prime contractors’ work planning efforts part of the Integrated Baseline Review process and the Defense Contract Management Agency has agreed to continuously validate the appropriateness of each contractor’s planning methodology as part of its ongoing contract surveillance. DOD partially concurred with our recommendation that MDA investigate ways of developing a baseline or some other standard against which the progress of technology programs may be assessed. DOD observed that MDA uses knowledge points, technology readiness levels, and engineering and manufacturing readiness levels in assessing the progress of its technology programs and that it will continue to investigate other methods of making such assessments. While we recognize their value, these methods typically assess progress in the short term and do not provide an estimate of the remaining cost and time needed to complete a technology program. Because MDA must balance its efforts to improve the existing BMDS while developing new capability, DOD and MDA need to ensure that only the most beneficial technology programs in terms of performance, cost, and schedule are pursued. This will require an understanding of not only the benefit to be derived from the technology, but also an understanding of the cost and time needed to bring the technology to fruition. DOD also partially concurred with our last recommendation that MDA and the Director of Operational Test and Evaluation (DOT&E) agree on criteria and additional scope for developmental tests that will allow a full determination of the effectiveness and suitability of a BMDS block for fielding. 
DOD noted that it is MDA's mission to work with the warfighter, rather than DOT&E, to determine that the BMDS is ready for fielding, but that MDA will work closely with DOT&E to strengthen the testing of BMDS suitability and effectiveness. We agree that DOT&E is not responsible for fielding decisions, but its mission is to ensure that weapon systems are realistically and adequately tested and that accurate evaluations of operational effectiveness, suitability, and survivability are available for production decisions. MDA improved the operational realism of testing in 2007, and for the first time DOT&E considered tests at least partially adequate to make an assessment of the BMDS. However, a full assessment is not yet possible, and we continue to recommend that MDA and DOT&E take steps to make as full a BMDS evaluation as possible. In doing so, MDA and DOT&E can work cooperatively to reduce the number of unknowns that will confront the warfighter when the system is required operationally and improve the likelihood that the BMDS will perform as needed in the field. We are sending copies of this report to the Secretary of Defense and to the Director, MDA. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The major contributors are listed in appendix V. The Missile Defense Agency (MDA) employs prime contractors and support contractors to accomplish different tasks that are needed to develop and field the ballistic missile defense system. Prime contractors receive the bulk of funds MDA requests each year and work to provide the hardware and software for elements of the Ballistic Missile Defense System (BMDS).
Support contractors provide a wide variety of useful services, such as special knowledge and skills not available in the government and the capability to provide temporary or intermittent services. MDA has prime contracts with four defense companies—Boeing, Raytheon, Lockheed Martin, and Northrop Grumman—to develop elements of the BMDS. All current contracts and agreements are cost-reimbursement types that provide for payment of reasonable, allowable, and allocable incurred costs to the extent provided in the contract or agreement. The contracts also provide a fee for the contractor performing the work, but the amount earned depends on many variables, including the type of cost contract, contractor performance, technical risk, and complexity of the requirement. All of the cost-reimbursement contracts used for the BMDS elements include cost-plus-award-fee aspects. Cost-plus-award-fee contracts provide for a fee consisting of a base fee—fixed at the inception of the contract, which may be zero—and an award amount based upon a subjective evaluation by the government, meant to encourage exceptional performance. It should be noted that some award fee arrangements include objective criteria such as Key Performance events. The Multiple Kill Vehicle (MKV) contract and Command, Control, Battle Management and Communications (C2BMC) Other Transaction Agreement differ somewhat from the other elements' contracts. The MKV prime contractor was awarded an indefinite-delivery/indefinite-quantity cost-reimbursement contract. This type of contract allows MDA to order services as they are needed through a series of task orders. Without having to specify a firm quantity of services (other than a minimum or maximum quantity), the government has greater flexibility to align the tasks with available funding. The C2BMC element operates under an Other Transaction Agreement with cost-reimbursement aspects.
These types of agreements are not always subject to procurement laws and regulations meant to safeguard the government. MDA chose the Other Transaction Agreement to facilitate a collaborative relationship between industry, government, federally funded research and development centers, and university research centers. DOD requires that all contractors awarded cost reimbursement contracts or other agreements of $20 million or greater implement an Earned Value Management System (EVMS) to integrate the planning of work scope, schedule, and resources, and to provide insight into their cost and schedule performance. To implement this system, contractors examine the totality of the work directed by the contract and break it into executable work packages. Each work package is assigned a schedule and a budget that is expected to enable the work’s completion. On a monthly basis, the contractor examines initiated work packages to determine whether the work scheduled for the month was performed on time and within budget. If more work was completed than scheduled and the cost of the work performed was less than budgeted, the contractor reports a positive schedule and cost variance. However, if the contractor was unable to complete all of the work scheduled and needed more funds to complete the work than budgeted, the contractor reports a negative schedule and cost variance. Of course, the results can be mixed. That is, the contractor may have completed more work than scheduled but at a cost that exceeded the budget. The contractor details its performance to MDA each month in Contract Performance Reports. These reports also identify the reasons that negative or positive variances are occurring. Used properly, the earned value concept allows program managers to identify problems early so that steps can be taken before the problems increase the contract’s overall cost and/or schedule. 
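The monthly comparison described above reduces to two standard earned value formulas: cost variance is the budgeted cost of work performed (earned value) minus the actual cost of that work, and schedule variance is the earned value minus the budgeted cost of the work scheduled. A minimal sketch, using hypothetical monthly figures:

```python
# Minimal sketch of the earned value variance calculations described
# above. BCWS, BCWP, and ACWP are the standard EVM terms: budgeted
# cost of work scheduled, budgeted cost of work performed (earned
# value), and actual cost of work performed. Figures are hypothetical.

def variances(bcws, bcwp, acwp):
    cost_variance = bcwp - acwp      # positive: work cost less than budgeted
    schedule_variance = bcwp - bcws  # positive: more work done than scheduled
    return cost_variance, schedule_variance

# One month for one work package, $ thousands (hypothetical)
cv, sv = variances(bcws=500, bcwp=450, acwp=480)
print(cv, sv)  # -30 (over cost), -50 (behind schedule)
```

Positive values indicate better-than-planned performance; negative values are the early warning signs that, used properly, let program managers act before problems grow into contract-level cost or schedule overruns.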
In the course of subdividing the total work of the contract into smaller efforts, contractors plan work according to its type. Included in these classifications are discrete work—work that is expected to produce a product, such as a study, lines of software code, or a test—and work considered to be level of effort (LOE). LOE is work that does not result in a product, but is of a general or supportive nature. Supervision and contract administration are examples of work that do not produce definable end products and are appropriately planned as LOE. Several contracts for BMDS systems have relatively high proportions of work planned as LOE. When work is incorrectly planned as LOE, the contractor’s performance becomes less transparent because earned value does not recognize schedule variances for such work. Rather, it is assumed that the time budgeted for an LOE effort will produce the intended result. Although an LOE work package will report cost variances, those variances will only be measured against how much the program intended to spend at certain time intervals. If LOE were to be used on activities that could otherwise be measured discretely, the project performance data could be favorably distorted and contractors and program managers might not be able to discern the value gained for the time spent on the task. Specifically, the program’s Contract Performance Reports would not indicate whether or not the work performed produced the product expected. By losing early insight into performance, the program could potentially need to spend more time and money to complete the task. Since earned value management is less suited for work that is not intended to produce a specific product, or work that is termed LOE, the Standard for Earned Value Management Systems Intent Guide instructs that although some amount of LOE activity may be necessary, it must be held to the lowest practical level. 
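Because LOE packages carry budget but produce no measurable product, the LOE share of a contract baseline can be computed directly from the work-package budgets. A toy sketch, with hypothetical packages and budgets:

```python
# Toy sketch of computing the level-of-effort (LOE) share of a
# contract baseline from its work packages. The package names,
# budgets, and classifications below are hypothetical.

packages = [
    # (name, budget in $ millions, is_loe)
    ("flight software build",    40.0, False),
    ("ground test series",       25.0, False),
    ("program management",       15.0, True),
    ("contract administration",   5.0, True),
]

total = sum(budget for _, budget, _ in packages)
loe = sum(budget for _, budget, is_loe in packages if is_loe)
print(f"LOE share of baseline: {loe / total:.0%}")
```

Per the guidance cited above, this share should be held as low as practical, since only the discrete portion of the baseline yields meaningful schedule variances.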
In addition, earned value experts such as Defense Contract Management Agency officials agree that if a contractor plans more than 20 percent of the total contract work as LOE, the work plan should be examined to determine if work is being properly planned. Although the amount of LOE should be minimized, some BMDS prime contracts have a relatively high percentage of LOE. As figure 2 illustrates, the MKV contractor planned much of the work for task orders open during fiscal year 2007 as LOE. Contractors for Aegis BMD SM-3 and C2BMC also planned a high percentage of their work as LOE. Both MDA's Earned Value Management Group and program office reviewers encouraged the SM-3 and C2BMC contractors to reduce their LOE percentages. By the end of the fiscal year, the SM-3 and C2BMC contractors had reduced the amount of work planned as LOE. In December 2006, the Aegis BMD SM-3 contractor completed work to develop and produce initial Block 1A missiles with 73 percent of this work categorized as LOE—well above the 15 percent that the Aegis BMD SM-3 program reports as its industry standard. Although we have reported that the contractor completed this segment of work under its cost budget but slightly behind schedule, it is difficult to assess whether this represents the contractor's actual performance. The high percentage of LOE associated with this work may have limited our assessment and obscured whether the work completed was in all respects the work planned. Subsequently, the contractor initiated procurement of long lead materials to produce an additional 20 Block 1A missiles before work packages were developed. Once work packages were developed, only 18 percent of the work was planned as LOE. The C2BMC program was able to reduce the percentage of work planned as LOE, but the program continues to encourage further reductions. During fiscal year 2007, the C2BMC contractor replanned its work and reduced the amount of work planned as LOE from 73 to 52 percent.
This change was implemented after two closely related reviews suggested the percentage of LOE work was too high. Both the program office and its contractor acknowledge the high level of LOE and have made plans to limit it in future work. As noted in figure 2, the MKV contractor considered all work being completed under two task orders—Task Orders 4 and 5—as LOE. The primary objective of Task Order 4 is to update the program plan and complete the systems engineering effort necessary to integrate the MKV warhead into the BMDS to the extent required for the systems requirements review. Both the system concept review, completed in July 2006, and the system requirements review, scheduled for December 2008, are major milestones. However, the contractor did not plan these milestone reviews as products. According to program officials, Task Order 4 will be reevaluated in February 2008 to reduce the amount of LOE and recognize more work as discrete. The MKV program also planned 100 percent of Task Order 5 work as LOE. Under this task order, the contractor was to design a prototype propulsion system, assemble and integrate the hardware for the prototype, and perform a static hot fire test of the integrated system. This effort culminates in hardware—a tangible end product—that is expected to exhibit certain performance characteristics during the static hot fire test. The contractor could have categorized this task order, at least in some part, as discrete work since the work was expected to deliver a defined product with a schedule that could slip or vary. Because the contractor categorized all of this task order as LOE, the program lost its ability to gauge performance and to make adjustments that might prevent contract cost growth. We analyzed Fiscal Year 2007 Contract Performance Reports for MDA’s 10 prime contracts and determined that collectively the contractors overran budgeted costs by nearly $170 million but were ahead of schedule by nearly $200 million. 
However, the percentage of work planned as LOE should be scrutinized before accepting this as the contractors’ actual performance because a high percentage of LOE, as noted above, can potentially distort the contractors’ cost and schedule performance. The cumulative performance of one contractor is also distorted because it rebaselined part of its work. Rebaselining is an accepted EVM procedure that allows a contractor to reorganize all or part of its remaining contract work, add additional time or budget for the remaining effort, and, under some circumstances, set affected cost and/or schedule variances to zero. When variances are set to zero, the cumulative performance of the contractor appears more positive than it is. Four of the 10 contracts we reviewed also contained some kind of replanning activity during fiscal year 2007. Contractors may replan when they conclude that the current plan for completing the effort remaining on the contract is unrealistic. A replan can consist of any of the following: reallocating the budget for the remaining effort within the existing constraints of the contract, realigning the schedule within the contractually defined milestones, and setting cost and/or schedule variances to zero. During the course of replanning a contract, the contractor must provide traceability to previous baselines as well as ensure that available funding is not exceeded. The Aegis BMD program awarded two prime contracts for its major components, the Aegis BMD Weapon System and the Standard Missile-3. During the fiscal year, the contractors completed all work at less cost than budgeted. Both contractors ended the year with positive cumulative cost variances, but negative cumulative schedule variances. 
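The cost and schedule variances cited throughout this report follow the standard earned value arithmetic, sketched below. The formulas are the conventional EVM definitions; the dollar values are illustrative, not drawn from any contract discussed here.

```python
# Standard earned value variance arithmetic:
#   cost variance     CV = BCWP - ACWP  (earned value minus actual cost)
#   schedule variance SV = BCWP - BCWS  (earned value minus planned value)
# Positive variances are favorable; negative variances are unfavorable.
# When a rebaselining "sets variances to zero," the baseline is adjusted so
# that cumulative CV and SV restart from zero at that point.

def cost_variance(bcwp, acwp):
    """Budgeted cost of work performed minus actual cost of work performed."""
    return bcwp - acwp

def schedule_variance(bcwp, bcws):
    """Budgeted cost of work performed minus budgeted cost of work scheduled."""
    return bcwp - bcws

# Illustrative values in $M
bcws, bcwp, acwp = 120.0, 110.0, 125.0
print(cost_variance(bcwp, acwp))      # -15.0 -> overrunning cost
print(schedule_variance(bcwp, bcws))  # -10.0 -> behind schedule
```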
Based on our analysis, we project that if the contractors continue to perform at the same level, the weapon system contractor could underrun its budget by between $8.8 million and $17.7 million, while the SM-3 contractor could complete its work on 20 Block 1A missiles for $7.4 million to $11.1 million less than budgeted. The weapon system contractor’s fiscal year 2007 cost performance resulted in a positive cost variance of $7.7 million. The positive variance was realized as two software packages required less effort than anticipated and were completed earlier than expected. Combined with its performance from earlier periods, the contractor finished the year with a cumulative positive cost variance of $7 million. This upward trend is depicted in figure 3. The contractor produced a $3.8 million unfavorable schedule variance in fiscal year 2007. The contractor reported that the unfavorable cumulative variance was caused in part by a delay in receiving component materials for the radar’s processor. During fiscal year 2007, the Aegis SM-3 contractor closed out work related to missile development and initial production of Block 1A missiles and began new work in February 2007 to manufacture an additional 20 Block 1A missiles. In performing the new work, the contractor underran its cost budget by $6.2 million, but failed to complete $4.0 million of planned work. The Aegis BMD SM-3 contractor’s cumulative cost and schedule variances are highlighted in figure 4. The positive cost variance can be attributed to several factors including cost efficiencies realized from streamlining system engineering resources and lower than planned hardware costs. Our analysis predicts that if the SM-3 contractor continues to perform as it did through September 2007, it will underrun its budgeted costs for the 20 Block 1A missiles by between $7.4 million and $11.1 million. 
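Ranged projections of this kind are commonly produced by bounding the estimate at completion (EAC) with earned value performance indices. The report does not state the exact formulas used in our analysis, so the sketch below shows only the general technique, with illustrative numbers.

```python
# A common way to bound an estimate at completion (EAC) with earned value
# indices (a sketch of the general technique, not the report's exact method):
#   CPI = BCWP / ACWP   (cost performance index)
#   SPI = BCWP / BCWS   (schedule performance index)
#   optimistic  EAC = ACWP + (BAC - BCWP) / CPI
#   pessimistic EAC = ACWP + (BAC - BCWP) / (CPI * SPI)

def eac_range(bac, bcws, bcwp, acwp):
    cpi = bcwp / acwp
    spi = bcwp / bcws
    optimistic = acwp + (bac - bcwp) / cpi
    pessimistic = acwp + (bac - bcwp) / (cpi * spi)
    return optimistic, pessimistic

# Illustrative values in $M: budget at completion 500, slightly behind
# schedule and over cost to date.
lo, hi = eac_range(bac=500.0, bcws=210.0, bcwp=200.0, acwp=220.0)
print(f"EAC range: ${lo:.1f}M to ${hi:.1f}M")
```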
The contractor’s negative cumulative schedule variance of $4 million for the 20 missiles was primarily caused by delayed qualification testing and integration of hardware components.

In May 2007, MDA allowed ABL’s contractor to rebaseline one part of its contract after the work associated with a key knowledge point could not be completed on schedule. Because the contractor did not achieve this knowledge point as planned, the program was forced to postpone its lethality demonstration until August 2009. Technical issues, including weapon system integration, beam control/fire control software modifications, and flight testing discoveries, all contributed to the delay in completing the knowledge point for the program. To provide funds and time to support the delay in the lethality demonstration, the program extended the contract’s period of performance by approximately 1 year and increased the contract’s ceiling cost by $253 million. Once the new baseline was incorporated, the contractor was able to complete fiscal year 2007 with positive cost and schedule variances of $3.7 million and $24.2 million, respectively. Figure 5 depicts the contractor’s cumulative cost and schedule performance. As shown in figure 5, the ABL contractor was not able to overcome the negative cost and schedule variances of prior years and ended the fiscal year with an unfavorable cumulative cost variance of $74.2 million and an unfavorable cumulative schedule variance of $25.8 million. We estimate that, at completion, the contract could overrun its budget by between $95.4 million and $202.5 million.

During fiscal year 2006, the C2BMC contractor did not report earned value because it was working on a replan of its Block 2006 increment of work (known as Part 4). Following the definitization of the Part 4 replan in November 2006, the C2BMC contractor resumed full EVM reporting with the first submittal covering February 2007 data.
As part of the replan, the contractor adjusted a portion of its Part 4 work and set cost and schedule variances to zero in an effort to establish a baseline commensurate with the contractor’s replanning efforts. However, even with the adjustment, the C2BMC program ended fiscal year 2007 with negative fiscal year cost and schedule variances of $11.1 million and $1.5 million, respectively. Figure 6 shows the contractor’s cumulative performance in fiscal year 2007. The unfavorable fiscal year cost variance was largely due to adding staff to support a software release, while the unfavorable fiscal year schedule variance was attributable to delays in hardware delivery, initiation of a new training system, and completing training material for the new system. Added to prior year negative variances, the C2BMC contractor reported cumulative negative cost and schedule variances of $14.5 million and $3.5 million, respectively. The contractor completed Part 4 work in December 2007 and reported an overrun of $9.9 million.

The GMD prime contractor’s cost performance improved significantly in fiscal year 2007. The contractor experienced a budget overrun of $22.1 million for the fiscal year following budget overruns in both fiscal years 2005 and 2006 that exceeded $300 million. Program officials attribute this turnaround in performance to several factors, including rigorous management of the contract’s estimate at completion, quality initiatives, and joint efforts by the contractor and program office to define scope, schedule, and price of change orders. The negative cumulative cost variance at the end of fiscal year 2007 was over $1 billion. We estimate that at completion the contract, with a target price of $15.54 billion, could exceed its budgeted cost by between $1.06 billion and $1.4 billion.
The contractor was able to complete $84.9 million more work than scheduled for fiscal year 2007, but could not overcome poor performance in earlier years and ended the year with a negative cumulative schedule variance of $52.9 million. Figure 7 illustrates both cost and schedule trends in GMD fiscal year 2007 performance. The unfavorable fiscal year cost variance is primarily attributable to the EKV. During fiscal year 2007, the EKV contractor experienced negative cost variances as it incurred additional labor costs to recover delivery schedules and contended with manufacturing schedule delays, hardware manufacturing problems, and embedded software development and system integration problems. With 18 percent of the EKV work remaining, the negative trends on this component could continue. As we reported last year, the contractor was in the process of developing a new contract baseline to incorporate the updated scope, schedule, and budget that the contractor was working toward. In September 2006, phase one of the new baseline, covering fiscal year 2006-2007 efforts, was implemented and validated through the Integrated Baseline Review of the prime contractor and its major subcontractors. Phase two of the review was completed in December 2006. Subsequent to the reviews, fiscal year 2007 ground and flight tests were replanned to reflect a contract change that added additional risk mitigation effort to one planned flight test and added a radar characterization system test.

The KEI contractor replanned its work in April 2007 when MDA directed the program to focus, in the near term, on two main objectives: booster vehicle development and the 2008 booster flight developmental test. Prior to the replan, the KEI program was developing a land-mobile capability with fire control and communications and mobile launcher components.
Although the contractor’s primary objectives are now focused around the booster segment of work, it is still performing some activities related to the fire control and communications component. During fiscal year 2007, the contractor incurred a positive cost variance of $2.1 million and a negative schedule variance of $7.5 million. Combined with variances from earlier fiscal years, the cumulative cost variance is a positive $5.7 million and the cumulative schedule variance is a negative $12.8 million. Figure 8 illustrates KEI’s cumulative performance over the course of the fiscal year. KEI’s favorable fiscal year cost variance primarily results from completing work on the fire control and communications component, as well as systems engineering and integration, with fewer staff than planned. We were unable to estimate whether the total contract is likely to be completed within budgeted cost since the contract is only 10 percent complete and trends cannot be developed until at least 15 percent of the contract is completed. Work related to the interceptor’s booster and systems engineering and integration contributed to KEI’s negative fiscal year schedule variance of $7.5 million. The contractor reports that the booster work was understaffed, which caused delays in finalizing designs that, in turn, delayed procurement of subcomponents and materials and delayed analysis and tests. While the reduction in staff for systems engineering and integration work reduced costs for the contractor, it also delayed completion of the weapon system’s scheduled engineering, flight, and performance analysis products.

We could evaluate only two of five MKV task orders open during fiscal year 2007 because the contractor did not report sufficient earned value data to make an assessment of the other three meaningful. MDA awarded the MKV contract in January 2004 and has since initiated eight task orders through its indefinite delivery/indefinite quantity contract.
During fiscal year 2007, the program worked on five of these task orders—Task Orders 4, 5, 6, 7, and 8. We evaluated the contractor’s cost and schedule performance for Task Orders 5 and 6 only. Of the three task orders that we did not evaluate, the contractor began reporting full earned value on two so late in the fiscal year that little data was available for analysis. In the third case, the contractor’s reports did not include all data needed to make a cost and schedule assessment. In June 2006, MDA issued Task Order 5 which directed the design, assembly, and integration of the hardware for a prototype propulsion system, and a static hot fire test of the integrated prototype. Because the contractor planned all activities for this task order as level of effort, the contractor reported zero schedule variance. Contract Performance Reports show that in preparation for the hot fire test in August 2007, the program discovered anomalies indicative of propellant contamination in the prototype’s propulsion system. These anomalies led to multiple unplanned propellant tank anomaly investigations, which contributed to the unfavorable $2.3 million cost variance for the fiscal year. Additionally, during the hot fire test, one of the thrusters in the propulsion system’s divert and attitude control component experienced anomalies due to foreign object contamination. This anomaly led to unplanned investigations which also contributed to increased costs. Figure 9 below depicts the unfavorable cumulative cost variance of $2.7 million and cumulative schedule variance of zero reported for Task Order 5. Based on our analysis, we predict the contractor will overrun its contract costs by between $2.6 million and $2.9 million. MKV’s objective for Task Order 6 is to manufacture a prototype seeker capable of acquiring, tracking, and discriminating objects in space. The program plans to demonstrate the prototype seeker, which is a component of a carrier vehicle, through testing in 2009. 
In contrast to Task Order 5, the contractor correctly planned the bulk of Task Order 6 as discrete work and has been reporting the work’s cost and schedule status since March 2007. During this time, the contractor has completed 37 percent of the work directed by the task order at $0.3 million less than budgeted. The contractor was also able to complete $0.9 million more work than planned. See figure 10 for an illustration of cumulative cost and schedule variances for this task order. The program attributes its favorable fiscal year cost and schedule variances for Task Order 6 to the early progress made on interface requirements, hardware procurements, component drawings, and the prototype seeker’s architecture. Because detailed designs for the seeker are derived from models, the program is anticipating some rework will be needed as the designs are developed, processed, and released. Although program officials are expecting some degradation in cumulative cost and schedule variances to occur, the program does not expect an overrun of the contract’s budgeted cost at completion. Based on the contractor’s performance to date, we predict, at contract completion, the contractor will underrun costs by between $0.8 million and $2.5 million.

The Sensors contractor’s performance during fiscal year 2007 resulted in a positive cost variance of $3.9 million and an unfavorable schedule variance of $8.8 million. Added to variances from prior years, the contractor is reporting cumulative positive cost and schedule variances of $24.1 million and $17.8 million, respectively. The contractor’s performance in 2007 suggests that at completion the contract will cost from $22.0 million to $46.8 million less than budgeted. The variances, depicted below in figure 11, represent the Sensors contractor’s cumulative cost and schedule performance over fiscal year 2007.
The contractor has reported favorable schedule and cost variances since the contract’s inception because the program was able to leverage the hardware design of the THAAD radar to reduce development timelines and it implemented manufacturing efficiencies to reduce manufacturing costs. However, during fiscal year 2007, the contractor experienced a negative schedule variance as it struggled to upgrade software expected to provide an increased capability for the FBX-T radar. After replanning a portion of its work in October 2006, the STSS contractor in fiscal year 2007 experienced an unfavorable cost variance of $67.7 million and a favorable schedule variance of $84.7 million. Combined with performance from earlier periods, the contractor is reporting cumulative negative cost and schedule variances of $231.4 million and $19.7 million, respectively. Figure 12 shows both cost and schedule trends during fiscal year 2007. During the fiscal year, the contractor was able to accomplish a significant amount of work ahead of schedule after a replan added additional time for planned work efforts. However, the contractor was unable to overcome the negative schedule variances incurred in prior years. Delays in hardware and software testing as well as integration issues contributed to fiscal year 2007’s negative cost variance. We did not estimate the cost of the STSS contract at completion. The contract includes not only the effort to develop and launch two demonstration satellites (the Block 2006 capability) but also effort that will benefit future blocks. Block 2006 work is about 86 percent complete, while work on future blocks is about 16 percent complete. The THAAD contractor overran its fiscal year 2007 budgeted costs by $91.1 million but accomplished $19.0 million more work than scheduled. Cumulatively, the contractor ended the year with an unfavorable cost variance of $195.2 million and a negative schedule variance of $9.1 million, as shown by figure 13. 
The THAAD prime contractor’s cost overrun of $91.1 million was primarily caused by technical problems related to the element’s missile, launcher, radar, and test components. Missile component cost overruns were caused by higher than anticipated costs in hardware fabrication, assembly, and support touch labor as well as subcontractor material costs for structures, propulsion, and other sub-assembly components. Additionally, design issues with the launcher’s missile round pallet and the electronics assembly that controls the launcher caused the contractor to experience higher than anticipated labor and material costs. More staff than planned was required to resolve hardware design issues in the radar’s prime power unit, causing the radar component to end the fiscal year with a negative cost variance. The contractor also experienced negative cost variances with the system test component because the Launch and Test Support Equipment required additional set-up time at the flight test range. THAAD’s prime contractor fared better in performing scheduled work. It was able to reduce its negative cumulative schedule variance over the course of the fiscal year because subcontracted missile items were delivered early and three flight tests were removed from the test program to accommodate target availability and budget constraints, allowing staff more time to work on current efforts. The contractor projects an overrun of $174 million at contract completion, while we estimate that the overrun could range from $227.2 million to $325.8 million. To achieve its projection, the contractor needs to complete $1.04 worth of work for every dollar spent. In contrast, during fiscal year 2007, the contractor achieved an average of $0.82 worth of work for each dollar spent. Therefore, it seems unlikely that the contractor will be able to achieve its estimate at completion. 
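The "$1.04 worth of work for every dollar spent" figure above is a to-complete performance index (TCPI): the cost efficiency needed on the remaining work to hit the contractor's own estimate at completion. Comparing it with the cost performance index (CPI) actually achieved is what makes the estimate look unlikely. The sketch below shows the arithmetic; the dollar values are illustrative, not THAAD contract data.

```python
# TCPI: work remaining divided by funds remaining under a target estimate
# at completion (EAC). If TCPI exceeds the CPI achieved to date, the
# contractor must perform better than it has so far to meet its estimate.
# All values below are illustrative ($M), not actual contract data.

def tcpi(bac, bcwp, eac, acwp):
    """Efficiency required on remaining work to finish at the target EAC."""
    return (bac - bcwp) / (eac - acwp)

bac, bcwp, acwp = 1000.0, 600.0, 700.0  # budget, earned value, actual cost
eac = 1085.0                            # contractor's own estimate
needed = tcpi(bac, bcwp, eac, acwp)
achieved = bcwp / acwp                  # CPI to date
print(f"needed ${needed:.2f} vs achieved ${achieved:.2f} per dollar spent")
if needed > achieved:
    print("Estimate at completion looks optimistic")
```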
Like other DOD programs, MDA has not always effectively used award fees to encourage contractors toward exceptional performance, but it is making efforts to revise its award fee policy to do so. Over the course of fiscal year 2007, the agency sometimes rolled over large percentages of award fee—in most cases for work that was moved to later periods but also for one contractor that exhibited poor performance. In addition, some award fee plans allow fee to be awarded to contractors for merely meeting the requirements of their contract. For two contractors, MDA awarded fee amounts that were linked to very good or outstanding work in the cost and/or program management performance elements. During their award fee periods, the contractors’ earned value data showed declines in cost and/or schedule variances, although there are several other factors considered when rating contract performance. However, in June 2007, MDA issued a revised draft of its award fee guide in an effort to more closely link the amount of award fees earned with the level of contractor performance.

In an effort to encourage its defense contractors to perform in an innovative, efficient, and effective way in areas considered important to the development of the BMDS, MDA offers its contractors the opportunity to collectively earn billions of dollars through monetary incentives known as award fees. Award fees are intended to motivate exceptional performance in subjective areas such as technical ingenuity, cost, and schedule. Award fees are appropriate when contracting and program officials cannot devise predetermined objective targets applicable to cost, technical performance, or schedule. Currently, all 10 of the contracts we assessed for BMDS elements use award fees in some manner to incentivize their contractors’ performance. Each element’s contract has an award fee plan that identifies the performance areas to be evaluated and the methodology by which those areas will be assessed.
At the end of each period, the award fee evaluation board, made up of MDA personnel, program officials, and officials from key organizations knowledgeable about the award fee evaluation areas, begins its process. The board judges the contractor’s performance and recommends to a fee determining official the amount of fee to be paid. For all BMDS prime contracts we assessed, the fee determining official is the MDA Director. Table 1 provides a summary of the award fee process. GAO has found in the past that DOD has not always structured and implemented award fees in a way that effectively motivates contractors to improve performance and achieve acquisition outcomes. Specifically, GAO cited four issues with DOD’s award fee processes. GAO reported that in many evaluation periods when rollover—the process of moving unearned available award fee from one evaluation period to the next—was allowed, the contractor had the chance to earn almost the entire unearned fee, even in instances when the program was experiencing problems. Additionally, DOD guidance and federal acquisition regulations state that award fees should be used to motivate excellent contractor performance in key areas. However, GAO found that most DOD award fee contracts were paying a significant portion of the available fee from one evaluation period to the next for what award fee plans describe as “acceptable, average, expected, good, or satisfactory” performance. Furthermore, DOD paid billions of dollars in award fees to programs whose costs continued to grow and schedules increased by many months or years without delivering promised capabilities to the warfighter. GAO also found that some award fee criteria for DOD programs were focused on broad areas— such as how well the contractor was managing the program—instead of criteria directly linked with acquisition outcomes—such as meeting cost and schedule goals, and delivering desired capabilities. 
All of these DOD practices contribute to the difficulty in linking elements of contractor performance considered in award fee criteria to overall acquisition outcomes and may lessen the motivation for the contractor to strive for excellent performance. We assessed all award fee plans for the BMDS elements and fiscal year 2007 award fee letters for 9 of the 10 contractors. Our review revealed that during 2007 MDA experienced some of the same award fee problems that were prevalent in other DOD programs. MDA did not roll fee forward often, but when it did the contractor was, in one case, able to earn 100 percent of that fee. Also, MDA allowed another contractor to earn the unearned portion of fiscal year 2007 award fee in the same period through a separate pool composed of the unearned fee but tied to other performance areas. In two other instances, MDA awarded fee amounts that were linked to very good or outstanding work in the cost and/or program management performance element. However, during the award fee periods, earned value data indicates that these two contractors’ cost and/or schedule performance continued to decline. Although DOD guidance discourages use of earned value performance metrics in award fee criteria, MDA includes this as a factor in several of its award fee plans. MDA considers many factors in rating contractors’ performance and making award fee determinations, including considerations of earned value data that shows cost, schedule, and technical trends. Table 9 provides the award fee MDA made available to its contractors, as well as the fee earned during fiscal year 2007.

MDA is awarding some BMDS contractors a large percentage of the fees rolled over from a prior period. The agency’s award fee plans allow the fee determining official, at his discretion, to roll over all fee that is not awarded during one period to a future period.
For example, in accordance with MDA’s award fee policy, the fee determining official may consider award fee rollover when a slipped schedule moves an award fee event to another period, when the fee determining official desires to add greater incentive to an upcoming period, or when the contractor improves performance to such a great extent that it makes up for previous shortfalls. During fiscal year 2007, MDA rolled fee forward for 3 of the 8 contractors for which award fee letters were available. Table 10 presents a synopsis of this data. As noted in table 10, MDA rolled over a large percentage of the fee that was not earned by the THAAD contractor during fiscal year 2007. During its last award fee period in fiscal year 2007, the THAAD contractor did not earn any of the fee associated with cost management. The award fee letter cited unfavorable cost variances and a growing variance projected at completion of the contract as the reasons for not awarding any of the fee for cost management. However, the fee determining official decided to roll 100 percent of that portion of the unearned fee to a rollover pool tied to minimizing cost overruns. Fee will be awarded from this pool at the end of the contract. By rolling the fee forward, MDA provided the contractor an additional opportunity to earn fee from prior periods. Rolling over fee in this instance may have failed to motivate the contractor to meet or exceed expectations.

The award fee plan for the GMD contract allowed the contractor not only to roll over fee, but to earn all unearned fee in the same period. During the fiscal year, the GMD contractor earned 97.7 percent of the $330 million in award fees tied to performance areas outlined in the award fee plan. However, the award fee plan made provisions for the contractor to earn the unearned $7.5 million by creating a separate pool funded solely from this unearned portion and awarding the fee for performance in other areas.
In this instance, the contractor did not have to wait to earn rolled-over fees in later award fee periods—it was able to receive the unearned portion in the same period despite not meeting all of the criteria for its original objectives. GMD officials told us that this fee incentivized the contractor to achieve added objectives.

In contrast, the fee determining official handled rollover of fee on the ABL contract in accordance with DOD’s new policy. According to ABL’s award fee plan, MDA was to base its 2007 award fee decision primarily on the outcome of three knowledge points. During this period, the contractor completed two of the knowledge points, but could not complete a third. To encourage the contractor to complete the remaining knowledge point in a timely manner, the fee determining official rolled over only 35 percent of the fee available for the event.

All of the award fee plans we assessed allowed MDA to award fees for satisfactory ratings—that is, work considered to meet most of the requirements of the contract. Some award fee plans even allow fee for marginal performance or performance considered to meet some of the requirements of the contract. By paying for performance at the minimum standards or requirements of the contract, the intent of award fees to provide motivation for excellence above and beyond basic contract requirements is lost. While the definitions of satisfactory or marginal differed from element to element, the award fee plans allotted more than 50 percent of available award fee to contractors performing at these levels. According to the award fee plans, MDA allows between 51 and 65 percent of available fee for work rated as marginal for the C2BMC and KEI contractors and no less than 66 percent of available fee for satisfactory performance by the ABL contractor.
MDA’s practice of allowing more than 50 percent of available fee for satisfactory or even marginal performance illustrates why DOD in April 2007 directed that no more than 50 percent of available fee be given for satisfactory performance on all contract solicitations commencing after August 1, 2007.

Earned value is one of several factors that, according to the award fee plans for the ABL and Aegis BMD Weapon System contractors, will be considered in rating the contractors’ cost and program management performance. During much of fiscal year 2007, earned value data for both contractors showed that they were overrunning their fiscal year cost budgets. In addition, the ABL contractor was not completing all scheduled work. Even considering these variances, MDA presented the contractors with a significant portion of the award fee specifically tied to cost and/or program management. In contrast, the THAAD contractor also experienced downward trends in its cost variance during its last award fee period in fiscal year 2007, but was not paid any of the award fee tied to cost management.

The ABL and Aegis BMD Weapon System contractors received a large percentage of the 2007 award fee available to them for the cost and/or program management element. According to ABL’s award fee plan, one of several factors that is considered in rating the contractor’s performance as “very good” is whether earned value data indicates that there are few unfavorable cost, schedule, and/or technical variances or trends. During the award fee period that ran from February 2006 to January 2007, MDA rated the contractor’s cost and program management performance as very good and awarded 88 percent of the fee available for these areas of performance. Yet, earned value data indicates that the contractor overran its budget by more than $57 million and did not complete $11 million of planned work.
Similarly, the Aegis BMD weapon system contractor was to be rated, in one element of its award fee pool, on how effectively it managed its contract’s cost. Similar to ABL’s award fee plan, the weapon system contractor’s award fee plan directs that earned value data be one of the factors considered in evaluating cost management. During the fee period that ran from October 2006 through March 2007, MDA rated the contractor’s performance in this area as outstanding and awarded the contractor 100 percent of the fee tied to cost management. Earned value data for this time period indicates that the contractor overran its budget by more than $6 million. MDA did not provide us with more detailed information as to other factors that may have influenced its decision as to the amount of fee awarded to the ABL and Aegis BMD contractors.

In another instance, MDA more closely linked earned award fee to contractor performance. The THAAD contractor continued to overrun its 2007 cost budget, and was not awarded any fee tied to the cost management element during its last award fee period in fiscal year 2007. The award fee decision letter cites several examples of the contractor’s poor cost performance including cost overruns and an increased projected cost variance at contract completion. These and other cost management issues led the fee determining official to withhold the $9.8 million to be awarded on the basis of cost management.

MDA has made efforts to comply with DOD policy regarding some of GAO’s recommendations and responded to the DOD-issued guidance by releasing its own revised award fee policy in February 2007. According to the policy, every contract’s award fee plan is directed to include a focus on developing specific award fee criteria for each element, an emphasis on rewarding results rather than effort or activity, and an incentive to meet or exceed MDA requirements.
Additionally, the directive calls for using the Award Fee Advisory Board, established to make award fee recommendations to the fee determining official, to biannually review and report to the Director on the consistency between MDA’s award fees and DOD’s Contractor Performance Assessment Report—which provides a record, both positive and negative, on a given contract for a specific period of time. MDA’s directive also requires program managers to implement MDA’s new award fee policy at the earliest logical point, which is normally the beginning of the next award fee period. MDA is currently drafting a revision of its award fee guide that addresses the rollover and rating scale issues from DOD’s March 2006 and April 2007 memorandums. In the latest draft, MDA limits rollover to exceptional cases and adopts the Under Secretary’s limitation of making only a portion of award fee available for rollover. MDA’s latest draft of the guide also adopts the latest rating scale, citing the Under Secretary’s April 2007 direction, and applies the new scale to contract solicitations beginning after July 31, 2007. Events such as funding changes, technology advances, and concurrent development and deployment of the BMDS sometimes make changes to a contract’s provisions or terms necessary. MDA describes contract changes that are within the scope of the contract but whose final price, or cost and fee, the agency and its contractor have not agreed upon as unpriced changes. MDA has followed the FAR in determining how quickly the agency should reach agreement on such unpriced changes’ price, or cost and fee. According to the FAR, an agreement should be reached before work begins if it can be done without adversely affecting the interest of the government. 
If a significant cost increase could result from the unpriced change, and time does not permit negotiation of a price, the FAR requires the negotiation of a maximum price unless it is impractical to do so. In 2007, MDA began applying tighter limits on definitization of price. MDA also issues unpriced task orders. MDA uses this term to describe task orders issued under established contract ordering provisions, such as an indefinite delivery/indefinite quantity contract, for which a definitive order price has not yet been agreed upon. MDA has followed the FAR requirement that task orders placed under an indefinite delivery/indefinite quantity contract must contain, at least, an estimated cost or fee. During Block 2006—January 1, 2006 through December 31, 2007—MDA authorized 137 unpriced changes and task orders with a value of more than $6 billion. While consistent with the FAR requirements noted above, 61 percent of the 137 unpriced changes and unpriced task orders, totaling $5.9 billion, were not priced for more than 180 days. Agreement on the price of several was not reached for more than a year, and agreement on the price of one was not reached for more than two and a half years. Table 11 below shows the value of unpriced changes and task orders issued on behalf of each BMDS element and the number of days after the contractor was authorized to proceed with the work before MDA and its contractor agreed to a price, or cost and fee, for the work. Realizing that unpriced changes and unpriced task orders may greatly reduce the government’s negotiation leverage and typically result in higher cost and fee for the overall effort, MDA, in February 2007, issued new contract guidance that required tighter limits on the timeframes for reaching agreement on price, or cost and fee. The agency now applies some of the Defense Federal Acquisition Regulation Supplement guidelines established for undefinitized contract actions to unpriced changes and unpriced task orders. 
Undefinitized contract actions are different from MDA’s unpriced changes or unpriced task orders in that they are contract actions on which performance is begun before agreement on all contract terms, including price, or cost and fee, is reached. A contract modification or change will not be considered an undefinitized contract action if it is within the scope and under the terms of the contract. MDA has elected to follow some of the stricter undefinitized contract action guidelines because the agency believes the guidelines will lead to better cost results. Similar to the undefinitized contract action guidelines, the agency’s new guidelines require that MDA’s unpriced changes and unpriced task orders be definitized within 180 days, that the contractor be given a dollar value that it cannot exceed until price agreement is reached, and that approval for the unpriced change or task order be obtained in advance. MDA’s new policy also, to the maximum extent practicable, limits the amount of funds that a contractor may be given approval to spend on the work before agreement is reached on price to less than 50 percent of the work’s expected price. MDA officials maintain that support contracts provide necessary personnel and are instrumental in developing the BMDS quickly. The agency contracts with 45 different companies that provide the majority of the personnel who perform a variety of tasks. Table 12 illustrates the broad categories of job functions that MDA support contractors carry out. Last year we reported that MDA had 8,186 approved personnel positions. This number has not changed appreciably in the last year. According to MDA’s manpower database, about 8,748 personnel positions—not counting prime contractors—currently support the missile defense program. 
These positions are filled by government civilian and military employees, contract support employees, employees of federally funded research and development centers (FFRDCs), researchers in university and affiliated research centers, and a small number of executives on loan from other organizations. MDA funds around 95 percent of the total 8,748 positions through its research and development appropriation. Of these, 2,450 positions, or about 29 percent, are set aside for government civilian personnel. Another 60 percent, or 5,005 positions, are allotted for support contractors. The remaining 11 percent are positions either being filled, or expected to be filled, by employees of FFRDCs and university and affiliated research centers that are on contract or under other types of agreements to perform missile defense tasks. MDA officials noted that nearly 500 of the 8,748 personnel positions available were currently vacant. Table 13 shows the staffing levels within the BMDS elements. Support contractors in MDA program and functional offices may perform tasks that closely support those tasks described in the FAR as inherently governmental. According to the FAR, tasks such as determining agency policy and approving requirements for prime contracts should only be performed by government personnel. Contract personnel who, for example, develop statements of work, support acquisition planning, or assist in budget preparation are carrying out tasks that may closely support tasks meeting this definition. Having support contractors perform these tasks creates a risk that the contractors may influence the government’s control over and accountability for decisions. 
MDA officials told us that when support contractors perform tasks that closely support those reserved for government employees the agency mitigates its risk by having knowledgeable government personnel provide regular oversight or final approval of the work to ensure that the data being generated is reasonable. In the tables below we provide more information comparing the cost of purchasing THAAD and Aegis BMD assets incrementally versus fully funding the assets. Table 14 presents MDA’s incremental funding plans for THAAD fire units 3 and 4, 48 Aegis BMD (SM-3) missiles to be produced during Blocks 2012 and 2014, and 19 shipsets intended to improve the performance of Aegis BMD ships. Tables 15 through 17 present our analysis of the cost of purchasing these same assets with procurement funds and following Congress’ full-funding policy. To examine the progress MDA made in fiscal year 2007 toward its Block 2006 goals, we examined the accomplishments of nine BMDS elements. The elements included in our review collectively accounted for 77 percent of MDA’s fiscal year 2007 research and development budget request. We evaluated each element’s progress in fiscal year 2007 toward Block 2006 schedule, testing, performance, and cost goals. In assessing each element we examined Program Execution Reviews, test plans and reports, production plans, Contract Performance Reports, and MDA briefing charts. We developed data collection instruments that were completed by MDA and each element program office. The instruments gathered detailed information on completed program activities including tests, prime contracts, and estimates of element performance. To understand performance issues, we talked with officials from MDA’s Deputy for Engineering and Program Director for Targets and Countermeasures, each element program office, as well as the office of DOD’s Director, Operational Test and Evaluation. 
To assess each element’s progress toward its cost goals, we reviewed Contract Performance Reports and, when available, the Defense Contract Management Agency’s analyses of these reports. We applied established earned value management techniques to data captured in Contract Performance Reports to determine trends and used established earned value management formulas to project the likely costs of prime contracts at completion. We also interviewed MDA officials within the Deputy for Acquisition Management office to gather detailed information regarding BMDS prime contracts. We reviewed 10 prime contracts for the 9 BMDS elements, examined fiscal year 2007 award fee plans and award fee letters, and gathered data on the number of, and policy for, unpriced changes and unpriced task orders. We reviewed the sections of the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement dealing with contract type, contract award fees, and undefinitized contract actions. To develop data on support contractors, we held discussions with officials in MDA’s Office of Business Operations. We also collected data from MDA’s Pride database on the numbers and types of employees supporting MDA operations. In assessing MDA’s accountability, transparency, and oversight, we interviewed officials from the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics and from the Joint Staff. We also examined a Congressional Research Service report, the U.S. Code, DOD acquisition system policy, the MDEB Charter, and various MDA documents related to the agency’s new block structure. 
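The report does not name which "established earned value management formulas" were used to project contract costs at completion. One widely used projection is the CPI-based estimate at completion, sketched here with hypothetical contract numbers:

```python
def cost_performance_index(bcwp, acwp):
    """CPI = BCWP / ACWP: budgeted value earned per actual dollar spent."""
    return bcwp / acwp

def estimate_at_completion(bac, bcwp, acwp):
    """EAC = ACWP + (BAC - BCWP) / CPI: actual cost to date plus the
    remaining budgeted work, scaled by the cost efficiency achieved so far."""
    return acwp + (bac - bcwp) / cost_performance_index(bcwp, acwp)

# Hypothetical contract: $1,000M budget at completion, $400M of work
# earned at an actual cost of $450M. CPI below 1.0 projects an overrun.
eac = estimate_at_completion(bac=1000.0, bcwp=400.0, acwp=450.0)  # 1125.0
```

A CPI persistently below 1.0, as the prime contractors' fiscal year 2007 data showed, is what drives such a projection above the original budget.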
In determining whether MDA would save money if it fully funded THAAD and Aegis BMD assets rather than funding them incrementally, we used present value techniques to restate dollars that MDA planned to expend over a number of years to the equivalent number of dollars that would be needed if MDA fully funded the assets in the fiscal year that incremental funding was to begin. We also considered whether MDA would need to acquire long lead items for the assets and stated those dollars in the base year that their purchase would be required. We then compared the total cost of incrementally funding the assets, as shown in MDA’s funding plans, to the fully funded cost that our methodology produced. To ensure that MDA-generated data used in our assessment are reliable, we evaluated the agency’s management control processes. We discussed these processes with MDA senior management. In addition, we confirmed the accuracy of MDA-generated data with multiple sources within MDA and, when possible, with independent experts. To assess the validity and reliability of prime contractors’ earned value management systems and reports, we interviewed officials and analyzed audit reports prepared by the Defense Contract Audit Agency. Finally, we assessed MDA’s internal accounting and administrative management controls by reviewing MDA’s Federal Manager’s Financial Integrity Report for Fiscal Years 2003, 2004, 2005, 2006, and 2007. Our work was performed primarily at MDA headquarters in Arlington, Virginia. At this location, we met with officials from the Aegis Ballistic Missile Defense Program Office; Airborne Laser Program Office; Command, Control, Battle Management, and Communications Program Office; BMDS Targets Office, and MDA’s Agency Operations Office. We also met with DOD’s Office of the Director, Operational Test and Evaluation and the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics in Washington, DC. 
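The present value restatement described at the start of this section can be sketched as follows; the 5 percent rate and the payment stream are hypothetical stand-ins, not GAO's actual discount rate or MDA's funding profile:

```python
def present_value(payments, rate):
    """Discount a stream of annual payments back to the base year;
    payments[0] falls in the base year and is not discounted."""
    return sum(p / (1.0 + rate) ** t for t, p in enumerate(payments))

# Hypothetical: incrementally funding $100M per year for 3 years at a
# 5 percent discount rate is equivalent to about $285.9M in the base year.
# That base-year figure is what gets compared against a fully funded price.
pv = present_value([100.0, 100.0, 100.0], 0.05)
```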
In addition, in Huntsville, Alabama, we met with officials from the Ground-based Midcourse Defense Program Office, the Terminal High Altitude Area Defense Project Office, the Kinetic Energy Interceptors Program Office, the Multiple Kill Vehicle Program Office, and BMDS Tests Office. We also met with Space Tracking and Surveillance System officials in Los Angeles, California. We conducted this performance audit from May 2007 to March 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Barbara Haynes, Assistant Director; LaTonya Miller; Sigrid McGinty; Michele R. Williamson; Michael Hesse; Steven Stern; Meredith Allen Kimmett; Kenneth E. Patton; and Alyssa Weir made key contributions to this report.

By law, GAO annually assesses the Missile Defense Agency's (MDA) progress in developing and fielding a Ballistic Missile Defense System (BMDS). Funded at $8 billion to nearly $10 billion per year, it is the largest research and development program in the Department of Defense (DOD). The program has been managed in 2-year increments, known as blocks. Block 2006, the second BMDS block, was completed in December 2007. GAO assessed MDA's progress in (1) meeting Block 2006 goals for fielding assets, completing work within estimated cost, conducting tests, and demonstrating the performance of the overall system in the field, and (2) making managerial improvements to transparency, accountability, and oversight. In conducting the assessment, GAO reviewed the assets fielded; contractor cost, schedule, and performance; and tests completed during 2007. 
GAO also reviewed pertinent sections of the U.S. Code, acquisition policy, and the charter of a new missile defense board. MDA made progress in developing and fielding the BMDS during Block 2006 but fell short of meeting its original goals. Specifically, it fielded additional assets such as land-based interceptors and sea-based missiles and upgraded other assets, including Aegis BMD-equipped ships. It also met most test objectives, with a number of successful tests conducted. As a result, fielded capability has increased. On the other hand, it is difficult to assess how well BMDS is progressing relative to the funds it has received because fewer assets were fielded than originally planned, the cost of the block increased by at least $1 billion, some flight tests were deferred, and the performance of the fielded system remains unverified. In particular, GAO could not determine the full cost of Block 2006 because MDA continued to defer budgeted work into the future, where it is no longer counted as a Block 2006 cost. Also making cost difficult to assess is a work planning method--referred to as level of effort--used by contractors that does not link time and money with what is produced. When not appropriately used, level-of-effort planning can obscure work accomplished, portending additional cost in the future. MDA is working to minimize the use of this planning method--a needed step as contractors overran their fiscal year 2007 budgets. Performance of the fielded system is as yet not verifiable because too few tests have been conducted to validate the models and simulations that predict BMDS performance. Moreover, the tests that are done do not provide enough information for DOD's independent test organization to fully assess the BMDS' suitability and effectiveness. GAO has previously reported that MDA has been given unprecedented funding and decision-making flexibility. 
While this flexibility has expedited BMDS fielding, it has also made MDA less accountable and transparent in its decisions than other major programs, making oversight more challenging. MDA, with direction from Congress, has taken several steps to address these concerns. MDA implemented a new way of defining blocks--its construct for developing and fielding BMDS increments--that should make costs more transparent. For example, under the newly-defined blocks, MDA will no longer defer work from one block to another. Accountability should also be improved as MDA will, for the first time, estimate unit costs for selected assets and report variances from those estimates. DOD also chartered a new board with more BMDS oversight responsibility than its predecessor, although it does not have approval authority for some key decisions made by MDA. Finally, MDA will begin buying certain assets with procurement funds like other programs. This will benefit transparency and accountability, because procurement funding generally requires that assets be fully paid for in the year they are bought. Previously, MDA, with Congressional authorization, was able to pay for assets incrementally over several years. Additional steps could be taken to further improve oversight. For example, MDA has not yet estimated the total cost of a block, and therefore, cannot have its costs independently verified--actions required of other programs to inform decisions about affordability and investment choices. However, MDA does plan to estimate block costs and have them verified at some future date. |
Under the authority of the Arms Export Control Act, the State Department controls the export and temporary import of defense articles and services. The State Department’s International Traffic in Arms Regulations explain specific licensing procedures. Companies that manufacture or export defense articles or provide defense services are required to register with the licensing office. Exporters must obtain a license to export defense articles or an agreement to export defense services. Exporters file license applications either electronically or in paper copy. Currently, 50 percent of applications are submitted electronically. For both electronic and paper copy applications, the State Department requires seven paper copies of supporting documentation, including brochures and technical data. The supporting documentation for one application can be several inches thick and occasionally much thicker. Applications are assigned a number and logged into the licensing office’s database. Applications are distributed to licensing officers for initial review according to munitions categories, for example, firearms, aircraft, ammunition, and spacecraft systems. The names of the parties involved in an application are automatically screened by the database against a watch list of parties about whom prior concerns have been raised to determine if more intensive reviews are necessary. Figure 1 shows the key phases of the license application review process. During the initial review, a licensing officer decides if there is enough information to make a decision. If there is, the officer makes a decision and takes final action on the application. If a licensing officer decides additional review is needed, the officer then decides which organizations, such as the Defense Department or other State Department offices, should conduct a further review. 
The Defense Department conducts a technical review, identifies national security concerns, and also identifies whether an application needs to be reviewed for Missile Technology Control Regime concerns. State Department offices review applications for foreign policy, human rights, and non-proliferation concerns. After deciding which offices need to review the application, the licensing officer forwards the application to administrative personnel who transmit the application package to the other agencies and offices. This referral process is not automated and relies on the physical distribution of paper documents via couriers to other agencies and inter-office mail to State Department offices. In fiscal year 2000, the licensing office made 28,496 referrals for 15,512 license applications (about one-third of all applications) to other agencies and State Department offices. The average processing time for these referred applications was 91 days. For the 66 percent of applications that were not referred to other agencies, the average processing time was 23 days. While applications are undergoing review outside the licensing office, administrative assistants maintain the application files, answer calls from license applicants concerning the status of reviews, record agencies’ recommendations as they are received from reviewing agencies and offices, and attach the recommendations to the paper copy files of the applications. Once all recommendations have been received for an application, the assistants close the referral process and submit the application to the licensing officer for final review and action. Under the Arms Export Control Act, the State Department is also required to notify Congress before approving applications that involve significant military equipment exports of defense articles and services valued over $50 million, or exports of major defense equipment valued over $14 million. 
The State Department cannot approve such applications until 15 days after notification for applications to export to North Atlantic Treaty Organization countries and Australia, Japan, or New Zealand; 15 days after notification for exports of commercial communication satellites for launch from and by nationals of the Russian Federation, Ukraine, or Kazakhstan; and 30 days after notification for other countries. If the Congress enacts a joint resolution during that time period prohibiting the export, the State Department cannot issue the license. In fiscal year 2000, the State Department notified Congress of 123 applications. These applications averaged nearly 7 months to review. Our analysis did not include the portions of the license application review process associated with congressional notifications. Many license applications take substantial time to process because they require attention by the licensing office, other agencies, and other State Department offices. License applications that are referred to other agencies and offices for review take an average of more than two months longer to review than applications that do not leave the licensing office. However, the State Department has not established formal guidelines for licensing officers to use to determine which agencies and State Department offices need to see certain license applications. As a result, the licensing office may be referring more applications than necessary. Further, officials in State Department reviewing offices generally do not receive training on how the licensing process works or how to conduct a review and consider the reviews a secondary work priority. The State Department lacks procedures to control the flow of license applications through the review process, and as a result, in fiscal year 2000, hundreds of applications were lost and thousands more were delayed. To improve license processing time, Congress increased the licensing office’s budget. 
The licensing office has hired additional license officers and is planning to develop a new electronic business processing system, but improvement efforts also need to focus on guidance and training for referrals, and the new electronic system must incorporate procedures for ensuring the efficient flow of applications through the process. Licensing officers lack formal guidelines on when to refer applications to other agencies and offices. As a result, applications may be unnecessarily referred, which results in longer processing time. In lieu of guidelines, licensing officers told us that they rely on prior cases and certain “rules of thumb” that they have learned, over time, from their predecessors or supervisors. For example, applications involving new weapon systems or technical data and applications for license agreements, except for those involving minor amendments to previously approved agreements, are all referred to the Defense Department. When no existing rule applies, some licensing officers told us that they use their own rule, which is “when in doubt, refer it out.” Licensing officers told us that they once used the State Department’s country policy handbook as a guide for referring applications, but the handbook has not been updated since 1996 and is too out-of-date to be used. Licensing officers also told us that because of the lack of referral criteria, newer licensing officers tend to refer more applications. Reviewing agencies and offices generally do not tell the State Department’s licensing office which applications they need to review. Over half of the license referrals are sent to the Defense Department, but there is no formal guidance explaining what applications the Department needs to review. Of the 11 State Department offices that frequently review applications, only one office, the Bureau of Democracy, Human Rights and Labor, provides written guidance on the applications it needs to review. 
An official in the Political-Military Affairs Bureau’s Office of Regional Security and Arms Transfer Policy told us that his office asks the licensing office for all applications that are referred to the geographic bureaus. However, he could not provide documentation of that guidance and licensing officers did not mention this guidance when we asked. An official in the Bureau of European and Eurasian Affairs said that he does not need to see most of the applications he receives. He told us he only needed license applications related to three countries, but had not told the licensing office. The State Department does not provide training to license reviewers so that they understand how the licensing process works and what to look for when conducting a license application review. Several officials had only a limited understanding of the process and the purpose of their reviews. Of the officials we spoke with in State Department reviewing offices, only one told us that he attended a training course on the export license process. Officials in six reviewing offices were military officers on detail, generally as military attachés in geographic bureaus, and are only in their positions for a few years. Several license reviewers told us that they are not always sure why they have been asked to review specific license applications and do not always understand the issues or concerns associated with an application. One official told us that he calls other offices to make sure his recommendation is consistent with those offices. Two officials assumed that they received all license applications associated with their geographic region and were surprised to find out that they review only a portion of those applications. 
One senior licensing officer told us that State Department license application reviewers do not provide adequate information when recommending a license denial, and licensing officers must go back to the reviewing official to obtain additional information to ensure that a denial is justified. Reviewing officials in 10 State Department offices told us that reviewing license applications is only one of their duties, and in some offices, it is a secondary duty. For example, in geographic bureaus the military attaché, whose primary responsibility is providing military advice related to their geographic region, is often in charge of ensuring that license reviews are conducted. One attaché showed us a pile of license applications that he had accumulated over the past 4 weeks. The attaché explained that he waits for enough applications to come in so he can review them all in one afternoon. Other State Department reviewers told us that there are no backup personnel to handle application reviews. If a reviewer is on leave or work-related travel, the license applications wait for the reviewer’s return with no action taken in the interim. The State Department has not established procedures to ensure that agencies are conducting timely reviews of referred applications, that license application referrals are received when they are sent through the mail or by courier, and that applications that become lost or delayed are quickly identified. Timely Reviews of Referred Applications: There are no guidelines governing the time permitted to review license applications, no requirement for a reviewing agency or office to justify a lengthy review, and the licensing office does not routinely check the status of a review unless an applicant calls to ask why an application is taking a long time. While the majority of reviews by other agencies and offices are completed in 26 days, 10 percent of referrals take 57 days or more. 
Several State Department license reviewers told us that applications frequently sit on their desks or the desks of other officials awaiting attention. As explained previously, several reviewing offices do not have backup personnel to handle application reviews when the reviewer is out of the office. Ensuring Referrals Are Received: The licensing office has no procedures to ensure that other agencies or State Department offices receive license applications from the licensing office. Licensing office officials told us that they periodically send the Defense Department a list of outstanding applications. However, no lists are routinely sent to State Department offices. Further, periodic lists do not identify applications until they are delayed for several weeks or more. State Department license reviewers told us that they frequently receive calls from applicants asking why their application is taking a long time. Many of these inquiries identified applications that were sent by the licensing office but never received by the reviewing organization, or applications for which the reviewer had returned a recommendation that the licensing office never received. When these cases are identified, the licensing office either sends another copy of the application to the reviewing office or the reviewing office sends a copy of its recommendation to the licensing office. As shown in table 1, our analysis of applications completed in September 2000 that were referred to the Defense Department or State Department offices identified 233 instances where applications took more than 2 weeks to travel from the licensing office to a reviewing office or from a reviewing office back to the licensing office. We identified 101 instances in that month alone where an application took over 4 weeks to travel from one office to the next. 
For fiscal year 2000 as a whole, there were 254 instances where applications were lost between the licensing office and a reviewing agency or office. Once identified as missing, usually as the result of a contact from the license applicant, they had to be re-sent. These applications averaged 7 months in the review process. Tracking Lost or Delayed Applications: The progress of license applications is not tracked within the licensing office as applications move from one stage in the process to the next. While the majority of license applications took only 2 or 3 days to pass from one administrative point to the next within the licensing office during fiscal year 2000, we identified 2,777 instances where applications took over 2 weeks and 674 of these took over 4 weeks to move from one point to the next while no substantive review activity occurred. The following describes three key stages of the licensing process where applications were delayed within the licensing office. When a licensing officer decides to refer an application for review to another agency or State Department office, administrative personnel make copies and send the applications to each reviewing organization selected by the licensing officer. Licensing officers record the date they make this decision and administrative personnel record the date the application is sent to an agency or a State Department office. The majority of applications were sent to agencies and offices within 2 days of the licensing officers’ decision, but 586 applications took more than 2 weeks and 118 of these took over 4 weeks. State Department personnel were not able to explain the delays. Agencies and State Department offices return a recommendation on each license application referral. Administrative personnel record the date each recommendation is returned. When all recommendations are received, the license referral process is complete and administrative personnel return the application to the licensing officer. 
Administrative personnel told us that they periodically check their files to see if they have overlooked any applications. These periodic checks depend on their workload. Our analysis showed that the majority of applications are returned within 3 days after the last recommendation is received, but 1,861 took over 2 weeks and 443 of these took over 4 weeks to be sent to a licensing officer for a final decision on the license application. Once a licensing officer decides to approve, deny, or return an application without action, the officer records the date and provides the application to administrative personnel who send the response to the applicant. The majority of responses took 3 days from the licensing officers’ decision to the time the response was sent to the applicant, but 330 responses took over 2 weeks and 113 of these took over 4 weeks. The licensing office has taken steps to improve license processing time by hiring additional licensing officers and is planning to upgrade the office’s electronic business processing system. The office’s expenditures increased from $4.6 million in fiscal year 1999 to $9.3 million in fiscal year 2000. The number of licensing officers has risen from 23 in fiscal year 2000 to 34 in fiscal year 2001. The office reported that increased staffing has improved median processing time for referred applications from 69 days in fiscal year 2000 to 60 days in September 2001. The licensing office is also developing an information technology strategy with the long-term goal of automating the licensing process. 
It plans to: automate the process for submitting license applications and develop a means to electronically send license applications and supporting documentation to the Defense Department, which is also developing its own electronic system to process applications; accommodate new processing requirements such as additional reports; add high-speed scanners and barcode printing and reading equipment; and support future requirements in the areas of programming and support. However, the State Department’s plan to automate the licensing process needs to focus on making significant improvements to the licensing process before applying new technology. The Director of the licensing office told us that they will make process adjustments and changes in personnel as they upgrade to a new electronic business system. In a 1994 study of fundamental practices that led to performance improvements in leading private and public organizations, we reported that electronic business system initiatives must be focused on process improvement. Information systems that simply use technology to do the same work, the same way, but only faster typically fail or reach only a fraction of their potential. In May 2000, we reported that when developing new electronic business processes, it is important to ensure that current business processes are working well before applying new technology. In fact, agency heads are required by the Clinger-Cohen Act of 1996 to analyze an agency’s mission and revise mission-related processes, as appropriate, before making significant investments in information technology. Not revising business processes prior to investing in new technology creates the risk of merely automating inefficient ways of doing business. In conducting our work, these observations were echoed by officials from other government agencies whom we met with to understand ways to automate business processes that are similar to the license review process. 
Officials from the Defense Electronic Business Program Office and the Patent and Trademark Office told us that an essential ingredient for effectively transitioning to a new electronic business system is reengineering and streamlining work processes before automating them. Automating an inefficient process will not likely make it more efficient. License applicants have long complained that they cannot predict how long a license review may take and are frustrated by delays. Although licensing officers and license reviewers require time to deliberate and ensure that license decisions are appropriate, a substantial number of applications become stalled between reviews by licensing officers and reviewers. Improvements in the efficiency, predictability, and timeliness of the process may be achieved with relatively small changes in guidelines and procedures. To improve the efficiency and timeliness of the munitions licensing process, we recommend that the Secretary of State direct the Office of Defense Trade Controls, in conjunction with reviewing agencies and offices, to develop criteria for determining which license applications to refer to other agencies and offices; develop formal guidelines and training for organizations that receive referrals so that reviewers clearly understand their duties when reviewing license applications; and establish timeliness goals for each phase of the licensing process. Further, we recommend that the Secretary of State direct the Office of Defense Trade Controls to establish a mechanism to track license applications through each phase of the process to ensure timeliness goals are met and applications are not lost or delayed. To prevent embedding an inefficient process into the State Department’s planned electronic business processing system, we recommend that the Secretary of State ensure the steps outlined above are taken before proceeding with a new electronic processing system. 
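A routine tracking mechanism of the kind recommended here need not wait for a new electronic system. The following is a minimal sketch under stated assumptions; the phase names, goal values, and case data are hypothetical, not the Department's:

```python
from datetime import date

# Illustrative timeliness goals for each phase, in calendar days (assumed values).
PHASE_GOALS = {"intake": 3, "referral": 3, "agency_review": 25, "final_action": 3}

# Hypothetical pending applications: current phase and the date that phase began.
pending = [
    {"case": "00-201", "phase": "referral", "entered": date(2001, 10, 8)},
    {"case": "00-202", "phase": "agency_review", "entered": date(2001, 9, 1)},
]

def overdue(applications, today):
    """Return cases whose time in their current phase exceeds the phase goal."""
    return [
        a["case"]
        for a in applications
        if (today - a["entered"]).days > PHASE_GOALS[a["phase"]]
    ]

print(overdue(pending, date(2001, 10, 10)))
```

Run daily, such a report would surface stalled applications well before an applicant calls to ask about them.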
The State Department should coordinate its efforts with the Defense Department because the Defense Department is also developing a new electronic system and receives the majority of license application referrals. In commenting on our draft report, the State Department said that certain of our findings appear to be premised on conjecture and a failure to comprehend how foreign policy provides the overall context for munitions export controls and that other findings appear to be exaggerated and reflect out-of-context presentations. Also, the Department stated that our presentation of data was inflammatory and trivialized the licensing officer’s role in referring license applications for review. Further, the Department stated that our characterization of its plans to enhance automation was totally inaccurate. The Department appears to have missed the point that our report, as stated in our scope and methodology, is primarily concerned with the procedures in place to ensure that license applications flow smoothly through the review process, not with the time spent in substantive license application reviews. In our review of State Department data, we took extreme care not to confuse legitimately lengthy license application reviews caused by national security and foreign policy concerns with delays caused by administrative inadequacies. Regarding the administrative process, the Department provided only one piece of additional information: that the licensing office reviews computer runs of pending license applications to determine their status. However, the point of our finding is that such monitoring needs to be done on a routine basis, not sporadically, which is the current situation. Licensing office personnel told us that these reviews of pending applications are generally done on an “as time permits” basis. We have modified our report to accommodate this additional information. 
The Department referred to our point, early in the report, that industry has raised concerns about the effect of the process on U.S. defense industry sales as an example of our exaggeration and out-of-context presentation. It is not clear from the Department’s comment whether it is taking issue with the validity of the comment or our statement that industry has raised the concern. This statement was not intended to validate industry concerns but was merely meant to explain the reason why we were asked to examine the State Department’s licensing process. The Department’s statement that the report is inflammatory relates to our statement that “hundreds” of applications were lost and “thousands” were delayed while no substantive reviews occurred. Our report provides a detailed explanation of the data on which our comment was based. Our use of the term “lost” refers to the fact that applications referred for review were sent by the licensing office but not received by the reviewing office and had to be re-sent. The Department states that no licenses were lost because the licensing office retains the original. The Department also pointed out that the “lost” applications are a very small percentage of the total number of license application referrals. We agree. Our point, however, is that applications that are lost could be easily identified and forwarded by a routine status review. Currently, the time required to process these lost applications, as we point out in the report, averages about 7 months. In terms of the delayed applications, the Department commented that it does not keep detailed diaries on every application and that the lack of an audit trail should not be a basis for “unqualified conjecture or speculation.” Our statement that thousands of applications were delayed is based solely on detailed data provided by the State Department. 
The Department stated that we trivialized the role of the licensing officer when we explained that there are no formal guidelines to assist in referring license applications and the Department further stated that decisions to refer license applications rely on practice, precedent, and the current state of foreign policy. The comments explained that licensing officers are trained to consider applications with the utmost seriousness. In our opinion, the lack of agreement and understanding between the licensing office and reviewing offices on the referral process demonstrates the problems that can occur when a process that requires actions and interpretations by a variety of people lacks formal guidelines. Our findings and recommendations were based on lengthy and structured interviews with all licensing officers who had over one year of experience and officials in State Department offices that receive these referrals. Based on the information provided by these officials, it is clear that State Department offices that receive referrals are at times confused about the referral process and licensing officers believe that further guidance would assist in making decisions to refer or not to refer a license application. In regard to the Department’s comment that our report is inaccurate concerning its automation plans, we held lengthy discussions with managers from the Office of Defense Trade Controls concerning their information technology plans and evaluated existing copies of automation plans. Based on the State Department comments, we requested any additional information on technology modernization plans that we had not seen. The Department provided no further information concerning its plans. As stated in the report, the Director of the Office of Defense Trade Controls told us that he plans to correct inadequacies in the licensing process during the modernization. 
As we pointed out, past GAO work has shown that proceeding with information technology modernization without first correcting problems in current systems risks merely automating inefficient ways of doing business. The State Department did not agree with our recommendation to develop criteria for determining which license applications to refer to other agencies and offices and to develop guidelines and training for offices that receive referrals. The Department commented that it has made a conscious, deliberate decision not to develop guidelines that address every country or commodity. The Department explained that it has written operational and policy guidelines that are used extensively. The guidelines, however, are not written down in a single document and are heavily reliant on practice and the current state of foreign policy. The Department acknowledged that practice within certain regions needs to be updated and made uniform. During our structured interviews with licensing officers, we asked if there were written guidelines to guide license referral decisions and the licensing officers explained that there were none except for referrals related to the State Department’s Bureau of Democracy, Human Rights and Labor. The Department’s response to this recommendation did state that training for reviewing officers in State Department offices is needed, and the Department intends to discuss this issue as its information technology system is enhanced. In response to our recommendation to establish timeliness goals, the Department said that it is considering a timeliness goal of 25 working days for license referrals, which is similar to the Department of Defense’s self-imposed goal. The Department also explained that licensing officers have timeliness goals in their performance plans. Our concern in making this recommendation was not with the time spent in substantive review of applications but rather with the administrative procedures in the process. 
That is, those portions of the process in which paper moves from one desk to another and no “value-added” steps occur. The comments did not mention timeliness goals for administrative phases of the process within the Office of Defense Trade Controls. The Department agreed with our recommendation to establish a mechanism to track license applications; however, it also stated that the capability to track already exists and the information technology modernization plan that is under development will be engineered to enable tracking. We agree that tracking is a current capability. Our recommendation is to begin using that capability to routinely track license applications. We hope that the Department intends to do that rather than waiting for a new system that has not yet been developed. The Department did not comment on our recommendation to ensure the steps outlined in the previous recommendations are taken before proceeding with a new electronic processing system. To determine conditions that cause delays in the licensing process, we reviewed regulations governing the process, met with personnel who are involved in the licensing process, reviewed license applications, and collected and analyzed databases that show the flow of applications. We reviewed the Arms Export Control Act and the International Traffic in Arms Regulations to understand the rules that govern license processing. We also discussed guidelines with licensing office officials and license reviewers to understand written and verbal guidelines associated with the process. To understand the process of reviewing license applications, we met with all licensing officers with more than one year of experience and with administrative personnel from the licensing office. Our interviews with the licensing officers were detailed and structured and we provided our questions to Office of Defense Trade Controls management in advance. 
To understand the role of license reviewers, we met with reviewers in the 11 State Department offices that review nearly all referred applications within the State Department. We also met with Defense Department officials who manage the review of license applications. We selected a random sample of applications that were completed in September 2000 and took over 90 days to process in order to understand the progress of license applications that take longer to review. To analyze the flow of license applications through the process, we obtained the licensing office’s database that has dates associated with the progress of license application reviews. We reviewed data on all license applications completed in fiscal year 2000. To determine how efficiently applications were transferred from one office to another, we compared data logs from the Defense Department and State Department reviewing offices with the licensing office’s database for applications completed in September 2000. We cannot be certain of the reliability of the data we reviewed. The State Department does not have a data dictionary that explains the data. As a substitute, we discussed key elements of the database with a State Department representative to ensure that we accurately interpreted the data. In a recent review of the Office of Defense Trade Controls, the State Department Inspector General sampled selected elements of the database and found data entry errors. While conducting our analysis, we also found data entry inaccuracies. We worked with a State Department representative to correct some of these inaccuracies. However, some data fields did not have entries. As a result, data for some license applications were incomplete. We also collected information from the licensing office on its plans to improve license processing. We obtained information on its budget, staffing, and plans for a new electronic business system. 
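The log comparison described above amounts to a set difference between what one office logged as sent and what the next office logged as received. A minimal sketch, with hypothetical referral identifiers rather than actual case numbers:

```python
# Hypothetical logs: referral IDs the licensing office recorded as sent,
# and referral IDs the reviewing office recorded as received.
sent_log = {"00-101", "00-102", "00-103", "00-104"}
received_log = {"00-101", "00-103"}

# Referrals logged as sent but never logged as received are candidates
# for the "lost" category and would need to be re-sent.
lost = sorted(sent_log - received_log)
print(lost)
```

The same comparison in the other direction (recommendations returned but never logged by the licensing office) identifies the second kind of loss described earlier in the report.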
We reviewed prior work to determine appropriate ways to implement new electronic business systems and met with the Defense Electronic Business Program Office and the Patent and Trademark Office to learn from their experiences. We also met with Defense Department officials who review State Department license applications to understand their efforts to coordinate the implementation of their electronic business system with State Department efforts. We conducted our work between May 2001 and November 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after its issuance. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Foreign Relations, the Senate Committee on Banking, Housing, and Urban Affairs, and the House Committee on International Relations. We will also send copies to the Secretaries of State and Defense and to the Director of the Office of Management and Budget. This report will also be made available on GAO’s home page, http://www.gao.gov. If you or your staff have questions concerning this report, please contact me at (202) 512-4841. Others making key contributions to this report were Blake Ainsworth, Heather Barker, Raymond H. Denmark, Thomas J. Denomme, Minette D. Richardson, and John P. Ting. 1. We changed the text to reflect that the U.S. export licensing process can be lengthy because of foreign policy and national security considerations, not just national security considerations. 2. We changed the text on page 8 to state that the licensing office does not routinely check the status of license reviews. 3. This State Department comment is not correct. Through discussions with State Department budgeting officials, we determined that the information in the draft report is correct. 
The data we reported are actual Office of Defense Trade Controls expenditures for fiscal years 1999 and 2000. The State Department comments refer to authorized funding levels that were not actually spent in fiscal year 1999. 4. The report states that the State Department does not have formal guidelines for referring license applications to other agencies and offices. 5. The licensing office did not provide sufficient information for us to validate this statement. 6. We revised text on page 8 of the report to say that no lists are routinely sent to State Department offices. 7. Text revised. 8. Text revised. 9. Text revised. 
As cyber threats have grown in sophistication, federal efforts to address them have evolved. Presidential Decision Directive 63, signed in May 1998, established a structure under White House leadership to coordinate the activities of designated lead departments and agencies, in partnership with their counterparts from the private sector, to eliminate any significant vulnerabilities to both physical and cyber attacks on our critical infrastructures, including computer systems. National cyber policy was updated in 2003 with The National Strategy to Secure Cyberspace. Presidential Decision Directive 63 was superseded later that year by Homeland Security Presidential Directive 7, which assigned the Secretary of Homeland Security responsibility for coordinating the nation’s overall critical infrastructure protection efforts, including protection of the cyber infrastructure, across all sectors (federal, state, local, and private) working in cooperation with designated sector-specific agencies within the executive branch. Both of these policies focused on defensive strategies, and Homeland Security Presidential Directive 7 did not emphasize protection of federal government information systems. Subsequent classified presidential directives and strategic planning documents have continued to reflect evolving federal policy in response to cyber threats. Recognizing the need for common solutions to improve cybersecurity, the White House, Office of Management and Budget, and various federal agencies have launched or continued several governmentwide initiatives that are intended to enhance information security at federal agencies. According to Director of National Intelligence implementing guidance, in 2008 the Comprehensive National Cybersecurity Initiative was begun in order to develop an approach to address current threats, anticipate future threats and technologies, and foster innovative public-private partnerships. 
It was created to bridge cyber-related missions for federal agencies by asking them to undertake a set of 12 initiatives and 7 strategic enabling activities. According to DOD officials, these initiatives include defensive, offensive, research and development, and counterintelligence efforts. Programs focus primarily on the security of executive-branch networks, which represent only a fraction of the global information and communications infrastructure on which the United States depends. In May 2009, the National Security Council and Homeland Security Council completed a 60-day interagency review intended to assess U.S. policies and structures for cybersecurity and outline initial areas for action. The resulting report recommended, among other things, appointing an official in the White House to coordinate the nation’s cybersecurity policies and activities, preparing an updated national cybersecurity strategy, developing a framework for cyber research and development, and continuing to evaluate the Comprehensive National Cybersecurity Initiative. Following these federal government efforts, DOD undertook several initiatives of its own to develop policy and guidance on cyberspace operations. In 2006 and 2007, The National Military Strategy for Cyberspace Operations and associated Implementation Plan provided a strategy for the U.S. military to achieve military superiority in cyberspace and established a military strategic framework that orients and focuses DOD action in the areas of military, intelligence, and business operations in and through cyberspace. In 2008, U.S. Strategic Command developed the Operational Concept for Cyberspace, which identifies near-term concepts to improve operations in and through cyberspace and gain superiority over potential adversaries in support of national objectives. The 2009 Quadrennial Roles and Missions Review Report discussed efforts by the Cyber Issue Team, jointly led by the Office of the Under Secretary of Defense for Policy and U.S. 
Strategic Command, which addressed cyberspace issues related to developing, structuring, and employing the cyberspace force. Also in 2009, U.S. Strategic Command developed an Operations Order titled Operation Gladiator Phoenix to provide DOD with a strategic framework to operate, secure, and defend the global information grid. As early as 2006, the Quadrennial Defense Review highlighted the department’s need to be capable of shaping and defending cyberspace. DOD published a new Quadrennial Defense Review in February 2010, which designated cyberspace operations as a key mission area and discussed steps the department was taking to strengthen capabilities in the cyber domain, including centralizing command of cyber operations and enhancing partnerships with other agencies and governments. Currently, DOD continues to develop and update cyberspace policies. Different types of cybersecurity threats from numerous sources may adversely affect computers, software, networks, agency operations, industry, or the Internet itself. Cyber threats to federal information systems continue to evolve and grow. These threats can be unintentional or intentional, targeted or nontargeted, and can come from a variety of sources. Unintentional threats can be caused by inattentive or untrained employees, software upgrades, maintenance procedures, and equipment failures that inadvertently disrupt systems or corrupt data. Intentional threats include both targeted and nontargeted attacks. An attack is considered to be targeted when a group or individual attacks a specific system or cyber-based critical infrastructure. A nontargeted attack occurs when the intended target of the attack is uncertain, such as when a virus, worm, or other malicious software is released on the Internet with no specific target. Government officials are concerned about cyber attacks from individuals and groups with malicious intent, such as criminals, terrorists, and adversarial foreign nations. 
Threats to DOD computer networks posed by the intelligence branches of foreign countries and hackers alike represent an unprecedented national security challenge. For example, in February 2009, the Director of National Intelligence testified that foreign nations and criminals have targeted government and private-sector networks to gain a competitive advantage and potentially disrupt or destroy them, and that terrorist groups have expressed a desire to use cyber attacks as a means to target the United States. The Federal Bureau of Investigation has also identified multiple sources of threats to our nation’s critical information systems, including foreign nations engaged in espionage and information warfare, domestic criminals, hackers, virus writers, and disgruntled employees and contractors working within an organization. Table 1 summarizes those groups or individuals that are considered to be key sources of cyber threats to our nation’s information systems and cyber infrastructures. These groups and individuals have a variety of attack techniques at their disposal. Furthermore, as we have previously reported, the techniques have characteristics that can vastly enhance the reach and effect of their actions, such as the following: Attackers do not need to be physically close to their targets to perpetrate a cyber attack. Technology allows actions to easily cross multiple state and national borders. Attacks can be carried out automatically, at high speed, and by attacking a vast number of victims at the same time. Attackers can more easily remain anonymous. Table 2 identifies the types and techniques of cyber attacks that are commonly used. Various terms are used within the DOD cyberspace domain. 
For example, in May 2008, DOD defined cyberspace as the “global domain within the information environment consisting of the interdependent network of information technology infrastructures, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.” Also, DOD defines computer network defense as actions taken to protect, monitor, analyze, detect, and respond to unauthorized activity within DOD information systems and computer networks. For further discussion of policies, programs, and tools that DOD uses to protect its networks, see appendix III. Table 3 lists several key terms used within the DOD cyberspace domain. DOD’s organization to address cybersecurity threats is decentralized and spread across various offices, commands, military services, and military agencies. DOD cybersecurity roles and responsibilities are vast and include developing joint policy and guidance and operational functions to protect and defend its computer networks. DOD is taking proactive measures to better address cybersecurity threats, such as developing new organizational structures, led by the establishment of the U.S. Cyber Command, to facilitate the integration of cyberspace operations. Cybersecurity roles and responsibilities within DOD are spread across various DOD components. The current cybersecurity organizational structure is decentralized and there are many DOD components that hold responsibilities. Cybersecurity roles and responsibilities include developing joint policy and guidance and operational functions to defend and secure DOD networks, and are spread among the Office of the Secretary of Defense, Joint Staff, functional and geographic combatant commands, military services, and military agencies. According to DOD officials, to ensure a holistic approach and limit potential stovepiping, the department has begun to develop cybersecurity expertise across various offices. 
Figure 1 illustrates DOD’s cyber organization as of March 2010. Additionally, there are other organizations that play a pivotal role in cybersecurity, such as the DOD intelligence agencies, National Guard, and defense criminal investigative organizations. DOD is taking proactive measures to reorganize and develop new organizational structures to better address cybersecurity threats. However, it is too early to tell if these organizational changes will help DOD better address cybersecurity threats. There are several offices within both the Office of the Secretary of Defense and the Joint Staff that share responsibility for developing joint cyber policy, guidance, and doctrine for DOD activities that occur in and through cyberspace. For example, within the Office of the Secretary of Defense, the offices of the Under Secretary of Defense for Policy; Assistant Secretary of Defense for Networks and Information Integration; and the Under Secretary of Defense for Intelligence, all share responsibility for developing joint cyber policy and guidance. For example, according to DOD officials, both the Assistant Secretary of Defense for Networks and Information Integration and the Under Secretary of Defense for Policy have responsibility for strategic-level guidance and oversight for computer network operations and information assurance. Appendix II provides more detailed information on the cyber-related responsibilities of the DOD offices. Several offices within the Joint Staff also hold responsibilities for developing joint cyber policy, guidance, and doctrine for DOD activities that occur in and through cyberspace. The Joint Staff’s cyber responsibilities include establishing and developing doctrine, policies, and associated joint tactics, techniques, and procedures for DOD’s global information grid, information assurance, and joint and combined operations. 
According to DOD directive O-3600.01, the Joint Staff is to develop and maintain joint doctrine for core, supporting, and related information operations capabilities in joint operations and ensure that all joint education, training, plans, and operations are consistent with information operations policy, strategy, and doctrine. The Joint Staff is also responsible for developing, coordinating, and disseminating information assurance policies and doctrine for joint operations. Additionally, several Joint Staff divisions and Joint Staff–led coordination forums have cybersecurity responsibilities. The U.S. Joint Forces Command also has doctrine development and operational roles. Chairman of the Joint Chiefs of Staff Instruction 5120.02B establishes U.S. Joint Forces Command as a voting member of the joint doctrine development community, responsible for developing and submitting recommendations for improving existing joint doctrine or initiating new joint doctrine projects and conducting front-end analyses of all joint doctrine project proposals and providing appropriate recommendations. Moreover, as with all other combatant commands, U.S. Joint Forces Command is responsible for conducting computer network defense to secure its portion of the DOD global information grid, including developing and implementing information operations and information assurance programs and activities. DOD also has numerous organizations with operational roles and responsibilities to defend and secure DOD computer networks. U.S. Strategic Command is considered the lead for cyberspace operations within DOD. According to the 2008 Unified Command Plan, U.S. Strategic Command is responsible for synchronizing DOD’s planning for cyberspace operations, and it does so in coordination with other combatant commands, the military services, and defense agencies. In order to operationalize its missions, U.S. 
Strategic Command delegated operational and tactical-level planning, force execution, and day-to-day management of forces to its joint functional component commands. Prior to the establishment of U.S. Cyber Command, these component commands conducted cyberspace-related operations for U.S. Strategic Command while the headquarters focused on strategic-level integration and advocacy. These component commands were as follows:

- Joint Functional Component Command for Network Warfare (JFCC NW), which was responsible for planning, integrating, and coordinating cyberspace capabilities and integrating with all necessary computer network operations capabilities.
- Joint Task Force–Global Network Operations, which was responsible for DOD’s global network operations and directing the operation and defense of DOD’s global information grid.
- Joint Information Operations Warfare Center, which is the lead entity responsible for planning, integrating, synchronizing, and advocating for information operations across DOD, including computer network operations, electronic warfare, psychological operations, military deception, and operations security.

In June 2009, as part of the creation of U.S. Cyber Command, U.S. Strategic Command was directed by the Secretary of Defense to disestablish Joint Task Force–Global Network Operations and Joint Functional Component Command for Network Warfare in preparation for U.S. Cyber Command reaching its full operating capability, planned for October 2010. Additionally, the military departments were directed to identify and provide appropriate component support to U.S. Cyber Command to be in place and functioning by that same date. Other combatant commands also have operational roles and responsibilities for defending and securing DOD computer networks.
According to DOD Directive 8500.01E, the combatant commands must also develop and implement their own information assurance programs for their respective portions of the DOD global information grid and must provide training and education for their information assurance personnel. Certain combatant commands have unique responsibilities. For instance, U.S. Northern Command has specific responsibilities and is the DOD lead in assisting the Department of Homeland Security and other civilian agencies during cyber-related incidents as part of its Defense Support of Civil Authorities missions—or civil support. During these incidents, U.S. Northern Command—and in some instances U.S. Pacific Command—will be supported by U.S. Strategic Command. Functional combatant commands have a global mission and a global requirement for network operations support. Some functional combatant commands, such as U.S. Special Operations Command, operate their own specific functional global networks. The military service components have a significant role in providing cybersecurity while operating and defending their respective networks within DOD’s global information grid. In their role, each military service is responsible for fielding, training, and equipping cyberspace forces. They also protect, defend, and conduct restoration measures for the networks they control, and ensure that service-managed portions of DOD’s global information grid are secure and interoperable, with appropriate information assurance and trained personnel. Appendix II has more information on the military services’ current cyber organization. Defense agencies also share responsibilities related to cyber operations. For example, the Defense Information Systems Agency is a combat support agency responsible for the day-to-day management of DOD’s global information grid, communication and computer-based information systems, and performs significant network operations support functions. 
Together with the military services, the agency has the responsibility to build, maintain, and operate DOD’s global information grid. It is also responsible for employing information assurance operations and securing DOD’s enterprise systems. The agency reports to the Assistant Secretary of Defense for Networks and Information Integration, and its director also currently commands Joint Task Force–Global Network Operations. There are many other agencies and organizations that support DOD cyber efforts, including the DOD intelligence agencies, the National Guard, and defense criminal investigative organizations. The intelligence agencies play an integral role in enhancing cybersecurity both by increasing our ability to detect and identify adversary cyber activity and by expanding our knowledge of the capabilities, intentions, and cyber vulnerabilities of our adversaries. For example, the National Security Agency provides information assurance support to DOD, prescribes minimum standards for protecting national security systems, and provides warning support to other DOD components. The Director of the National Security Agency was also designated to serve as commander of the Joint Functional Component Command for Network Warfare. The Defense Intelligence Agency is a combat support agency that provides all-source intelligence to combatant commanders, defense planners, and national security policymakers, as well as manages, operates, and maintains its own network and information assurance program. The Office of the Director of National Intelligence provides direction for signals intelligence collection in cyberspace through the National Intelligence Strategy and National Intelligence Priority Framework. The National Guard—comprising the Army National Guard and Air National Guard—provides cyber capabilities to meet military service and combatant commander requirements and can be leveraged under state authorities to assist civil authorities.
According to Air National Guard officials, skilled personnel who come from information technology, banking, and other sectors have been utilized to provide cyber capabilities to agencies with insufficient manpower. Defense criminal investigative organizations conduct cyber-related criminal and counterintelligence investigations that may involve offenses under title 18 of the U.S. Code. These organizations include (1) the Naval Criminal Investigative Service; (2) the Air Force Office of Special Investigations; (3) the Defense Criminal Investigative Service; (4) the Army Criminal Investigation Command; and (5) Army Counterintelligence, as well as the related DOD Cyber Crime Center. DOD is taking proactive measures to reorganize and develop new organizational structures to better address cybersecurity threats. As a result of significant cyber challenges and organizational constraints, DOD is conducting a multitiered organizational restructuring for cyber organizations, including the establishment of the U.S. Cyber Command, and changes within the Office of the Secretary of Defense and the military services. The establishment of U.S. Cyber Command is DOD’s primary organizational change to better address cybersecurity threats. On June 23, 2009, the Secretary of Defense signed a memorandum directing U.S. Strategic Command to establish the U.S. Cyber Command as a subordinate unified command with responsibility for military cyberspace operations. In this memorandum, the Secretary of Defense stressed the new national security risks that arise from DOD’s increasing dependency on cyberspace and the growing array of cyber threats and vulnerabilities. DOD has recognized that it lacks integration of computer network operations at the command and operational levels. DOD anticipates that the U.S.
Cyber Command will focus on the integration of cyberspace operations, will synchronize DOD cyber missions and warfighting efforts, and will provide support to civil authorities and international partners. The Secretary of Defense recommended that the director of the National Security Agency become the commander of the U.S. Cyber Command and that the command retain current authorities to conduct cyberspace responsibilities that had been given to the U.S. Strategic Command in the 2008 Unified Command Plan. Additionally, U.S. Strategic Command will delegate its cyberspace missions to U.S. Cyber Command in a phased approach. Initial operating capability was established in October 2009; and full operational capability is anticipated in October 2010. By full operational capability, U.S. Strategic Command will disestablish both the Joint Task Force–Global Network Operations and the Joint Functional Component Command for Network Warfare, and their existing personnel will be incorporated into the new subunified command. As a result, the Director of the Defense Information Systems Agency will relinquish all duties as the Commander of the Joint Task Force–Global Network Operations. However, the Defense Information Systems Agency will establish a field office and a support element at U.S. Cyber Command to ensure an operational linkage between the new command and the agency. The Secretary of Defense also directed actions in his own office and in each military service intended to improve the diffuse efforts related to cyberspace operations. In response, the Office of the Under Secretary of Defense for Policy is leading a review of policy and strategy to develop a comprehensive approach to DOD cyberspace operations. Additionally, the Office of the Under Secretary of Defense for Policy is conducting an organizational realignment to better address cybersecurity. 
The office created a separate division—Deputy Assistant Secretary of Defense for Cyber and Space Policy—to be a central focal point for cyberspace policy in the Office of the Secretary of Defense. The military services are also working to identify and provide appropriate component support to the U.S. Cyber Command prior to its full operational capability in October 2010. The military services are developing and implementing the following new initiatives. On January 29, 2010, the U.S. Navy established the Fleet Cyber Command, 10th Fleet to provide component support to the U.S. Cyber Command. The Air Force initially planned to establish a major cyber command. Instead, it stood up the 24th Air Force, which will provide cyber forces and capabilities to the U.S. Cyber Command. The Army plans to support the U.S. Cyber Command through Army Forces Cyber, and the Marine Corps established Marine Forces Cyber. DOD officials we interviewed expressed varying opinions on whether the establishment of the U.S. Cyber Command will help DOD better address cybersecurity threats. Many officials with whom we spoke said that it was a step in the right direction, as the command will potentially provide a single point of accountability for cyber-related issues. Additionally, the Joint Staff concluded that a four-star subunified Cyber Command under U.S. Strategic Command, with its commander dual-hatted as the Director of the National Security Agency, would be the most effective way to address the need to better integrate cyber defense, attack, exploitation, and network operations. However, officials from some combatant commands expressed concern about the command’s close relationship to the DOD intelligence community. These officials believed that with the Director of the National Security Agency dual-hatted as the Commander of U.S. Cyber Command, the U.S. Cyber Command will become too focused on intelligence structures, to the detriment of operations in support of the combatant commands.
Additionally, DOD officials expressed some concern regarding the reduced role of the Defense Information Systems Agency with respect to the U.S. Cyber Command. The agency head was previously also the Commander of the Joint Task Force–Global Network Operations. Under the new relationship, the Defense Information Systems Agency will continue to provide network and information assurance technical assistance through a field office and a support element at U.S. Cyber Command. Several joint doctrine publications address aspects of cyberspace operations, but DOD officials acknowledge that this is insufficient. None of the joint publications that mention “cyberspace operations” contains a sufficient discussion of cyberspace operations. DOD doctrine also lacks key common definitions. DOD recognizes the need to develop and update cyber-related joint doctrine and is currently debating the merits of developing a single cyberspace operations joint doctrine publication in addition to updating all existing doctrine. However, there is no timetable for completing the decision-making process or for updates to existing doctrine. DOD has numerous joint doctrine publications that discuss cyber-related topics; however, the content is incomplete or out of date and DOD lacks joint doctrine that fully addresses cyberspace operations. The discussion of cyber-related topics in current joint doctrine publications is limited and insufficient, leaving problems such as incomplete definitions. Other discussions—such as what constitutes a cyber force—are not uniformly defined across DOD doctrine publications and guidance. DOD recognizes the need to develop and update cyber-related joint doctrine and is currently debating the merits of developing a single, overarching cyber joint doctrine publication in addition to updating all existing doctrine with respect to cyberspace operations. However, DOD has not set a timetable for the completion of these efforts. 
According to DOD, the purpose of joint doctrine is to enhance the operational effectiveness of U.S. forces. Joint doctrine consists of fundamental principles to guide the employment of U.S. military forces in coordinated action toward a common objective and should include key terms, tactics, techniques, and procedures. In order to be effective, combatant commands and military services need to understand the joint functions within the domain and the manner in which those joint functions are integrated globally as well as operationally. The cyberspace domain is inherently joint; it cuts across all combatant commands, military services, and agency boundaries and supports engagement operations for all geographic combatant commands. Therefore, DOD expects that a joint publication focusing on all aspects of cyberspace operations will not only enhance the operational effectiveness and performance of joint U.S. forces but also provide a doctrinal basis for collaborative planning and interagency coordination. DOD determined that it has addressed cyberspace-related topics in at least 16 DOD joint doctrine publications and mentions “cyberspace operations” in at least 8 joint publications. This reflects the importance of cyber-related issues across the body of joint doctrine. However, according to combatant command officials, the discussions and content in these publications are insufficient and do not completely address cyberspace operations or contain critical related definitions. U.S. Joint Forces Command’s assessment of the existing state of joint doctrine for cyber issues concluded that while the term “cyberspace operations” was addressed or mentioned in 8 approved and draft publications, none contained a significant discussion of cyberspace operations. U.S.
Joint Forces Command’s assessment of DOD joint publications showed that the majority of references to cyberspace operations come from Joint Publication 3-13, Information Operations—the current publication with the most relevance to cyber issues. While this publication may have been sufficient for its intended purposes at the time it was written in 2006, U.S. Strategic Command reported that Information Operations should be revised to use updated cyberspace terminology and content. U.S. Strategic Command reported that the publication is not currently sufficient and does not provide a basis for cyberspace joint doctrine for three key reasons. First, its definition of cyberspace does not reflect the scope of the current definition of cyberspace that was approved by the Deputy Secretary of Defense in May 2008. The definition in the publication restricts cyberspace to “digital information communicated over computer networks,” while the current approved definition recognizes cyberspace as a global domain within the information environment that includes the Internet, telecommunications networks, computer systems, and embedded processors and controllers. Second, the publication discusses computer network operations as a component of information operations by grouping it with military deception, operations security, psychological operations, and electronic warfare, but it does not recognize the scope of computer network operations as a warfighting domain. Third, Joint Publication 3-13 omits integral elements in the discussion of computer network operations that are important to provide a complete view and scope of cyberspace operations. For example, the publication discusses computer network attack and computer network defense but does not thoroughly address key elements such as computer network defense response actions, computer network attack–operational preparation of the environment, or network operations.
Our analysis of the current usage of cyber-related terms confirms that these are considered important elements of both computer network operations and cyberspace operations. Another example of the shortfall in existing doctrine is the lack of a common definition for what constitutes cyber personnel in DOD. According to a U.S. Joint Forces Command report, the cyberspace operations community lacks a common dictionary of terms, and the terms defined in current doctrine are not used uniformly. This can cause confusion in planning for adequate types and numbers of personnel. Because career paths and skill sets are scattered across various career identifiers, the military services and commands vary in their scope and definitions of what constitutes cyber personnel. As a result, there are cases in which the same cyber-related term may mean something different among the services. In another report, the U.S. Joint Forces Command found that 18 different cyber position titles across combatant commands are used to identify cyberspace forces. Some of these titles may be inconsistent from command to command and are likely to be duplicates. According to the report, U.S. Pacific Command had the largest number of cyber personnel positions and position titles compared to other combatant commands, while some commands reported no cyber personnel. This may be due in part to duplicative and differing definitions among the combatant commands of what constitutes cyber personnel. 
Examples of cyberspace-related position titles from combatant commands include Computer Network Attack Intelligence Officer, Computer Network Attack Ops Officer, Computer Network Attack Ops Planner, Computer Network Attack Planner, Computer Network Attack Weapons Risk Assessor, Computer Network Defense Planner, Computer Network Operations Exercises Officer, Computer Network Operations Planner, Computer Network Operations Technician, Network Attack Planner, Network Defender, Network Defense Planner, Network Warfare Planner, Information Assurance Support Person, Intelligence Support to Computer Network Attack, Intelligence Support to Computer Network Defense, and Intelligence Systems Officer/Computer Network Defense. The lack of clear guidance on cyber personnel in joint doctrine is also reflected in the military services. The military services do not currently have specific job identifiers for cyberspace operations, and cyberspace-related jobs are generally identified under the umbrella of intelligence, communication, or command and control. While the military services bring unique capabilities based upon their individual core competencies, cyberspace forces must meet joint standards. U.S. Joint Forces Command, whose mission is to synchronize global forces, reported that it is unable to quickly and easily identify personnel who are certified for cyber operations, as there is no identifier in the personnel records that indicates whether the individual is a “cyber warrior.” Additionally, U.S. Strategic Command reviewed current military service cyber force identifiers and reported that the Air Force identifies computer-related careers under “general” for enlisted personnel and under “non-technical” skills for officers; the Navy identifies computer network operation careers under “information warfare” for officers and “information systems technicians” or “intelligence and communications” for enlisted personnel.
DOD recognizes the need to update and improve cyber-related joint doctrine. According to DOD, joint doctrine is being revised and updated and will include refined discussion of cyber-related issues. The U.S. Joint Forces Command’s assessment of the status of cyber-related joint doctrine reported that 14 of the 16 publications that discuss cyberspace-related issues are in various stages of review or revision and that virtually all will contain additional information that is consistent with the new definitions for cyberspace and cyberspace operations. The report also states that while pending revisions to various joint publications could provide the necessary coverage of these topics, the degree of coverage is not known until the draft revisions are available for review and comment. However, until the revised doctrine publications are released, the full extent of the changes and their inclusion of cyber-related information will be unknown. While all of these efforts represent significant progress toward enhancing joint doctrine, there is no timetable for the completion of all cyber-related updates to existing joint publications. DOD is also currently debating the merits of developing a single, overarching cyber joint doctrine publication in addition to updating all existing doctrine. Separate joint doctrine publications are devoted to other major elements of operations in various “domains,” including such topics as mine warfare, amphibious operations, urban operations, operations other than war, counterdrug operations, and space operations. In 2007 the National Military Strategy for Cyberspace Operations Implementation Plan tasked a number of DOD commands and organizations with cyber-related studies, some of which evaluated cyber-related joint doctrine. There has subsequently been broad agreement within DOD about the need for improved joint doctrine.
However, not all commands agreed about the need for a separate cyber-specific doctrine publication. Table 4 provides examples of some of the conclusions and recommendations stemming from studies related to cyber joint doctrine. In May 2009, U.S. Strategic Command proposed the development of an overarching joint publication dedicated to all aspects of cyberspace operations. As the DOD command responsible for evaluating joint doctrine proposals, U.S. Joint Forces Command conducted a Front End Analysis that reviewed and analyzed the proposal to determine if a doctrinal void exists and if the proposal is appropriate for inclusion in the doctrine community. Additionally, the U.S. Joint Forces Command officials we spoke with expressed concern that developing a separate cyber joint publication might create inefficiencies and disconnects with existing related doctrine in such areas as information operations. The Front End Analysis recommended that further consideration of a separate joint doctrine publication be postponed and that U.S. Strategic Command develop a joint test publication for cyberspace operations. In September 2009, the Joint Staff approved the development of the cyberspace operations joint test publication. A joint test publication is a proposed version of joint doctrine that normally contains contentious issues. After the test publication is developed, it will be evaluated through U.S. Joint Forces Command, resulting in one of the following recommendations: (1) that DOD convert the cyber joint test publication into a joint publication; (2) that DOD incorporate the joint test publication or portions of it into existing joint publications; or (3) that DOD determine that the cyber joint test publication is not sufficient and discontinue work on it with no effect on joint doctrine. A test publication is not considered approved doctrine.
The Joint Staff established a milestone of June 2010 for completion of the draft test publication. The Joint Staff told us it expects evaluation of the test publication to take 6 to 12 months. However, DOD has not determined a completion date for the evaluation or for the final decision on the joint test publication as part of the test publication development plan. Regardless of whether DOD proceeds with developing a separate joint doctrine publication, completion of its effort to update existing doctrine is crucial to further improve the understanding of key cyber-related terms and operational issues throughout DOD. According to DOD’s principal guidance for joint doctrine development, joint doctrine must evolve as the United States strives to meet national security challenges and as doctrinal voids are identified. Providing a baseline of common definitions and operational constructs for cyber operations in existing doctrine or in a separate overarching publication would provide the basis for future adaptation. DOD’s well-established joint doctrine development process provides a sound structure to assess all aspects of cyber operations, propose doctrinal change or creation, and establish clear time frames for completing interim and final efforts. The lack of a time frame for cyber doctrine makes it difficult for DOD to plan for additional efforts that rely on doctrine and may permit delay while service and joint officials continue to debate the possible future of cyber operations rather than concentrate on establishing a solid basis upon which future efforts can be built. DOD has assigned authorities and responsibilities for implementing cyberspace operations among combatant commands, military services, and defense agencies. However, the supporting relationships necessary to achieve command and control of cyberspace operations remain unclear. In response to a major computer infection in 2008, U.S.
Strategic Command identified confusion regarding command and control authorities and chains of command because the exploited network fell under the purview of both its own command and a geographic combatant command. DOD-commissioned studies have recommended command and control improvements. Lines of command and control of cyber forces are divided among U.S. Strategic Command, the geographic combatant commands, and the military services, through several policy and guidance documents. The National Military Strategy for Cyberspace Operations, the 2008 Unified Command Plan, DOD Directive O-8530.1, and the Standing Rules of Engagement are all relevant to command and control of cyberspace operations, but they sometimes conflict with each other and remain unclear because of overlapping responsibilities. The National Military Strategy for Cyberspace Operations, issued in December 2006, demonstrates DOD’s recognition that clear command and control relationships are necessary for the successful application of military power in cyberspace. The purpose of this strategy is to establish a common understanding of cyberspace and set forth a military strategic framework that orients and focuses DOD action in the areas of military, intelligence, and business operations in and through cyberspace. According to the strategy, the United States can achieve superiority in cyberspace only if command relationships are clearly defined and executed, and must support unity of effort in achieving combatant commanders’ missions as well as maintaining freedom of action in cyberspace. The strategy also states that cyberspace provides the foundation for command and control of military operations in other domains and that, due to the nature of cyberspace, command and control requires extremely short decision-making cycles. 
According to the strategy, effective command and control integrates, deconflicts, and synchronizes cyberspace operations at the speeds required for achieving awareness and generating effects, while failure to establish an integrated structure can hinder collaboration and lengthen decision-making cycles. The 2008 Unified Command Plan gave specific responsibilities for synchronizing planning for cyberspace operations to U.S. Strategic Command, including directing global information grid operations and defense, planning against designated cyberspace threats, coordinating with other combatant commands and U.S. government agencies, and executing cyberspace operations. The Unified Command Plan also states that, unless otherwise directed, combatant commanders will exercise command authority over all commands and forces assigned to them, in accordance with section 164 of title 10 of the U.S. Code. However, while individual service networks may reside within the area of responsibility of a particular geographic combatant command, that geographic commander does not possess the authority to direct the network operations of his component organizations, because those component networks are owned and directed by their respective service organizations through their role as Computer Network Defense Service Providers (defined within DOD Directive O-8530). This establishes a conflicting situation that affects the geographic combatant commanders’ visibility over networks in their areas of responsibility. Also, the Standing Rules of Engagement state that unit commanders always retain the inherent right and obligation to exercise unit self-defense in response to a hostile act or demonstrated hostile intent. This generally extends to commanders conducting information operations and includes the authorization to conduct protective, defensive, and restorative measures for the networks they control in response to all unauthorized network activity.
However, when defensive measures would have potentially adverse effects across multiple DOD networks or on adversary or intermediary networks outside the DOD global information grid, they must be approved by the Commander of U.S. Strategic Command, under his responsibility for DOD-wide network operations, and coordinated with affected components and appropriate law enforcement or intelligence organizations. An incident of malware infection on DOD systems in 2008 illustrated that a lack of operational clarity significantly slowed down DOD’s response. As a result of this malware eradication effort, U.S. Strategic Command identified confusion regarding the exploited networks. This led to uncoordinated, conflicting, and unsynchronized guidance being issued in several forms via multiple channels in response to the incident. Our review confirmed that multiple directives contributed to confusion at the execution level, leaving operators and administrators to reconcile priorities and question which procedures were appropriate and most urgent to address the malware infection. Although DOD intends for the new U.S. Cyber Command to facilitate command and control, as late as December 2009 DOD noted that these problems had not been fully addressed, even though the command is expected to be established by October 2010. Without complete and clearly articulated guidance on cyber command and control responsibilities that is well-communicated and practiced with key stakeholders, DOD may have difficulty in building unity of effort for carrying out cyber operations. DOD has recognized the need for improvements in its command and control organization for cyberspace operations and commissioned associated studies by U.S. Joint Forces Command and the Institute for Defense Analyses. Both classified studies evaluated DOD’s command and control organization and recommended improvements in 2008.
DOD has started to act on these recommendations by initiating key organizational changes, such as establishing the U.S. Cyber Command. However, until DOD updates its policies and guidance to clarify command and control relationships for cyber operations and clearly communicates those to all DOD entities, its efforts to conduct coordinated and timely actions to defend DOD's critical networks and other cyber operations will be degraded. DOD has identified some cyberspace capability gaps and continues to study their extent. However, it has not completed a comprehensive, departmentwide assessment of the resources needed to address these gaps or an implementation plan for closing them. According to the 2006 National Military Strategy for Cyberspace Operations, military departments and certain agencies and commands should develop the capabilities necessary to conduct cyberspace operations, including consistently trained personnel, infrastructure, and organization structures. U.S. Strategic Command's Operational Concept for Cyberspace reported in 2008 that national security vulnerabilities inherent in cyberspace make it imperative that the United States develop the requisite capabilities, policy, and tactics, techniques, and procedures for employing offensive, defensive, and supporting operations to ensure freedom of action in cyberspace. In addition, a study commissioned by the Joint Staff and conducted by the Institute for Defense Analyses states that the key underlying drivers of effectiveness in cyberspace are developing and deploying the right tools and building and sustaining an adequate cyber force of trained and certified people. Institute for Defense Analyses officials stated that unless DOD has adequate resources for cyber operations, organizational changes within the cyber domain will not be effective. DOD commands have identified capability gaps that hinder their ability to marshal resources to operate in the cyberspace domain.
U.S. Strategic Command and other combatant commands highlighted their cyber capability gaps in their Integrated Priority Lists for fiscal years 2011-2015. U.S. Strategic Command, which is tasked with being the global synchronizer for cyber operations within DOD, identified in its Integrated Priority List for fiscal years 2011-2015 gaps and associated priorities in such areas as the need to be able to defend against known threats, detect or characterize evolving threats, and conduct exploitation and counter operations, as desired. U.S. Strategic Command listed cyber-related gaps as its highest priority, emphasizing the need for and importance of resources to increase cyber capabilities. U.S. Pacific Command, U.S. Special Operations Command, and U.S. Joint Forces Command have also reported cyber capability gaps involving lack of sufficient numbers of trained personnel to support their cyber operations and a need for additional cyber intelligence capabilities. U.S. Strategic Command has reported that the lack of cyber resources it identified has affected the command's ability to respond to requests for cyber capabilities from other combatant commands, particularly for full-spectrum cyberspace operations. It remains to be seen what effect the newly proposed U.S. Cyber Command will have on this process, particularly with the Joint Functional Component Command for Network Warfare and Joint Task Force–Global Network Operations being merged into one organization within the new U.S. Cyber Command. A need for more cyber planners and cyber-focused intelligence analysts was a common theme during our meetings with officials at the combatant commands. Officials at several of the geographic combatant commands stated that without the proper planners and cyber-focused intelligence analysts, they lacked situational awareness of their networks and the ability to both plan cyber operations for their respective commands and request applicable support from U.S. Strategic Command.
For example, cyber planners play a key part in the development of a computer network attack operation. U.S. Central Command officials stated that although most computer network attack operations are being conducted in its area of responsibility, the command does not have a single full-time dedicated cyber planner to assist in the development of such operations. Because it lacks appropriately trained personnel and a dedicated career path, U.S. Central Command has redirected personnel with cyber expertise to act as temporary planners. This has greatly affected the command's ability to match resources to, and plan for, all cyber-related functions. For example, a cyber planner within U.S. Central Command was borrowed from another career field, worked as a planner for a time, and then was reassigned to help resolve information technology issues at a help desk. Without a sufficient number of cyber planners in-theater, combatant commands will continue to struggle to plan cyber activities that help accomplish the commander's mission objectives and to communicate their need for assistance to U.S. Strategic Command. The lack of skilled and highly trained cyber personnel presents challenges for many DOD components, and the lack of sufficient personnel prevents DOD components from fulfilling essential computer network operation activities. DOD's Joint Capabilities Integration and Development System provides a framework from which DOD can assess and prioritize departmentwide cyber-related capability gaps, assign responsibility for addressing them, and develop an implementation plan for achieving and tracking results. This system is DOD's primary means of identifying the capabilities required to support national strategies. It therefore helps the military services prepare long-term program plans to address critical joint capabilities.
One of the key elements of this system is a capabilities-based assessment that defines a mission, identifies required capabilities, identifies gaps, assesses risk associated with those gaps, prioritizes gaps, assesses nonmateriel solutions, and recommends actions for the department to pursue. While the department's review of cyberspace capability gaps and various studies on cyberspace operations are steps in the right direction, it remains unclear whether these gaps will be addressed, since DOD has not conducted the kind of comprehensive capabilities-based assessment outlined in the Joint Capabilities Integration and Development System or established an implementation plan to resolve any resulting gaps. For example, DOD conducted an assessment of computer network defense and computer network attack capability gaps in 2004 that highlighted the need for a broader effort to address gaps as part of the Joint Capabilities Integration and Development System. However, this assessment was not finalized for action. DOD has since conducted individual cyber-related studies focused on the lack of trained cyber personnel and also brought attention to cyber-related capability gaps listed in the combatant commanders' fiscal year 2011-2015 Integrated Priority Lists. In February 2009, the Joint Staff directed the Force Support Functional Capabilities Board to address future cyberspace force manning and organization gaps and to develop a current baseline manpower posture across cyberspace operations and present a consolidated view of all documented DOD cyberspace manpower requirements. The Force Support Functional Capabilities Board put together a Cyberspace Study Team to engage the combatant commands, services, and agencies in their efforts.
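The capabilities-based assessment steps described above (identify required capabilities, identify gaps, assess risk, and prioritize) can be illustrated with a minimal sketch. The gap names, the 1-5 likelihood and impact scales, and the risk scores below are hypothetical and are not drawn from DOD data or the Joint Capabilities Integration and Development System itself; this shows only the general prioritization idea.

```python
from dataclasses import dataclass

@dataclass
class CapabilityGap:
    name: str
    likelihood: int   # 1 (rare) .. 5 (near-certain); hypothetical scale
    impact: int       # 1 (minor) .. 5 (mission failure); hypothetical scale

    @property
    def risk(self) -> int:
        # Simple risk matrix: likelihood x impact
        return self.likelihood * self.impact

def prioritize(gaps):
    """Return gaps ordered highest-risk first, as an assessment would prioritize them."""
    return sorted(gaps, key=lambda g: g.risk, reverse=True)

# Hypothetical example gaps, echoing themes the report raises
gaps = [
    CapabilityGap("trained cyber planners", likelihood=5, impact=4),
    CapabilityGap("cyber-focused intelligence analysts", likelihood=4, impact=4),
    CapabilityGap("threat detection tooling", likelihood=3, impact=5),
]

for g in prioritize(gaps):
    print(f"{g.name}: risk={g.risk}")
```

The output of an assessment like this is only a ranked list; a full capabilities-based assessment also evaluates nonmateriel solutions and recommends actions, steps that have no meaningful code analogue.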
In addition to the cyberspace studies discussed above, and as part of DOD's Joint Capabilities Integration and Development System, the Joint Requirements Oversight Council issued a memorandum in June 2009 (JROCM 113-09) that reviewed and endorsed 85 capability gaps across DOD from the combatant commands' reported Integrated Priority Lists—4 of which were cyber-related. Throughout the Joint Capabilities Integration and Development System process, functional capabilities boards provide oversight and assessment, as appropriate, to ensure system documents take into account joint capabilities and alternative approaches to solutions. In this case, the memorandum stated that Functional Capabilities Boards will track the recommendations related to the capability gaps. The Functional Capabilities Boards periodically report on the way ahead for recommended actions and report recommendations to the Joint Requirements Oversight Council for decision. The Joint Requirements Oversight Council's approval and implementation in a Joint Requirements Oversight Council Memorandum serves as the analytic underpinning for many future decisions related to capability gaps. However, capability gaps are considered "closed" based on the Joint Requirements Oversight Council's decisions and the assumption that those decisions will be implemented. Failure to execute the Joint Requirements Oversight Council's decision is not considered a capabilities gap assessment issue, although it may generate an input for the next capabilities gap assessment cycle. DOD has continued to make progress with respect to some of the individual capability gaps identified from the Integrated Priority Lists for fiscal years 2011-2015. The memorandum also requested that U.S. Strategic Command lead the joint effort to create a concept of operations to inform future decisions, but it provided no specific time frame for these actions.
Joint Staff officials we interviewed recognized that fully addressing the cyber capability gaps identified thus far may take years. Because some cyber capability gaps are relatively new, the Joint Requirements Oversight Council has deferred decisions on them until manpower studies are completed, so that informed decisions can be made later. For example, Joint Staff officials noted that some cyber-related resource requests involving computer network operations from U.S. Pacific Command could not be addressed immediately because of the lack of existing doctrine or policy on the appropriate authority to carry out the specific action. While the Joint Staff's action to direct the Functional Capabilities Boards to track progress toward addressing capability gaps is a step in the right direction for developing a plan to address capability gaps, it remains unclear whether or when these gaps will be addressed. For example, as of December 2009, the Joint Staff listed all the cyber-related capability gaps noted by Joint Requirements Oversight Council Memorandum 148-09 as closed; but for several of the gaps, the memorandum cited only the manpower study discussed above as rationale. Furthermore, the Joint Staff is also currently reviewing the most recent Integrated Priority Lists from the combatant commands for fiscal years 2012-2017, in which some previously cited cyber capability gaps were repeated. Though DOD has previously begun efforts similar to a comprehensive capabilities-based assessment for cyberspace, it has not completed those efforts. The studies we discuss and ongoing efforts, such as the individual Functional Capabilities Board actions, provide much-needed information to DOD officials about where further action may be needed. But these efforts lack the scope of a complete capabilities-based assessment and do not include time frames or a funding strategy for addressing capability gaps.
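The distinction drawn above, that a gap is marked "closed" once the Joint Requirements Oversight Council decides on it even if the decision is never executed, can be sketched as a simple state model. The gap name, decision text, and two-state status here are illustrative only; they are not DOD's actual tracking schema.

```python
from enum import Enum

class GapStatus(Enum):
    OPEN = "open"
    CLOSED = "closed"   # the council has decided; implementation is assumed

class TrackedGap:
    def __init__(self, name):
        self.name = name
        self.status = GapStatus.OPEN
        self.decision = None
        self.implemented = False

    def jroc_decide(self, decision):
        # A council decision closes the gap regardless of execution.
        self.decision = decision
        self.status = GapStatus.CLOSED

    def next_cycle(self):
        # An unexecuted decision does not reopen the gap automatically;
        # the combatant command must report the same gap again next cycle.
        if self.status is GapStatus.CLOSED and not self.implemented:
            return TrackedGap(self.name)
        return None

gap = TrackedGap("cyber planners")                    # hypothetical gap
gap.jroc_decide("defer pending manpower study")
resubmitted = gap.next_cycle()
print(resubmitted.name, resubmitted.status.value)     # the same gap reappears as open
```

The sketch makes concrete why combatant commands can keep reporting identical gaps year after year: closure is a property of the decision, not of the fix.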
Further, in prior work, we found that best practices for strategic planning have shown that effective and efficient operations require detailed plans outlining major implementation tasks, defined metrics and timelines to measure progress, a comprehensive and realistic funding strategy, and communication of key information to decision makers. Absent such elements as a broad assessment of cyber-related capability gaps, time frames for assessing and addressing gaps, and a strategy for funding any required programs, combatant commands are compelled to report the same capability gaps they had in previous years without an assurance that they will be addressed; and the military services will be unable to fully plan for programs to address cyberspace requirements. As a result, cyber capability gaps across DOD will continue to hinder DOD’s ability to plan for and conduct effective cyber operations. DOD has been characterized as one of the best-prepared federal agencies to defend against cybersecurity threats, but keeping pace with the magnitude of cybersecurity threats DOD faces currently and will face in the future is a daunting prospect. DOD networks and our country’s critical infrastructure can be disrupted, compromised, or damaged by a relatively unsophisticated adversary and, as witnessed by the 2008 infections from removable media, this can potentially affect the conduct of military operations. The U.S. military is dominant in the land domain, unchallenged in the air, and has few near-peers in the maritime domain. However, the technical and economic barriers to entry into the cyber domain are much lower for adversaries and as a result place U.S. networks at great risk. DOD has taken many important steps to better organize its cyber efforts with the creation of the U.S. Cyber Command, but it is too early to tell whether this will provide the necessary leadership and guidance DOD requires to address cybersecurity threats. 
Based on public statements from DOD senior leadership, DOD understands the severity of the problem. DOD's actions to reassess its organization for cyber-related operations, assess and update joint doctrine, assess command and control relationships, and study cyber-related capability gaps all take advantage of DOD's considerable planning and operational experience. The next step to keep pace with or stay ahead of the rapidly changing environment of the cyber domain is for DOD to pursue each of these areas in a more comprehensive manner and as part of a cohesive policy. To strengthen DOD's cyberspace doctrine and operations to better address cybersecurity threats, we recommend that the Secretary of Defense take the following two actions: direct the Chairman of the Joint Chiefs of Staff, in consultation with the Under Secretary of Defense for Policy and U.S. Strategic Command, to establish a time frame for (1) deciding whether or not to proceed with a dedicated joint doctrine publication on cyberspace operations and (2) updating the existing body of joint doctrine to include complete cyberspace-related definitions; and direct the appropriate officials in the Office of the Secretary of Defense, in coordination with the Under Secretary of Defense for Policy and the Joint Staff, to clarify DOD guidance on command and control relationships between U.S. Strategic Command, the services, and the geographic combatant commands regarding cyberspace operations, and establish a time frame for issuing the clarified guidance.
To ensure that DOD takes a more comprehensive approach to its cyberspace capability needs and that capability gaps are prioritized and addressed, we make two additional recommendations: that the Secretary of Defense direct the appropriate Office of the Secretary of Defense officials, in coordination with the secretaries of the military departments and the Joint Chiefs of Staff, to develop a comprehensive capabilities-based assessment of the departmentwide cyberspace-related mission and a time frame for its completion, and to develop an implementation plan and funding strategy for addressing any gaps resulting from the assessment that require new capability development or modifications to existing programs. In written comments on a draft of this report, DOD agreed with our four recommendations and discussed some of the steps it is taking and planning to take to address them. DOD also provided technical comments, which we have incorporated into the report where appropriate. In response to our recommendation that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff, in consultation with the Under Secretary of Defense for Policy and U.S. Strategic Command, to establish a time frame for deciding whether or not to proceed with a dedicated joint doctrine publication on cyberspace operations and for updating the existing body of joint doctrine to include complete cyberspace-related definitions, DOD agreed and stated that as part of implementing the National Military Strategy for Cyberspace Operations, an assessment of joint doctrine is under way and is expected to be completed by the end of fiscal year 2011. Furthermore, DOD said that this process will also include related cyber lexicon and definitions. While our report was in final processing, DOD began to publish some of the doctrinal updates it had agreed needed to be made.
Since the National Military Strategy for Cyberspace Operations was published in 2006, we believe the new joint doctrine assessment represents progress that should help DOD address some of the existing gaps in joint doctrine with a time frame for completing the effort. We continue to believe that DOD’s overall assessment should include a decision on whether or not to proceed with a dedicated joint doctrine publication on cyberspace operations and a plan for updating the existing body of joint doctrine. DOD agreed with our recommendation that it clarify roles and responsibilities, including command and control relationships between the U.S. Strategic Command, the services, and the geographic combatant commands regarding cyberspace operations, and establish a time frame for issuing the clarified guidance. However, DOD stated it had already satisfied this recommendation by means of the June 23, 2009, memorandum establishing U.S. Cyber Command and the 2008 Unified Command Plan. According to DOD, both documents have promulgated clear guidance for command and control relationships. The Secretary of Defense memorandum establishing the U.S. Cyber Command does allude to the U.S. Cyber Command implementation plan, which does contain some information on command and control relationships, but does not provide the kind of clear guidance we describe as lacking in our report. The implementation plan further alludes to a U.S. Cyber Command Concept of Operations that will be published at a later date, which may provide further information on command and control guidance. While the 2008 Unified Command Plan discusses missions and responsibilities for U.S. Strategic Command in cyberspace operations, we believe this information is outdated, considering the memo directing the establishment of U.S. Cyber Command was issued in June 2009. Although it is early in the establishment process for the new U.S. 
Cyber Command, we continue to believe that DOD should take advantage of opportunities to develop and articulate clear command and control guidance that will provide a timely and cohesive approach to combating cyber threats throughout the chain of functional and geographic combatant commands, the services, and other DOD components in anticipation of the U.S. Cyber Command reaching full operating capability in October 2010. Vehicles for conveying this guidance might include the U.S. Cyber Command Concept of Operations, additional implementation plans, and revisions to the Unified Command Plan. DOD agreed with our recommendation that the Secretary of Defense direct the appropriate Office of the Secretary of Defense officials, in coordination with the Secretaries of the military departments and the Joint Chiefs of Staff, to develop a comprehensive capabilities-based assessment of the departmentwide cyberspace-related mission and a time frame for its completion. DOD indicated that cyber defense would be one focus area for risk management decisions as part of the upcoming budget cycle but provided no further information on how it planned to implement the steps in the recommendation. We recognize that fully addressing DOD's cyber capability gaps will take years; however, we maintain that it is important to assess these gaps and to establish a time frame for addressing them. DOD agreed with our recommendation that the Secretary of Defense direct the appropriate Office of the Secretary of Defense officials, in coordination with the Secretaries of the military departments and the Joint Chiefs of Staff, to develop an implementation plan and funding strategy for addressing any gaps resulting from the assessment that require new capability development or modifications to existing programs.
DOD stated that its budget risk management decisions, as well as the development of a National Defense Strategy for Cyberspace Operations, would help the department identify and mitigate gaps, but it provided no further information on how it planned to implement the steps identified in the recommendation. We continue to believe it is important to develop an implementation plan and funding strategy for addressing these gaps, so that combatant commands do not keep reporting the same capability gaps year after year without assurance that they will be addressed, and so that the military services are able to fully plan for programs to address cyberspace requirements. Without this effort, cyber capability gaps across DOD will continue to hinder its ability to plan for and conduct effective cyber operations. DOD's comments are reproduced in full in appendix V. We are sending copies of this report to appropriate congressional committees. We are also sending copies to the Secretary of Defense and the Chairman, Joint Chiefs of Staff. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Davi M. D'Agostino at (202) 512-5431 or Gregory C. Wilshusen at (202) 512-6244. We can also be reached by e-mail at dagostinod@gao.gov or wilshuseng@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. To address our objectives, we focused our work on the Department of Defense's (DOD) organizations that are involved in computer network operations, including computer network defense, exploitation, and computer network attack. We reviewed a variety of unclassified and classified documents to understand the organization and challenges the department faces in addressing cyberspace operations.
External organizations we consulted included the SANS Institute and the Carnegie Mellon CERT Coordination Center (CERT/CC). We reviewed policies, guidance, and directives involving organizations related to computer network operations. Also, we reviewed documents involving the reorganization and development of new organizations within the Office of the Secretary of Defense, U.S. Strategic Command, Air Force, and Navy to address cyber threats. To determine the extent to which DOD has developed an overarching joint doctrine that addresses cyberspace operations across DOD, we reviewed and analyzed current joint doctrine publications, such as Joint Publication 3-13, Information Operations, and other publications involving computer network operations for key definitions. Also, we reviewed U.S. Joint Forces Command's analysis of cyber-related joint doctrine and U.S. Strategic Command's current efforts to develop joint doctrine. In addition, we interviewed Joint Staff, U.S. Strategic Command, and U.S. Joint Forces Command officials regarding current department efforts to develop joint doctrine on cyberspace. We compared existing joint doctrine efforts and plans with the guidance in DOD's joint doctrine development process. To assess the extent to which DOD has assigned command and control responsibilities, we reviewed the 2008 Unified Command Plan, Standing Rules of Engagement, and other DOD plans, policies, and guidance to determine authorities for functional and geographic combatant commands, military services, and defense agencies. Additionally, we reviewed and identified lessons learned from combatant commands following DOD's response to malware infections during Operation Buckshot Yankee in 2008. In addition, we interviewed service and command officials directly involved with Operation Buckshot Yankee to discuss their challenges. We also reviewed recommendations on command and control from the Institute for Defense Analyses and U.S. Joint Forces Command and met with officials from these organizations to discuss analysis involving this area.
To determine capability gaps involving computer network operations we analyzed the fiscal year 2010 and 2011-2015 Integrated Priority Lists to identify cyberspace capability gaps for the functional and geographic combatant commands. Also we analyzed the National Intelligence Estimate regarding The Global Cyber Threat to the U.S. Information Infrastructure, the Central Intelligence Agency’s Cyber Threat Intelligence Highlights, and prior GAO reports on cybersecurity to determine the depth of cyber threats facing the nation and DOD. We also interviewed various functional and geographic combatant command officials to identify capability gaps and resources needed to address these gaps. In addition, we met with Joint Staff officials to discuss their efforts to address capability gaps listed in the Integrated Priority Lists, including developing studies on manpower shortages and providing funding to computer network defense efforts. We reviewed DOD cyber-related capability assessments and compared them with DOD criteria for capabilities-based assessments as part of DOD’s Joint Capabilities Integration and Development System. We conducted this performance audit from November 2008 through April 2010 in accordance with generally accepted government auditing standards and worked with DOD from November 2010 to July 2011 to prepare an unclassified version of this report for public release. Government auditing standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following are examples of Department of Defense (DOD) offices and organizations with cyber-related roles and responsibilities. 
Table 5 shows certain cyber-related roles and responsibilities for various offices within the Office of the Secretary of Defense. Table 6 shows certain cyber-related roles and responsibilities for various Joint Staff offices. Table 7 shows certain cyber-related coordination forums. Table 8 shows certain cyber-related roles and responsibilities of U.S. Strategic Command. Table 9 shows certain cyber-related roles and responsibilities of Combatant Command Theater Network Centers and Theater and Global Network Operation Centers. Table 10 shows the cyber-related roles and responsibilities of the services' Network Operations Centers and Computer Emergency/Incident Response Teams. Table 11 shows the military services' cyber organization as of January 2009. Table 12 shows some of the cyber-related roles and responsibilities of the intelligence agencies. Table 13 shows certain cyber-related roles and responsibilities of defense criminal investigative–related organizations. The Secretary of Defense directed the Office of the Under Secretary of Defense for Policy to lead a review of policy and strategy to develop a comprehensive approach to DOD cyberspace operations. As a result of this review and a separate review of DOD cyberspace policy conducted under the National Military Strategy for Cyberspace Operations Implementation Plan, the Office of the Under Secretary of Defense for Policy found that DOD required new and updated cyberspace policies to guide the integration of cyberspace operations and that the existing policies were too focused on the individual pieces of cyberspace operations. Table 14 shows how the military services are supporting or plan to support the U.S. Cyber Command. The Department of Defense (DOD) defines computer network defense as actions taken to protect, monitor, analyze, detect, and respond to unauthorized activity within DOD information systems and computer networks.
Computer network defense employs information assurance capabilities to respond to unauthorized activity within DOD information systems and computer networks, triggered by a computer network defense alert or threat information. Currently, DOD's cyberspace defensive measures include Intrusion Detection Systems that alert network operators to the signatures of an incoming attack or can kill the network traffic. Strong firewall settings reduce exposure to the outside world on the NIPR Network and block incoming traffic from origins known to launch attacks. In a symmetric network configuration, both inbound and outbound traffic can be examined or blocked, so that most trivial attacks are stopped at the NIPR Network borders. Several metrics are used to measure information assurance performance. These include completing the correct certification and accreditation documentation and complying with DOD directives, reporting this information under the Federal Information Security Management Act, vulnerability scanning, red and blue team testing, Defense Information Systems Agency evaluations performed on various networks, and other efforts. Below are several examples of policies, programs, and tools that DOD uses to protect its networks. DOD Directive O-8530.1, and its supporting document DOD Instruction O-8530.2, directed the heads of all DOD components to establish component-level computer network defense services to coordinate and direct all componentwide computer network defense and ensure certification and accreditation in accordance with established DOD requirements and procedures.
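The boundary defenses described above, intrusion-detection signatures plus firewall blocking of origins known to launch attacks, can be sketched as a minimal packet filter. The address ranges (documentation/test networks) and payload signatures below are invented for illustration and bear no relation to actual DOD blocklists or IDS rule sets.

```python
import ipaddress

# Hypothetical blocklist of origins known to launch attacks
BLOCKED_NETS = [ipaddress.ip_network("203.0.113.0/24")]  # TEST-NET-3, illustrative only

# Hypothetical attack signatures an IDS might match against payloads
SIGNATURES = [b"<script>", b"' OR 1=1"]

def inspect(src_ip: str, payload: bytes) -> str:
    """Return 'blocked', 'alert', or 'pass' for one inbound packet."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in net for net in BLOCKED_NETS):
        return "blocked"            # firewall drops known-bad origins outright
    if any(sig in payload for sig in SIGNATURES):
        return "alert"              # IDS flags (or kills) signature-matching traffic
    return "pass"

print(inspect("203.0.113.7", b"GET / HTTP/1.1"))   # blocked
print(inspect("198.51.100.9", b"q=' OR 1=1"))      # alert
print(inspect("198.51.100.9", b"GET / HTTP/1.1"))  # pass
```

The ordering matters: origin blocking is cheaper than payload inspection, which is why the report notes that strong firewall settings stop most trivial attacks at the network border before deeper examination is needed.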
Computer network defense service is provided or subscribed to by owners of DOD information systems or computer networks, or both, in order to maintain and provide computer network defense situational awareness, implement computer network defense protect measures, monitor and analyze in order to detect unauthorized activity, and implement computer network defense operational direction. DOD Directive O-8530.1 also required that all component information systems and computer networks be assigned to a certified computer network defense service provider. Computer network defense service providers are the organizations responsible for delivering protection, detection, and response services to their users. Computer network defense service providers are commonly a Computer Emergency or Incident Response Team and may be associated with a Network Operations and Security Center. The goal for the program is to improve the security posture of DOD information systems and networks by ensuring that a baseline set of services is provided by computer network defense service providers. Under the oversight of the Assistant Secretary of Defense for Networks and Information Integration and U.S. Strategic Command, the Defense Information Systems Agency conducts a certification program of the computer network defense service providers to ensure that they are providing that critical baseline set of services. The Defense Information Assurance Certification and Accreditation Process was implemented by the DOD Chief Information Officer in DOD Instruction 8510.01 on November 28, 2007.
According to DOD, the Defense Information Assurance Certification and Accreditation Process is the standard DOD process for identifying, implementing, validating, certifying, and managing information assurance capabilities and services, expressed as information assurance controls, and authorizing the operation of DOD information systems, in accordance with Title III of the E-Government Act, the Federal Information Security Management Act, DODD 8500.1, DODI 8500.2, and other statutory and regulatory requirements. The Federal Information Security Management Act of 2002 requires agencies to develop and implement an information security program, evaluation processes, and annual reporting. The act requires mandated annual reports by federal agencies and the Office of Management and Budget. The act also includes a requirement for independent annual evaluations of the agencies’ information security programs and practices by the agencies’ inspectors general or independent external auditors. Host-Based Security Systems are a suite of commercial-off-the-shelf software that provides a framework and point products to protect against cyber threats both at the network and host levels, and provide system baselining to support the Information Operations Condition process. The system includes, but is not limited to, host firewall, host intrusion detection, host intrusion prevention, system compliance profiling, rogue system detection, application blocking, and Information Operations Condition baselining. DOD expects to provide network administrators and security personnel with mechanisms to prevent, detect, track, report, and remediate malicious computer-related activities and incidents across all DOD networks and information systems. The deployment of Host-Based Security Systems was initially ordered by Joint Task Force-Global Network Operations in October 2007, with deployment on unclassified systems to be completed no later than June 2008. 
Deployment of Host-Based Security Systems to classified systems was to begin in January 2008. According to U.S. Strategic Command, as of February 2010, DOD NIPR and SIPR networks were still in the process of implementing Host-Based Security Systems, with 67 percent and 48 percent implemented, respectively. The Information Assurance Vulnerability Management Program provides positive control of vulnerability notification, corresponding corrective action, and Information Assurance Vulnerability Alert status visibility for DOD network assets. It focuses on the status of DOD networks to mitigate or eliminate known vulnerabilities. Joint Task Force–Global Network Operations is responsible for monitoring relevant sources of information to discover security conditions that may require Information Assurance Vulnerability Management vulnerability notification and for assessing the risk and potential operational effect associated with software vulnerabilities. Once a vulnerability is evaluated and warrants notification, Joint Task Force–Global Network Operations publishes an Information Assurance Vulnerability Management vulnerability notification and amplifying information as one of three products, depending on the risk level of the vulnerability: Information Assurance Vulnerability Alert (critical risk), Information Assurance Vulnerability Bulletin (medium risk), or Technical Advisory (low risk). Response to Alerts is mandatory: combatant commands, military services, and defense agencies are required to implement the directives and report back to Joint Task Force–Global Network Operations on their Information Assurance Vulnerability Alert compliance. These inspections, formerly known as the Enhanced Compliance Validation visits, are conducted by the Defense Information Systems Agency at the direction of U.S.
Strategic Command in order to provide an assessment of information assurance and compliance with DOD policies and configuration requirements of all combatant commands, military services, and DOD agencies. The Defense Information Systems Agency also uses these inspections to provide DOD component and local leadership with actionable recommendations for improving information assurance readiness. DOD officials considered these visits to be risk assessments. Inspection teams provide penetration testing and security audits for client agencies, combatant commands, installations, and military services. The inspection teams use a holistic approach that evaluates more than computer hardware and software, taking in personnel procedures and policies as well as the physical security of equipment and locations. According to Defense Information Systems Agency officials, the Defense Information Systems Agency and Joint Task Force–Global Network Operations scan DOD networks. Combatant commands, military services, and defense agencies are also responsible for scanning the local systems that they administer. The Defense Information Systems Agency scans systems prior to their connection to DOD networks and at regularly scheduled intervals thereafter. Additionally, Joint Task Force–Global Network Operations has directed all combatant commands, military services, and defense agencies to scan their networked devices on a regular basis. Joint Task Force–Global Network Operations has developed its NetOps Scorecard as a process for displaying NetOps compliance and readiness status for the entire DOD community. This quarterly review has been in effect for the military services since August 2007, and was expanded to cover all combatant commands, military services, and DOD agencies in February 2009.
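The Alert/Bulletin/Advisory triage described earlier maps an assessed risk level to a notification product and to whether a compliance response is mandatory. A minimal sketch, where the function and its return shape are our own illustration and not DOD software:

```python
# Illustrative sketch only: the product names and the "Alerts are mandatory"
# rule come from the report; this function is hypothetical, not DOD code.

def iavm_product(risk_level):
    """Map an assessed risk level to the JTF-GNO notification product
    and a flag for whether compliance reporting is mandatory."""
    products = {
        "critical": ("Information Assurance Vulnerability Alert", True),
        "medium": ("Information Assurance Vulnerability Bulletin", False),
        "low": ("Technical Advisory", False),
    }
    if risk_level.lower() not in products:
        raise ValueError("unknown risk level: %r" % risk_level)
    return products[risk_level.lower()]

product, mandatory = iavm_product("critical")
# prints: Information Assurance Vulnerability Alert (mandatory response)
print(product, "(mandatory response)" if mandatory else "(advisory)")
```

Only Alerts carry the mandatory-response flag here, mirroring the report's statement that components must implement Alert directives and report compliance back to Joint Task Force–Global Network Operations.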
The Scorecard measures compliance with NetOps directives (such as communications tasking orders, Information Operations Conditions, and fragmentary orders), authority to operate, Information Assurance Vulnerability Alert compliance, and the status of inspections. U.S. European Command has developed its own Cyber Defense Playbook, intended to standardize theater policy, tactics, and procedures related to computer network defense efforts and to improve command and control relationships to ensure and maintain cyber/network readiness and coordinated responses to computer network defense events. The Playbook was developed by a working group from across the theater with participation from U.S. European Command, U.S. Army Europe, U.S. Air Force Europe, Special Operations Command Europe, U.S. Navy Europe, and the Defense Information Systems Agency. It incorporates information and best practices from the organizations listed above as well as from the Joint Functional Component Command for Network Warfare and Joint Staff guidance. It includes baseline computer network defense triggers; reporting and response timelines; checklists; tactics, techniques, and procedures for computer network defense-related events; and basic computer network defense reference materials. The Playbook also includes contingency options for personnel to use should their recommended computer network defense tools be unavailable. According to officials from the DOD Cyber Crime Center, the Defense Industrial Base Collaborative Information Sharing Environment is an Office of the Secretary of Defense–initiated effort to generate more transparency about and share network security information among DOD’s private sector contractors. The Defense Industrial Base Collaborative Information Sharing Environment is run by the DOD Cyber Crime Center, and 28 Defense Industrial Base partners had voluntarily agreed to share information through the program as of March 2009.
The 28 Defense Industrial Base partners are all major contractors and are responsible for approximately 90 percent of the information across the Defense Industrial Base. The information shared in the Defense Industrial Base Collaborative Information Sharing Environment is anonymized because the Defense Industrial Base partners are concerned about public disclosure. They believe that if their shareholders and competitors learn that a Defense Industrial Base partner’s networks have been attacked, it could affect earnings and the ability to win contracts in the future. The Defense Advanced Research Projects Agency is in the process of developing a National Cyber Range that will provide a test bed to produce qualitative and quantitative assessments of the security of various cyber technologies and scenarios. This effort is expected to provide a safe, instrumented environment for national cyber security research organizations to test the security of information systems. Several private, commercial, and academic institutions will develop the initial phase of the National Cyber Range. At the conclusion of the initial phase, the Defense Advanced Research Projects Agency will make decisions regarding future plans, which notionally could include a second phase with a critical design review, and a third phase to develop the full-scale National Cyber Range and start conducting tests. According to DOD officials, DOD mandates specific configuration settings for all prevalent technologies in the Global Information Grid through the use of Security Technical Implementation Guides and associated checklists. These Security Technical Implementation Guides are developed by the Defense Information Systems Agency in full collaboration with military services, agencies, and selected combatant commands. According to DOD officials, the Security Technical Implementation Guides are updated periodically to keep pace with documented emerging threats and changes to technology.
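Checking a system against a checklist of mandated configuration settings, as the Security Technical Implementation Guides require, reduces to comparing actual settings with required ones. A hedged sketch in which the setting names and required values are invented for illustration and not taken from any actual guide:

```python
# Hypothetical checklist entries; real Security Technical Implementation
# Guides are far more detailed and often specify ranges, not exact values.

REQUIRED_SETTINGS = {
    "password_min_length": 15,
    "account_lockout_threshold": 3,
    "telnet_enabled": False,
}

def compliance_findings(actual_settings):
    """Return one finding string per required setting that is missing
    or does not match the checklist value."""
    findings = []
    for name, required in REQUIRED_SETTINGS.items():
        actual = actual_settings.get(name, "<not set>")
        if actual != required:
            findings.append("%s: expected %r, found %r" % (name, required, actual))
    return findings

host = {"password_min_length": 8, "telnet_enabled": False}
for finding in compliance_findings(host):
    print(finding)  # two findings: short passwords, missing lockout setting
```

A checklist pass produces an empty findings list, which is the state system administrators maintain and certifiers verify.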
These Security Technical Implementation Guides are a basis for system administrators to securely maintain their systems and for certifiers and reviewers to evaluate those systems. In prior reports, we and various agency inspector general offices have made hundreds of recommendations to agencies for actions necessary to resolve prior significant control deficiencies and information security program shortfalls. For example, we recommended that federal agencies correct specific information-security deficiencies related to user identification and authentication, authorization, boundary protections, cryptography, audit and monitoring, physical security, configuration management, segregation of duties, and continuity of operations planning. We have also recommended that agencies fully implement comprehensive, agencywide information-security programs by correcting weaknesses in risk assessments, information-security policies and procedures, security planning, security training, system tests and evaluations, and remedial actions. In the past, we have also reviewed the Department of Defense’s (DOD) information-security weaknesses in various reports. For example, in 1991, we reported on foreign hackers penetrating DOD computer systems between April 1990 and May 1991, as a result of inadequate attention to computer security, such as password management and the lack of technical expertise on the part of some system administrators. In May 1996, we reported that unknown and unauthorized individuals were increasingly attacking and gaining access to highly sensitive unclassified information on DOD’s computer systems. We reported that external attacks on DOD computer systems were a serious and growing threat. According to DOD officials, attackers had stolen, modified, and destroyed both data and software. They had installed “back doors” that circumvented normal system protection and allowed attackers unauthorized future access. 
They had shut down and crashed entire systems and networks. In September 1996, we issued a report, based on detailed analyses and testing of general computer controls, that identified pervasive vulnerabilities in DOD information systems. We had found that authorized users could also exploit the same vulnerabilities that made external attacks possible to commit fraud or other improper or malicious acts. In fact, knowledgeable insiders with malicious intentions could pose a more serious threat than outsiders, since they could be more aware of system weaknesses and how to disguise inappropriate actions. Our report highlighted the lack of a comprehensive information security program and made numerous recommendations for corrective actions. In August 1999, we reported that DOD had made limited progress in correcting the general control weaknesses we reported in 1996. We also found that serious weaknesses in DOD information security continued to provide both hackers and hundreds of thousands of authorized users opportunities to modify, steal, inappropriately disclose, and destroy sensitive DOD data. As a result, numerous defense functions, including weapons and supercomputer research, logistics, finance, procurement, personnel management, military health, and payroll, have already been adversely affected by system attacks or fraud. In 2003, we reported that DOD faced many risks in its use of globally networked computer systems to perform operational missions—such as identifying and tracking enemy targets—and daily management functions, such as paying soldiers and managing supplies. Weaknesses in these systems, if present, could give hackers and other unauthorized users the opportunity to modify, steal, inappropriately disclose, and destroy sensitive military data. 
In addition, the Department of Defense Inspector General has completed annual reviews under the Federal Information Security Management Act involving a wide range of information assurance weaknesses that persist throughout DOD systems and networks. These reports have compiled information assurance vulnerabilities based on reports from Army Audit Agency, Naval Audit Service, Air Force Audit Agency, and GAO since 1991. From August 1, 2008, to July 31, 2009, the most frequently cited weaknesses were in the following information assurance areas: security policies and procedures/management oversight; access controls; configuration management; and plans of action and milestones to identify, assess, prioritize, and monitor the progress of corrective efforts for security weaknesses found in programs and systems. According to the DOD Inspector General, persistent weaknesses in information-security policies and practices continue to threaten the confidentiality, integrity, and availability of critical information and information systems used to support operations, assets, and personnel. The report also noted that without effective management oversight, DOD cannot be assured that systems are accurately reported and maintained, information systems contain reliable data, and personnel are properly trained in security policies and procedures. In addition to the contacts named above, Lorelei St. James, Joseph Kirschbaum, Nelsie Alcoser, Neil Feldman, David Holt, Jamilah Moon, Grace Coleman, Joanne Landesman, and Gregory Marchand made key contributions to this report. According to the U.S. Strategic Command, the Department of Defense (DOD) is in the midst of a global cyberspace crisis as foreign nation states and other actors, such as hackers, criminals, terrorists, and activists exploit DOD and other U.S. government computer networks to further a variety of national, ideological, and personal objectives.
This report identifies (1) how DOD is organized to address cybersecurity threats; and assesses the extent to which DOD has (2) developed joint doctrine that addresses cyberspace operations; (3) assigned command and control responsibilities; and (4) identified and taken actions to mitigate any key capability gaps involving cyberspace operations. It is an unclassified version of a previously issued classified report. GAO analyzed policies, doctrine, lessons learned, and studies from throughout DOD, commands, and the services involved with DOD's computer network operations and interviewed officials from a wide range of DOD organizations. DOD's organization to address cybersecurity threats is decentralized and spread across various offices, commands, military services, and military agencies. DOD cybersecurity roles and responsibilities are vast and include developing joint policy and guidance and operational functions to protect and defend its computer networks. DOD is taking proactive measures to better address cybersecurity threats, such as developing new organizational structures, led by the establishment of the U.S. Cyber Command, to facilitate the integration of cyberspace operations. However, it is too early to tell if these changes will help DOD better address cybersecurity threats. Several joint doctrine publications address aspects of cyberspace operations, but DOD officials acknowledge that the discussions are insufficient, and no single joint publication completely addresses cyberspace operations. While at least 16 DOD joint publications discuss cyberspace-related topics and 8 mention "cyberspace operations," none contained a sufficient discussion of cyberspace operations. DOD recognizes the need to develop and update cyber-related joint doctrine and is currently debating the merits of developing a single cyberspace operations joint doctrine publication in addition to updating all existing doctrine.
However, there is no timetable for completing the decision-making process or for updates to existing doctrine. DOD has assigned authorities and responsibilities for implementing cyberspace operations among combatant commands, military services, and defense agencies; however, the supporting relationships necessary to achieve command and control of cyberspace operations remain unclear. In response to a major computer infection, U.S. Strategic Command identified confusion regarding command and control authorities and chains of command because the exploited network fell under the purview of both its own command and a geographic combatant command. Without complete and clearly articulated guidance on command and control responsibilities that is well communicated and practiced with key stakeholders, DOD will have difficulty in achieving command and control of its cyber forces globally and in building unity of effort for carrying out cyberspace operations. DOD has identified some cyberspace capability gaps, but it has not completed a comprehensive, departmentwide assessment of needed resources, capability gaps, and an implementation plan to address any gaps. For example, U.S. Strategic Command has identified that DOD's cyber workforce is undersized and unprepared to meet the current threat, which is projected to increase significantly over time. While the department's review of some cyberspace capability gaps on cyberspace operations is a step in the right direction, it remains unclear whether these gaps will be addressed since DOD has not conducted a more comprehensive departmentwide assessment of cyber-related capability gaps or established an implementation plan or funding strategy to resolve any gaps that may be identified. 
GAO recommends that DOD (1) establish a timeframe for deciding whether to complete a separate joint cyberspace publication and for updating the existing body of joint publications, (2) clarify command and control relationships regarding cyberspace operations and establish a timeframe for issuing the clarified guidance, (3) more fully assess cyber-specific capability gaps, and (4) develop a plan and funding strategy to address them. DOD agreed with the recommendations.
FSA provides benefits through various programs of the Farm Security and Rural Investment Act of 2002. Appendix III provides a listing of USDA farm programs and payments made from 1999 through 2005. The three-entity rule applies to certain USDA payments, including direct and counter-cyclical payments; loan deficiency payments and marketing loan gains, under the Marketing Assistance Loan Program; and Conservation Reserve Program payments. The Direct and Counter-Cyclical Payments Program provides two types of payments to producers of covered commodity crops, including corn, cotton, rice, soybeans, and wheat. Direct payments (formerly known as production flexibility contract payments) are tied to a fixed payment rate for each commodity crop and do not depend on current production or current market prices. Instead, direct payments are based on the farm’s historical acreage and yields. Counter-cyclical payments provide price-dependent benefits for covered commodities whenever the effective price for the commodity is less than a pre-determined price (called the target price). Counter-cyclical payments are based on a farm’s historical acreage and yields, and are not tied to the current production of the covered commodity. The Marketing Assistance Loan Program (formerly known as the Commodity Loan Program) provides benefits to producers of covered commodity crops when market prices are low. Specifically, the federal government accepts harvested crops as collateral for interest-bearing loans (marketing assistance loans) that are due in 9 months. When market prices drop below the loan rate (the loan price per pound or bushel), the government allows farmers to repay the loan at a lower rate and retain ownership of their commodity for eventual sale. The difference between the loan rate and the lower repayment rate is called the marketing assistance loan gain.
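The marketing assistance loan gain just described is simple arithmetic: the amount by which the loan rate exceeds the lower repayment rate, applied to the quantity under loan. A sketch with invented example rates (actual loan and repayment rates are set by USDA per commodity):

```python
# Invented numbers for illustration; USDA sets actual loan and repayment
# rates per commodity.

def marketing_loan_gain(loan_rate, repayment_rate, quantity):
    """Per-unit gain is the excess of the loan rate over the (lower)
    repayment rate; it is zero when the repayment rate is not lower."""
    return max(loan_rate - repayment_rate, 0) * quantity

# 10,000 bushels under loan at a $1.95/bu loan rate, repaid at $1.70/bu:
# the farmer keeps roughly $2,500 as the marketing assistance loan gain.
gain = marketing_loan_gain(1.95, 1.70, 10_000)
```

A loan deficiency payment for a farmer who never took out a loan would be computed the same way, since by definition it equals the gain the farmer would have received with a loan.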
In lieu of repaying the loan, farmers may forfeit their crops to the government when the loan matures and keep the loan principal. In addition, farmers who do not have marketing assistance loans can receive a benefit when prices are low—the loan deficiency payment—that is equal to the marketing assistance loan gain that the farmer would have received if the farmer had a loan. Finally, farmers can purchase commodity certificates that allow them to redeem their marketing assistance loan at a lower repayment rate and immediately reclaim their commodities under loan. The difference between the loan rate and the lower repayment rate is called the commodity certificate gain. The Conservation Reserve Program provides annual rental payments and cost-share assistance to producers to help them safeguard environmentally sensitive land. Producers must contractually agree to retire their land from agricultural purposes and keep it in approved conserving uses for 10 to 15 years. Most farmers receive farm program payments directly from FSA as an individual operator. However, some farmers use legal entities to organize their farming operations to reduce their exposure to financial liabilities or estate taxes or, in some cases, to increase their potential for farm benefits. Some of the more common ways farmers organize their operations include the following: Corporations have a separate legal existence from their owners; that is, the corporation, rather than the owners, is ordinarily responsible for farm business debts and can be sued. As a result, some individuals may incorporate their farm to protect their personal assets. General partnerships are a simple arrangement of two or more partners—individuals or entities—that do business together. Partners are personally liable for their own conduct and for the conduct of those under their direct supervision, as well as for negligence, wrongful acts, and misconduct of other partners and partnership employees.
Partners are also personally liable for the partnership’s commercial obligations, such as loans or taxes. Joint ventures are two or more individuals who pool resources and share profits or losses. Joint ventures have no legal existence independent of their owners. Members in a joint venture are personally liable for the farm’s debts. Limited partnerships are an arrangement of two or more partners whose liability for partnership financial obligations is only as great as the amount of their investment. To be considered eligible for farm program payments, a limited partnership must have at least one general partner who manages the farm business and who is fully liable for partnership financial obligations. Trusts (irrevocable and revocable) are arrangements generally used in estate planning that provide for the management and distribution of property. A revocable trust may be amended by the grantor during his or her lifetime; the grantor may also serve as trustee and beneficiary. An irrevocable trust is an arrangement in which the grantor parts with ownership and control of property. Other types of entities that may qualify for farm program payments under payment limitation rules include a limited liability company—a hybrid form of a business entity with the limited liability feature of a corporation and the income tax treatment of a general partnership; a charitable organization; and a state or political subdivision. FSA is responsible for ensuring that recipients meet payment eligibility criteria and do not receive payments that exceed the established limitations. It carries out this responsibility through its headquarters office, 50 state offices, and over 2,300 field offices. IPIA requires the heads of federal agencies to annually review all programs and activities that they administer, identify those that may be susceptible to significant improper payments, and estimate and report on the annual amount of improper payments in those programs and activities.
IPIA defines an improper payment as any payment that should not have been made or that was made in an incorrect amount, including any payment to an ineligible recipient. OMB defines significant improper payments as payments in any particular program that exceed both 2.5 percent of total program payments and $10 million annually. If a program’s estimated improper payments exceed $10 million in a year, IPIA and related OMB guidance require agencies to prepare and implement a plan to reduce improper payments and report actions taken. Agencies are required to report this information, among other things, annually in their Performance and Accountability Reports. Specifically, OMB guidance requires agencies to report on (1) the causes of improper payments and corrective actions, (2) the steps the agency has undertaken to ensure that agency managers are held accountable for reducing and recovering erroneous payments, along with a realistic timetable, and (3) any statutory or regulatory barriers that may limit the agency’s corrective actions in reducing improper payments. In November 2006, we reported that federal agencies, including USDA, need to improve their reporting of improper payments under IPIA by better identifying programs susceptible to improper payments and improving statistical sampling methodologies to estimate improper payments made. While there are legitimate reasons for keeping estates open, we found that FSA field offices do not systematically determine the eligibility of all estates that have been kept open for more than 2 years, as regulations require, and when they do conduct eligibility determinations, the quality of the determinations varies. Without performing annual determinations, an essential management control, FSA cannot identify estates being kept open primarily for the purpose of receiving these payments or be assured that the payments are proper.
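OMB's significance test described above is a two-part check in which both prongs must be exceeded. A minimal sketch (the function name is ours, not OMB's):

```python
# Sketch of OMB's two-prong "significant improper payments" test as
# described in the report: must exceed BOTH 2.5 percent of total program
# payments AND $10 million annually.

def is_significant(improper_payments, total_program_payments):
    """Return True only when both significance prongs are exceeded."""
    return (improper_payments > 0.025 * total_program_payments
            and improper_payments > 10_000_000)

# $12 million of improper payments in a $1 billion program exceeds the
# $10 million prong but not the 2.5 percent prong ($25 million), so the
# program is not "significant" under this test.
print(is_significant(12_000_000, 1_000_000_000))  # False
```

Note that the separate $10 million trigger for preparing a reduction plan applies regardless of the percentage prong.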
We identified weaknesses in FSA’s eligibility determinations for 142 of the 181 estates we reviewed. In particular, FSA did not conduct any program eligibility determinations for 73, or 40 percent, of estates that required a determination from 1999 through 2005. Because FSA did not conduct the required determinations, the extent to which estates remained open for reasons other than for obtaining program payments is not known. Sixteen of these 73 estates received more than $200,000 in farm program payments and 4 received more than $500,000 during this period. In addition, 22 of the 73 estates had received no eligibility determinations during the 7-year period we reviewed, and these estates had been open and receiving payments for more than 10 years. In one case, we found that the estate has been open since 1973. The following provides examples of estates that received farm program payments but were not reviewed for eligibility by FSA: A North Dakota estate received farm program payments totaling $741,000 from 1999 through 2003, but FSA did not conduct the required determinations. An Alabama estate received payments totaling $567,000 from 1999 through 2005, but FSA did not conduct the required determinations. In this case, the estate has been open since 1981. Two estates in Georgia, open since 1989 and 1996, respectively, received payments totaling more than $330,000 each, from 1999 through 2005. Neither estate received the required determinations for any of the years we reviewed. An estate in New Mexico, open since 1991, received $320,000 from 1999 through 2005, but it did not receive any of the required determinations. According to FSA field officials, many determinations were either not done or not done thoroughly, in part because of a lack of sufficient personnel and time, as well as competing priorities for carrying out farm programs. 
However, FSA’s failure to conduct appropriate eligibility determinations means that it has no assurance that it is not making farm program payments to estates that have been kept open primarily to receive these payments. Even when FSA field offices determined estates’ eligibility for continued farm program payments, they did not always do so consistently. For the remaining 108 estates, 39 had eligibility determinations every year that a determination was required, while 69 had determinations at least once between 1999 and 2005, but not with the frequency required by regulations. Table 1 shows the number of years for which estates in our sample were required to have annual eligibility determinations compared with the number of years that FSA actually conducted determinations. The dark shaded numbers highlight the number of estates that received all the required annual eligibility determinations for the years that the estate received farm program payments—a total of 39 estates. As the table shows, the longer an estate was kept open, the fewer determinations it received. For example, only 2 of the 36 estates requiring a determination every year over the 7-year period received all seven required determinations. According to FSA guidelines, an estate should provide evidence that it is still making required reports to the court to be eligible for farm program payments. However, we found that FSA sometimes approved eligibility for payments when the estate had provided insufficient information—that is, information that was either nonexistent or vague. For example, in 20 of the 108 determinations, the minutes of FSA county committee meetings indicated approval of eligibility for payments to estates, but the associated files did not contain any documents that explained why the estate remained active. FSA also approved eligibility on the basis of insufficient explanations for keeping the estate open. 
In five cases, executors explained that they did not want to close the estate but did not explain why. In a sixth case, documentation stated that the estate was remaining active upon the advice of its lawyers and accountants, but did not explain why. Furthermore, some FSA field offices approved program payments to groups of estates that were kept open after 2 years without any apparent review. In one case in Georgia, minutes of an FSA county committee meeting listed 107 estates as eligible for payments by stating that the county committee approved all estates open over 2 years. Two of the estates on this list of 107 were part of the sample that we reviewed in detail. In addition, another 10 estates in our sample, from nine different FSA field offices, were also approved for payments without any indication that even a cursory review had been conducted. Additionally, the extent to which FSA field offices make eligibility determinations varies from state to state, which suggests that FSA is not consistently implementing its eligibility rules. Overall, FSA field offices in 16 of the 26 states we reviewed made less than one-half of the required determinations of their estates. For example, in Alabama and in Georgia, FSA field offices made only 22 percent and 31 percent of the required determinations for estates, respectively, compared with FSA field offices in Kansas and Texas, which made 62 percent and 87 percent of the required determinations, respectively. Table 2 shows, for the 181 estates in our sample, the variation in FSA’s conduct of eligibility reviews from 1999 through 2005 in states that had five or more estates to examine. Appendix IV shows the extent to which FSA conducted estate eligibility determinations in each state in our review. Under the three-entity rule, individuals receiving program payments may not hold a substantial beneficial interest in more than two entities also receiving payments. 
However, because a beneficiary of an Arkansas estate we reviewed received farm program payments through the estate in 2005, as well as through three other entities, the beneficiary was able to receive payments beyond what the three-entity rule would have allowed. FSA was unaware of this situation until we brought it to officials’ attention, and FSA has begun taking steps to recover any improper payments. Had FSA conducted any eligibility determinations for this estate during the period, it might have determined that the estate was not eligible for these payments, preventing the beneficiary from receiving what amounted to a payment through a fourth entity. We informed FSA of the problems we uncovered during the course of our review. According to FSA field officials, a lack of sufficient personnel and time, and competing priorities for carrying out farm programs explain, in part, why many determinations were either not conducted or not conducted thoroughly. Nevertheless, officials told us that they would investigate these cases for potential receipt of improper payments and would start collection proceedings if they found improper payments. FSA cannot be assured that millions of dollars in farm program payments it made to thousands of deceased individuals from fiscal years 1999 through 2005 were proper because FSA does not have appropriate management controls, such as computer matching, to verify that it is not making payments to deceased individuals. For example, FSA is not matching recipients listed in its payment database with individuals listed as deceased in the Social Security Administration’s Death Master File. In addition, complex farming operations, such as corporations or general partnerships with embedded entities, make it difficult for FSA to prevent improper payments to deceased individuals. 
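The computer match described above, comparing recipients in FSA's payment database with individuals in the SSA Death Master File, amounts to a join on a shared identifier followed by a date comparison. A sketch with invented record layouts and data:

```python
# Hypothetical records and field layout; the actual match used FSA payment
# databases and the Social Security Administration's Death Master File.
from datetime import date

death_master = {
    "111-22-3333": date(1995, 6, 1),   # identifier -> recorded date of death
}

payments = [
    ("111-22-3333", date(2002, 10, 15), 55_000.00),  # (id, paid on, amount)
    ("444-55-6666", date(2002, 10, 15), 12_000.00),
]

def payments_after_death(payments, death_master):
    """Yield (recipient id, whole years since death, amount) for each
    payment dated after the recipient's recorded date of death."""
    for recipient, paid_on, amount in payments:
        died_on = death_master.get(recipient)
        if died_on is not None and paid_on > died_on:
            years_since_death = (paid_on - died_on).days // 365
            yield recipient, years_since_death, amount

flagged = list(payments_after_death(payments, death_master))
# flags the first payment, made about 7 years after the recorded death
```

Binning the flagged payments by years since death would reproduce the kind of breakdown GAO reported, such as the share of payments made 3 or more, or 7 or more, years after death.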
At present, FSA relies on farming operations to advise the agency of any change in the operation, including the death of a member that would affect payments made to the operation. From fiscal years 1999 through 2005, FSA paid $1.1 billion in farm program payments to 172,801 deceased individuals—either as individuals or as members of entities, according to our matching of FSA’s payment databases with the Social Security Administration’s Death Master File. Of the $1.1 billion in farm payments, 40 percent went to individuals who had been dead for 3 or more years, and 19 percent went to individuals who had been dead for 7 or more years. Figure 1 shows the number of years in which FSA made farm program payments after an individual had died and the value of those payments. As the figure shows, for example, FSA provided $210 million in farm program payments to deceased individuals 7 or more years after their date of death. Three cases illustrate how FSA’s lack of management controls can result in improper payments to deceased individuals. In the first case, FSA provided more than $400,000 in farm program payments from 1999 through 2005 to an Illinois farming operation on the basis of the ownership interest of an individual who had died in 1995. According to FSA’s records, the farming operation consisted of about 1,900 cropland acres producing mostly corn and soybeans. It was organized as a corporation with four shareholders, with the deceased individual owning a 40.3-percent interest in the entity. Notably, we found that the deceased individual had resided in Florida. Another member of this farming operation, who resided in Illinois and had signature authority for the operation, updated the operating plan most recently in 2004 but failed to notify FSA of the individual’s death. The farming operation therefore continued to qualify for farm program payments on behalf of the deceased individual. 
As noted earlier, FSA requires farming operations to certify that they will notify FSA of any change in their operation and to provide true and correct information. According to USDA regulations, failure to do so may result in forfeiture of payments and an assessment of a penalty. FSA recognized this problem in December 2006 when the children of the deceased individual contacted the FSA field office to obtain signature authority for the operation. FSA has begun proceedings to collect the improper payments. In the second case, FSA provided more than $200,000 in farm program payments from 1999 through 2002 to an Indiana farming operation on the basis of the ownership interest of an individual who had died in 1993. According to FSA’s records, the farming operation was a corporation, and the deceased individual held 100-percent ownership interest in the entity. The corporation operated farms in two counties, but upon the death of the individual, the corporation failed to notify the FSA field office in either county of the death. The corporation therefore continued to receive farm program payments on behalf of the deceased individual until 2002, when it filed a new farm operating plan with FSA that no longer included the deceased individual as a member. When we brought this case to the attention of FSA officials, they were unaware that the individual had died in 1993 and acknowledged that FSA provided improper payments to the farming operation from 1993 through 2002. According to agency officials, they intend to take action against the farming operation to recover the improper payments. In the third case, FSA provided about $260,000 in farm program payments from 1999 through 2006 to a corporation on the basis of the ownership interest of an individual who had died in 1993. According to FSA records, the farming operation had 14 shareholders, with the deceased individual holding a 14-percent interest. 
We found that another member of this farming operation, who had signature authority for the operation, updated the farm’s operating plan in 2004 but failed to notify FSA of the death of this member, who we found had resided in a metropolitan area several hundred miles from the farm. The farming operation therefore continued to receive farm program payments on behalf of the deceased individual. FSA was unaware that the individual had died in 1993, but said it would investigate and, if improper payments were made, take action against the farming operation to recover them. USDA recognizes that its farm programs have management control weaknesses, making them vulnerable to significant improper payments. In its FY 2006 Performance and Accountability Report to OMB, USDA reported that poor management controls led to improper payments to some farmers, in part because of incorrect or missing paperwork. In addition, as part of its reporting of improper payments information, USDA identified six FSA programs susceptible to significant risk of improper payments with estimated improper payments totaling over $2.8 billion in fiscal year 2006, as shown in table 3. Farm program payments made to deceased individuals indirectly—that is, as members of farming entities—represent a disproportionately high share of post-death payments. Specifically, payments to deceased individuals through entities accounted for $648 million—or 58 percent of the $1.1 billion in payments made to all deceased individuals from 1999 through 2005. However, payments to individuals through entities accounted for $35.6 billion—or 27 percent of the $130 billion in farm program payments FSA provided from 1999 through 2005. Similarly, we identified 39,834 of the 172,801 deceased individuals as receiving farm program payments through entities when we compared FSA’s databases with the Social Security Administration’s Death Master File. 
The complex nature of some types of farming entities, in particular, corporations and general partnerships, increases the potential for improper payments. For example, a significant portion of farm program payments went to deceased individuals who were members of corporations and general partnerships. Deceased individuals identified as members of corporations and general partnerships received nearly three-quarters of the $648 million that went to deceased individuals in all entities. The remaining one-quarter of payments went to deceased individuals in other types of entities, including estates, joint ventures, limited partnerships, and trusts. The deceased individuals who received farm program payments through entities were most often members of corporations and general partnerships. Specifically, of the 39,834 deceased individuals who received farm program payments through entities, about 57 percent were listed in FSA’s databases as members of corporations or general partnerships. Table 4 shows the number and percent of farm program payments FSA made to deceased individuals through entities from 1999 through 2005. As we reported in 2004, some farming operations may reorganize to overcome payment limits to maximize their program benefits. Large farming operations are often structured as corporations or general partnerships with other entities embedded within these entities. Deceased individuals are sometimes members of these embedded entities. For example, as shown in table 4, 8,575 deceased individuals received payments through general partnerships from 1999 through 2005. Of these, 687 received farm program payments because they were members of one or more entities that were embedded in the general partnership. Generally, these partnerships are consistent with the 1987 Act, as amended, whereby an individual can qualify for up to three payments by being a member of three entities within one general partnership. 
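Attributing an entity's payment to the individuals behind it, including individuals reached only through entities embedded in other entities, amounts to multiplying ownership shares down the chain. A minimal sketch under assumed data structures (not FSA's actual files):

```python
def attribute(entity, amount, owners, out=None):
    """Recursively apportion `amount` to individuals by ownership share.
    `owners[e]` lists (member, share) pairs; members absent from
    `owners` are treated as individuals."""
    if out is None:
        out = {}
    for member, share in owners.get(entity, []):
        portion = amount * share
        if member in owners:            # embedded entity: recurse
            attribute(member, portion, owners, out)
        else:                           # individual: accumulate
            out[member] = out.get(member, 0.0) + portion
    return out

# Hypothetical general partnership with one embedded corporation.
owners = {
    "partnership": [("corpA", 0.5), ("alice", 0.5)],
    "corpA": [("bob", 0.75), ("carol", 0.25)],
}
print(attribute("partnership", 10000.0, owners))
# → {'bob': 3750.0, 'carol': 1250.0, 'alice': 5000.0}
```

Once payments are expressed at the individual level this way, each individual's total can be checked against a death file or a payment limit, regardless of how deeply the operation is nested.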
Furthermore, of the 172,801 deceased individuals identified as receiving farm program payments, 5,081 received more than one payment because (1) they were a member of more than one entity, or (2) they received payments as an individual and were a member of an entity. According to FSA field officials, complex farming operations, such as corporations and general partnerships with embedded entities, make it difficult for FSA to prevent making improper payments to deceased individuals. In particular, in many large farming operations, one individual often holds signature authority for the entire farming operation, which may include multiple members or entities. This individual may be the only contact FSA has with the operation; therefore, for several reasons, FSA cannot always know that the signing individual accurately represents each member of the operation. First, FSA relies on the farming operation to self-certify that the information provided is accurate and that the operation will inform FSA of any operating plan changes, which would include the death of an operation’s member. Such notification would provide USDA with current information to determine the eligibility of the entity to receive the payments. Second, FSA has no management controls, such as computer matching of its payment files with the Social Security Administration’s Death Master File, to detect when an ongoing farming operation has failed to report the death of a member. FSA has a formidable task—ensuring that billions of dollars in program payments are made only to estates and individuals that are eligible to receive them. Our review, however, demonstrates that FSA field offices do not always conduct the necessary annual determinations to ensure that estates are eligible to receive farm program payments. FSA’s performance of these determinations for estates that have been kept open for more than 2 years could serve as an effective deterrent to making improper program payments. 
However, these determinations can only be a deterrent if they are consistently and thoroughly conducted. As we have found, some FSA field offices have failed to conduct eligibility determinations, or have conducted them inconsistently and without documenting the results. FSA has relied on farming operations to report the death of a member whose ownership interest makes the operation eligible for program payments. However, it appears that some individuals who certify program eligibility forms for farming operations are either not taking seriously their obligation to notify FSA of the death of a member of the operation or are deliberately withholding this information to maximize their receipt of farm program payments. Our matching of FSA’s farm payment database with the Social Security Administration’s Death Master File indicates that FSA’s reliance is misplaced, in at least some instances. We previously reported examples of recipients who may circumvent the payment limits by organizing large farming operations to maximize program payments. The complex nature of these entities—such as entities embedded within other entities—increases the potential that deceased individuals will receive farm program payments because the status of these individuals is not easy for FSA to ascertain. Currently, FSA does not have effective management controls to verify that an individual receiving farm program payments, either directly or indirectly through an entity, is still alive. The lack of these controls increases the risk of improper payments being made over time. The shortcomings we have identified underscore the need for improved oversight of federal farm programs. Such oversight can help to ensure that program funds are spent as economically, efficiently, and effectively as possible, and that they benefit those engaged in farming as intended. 
To provide reasonable assurance that FSA does not make improper payments to estates and deceased individuals, we recommend that the Secretary of Agriculture direct the Administrator of the Farm Service Agency to instruct FSA field offices to conduct all annual estate eligibility determinations as required; implement management controls, such as matching payment files with the Social Security Administration’s Death Master File, to verify that an individual receiving farm program payments has not died; and determine if improper program payments have been made to deceased individuals or to entities that failed to disclose the death of a member, and if so, recover the appropriate amounts. In addition, we have referred the cases we identify in this report to USDA’s Office of Inspector General for further investigation. We provided FSA with a draft of this report for review and comment. FSA agreed with our recommendations and already has begun to take action to implement them. For example, FSA has issued a notice (Notice PL-158, May 31, 2007) to its field offices emphasizing the current payment eligibility rules, procedures, and review requirements for payments with respect to deceased individuals and estates. This directive instructs these offices to review the eligibility of all estates that have been open for more than 2 years and requested 2007 farm program benefits. Furthermore, according to FSA, it is currently working with the Social Security Administration to obtain access to the Death Master File of deceased individuals. FSA intends to develop a process for matching its payment data against the Death Master File on at least an annual basis. According to FSA, it will then have a reliable means for identifying deceased individuals who may also be payment recipients. In addition, once implemented, FSA will no longer have to depend on the farming operation to notify the agency of an individual’s death. 
Despite its concurrence with our recommendations, FSA did not agree with our use of the term “improper payments” in this report. FSA suggested that we revise the report to refer to the payments as at most “questionable” in view of current eligibility regulations, rather than improper. Specifically, the agency stated that the payments we describe do not meet the definition of improper payments under IPIA. We disagree. We believe three cases we highlight in examples in the report do meet the definition of improper payments under IPIA. IPIA defines improper payments as any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. This definition would include any payment made to an ineligible recipient either directly or through an entity. Our examples are consistent with this definition. Furthermore, officials in FSA’s field offices agreed with our findings and told us they intend to recover the payments. For the remaining farm program payments identified in the report, we continue to believe that the potential exists for improper payments because of the lack of FSA management controls and the complexity of some of the farming operations involved. Under current circumstances, FSA cannot be assured that millions of dollars in farm program payments are going to those who met eligibility requirements and thus should have received these payments. FSA’s written comments are presented in appendix II. FSA also provided us with suggested technical corrections, which we have incorporated into this report, as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. 
At that time, we will send copies of this report to appropriate congressional committees; the Secretary of Agriculture; the Director, OMB; and other interested parties. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or shamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. At the request of the Ranking Member of the Senate Committee on Finance, we reviewed the Farm Service Agency’s (FSA) implementation of payment eligibility provisions to identify improper payments to estates and deceased individuals. Specifically, we evaluated the extent to which FSA (1) follows its regulations that are intended to provide reasonable assurance that farm program payments go only to eligible estates and (2) makes improper payments to deceased individuals. To determine how well FSA field offices carry out rules that prohibit payments to ineligible recipients, we reviewed guidance that FSA field offices use to determine farm program payment eligibility, including relevant statutes and regulations and agency policy, including the FSA Handbook on Payment Limitations, 1-PL (Revision 1). We reviewed relevant studies prepared by the U.S. Department of Agriculture’s (USDA) Office of Inspector General and the Congressional Research Service, as well as our own past reports. We also reviewed USDA’s FY 2006 Performance and Accountability Report to understand its assessment of internal controls for its farm programs. In addition, we spoke with FSA officials in headquarters, state offices, and local field offices who are responsible for ensuring that (1) estates are properly reviewed for eligibility and (2) payments are not made to deceased individuals. 
We obtained and analyzed FSA’s computer databases for information on payment recipients from 1999 through 2005. These databases included FSA’s Producer Payment Reporting System, Commodity Certificate file, and Permitted Entity file. The databases contain detailed information on payment recipients: Social Security numbers, payment amounts, the status of recipients as individuals or members of entities, their ownership interest in entities, types of entity, and additional organizational details. The databases also contain information on payments made under USDA’s farm programs, including the Direct and Counter-Cyclical Payments Program, Marketing Assistance Loan Program, Conservation Reserve Program, and Environmental Quality Incentives Program. We also compiled data on farm program benefits provided through cooperative marketing associations. Because our analysis covered the years 1999 through 2005, it also included farm payments from programs authorized before the Farm Security and Rural Investment Act of 2002, such as production flexibility contract payments authorized under the Agriculture Market Transition Act and market loss assistance payments and crop disaster assistance payments authorized under various ad hoc legislation. Appendix III provides a list of USDA farm programs we reviewed. To evaluate FSA’s application of regulations and guidance to assess the overall effectiveness of its review process for deciding whether estates are eligible to receive farm program payments, we reviewed a nonrandom sample of estate eligibility determinations. To identify estates for our review, we analyzed FSA’s databases. The data showed that 2,841 estates had received payments for more than 2 years between 1999 and 2005, thus requiring FSA to conduct a determination of eligibility. Of these, we examined 181 estates in 26 states and 142 counties. These estates included the 162 (i.e., 162 of 2,841) that received over $100,000 in farm program payments during this period. 
We also selected the 16 estates (i.e., 16 of 2,841) that (1) had received between $50,000 and $100,000 in farm program payments during this period and (2) had at least one member receiving payments through three other entities, which could indicate circumvention of the three-entity rule. Lastly, we selected the three estates (i.e., 3 of 2,841) that had at least one member receiving payments through seven or more other entities. For each estate selected, we reviewed case file documents to verify the basis for FSA field offices’ decisions to grant eligibility. Specifically, we obtained and reviewed files from FSA field offices that ideally would have included the following information to facilitate FSA’s determinations: letters testamentary from a probate court, minutes of the FSA county committee meeting that approved eligibility, explanation letters or documentation for the reason the estate remained active beyond 2 years, farm operating plans, and payment history. States and counties vary widely in the amount and type of documentation they require for probated estates. Consequently, we could not easily determine whether improper payments were made to estates. Furthermore, even in cases in which FSA had not done the required annual determinations, or when relevant documentation was missing or incomplete in the estate file, we could not determine whether improper payments were made without examining each case in depth. To evaluate the extent to which FSA makes improper payments to deceased individuals, we compared recipients of farm program payments in FSA’s computer databases with individuals whose Social Security numbers were listed in the Social Security Administration’s Death Master File, to identify program payments made after a recipient’s death. The Death Master File contains information such as the names and Social Security numbers of deceased individuals in the United States. 
We assessed the reliability of FSA’s data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of our review. Although we did not assess the reliability of the Social Security Administration’s Death Master File, it is the most comprehensive list of death information available in the federal government and is generally used by other government agencies and researchers. Using FSA’s databases, we identified the 2.9 million individuals who received payments, either directly or indirectly through an entity, from 1999 through 2005. Payments were attributed to members of an entity by apportioning the payments according to each member’s percentage share of that entity. Using these Social Security numbers, we then compared these individuals with individuals listed in the Social Security Administration’s Death Master File to determine the extent to which deceased individuals may have received improper payments. The data match showed the number and dollar amount of payments FSA provided to deceased individuals from 1999 through 2005. To gain an understanding of circumstances behind seemingly improper payments, we obtained relevant documents from FSA, including farm operating plans and acreage reports, for selected cases. We conducted our review between June 2006 and May 2007 in accordance with generally accepted government auditing standards. 1. We believe the payments we highlight in three examples in the report meet the definition of improper payments under IPIA. IPIA defines improper payments as any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. 
This definition would include any payment made to an ineligible recipient either directly or through an entity. Our examples are consistent with this definition. Furthermore, officials in FSA’s field offices agreed with our findings and told us they intend to recover the payments. For the remaining farm program payments identified in the report, we continue to believe that the potential exists for improper payments because of the lack of FSA management controls and the complexity of some of the farming operations involved. Under current circumstances, FSA cannot be assured that millions of dollars in farm program payments are going to those who met eligibility requirements and thus should have received these payments. 2. For each of the three examples discussed in the report, we verified the accuracy of information in FSA’s payment system and discussed the estate with the FSA field office where the estate was located. Because the field offices have this information, we do not understand why FSA does not believe the report provided sufficient information to investigate these cases further. 3. We would expect FSA field offices to have appropriate documents to verify acceptable reasons for keeping the estate open. These files could have included the following information to facilitate FSA’s determinations: letters testamentary from a probate court, minutes of the FSA county committee meeting that approved eligibility, explanation letters or documentation for the reason the estate remained active beyond 2 years, and farm operating plans. However, when annual determinations were not done or relevant documentation was missing or incomplete in the files, we could not determine with certainty whether improper payments were made to estates. As we discuss on page 4 of this report, the wide variation in state and county documentation required for probated estates made it difficult for us to make eligibility determinations. 
We continue to believe that the failure of FSA’s field offices to conduct annual determinations of eligibility increases the risk of improper payments being made over time. 4. FSA implies that because the $1.1 billion in farm program payments paid to deceased individuals during 1999 through 2005 amounts to only 8/10 of 1 percent of the total payments made during this period, the amount is negligible. We disagree—a billion dollars is not a negligible sum. In addition, this amount represents only payments made to deceased individuals during this specific period; it does not capture payments made to deceased individuals before and after this period. FSA is obligated to ensure that program funds are spent as economically, efficiently, and effectively as possible. The nation’s current deficit and growing long-term fiscal challenges reinforce the importance of this obligation. Implementing management controls, such as matching payment files with the Social Security Administration’s Death Master File, to verify that an individual receiving farm program payments has not died is a simple, cost-effective means to achieve this end. 5. FSA is correct that counter-cyclical payments may be made for up to 3 years after an individual has died. However, according to our analysis, only $46.5 million (4.2 percent) of the $1.1 billion in payments made to deceased individuals from 1999 through 2005 were counter-cyclical payments made for the same program year as the year in which the individual died. Furthermore, a farming operation is subject to forfeiture of payments, including counter-cyclical payments, if it has not notified FSA of a change in the farming operation, such as the death of an individual who receives payments as a member of that operation. Many deceased individuals who received counter-cyclical payments during this period also received payments under other programs for which FSA should have been notified of the change in the farming operation. 
However, the fact that an individual was identified as deceased in our computer matching indicates FSA was not informed that a change in the farm operation had occurred, suggesting that the farming operation was not eligible to receive any of the payments, including the counter-cyclical payments. 6. As noted in the report, the source for information in table 3 (p. 17) is USDA’s FY 2006 Performance and Accountability Report. The improper payments and the percent error rate for each program in table 3 are USDA’s estimates. We acknowledge that improper payments made under the Noninsured Assistance Program are not exclusively the result of payments made to deceased individuals. [A table of payment amounts by farm program appears here in the original; its figures are not recoverable from this text. Row labels included the Lamb Meat Adjustment Assistance Program, Soil and Water Agricultural Assistance Program, Trade Adjustment Assistance for Farmers, and Wool & Mohair Market Loss Assistance Program. The table’s notes follow.] Includes cotton user marketing certificate gains. Includes the Apple & Potato Quality Loss Program, Sugar Beet Disaster Program, Quality Loss Program, Crop Loss Disaster Assistance Program, Florida Nursery Losses Program, Florida Hurricane Charley Disaster Program, Disaster Reserve Flood Compensation Program, Florida Hurricane Nursery Disaster Program, Florida Hurricane Vegetable Disaster Program, Multi-Year Crop Loss Disaster Assistance Program, North Carolina Crop Hurricane Damage Program, Nursery Losses In Florida Program, and Single Year Crop Loss Disaster Assistance Program, as well as Disaster Supplemental Appropriation payments, Crop Disaster North Carolina payments, Crop Disaster Virginia payments, and 1999 Citrus Losses In California. Includes the Dairy Indemnity Program, Dairy Options Pilot Program, and Dairy Production Disaster Assistance Program. 
Includes the Livestock Assistance Program, Livestock Indemnity Program, Avian Influenza Indemnity Program, Cattle Feed Program, Pasture Flood Compensation Program, and Pasture Recovery Program. Includes “loan deficiency payment-like” grazing payments for wheat, barley, oats, and triticale. Includes supplemental appropriations for the Noninsured Assistance Program. Includes supplemental appropriations for the Oilseed Payment Program. Includes the Sugar Payment-In-Kind Diversion Program. Includes the Tobacco Loss Assistance Program and the Supplement Tobacco Loss Assistance Program. Includes the Yakima Basin Water Program, Flood Compensation Program for Harney County Oregon, Fresh Market Peaches Program, Idaho Oust Program, Livestock Compensation Program- Grants For Catfish Producers, Limited California Cooperative Insolvency Program, New Mexico Tebuthiuron Application Losses Program, New York Onion Producers Program, Potato Diversion Program, Poultry Enteritis Mortality Syndrome Program, Seed Corn Purchase Containing CRY9C Protein Program, Specialty Crops-Base State Grants Program, Specialty Crops-Value Of Production Program, and State Commodity Assistance Program, as well as Consent Decree payments and Interest Penalty payments. Table 5 shows the variation by state in FSA’s conduct of eligibility determinations from 1999 through 2005 for the 181 estates in our sample. Not all states are represented because we chose estates based on criteria other than location. Our sample of 181 estates included the 162 that received over $100,000 in farm program payments during this period. We also selected the 16 estates that (1) received between $50,000 and $100,000 in farm program payments during this period and (2) had at least one member receiving payments through three other entities, which could indicate circumvention of the three-entity rule. In addition, we selected the three estates that had at least one member receiving payments through seven or more other entities. 
In addition to the individual named above, James R. Jones, Jr., Assistant Director; Hamid E. Ali; Kevin S. Bray; Thomas M. Cook; Stephanie K. Fain; Ronald E. Maxon, Jr.; Jennifer R. Popovic; and Carol Herrnstadt Shulman made key contributions to this report.

Farmers receive about $20 billion annually in federal farm program payments, which go to individuals and "entities," including corporations, partnerships, and estates. Under certain conditions, estates may receive payments for the first 2 years after an individual's death. For later years, the U.S. Department of Agriculture (USDA) must determine that the estate is not being kept open for payments. As requested, GAO evaluated the extent to which USDA (1) follows its regulations that are intended to provide reasonable assurance that farm program payments go only to eligible estates and (2) makes improper payments to deceased individuals. GAO reviewed a nonrandom sample of estates based, in part, on the amount of payments an estate received and compared USDA's databases that identify payment recipients with individuals the Social Security Administration listed as deceased.

USDA has made farm payments to estates more than 2 years after recipients died, without determining, as its regulations require, whether the estates were kept open to receive these payments. As a result, USDA cannot be assured that farm payments are not going to estates kept open primarily to obtain these payments. From 1999 through 2005, USDA did not conduct any eligibility determinations for 73, or 40 percent, of the 181 estates GAO reviewed. Sixteen of these 73 estates had each received more than $200,000 in farm payments, and 4 had each received more than $500,000. Also, for the 108 reviews USDA did conduct, GAO identified shortcomings.
For example, from 1999 through 2005, 69 of the 108 estates did not receive annual reviews for every year of payments received, and some USDA field offices approved groups of estates for payments without reviewing each estate. Furthermore, 20 estates that USDA approved for payment eligibility had no documented explanation for keeping the estate open. USDA cannot be assured that millions of dollars in farm payments are proper. It does not have management controls to verify that it is not making payments to deceased individuals. For 1999 through 2005, USDA paid $1.1 billion in farm payments in the names of 172,801 deceased individuals (either as an individual recipient or as a member of an entity). Of this total, 40 percent went to those who had been dead for 3 or more years, and 19 percent to those dead for 7 or more years. Most of these payments were made to deceased individuals indirectly (i.e., as members of farming entities). For example, over one-half of the $1.1 billion in payments went through entities from 1999 through 2005. In one case, USDA paid a member of an entity—deceased since 1995—over $400,000 in payments for 1999 through 2005. USDA relies on the farming operation's self-certification that the information provided is accurate and that the operation will inform USDA of any changes, such as the death of a member. Such notification would provide USDA with current information to determine the eligibility of the entity to receive the payments. The complex nature of some farming operations—such as entities embedded within other entities—can make it difficult for USDA to avoid making payments to deceased individuals.
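GAO's comparison of USDA's payment databases with Social Security death records can be illustrated with a minimal match over two record sets. This is a sketch only; the record layouts, SSNs, and dates below are hypothetical.

```python
# Hypothetical stand-ins for USDA's payment databases and SSA's death records.
payments = [
    {"ssn": "111", "year": 2004, "amount": 50_000},
    {"ssn": "222", "year": 2003, "amount": 10_000},
    {"ssn": "333", "year": 2005, "amount": 5_000},
]
deaths = {"111": 1999, "222": 2002}  # ssn -> year of death

# Flag payments made in the names of deceased individuals, noting how long
# after death each payment occurred. Payments more than 2 years after death
# are the ones that require an eligibility determination under USDA's rules.
flagged = []
for p in payments:
    death_year = deaths.get(p["ssn"])
    if death_year is not None and p["year"] > death_year:
        flagged.append((p["ssn"], p["year"] - death_year, p["amount"]))

print(flagged)  # SSN 111 paid 5 years after death; SSN 222 paid 1 year after
```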
Under the Direct Loan program, Education issues several types of student loans: Subsidized and Unsubsidized Stafford Loans, PLUS Loans, and Consolidation Loans. The federal government sets limits on the maximum interest rate, loan origination fee and other charges, and annual and aggregate amounts that can be borrowed (see table 1). Education contracts with 11 loan servicers to manage Direct Loan accounts. Loan servicing includes activities such as communicating with borrowers, counseling borrowers on selecting repayment plans, and processing payments. Education offers a variety of repayment plans for Direct Loan borrowers:

Standard: Borrowers have fixed monthly payments with a fixed term of 10 years or less (or 10 to 30 years for Consolidation Loans, depending on the amount of the loan). Borrowers are automatically enrolled in 10-year Standard repayment if they do not choose another option.

Graduated: Borrowers have a fixed term of up to 10 years (or 10 to 30 years for Consolidation Loans, depending on the amount of the loan). Monthly payments gradually increase.

Extended: Borrowers have a fixed term of 25 years or less. Monthly payments may be fixed or graduated, and borrowers must have more than $30,000 in loans.

Education also offers repayment plans that base monthly payments on income and family size for Direct Loan borrowers who meet certain eligibility requirements: Income-Contingent Repayment (ICR), Income-Based Repayment (IBR), and Pay As You Earn (PAYE). Key features of these income-driven repayment plans include lower monthly payments, repayment periods of up to 25 years, and forgiveness of any remaining loan balances at the end of the repayment period. (See table 2.) The plans have provided progressively more generous repayment and forgiveness terms to help borrowers manage their federal student loan debt, and additional changes are expected.
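The fixed monthly payment under Standard repayment follows the usual level-payment amortization formula. The sketch below uses an illustrative $30,000 balance at 5 percent interest; these figures are assumptions, not numbers from the report.

```python
def standard_monthly_payment(principal, annual_rate, years=10):
    """Level monthly payment that fully amortizes the loan:
    payment = P * r / (1 - (1 + r)**-n), with monthly rate r over n months."""
    r = annual_rate / 12   # monthly interest rate
    n = years * 12         # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# Illustrative borrower: $30,000 at 5 percent over the 10-year Standard term.
payment = standard_monthly_payment(30_000, 0.05)
print(round(payment, 2))  # about $318 per month for 120 months
```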
Specifically, Education issued proposed regulations on July 9, 2015, that would expand PAYE to Direct Loan borrowers regardless of when the borrower took out the loans. Education officials said they intend to complete rulemaking on the Revised Pay As You Earn plan by the end of 2015. The President and congressional leaders have also proposed streamlining these plans. As part of the application to participate in one of the income-driven repayment plans, borrowers must provide documentation of their adjusted gross income and certify their family size to their loan servicer, which determines eligibility on behalf of Education. Income-driven repayment plan participants must re-certify their adjusted gross income and family size annually, which may increase or decrease their monthly payments. In order to initially qualify for IBR and PAYE, borrowers must have income and student loan debt such that their monthly payment would be less under one of these plans than under the 10-year Standard repayment plan. These borrowers are described as having a "partial financial hardship." Once enrolled, borrowers can remain on the plans and be eligible for loan forgiveness regardless of whether they have a partial financial hardship. However, the monthly payment for borrowers found to no longer have a partial financial hardship is based on (and never exceeds) the payment they would have owed under 10-year Standard repayment. Because of the number of payments required before loan forgiveness can be considered, Education officials said the earliest possible date that any borrower may receive loan forgiveness is July 1, 2019, under ICR and the original IBR plan; October 1, 2027, under PAYE; and July 1, 2034, under IBR for new borrowers. Although loan forgiveness is a key feature of income-driven repayment plans, under current tax law any amount forgiven under these plans is subject to federal income tax. In addition, some borrowers will fully repay their loans before qualifying for forgiveness.
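The partial-financial-hardship test compares an income-driven payment against the 10-year Standard amount. The sketch below uses the published payment formulas (15 percent of discretionary income for original IBR, 10 percent for PAYE, where discretionary income is adjusted gross income minus 150 percent of the poverty guideline); the $11,770 single-person poverty guideline, the 6.8 percent interest rate, and the borrower figures are illustrative assumptions, not values from the report.

```python
POVERTY_GUIDELINE = 11_770  # illustrative 2015 guideline for a household of one

def monthly_income_driven_payment(agi, share):
    """share is 0.15 for original IBR, 0.10 for PAYE."""
    discretionary = max(0, agi - 1.5 * POVERTY_GUIDELINE)
    return share * discretionary / 12

def standard_payment(principal, annual_rate=0.068, months=120):
    """10-year Standard level payment (assumed 6.8 percent rate)."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

def has_partial_financial_hardship(agi, principal, share):
    # A borrower initially qualifies for IBR or PAYE if the income-driven
    # payment is less than the 10-year Standard payment on their debt.
    return monthly_income_driven_payment(agi, share) < standard_payment(principal)

# Illustrative borrower: $25,000 AGI, $20,000 in loans, checking PAYE (10%).
print(has_partial_financial_hardship(25_000, 20_000, share=0.10))
```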
Extending the repayment period may also result in some borrowers paying more interest over the life of the loan than they would under 10-year Standard repayment. Beginning in 2017, the Public Service Loan Forgiveness (PSLF) program is to offer loan forgiveness on the remaining Direct Loan balances of borrowers who complete at least 10 years of qualifying public service employment and meet other requirements. The program was established in 2007. To receive forgiveness, borrowers must make 120 on-time, scheduled, monthly payments while employed full-time by a qualified public service organization, such as a government or nonprofit organization. Borrowers must also be working for a public service organization at the time they apply for forgiveness and when the remaining balance on their loan is forgiven. Because only payments made after October 1, 2007 qualify, no borrowers are eligible to receive loan forgiveness before October 2017. Qualifying repayment plans include IBR, PAYE, ICR, 10-year Standard repayment, or another plan if the payments equal or exceed the 10-year Standard payment amount. However, borrowers enrolled in IBR, PAYE, and ICR are more likely to have balances remaining to be forgiven after 120 payments, because the 10-year Standard repayment plan is set to fully pay all loan principal and interest in 10 years or less. The amount of loans that may be forgiven is not capped. In January 2012, Education established a process to certify borrowers’ public service employment and loans for PSLF (see fig. 1). Education’s loan servicer responsible for PSLF communicates with borrowers who request certification about their employment, repayment plan, and qualifying payments, including counseling borrowers about any changes needed in order to qualify for the program. Borrowers may submit information about their employment at any time or wait until they apply for loan forgiveness beginning in October 2017. 
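The PSLF conditions described above (120 on-time monthly payments, each made on a qualifying plan while employed full-time in public service) amount to a simple count. The payment records below are hypothetical, and the sketch omits the rule that other plans qualify when their payments equal or exceed the 10-year Standard amount.

```python
# Simplification: a fixed set of qualifying plans. Education also counts
# other plans whose payments equal or exceed the 10-year Standard amount.
QUALIFYING_PLANS = {"IBR", "PAYE", "ICR", "10-year Standard"}

def count_qualifying_payments(payments):
    # A payment counts toward the 120 required for PSLF only if it was
    # on time, made under a qualifying plan, and made while the borrower
    # was employed full-time by a public service organization.
    return sum(
        1
        for p in payments
        if p["on_time"] and p["plan"] in QUALIFYING_PLANS and p["public_service"]
    )

payments = (
    [{"on_time": True, "plan": "IBR", "public_service": True}] * 118
    + [{"on_time": False, "plan": "IBR", "public_service": True}]       # late payment
    + [{"on_time": True, "plan": "Extended", "public_service": True}]   # non-qualifying plan
)
qualifying = count_qualifying_payments(payments)
print(qualifying, qualifying >= 120)  # 118 qualifying payments: not yet eligible
```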
Many eligible borrowers do not participate in income-driven repayment plans. Using its income tax data and Education's student loan data, Treasury estimated that about half (51 percent) of Direct Loan borrowers were eligible for IBR as of September 2012. Of these eligible borrowers, an estimated 20 percent participated in IBR or ICR, the only income-driven repayment plans available at the time of Treasury's analysis. According to our review of more recent summary data from Education's National Student Loan Data System, 15 percent of about 11.2 million Direct Loan borrowers in active repayment—not in deferment, forbearance, or default—participated in IBR (13 percent) or PAYE (2 percent) as of September 2014. An additional 4 percent of these borrowers participated in ICR (see fig. 2). Participation in these three income-driven repayment plans ranged from 15 percent of borrowers who entered repayment in fiscal year 2009 or earlier to 23 percent of those who entered repayment in fiscal year 2013. While we examined participation in IBR and PAYE as of September 2014, publicly available data from Education show participation in these plans has increased over time. According to Education's data, from June 2013 to March 2015, IBR participation among Direct Loan recipients increased from 5.8 percent to 11.7 percent, and PAYE participation increased from 0.3 percent to 2.7 percent. These percentages differ from the ones we present based on summary data from Education's NSLDS due to different borrower populations and time periods for analysis.
While data on retention in IBR and PAYE are limited given the newness of the repayment plans, we found short-term retention rates were high, according to our review of summary data from NSLDS:

IBR: 95 percent of Direct Loan borrowers participating in IBR with a partial financial hardship in July 2012 remained in the plan 2 years later (84 percent still had a partial financial hardship and paid less than the 10-year Standard repayment amount).

PAYE: 98 percent of borrowers participating in PAYE with a partial financial hardship in July 2013 remained in the plan or were in IBR 1 year later (86 percent still had a partial financial hardship and paid less than the 10-year Standard repayment amount).

Education officials and higher education experts we interviewed said many factors may affect eligible borrowers' participation in income-driven repayment plans. They said some may not be aware of IBR or PAYE, may not understand them, or may have difficulty applying or meeting annual income certification requirements. Education officials also noted that some borrowers may choose non-standard repayment plans, such as the Extended or Graduated plans, which may offer lower initial monthly payments than income-driven plans. In addition, not all borrowers who are aware of IBR or PAYE and are eligible choose to participate after considering the costs and benefits. For some borrowers, the value of lower monthly payments on IBR or PAYE may outweigh the potential increase in total loan costs, while others may prefer to pay off their loans sooner at a potentially lower total loan cost if they can afford higher monthly payments on the 10-year Standard repayment plan. To understand the potential costs and benefits of participating in IBR or PAYE, we created two example borrowers—Borrower A and Borrower B—who are single with $20,000 in loan debt and different starting annual adjusted gross incomes that increase by 5 percent annually (see fig. 3).
For Borrower A, who begins repayment with an annual adjusted gross income of $15,000, repaying with IBR or PAYE rather than the 10-year Standard plan would reduce both monthly payments and total loan costs. Under PAYE in particular, Borrower A would pay less over the life of the loan than the amount borrowed. Moreover, in this example, the federal government would collect less on the loan than it would on the 10-year Standard plan. In contrast, for Borrower B, who begins repayment with a higher annual adjusted gross income of $25,000, repaying with IBR or PAYE would initially reduce monthly payments, but the total cost of the loans would be higher than on the 10-year Standard plan. Compared to Borrower A, Borrower B has higher total loan costs under both IBR and PAYE due to paying more each month based on the higher income. As a result, in this example, the federal government would collect more on the loan from Borrower B than it would on the 10-year Standard plan. Many income-driven repayment plan participants had low annual adjusted gross incomes. For those with available income data, 70 percent of IBR participants and 83 percent of PAYE participants earned from $1 to $20,000, according to our review of September 2014 data from Education (see fig. 4). In contrast, 10 percent of IBR participants and 5 percent of PAYE participants had annual adjusted gross incomes greater than $40,000. We also found that IBR and PAYE participants had borrowed more than those participating in Standard repayment. For example, 64 percent of IBR participants and 45 percent of PAYE participants had borrowed more than $30,000, compared to 23 percent of borrowers participating in Standard repayment, according to September 2014 summary data from Education (see fig. 5). In addition, substantially lower percentages of IBR and PAYE participants had defaulted on their loan compared to those in Standard repayment, and the great majority were in active repayment as of September 2014.
Education officials cautioned against comparing default rates across repayment plans because IBR and PAYE are newer and borrowers have not had as much time to default. However, when we examined the status of loans by cohort for borrowers who entered repayment in the same fiscal year, we found IBR and PAYE participants had substantially lower default rates than Standard plan participants. Specifically, among borrowers who entered repayment from fiscal year 2010 to fiscal year 2014, less than 1 percent of IBR and PAYE participants had defaulted on their loan, compared to 14 percent in Standard repayment (see fig. 6). According to Education officials, fundamental differences between borrowers who elect to participate in IBR and PAYE and Standard plan participants may account for the difference in default rates. They also noted that IBR and PAYE participants may have scheduled monthly payments as low as zero dollars. For more information about how IBR and PAYE participants compare to Standard plan borrowers on characteristics such as gender, age, highest academic level, and type of school attended, see appendix II. Education has taken steps intended to increase borrower awareness of income-driven repayment plans, including IBR and PAYE, but has not consistently provided information about these plans to borrowers who have entered repayment. According to Education's Fiscal Year 2012-2016 Strategic Plan for federal student aid, in support of its goal to provide superior information and service to borrowers, Education aims to compile and distribute information on the costs and benefits of higher education programs to improve financial literacy and support borrowers' decision-making. Education reported in its fiscal year 2015 budget proposal that many borrowers seemed unaware of income-based or other repayment options.
Further, in February 2015, Education officials highlighted ongoing concerns about awareness, noting that feedback they have obtained from borrowers suggests borrowers are less aware of income-driven repayment plans and many borrowers have not considered these plans because they did not have enough information about them. In addition, although 12 of the 14 borrowers we interviewed were aware of income-driven repayment plans, 9 said they had to do their own research to find information about them or did not have a good understanding of the plans. Education provides detailed information about income-driven repayment on its website, including repayment terms; eligibility requirements; a calculator that allows borrowers to estimate monthly loan payments and total loan costs under different repayment plans; and an online counseling tool that includes repayment options. Education also has begun publicizing IBR and PAYE through social media. However, borrowers must actively seek information through these sources. Education also provides information about repayment plans—including IBR and PAYE terms, benefits, and eligibility requirements—in the borrower rights and responsibilities statement that is provided when borrowers receive their loans and through required entrance and exit counseling completed by borrowers when they begin and end school. However, Education does not directly provide this information to borrowers once they have entered repayment, when they may have a better sense of whether they can afford their monthly payments. In an effort to increase awareness of IBR and PAYE, Education conducted outreach campaigns from fall 2013 through May 2015, in which it sent emails to almost 5 million borrowers in targeted groups, such as delinquent borrowers and borrowers in their grace period who had more than $25,000 in debt. 
The emails provided general information about income-driven repayment terms and benefits and directed borrowers to Education’s website for more information. Education officials said the department emailed these borrowers directly instead of having loan servicers do so because customer feedback has shown borrowers are not always familiar with their servicer. Education officials told us in June 2015 that they plan to email in-grace borrowers with over $25,000 in loan debt twice per year. In addition, Education has partnered with Treasury since 2014 to include a message about income-driven repayment options on the back of tax refund envelopes, and with Intuit Inc. to include information about these options for borrowers who used TurboTax to file their taxes. Education officials told us that, based on the success of these efforts, they are continuing their partnership with Intuit Inc. and that they formed a new partnership with H&R Block and Treasury to publicize income-driven repayment options. Once borrowers enter repayment, Education primarily relies on its loan servicers to communicate directly with them about repayment options. Although Education requires loan servicers to send certain communications to borrowers who already participate in income-driven repayment plans, it has not established specific requirements for how servicers communicate with other borrowers about the plans. Instead, Education officials said the department provides financial incentives to servicers to help keep borrowers current in repayment (e.g. not in delinquency, default, or forbearance). Representatives from the three selected loan servicers we interviewed, which collectively serve about half of borrowers with loans owned by Education, said they generally make information about income-driven repayment available through customer service representatives and websites. However, borrowers must actively seek information through these sources. 
Documentation from these three servicers also showed that they contacted some borrowers with information about repayment options, including IBR and PAYE, when borrowers missed monthly payments or their deferment or forbearance periods were ending. However, when we reviewed sample written communications the three loan servicers sent to all borrowers in repayment in 2014, we found inconsistency in the information they provided about income-driven repayment plans. In addition, these communications did not include information about how the plans work or their eligibility requirements. For example: Two servicers included a list of repayment plans, including IBR and PAYE, on the back of monthly billing statements sent to borrowers but did not describe the plans or their benefits. Another servicer, which serves more than 5 million borrowers, sent billing statements mentioning the availability of repayment plans that may help borrowers who are having difficulty making payments. The statements indicated these plans could reduce monthly payments and are based on income, but did not identify specific repayment plans. The inconsistency and gaps we identified in how Education and its loan servicers communicate with borrowers about income-driven repayment raise questions about the sufficiency of this information. Without such information, borrowers who are unaware of these plans may miss the opportunity to reduce their risk of delinquency or default. While information on PSLF participation will not be available until borrowers can begin applying for loan forgiveness in October 2017, about 147,000 borrowers have had their employment and loans certified for PSLF as of September 2014, according to data from Education’s loan servicer for the program. 
Although borrowers may wait until 2017 before requesting certification, those who participate in the voluntary process in advance learn whether they currently meet the basic eligibility requirements and the number of qualifying loan payments they have made. The number of borrowers who had employment and loans certified for PSLF increased steadily from January 2012, when Education established the voluntary process, through September 2014 (see fig. 7). For more information on approvals and rejections of PSLF employment certification forms, see appendix III. The exact number of borrowers eligible for and planning to apply for PSLF forgiveness when it becomes available beginning in 2017 is not known. Only borrowers who complete Education’s voluntary process provide their employment information to Education, and we identified no additional data source on both federal student loans and public service employment that would allow us to identify borrowers who may be eligible for PSLF. However, according to 2012 annual employment data from the Bureau of Labor Statistics, an estimated 24.7 percent of U.S. workers nationwide (32.5 million of 131.7 million) were employed in public service, considering federal, state, and local government agencies and 501(c)(3) nonprofit organizations. If rates of public service employment are comparable among Direct Loan borrowers, about 4 million current Direct Loan borrowers may be employed in public service. Furthermore, if rates of public service employment are comparable among Direct Loan borrowers across repayment plans, about 643,000 Direct Loan borrowers repaying their loans through IBR, PAYE, and ICR as of September 2014 may be employed in public service. As previously discussed, these repayment plans are more likely to leave borrowers with an outstanding balance after 120 payments and enable them to benefit from PSLF after it becomes available in 2017. 
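The estimate in the paragraph above is straightforward proportional arithmetic, reproduced in this sketch. The 32.5 million and 131.7 million figures are the report's BLS numbers; the roughly 16 million Direct Loan borrower base is an assumption implied by the report's 4 million result, not stated in this excerpt.

```python
# BLS 2012 estimate: 32.5 million of 131.7 million U.S. workers were
# employed in public service (government or 501(c)(3) nonprofits).
public_service_rate = 32.5 / 131.7
print(round(public_service_rate * 100, 1))  # 24.7 percent, matching the report

# Applying that rate to Direct Loan borrowers (an assumed base of roughly
# 16 million borrowers) yields about 4 million in public service.
borrowers_in_public_service = 16_000_000 * public_service_rate
print(round(borrowers_in_public_service / 1_000_000, 1))  # about 3.9 million
```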
Most of the borrowers who had their employment and loans certified for PSLF were enrolled in an income-driven repayment plan, had annual adjusted gross incomes exceeding $20,000, and had borrowed more than $30,000. As of September 2014, 71 percent (104,422 of 146,866) of borrowers who had their employment and loans certified for PSLF were enrolled in IBR, PAYE, or ICR (see fig. 8). Borrowers on these income-driven plans for longer periods of time are more likely to have remaining loan balances to be forgiven after making the required 120 payments, in contrast to those on other qualifying plans, such as 10-year Standard repayment, who would be set to fully repay their loans in 10 years or less. Officials from Education's loan servicer for PSLF told us they encourage borrowers who had employment and loans certified for PSLF to enroll in repayment plans that are more likely to enable them to benefit from forgiveness. As of September 2014, nearly two-thirds of borrowers who had employment and loans certified for PSLF had annual adjusted gross incomes of more than $20,000 (see fig. 9). In addition, nearly two-thirds of borrowers were employed in federal, state, or local government (63 percent, or 93,257), and the remainder were employed in the nonprofit sector (37 percent, or 53,609). Borrowers who had employment and loans certified for PSLF had higher student loan debt than Direct Loan borrowers generally. According to the September 2014 data on borrowers who had employment and loans certified for PSLF, 80 percent of borrowers had borrowed more than $30,000, compared to 36 percent of Direct Loan borrowers overall based on Education's data (see fig. 10).
To understand the potential costs and benefits of PSLF, we created two example borrowers—Borrower A and Borrower B—and found that the program may provide substantial savings over the life of the loan for qualifying borrowers in IBR and PAYE, without the trade-off of higher loan costs faced by some borrowers in these repayment plans. In contrast, borrowers who make all qualifying payments on the 10-year Standard repayment plan would have paid their loans in full (i.e., have $0 balance) after 120 qualifying payments (see fig. 11). For each borrower, forgiveness under PSLF would reduce the amount of the loan that the federal government collects. Borrower A enrolled in IBR or PAYE has an initial annual adjusted gross income of $25,000. PSLF reduces total costs over the life of the loan to less than $20,000—substantially less than the $60,000 borrowed. Borrower B, who borrowed the same amount and has a higher initial annual adjusted gross income of $40,000, also has reduced total costs under PSLF. In particular, the borrower has lower monthly payments under IBR and PAYE compared to 10-year Standard repayment but does not have the higher total loan costs that some borrowers in IBR and PAYE face, due to PSLF loan forgiveness after 120 payments. Education has taken some steps intended to increase borrower awareness of PSLF, but it has not notified all borrowers who have entered repayment about the program. As previously noted, Education aims to compile and distribute information on the costs and benefits of higher education programs to improve financial literacy and support borrowers’ decision-making. Although Education provides general information about PSLF on its website and through social media, borrowers must actively seek information through these sources. 
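The fig. 11 comparison for Borrower A can be approximated with a simple year-by-year simulation. This sketch assumes PAYE (10 percent of discretionary income), a static $11,770 single-person poverty guideline, annual rather than monthly recalculation, and 5 percent annual income growth; these are simplifications for illustration, not the report's exact model.

```python
POVERTY_GUIDELINE = 11_770  # illustrative guideline, held constant for simplicity

def total_paid_before_pslf_forgiveness(starting_agi, growth=0.05, years=10, share=0.10):
    """Sum of PAYE payments over the 10 years (120 payments) before PSLF
    forgiveness, recalculating the payment once per year for simplicity."""
    total, agi = 0.0, starting_agi
    for _ in range(years):
        discretionary = max(0, agi - 1.5 * POVERTY_GUIDELINE)
        total += share * discretionary  # one year = 12 monthly payments
        agi *= 1 + growth
    return total

# Borrower A: $60,000 borrowed, $25,000 starting AGI. Any balance remaining
# after 120 qualifying payments is forgiven under PSLF, so total cost is
# just the payments made.
paid = total_paid_before_pslf_forgiveness(25_000)
print(round(paid))  # roughly $14,000, consistent with the report's "less than $20,000"
```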
Education also provides information about PSLF in the borrower rights and responsibilities statement that is provided when borrowers receive their loans, and through entrance and exit counseling that borrowers complete when they begin and end school. In addition, Education has included information about the program in targeted emails sent to borrowers in their grace period who had more than $25,000 in debt. However, Education has not examined borrower awareness of PSLF to determine how well these efforts are working. For example, although Education conducts regular surveys of borrowers to measure customer satisfaction, it has not included an assessment of borrower awareness of PSLF in these surveys. Beyond its current efforts, Education officials told us they are considering a PSLF email campaign targeted to borrowers on income-driven plans and an effort to publicize the program to public service employers. Education officials told us they have not provided information about PSLF to those employed in public service because they do not have a way to identify and target such Direct Loan borrowers. Apart from its targeted efforts, Education does not directly provide information about PSLF to all borrowers once they have entered repayment, which would eliminate the need to identify and target those employed in public service. Because borrowers are to apply for PSLF at least 10 years after they enter repayment and after they receive exit counseling, information provided during repayment could help them make decisions about forgiveness. While Education primarily relies on loan servicers to communicate with borrowers who have entered repayment, it has established few requirements about what information the servicers should provide on PSLF and when. 
Although Education requires servicers to provide documents on PSLF when borrowers request them—including the employment certification form and related information—they are not required to notify other borrowers about the program. While loan servicers make information about PSLF available through their websites and customer service representatives, borrowers may not seek this information if they are not aware the program is available. Education officials said they rely on a performance-based system to provide incentives to servicers to manage their loan portfolios and keep borrowers in repayment, rather than setting specific requirements about how servicers communicate with borrowers. However, our review of sample written communications from 2014 for the three selected loan servicers, which serve about half of borrowers with loans owned by Education, showed limitations in the PSLF information provided to all borrowers who have entered repayment. For example, two servicers mentioned PSLF in their billing statements but did not describe the program terms or benefits; instead, they directed borrowers to websites with more information. Another servicer representing about 23 percent of borrowers (more than 5 million) did not provide any information on PSLF to borrowers unless they requested it. As a result, many of these borrowers may be uninformed about PSLF. After discussing our preliminary findings with Education in June 2015, Education officials reported that they are developing plans to require servicers to include information about PSLF in their initial communications to borrowers, such as the welcome letter sent when borrowers are assigned or transferred to a servicer, by the end of September. However, Education had not yet specified the information servicers must provide. 
In addition, servicers would not be required to provide PSLF information in ongoing communications with borrowers beyond the initial notification or to the millions of borrowers already in repayment. Assessing its efforts to increase borrower awareness of PSLF could better position Education to identify gaps in borrower awareness of the program and strengthen its outreach as needed. Such efforts would support Education’s goal to provide superior information and service to borrowers. Borrowers who have entered repayment and have not been notified about the program may be making decisions without complete information and might miss the opportunity to benefit from the program when it becomes available in 2017. For example, borrowers might fail to account for the value of PSLF forgiveness in weighing decisions about whether to enter public service. Other borrowers who are employed in public service and meet all program requirements may forfeit potentially large amounts of loan forgiveness if they are unaware of the program or do not learn about it in time to make changes that ensure their payments count toward forgiveness. Borrowers need sufficient and timely information to ensure they are aware of their eligibility for and can make informed decisions about available repayment options. Although Education has used a variety of approaches to raise awareness about IBR and PAYE and participation in the plans has increased, the gap between participation and eligibility and Education’s own assessment of borrower feedback suggests that borrowers are not receiving sufficient information about income-driven repayment plans. Thus, providing consistent information to all borrowers who have entered repayment would support Education’s goal to provide superior information and service to borrowers. Moreover, the lower default rates among borrowers in IBR and PAYE suggest that these plans may be an important tool for preventing default on federal student loans. 
Borrowers also need sufficient and timely information about Public Service Loan Forgiveness. However, Education has little assurance that borrowers know about the program, given that it has not assessed its efforts to raise awareness and relatively few borrowers have had their employment and loans certified for PSLF. As a result, borrowers employed in public service for at least 10 years may miss opportunities to benefit from the program when it becomes available in 2017, potentially forgoing thousands of dollars in loan forgiveness. To help ensure that Income-Based Repayment, Pay As You Earn, and Public Service Loan Forgiveness serve their intended beneficiaries to the greatest extent possible, we recommend that the Secretary of Education (1) take steps to consistently and regularly notify all borrowers who have entered repayment of income-driven repayment plan options, including Income-Based Repayment and Pay As You Earn, and (2) take steps to examine borrower awareness of Public Service Loan Forgiveness and increase outreach about the program as needed. We shared a draft of this report with the Department of Education for review and comment. In written comments, Education generally agreed with our recommendations, stating that it is committed to ensuring that federal student loan borrowers have the information they need to manage their debt, including details regarding income-driven repayment plans and loan forgiveness programs. However, Education stated that it is not clear that providing information on repayment options to all borrowers is the most efficient or effective way to achieve this goal. Education indicated that the steps it is taking to raise awareness about income-driven repayment would include streamlined processes for learning about, applying for, and recertifying eligibility for income-driven repayment plans with enhanced communications targeted to borrowers most likely to benefit from these plans. 
While these are positive steps, because Education does not have income and family size information needed to determine which borrowers could benefit from income-driven repayment, we maintain that it is important for Education to notify all borrowers of these options. In response to our recommendation regarding Public Service Loan Forgiveness, Education agreed to examine borrower awareness and use the results to inform its outreach efforts. Beyond our recommendations, Education expressed concern that the draft report overstated the extent to which borrowers lack awareness of income-driven repayment plans. We made revisions to acknowledge the increase in borrower participation in these repayment plans and to clarify Education's ongoing concerns regarding borrower awareness of these plans. Education also highlighted several of its efforts to increase awareness of repayment options and support borrowers. We acknowledged these efforts in our report and incorporated additional details about them based on Education's comments. Education also provided technical comments, which we incorporated as appropriate. Education's comments are reproduced in appendix IV. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and to the Departments of Education and the Treasury, the Bureau of Labor Statistics, and the Consumer Financial Protection Bureau. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (617) 788-0534 or emreyarrasm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V. 
This appendix discusses in detail our methodology for addressing two research questions for the Direct Loan program: (1) How does borrower participation in Income-Based Repayment (IBR) and Pay As You Earn (PAYE) compare to available estimates of eligibility, and to what extent has the Department of Education (Education) taken steps to increase borrower awareness of these plans? and (2) What is known about Public Service Loan Forgiveness (PSLF) certification and eligibility, and to what extent has Education taken steps to increase awareness of this program? To address these questions, we used data from Education, the Department of the Treasury (Treasury), the loan servicer that administers PSLF for Education, and the Department of Labor’s Bureau of Labor Statistics. We reviewed relevant federal laws, regulations, and documentation from Education. We also conducted interviews with officials from Education and three of its loan servicers, Treasury, and the Bureau of Labor Statistics; representatives of higher education associations; borrower advocacy groups; researchers; and a nongeneralizable sample of Direct Loan borrowers. We conducted this performance audit from November 2013 to August 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To examine participation and key characteristics of borrowers in IBR, PAYE, and other repayment plans, we reviewed summary data from Education’s National Student Loan Data System (NSLDS) on 19.3 million Direct Loan borrowers (excluding parent PLUS) who entered repayment and had an outstanding loan balance as of September 2014. 
We chose these parameters in order to get as close as possible to the eligibility criteria for IBR and PAYE. To determine participation in IBR and PAYE, we focused on data for 11.2 million borrowers in active repayment (not in deferment, forbearance, or default). For borrowers with multiple loans, the repayment plan was based on the most recent loan that either entered repayment or was loaded into NSLDS. While borrowers with multiple loans are able to participate in different repayment plans, we found that to be the case for only 1 percent of borrowers in our analysis. In addition, we reviewed available estimates of IBR eligibility from a Treasury analysis of tax return data and Education’s student loan data for a random sample of borrowers. These estimates, which are based on September 2012 NSLDS data for borrowers who entered repayment in 2010 or earlier and Internal Revenue Service tax return data from 2010 and 2011, depending on the most recent available for each borrower, are the most recent and only available estimates of IBR eligibility we identified. We were not able to estimate eligibility using data from Education because only borrowers who apply for income-driven repayment plans are required to provide information on their income and family size. We also analyzed data from the Pennsylvania Higher Education Assistance Agency, the loan servicer that administers PSLF for Education, on 146,866 borrowers who voluntarily requested and had their employment and loans certified for PSLF, as of September 22, 2014. Specifically, we analyzed the number of certifications over time, repayment plan participation, and available borrower characteristics (i.e., sector of employment, amount of student loan debt, and adjusted gross income). To approximate the percentage of Direct Loan borrowers who may be eligible for PSLF, we used 2012 Bureau of Labor Statistics data—the most recent available. 
We calculated the percentage of workers nationwide who were employed by federal, state, and local government agencies and 501(c)(3) nonprofit organizations, and applied it to our summary NSLDS data on 16.3 million Direct Loan borrowers (excluding parent PLUS) who were in repayment, deferment, or forbearance as of September 2014. We also applied this percentage to the sub-population of these Direct Loan borrowers who were participating in IBR, PAYE, or ICR, the repayment plans more likely to enable borrowers to benefit from PSLF. To examine how IBR, PAYE, and PSLF may affect total loan costs for borrowers with various characteristics, we used summary data from Education’s NSLDS, specifications from November 2014 for a calculator on Education’s website that allows borrowers to estimate loan payments, and program requirements based on federal laws and regulations. We developed repayment scenarios by assigning selected levels of adjusted gross income and loan debt to a set of hypothetical borrowers to simulate their total payments under IBR, PAYE, and 10-year Standard plans, and under PSLF. In each of our scenarios, we assumed: each borrower is single with no dependents; borrowers’ initial annual adjusted gross incomes will increase 5 percent annually, consistent with the assumption Education uses for its loan calculator; the poverty threshold will increase at an average annual rate of 2.3 percent, which is based on the Congressional Budget Office’s inflation rate projections from 2014 through 2024, and is consistent with the assumption Education uses for its loan calculator; all loans have an interest rate of 6.8 percent, the rate for certain borrowers with federal student loans disbursed from July 1, 2006 through June 30, 2013; all Direct Loans are subsidized. 
While borrowers may have a combination of subsidized and unsubsidized loans, making this assumption allowed us to show the effect of Education paying the first 3 years of interest if an IBR or PAYE borrower's payments do not fully cover the interest owed on a subsidized loan. This assumption means we might understate total loan costs for IBR and PAYE borrowers with unsubsidized loans whose payments do not fully cover interest. Education's loan calculator assumes that all Direct Loans are unsubsidized and therefore does not account for this potential interest benefit for IBR and PAYE borrowers with subsidized loans. Education officials told us this assumption would have a slight effect on total loan costs, and that the department plans to revise the calculator by December 2015 to account for this potential interest benefit. Changing the assumptions explained above would change the monthly and total loan costs for borrowers in our scenarios. To the extent possible, we validated our results against Education's loan calculator and worked with Education officials to resolve discrepancies. These scenarios are intended for illustrative purposes only; they do not incorporate experiences that could affect individual borrowers' eligibility for income-driven repayment or their payment amounts. For example, individual borrowers could experience periodic unemployment or job promotions, and get married or form families. These and other experiences could change income levels or household size, which help determine the applicable poverty threshold for monthly income-based payments. We determined that data from each of these sources were sufficiently reliable for the purposes of this report by reviewing existing information about the data and the systems that produced them, and by interviewing knowledgeable agency officials. To understand program terms and eligibility requirements, we reviewed relevant federal laws, regulations, and documentation from Education. 
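The scenario methodology described above amounts to a month-by-month amortization under the stated assumptions (5 percent annual income growth, 2.3 percent annual poverty-threshold growth, 6.8 percent interest). The sketch below is illustrative only, not GAO's or Education's actual model; the 15-percent-of-discretionary-income formula, the definition of discretionary income as adjusted gross income above 150 percent of the poverty threshold, the cap at the 10-year standard payment, and the 25-year (300-month) forgiveness horizon reflect IBR's general terms at the time, and all dollar figures are hypothetical.

```python
# Illustrative month-by-month simulation of a simplified IBR-style repayment
# scenario, following the assumptions stated in the text. Not GAO's model.

def standard_payment(balance, annual_rate, years=10):
    """Fixed monthly payment that amortizes the balance over `years`."""
    r = annual_rate / 12
    n = years * 12
    return balance * r / (1 - (1 + r) ** -n)

def simulate_ibr(balance, agi, poverty, share=0.15, rate=0.068,
                 forgiveness_months=300):
    """Return (total paid, amount forgiven) under a simplified IBR-style plan."""
    cap = standard_payment(balance, rate)   # payments never exceed 10-yr standard
    total_paid = 0.0
    for month in range(forgiveness_months):
        if month % 12 == 0 and month > 0:
            agi *= 1.05                     # 5% annual income growth
            poverty *= 1.023                # 2.3% poverty-threshold growth
        discretionary = max(0.0, agi - 1.5 * poverty)
        payment = min(share * discretionary / 12, cap)
        balance = balance * (1 + rate / 12) - payment
        total_paid += payment
        if balance <= 0:
            return total_paid + balance, 0.0   # paid off early; credit overshoot
    return total_paid, max(balance, 0.0)       # remaining balance forgiven
```

Replacing share=0.15 and forgiveness_months=300 with 0.10 and 240 would approximate PAYE's terms; under PSLF, any remaining balance would instead be forgiven after 120 qualifying payments. The sketch also shows why payments that do not cover accruing interest can leave a growing balance, and therefore a large forgiven amount, at the end of the horizon.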
To determine the extent to which Education has taken steps to raise awareness of IBR, PAYE, and PSLF, we reviewed program information Education makes available to borrowers on its website, including fact sheets; a loan repayment calculator; and entrance, exit, and financial awareness counseling tools. We also reviewed information about Education's targeted efforts to raise awareness of IBR and PAYE, including documentation of borrower email campaigns and partnerships, press releases, and memoranda from the President. We compared information on Education's efforts to criteria outlined in contract requirements applicable to Education's 11 Direct Loan servicers related to communication with borrowers and the goals and objectives in the Office of Federal Student Aid's Fiscal Year 2012-2016 Strategic Plan. To examine IBR and PAYE participation and eligibility, PSLF certification and eligibility, and determine the extent to which Education has taken steps to raise awareness of the programs, we interviewed officials from Education, Treasury, and the Bureau of Labor Statistics. In addition, we interviewed representatives of higher education associations, borrower advocacy groups, and researchers about student loan repayment and forgiveness, including factors that may affect borrowers' decisions about repayment. We also interviewed representatives of, and reviewed documentation for, 3 of Education's 11 loan servicers, which serviced about half of all recipients of loans owned by Education. In addition, during April and May 2015, we interviewed a nongeneralizable sample of 14 randomly selected borrowers about their awareness of income-driven repayment plans. Using a random sample of Direct Loans from a 4-percent random sample of loans from the NSLDS, we identified 4,000 borrowers who, as of January 2014, were in active repayment, deferment, or forbearance. Education sent emails to these borrowers inviting them to email us to participate in interviews. 
[Table: borrower participation by repayment plan — Income-Based Repayment (percent), Pay As You Earn (percent), Standard (percent)]

In addition to the contact named above, Debra Prescott (Assistant Director), Marissa Jones (Analyst-in-Charge), George Bustamante, and Jeff Miller made key contributions to this report. Additional assistance was provided by Susan Aschoff, Rachel Beers, Deborah Bland, Ben Bolitzer, Jessica Botsford, Holly Dye, Hedieh Fusfield, Nisha Hazra, Jenn McDonald, Jean McSween, Brittni Milam, John Mingus, Mimi Nguyen, Rhiannon Patterson, Ellen Phelps Ranen, and Cody Raysinger.

As of September 2014, outstanding federal student loan debt exceeded $1 trillion, and about 14 percent of borrowers had defaulted on their loans within 3 years of entering repayment, according to Education data. GAO was asked to review options intended to help borrowers repay their loans. For Direct Loan borrowers GAO examined: (1) how participation in Income-Based Repayment and Pay As You Earn compares to eligibility, and to what extent Education has taken steps to increase awareness of these plans, and (2) what is known about Public Service Loan Forgiveness certification and eligibility, and to what extent Education has taken steps to increase awareness of this program. GAO reviewed relevant federal laws, regulations, and guidance; September 2014 data from Education and its loan servicer for Public Service Loan Forgiveness; Treasury's eligibility estimates; and 2012 employment data (most recent available) from the Bureau of Labor Statistics. GAO also interviewed officials from three loan servicers that service about half of Education's loan recipients. Many eligible borrowers do not participate in the Department of Education's (Education) Income-Based Repayment and Pay As You Earn repayment plans for Direct Loans, and Education has not provided information about the plans to all borrowers in repayment. 
These plans provide eligible borrowers with lower payments based on income and set timelines for forgiveness of any remaining loan balances. While the Department of the Treasury estimated that 51 percent of Direct Loan borrowers were eligible for Income-Based Repayment as of September 2012, the most recent available estimate, Education data show 13 percent were participating as of September 2014. An additional 2 percent were in Pay As You Earn. Moreover, Education has reported ongoing concerns regarding borrowers' awareness of these plans. Although Education has a strategic goal to provide superior information and service to borrowers, the agency has not consistently notified borrowers who have entered repayment about the plans. As a result, borrowers who could benefit from the plans may miss the chance to lower their payments and reduce the risk of defaulting on their loans. Few borrowers who may be employed in public service have had their employment and loans certified for the Public Service Loan Forgiveness program, and Education has not assessed its efforts to increase borrower awareness. Beginning in 2017, the program is to forgive remaining Direct Loan balances of eligible borrowers employed in public service for at least 10 years. As of September 2014, Education's loan servicer for the program had certified employment and loans for fewer than 150,000 borrowers; however, borrowers may wait until 2017 to request certification. While the number of borrowers eligible for the program is unknown, if borrowers are employed in public service at a rate comparable to the U.S. workforce, about 4 million may be employed in public service. It is unclear whether borrowers who may be eligible for the program are aware of it. 
Although Education has a strategic goal to provide superior information and service to borrowers and provides information about Public Service Loan Forgiveness through its website and other means, it has not notified all borrowers in repayment about the program. In addition, Education has not examined borrower awareness of the program to determine how well its efforts are working. Borrowers who have not been notified about Public Service Loan Forgiveness may not benefit from the program when it becomes available in 2017, potentially forgoing thousands of dollars in loan forgiveness. GAO recommends Education consistently notify borrowers in repayment about income-driven repayment, and examine borrower awareness of Public Service Loan Forgiveness. Education generally agreed with GAO's recommendations, but it believed the report overstated the extent to which borrowers lack awareness of income-driven repayment. GAO modified the report to clarify this issue. |
O&M is large, diverse, and widespread. Since 1987, the O&M accounts have been the largest appropriation group in DOD’s budget and are expected to remain the largest through fiscal year 2001. O&M is one of six appropriation groups for DOD. When compared with the federal budget, DOD’s fiscal year 1997 O&M budget request represents approximately 18 percent of total federal discretionary spending and is larger than most federal agencies’ fiscal year 1997 budget requests. O&M funds support portions of DOD’s readiness and quality-of-life priorities. This appropriation funds a diverse range of programs and activities that include salaries and benefits for most civilian DOD employees; depot maintenance activities; fuel purchases; flying hours; environmental restoration; base operations; consumable supplies; and health care for active duty service personnel, dependents of active duty personnel, and retirees and their dependents. Moreover, each service and DOD agency spends O&M funds. Under DOD’s measurement of infrastructure, O&M funds approximately half of DOD’s infrastructure costs that can be clearly identified in DOD’s Future Years Defense Program (FYDP). Because DOD wants to decrease infrastructure costs to help pay for modern weapon systems, it must look at this appropriation group for some of the intended savings. Infrastructure comprises activities that provide support services to mission programs and primarily operate from fixed locations. O&M funding is affected by civilian and military personnel levels. DOD’s fiscal year 1997 budget includes funds for about 800,000 civilians and 1.5 million active duty and full-time National Guard and Reserve military personnel. Civilian personnel levels have a direct effect because the majority of civilian salaries and benefits are funded by O&M. 
Although O&M does not fund military pay and allowances, the appropriation group supports many readiness activities and quality-of-life programs that are affected by the number of military personnel. We examined trends in annual O&M funds and personnel levels and identified the activities funded by O&M appropriations using DOD’s FYDP. The FYDP is an authoritative record of current and projected force structure, costs, and personnel levels that have been approved by the Secretary of Defense. The FYDP displays resources and personnel levels by programs and activities known as program elements. There are about 3,800 program elements in the FYDPs between fiscal years 1985 and 2001. We analyzed FYDP data from several different perspectives: aggregate O&M, federal budget account structure, DOD organization, DOD’s Infrastructure Category and Defense Mission Category (DMC) analytical frameworks, and DOD’s major defense program structure. Each perspective produces a different, but equally valid, overview. Total O&M funding for DOD is projected to decline at a slower rate than either civilian or military personnel levels between fiscal years 1985 and 2001. Figure 1 shows that, between fiscal years 1985 and 2001, annual O&M funds are projected to decrease by over 20 percent (from $110.4 billion to $87.8 billion), and both civilian and military personnel levels are also projected to decline, but at different rates. Between fiscal years 1985 and 1996, the level of annual O&M funding declined by 13 percent, from $110.4 billion to $96.0 billion. However, this decline is projected to end during the 1997 FYDP period (fiscal years 1997-2001), and annual O&M funds are projected to increase slightly in fiscal years 2000 and 2001. Civilian personnel levels have fallen steadily since fiscal year 1989 and are projected to continue to decline through fiscal year 2001. 
This is important because, according to DOD, over 40 percent of annual O&M appropriations fund civilian salaries and benefits. O&M is projected to increase at the same time that the number of civilians is projected to decline. This indicates that other O&M-funded programs are projected to increase to a greater extent than O&M-funded civilian salaries are projected to decrease. The number of civilian personnel in DOD has fallen by about 27 percent between fiscal years 1985 and 1996, from 1.1 million persons to 830,000. By fiscal year 2001, DOD plans to have 729,000 civilians employed, an additional 12-percent decline. Although military personnel levels are projected to fall over the 17-year period covered by this report, most of the decline occurred prior to fiscal year 1996. Military personnel levels fell by over 30 percent between the peak of 2.2 million persons in fiscal year 1987 to 1.5 million in fiscal year 1996. After fiscal year 1996, military personnel levels are expected to decline by only 4 percent. Although military personnel salaries are not paid by O&M funds, O&M funds a variety of activities and programs that support military personnel and most readiness-related resources. Because personnel levels decline at a faster rate than annual O&M funding levels, annual O&M funds when allocated per person (military and civilian) are projected to increase by about 20 percent over the fiscal year 1985-2001 period, as shown in figure 2. O&M funding per person increased from $33,100 to $40,400 between fiscal years 1985 and 1996, a 21.9-percent increase. Although O&M funding per person is projected to decline in fiscal years 1997 and 1998, it is expected to increase by 4.2 percent after fiscal year 1998 to $39,700 per person in fiscal year 2001. A small portion of the increase in O&M funding per person may be a result of DOD’s transferring functions previously performed in house to outside providers. 
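The per-person figures cited above are simply annual O&M funding divided by combined military and civilian personnel levels. A quick check using the rounded headcounts mentioned in the text (approximations, so the results differ slightly from the report's $33,100 and $40,400 figures, which are based on exact FYDP counts):

```python
# Per-person O&M funding: annual O&M dollars divided by combined military
# and civilian personnel. Headcounts below are rough sums of the figures
# cited in the text, not exact FYDP counts.

def omn_per_person(funds_billions, personnel_millions):
    return funds_billions * 1e9 / (personnel_millions * 1e6)

fy1985 = omn_per_person(110.4, 2.2 + 1.1)    # ~2.2M military + ~1.1M civilian
fy1996 = omn_per_person(96.0, 1.5 + 0.83)    # ~1.5M military + ~0.83M civilian
pct_change = (fy1996 / fy1985 - 1) * 100     # roughly a 20-percent increase
```

With these rounded headcounts the calculation gives about $33,500 per person in fiscal year 1985 and about $41,200 in fiscal year 1996, consistent with the roughly 20-percent increase the report describes: funding fell more slowly than personnel.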
Our analysis of DOD's budget documents shows that the purchase of goods and services through contracts or from other federal agencies uses over half of DOD's annual O&M funds. The amount of contracting paid by O&M funds is projected to increase slightly between fiscal years 1988 and 1997. The following are some questions raised by the trend information presented in this section: Why are O&M funds not projected to decline during the period covered by the 1997 FYDP when civilian personnel levels decrease and military personnel levels stabilize? How will outsourcing affect O&M costs in the out-years, that is, after fiscal year 1997? O&M budget accounts are organized in two ways, by service and by program. The 11 service-oriented accounts include funding for multiple programs and activities for service-specific, Defense-wide, and the services' National Guard and Reserve programs. The number of service budget accounts remained stable at 11 from fiscal years 1985 to 2001. In contrast, the number of program accounts grew from 4 in fiscal years 1985 through 1989 to a peak of 11 in fiscal years 1993, 1994, and 1997. Projections show that between fiscal years 1998 and 2001, DOD will have 10 program accounts. Congress created the largest program account, the Defense Health Program, in fiscal year 1993. DOD moved all defense health care resources from the service and Defense-wide accounts to this account. As a share of total annual O&M funds, the Defense Health Program budget account is projected to grow from 10.6 percent in fiscal year 1993 to 11.8 percent in fiscal year 2001. Most program accounts were created to increase visibility for certain efforts or respond to unique needs. For example, the Former Soviet Union Threat Reduction budget account was created in fiscal year 1994 to help several newly independent states destroy weapons of mass destruction; store and transport the weapons to be destroyed; and reduce the risk of proliferation. 
Funds for this account peaked at $439 million in fiscal year 1995 and are projected to decline to $395 million in fiscal year 2001. Excluding the Defense Health Program, the program accounts represent a small share of O&M funds, ranging from less than one-tenth of 1 percent in fiscal year 1989 to a peak of almost 3 percent in fiscal year 1999. The O&M budget accounts vary in size. From fiscal years 1985 to 1992, approximately 90 percent of O&M funds were concentrated in four budget accounts: Navy, Army, Air Force, and Defense-wide. This concentration (approximately 85 percent) continues through fiscal year 2001 with the addition of one account, the Defense Health Program. Table 1 shows the concentration of resources by budget account for fiscal year 1996. DOD has considerable discretion in budgeting for and carrying out O&M activities. Unlike the military personnel appropriation accounts, which are primarily composed of entitlements, most O&M spending is not set by law. However, each O&M program account receives its own annual appropriation. As a practical matter, this means that funding levels for these specific programs are set by law. For example, in fiscal year 1996, Congress appropriated $50 million for the program account Overseas Humanitarian, Disaster, and Civic Aid. The fact that DOD has discretion over most O&M funds does not mean that O&M funds are available without any controls. O&M funds can only be obligated for authorized programs and purposes and are available for one fiscal year unless a longer period of availability is specified. Further, in annual authorization and appropriation acts, Congress can direct DOD to carry out particular activities or programs and can limit or prohibit spending for other activities. Finally, although reprogramming of funds within an appropriation is permitted, DOD has committed itself to seek congressional approval before reprogramming $10 million or more in an O&M account. 
The following is a question raised by the trend information presented in this section: Should other budget accounts be created to increase visibility for O&M-funded programs? Prior to fiscal year 1992, the three services received about 90 percent of O&M funds, and DOD agencies received approximately 10 percent. During this period, the Navy/Marine Corps' share of O&M funds declined the most, by almost 6 percentage points, while the Air Force's annual share decreased by less than 2 percentage points. The significant decrease in the Navy/Marine Corps' portion of annual O&M funds occurred even though the Navy/Marine Corps' civilian personnel levels declined 5 percentage points less than the Air Force's and the Navy/Marine Corps' military personnel levels grew slightly over this period. Only the Army experienced an increase in its share of annual O&M funds. Between fiscal years 1985 and 1990, the Army's portion of funding increased by 4 percentage points, while Army military personnel levels fell by almost 3 percent and Army civilian personnel levels fell by about 9 percent. The Army received an additional 5-percentage-point increase in its share of annual O&M funds between fiscal years 1990 and 1991, but this surge in fiscal year 1991 funding was due to an infusion of O&M money for the Army for the Persian Gulf War. After fiscal year 1991, DOD centralized funding for health programs into a Defense-wide O&M appropriation by shifting the funds for the program from the services' O&M appropriations. This change caused a significant increase in the total annual O&M funds provided to the combined DOD agencies. In fiscal year 1992, Defense-wide O&M was almost 20 percent of total DOD O&M funding. In fiscal year 1996, O&M funding became almost evenly divided among the three services and the combined DOD agencies. Defense-wide appropriations remain at about one-quarter of total annual O&M appropriations through fiscal year 2001. 
Although the proportion of O&M funds received by each of the three services declined after fiscal year 1991, the Army's share declined the most. Beginning in fiscal year 1998, the Army will annually receive the smallest portion of O&M funds. By fiscal year 2001, the Army is expected to receive less than 23 percent of total annual O&M funds, while the Navy/Marine Corps and the Air Force will each get approximately 26 percent of total O&M funds. Figure 3 shows the changes in O&M funding distribution in fiscal years 1985, 1992, 1996, and 2001. The Navy/Marine Corps' annual portion of O&M funds continued to decline after fiscal year 1991 and by fiscal year 1996 fell to 26.5 percent, almost 10 percentage points lower than its portion of funding in fiscal year 1985. Navy/Marine Corps military personnel levels fell almost 10 percentage points less than those of the other two services, while Navy/Marine Corps civilian personnel levels fell by almost 30 percent, similar to the Army's. After fiscal year 1996, the portion of O&M funding provided to the Navy/Marine Corps is projected to remain between 25.9 and 26.5 percent, while military personnel levels fall by 5 percent and civilian personnel levels decrease by 15 percent. The Air Force's proportion of annual O&M funds changed the least of the three services. Although the Air Force's share of O&M funds fell by about 2 percentage points between fiscal years 1991 and 1992, its annual portion of O&M funds is planned to remain between 24.6 and 27.6 percent for the fiscal year 1993 through 2001 period. During this period, Air Force civilian and military personnel levels are projected to decline by 19 and 17 percent, respectively. Of the three services, the Air Force has the highest O&M cost per military and civilian person. 
As shown in table 2, even though the Air Force had fewer active military, full-time Guard, Reserve, and civilian personnel than either the Army or the Navy/Marine Corps, the Air Force's O&M cost per person in fiscal year 1996 was more than $46,000, compared with about $31,000 per person for the Navy/Marine Corps and the Army. The Army had approximately 152,000 more military and about 76,000 more civilian personnel than the Air Force but received $100 million less in O&M funds in fiscal year 1996. The following are some questions raised by the trend information presented in this section: What factors contribute to the major shifts in funds among the services and combined DOD agencies (even after taking into account the DOD health care funding migrations)? Specifically, why is the Army's share of annual O&M funds declining? What causes the Air Force to have the highest per-person O&M costs among the three services? Using the FYDP, DOD has identified program elements that fund infrastructure activities. DOD refers to these program elements as "direct infrastructure." O&M funds about 50 percent of direct infrastructure during the fiscal year 1985-2001 period. DOD assigned each infrastructure program element to one of the following eight categories on the basis of the program's activities: acquisition infrastructure; installation support; central command, control, and communications; force management; central logistics; central medical; central personnel; and central training. These categories are described in appendix I. There are parts of infrastructure that DOD cannot identify using the FYDP. According to DOD officials, this amounts to about 20 to 25 percent of DOD's total infrastructure funding and mostly represents logistics purchases that cannot be identified specifically. Funding for such logistics purchases would likely come from O&M appropriations. Therefore, the proportion of total DOD infrastructure funded by O&M is clearly greater than 50 percent. 
During fiscal years 1985 through 2001, direct infrastructure O&M funds decline by 22.6 percent, similar to total O&M trends. As shown in figure 4, O&M funding of direct infrastructure programs decreases after fiscal year 1991. This decline occurred primarily in the central logistics infrastructure category and coincided with the creation of the Defense Business Operations Fund. Moreover, the central logistics category received 35 percent less O&M funding in fiscal year 1992 than in fiscal year 1991, in part due to the conclusion of the Persian Gulf War. Despite these reductions, this category accounted for about 27 percent of the total value of direct infrastructure in fiscal year 1992. When O&M funding for the central logistics infrastructure category is excluded, as shown in figure 5, O&M funding of direct infrastructure actually increased between fiscal years 1985 and 1996. Large increases occurred in four infrastructure categories: central medical; central command, control, and communications; central personnel; and acquisition infrastructure. The increase in central medical O&M funding had the largest impact because in fiscal year 1985 central medical accounted for 15 percent of total direct infrastructure (without central logistics) and, by fiscal year 1996, central medical’s portion had grown to over 20 percent. DOD projects a slight decrease, about 3 percent, in O&M-funded direct infrastructure (with and without central logistics) between fiscal years 1997 and 2001. Most of this decline is projected to occur in the installation support, force management, acquisition infrastructure, and central personnel infrastructure categories. The following are some questions raised by the trend information presented in this section: What causes the projected out-year increases in O&M-funded direct infrastructure (fiscal years 2000 and 2001)? Where will DOD get savings in infrastructure to pay for modernization? How will DOD’s modernization plans affect future O&M levels?
Another way to analyze the changes and components of O&M funding is to aggregate FYDP data by DOD’s major defense programs. For its own force programming and budgeting purposes, DOD organizes the defense budget into program elements that consist of collections of weapons, manpower, and support equipment. Program elements are grouped into 11 major defense programs. Each major defense program reflects a force mission or support mission of DOD and contains the resources needed to achieve an objective or plan. Three major defense programs—general purpose forces; central supply and maintenance; and training, medical, and other general purpose activities—receive the majority of annual O&M funding. In fiscal year 1996, these three programs were allocated 65 percent of DOD’s O&M funds. Figure 6 shows that of the three programs, only the training, medical, and other general purpose activities program’s annual funding has continued to increase over the fiscal year 1985-2001 period. That program’s annual O&M appropriation increased by almost $4 billion between fiscal years 1985 and 1996, to about $19 billion. DOD plans to maintain this level of O&M funding for this program through fiscal year 2001. O&M funding per person for training, medical, and other general purpose activities has almost doubled over the fiscal year 1985-2001 period; most of this growth occurred prior to fiscal year 1996. O&M funding for the general purpose forces program is projected to fall by 28 percent between fiscal years 1985 and 2001, corresponding to the decline in DOD’s overall force level. The O&M funding per person assigned to this program is expected to generally remain between $25,000 and $30,000 over the entire fiscal year 1985 to 2001 period. Central supply and maintenance O&M funding declined significantly (by 34 percent) between fiscal years 1991 and 1992, when the Defense Business Operations Fund was created.
Many of this program’s supply, maintenance, and service activities were no longer directly funded; instead, the funds to pay for the goods and services these activities provided were allocated to their customers (e.g., the strategic and general purpose forces programs). The decline in program funding continued through fiscal year 1997, albeit at a slower rate, and funding is projected to remain fairly stable at about $12 billion annually through fiscal year 2001. Of the remaining eight major defense programs, the next two largest—(1) command, control, communications, intelligence, and space and (2) Guard and Reserve forces—are projected to receive approximately $10 billion and $8 billion, respectively, in annual O&M funds over the fiscal year 1985-2001 period. (See fig. 7.) Even with the downsizing of the force, the annual level of O&M funding for both of these programs has remained fairly constant over the entire 17-year period covered by this report. For the Guard and Reserve program, full-time personnel levels increased by almost 11,000 people over the fiscal year 1985-1996 period, yet part-time Guard and Reserve personnel levels declined by over 170,000 persons over the same period. Both full-time and part-time personnel numbers are projected to decline through 2001. Even though the command, control, communications, intelligence, and space program’s annual O&M funding level has not changed significantly throughout the fiscal year 1985-2001 period, the level of annual O&M funding per person associated with this program has increased by 30 percent over these 17 years. Most of this increase in O&M funding per person occurred prior to fiscal year 1995. The following are some questions raised by the trend information presented in this section: What factors cause the training, medical, and other general purpose activities program funding to increase steadily while military and civilian personnel levels decrease?
Why is central supply and maintenance O&M funding not declining in the out-years (after fiscal year 1997) if DOD is improving the efficiency of these activities by using privatization and outsourcing? Why has the command, control, communications, intelligence, and space program’s O&M funding not declined over time as DOD has downsized? Why has the Guard and Reserve forces program’s O&M funding not declined as the overall force level has declined? Why did the level of full-time Guard and Reserve personnel increase when part-time personnel declined by 170,000 prior to fiscal year 1996? Partitioning total O&M funds using DOD’s defense mission category (DMC) analytical framework shows that funding is concentrated among a few categories. From fiscal years 1985 to 2001, five mission categories received and are projected to receive about 50 percent of O&M funding. Between fiscal years 1993 and 2001, the five largest categories are land forces, medical, naval forces, tactical air forces, and other logistics support. In total, there are about 30 mission categories during the fiscal year 1993-2001 period. Figure 8 compares funding in different fiscal years for these five defense mission categories. Among the five largest categories in fiscal year 1996, medical is the only category that experiences real growth—from $5.9 billion in fiscal year 1985 to $10.2 billion in fiscal year 2001, a 72.8-percent increase. Most of the growth in medical occurs prior to fiscal year 1997, and the majority of these costs are for health care needs. In contrast, the naval forces category experiences the largest decline in real terms—from $15.2 billion in fiscal year 1985 to $8.4 billion in 2001, a 44.6-percent decrease. Figure 9 shows the distribution of fiscal year 1996 O&M funding by defense mission category. Eight categories make up 71 percent of O&M funding ($68.4 billion), and each category is greater than $5.1 billion.
Remaining resources, $27.6 billion or 28.8 percent, reside in 22 categories, with funding ranging from slightly more than $5 billion (intelligence) to $4.7 million (federal agency support). FYDP projections show that resources remain concentrated in the same eight categories for fiscal years 1997 through 2001. Our analysis of the eight categories with the highest dollar values in fiscal year 1996 shows that from fiscal years 1985 to 2001, three categories (medical, mobility forces, and departmental) are projected to grow and five categories (naval forces, other logistics support, training, land forces, and tactical air forces) are projected to decline. However, as shown in table 3, these overall trends are not consistent over the 17-year period. For example, the medical category increased by 73.2 percent between fiscal years 1985 and 1996 but is projected to decline between fiscal years 1996 and 2001. In the land forces category, there is a slight increase between fiscal years 1985 and 1996 but a substantial decrease projected for the fiscal year 1996-2001 period. Although projections show that all eight categories will decrease in real terms from fiscal years 1996 to 2001, medical’s projected decrease is insignificant. Appendix II provides a detailed analysis of trends and per person costs for the eight highest dollar categories in fiscal year 1996: land forces, medical, naval forces, tactical air forces, other logistics support, departmental, mobility forces, and training. A similar concentration emerges when annual O&M funds are distributed by DMCs for the 11 service budget accounts throughout the 17-year period. For example, in fiscal year 1996, over 55 percent of each account’s O&M funds are concentrated in three defense mission categories. The three largest dollar categories differ for each service budget account.
For example, in fiscal year 1996, Defense-wide’s three largest categories were intelligence, departmental, and other personnel support, which received about 61 percent of total funding. In contrast, the Army’s three largest dollar categories were land forces, training, and other logistics support, which received about 70 percent of total funding. Table 4 shows the distribution of fiscal year 1996 O&M funds by DMCs for the O&M, Navy budget account. O&M, Navy has had and is projected to have the largest share of annual O&M funds compared with the other budget accounts—except in fiscal year 1991, when O&M, Army was the largest budget account. The following are some questions raised by the trend information presented in this section: What factors contribute to the significant decline in the naval forces category between fiscal years 1985 and 2001? What factors are projected to contribute to the substantial decline in the land forces category during the fiscal year 1996-2001 period? What factors contribute to the projected real decline during the fiscal year 1996-2001 period for medical, departmental, and mobility forces? (In contrast, these categories experienced substantial real growth during the fiscal year 1985-96 period.) Can analyzing trends in concentrated O&M areas help DOD in future budget plans? We analyzed trends of three O&M programs—the Defense Health Program (O&M budget account), environmental spending, and base operating support—because of congressional interest and their relevance to DOD’s efforts to reduce infrastructure costs. DOD’s health care system is considered a critical quality-of-life issue. The Defense Health Program budget account emerged in the fiscal year 1993 President’s Budget to centralize O&M health care resources. Prior to fiscal year 1993, the resources were located in the service and DOD-wide budget accounts.
This budget account differs from the DMC medical in that the account does not include resources for medical contingency hospitals and medical readiness units. For fiscal year 1997, DOD estimates that 8.3 million beneficiaries are eligible to use the Defense Health Program. Figure 10 shows that trend data for this budget account remain relatively stable, with a 1.1-percent real decline during fiscal years 1993 through 2001. However, in the fiscal year 1997 FYDP, DOD projected a 7.2-percent real decline between fiscal years 1996 and 1997. Discussions with a DOD official indicated that fiscal year 1997 health care funds were reduced by the Office of the Secretary of Defense during preparation of the fiscal year 1997 President’s Budget submission. The DOD environment-related programs that we analyzed are the Defense Environmental Restoration Program and the environmental compliance, environmental conservation, and pollution prevention programs. Annual O&M funding for these environment-related programs more than doubled between fiscal years 1991 and 1996, as shown in figure 11. Over 90 percent of the funds for these environment-related programs in fiscal year 1996 went to the Defense Environmental Restoration Program and environmental compliance. By fiscal year 2001, the level of O&M funding for DOD’s environment-related programs is projected to decline by 23 percent from its fiscal year 1996 peak of $3.3 billion. Most of this decline is due to a planned 25-percent decrease in Defense Environmental Restoration Program O&M funds and a projected 18-percent decrease in funding for environmental compliance programs. Base operations and maintenance activities are required to sustain mission capability, quality of life, and workforce productivity. Funding for these programs is found throughout DOD, in both force and support missions, and was analyzed at the FYDP program element level for this report.
Figure 12 shows that overall annual funding for base operations and maintenance activities has declined since fiscal year 1985. The level of O&M funds these programs received decreased by 16 percent between fiscal years 1985 and 1996 and is projected to decline by an additional 18 percent by 1999. Most of the falloff in earlier years is due to a decrease in O&M funding for real property maintenance and support activities, while after fiscal year 1994 the level of O&M funds provided annually to base operations activities decreases, as shown in figure 13. In fiscal years 2000 and 2001, base operations and maintenance activities are projected to receive a slight increase (2 percent) in annual O&M funds as a result of an increase in funding of real property maintenance and support activities. The following are some questions raised by the trend information presented in this section: Why is funding for environment-related programs expected to decline between fiscal years 1996 and 1997? What factors cause environmental projections to be considerably lower than prior-year spending (since fiscal year 1993)? What factors cause O&M base operating support projections to increase after fiscal year 1999? What impact have the Base Realignment and Closure decisions had on base operations and maintenance costs? Why are annual base operations O&M funding levels cyclical? Why are real property maintenance funding levels not expected to decline in the fiscal year 1997-2001 period? In oral comments, DOD agreed with the report and offered points of clarification. Specifically, DOD said that O&M funding per person is an inappropriate measure to assess future O&M requirements. In our report, we analyzed O&M trends in a number of different ways, including annual O&M on a per person basis. We believe each measure of O&M funding produces a different, but equally insightful and appropriate, overview. Furthermore, we did not attempt to determine an appropriate level of O&M funding.
DOD noted that answering the questions following each section requires an understanding of the significant accounting changes that have occurred since fiscal year 1981. DOD recommended that future O&M analysis use normalized FYDP data. (Normalized FYDP data account for the movement of funds whether inside the O&M accounts or to and from other appropriation accounts.) We attempted to obtain the department’s normalized database, but at the time of our review it was unavailable. Moreover, we recognize that significant accounting changes have affected DOD’s O&M accounts. We discuss some of the changes in our report and structured our analysis to minimize their impact. To identify trends in annual O&M appropriations and personnel levels and to determine the programs and activities funded by O&M, we analyzed data contained in DOD’s FYDP. The FYDP is the most comprehensive and continuous source of current and historical defense resource data. We used funding and personnel data from the historical FYDP update (June 1995) for fiscal years 1985-1993, the fiscal year 1996 FYDP for fiscal year 1994 data, and the fiscal year 1997 FYDP for fiscal years 1995-2001. Historical FYDP data reflect actual (1) total obligational authority for programs and (2) personnel levels. We adjusted the nominal dollars to constant fiscal year 1997 dollars using 1997 DOD inflation indices for O&M costs. DOD had not yet released, while we were conducting our work, its revised FYDP database that adjusts FYDP data for known accounting and program changes since fiscal year 1975; therefore, we were unable to normalize the data for these changes. We do note in the report where these changes have impacted the trends. We analyzed the FYDP data by DOD’s major defense programs, federal budget account structure, and operating organization.
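The constant-dollar adjustment described above divides each year’s then-year dollars by that year’s price index, normalized to fiscal year 1997. A minimal sketch, with hypothetical index values rather than the actual DOD O&M inflation indices:

```python
# Sketch of converting then-year (nominal) O&M dollars to constant
# FY 1997 dollars with a price index. The index values below are
# hypothetical placeholders, not actual DOD O&M inflation indices.

# Hypothetical O&M price index, normalized so that FY 1997 = 1.000.
om_price_index = {1985: 0.720, 1991: 0.880, 1996: 0.975, 1997: 1.000}

def to_constant_fy97(nominal_dollars, fiscal_year, index=om_price_index):
    """Deflate nominal dollars of a given fiscal year into FY 1997 dollars."""
    return nominal_dollars / index[fiscal_year]

# Example: $10.0 billion appropriated in FY 1985 expressed in FY 1997 dollars.
print(f"${to_constant_fy97(10.0e9, 1985) / 1e9:.2f} billion (FY 1997 dollars)")
```

Because every year is expressed on the same FY 1997 price basis, the trends in this report reflect real changes in purchasing power rather than inflation.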
To aid in the identification and classification of the components that affect annual O&M funding levels, we also evaluated the FYDP data using two analytical tools developed by DOD—the DMC and the Infrastructure Categories. The DMC structure is used to analyze FYDP data in terms of a mission-oriented view of DOD resources rather than a service-specific program view, and the infrastructure categories structure aids in the analysis of the resources required to support the combat forces. We did not verify DOD’s allocation of program elements in its DMC and Infrastructure Category analytical tools. In addition, we interviewed officials in the following DOD offices: Office of the DOD Comptroller, Office of the Under Secretary of Defense (Personnel and Readiness), Office of the Under Secretary of Defense (Acquisition and Technology), Office of Program Analysis and Evaluation, and the Office of Reserve Affairs. We also met with officials from the Institute for Defense Analyses. We reviewed our prior reports, pertinent reports by the Congressional Budget Office, Congressional Research Service, DOD, the Institute for Defense Analyses, and others. Our work was conducted from June 1996 to January 1997 in accordance with generally accepted government auditing standards. We are providing copies of this report to appropriate congressional House and Senate committees; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Director, Office of Management and Budget. We will also provide copies to other interested parties upon request. If you have any questions concerning this report, please call me on (202) 512-3504. Major contributors to this report were Robert Pelletier, Edna Thea Falk, and Deborah Colantonio. Installation support consists of activities that furnish funding, equipment, and personnel to provide facilities from which defense forces operate. 
Activities include construction planning and design, real property maintenance, base operating support, real estate management for active and reserve bases, family and bachelor housing, supply operations, base closure activities, and environmental programs. Acquisition infrastructure consists of all program elements that support program management, program offices, and production support, including acquisition headquarters, science and technology, and test and evaluation resources. This category includes earlier levels of research and development, including basic research, exploratory development, and advanced development. Central logistics consists of programs that provide support to centrally managed logistics organizations, including the management of material, operation of supply systems, maintenance activities, material transportation, base operations and support, communications, and minor construction. This category also includes program elements that provide resources for commissaries and military exchange operations. Central training consists of program elements that provide resources for virtually all non-unit training, including training for new personnel, aviation and flight training, military academies, officer training corps, other college commissioning programs, and officer and enlisted training schools. Central medical consists of programs that furnish funding, equipment, and personnel that provide medical care to active military personnel, dependents, and retirees. Activities provide for all patient care, except for that provided by medical units that are part of direct support units. Activities include medical training, management of the medical system, and support of medical installations. Central personnel consists of all programs that provide for the recruiting of new personnel and the management and support of dependent schools, community, youth, and family centers, and child development activities. 
Other programs supporting personnel include permanent change-of-station costs, personnel in transit, civilian disability compensation, veterans education assistance, and other miscellaneous personnel support activities. Command, control, and communications consists of programs that manage all aspects of the command, control, and communications infrastructure for DOD facilities; information support services; mapping and charting products; and security support. This category includes program elements that provide nontactical telephone services, the General Defense Intelligence Program and cryptological activities, the Global Positioning System, and support of air traffic control facilities. Force management consists of all programs that provide funding, equipment, and personnel for the management and operation of all the major military command headquarters activities. Force management also includes program elements that provide resources for Defense-wide departmental headquarters, management of international programs, support to other defense organizations and federal government agencies, security investigative services, public affairs activities, and criminal and judicial activities. This appendix describes operation and maintenance (O&M) funding and military and civilian personnel trends in detail for eight defense mission categories (DMC) for fiscal years 1985 through 2001. In fiscal year 1996, the eight categories were the highest dollar missions and represented 71 percent of O&M funds. With assistance from the Institute for Defense Analyses, Department of Defense (DOD) developed the DMC structure to display (by mission) funds, personnel, and forces programmed in the Future Years Defense Program (FYDP). The DMC framework is multitiered with each tier progressively more detailed. For example, the first tier divides DOD programs into three basic categories: major force missions, Defense-wide missions, and Defense-wide support. 
These three programs are subdivided into five additional levels of detail. One of these levels is the training category. Table II.1 illustrates an example of the DMC structure for training. Our analysis of the eight categories aggregates O&M funds and personnel data at various levels of detail to provide comprehensive and useful information. For the land forces category, O&M funds consist of Army and Marine Corps division increments, non-divisional combat units, tactical support units, base operations and management headquarters, and operational support; Army systems support; and Army special mission forces. Total land forces O&M funds decrease more slowly than military and civilian personnel assigned to this category between fiscal years 1985 and 2001 as shown in figure II.1. O&M funds decrease by 22.2 percent, with most of the decline projected to occur between fiscal years 1996 and 2001. In contrast, between fiscal years 1985 and 2001 military and civilian personnel levels decline by 34.4 and 36.5 percent, respectively. Between fiscal years 1996 and 2001 military personnel decrease by an additional 4.6 percent, while civilians decrease by less than 1 percent. Annual per person costs increase from $16,865 in fiscal year 1985 to $20,114 in fiscal year 2001, a 19.3-percent increase. Figure II.1: Annual Land Forces O&M Funds and Personnel Levels for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) As expected, the Army has the largest share of total annual O&M funds for land forces. From fiscal years 1985 to 2001, the Army has and is projected to have about 67 to 80 percent of total annual O&M funds. Figure II.2 shows annual land forces O&M funds by federal budget account for fiscal years 1985 through 2001. Infusion of O&M funds for the Persian Gulf War contributed to the Army’s higher share of O&M land forces funds in fiscal year 1991. 
When funds are grouped by missions within land forces, base operations and management headquarters is the largest category from fiscal years 1985 to 2001. This category supports real property maintenance, base communications, and base operations and management headquarters at fixed Army and Marine Corps installations. When compared with the other three land forces mission categories, as shown in figure II.3, the base operations and management headquarters category experiences the largest decline, a 37.4-percent decrease from fiscal years 1985 to 2001. Most of the decline takes place after fiscal year 1991. Figure II.3: Annual Land Forces O&M Funds by Mission for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) The medical category provides funds for care for active duty personnel, retired military personnel, and dependents. The fiscal year 1997 President’s Budget estimates that 8.3 million beneficiaries are eligible to use the health care program in fiscal year 1997. Unlike the Defense Health Program budget account, the medical category includes funds for programs related to medical contingency hospitals and medical readiness. Another difference is that the medical category does not include funds for health personnel training; however, the Defense Health Program budget account has funds for education and training programs. Total medical O&M funds are projected to increase from $5.9 billion to $10.2 billion, or by 72.8 percent, between fiscal years 1985 and 2001. Although projections show that the fiscal year 1997-99 funding will be slightly lower than in fiscal year 1996, medical O&M funds will begin to rise starting in fiscal year 2000. Moreover, the fiscal year 2001 FYDP projection almost matches the fiscal year 1996 level. When funds are grouped by missions within the medical category, hospitals and other medical activities is the largest category from fiscal years 1985 through 2001.
Trends for the hospitals and other medical activities category mirror those of total medical O&M funds. O&M funds for the base operations and management headquarters category also grow between fiscal years 1985 and 2001, a 69.8-percent increase. Unlike the hospitals category, which declines slightly between fiscal years 1996 and 2001, O&M funds for base operations and management headquarters increase by 12.1 percent during this period. Figure II.4 compares the trends in funding for hospitals and other medical activities, base operations and management headquarters, and the overall medical category. Figure II.4: Annual Medical O&M Funds for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) Figure II.5 shows that medical costs per eligible beneficiary will increase by 86 percent between fiscal years 1985 and 2001. Further, the fiscal year 2001 projected cost per eligible beneficiary of $1,223 nearly matches the fiscal year 1996 level of $1,229. Figure II.5: Annual Medical O&M Costs Per Eligible Beneficiary for Fiscal Years 1985-2001 (Constant 1997 dollars) Although the total number of eligible beneficiaries is projected to decline between fiscal years 1989 and 2001, starting in fiscal year 1995 the total number of retirees and their dependents exceeds the total number of active duty military personnel and their dependents (see fig. II.6). Between fiscal years 1989 and 2001, retirees and their dependents increase by 16.7 percent, whereas active duty military personnel and their dependents decrease by 26.5 percent.
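The per-beneficiary figure is total medical O&M funds divided by the eligible population. A minimal sketch, assuming a beneficiary count of about 8.34 million (inferred from the $10.2 billion and $1,223 figures cited in this section; not an official DOD count):

```python
# Medical O&M cost per eligible beneficiary: total medical O&M funds
# divided by the number of eligible beneficiaries. The 8.34 million
# count is an inference from the report's figures, not official data.

def cost_per_beneficiary(medical_om_dollars, beneficiaries):
    """Annual medical O&M cost per eligible beneficiary."""
    return medical_om_dollars / beneficiaries

fy2001 = cost_per_beneficiary(10.2e9, 8_340_000)
print(f"FY 2001: ${fy2001:,.0f} per eligible beneficiary")  # about $1,223
```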
The naval forces defense mission category consists of mission forces (submarines, surface combat ships, amphibious forces, service forces, mine warfare forces, maritime patrol, undersea surveillance forces, and sea-based anti-submarine warfare air forces); fleet support (combat and logistics support, ordnance disposal forces, tactical communications, shore intermediate maintenance, and aircraft support squadrons); other operational support (command activities; sea control operational headquarters; and intelligence, communications, command, and control activities); and base operations and management headquarters. Annual O&M funding levels for naval forces are projected to decline by 45 percent between fiscal years 1985 and 2001, as shown in figure II.7. This decline was primarily caused by a 48-percent reduction in O&M funds for mission activities. Although mission activities funds have decreased, figure II.8 shows that mission activities still receive about 60 percent of O&M funding. Figure II.7: Annual O&M Funding for Naval Forces for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) Almost all of the O&M funding for this category is for the active forces and, within active naval forces, the majority of O&M funding is for mission force activities. As shown in figure II.9, most of the decrease between fiscal years 1985 and 1996 in active naval forces O&M funds was for mission activities, although the fleet support programs’ O&M funding levels have declined as well over the same period. Figure II.9: Annual O&M Funding for Active Naval Forces Allocated by Activity for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) The level of O&M funding per person associated with active naval force mission activities has decreased from its 1985 value, as shown in figure II.10, but most of this decline occurred by fiscal year 1990. Between fiscal years 1996 and 2001, O&M funds per person are expected to decline by only 5 percent.
The level of O&M funding per person for base operations activities (the second largest activity within the naval forces mission) remains relatively stable throughout the fiscal year 1985-2001 period. Figure II.10: Per Person Annual O&M Funding for Selected Active Naval Force Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in thousands) O&M funds for the tactical air forces category consist of air-to-air combat squadrons; air-to-ground combat squadrons; defense suppression forces; tactical reconnaissance squadrons; tactical command, control, and communications; tanker/cargo squadrons; other tactical air warfare forces; non-strategic nuclear tactical forces; operations support; and base operations and management headquarters support activities. Like aggregate O&M trends, total tactical air forces O&M funds decrease at a slower rate than military and civilian personnel assigned to this category between fiscal years 1985 and 2001. (See fig. II.11.) During the fiscal year 1985-2001 period, O&M funds decrease by 11.6 percent, with most of the decline projected to occur between fiscal years 1996 and 2001. In contrast, most of the decline in both military and civilian personnel levels occurs between fiscal years 1985 and 1996, decreases of 34.4 and 24.6 percent, respectively. Further, projections show that the fiscal year 2001 level of $7.8 billion slightly exceeds the fiscal year 1997 level. Annual per person costs increase from $29,911 in fiscal year 1985 to $41,211 in fiscal year 2001, a 37.8-percent increase.
Figure II.11: Annual Tactical Air Forces O&M Funds and Personnel Levels for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) When funds are grouped by service missions within the tactical air forces category, as shown in figure II.12, from fiscal years 1985 to 2001 the Air Force is the largest category and experiences the smallest percentage change in funding, a 5.9-percent decrease, when compared with the Navy and Marine Corps categories. The Navy tactical air forces category experiences the largest decline, a 35.9-percent decrease between fiscal years 1985 and 2001. Most of the decline occurs between fiscal years 1985 and 1996. Figure II.12: Annual Tactical Air Forces O&M Funds by Service Missions for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) In the tactical air forces mission category, when funds are grouped by primary mission, other tactical support, and base operations and management headquarters, funds for each category decline between fiscal years 1985 and 2001. (See fig. II.13.) However, these overall trends are not consistent over the 17-year period. Between fiscal years 1985 and 1996, base operations and management headquarters is the only mission activity that grows, a 5.5-percent increase. However, projections show that the base operations and management headquarters category will experience a 19.4-percent decrease in funds between fiscal years 1996 and 2001. For the other tactical support category, funds decrease by 6.7 percent between fiscal years 1985 and 1996; however, this decrease is nearly canceled by the projected growth between fiscal years 1996 and 2001, a 5.3-percent increase. Funding levels for primary missions decrease in both periods, by 5.3 percent between fiscal years 1985 and 1996 and by 8.7 percent between fiscal years 1996 and 2001.
Figure II.13: Annual Tactical Air Forces O&M Funds by Mission Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) The other logistics support mission includes the following activities: logistics base operations and management headquarters, and miscellaneous logistics support activities such as industrial preparedness, second destination transportation, administrative support, printing plants and laundries, and information automation. Annual O&M funding for other logistics support mission activities fell to $5.3 billion in fiscal year 1996 from its peak of $8.7 billion in fiscal year 1987, as shown in figure II.14. (The surge in fiscal year 1991 O&M funding for this mission was an anomaly caused by a $2.3 billion infusion of funds for Army and Air Force second destination transportation programs most probably for Persian Gulf War efforts. The following year, fiscal year 1992, O&M funding for these programs decreased by more than $2.7 billion and is projected to continue to decline at a slow steady rate through fiscal year 2000.) The other decreases are due to consistent annual declines in O&M funding of logistics base operations and headquarters activities. The declines in base operations had a significant impact on other logistics support O&M funding because base operations activities account for about 35 percent of total annual other logistics support funds. Overall, other logistics support O&M funds fell by 34 percent between fiscal years 1985 and 1996 but are expected to remain relatively stable through fiscal year 2001. Figure II.14: Annual O&M Funding for Selected Other Logistics Support Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) By fiscal year 2001, O&M funding per person for other logistics support activities is expected to be at about the same level as it was in fiscal year 1985, as shown in figure II.15. 
If the surge in fiscal year 1991 funding is ignored, O&M funding per person is planned to remain between $2,330 and $2,660 for the entire fiscal year 1985-2001 period. Figure II.15: Per Person Annual Operation and Maintenance Funding for Selected Other Logistics Support Activities (Constant 1997 dollars in thousands) The departmental mission includes a wide range of department-wide service support activities such as the Army’s Adjutant General, publications centers, and postal service agency; the Navy’s accounting and finance center and its petroleum reserve; and the Air Force’s audit agency, Intelligence Service, and its finance and accounting center. The mission also includes department-wide activities such as public affairs, personnel administration, service support to the Office of the Secretary of Defense and other defense agencies, Washington Headquarters Services, and the Office of Economic Adjustment. O&M funding levels for the departmental mission have grown since fiscal year 1985. Between fiscal years 1985 and 1996, O&M funding for departmental activities grew by 16 percent, and most of this growth occurred after fiscal year 1991. Much of the growth between fiscal years 1991 and 1996 was due to significant fluctuations in funding for programs assigned to this mission. For example, Washington Headquarters Services’ annual O&M funding level grew almost threefold between fiscal years 1994 and 1995 from $169 million to $498 million, remained at this high level in fiscal year 1996, but is projected to decline to $185 million in fiscal year 1997, where its annual funding level is expected to remain through fiscal year 2001. Other programs in the departmental mission have had O&M funding only in selected years, such as the Defense-wide administrative maintenance and repair program. O&M funding for this program appears only in fiscal years 1992 and 1993 for $563 million and $1.9 billion, respectively. 
O&M funding for the departmental mission decreases by 5 percent between fiscal years 1996 and 2001. This slight decline in the mission’s funding levels reflects the relative stability in annual O&M funding levels for most of the large programs contained in this mission, such as service-wide support (not otherwise accounted for), Office of the Secretary of Defense management headquarters, and Defense Contract Audit Agency activities. Figure II.16 shows the trend in O&M funding for departmental mission activities during the fiscal year 1985-2001 period. Figure II.16: Annual O&M Funding for Departmental Mission Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) Seventy-five percent of O&M funds for the departmental mission support the active military; the remaining 25 percent are for departmental activities that support the National Guard and Reserve. Figure II.17 shows that O&M funding for departmental activities that support the active military generally remained between $4.0 billion and $4.5 billion prior to fiscal year 1993 and is projected to remain between $4.5 billion and $5.0 billion for the fiscal year 1996-2001 period. Since the programs that caused the fluctuations in the overall departmental mission’s O&M funding levels between fiscal years 1991 and 1996, such as the Washington Headquarters Services, support the active military, these same programs caused the fluctuations shown in figure II.17. O&M funding for departmental missions that support the active military is projected to decline between fiscal years 1996 and 2001 by about 4 percent, only slightly less than the 5-percent decline in the overall departmental mission’s O&M funding level during the same period. 
Figure II.17: Annual O&M Funding for Departmental Mission Activities That Support the Active Military for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) O&M funding per person for departmental missions that support the active military grew by over 50 percent between fiscal years 1985 and 1996, as shown in figure II.18. As the amount of O&M funds provided to the large departmental programs grew between fiscal years 1991 and 1996, the number of active military personnel fell and the number of civilians associated with the departmental activities that support the active military also declined. After fiscal year 1996, the level of O&M funds allocated to each person for this mission is projected to remain virtually unchanged through fiscal year 2001. Figure II.18: Per Person Annual O&M Funding for Departmental Mission Activities That Support the Active Military (Constant 1997 dollars in thousands) O&M funds for the mobility forces category consist of programs and activities for multimode and intermodal lift forces, airlift forces, sealift forces, and land mobility forces. As shown in figure II.19, between fiscal years 1985 and 2001, total O&M funds for the mobility forces category increase from almost $3.8 billion to $5.5 billion, or by 45.5 percent. Most of the increase occurs between fiscal years 1985 and 1996, a 54.6-percent increase. Among the eight categories in our analysis, this category has the second largest percentage change in funding during the fiscal year 1985-2001 period. Figure II.19: Annual Mobility Forces O&M Funds for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) When funds are grouped by missions within mobility forces, between fiscal years 1985 and 2001, airlift forces is the largest category funded by direct O&M appropriations. (See fig. II.20.) The surge in fiscal year 1994 O&M funds was for airlift base operations. 
Throughout the 17-year period, land mobility forces O&M funds increase from $871 thousand in fiscal year 1985 to almost $653 million in fiscal year 2001 and have the largest percentage change in funding when compared with sealift and airlift forces. Figure II.20: Annual Mobility Forces O&M Funds by Mission for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) When funds are grouped by missions within airlift forces, during the fiscal year 1985-2001 period, military intertheater airlift has the largest percentage change in funding (225.6 percent) when compared with other airlift forces missions. Moreover, starting in fiscal year 1996, military intertheater airlift is projected to have the largest share of annual airlift forces O&M funds. Military intertheater airlift comprises active, National Guard, and Reserve airlift squadrons and support activities. Figure II.21 shows the distribution of total fiscal year 1996 O&M funds by mission within airlift forces. Projections show that, for the fiscal year 1997-2001 period, each airlift forces category's share of annual O&M funding closely mirrors the fiscal year 1996 share. (Figure II.21 legend: Aeromedical airlift, 0.31 percent; Airlift command, control, and communications, 0.72 percent; Airlift rescue and recovery, 2.78 percent; Military intertheater airlift, 37.78 percent; Military intratheater airlift, 23.34 percent) The training defense mission category consists of all military personnel training, civilian personnel training, flight training, intelligence skill training, health personnel training, and training base operations and management headquarters. Figure II.22 shows that the O&M funding level for the training mission decreased by 22 percent between fiscal years 1985 and 1993 and is projected to remain fairly level through fiscal year 2001. 
If O&M funding for training base operations and management headquarters is removed from the overall O&M funding level of the training mission as shown in figure II.22, the amount of O&M funds provided to this mission is projected to decrease by only 14 percent between fiscal years 1985 and 2001. Military personnel training and flight training activities receive over 85 percent of the remaining annual O&M funds for this mission. Figure II.22: Annual O&M Funding for Training for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) Much of the decline through fiscal year 1993 was for training base operations and management headquarters activities. Base operations and management headquarters activities have received and are planned to continue to receive the largest portion of annual O&M funds for this mission category, although their portion of the training mission’s O&M funds has declined. Figure II.23 shows that military personnel training (mostly general skills training and support of the training establishment) has declined as the active force declined, and funding for this activity is planned to remain fairly constant from fiscal years 1996 to 2001, when military personnel levels are projected to stabilize. O&M funding for flight training activities (mostly undergraduate pilot training), though, has not changed much over the 17 years covered by this report. Figure II.23: Annual O&M Funding for Selected Training Mission Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) Since civilian and Guard and Reserve personnel training accounts for a very small portion of the total training mission, we focused our analysis of O&M funding for the mission by service on active military personnel training only. 
As displayed in figure II.24, the Army has received and plans to continue to receive more annual training mission O&M funds than either the Air Force or the Navy/Marine Corps, although the Army’s share of annual O&M funds for this mission has decreased along with its force structure. The Army is projected to receive 35 percent less annual training mission O&M funding in fiscal year 2001 than it received in fiscal year 1985. This decline is due mostly to planned declines in military personnel training (general skills training) and training base operations and management headquarters. The only Army training area that received an infusion of O&M funds in the fiscal year 1985 to 1996 period was flight training, but after fiscal year 1996, funds for this training are projected to decline by almost 7 percent. For the projected period through fiscal year 2001, only the military personnel training area is expected to receive an increase in O&M funds. Figure II.24: Annual O&M Funding by Service for Active Military Training Mission Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in billions) Figure II.24 also shows that O&M training funds for the Navy/Marine Corps and the Air Force declined between fiscal years 1985 and 1996 in concert with declines in their force structure but, similar to the Army, funding levels are expected to remain fairly stable in the out-years. Between fiscal years 1985 and 1996, the Navy/Marine Corps O&M training mission funds decreased by 24 percent and the Air Force’s funding level decreased by 16 percent. After fiscal year 1996, O&M funding for the Navy/Marine Corps training base operations is projected to continue to decrease. However, O&M funding for Navy/Marine Corps military personnel, flight, and intelligence skill training is expected to grow. Flight training is the only area that is planned to receive an increase (12 percent) in O&M funds for the Air Force during the fiscal year 1996 through 2001 period. 
O&M funding for the training mission per full-time military and DOD civilian employee declined by 8 percent between fiscal years 1985 and 1993 but, after fiscal year 1993, grew annually through fiscal year 1996, as presented in figure II.25. It is projected to remain stable after fiscal year 1996 until fiscal years 2000 and 2001, when funding per person will increase by approximately 2 percent per year. If base operations and management headquarters is removed from overall O&M funding of the training mission, the O&M funding per full-time military and DOD civilian employee remained fairly stable at about $1,200 per year through fiscal year 1993, when it began to increase annually. The level of O&M funding per person is projected to increase to over $1,400 by fiscal year 2001. Figure II.25: Per Person Annual O&M Funding for Training Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in thousands) Figure II.26 shows that the pattern of O&M funding per person for the training mission for each service’s active military differs. The O&M funding level per person for active Air Force training grew the most (over 30 percent) between fiscal years 1985 and 1996 and peaked at $3,300 per person by fiscal year 1996. Our analysis of FYDP data shows that this level of funding per person is projected to be reached again in fiscal years 1998 and 2001. Although the O&M funding for training per active Army military person fell to its lowest level in fiscal year 1994, it increased annually during fiscal years 1995 and 1996 and will increase annually again in fiscal years 2000 and 2001, when it will peak at $4,500 per person. The Navy/Marine Corps O&M funding per person for active military training fell by 15 percent between fiscal years 1985 and 1993 but is projected to increase annually through fiscal year 1997 and fall slightly in fiscal year 1998, after which it will increase about 1 percent per year until it reaches $2,600 in fiscal year 2000. 
Figure II.26: Per Person Annual O&M Funding by Service for Active Military Training Activities for Fiscal Years 1985-2001 (Constant 1997 dollars in thousands) GAO reviewed the Department of Defense's (DOD) budget request for fiscal year (FY) 1997 for its operation and maintenance (O&M) accounts, focusing on: (1) how annual funding relates to military and civilian personnel levels through 2001; (2) overall trends from fiscal years 1985 through 2001; and (3) key areas in which most money has been budgeted through 2001. 
GAO found that: (1) total DOD O&M funds, in constant FY 1997 dollars, are projected to decline at a slower rate than either civilian or military personnel levels between fiscal years 1985 and 2001; (2) however, beginning in FY 2000, projections show that O&M funds begin to rise at the same time civilian personnel decline and military personnel remain relatively stable; (3) increases in other O&M-funded programs will more than offset the decline in O&M-funded civilian salaries; (4) since 1993, approximately 85 percent of the funds are concentrated in five budget accounts; (5) another data view shows that three major defense programs receive the majority of annual O&M funds; (6) between fiscal years 1993 and 2001, about 50 percent of annual O&M funds are found in five of DOD's mission categories; (7) in total, there are about 30 mission categories during the FY 1993-2001 time period; (8) from an organizational perspective, the military services' portion of total annual O&M funds declines; (9) beginning in FY 1998, the Army is projected to receive a smaller proportion of annual O&M funds than either of the other two services or the combined DOD agencies; (10) even though the Army will receive the smallest portion of annual O&M funds, this service will have the second largest active military force and the largest civilian workforce; (11) the Navy-Marine Corps' share of annual O&M funds declined by almost 10 percent prior to FY 1996; (12) in contrast, the Air Force's proportion of annual O&M funds changes the least of the three services, while Air Force military and civilian personnel levels fall significantly over the FY 1985-2001 time period; (13) only the combined DOD agencies' share of annual O&M funds increases between fiscal years 1985 and 2001 because of the health program funding consolidation into a Defensewide account; (14) regardless of how the O&M budget is analyzed, medical is the only area where consistent growth occurred; (15) O&M funds for medical 
activities increase by 72.8 percent from fiscal years 1985 through 2001; (16) the majority of these costs are for the health care needs of DOD's 8.3 million eligible beneficiaries; (17) during fiscal years 1985 through 2001, O&M infrastructure funds that can be clearly identified in the Future Years Defense Program decline by 22.6 percent and thus mirror total O&M trends; (18) despite increases, O&M continues to fund about half of DOD's clearly identifiable infrastructure costs; and (19) thus, if DOD is to identify significant savings from infrastructure to fund modernization, it must look to the O&M appropriations.
Kenya has attempted constitutional reform several times over the past 50 years, but has been unsuccessful until recently. (See app. II for a detailed chronology of Kenyan constitutional reform-related events.) A disputed presidential election in 2007, followed by allegations of vote rigging and ethnic violence that killed more than 1,300 people and displaced approximately 350,000 more, catalyzed the need for reform. On May 23, 2008, Kenya’s new coalition government agreed to undertake a reform agenda that included constitutional reform. The Kenyan Parliament established a process to review and potentially replace the existing constitution with one that would better ensure security and stability, democratic governance, and protection of rights for all Kenyans. Parliament established two bodies to lead this process—a nongovernmental entity known as the Committee of Experts (COE) to draft the constitution, and a Parliamentary Select Committee (PSC) to assist the National Assembly in the constitutional reform process. Parliament also mandated that both the National Assembly and the Kenyan people would have to approve the draft. The COE produced three different drafts of the constitution, considering comments from the Kenyan people, the PSC, and others. The COE released the first draft to the public on November 17, 2009, and then revised the draft based on approximately 1 million suggestions from the public. The COE submitted this revised draft to the PSC on January 8, 2010, which in turn provided recommendations for the COE to consider as it prepared its third and final draft. The COE reviewed the PSC recommendations, consulted with experts in areas of contention, revised the draft, and presented its third and final draft to the Kenyan National Assembly on February 23, 2010. The National Assembly debated the draft and discussed potential amendments, but approved the draft without changes on April 1, 2010. 
The Kenyan people voted on this proposed constitution in a national referendum on August 4, 2010. Seventy-two percent of registered voters participated in the referendum, and 67 percent of Kenyan voters approved the constitution. The new constitution was enacted on August 27, 2010. Kenya’s prior constitution did not directly address the issue of abortion, though the Kenyan penal code does address the issue. Under Kenya’s existing penal code, abortion is generally illegal and is legally allowed only under certain circumstances. The new constitution, however, includes an article entitled “Right to Life.” This article states that the life of a person begins at conception and that abortion is not permitted unless, in the opinion of a trained health professional, there is a need for emergency treatment, the life or health of the mother is endangered, or it is permitted under another written law. The United States, in line with its objective of collaborating to foster peace and stability in East Africa, has supported Kenya’s efforts at governmental reform at all levels, with particular emphasis on constitutional reform. Since the signing of the comprehensive reform agenda in May 2008, USAID has funded 12 awards to 9 award recipients for work on Kenyan constitutional reform efforts. The award recipients have, in turn, given 182 smaller awards to 124 Kenyan partner organizations, or subrecipients. Prior to the constitutional referendum, these award recipients and subrecipients conducted program activities such as voter registration, logistical support, civic education, and technical assistance. Since the referendum, they have supported continued civic education, electoral reform, and conflict mitigation in preparation for the 2012 national elections. USAID’s Bureau for Democracy, Conflict, and Humanitarian Assistance (DCHA) has had primary responsibility for managing the awards. In implementing this assistance, State and USAID are prohibited from abortion-related lobbying. 
The prohibition, first enacted in 1981 and commonly referred to as the Siljander Amendment, currently appears in the annual Department of State, Foreign Operations, and Related Programs Appropriations Acts. It states in its entirety that “none of the funds made available under this Act may be used to lobby for or against abortion.” Between 2008 and 2010, U.S. officials publicly expressed support for Kenya’s comprehensive reform agenda, including constitutional reform, as an essential tool for maintaining peace and stability. We did not find any indication that U.S. officials gave an opinion publicly on the issue of abortion or attempted to influence the Right to Life article of the draft constitution. State and USAID officials supported Kenya’s constitutional reform process primarily through public statements and constitutional reform-related assistance programs. As noted in the 2010 State IG report and in press releases, high-level U.S. officials, including the President, Vice President, and Secretary of State, publicly expressed their support for constitutional reform in Kenya. The U.S. Ambassador to Kenya also spoke in support of the constitutional reform process at multiple public events and in Kenyan news media. In general, these statements from U.S. officials supported the reform process itself, although some statements implied preference for a “yes” vote in the referendum. For example, in June 2010, the Vice President told Kenyans that “putting in place a new constitution and strengthening your democratic institutions with the rule of law will further open the door to major American development programs . . . bring about reinvestment by American corporations and international organizations in Kenya that could provide millions of dollars in assistance.” Although we found no indication that USAID officials gave public speeches on the constitutional reform process, they lent their support through assistance programs such as civic education. 
Following the postelectoral violence in Kenya in 2008, key State and USAID officials we interviewed told us they supported the comprehensive reform agenda because they viewed it as essential to maintaining stability in Kenya as well as in East Africa. The officials added that they viewed constitutional reform as the cornerstone of the comprehensive reform agenda. They also said that a unique confluence of factors had made constitutional reform in Kenya a distinct possibility for the first time in decades. These factors included high-level support from both the Kenyan president and the prime minister, which gave the reform process legitimacy. In addition, former United Nations Secretary-General Kofi Annan lent his support as chair of the reform agenda negotiating team. Finally, U.S. embassy officials considered it important for Kenya to have a new constitution in place in advance of the 2012 elections or risk a repeat of the 2008 violence. While U.S. officials supported the constitutional reform process, we found no indication that U.S. officials took a public position on the proposed constitution’s abortion-related provisions or directly attempted to influence the text of the provisions. In addition to interviewing the ambassador and several other key State and USAID officials, we conducted an extensive search of U.S., Kenyan, and other international media sources (see app. I.). Our media search did not reveal any instances of U.S. officials publicly discussing the abortion-related provisions of the constitution, and the officials we interviewed stated that they never discussed abortion in public or sought to influence the text of the abortion-related provisions in the constitution. Moreover, a Kenyan parliamentarian we interviewed who had served on the PSC, which assisted the National Assembly in the constitutional reform process, told us that, to her knowledge, no U.S. official had discussed the abortion-related provisions with PSC members. 
This information is consistent with the findings of the 2010 State IG report. However, one key State official we interviewed briefly discussed the constitution’s abortion-related provisions during private meetings with Kenyan leaders as an issue that could affect the reform process. This official, the political officer in charge of tracking the progress of the reform process overall, said that in the course of his work he had private discussions with Kenyan parliamentarians and church leaders in which they raised the topic of the abortion-related provisions. He emphasized, however, that he never took a position on the issue in these discussions. Two U.S. officials also told us that they briefly discussed the constitution’s abortion-related provisions internally as an issue that could affect the reform process. The U.S. ambassador told us that the topic arose during regular embassywide meetings on the reform process. He and a political officer we interviewed indicated that during these meetings the ambassador instructed staff to remain objective and limit any statements on the issue to repeating the text of the constitution. None of the other relevant State and USAID officials we interviewed recalled ever discussing the abortion issue in these meetings. Two elements of U.S.-funded support for the constitutional reform process—civic education and technical assistance—addressed the issue of abortion to some extent. State did not have any constitutional reform-related programs. USAID-funded civic education forums sought to inform Kenyan citizens on the text of the proposed constitution, and we found that some forums included discussion of the constitution’s abortion-related provisions. Civic education facilitators addressed the provisions in a variety of ways, but we did not find any indication that award recipients or subrecipients cited them as a rationale to vote for or against the constitution. 
USAID also funded technical assistance to Kenyan organizations involved in the constitutional referendum; in doing so, one award recipient provided comments on the text of the entire draft constitution, including advice on the abortion-related provisions. Since Kenya adopted the new constitution in August 2010, U.S. support for its implementation has focused on continued civic education, electoral reform, and conflict mitigation and has not addressed abortion. USAID-funded civic education sought to inform Kenyans on the general contents of the proposed constitution, and sometimes addressed the abortion-related provisions. According to some of the U.S.-funded subrecipients we spoke to, educating the public on the contents of the constitution was necessary because many Kenyans were unaware of the actual contents of the constitution as they had not read the document or had heard misleading rumors about it. USAID did not give any awards for civic education specifically on the abortion-related provisions of the constitution; however, subrecipients sometimes conducted civic education on these provisions because they were commonly misunderstood. For example, some subrecipients told us that participants in their civic education forums came to the events with the understanding that the proposed constitution would allow unrestricted access to abortion. Furthermore, most subrecipients indicated that they addressed the abortion-related provisions in response to questions from participants at their civic education events. USAID funded 124 subrecipients to provide assistance related to constitutional reform, including civic education. To determine which subrecipients may have addressed the abortion-related provisions in their civic education forums, we reviewed all award documents and conducted an extensive media search on each subrecipient to identify those most likely to have addressed the issue of abortion (see app. I for a complete discussion of our methodology). 
Based on these criteria, we identified and interviewed 24 subrecipients. Four of these subrecipients told us that they did not address abortion at all during their civic education forums. The remaining 20 subrecipients told us that their facilitators addressed the proposed constitution’s abortion-related provisions in one or more of the following ways: Reading the text of the provisions. More than half of the subrecipients told us that when questions about abortion arose, they responded by reading aloud the text of the Right to Life article, which stated, “(1) Every person has the right to life; (2) The life of a person begins at conception; (3) A person shall not be deprived of life intentionally, except to the extent authorised by this Constitution or other written law; (4) Abortion is not permitted unless, in the opinion of a trained health professional, there is need for emergency treatment, or the life or health of the mother is in danger, or if permitted by any other written law.” Some subrecipient civic education materials addressed abortion and, in all but one case, did so by citing the Right to Life article. Indicating future legislation might be needed. Some subrecipients explained to civic education participants that, in their opinion, future legislation and judicial decisions would be required in order to fully interpret and implement the abortion-related provisions of the proposed constitution. According to a few of these subrecipients, this legislation would be based on the existing law. Addressing undefined terms. Some subrecipients we interviewed who addressed the abortion-related provisions went beyond reciting the text of the provisions and gave examples to try to clarify undefined terms. For instance, in attempting to answer questions about emergency situations in which an abortion might be legal, two subrecipients told us they gave the example of an ectopic pregnancy. 
More than half of the subrecipients told us civic education participants asked what the term “trained health professional” meant in order to understand who would be able to authorize an abortion. A few of these subrecipients told us they had legal and medical experts on hand to explain the term. While some U.S.-funded civic education subrecipients addressed the abortion-related provisions of the constitution, we did not find any indication that U.S.-funded award recipients or subrecipients cited the provisions as a rationale to vote for or against the constitution. We conducted an extensive search of U.S., Kenyan, and other international media sources for any possible mention of abortion in relation to Kenya and the constitution made by any award recipient or subrecipient. In addition, we reviewed all award documents. Neither our media search nor our document review revealed any information indicating that U.S.-funded award recipients or subrecipients cited the abortion-related provisions as a rationale to vote for or against the constitution. Moreover, in our interviews with the 24 subrecipients we identified as being most likely to have addressed abortion, we found no indication that any cited the abortion-related provisions as a rationale to vote for or against the constitution. Half of the subrecipients we interviewed told us that they conducted their civic education in an objective manner, regardless of the issue at hand. Furthermore, none of the subrecipients we spoke with told us they had ever used abortion as a rationale to convince Kenyans to vote for or against the constitution. U.S.-funded award recipients provided technical assistance to Kenyan organizations involved in the constitutional reform process, which included providing advice on the abortion-related provisions of the draft constitution to the COE, the nongovernmental entity charged with drafting the constitution. 
The International Development Law Organization (IDLO), the award recipient that provided technical assistance to the COE, did so at the request of the COE. This assistance included contracting a consultant to convene a selected group of international scholars to produce reports analyzing the text of the entire draft constitution at various stages for the COE. While the COE indicated to IDLO that it generally considered IDLO’s advice when revising the draft constitution, we were unable to confirm whether the COE changed the Right to Life article based on IDLO advice. In remarking on the first and second drafts of the constitution, IDLO commented on the Right to Life article and abortion in the following ways (see fig. 1).

IDLO report on the first draft constitution. The COE published the first draft constitution in November 2009 and subsequently called for comments from the public. During this comment period, IDLO provided the COE analysis on the entire draft constitution, including advice on the issues of fetal rights and abortion, though the draft had not mentioned either issue at this point. Specifically, the IDLO report advised that the COE might consider adding language to make clear that the fetus lacks constitutional standing, and that the rights of women under these articles therefore take priority. IDLO also provided examples of countries whose courts have held that fetal rights to life serve as a partial barrier to the ability of national legislatures to protect the right to reproductive dignity via the legal right of access to abortion. The IDLO report went on to state that “given the de facto decriminalization of access to abortion in Kenya, and the health risks to women in Kenya associated with the current system of abortion provision, and the absence of any express intention to disturb the current situation, it also seems quite feasible that in the coming years, the Kenyan Parliament may wish to take such measures.
One way to handle this would be to modify to make clear that a person is a human being who has been born.” The COE’s second draft did not include IDLO’s suggested revisions.

IDLO report on the second draft constitution. The COE produced a second draft in early January 2010. Later that month, the PSC provided recommendations on this second draft, including adding clauses to clarify that “the life of a person begins at conception,” and that “abortion is not permitted unless in the opinion of a registered medical practitioner, the life of the mother is in danger.” IDLO commented on the draft that included the PSC’s recommendations, indicating that the language on abortion was unnecessarily restrictive and lacking international precedent. For example, the report commented that “even understanding the powerful feelings invoked on all sides of the abortion issue, the omission of a ‘health of the mother’ exception in this provision seems overbroad.” In addition to receiving IDLO’s comments, the COE reported that it had extended discussions with the PSC and members of the medical community on the draft Right to Life article during January and February 2010. The COE’s final draft constitution included an exception for allowing abortion when “the life or health of the mother is in danger.”

USAID officials told us they were not aware of the advice and comments in these IDLO reports until after the COE had drafted the final constitution and the National Assembly had approved it for a referendum vote. USAID awarded IDLO a noncompetitive grant based on the recommendation of the COE, under which IDLO provided technical assistance. As we have previously reported, in contrast to other USAID funding mechanisms, typically under a grant agreement USAID has no substantial involvement in the implementation of the work.
IDLO’s description of program activities, as established in the grant agreement and as agreed upon with USAID, included addressing general topics such as the Bill of Rights, but did not specifically mention the issue of abortion. USAID officials told us that oversight of the IDLO grant included requiring and reviewing an activity approval document, collecting and reviewing quarterly program reports, and calling IDLO to obtain clarification on the work it had conducted. The USAID official responsible for managing this grant told us that IDLO submitted the required quarterly program reports in a timely manner, with copies of its reports to the COE submitted as attachments, including those commenting on the constitution’s abortion-related provisions. She indicated, however, that she had not fully read the attachments until the USAID IG inquiry brought them to her attention in mid-2010. Since Kenya adopted the new constitution in August 2010, U.S. support for its implementation has focused on continued civic education, electoral reform, and conflict mitigation leading up to the 2012 national elections and has not addressed abortion. Senior State and USAID officials told us that U.S. assistance focuses on electoral reform and conflict mitigation because they are essential to holding fair, nonviolent elections in 2012. In addition, according to key U.S. officials we interviewed and the vice-chair of the Kenyan parliamentary committee overseeing the constitution’s implementation, Parliament is unlikely to address any legislation that might affect the abortion-related provisions before 2013. The U.S. officials we interviewed also said that the Kenyan government has not asked for assistance with implementing the Right to Life article of the constitution, and the United States has not provided any. Furthermore, the officials emphasized that State and USAID have no plans to provide such assistance. 
Neither State nor USAID has guidance on complying with the Siljander Amendment that includes a formal definition of lobbying, which some agency officials and award recipients indicated makes it difficult for them to determine what types of activities are prohibited. State has not developed any guidance on this legislative prohibition, and while USAID has developed some in the context of its family planning compliance resources, it has no specific guidance on the kinds of activities prohibited. Without clear guidance on the Siljander Amendment, some of the State and USAID officials and award recipients we interviewed said that they were unclear as to what specific activities were prohibited. The Siljander Amendment is an appropriations provision first enacted in 1981 that appears in the annual Department of State, Foreign Operations, and Related Programs Appropriations Acts, stating that “none of the funds made available under this Act may be used to lobby for or against abortion.” The term “lobby” is not defined in the legislation, and neither State nor USAID has developed a formal definition of lobbying in this context. Attorneys in State’s Office of the Legal Adviser told us they are available to provide legal advice for staff on the Siljander Amendment, although they do not provide a formal definition of lobbying. The attorneys said the language in the amendment is adequate to inform nonlegal State officials that a restriction exists. They also indicated that when a proposed activity relates to taking a position for or against abortion, the office would review the specific facts to determine whether the activity could be conducted consistent with the law. Furthermore, they said the office preferred to provide advice on a case-by-case basis rather than having nonattorneys interpreting legal provisions. 
Similarly, USAID attorneys told us they have not developed a formal definition of lobbying in the context of the Siljander Amendment, but they said they inform staff about the restriction and advise staff to seek legal counsel if they have questions regarding whether a particular activity complies with the law. USAID attorneys told us, however, that they developed an informal definition of lobbying with respect to the Siljander Amendment in the summer of 2010 to assist them in conducting their legal assessments in response to the USAID IG inquiry about U.S. assistance for Kenyan constitutional reform. They said the definition is an internal working one that is not formally documented anywhere, and it is not readily accessible to staff outside of the Office of the General Counsel. The attorneys went on to say that they used this definition to determine that IDLO, in providing advice to the COE on the abortion-related provisions of the Right to Life article, did not violate the Siljander Amendment. In making this determination, USAID officials said they considered the following factors:

USAID had given IDLO a noncompetitive grant at the recommendation of the COE.

IDLO coordinated a process in which the COE received advice that it specifically requested.

The comments on the abortion-related provisions were made in the course of a clause-by-clause review of the entire constitution, and as such were neither emphasized over other comments nor were they a direct, explicit appeal for a change in the legal status of abortion in Kenya.

The Right to Life article in the draft constitution did not represent a change in national law, but rather reflected existing Kenyan and commonwealth law regarding abortion, according to a Kenyan attorney who provided a legal opinion to USAID in 2010.

The COE was a nongovernmental entity, and as such, USAID officials maintain that IDLO did not provide assistance to the Kenyan government.
State has no specific guidance or training on the Siljander Amendment. Although a senior political officer in the U.S. embassy in Nairobi recalled having heard about the Siljander Amendment informally while in Washington, most State officials we spoke to said that they had not heard of it prior to the State IG’s special review in 2010. Political officers in Nairobi, including the Deputy Chief of Mission, also told us they did not receive guidance on the Siljander Amendment during the regular embassywide meetings leading up to the referendum, and the ambassador told us that he had not received guidance from Washington.

USAID has developed various family planning compliance resources, primarily for health and legal officers, which include some guidance on the Siljander Amendment. These resources, however, do not provide guidance on the kinds of activities prohibited under the Siljander Amendment. Some examples include the following.

Family planning compliance team. USAID has a family planning compliance team that consists of advisers from the Bureau for Global Health, the regional bureaus, and the Office of the General Counsel. The team provides advice to field staff and assists them with developing tools and resources to facilitate monitoring of compliance with family planning requirements, including the Siljander Amendment. Team members are available to field questions on compliance as they arise, and they hold an annual teleconference with each Mission’s health, legal, and contracting staff to discuss family planning requirements and review specific concerns. The team’s written materials distributed to staff do not provide any description of the types of activities that Siljander prohibits.

Family planning compliance training. USAID has offered compliance training for its health and legal officers on family planning-related legislation for years, according to USAID officials.
In addition to routine training both in Washington and in the field, USAID has offered a computer-based course on family planning requirements since 2006. USAID officials told us they expect health officers to take the computer-based course or attend a live training session on the family planning legislative requirements annually. None of the training materials, however, describes the kinds of activities that might constitute lobbying under the Siljander Amendment. After the USAID IG inquiry in 2010, USAID began to incorporate the Kenyan constitutional reform example as an oral case study in some of its trainings to alert staff that activities without a family planning focus could be subject to the Siljander Amendment.

Global Health intranet resources. USAID’s internal Global Health website offers a variety of family planning compliance tools, such as a chart of all family planning-related legislation, key documents related to family planning requirements, and a compliance plan template. In general, any mention of the Siljander Amendment within these resources does little more than repeat the amendment’s text. The compliance plan template warns staff that non-family planning programs could violate family planning-related legislation, but none of the materials on the intranet describes the types of activities that might be prohibited under Siljander.

USAID began disseminating these compliance resources beyond health and legal officials in mid-2010, when it offered some training and general written guidance to other agency officials. A member of the family planning compliance team gave a presentation on abortion-related requirements at the annual DCHA conference for democracy and governance officers in June of both 2010 and 2011.
Additionally, DCHA officials sent e-mails to all DCHA staff in late July 2010 and in March 2011, alerting them to the existence of the Siljander Amendment and advising them to seek legal counsel if they are unsure whether a particular activity complies with the law. Neither e-mail, however, details the types of activities that might constitute lobbying for or against abortion. USAID officials acknowledged in the e-mails that determining whether a particular activity complies with the Siljander Amendment is complex, and officials later told us that they did not add more detailed descriptions of the types of activities that might violate the amendment, because they do not want staff to undertake their own legal analysis. USAID award recipients have access to some of USAID’s family planning compliance resources, including the computer-based training, but these resources do not include examples of the types of activities prohibited under the Siljander Amendment. Two award recipients told us that USAID discussed the Siljander Amendment with them in June 2010—after the USAID IG inquiry had begun. One award recipient, who managed more than half of the subrecipients, told us it in turn reminded its subrecipients to be objective and remain neutral when discussing the proposed constitution in civic education forums. USAID also requires its award recipients and their subrecipients to abide by the Siljander Amendment through the inclusion of mandatory language prohibiting abortion-related activities in all awards. The language reads in part, “No funds made available under the award will be used to finance, support, or be attributed to . . . lobbying for or against abortion.” The language, however, does not specify what types of activities would constitute lobbying with U.S. assistance funds and would thus be prohibited. USAID officials told us this is consistent with other mandatory prohibition language in USAID awards. 
Furthermore, we found that the mandatory language prohibiting abortion-related activities was missing from some of the awards for Kenyan constitutional reform. (See app. III for a discussion of compliance with the requirement to include the mandatory language in each award related to Kenyan constitutional reform.) We found that without written guidance on the types of activities that might constitute lobbying for or against abortion, some key State and USAID officials as well as award recipients are unclear on what the Siljander Amendment prohibits them from doing. For example, the State political officer responsible for tracking the progress of the Kenyan constitutional reform process in 2010 told us that when he asked for guidance on the Siljander Amendment, officials in the Office of the Legal Adviser replied that they would not interpret it for him. As a result, he said he still did not know what activities would violate the legislative prohibition. An attorney in the Office of the Legal Adviser told us that the office’s consistent approach is to work with nonlegal State officials to determine what activities are proposed and to advise whether those activities are allowable. She said that with respect to legislative restrictions on the use of funding, the specific facts are often key, and abstract legal interpretations can be misapplied. Thus, she said the office advises nonlegal State officials on how to apply the law based upon specific facts as to how funds would be used for particular U.S.-funded activities. However, all of the State officials we interviewed in Nairobi said that guidance on what lobbying means in the context of the Siljander Amendment would be useful to help them avoid any potential violation of the amendment in other situations. 
In addition, DCHA officials we interviewed in Kenya told us that even after the USAID IG inquiry they do not know what types of activities constitute lobbying and therefore would be a violation of the Siljander Amendment. Moreover, the two award recipients who together have overseen over 70 percent of the subrecipients for the constitutional reform process told us they do not understand the Siljander Amendment and that clearer guidance on what constitutes lobbying under the amendment would be useful.

The United States has long determined that it is vitally important to support nations in undertaking democratic reforms, such as Kenya’s constitutional reform. With the current political upheavals in parts of the Middle East and Africa, it is likely that several nations will either establish new constitutions or revise existing ones in the near future. The U.S. government has already expressed its willingness to assist with these and other kinds of democratic reforms. State’s political officers and USAID’s DCHA officers would be at the forefront of that assistance. However, constitutional reform can involve a wide spectrum of issues, including abortion and its corresponding U.S. legal restrictions, which are unfamiliar to some U.S. officials who deal with democracy and governance issues. Without clear guidance, including a description of what activities would constitute lobbying overseas, U.S. officials and implementing partners—award recipients and subrecipients—risk becoming involved in activities that may be interpreted by some as lobbying for or against abortion. Similarly, they may miss appropriate opportunities to provide assistance for fear they may potentially violate this prohibition. To help ensure the actions of U.S. officials and implementing partners comply with the legislative prohibition against using certain U.S.
assistance funds to lobby for or against abortion, we recommend that the Secretary of State and the USAID Administrator develop specific guidance on compliance with the Siljander Amendment, indicating what kinds of activities may be prohibited, disseminate this guidance throughout their agencies, and make it available to award recipients and subrecipients. We provided a draft of this report to State and USAID. We received written comments from both agencies, which we have reprinted in appendixes IV and V, respectively. The agencies also provided technical comments, which we incorporated throughout the report, as appropriate. State partially agreed with our recommendation. Specifically, State agreed that informing employees throughout the department of the Siljander Amendment would be useful. State implied, however, that such information would not go beyond providing the text of the Siljander Amendment and encouraging staff to seek appropriate guidance on whether proposed activities are subject to the amendment. State does not believe that developing and disseminating specific guidance indicating the types of activities that may be prohibited is appropriate. We disagree. While we respect that State would like its officials in the field to seek guidance on whether an activity is permitted under the Siljander Amendment by presenting specific facts on a case-by-case basis, we do not believe that officials will necessarily know to seek such guidance if they are unaware of the types of activities that may raise compliance concerns. We believe that guidance providing examples of the types of activities that may violate the Siljander Amendment would help officials in the field better understand how the amendment affects their activities overseas and would help them better recognize those instances when they should seek guidance from the relevant State policy or legal office regarding a proposed activity. 
USAID agreed with our recommendation and indicated that it would develop additional guidance for USAID and award recipient and subrecipient staff on the Siljander Amendment. At the same time, USAID took issue with our graphic representation of the development of the Right to Life article (fig. 1 on p. 13), expressing the view that it dramatically overstated the importance of IDLO’s comments in the evolution of that article. In particular, USAID noted that the figure did not reflect the advice and comments COE received from other sources, and that it suggested a causal link between IDLO’s comments and revisions to the draft constitution. We have revised the title of the figure to more clearly indicate that it focuses on IDLO’s advice and comments to the COE. This does not mean that IDLO was the only entity providing advice. In fact, we state in the text immediately preceding the figure that others also provided advice and comments on the Right to Life article. The figure appropriately focuses on IDLO’s advice and comments, because IDLO was a USAID award recipient and, thus, a subject of our review. We disagree that the figure suggests a causal link between IDLO’s advice and comments and the COE’s revisions to the draft constitution. The figure shows the text of the Right to Life article as it appeared in each draft version and IDLO’s input on that text, and we clearly state in the text preceding the graphic that we were unable to confirm whether COE changed the text based on IDLO’s input. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of State, the USAID Administrator, and interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please contact me at (202) 512-3101 or williamsbridgersj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

To describe any involvement that U.S. officials have had in the Kenyan constitutional reform process regarding the constitution’s abortion-related provisions, we conducted the following work.

We used the 2010 Department of State (State) and the U.S. Agency for International Development (USAID) Inspector General (IG) reports on the same topic for our requesters as a foundation for our methodology. Specifically, we reviewed the reports for key findings and statements. We also spoke to the IG teams that produced the reports in order to identify key officials to interview and to clarify each team’s methodologies.

We conducted an extensive review of Kenyan, international, and U.S. media to identify any public statements made by key State, USAID, and other Administration officials that mentioned the constitutional reform process, abortion, or reproductive health. Our review used the Nexis research database, which searched Kenyan media sources including Kenya Broadcast Corporation, The Nairobi Star, The Nation, The People, and The Standard and international and U.S. sources including Africa News, Associated Press, BBC, Federal News Service, Global Legal Monitor, Los Angeles Times, States News Service, The East African, The Monitor (Uganda), The Washington Times, and Xinhua. Search terms included any combination of the official’s full name, “Kenya,” “constitution,” “abortion,” “reproductive,” and “termination of pregnancy.” For most officials, this search covered the period from May 23, 2008—the date that the Kenyan government signed the Reform Agenda—through late March 2011.
However, the number of results for President Obama, Vice President Biden, and Secretary of State Clinton exceeded the number of results that Nexis can return. We therefore limited the searches for these officials to the period from July 12, 2010—the date that the USAID IG submitted its preliminary report on support for Kenya’s constitutional reform to the requesters—through late March 2011.

We also searched the Congressional Quarterly, Congressional Record, the transcripts database of Lexis Nexis, and executive branch websites using similar search terms for statements made by officials during the period from January 2009 through January 2011. These results included transcripts of congressional hearings, State Department press releases, and coverage of diplomatic speeches and comments.

We interviewed key State and USAID officials in Washington, DC, and we traveled to Kenya to interview key officials at the embassy in Nairobi to obtain additional data and to discuss their involvement in the reform process, particularly as regards the issue of abortion. We interviewed State officials including the former ambassador, the Deputy Chief of Mission, the Political Counselor and other relevant political officers, and officials from the Bureau of African Affairs and the Office of the Legal Adviser. We also interviewed USAID officials including the Deputy Mission Director and officials from the Offices of the General Counsel and Acquisition and Assistance, and from the Bureaus for Africa, Global Health, and Democracy, Conflict, and Humanitarian Assistance, including the Offices of Transition Initiatives and Democracy and Governance. These officials have been responsible for managing and monitoring U.S. support for Kenya’s constitutional reform process.

In addition, we requested an interview with the chair of the Parliamentary Select Committee (PSC), which assisted Parliament in the constitutional reform process, but embassy officials were unable to contact him.
We did, however, interview another key parliamentarian who sat on the PSC and is the vice-chair of the Committee for the Implementation of the Constitution to discuss U.S. officials’ involvement in the reform process regarding the abortion-related provisions of the constitution.

To describe the support provided by U.S.-funded award recipients for the constitutional reform process relating to the constitution’s abortion-related provisions, we conducted the following work.

We asked the USAID IG and other USAID officials to identify the USAID award recipients and subrecipients that have conducted constitutional reform work in Kenya. The USAID IG provided us with a list of award recipients and subrecipients who had received U.S. funding through the date of the referendum, August 4, 2010. USAID officials notified us of some new awards and subawards that began in the implementation phase after the referendum, and award recipients provided us with information about an additional implementation award recipient as well as other implementation subawards. Together these lists identified 9 award recipients, who together received 12 awards, and 124 subrecipients, who together received 182 smaller awards.

We reviewed the related USAID IG reports for key findings and data on USAID award recipients. We also spoke to the IG team that produced the reports in order to identify key officials to interview and to clarify the team’s methodology.

We conducted an extensive review of Kenyan and international media on all 9 USAID award recipients and their 124 subrecipients. This media search sought to identify any statements that the award recipients or subrecipients made mentioning the constitutional reform process, abortion, or reproductive health. Like our media search for relevant statements from U.S.
officials, this search covered similar Kenyan and international publications, used similar search terms, and covered the period from May 23, 2008 through mid-February 2011 for award recipients and subrecipients identified by the USAID IG. However, for award recipients and subrecipients who started their work after the USAID IG produced its report, the search covered the same Kenyan and international publications, but we adjusted our search terms to exclude the term “constitution” since the constitution had already been enacted and adjusted our search period to cover the period for which these awards were effective.

We obtained and reviewed all award documentation for each USAID award recipient and subrecipient performing constitutional reform work in Kenya. These documents included the base award and any modifications, statements of work and project descriptions, progress reports, final reports, and any supplementary materials produced under the award.

We interviewed relevant USAID officials in Washington, DC, and in Kenya. These officials included the USAID Deputy Mission Director and officials from the Offices of the General Counsel and Acquisition and Assistance, and from the Bureaus for Africa, Global Health, and Democracy, Conflict, and Humanitarian Assistance, including the Offices of Transition Initiatives and Democracy and Governance. These officials have responsibility for managing USAID’s awards and for planning, implementing, and overseeing USAID’s Kenyan constitutional reform awards.

We interviewed all 9 award recipients—in Kenya if they still had an office there, or in Washington, DC. In addition to using a standard set of questions about award recipient activities and guidance received on complying with the Siljander Amendment, we added specific interview questions based on our media search results and document review.
To identify which of the 124 subrecipients to interview during our limited time in Kenya, we analyzed the results of our media search and document review to determine which were most likely to have addressed the issue of abortion during the period leading up to the referendum. Our media search yielded more than 6,500 results, all of which we reviewed in order to identify those subrecipients who had publicly commented on abortion-related topics. These results identified 13 subrecipients whose names had appeared in media articles that also included at least one of our search terms. Our document review identified 26 subrecipients whose award documents mentioned having discussed abortion, “contentious issues,” reproductive health, or women’s issues during the period leading up to the referendum. Our document review also showed that of the 13 subrecipients identified through our media search, 6 subrecipients used their USAID funds to conduct civic education on topics that were unlikely to address abortion at all, such as land reform or decentralization. We therefore determined that we should request interviews with the remaining 7 subrecipients identified through our media search, as their activities were likely to be most relevant to our review. To come to this determination, one GAO analyst identified those subrecipients whose activities were most likely to be relevant to our review, and another GAO analyst independently reviewed them, resolving any disagreements in the determinations through discussion. We also determined that we should request interviews with all 26 subrecipients identified through our document review in order to clarify how they had addressed abortion during their U.S.-funded activities, if at all.
Given some overlap between the 7 subrecipients identified through the media search and the 26 identified through our document review, and 1 additional subrecipient we identified based on professional judgment, we identified a total of 29 subrecipients for interview. We requested interviews with all 29 subrecipients in Kenya that we had identified based on our media search and document review, and we interviewed 24 of them. Of the remaining 5 subrecipients, 4 subrecipients could not meet with us because of scheduling conflicts. The remaining subrecipient, the Committee of Experts, is now a defunct entity and no former executive officers would meet with us or answer written questions. During our subrecipient interviews, we used a standard set of questions about activities and guidance received on complying with the Siljander Amendment. In addition, we added specific interview questions for individual subrecipients based on issues that we identified through our media search results or document review. To assess the extent to which agencies have developed and implemented guidance to help ensure compliance with the Siljander Amendment, which prohibits using certain U.S. assistance to lobby for or against abortion, we conducted the following work: We reviewed USAID program and procurement guidance and policies, as well as other relevant documents. This helped us determine what guidance on the Siljander Amendment USAID has available or requires for agency officials, award recipients, and subrecipients. We obtained and analyzed award documentation for all USAID award recipients performing constitutional reform work, as well as their subrecipients, to determine which awards contained USAID’s mandatory language provision prohibiting abortion-related activities. USAID considers this language to be a form of guidance on complying with the Siljander Amendment and requires that all assistance and acquisition awards contain the language. 
Award recipients, in turn, are required to pass this language on to awards with any subrecipients. To understand why this language was not included in some awards for the Kenyan constitutional reform process, we conducted interviews with responsible officials in USAID’s Offices of the General Counsel and Acquisition and Assistance, and the Bureau for Democracy, Conflict, and Humanitarian Assistance, including the Offices of Transition Initiatives and Democracy and Governance. We interviewed high-level State and USAID officials about their agency’s guidance on complying with the Siljander Amendment. In Washington, we spoke with responsible officials in State’s Bureau of African Affairs and the Office of the Legal Adviser, and interviewed the former U.S. ambassador to Kenya. We also spoke with responsible officials in USAID’s Offices of the General Counsel and Acquisition and Assistance, and the Bureaus for Africa and Democracy, Conflict, and Humanitarian Assistance. Additionally, we traveled to Kenya to interview key officials at the embassy and mission who are responsible for managing and monitoring U.S. support for Kenya’s constitutional reform process. We spoke with responsible State officials including the ambassador, Deputy Chief of Mission, Political Counselor, and other relevant political officers. We also spoke with responsible USAID officials including the Deputy Mission Director and officials from the Bureaus for Global Health and Democracy, Conflict, and Humanitarian Assistance, including the Offices of Transition Initiatives and Democracy and Governance. We also discussed guidance with the 9 award recipients and 24 subrecipients we interviewed, and we documented the responses they gave concerning any guidance USAID had provided them regarding compliance with the Siljander Amendment. The information on foreign law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources.
We conducted our work between November 2010 and October 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Kenya has had a long history of attempted constitutional reform. Before passing the new constitution in August 2010, Kenya had amended its original constitution several times since gaining independence from the United Kingdom in 1963. For a chronological list of constitutional reform-related events, see figure 2. USAID requires language prohibiting abortion-related activities in all assistance and acquisition awards. The mandatory provision language reads in part, “No funds made available under the award will be used to finance, support, or be attributed to . . . lobbying for or against abortion.” USAID officials told us that while the provision language had been included in family planning awards for decades, it became a USAID requirement for all assistance awards in May 2006 and for all acquisition awards in June 2008. Of the 12 awards that GAO identified USAID had made for the constitutional reform process in Kenya through June 2011, 5 were at some point out of compliance with the requirement to include the abortion-related language. Of these 5 awards, 2 were associate awards from a leader award and 2 were task orders placed from an indefinite delivery/indefinite quantity (IDIQ) contract. In those instances, according to USAID officials, all of the mandatory provisions included in either the leader award or the IDIQ contract are assumed to flow down to the associate award or task order, respectively, without the need to be reprinted.
However, in each of these 4 awards, the leader award or IDIQ contract was signed before the abortion-related language requirement took effect and did not include the language. The associate awards and task orders, therefore, did not include the language either. The abortion-related language, however, had not yet become a requirement when the two task orders themselves were signed. As shown in figure 3, these two acquisition awards were not modified when the abortion-related language requirement took effect. USAID contracting officials we interviewed told us the omission of the abortion-related language from the fifth award was likely due to human oversight. The USAID contracting officials told us they added the mandatory language to 4 of the awards as quickly as possible. They did so following the USAID IG inquiry that brought the omission to their attention in mid-2010, although in three of the four cases they did not add the language until either the day before the August 4, 2010, referendum or afterward. According to USAID officials, the delay in adding the language was due to the nature of the contracting process. They told us USAID contracting officials cannot modify awards without a requisition for modification from the technical offices, including programmatic and financial officials. Furthermore, the contracting officials at the Mission did not have copies of all of the awards, particularly the IDIQ contracts, as those had been signed in Washington. The officials told us it was time-consuming to determine where the awards were located, whether they included the mandatory language or not, and whether they could modify them at the Mission or whether the contracting office in Washington had to do the modifications. They received that information in an e-mail on July 28, 2010, 1 week before the referendum, and made the modifications shortly thereafter.
Officials from USAID’s Office of Acquisition and Assistance (OAA) told us that USAID’s new web-based procurement information system automatically includes mandatory provisions in awards, including the language prohibiting abortion-related activities, although the system is not foolproof. OAA officials in Washington told us that the Global Acquisition and Assistance System (GLAAS) procurement information system includes award templates with standard clauses for each type of award. They said that GLAAS generates mandatory provisions, such as the language on prohibiting abortion-related activities, based on the award type chosen. They went on to say that GLAAS greatly reduces the possibility of human error in including all mandatory provisions. The OAA officials we spoke with at the Mission agreed with this assessment, but they also emphasized that GLAAS is not foolproof. For example, they told us that GLAAS does not yet capture all types of award mechanisms, nor have all USAID staff begun using GLAAS. In addition, they said that while GLAAS automatically includes new mandatory provisions, contracting officials can copy language from a recently generated similar award and then upload that language into GLAAS, bypassing the standard and mandatory inclusions the system would otherwise make. To address this, USAID officials told us that OAA is in the process of issuing a policy directive to require that all contracting officials generate their award documents through GLAAS.

Appendix V: Comments from the U.S. Agency for International Development

Jacquie Williams-Bridgers
Managing Director, International Affairs & Trade
U.S. Government Accountability Office
Washington, DC 20548

I am pleased to provide the formal response of the U.S. Agency for International Development (USAID) to the GAO draft report entitled "Foreign Assistance: Clearer Guidance Needed on Overseas Compliance with Legislation Prohibiting Abortion-Related Lobbying" (GAO-12-35).
The enclosed USAID comments are provided for incorporation with this letter as an appendix to the final report. Thank you for the opportunity to respond to the GAO draft report and for the courtesies extended by your staff in the conduct of this audit review.

Foreign Assistance: Clearer Guidance Needed on Compliance Overseas with Legislation Prohibiting Abortion-Related Lobbying (GAO-12-35)

Constitutional reform in Kenya has been a cornerstone of the reform agenda endorsed by the Kenyan Coalition Government in the wake of the violence that devastated the country following the disputed December 2007 presidential elections. The U.S. Government supports the process of constitutional reform and, like the vast majority of Kenyans, believes a new constitution is a critical element in laying the foundation for deepened democracy and prosperity in Kenya. In August 2010, the Kenyan people overwhelmingly approved the referendum on the draft constitution. USAID has funded a broad spectrum of activities in support of the constitutional reform process, free and fair elections, increased transparency and efficiency in the government, and civic education and voter registration. Following the August 2010 referendum, USAID has continued to work with the Kenyan government and people to support the constitutional reform process in the country. USAID takes compliance with the abortion-related restrictions, including the Siljander Amendment, very seriously. Over the years, the Agency has taken a number of steps to ensure compliance with these restrictions, such as the inclusion of mandatory standard provisions in all Agency awards with implementing partners, the development of live and online training materials, presentations at Agency conferences, and the development of compliance tools and resources for USAID and partner staff.
In the past year, the Agency has also taken steps to increase awareness of these restrictions among non-health staff, particularly those working in the area of democracy and governance. USAID is committed to ensuring compliance with these restrictions and continually seeks to strengthen and refine our existing compliance resources. Unlike the U.S. constitution, the new Kenyan constitution is a lengthy document containing 264 articles spanning nearly 200 pages of text. The Right to Life article referred to in your report is one article among hundreds in the document. As with several other sections of the draft constitution, this particular article generated significant discussion within and outside Kenya, and many entities – from medical associations to religious groups – expressed public views on it throughout the period leading up to the referendum. As your report notes, however, there is no indication that U.S. officials gave an opinion publicly on this issue or attempted to influence the provision. While your report also correctly notes that in several instances this article was addressed by USAID-funded implementing partners, in no case did the activities constitute a violation of the Siljander Amendment. We do not believe that the USAID-funded activity providing technical assistance to the drafters of the Kenyan constitution violated the Siljander Amendment. In 2009, the Kenyan Committee of Experts (COE), a non-governmental entity charged with drafting the constitution by the Kenyan government, requested that USAID provide funding to a specific public international organization (PIO) for purposes of providing advice on the draft constitution. At the COE’s request, USAID funded the PIO to provide such advice, and the PIO subsequently contracted with a group of constitutional scholars who prepared several lengthy reports analyzing the draft constitution, article by article. 
The scholars’ major recommendations related to issues such as the authorities of the executive and legislative branches, election processes, a proposed ban on ethnic minorities, and land tenure rights. Although two of the scholars’ reports included comments relating to the provisions in the Right to Life article, they did not highlight these points in particular or make them a focus of their key recommendations. In fact, we believe that the GAO’s draft report dramatically overstates the importance of the scholars’ comments on the evolution of the Right to Life article. As we noted above, many organizations in Kenya expressed ardent views on this provision leading up to the referendum, and these opinions may well have impacted the COE’s decisions on text. For example, the State Inspector General’s report on this issue, dated August 2010, found that the COE revised the text after consulting with Kenyan medical professionals. However, the chart set forth on page 13 of the draft report implies that the only entities advising the COE on this issue were the scholars and the Parliamentary Select Committee. Indeed, the chart suggests a causal link when the draft report itself does not find one, as you note that the GAO was “unable to confirm whether the COE changed the Right to Life article” based on the scholars’ advice. We therefore request that the GAO delete the chart in its entirety or indicate substantial input from other sources. In any event, we do not believe that the scholars’ two references to the Right to Life article constituted lobbying for or against abortion. We considered several factors in arriving at this conclusion. First, the scholars were providing advice to the COE upon the COE’s request. They did not reach out on their own initiative to express a view on abortion or any other issue related to the constitution. 
Second, the group did not single out the article for focus but rather commented on it as part of its exhaustive article-by-article review of the draft. Third, USAID obtained a legal opinion from Kenyan counsel indicating that the Right to Life article in the draft constitution would maintain the status quo on the country’s existing abortion law and would not represent a change. Finally, the COE was a non-governmental entity, separate and distinct from the Kenyan government. In light of these factors, USAID has concluded that there is no evidence of a violation of the Siljander Amendment in connection with the scholars’ reports. Similarly, there is no evidence that any USAID-funded civic education activities violated the Siljander Amendment. As your report notes, USAID-funded civic education activities sought to inform Kenyans on the general contents of the proposed constitution. In the context of general civic education, USAID-funded sub-recipients addressed questions from Kenyans on many provisions of the constitution, including in some cases the Right to Life article. They were not lobbying on the issue but rather trying to ensure that citizens were familiar with the text in the document. Recommendation: To ensure the actions of U.S. officials and implementing partners comply with the legislative prohibition against using certain U.S. assistance funds to lobby for or against abortion, we recommend that the Secretary of State and the USAID Administrator develop specific guidance on compliance with the Siljander Amendment, including what kinds of activities are prohibited, disseminate this guidance throughout their agencies, and make it available to award recipients and subrecipients. Management Comments: As noted above, USAID takes compliance with the abortion restrictions very seriously. USAID will build upon its existing compliance tools and resources to develop additional guidance for USAID and implementing partner staff on the Siljander Amendment. Jacquelyn L. 
Williams-Bridgers, (202) 512-3101 or williamsbridgersj@gao.gov. Key contributors to this report include Jess Ford, James Michels, Judith Williams, Chloe Brown, Mary Moutsos, William Tuceling, Martin De Alteriis, Debbie Chung, Etana Finkler, Christopher Mulkins, and Michael Kniss.

Following a 2007 disputed election and widespread violence, Kenya reformed its constitution, which its voters approved in August 2010. The United States has provided over $18 million to support this process to date. GAO was asked to (1) describe any involvement that U.S. officials have had in Kenya's constitutional reform process relating to abortion; (2) describe any support that U.S.-funded award recipients and subrecipients have provided in Kenya's constitutional reform process relating to abortion; and (3) assess the extent to which agencies have developed and implemented guidance on compliance with the Siljander Amendment, which prohibits using certain assistance funds to lobby either for or against abortion. GAO analyzed documents and interviewed officials from the U.S. Agency for International Development (USAID), the Department of State (State), award recipients and subrecipients, and the Kenyan government, and conducted an extensive media search. Between 2008 and 2010, U.S. officials, including the U.S. ambassador to Kenya, publicly expressed support for Kenya's constitutional reform process. GAO found no indication that U.S. officials opined on the issue of abortion publicly or attempted to influence the abortion-related provisions of the draft constitution--a finding corroborated by a key Kenyan parliamentarian who served on the committee assisting in the constitutional reform process. U.S.-funded award recipients and their subrecipients supported the constitutional reform process through activities that included civic education and technical assistance, both of which addressed the issue of abortion to some extent.
USAID-funded civic education sought to inform Kenyans on the text of the draft constitution, and GAO found that some forums included discussion of abortion-related provisions. Some subrecipients undertook interpretation of the provisions at their forums, including describing scenarios in which abortion might be allowed. Several subrecipients explained to the public that, in their view, future legislation might be required to implement and further articulate the abortion-related provisions. While some subrecipients addressed the abortion-related provisions of the constitution, GAO found no indication that they cited the abortion provisions as a rationale to vote for or against the constitution. USAID also funded a technical assistance award to the International Development Law Organization (IDLO) to support the Committee of Experts (COE), the nongovernmental entity charged with drafting the constitution. In the course of providing comments and advice regarding the entire draft constitution, IDLO made suggestions relating to the issues of fetal rights and abortion during the early stages of drafting. IDLO later commented on broadening the exceptions when abortion would be legal. The COE has indicated that it generally considered IDLO's advice when revising the draft constitution. The final draft of the constitution is consistent with some of IDLO's advice relating to abortion, though GAO could not determine whether the COE made these changes in direct response to IDLO's advice. Neither State nor USAID has clear guidance for compliance with the Siljander Amendment, which makes it difficult for some agency officials and award recipients to determine what types of activities are prohibited. State has not developed any guidance at all on the prohibition.
USAID has offered training for its health and legal officers on compliance with family planning-related legislation, including the Siljander Amendment, for years and began offering some training to other officials in 2010. However, USAID's training and other family planning resources do not identify specific types of activities that are prohibited under the amendment. State and USAID attorneys indicated that they are available to provide advice to staff on a case-by-case basis, upon request. However, some State and USAID officials and award recipients GAO spoke to said that they were unclear as to what specific activities were prohibited. GAO recommends that State and USAID develop specific guidance on compliance with the Siljander Amendment, indicating what kinds of activities may be prohibited, disseminate this guidance throughout their agencies, and make it available to award recipients and subrecipients. USAID concurred. State concurred that it should inform staff of the amendment but not that it should provide examples of potentially prohibited activities. GAO continues to believe that providing such examples would enable officials to better understand the amendment and when to seek additional guidance.
One unconventional energy resource that has received renewed attention in recent years in the United States is oil shale. Historically, interest in oil shale development as a domestic energy source has waxed and waned since the early 1900s, as average crude oil prices have generally been lower than the threshold necessary to make oil shale development profitable over time. More recently, however, higher oil prices have renewed interest in developing oil shale. The federal government is in a unique position to influence the development of oil shale because nearly three-quarters of the oil shale within the Green River Formation lies beneath federal lands managed by the Department of the Interior’s (Interior) Bureau of Land Management (BLM). The Energy Policy Act of 2005 directed Interior to lease its lands for oil shale research and development. In June 2005, BLM initiated a leasing program for research, development, and demonstration (RD&D) of oil shale recovery technologies. By early 2007, it had granted six small RD&D leases: five in the Piceance Basin of northwest Colorado and one in the Uintah Basin of northeast Utah. The leases are for a 10-year period, and if the technologies are proven commercially viable, the lessees can significantly expand the size of the leases for commercial production into adjacent areas known as preference right lease areas. The Energy Policy Act of 2005 also directed Interior to develop a programmatic environmental impact statement (PEIS) for a commercial oil shale leasing program. During the drafting of the PEIS, however, BLM determined that, without proven commercial technologies, it could not adequately assess the environmental impacts of oil shale development and dropped from consideration the decision to offer additional specific parcels for lease. Instead, the PEIS analyzed making lands available for potential leasing and allowing industry to express interest in lands to be leased. 
Environmental groups then filed lawsuits, challenging various aspects of the PEIS and the RD&D program. Since then, BLM has initiated another round of oil shale RD&D leasing, and the lawsuits were settled. Stakeholders in the future development of oil shale are numerous and include the federal government, state government agencies, the oil shale industry, academic institutions, environmental groups, and private citizens. Among federal agencies, BLM manages federal land and the oil shale beneath it and develops regulations for its development. The United States Geological Survey (USGS) describes the nature and extent of oil shale deposits and collects and disseminates information on the nation’s water resources, which are a significant consideration for oil shale development in the West. The Department of Energy (DOE) advances energy technologies, including oil shale technology, through its various offices, national laboratories, and arrangements with universities. The Environmental Protection Agency (EPA) sets standards for pollutants that could be released by oil shale development and reviews environmental impact statements, such as the PEIS. Also, Interior’s Bureau of Reclamation (BOR) manages federally built water projects that store and distribute water in 17 western states and provides this water to users, including states where oil shale research, development, and demonstration is underway. Our October 2010 report found that oil shale development presents significant opportunities for the United States. Potential opportunities associated with oil shale development include increasing domestic oil production and socioeconomic benefits. Increasing domestic oil production. Being able to tap the vast amounts of oil locked within U.S. oil shale formations could go a long way toward satisfying the nation’s future oil demands.
The Green River Formation—an assemblage of over 1,000 feet of sedimentary rocks that lie beneath parts of Colorado, Utah, and Wyoming—contains the world’s largest deposits of oil shale. USGS estimates that the Green River Formation contains about 3 trillion barrels of oil, and about half of this may be recoverable, depending on available technology and economic conditions. The Rand Corporation, a nonprofit research organization, estimates that 30 to 60 percent of the oil shale in the Green River Formation can be recovered. At the midpoint of this estimate, almost half of the 3 trillion barrels of oil would be recoverable. This is an amount about equal to the entire world’s proven oil reserves. The thickest and richest oil shale within the Green River Formation exists in the Piceance Basin of northwest Colorado and the Uintah Basin of northeast Utah. Figure 1 shows where these prospective oil shale resources are located in Colorado and Utah. Socioeconomic benefits. Development of oil shale resources could also yield important socioeconomic benefits, including the creation of jobs, increases in wealth, and increases in tax and royalty payments to federal and state governments for oil produced on their lands. Our October 2010 report did not attempt to quantify these potential socioeconomic benefits because of current uncertainty surrounding the technologies that might be used to develop oil shale resources, which would influence the ultimate size of a future oil shale industry. Our October 2010 report also found, however, that there are a number of key challenges associated with potential oil shale development in the United States, including: (1) uncertainty about viable technologies, (2) environmental impacts that affect water quantity and quality, air, and land, and (3) socioeconomic impacts. Uncertainty about viable technologies. 
A significant challenge to the development of oil shale lies in the uncertainty surrounding the viability of current technologies to economically extract oil from oil shale. To extract the oil, the rock needs to be heated to very high temperatures—ranging from about 650 to 1,000 degrees Fahrenheit—in a process known as retorting. Retorting can be accomplished primarily by two methods. One method involves mining the oil shale, bringing it to the surface, and heating it in a vessel known as a retort. Mining oil shale and retorting it has been demonstrated in the United States and is currently done to a limited extent in Estonia, China, and Brazil. However, a commercial mining operation with surface retorts has never been developed in the United States because the oil it produces competes directly with conventional crude oil, which historically has been less expensive to produce. The other method, known as an in-situ process, involves drilling holes into the oil shale, inserting heaters to heat the rock, and then collecting the oil as it is freed from the rock. Some in-situ technologies have been demonstrated on very small scales, but other technologies have yet to be proven, and none has been shown to be economically or environmentally viable at a commercial scale. According to some energy experts, the key to developing our country’s oil shale is the development of an in-situ process because most of the richest oil shale is buried beneath hundreds to thousands of feet of rock, making mining difficult or impossible. In addition to these uncertainties, transporting the oil produced from oil shale to refineries may pose challenges because pipelines and major highways are scarce in the remote areas where the oil shale is located, and the large-scale infrastructure that would be needed to supply power to heat the oil shale is lacking. Environmental impacts on water, air, and wildlife.
Developing oil shale resources poses significant environmental challenges, particularly for water quantity and quality but also for air and wildlife. Water quantity. Oil shale development could have significant impacts on the quantity of surface water and groundwater resources, but the magnitude of these impacts is unknown because the technologies are unproven, the size of a future oil shale industry is uncertain, and knowledge of current water conditions and groundwater flow is limited. Developing oil shale and providing power for oil shale operations and other associated activities will require significant amounts of water, which could pose problems, particularly in the arid West where an expanding population is already placing additional demands on available water resources. For example, some analysts project that large-scale oil shale development within Colorado could require more water than is currently supplied to over 1 million residents of the Denver metro area and that water diverted for oil shale operations would restrict agricultural and urban development. The potential demand for water is further complicated by the past decade of drought in the West and projections of a warming climate in the future. Current estimates of the quantities of water needed to support a future oil shale industry vary significantly depending upon the assumptions that are made. However, as our 2010 report noted, while water is likely to be available for the initial development of an oil shale industry, the eventual size of the industry may be limited by the availability of water and demands for water to meet other needs of the region.
Oil shale companies operating in Colorado and Utah will need water rights to develop oil shale, and representatives from all of the companies with whom we spoke for our 2010 report were confident that they held at least enough water rights for their initial projects and would likely be able to purchase more rights in the future. Sources of water for oil shale will likely be surface water in the immediate area, such as the White River, but groundwater could also be used. However, as we reported in 2010, the possibility of competing municipal and industrial demands for future water, a warming climate, future needs under existing compacts, and additional water needs for the protection of threatened and endangered fishes may eventually limit the size of a future oil shale industry. Water quality. While the water quantity impacts from oil shale development are difficult to quantify precisely at this time, hydrologists and engineers have been able to determine more definitively the water quality impacts that are likely, because other types of mining, construction, and oil and gas development cause disturbances similar to those expected from oil shale development. 
According to these experts, in the absence of effective mitigation measures, oil shale development could affect water resources in four principal ways:
(1) disturbances to the ground surface during the construction of roads and production facilities, which could degrade surface water quality through the runoff of sediment, salts, and possibly chemicals into nearby rivers and streams;
(2) the withdrawal of water from streams and aquifers for oil shale operations, which could decrease flows downstream and temporarily degrade downstream water quality by depositing sediment during decreased flows;
(3) underground mining and extraction, which would permanently affect aquifers by altering groundwater flows through these zones; and
(4) the discharge of wastewaters from oil shale operations, which could temporarily increase water flows into receiving streams, thereby altering water quality and water temperature.
Air. Construction and mining activities during the development of oil shale resources can temporarily degrade air quality in local areas. There can also be long-term regional increases in air pollutants from oil shale processing and the generation of additional electricity to power oil shale development operations. Pollutants, such as dust, nitrogen oxides, and sulfur dioxide, can contribute to the formation of regional haze that can affect adjacent wilderness areas, national parks, and national monuments, which can have very strict air quality standards. Environmental impacts could also be compounded by the impacts of coal mining, construction, and extensive oil and gas development in the area, and air quality appears to be particularly susceptible to the cumulative effect of these development impacts. According to some environmental experts we spoke to for our 2010 report, air quality impacts may be the limiting factor for the development of a large oil shale industry in the future. Wildlife. 
Oil shale operations are likely to clear large surface areas of topsoil and vegetation, and as a result, some wildlife habitat will be lost. Important species likely to be negatively affected by this loss of habitat include mule deer, elk, sage grouse, and raptors. Noise from oil shale operations, access roads, transmission lines, and pipelines can further disturb wildlife and fragment their habitat. Wildlife is also particularly susceptible to the cumulative effects of nearby industrial development. In addition, the withdrawal of large quantities of surface water for oil shale operations could negatively affect aquatic life downstream of the oil shale development. Socioeconomic impacts. Large-scale oil shale development offers certain socioeconomic benefits outlined earlier, but it also poses some socioeconomic challenges. Oil shale development can bring a sizeable influx of workers, who, along with their families, put additional stress on local infrastructure such as roads, housing, municipal water systems, and schools. As noted in our 2010 report, development from expansion of extractive industries, such as oil shale or oil and gas, has typically followed a “boom and bust” cycle, making planning for growth difficult for local governments. Furthermore, a future oil shale industry would have the potential to replace traditional rural uses with industrial development of the landscape, and tourism that relies on natural resources, such as hunting, fishing, and wildlife viewing, could be negatively affected. Our 2010 report noted that current federal research efforts on the impacts of oil shale development do not provide sufficient data for future monitoring and that there is a greater need for collaboration among key stakeholders to address water resources and research issues related to oil shale development. 
As noted earlier, the federal government is in a unique position to influence the development of oil shale because 72 percent of the oil shale within the Green River Formation lies beneath federal lands managed by BLM. In addition to its leasing of these lands, Interior has sponsored oil shale projects related to water resources—to develop a common repository of water data collected from the Piceance Basin and to begin monitoring groundwater quality and quantity within this basin using existing and future wells. The common repository project was funded jointly with Colorado cities and counties as well as with oil shale companies. DOE also plays an important role in developing these resources and has sponsored most of the oil shale research that involves water-related issues. In addition, DOE provides technological and financial support for oil shale development through its research and development efforts. However, our October 2010 report noted that Interior and DOE officials generally have not shared information on oil shale research and that there is a need for federal agencies to improve their efforts to collaborate and develop more comprehensive baseline information on the current condition of groundwater and surface water in these areas. Such information will be important for understanding the potential impacts of oil shale development on water resources in the region. To prepare for possible impacts from the potential future development of oil shale, which industry experts believe is at least 15 to 20 years away, we made three recommendations in our October 2010 report to the Secretary of the Interior. 
We recommended that the Secretary direct BLM and USGS to:
(1) establish comprehensive baseline conditions for groundwater and surface water quality, including their chemistry, and quantity in the Piceance and Uintah Basins to aid in the future monitoring of impacts from oil shale development in the Green River Formation;
(2) model regional groundwater movement and the interaction between groundwater and surface water, in light of aquifer properties and the age of groundwater, to help in understanding the transport of possible contaminants derived from the development of oil shale; and
(3) coordinate with DOE and state agencies with regulatory authority over water resources in implementing these recommendations, and provide a mechanism for water-related research collaboration and sharing of results.
Interior fully supported the concepts in the report and agreed with the need to answer the science questions associated with commercial oil shale production prior to its development. In addition, Interior indicated that it already had begun to take some actions in response to our recommendations. For example, Interior told us that USGS is undertaking an analysis of baseline water resources conditions to improve the understanding of groundwater and surface water systems that could be affected by commercial-scale oil shale development. In addition, Interior stated that BLM and USGS are working to improve coordination with DOE and state agencies with regulatory authority over water resources and noted current ongoing efforts with state authorities. In conclusion, Mr. Chairman, while there are potential opportunities for commercial development of large unconventional oil and gas resources, such as oil shale, in the United States, these opportunities must be balanced against potential technological, environmental, and socioeconomic challenges. 
The recommendations in our October 2010 report on oil shale provide what we believe to be important next steps for federal agencies involved in the development of oil shale, particularly as it relates to water resources. By proactively improving collaboration between departments and state agencies and developing key baseline information, the federal government can position itself to better monitor water resources and other environmental impacts should a viable oil shale industry develop in the future. Chairman Harris, Ranking Member Miller, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. For further information about this testimony, please contact Anu K. Mittal, Director, Natural Resources and Environment team, (202) 512-3841 or mittala@gao.gov. In addition to the individual named above, key contributors to this testimony were Dan Haas (Assistant Director), Alison O’Neill, Barbara Timmerman, and Lisa Vojta.
Department of Energy: Advanced Research Projects Agency-Energy Could Benefit from Information on Applicants’ Prior Funding (GAO-12-112, January 13, 2012).
Energy Development and Water Use: Impacts of Potential Oil Shale Development on Water Resources (GAO-11-929T, August 24, 2011).
Federal Oil and Gas: Interagency Committee Needs to Better Coordinate Research on Oil Pollution Prevention and Response (GAO-11-319, March 25, 2011).
Oil and Gas Leasing: Past Work Identifies Numerous Challenges with Interior’s Oversight (GAO-11-487T, March 17, 2011).
Oil and Gas Management: Key Elements to Consider for Providing Assurance of Effective Independent Oversight (GAO-10-852T, June 17, 2010).
Federal Oil and Gas Management: Opportunities Exist to Improve Oversight (GAO-09-1014T, September 16, 2009). 
Oil and Gas Management: Federal Oil and Gas Resource Management and Revenue Collection In Need of Stronger Oversight and Comprehensive Reassessment (GAO-09-556T, April 2, 2009).
Department of the Interior, Bureau of Land Management: Oil Shale Management—General (GAO-09-214R, December 2, 2008).
Advanced Energy Technologies: Budget Trends and Challenges for DOE’s Energy R&D Program (GAO-08-556T, March 5, 2008).
Department of Energy: Oil and Natural Gas Research and Development Activities (GAO-08-190R, November 6, 2007).
Department of Energy: Key Challenges Remain for Developing and Deploying Advanced Energy Technologies to Meet Future Needs (GAO-07-106, December 20, 2006).
Energy-Water Nexus: Information on the Quantity, Quality, and Management of Water Produced during Oil and Gas Production (GAO-12-156, January 9, 2012).
Energy-Water Nexus: Amount of Energy Needed to Supply, Use, and Treat Water Is Location-Specific and Can Be Reduced by Certain Technologies and Approaches (GAO-11-225, March 23, 2011).
Energy-Water Nexus: A Better and Coordinated Understanding of Water Resources Could Help Mitigate the Impacts of Potential Oil Shale Development (GAO-11-35, October 29, 2010).
Energy-Water Nexus: Many Uncertainties Remain about National and Regional Effects of Increased Biofuel Production on Water Resources (GAO-10-116, November 30, 2009).
Energy-Water Nexus: Improvements to Federal Water Use Data Would Increase Understanding of Trends in Power Plant Water Use (GAO-10-23, October 16, 2009).
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Fossil fuels are important to both the global and U.S. 
economies, and unconventional oil and gas resources—resources that cannot be produced, transported, or refined using traditional techniques—are expected to play a larger role in helping the United States meet future energy needs. With rising energy prices, one such resource that has received renewed domestic attention in recent years is oil shale. Oil shale is a sedimentary rock that contains solid organic material that can be converted into an oil-like product when heated. About 72 percent of this oil shale is located within the Green River Formation in Colorado, Utah, and Wyoming and lies beneath federal lands managed by the Department of the Interior’s Bureau of Land Management, making the federal government a key player in its potential development. In addition, the Department of Energy (DOE) advances energy technology, including for oil shale, through its various offices, national laboratories, and arrangements with universities. GAO’s testimony is based on its October 2010 report on the impacts of oil shale development (GAO-11-35). This testimony summarizes the opportunities and challenges of oil shale development identified in that report and the status of prior GAO recommendations that Interior take actions to better prepare for the possible future impacts of oil shale development. In its October 2010 report, GAO noted that oil shale development presents the following opportunities for the United States: Increasing domestic oil production. Tapping the vast amounts of oil locked within U.S. oil shale formations could go a long way toward satisfying the nation’s future oil demands. Oil shale deposits in the Green River Formation are estimated to contain up to 3 trillion barrels of oil, half of which may be recoverable—an amount about equal to the entire world’s proven oil reserves. Socioeconomic benefits. 
Development of oil shale resources could lead to the creation of jobs, increases in wealth, and increases in tax and royalty payments to federal and state governments for oil produced on their lands. The extent of these benefits, however, is unknown at this time because the ultimate size of the industry is uncertain. In addition to these opportunities and the uncertainty of not yet having an economically and environmentally viable commercial-scale technology, the following challenges should also be considered: Impacts on water, air, and wildlife. Developing oil shale and providing power for oil shale operations and other activities will require large amounts of water and could have significant impacts on the quality and quantity of surface and groundwater resources. In addition, construction and mining activities during development can temporarily degrade air quality in local areas. There can also be long-term regional increases in air pollutants from oil shale processing and the generation of additional electricity to power oil shale development operations. Oil shale operations will also require the clearing of large surface areas of topsoil and vegetation, which can affect wildlife habitat, and the withdrawal of large quantities of surface water, which could also negatively impact aquatic life. Socioeconomic impacts. Oil shale development can bring an influx of workers, who, along with their families, can put additional stress on local infrastructure such as roads, housing, municipal water systems, and schools. Development from expansion of extractive industries, such as oil shale or oil and gas, has typically followed a “boom and bust” cycle, making planning for growth difficult for local governments. Moreover, traditional rural uses would be displaced by industrial uses, and areas that rely on tourism and natural resources would be negatively impacted. 
GAO’s 2010 report found that federal research efforts on the impacts of oil shale development did not provide sufficient data for future monitoring and that there was a greater need for collaboration among key federal stakeholders to address water resources and research issues. Specifically, Interior and DOE officials generally had not shared information on their oil shale research efforts, and there was a need for the federal agencies to improve their collaboration and develop more comprehensive baseline information related to water resources in the region. GAO made three recommendations to Interior, which the department generally concurred with and has already begun to take actions to address.
Polar-orbiting satellites provide data and imagery that are used by weather forecasters, climatologists, and the military to map and monitor changes in weather, climate, the oceans, and the environment. Since the 1960s, the United States has operated two separate operational polar-orbiting meteorological satellite systems: the Polar-orbiting Operational Environmental Satellite (POES) series, which is managed by NOAA, and the Defense Meteorological Satellite Program (DMSP), which is managed by the Air Force. Currently, there is one operational POES satellite and two operational DMSP satellites, positioned so that they can observe the earth in early morning, midmorning, and early afternoon polar orbits. The government is also relying on a European satellite, called Meteorological Operational, or MetOp, in the midmorning orbit. With the expectation that combining the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 Presidential Decision Directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program, NPOESS, is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting and global climate monitoring. To manage this program, DOD, NOAA, and NASA formed the tri-agency Integrated Program Office, located within NOAA. Within the program office, each agency has the lead on certain activities: NOAA has overall program management responsibility for the converged system and for satellite operations; the Air Force has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the cost of funding NPOESS, while NASA funds specific technology projects and studies. 
In addition, an Executive Committee—made up of the administrators of NOAA and NASA and the Under Secretary of Defense for Acquisition, Technology, and Logistics—is responsible for providing policy guidance, ensuring agency support and funding, and exercising oversight authority. The Executive Committee manages the program through a Program Executive Officer who oversees the NPOESS program office. Since the program’s inception, NPOESS costs have grown to $13.95 billion, and launch schedules have been delayed by up to five years. In addition, as a result of a 2006 restructuring of the program, the agencies reduced the program’s functionality by removing 2 of the 6 originally planned satellites and one of the orbits. The restructuring also decreased the number of instruments from 13 (10 sensors and 3 subsystems) to 9 (7 sensors and 2 subsystems), with 4 of the sensors providing fewer capabilities. The restructuring also led agency executives to mitigate potential data gaps by deciding to use a planned demonstration satellite, called the NPOESS Preparatory Project (NPP) satellite, as an operational satellite providing climate and weather data. However, even after this restructuring, the program is still encountering technical issues, schedule delays, and the likelihood of further cost increases. Over the past year, selected components of the NPOESS program have made progress. Specifically, three of the five instruments slated for NPP have been delivered and integrated on the spacecraft; the ground-based satellite data processing system has been installed and tested at both of the locations that are to receive NPP data; and the satellites’ command, control, and communications system has passed acceptance testing. However, problems with two critical sensors continue to drive the program’s cost and schedule. 
Specifically, challenges with a key sensor’s (the Visible/Infrared Imager Radiometer Suite (VIIRS)) development, design, and workmanship have led to additional cost overruns and delayed the instrument’s delivery to NPP. In addition, problems discovered during environmental testing on another key sensor (called the Cross-track Infrared Sounder (CrIS)) led the contractor to further delay its delivery to NPP and added further unanticipated costs to the program. To address these issues, the program office halted or delayed activities on other components (including the development of a sensor planned for the first NPOESS satellite, called C1) and redirected those funds to fixing VIIRS and CrIS. As a result, those other activities now face cost increases and schedule delays. Program officials acknowledge that NPOESS will cost more than the $13.95 billion previously estimated, but they have not yet adopted a new cost estimate. Program officials estimated that program costs will grow by about $370 million due to recent technical issues experienced on the sensors and the costs associated with halting and then restarting work on other components of the program. In addition, the costs associated with adding new information security requirements to the program could reach $200 million. The $13.95 billion estimate also does not include approximately $410 million for operations and support costs for the last two years of the program’s life cycle (2025 and 2026). Thus, we anticipate that the overall cost of the program could grow by about $1 billion from the current $13.95 billion estimate—especially given that the difficult integration and testing of the sensors on the NPP and C1 spacecraft has not yet occurred. Program officials reported that they plan to revise the program’s cost estimate over the next few weeks and to submit it for executive-level approval by the end of June 2009. 
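Taken together, the cost figures above account for nearly all of the anticipated $1 billion in growth. A rough back-of-the-envelope sketch of that sum (the category labels are ours; the dollar figures come from the testimony):

```python
# Back-of-the-envelope check of the potential NPOESS cost growth
# described in the testimony (figures in millions of dollars).
cost_components = {
    "sensor technical issues and work stoppages": 370,  # VIIRS/CrIS rework
    "new information security requirements": 200,       # upper-bound estimate
    "operations and support, 2025-2026": 410,           # last two life-cycle years
}

total_identified = sum(cost_components.values())
print(f"Identified growth: ${total_identified} million")

# The testimony rounds this to "about $1 billion"; the remainder reflects
# integration and testing risk that had not yet been incurred.
```

The roughly $20 million left between the identified components and the $1 billion figure is consistent with the testimony's caveat about integration and testing of the sensors still lying ahead.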
As for the program’s schedule, program officials estimate that the delivery of VIIRS to the NPP contractor will be delayed, resulting in a further delay in the launch of the NPP satellite to January 2011, a year later than the date estimated during the program restructuring—and seven months later than the June 2010 date that was established last year. In addition, program officials estimated that the first and second NPOESS satellites would be delayed by 14 and 5 months, respectively, because selected development activities were halted or slowed to address VIIRS and CrIS problems. The program’s current plans are to launch C1 in March 2014 and the second NPOESS satellite, called C2, in May 2016. Program officials notified the Executive Committee and DOD’s acquisition authority of the schedule delays, and under DOD acquisition rules, are required to submit a new schedule baseline by June 2009. These launch delays have endangered our nation’s ability to ensure the continuity of polar-orbiting satellite data. The final POES satellite, called NOAA-19, is in an afternoon orbit and is expected to have a 5-year lifespan. Both NPP and C1 are planned to support the afternoon orbit. Should the NOAA-19 satellite fail before NPP is launched, calibrated, and operational, there would be a gap in satellite data in that orbit. Further, the delays in C1 mean that NPP will not be the research and risk reduction satellite it was originally intended to be. Instead, it will have to function as an operational satellite until C1 is in orbit and operational—and if C1 fails on launch or in early operations, NPP will be needed to function until C3 is available, currently planned for 2018. The delay in the C2 satellite launch affects the early morning orbit. There are three more DMSP satellites to be launched in the early and midmorning orbits, and DOD is revisiting the launch schedules for these satellites to try to extend them as long as possible. 
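The launch-slip dates quoted above are internally consistent; a minimal sketch of the month arithmetic (the helper function is ours, and the dates are month-level approximations taken from the testimony):

```python
def add_months(year, month, n):
    """Return (year, month) advanced by n calendar months."""
    total = year * 12 + (month - 1) + n
    return total // 12, total % 12 + 1

# June 2010 baseline plus the seven-month slip gives January 2011.
print(add_months(2010, 6, 7))   # (2011, 1)

# The restructuring-era estimate (inferred as January 2010) plus a
# one-year slip also lands on January 2011, matching "a year later."
print(add_months(2010, 1, 12))  # (2011, 1)
```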
However, an independent review team, established to assess key program risks, recently reported that the constellation of satellites is extremely fragile and that a single launch failure of a DMSP, NPOESS, or NPP satellite could result in a gap in satellite coverage of 3 to 5 years. Although the program’s approved cost and schedule baseline is not achievable and the polar satellite constellation is at risk, the Executive Committee has not yet made a decision on how to proceed with the program. Program officials plan to propose new cost and schedule baselines in June 2009 and have reported that they are addressing immediate funding constraints by deferring selected activities to later fiscal years in order to pay for VIIRS and CrIS problems; delaying the launches of NPP, C1, and C2; and assessing alternatives for mitigating the risk that VIIRS will continue to experience problems. Without an executive-level decision on how to proceed, the program remains on a course that defers cost growth, delays launches, and risks its underlying mission of providing operational weather continuity to the civil and military communities. While the NPOESS Executive Committee has made improvements over the last several years in response to prior recommendations, it has not effectively fulfilled its responsibilities and does not have the membership and leadership it needs to effectively or efficiently oversee and direct the NPOESS program. Specifically, the DOD Executive Committee member with acquisition authority does not attend Committee meetings—and sometimes contradicts the Committee’s decisions—the Committee does not aggressively manage risks, and many of the Committee’s decisions do not achieve desired outcomes. Independent reviewers, as well as program officials, explained that the tri-agency structure of the program makes it very difficult to effectively manage the program. 
Until these shortfalls are addressed, the Committee is unable to effectively oversee the NPOESS program—and important issues involving cost growth, schedule delays, and satellite continuity will likely remain unresolved. We and others, including the Department of Commerce’s Inspector General in a 2006 report, have reported that the Committee was not accomplishing its job effectively. However, since then, the Committee has met regularly on a quarterly basis and held interim teleconferences as needed. The Committee has also sought and reacted to advice from external advisors by, among other actions, authorizing a government program manager to reside onsite at the VIIRS contractor’s facility to improve oversight of the sensor’s development on a day-to-day basis. More recently, the Executive Committee sponsored a broad-based independent review of the NPOESS program and is beginning to respond to its recommendations. As established by the 1995 and 2008 memorandums of agreement signed by all three agencies, the members of the NPOESS Executive Committee are (1) the Under Secretary of Commerce for Oceans and Atmosphere; (2) the Under Secretary of Defense for Acquisition, Technology, and Logistics; and (3) the NASA Administrator. Because DOD has the lead responsibility for the NPOESS acquisition, the Under Secretary of Defense for Acquisition, Technology, and Logistics was also designated as the milestone decision authority—the individual with the authority to approve a major acquisition program’s progression in the acquisition process, as well as any changes to the cost, schedule, and functionality of the acquisition. The intent of the tri-agency memorandums was that acquisition decisions would be agreed to by the Executive Committee before a final acquisition decision is made by the milestone decision authority. However, DOD’s acquisition authority has never attended an Executive Committee meeting. 
This individual delegated the responsibility for attending the meetings—but not the authority to make acquisition decisions—to the Under Secretary of the Air Force. Therefore, none of the individuals who attend the Executive Committee meetings for the three agencies have the authority to approve the acquisition program baseline or major changes to the baseline. As a result, agreements between Committee members have been overturned by the acquisition authority, leading to significant delays. To provide the oversight recommended by best practices, including reviewing data and calling for corrective actions at the first sign of cost, schedule, and performance problems and ensuring that actions are executed and tracked to completion, the Executive Committee holds quarterly meetings during which the program’s progress is reviewed using metrics that provide an early warning of cost, schedule, and technical risks. However, the Committee does not routinely document action items or track those items to closure. Some action items were not discussed in later meetings, and in cases where an item was discussed, it was not always clear what action was taken, whether it was effective, and whether the item was closed. According to the Program Executive Officer, the closing of an action item is not always explicitly tracked because it typically involves gathering information that is presented during later Committee meetings. Nonetheless, by not rigorously documenting action items—including identifying the party responsible for the action, the desired outcome, and the time frame for completion—and then tracking the action items to closure, the Executive Committee is not able to ensure that its actions have achieved their intended results and to determine whether additional changes or modifications are still needed. This impedes the Committee’s ability to effectively oversee the program, direct risk mitigation activities, and obtain feedback on the results of its actions. 
Best practices call for oversight boards to take corrective actions at the first sign of cost, schedule, and performance slippages in order to mitigate risks and achieve successful outcomes. The NPOESS Executive Committee generally took immediate action to mitigate the risks that were brought before it; however, a majority of these actions were not effective—that is, they did not fully resolve the underlying issues or result in a successful outcome. The Committee’s actions on the sensor development risks accomplished interim successes by improving the government’s oversight of a subcontractor’s activities and guiding next steps in addressing technical issues—but even with these actions, the sensors’ performance has continued to falter and affect the rest of the program. Independent reviewers reported that the tri-agency structure of the program complicated the resolution of sensor risks because any decision could be revisited by another agency. Program officials explained that interagency disagreements and differing priorities make it difficult to effectively resolve issues. When NPOESS was restructured in June 2006, the program included two satellites (C1 and C2) and an option to have the prime contractor produce the next two satellites (C3 and C4). In approving the restructured program, DOD’s decision authority noted that he reserved the right to use a different satellite integrator for the final two satellites and that a decision on whether to exercise the option was to be made in June 2010. To prepare for this decision, DOD required a tri-agency assessment of alternative management strategies. This assessment was to examine the feasibility of an alternative satellite integrator, to estimate the cost and schedule implications of moving to an alternative integrator, and, within one year, to provide a viable alternative to the NPOESS Executive Committee. 
To address DOD’s requirement, the NPOESS Program Executive Officer sponsored two successive alternative management studies; however, neither of the studies identified a viable alternative to the existing satellite integrator. The Program Executive Officer plans to conduct a final assessment of alternatives prior to the June 2010 decision on whether to exercise the option to have the current system integrator produce the next two NPOESS satellites. Program officials explained that the program’s evolving costs, schedules, and risks could mean that an alternative that was not viable in the past could become viable. For example, if the prime contractor’s performance no longer meets basic requirements, an alternative that was previously considered too costly might become viable. In the report being released today, we are making recommendations to improve the timeliness and effectiveness of acquisition decision-making on the NPOESS program. Specifically, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to attend and participate in NPOESS Executive Committee meetings.
In addition, we are recommending that the Secretaries of Defense and Commerce and the Administrator of NASA direct the NPOESS Executive Committee to take the following five actions: (1) establish a realistic time frame for revising the program’s cost and schedule baselines; (2) develop plans to mitigate the risk of gaps in satellite continuity; (3) track the Committee’s action items from inception to closure; (4) improve the Committee’s ability to achieve successful outcomes by identifying the desired outcome associated with each of the Committee actions, as well as time frames and responsible parties, when new action items are established; and (5) improve the Committee’s efficiency by establishing time frames for escalating risks to the Committee for action so that they do not linger unresolved at the program executive level. In written comments on a draft of our report, NASA and NOAA agreed with our findings and recommendations and identified plans to implement them. DOD concurred with one recommendation and partially concurred with the others. For example, regarding our recommendation to have the appropriate official attend Executive Committee meetings, the agency partially concurred and noted that the Under Secretary for Acquisition, Technology, and Logistics would evaluate the necessity of attending future Executive Committee meetings. DOD also reiterated that the Under Secretary of the Air Force was delegated authority to attend the meetings. While we acknowledge that the Under Secretary delegated responsibility for attending these meetings, this arrangement is an inefficient way to make decisions and achieve outcomes in this situation. In the past, agreements between Executive Committee members have been overturned by the Under Secretary, leading to significant delays in key decisions. The full text of the three agencies’ comments and our evaluation of those comments are provided in the accompanying report.
In summary, continued problems in the development of critical NPOESS sensors have contributed to growing costs and schedule delays. Costs are now expected to grow by as much as $1 billion over the prior life cycle cost estimate of $13.95 billion, and problems in delivering key sensors have led to delays in launching NPP and the first two NPOESS satellites—by a year or more for NPP and the first NPOESS satellite. These launch delays have endangered our nation’s ability to ensure the continuity of polar-orbiting satellite data. Specifically, if any planned satellites fail on launch or in orbit, there would be a gap in satellite data until the next NPOESS satellite is launched and operational—a gap that could last for 3 to 5 years. The NPOESS Executive Committee responsible for making cost and schedule decisions and addressing the many and continuing risks facing the program has not yet made important decisions on program costs, schedules, and risks—or identified when it will do so. In addition, the Committee has not been effective or efficient in carrying out its oversight responsibilities. Specifically, the individual with the authority to make acquisition decisions does not attend Committee meetings, the Committee does not aggressively manage risks, and many of the Committee’s decisions do not achieve desired outcomes. Until the Committee’s shortfalls are addressed, important decisions may not be effective and issues involving cost increases, schedule delays, and satellite continuity may remain unresolved. Mr. Chairman and members of the Subcommittee, this concludes our statement. We would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. If you have any questions on matters discussed in this testimony, please contact David A. Powner at (202) 512-9286 or at pownerd@gao.gov. Other key contributors to this testimony include Colleen M. Phillips, Assistant Director; Kate Agatone; Neil Doherty; Kathleen S.
Lovett; Lee McCracken; and China R. Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The National Polar-orbiting Operational Environmental Satellite System (NPOESS)—a tri-agency acquisition managed by the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA), the Department of Defense (DOD), and the National Aeronautics and Space Administration (NASA)—is considered critical to the United States’ ability to maintain the continuity of data required for weather forecasting (including severe weather events such as hurricanes) and global climate monitoring. Since its inception, NPOESS has experienced escalating costs, schedule delays, and technical difficulties. As the often-delayed launch of its demonstration satellite (called the NPOESS Preparatory Project—NPP) draws closer, these problems continue. GAO was asked to summarize its report being released today that (1) identifies the status and risks of key program components, (2) assesses the NPOESS Executive Committee’s ability to fulfill its responsibilities, and (3) evaluates efforts to identify an alternative system integrator for later NPOESS satellites. The NPOESS program’s approved cost and schedule baseline is not achievable and problems with two critical sensors continue to drive the program’s cost and schedule. Costs are expected to grow by about $1 billion from the current $13.95 billion cost estimate, and the schedules for NPP and the first two NPOESS satellites are expected to be delayed by 7, 14, and 5 months, respectively.
These delays endanger the continuity of weather and climate satellite data because there will not be a satellite available as a backup should a satellite fail on launch or in orbit—loss of a Defense Meteorological Satellite Program (DMSP) satellite, an NPOESS satellite, or NPP could result in a 3- to 5-year gap in data continuity. Program officials reported that they are assessing alternatives for mitigating risks, and that they plan to propose a new cost and schedule baseline by the end of June 2009. However, the Executive Committee does not have an estimate for when it will make critical decisions on cost, schedule, and risk mitigation. While the NPOESS Executive Committee has made improvements over the last several years in response to prior recommendations, it has not effectively fulfilled its responsibilities and does not have the membership and leadership it needs to effectively or efficiently oversee and direct the NPOESS program. Until its shortfalls are addressed, the Committee will be unable to effectively oversee the NPOESS program—and important issues involving cost growth, schedule delays, and satellite continuity will likely remain unresolved. The NPOESS program has conducted two successive studies of alternatives to using the existing system integrator for the last two NPOESS satellites, but neither identified a viable alternative to the current contractor. Program officials plan to conduct a final study prior to the June 2010 decision on whether to proceed with the existing prime contractor.
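The continuity risk described above is, at bottom, date arithmetic: if a satellite fails on launch or in orbit, the data gap runs from the failure until the next satellite is launched and completes on-orbit checkout. The failure date, launch date, and 90-day checkout period in this sketch are hypothetical placeholders, not actual program dates.

```python
from datetime import date, timedelta

def gap_years(failure: date, next_launch: date, checkout_days: int = 90) -> float:
    """Years of lost coverage between an on-orbit failure and the next
    satellite becoming operational (launch plus on-orbit checkout)."""
    operational = next_launch + timedelta(days=checkout_days)
    return max(0.0, (operational - failure).days / 365.25)

# Hypothetical dates for illustration only: a predecessor satellite fails in
# early 2011 and the next satellite launches in 2014 with ~90 days of checkout.
gap = gap_years(failure=date(2011, 1, 1), next_launch=date(2014, 3, 1))
```

Under these assumed dates the gap works out to roughly 3.4 years, consistent with the 3 to 5 year range cited in the testimony; a later launch or a longer checkout pushes the gap toward the high end of that range.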
Information security is an important consideration for any organization that depends on information systems and computer networks to carry out its mission or business. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. However, without proper safeguards, these developments pose enormous risks that make it easier for individuals and groups with malicious intent to intrude into inadequately protected systems and use such access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer networks and systems. Further, the number of individuals with computer skills is increasing, and intrusion, or hacking, techniques are readily available and relatively easy to use. The rash of cyber attacks launched in February 2000 against major U.S. firms illustrates the risks associated with this new electronic age. Computer-supported federal operations are also at risk. Our previous reports, and those of agency inspectors general, describe persistent computer security weaknesses that place a variety of critical federal operations, including those at IRS, at risk of disruption, fraud, and inappropriate disclosure. This body of audit evidence led us, in reports to the Congress in 1997 and again in 1999, to designate computer security as a governmentwide high-risk area. It remains so today. How well federal agencies are addressing these risks is a topic of increasing interest in both the Congress and the executive branch. This is evidenced by recent hearings on information security, recent legislation intended to strengthen information security, and the President’s January 2000 National Plan for Information Systems Protection.
As outlined in this plan, a number of new, centrally managed entities have been established and projects initiated to assist agencies in strengthening their security programs and improving federal intrusion-detection capabilities. In its role as the nation’s tax collector, IRS has a demanding responsibility in collecting taxes, processing tax returns, and enforcing the nation’s tax laws. IRS processes more than 150 million tax returns, accounts for approximately $1.9 trillion in collections, and pays about $185 billion in refunds to taxpayers annually. To efficiently fulfill its tax processing responsibilities, IRS places extensive reliance on interconnected computer systems to perform various functions, such as collecting and storing taxpayer data, processing tax returns, calculating interest and penalties, generating refunds, and providing customer service. Due to the nature of its mission, IRS collects and maintains a significant amount of personal and financial data on each American taxpayer. These data typically include the taxpayer’s name, address, social security number, dependents, income, source of certain types of income, and certain deductions and expenses. The confidentiality of this sensitive information is important because American taxpayers could be exposed to a loss of privacy and to financial loss and damages resulting from identity theft and financial crimes should this information be disclosed to unauthorized individuals. IRS’ e-file program offers taxpayers an alternative to filing traditional paper returns. With e-file, a taxpayer may file an electronic tax return (1) through a tax professional who is also an authorized IRS e-file provider, (2) through a personal computer to an e-file transmitter, or (3) over the telephone. The e-file program is beneficial because IRS receives tax and information returns in electronic form and does not have to manually enter data into its computer systems as it does with paper returns. 
IRS has asserted that data on electronic tax returns cost less to process and are more accurate than data on paper returns, taxpayers receive refunds faster, and taxpayer privacy and security are assured. The number of individuals filing returns electronically is increasing. During 2000, IRS reported that over 35 million individual taxpayers, about 20 percent more than the previous year, filed their returns electronically. The number of e-file individual returns represented about 28 percent of all individual returns projected to be filed during 2000. The IRS Restructuring and Reform Act of 1998 established a goal that 80 percent of all tax and information returns be filed electronically by 2007. In an attempt to meet this goal, IRS has aggressively marketed the e-file program and has authorized private firms and individuals to be its e-file trading partners. These partners include electronic return originators, who prepare electronic tax returns for taxpayers, and transmitters, who transmit the electronic portion of a return directly to IRS. Except for taxpayers who file electronic returns using telephones, IRS does not allow individual taxpayers to transmit their electronic tax returns directly to the agency. Electronic filers must use the services of an IRS trading partner. The Director, Electronic Tax Administration, is responsible for overseeing IRS’ electronic tax programs, including e-file, and for improving taxpayer awareness of electronic tax administration products and services. The Chief Information Officer has overall responsibility for developing, operating, and securing IRS information systems, including those used for electronic filing. The Director, Submission Processing, is responsible for processing electronically filed tax returns. Computer access controls are key to ensuring that only authorized individuals gain access to sensitive and critical agency data.
They include a variety of tools, such as telecommunications and network control devices, including secure dial-in and firewalls, which can be used to prevent or limit inappropriate access to information system resources; passwords, intended to authenticate authorized users; and encryption, which can be used to keep the contents of a message or data file confidential if security is breached. IRS did not adequately safeguard tax return data on e-file computers. Our tests, conducted in May 2000, showed that access controls over IRS’ electronic filing systems were not effective in adequately reducing the risk of intrusions and misuse of electronically filed taxpayer data. We demonstrated that unauthorized individuals, both internal and external to IRS, could have viewed and modified electronically filed taxpayer data on IRS computers. For example, we were able to access a key electronic filing system using a common handheld computer. We identified weaknesses that, if exploited during the 2000 tax filing season, could have allowed unauthorized individuals to view, copy, or modify files containing electronically filed tax return data before they were sent to the IRS mainframe computer for further processing and to view, alter, delete, or redirect network traffic. In summary, during the 2000 tax filing season, IRS did not effectively restrict external access to its computers supporting the e-file program. A firewall and similar perimeter defenses are an organization’s first line of defense against outside intrusion. However, IRS had not installed effective perimeter defenses to protect its e-file computers. IRS did not securely configure the operating system on its e-file computers. We demonstrated, for example, that the operating system permitted the use of several risky and unnecessary services that could have aided an intrusion attempt. IRS did not implement adequate password management and user account practices on its e-file computers.
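Automated screening can catch guessable passwords of the kind at issue here before they are ever set. The word list and complexity rules in this sketch are illustrative assumptions; the report does not specify IRS' actual password policy.

```python
# Minimal password screen: rejects dictionary words and trivially short or
# single-character-class passwords. The word list is illustrative only.
COMMON_PASSWORDS = {"password", "letmein", "admin", "welcome", "irs2000"}

def password_weaknesses(pw: str) -> list:
    """Return the reasons a candidate password should be rejected (empty if none)."""
    problems = []
    if pw.lower() in COMMON_PASSWORDS:
        problems.append("commonly used password")
    if len(pw) < 8:
        problems.append("shorter than 8 characters")
    classes = sum([any(c.islower() for c in pw),
                   any(c.isupper() for c in pw),
                   any(c.isdigit() for c in pw),
                   any(not c.isalnum() for c in pw)])
    if classes < 3:
        problems.append("fewer than 3 character classes")
    return problems
```

A screen like this, applied when passwords are created or changed, addresses only password strength; it does not substitute for protecting passwords from disclosure, such as the posted user-IDs and passwords described in the findings below.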
We identified serious weaknesses in IRS’ controls over the confidentiality and complexity of its passwords and in the administration of its user accounts. For example, we were able to guess many passwords based on our knowledge of commonly used passwords. We also found user-IDs and passwords that were posted in clear view on a monitor in an unsecured area at one IRS data processing facility. Poor password management and user account practices increased the risk that unauthorized individuals could determine password and user account combinations to gain unauthorized access to IRS e-file systems. IRS did not sufficiently restrict access to computer files and directories containing tax return and other system data. We determined that certain e-file system users with no need for access to electronically filed tax return data could have viewed and modified that data, contrary to IRS’ “need to know” policy. In addition, we determined that all system users had the capability to modify numerous files on e-file computers, including sensitive data and system files, leaving those files much more susceptible to inadvertent or deliberate unauthorized modification. IRS did not encrypt tax return data while the data were stored on e-file computers. The Internal Revenue Manual requires that cryptography be used for protecting information systems from the threat of intruders gaining access by way of remote telephone systems, a threat applicable to e-file computers. IRS did not ensure the protection of sensitive business, financial, and taxpayer data on other critical systems in its servicewide network during the 2000 tax filing season. Weak controls over internal IRS networks could have allowed intruders to use e-file computers to gain unauthorized access to other IRS systems. Certain network control devices—largely intended to protect other internal IRS computer systems from unauthorized access—were not effectively configured or deployed to prevent such intrusions.
For example, IRS personnel “turned off” (bypassed) the network control devices in order to speed up the processing of electronic tax returns. However, these actions exposed other systems attached to IRS’ wide area network to unauthorized access. These findings, together with the results of our computer control reviews at several IRS facilities, indicated that control weaknesses over other IRS networks and systems increased the risk of successful intrusions into those systems. Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they take steps to promptly detect intrusions and misuse before significant damage can be done. Documenting and analyzing security problems and incidents are effective ways for organizations to gain a better understanding of threats to their information and operations and of the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be addressed to help reduce the risk of similar intrusions and misuse. While IRS stated that it did not have evidence that intruders accessed or modified taxpayer data on its e-file systems, its capabilities for detecting intrusions and misuse resulting from the exploitation of vulnerabilities on e-file systems during the 2000 tax filing season were not adequate. IRS did not record certain key events in system audit logs, did not regularly review those logs for unusual or suspicious events or patterns, and had not deployed software to facilitate the detection and analysis of logged events. For example, IRS did not recognize or record much of the activity associated with our test activities. These serious access control weaknesses existed because IRS had not taken adequate steps during the 2000 tax filing season to ensure the ongoing security of electronically transmitted tax return data on its e-file systems.
For example, IRS had not followed or fully implemented several of its own information security policies and guidelines when it developed and implemented controls over its electronic filing systems. It decided to implement and operate its e-file computers before completing all of the security requirements for certification and accreditation. Also, IRS had not fully implemented a continuing program for assessing risk and monitoring the effectiveness of security controls over its electronic filing systems. IRS’ senior management moved promptly to address the access control weaknesses related to electronic filing. In meetings with senior IRS management and technical staff, we alerted IRS to significant security vulnerabilities identified by our testing that warranted immediate remediation. This interaction was productive. IRS developed corrective action plans that identified the specific actions required to improve the security over e-file computers and IRS internal networks. According to IRS officials, they have completed most of the planned improvements, including correction of the critical vulnerabilities, in time for the 2001 tax filing season. IRS officials stated that they have revamped the e-file system architecture, installed effective perimeter defenses, improved their configuration management practices, strengthened password controls, reconfigured the operating systems of e-file systems, established a process to identify excessive file permissions, added intrusion-detection capability, and made certain management changes. IRS stated that its actions demonstrate that it has taken a systematic, risk-based approach to correcting these weaknesses. Such an approach is important in helping to ensure that improvement efforts are effective and appropriate. It is also important that these actions to strengthen technical controls be supported by improvements in the way IRS continually manages information security.
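One element of the detection capability discussed above, regularly reviewing audit logs for unusual or suspicious patterns, can be illustrated with a simple rule that flags repeated failed logons from a single source. The log format and the threshold of five failures are assumptions for illustration only, not the format of any IRS system log.

```python
from collections import Counter

def flag_brute_force(log_lines, threshold=5):
    """Return the set of sources with at least `threshold` failed-logon events.
    Each log line is assumed to look like: '<timestamp> FAILED_LOGON <source>'."""
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[1] == "FAILED_LOGON":
            failures[parts[2]] += 1
    return {src for src, count in failures.items() if count >= threshold}

# Example log: one source probing repeatedly, another failing only once.
log = ["2000-05-01T09:00 FAILED_LOGON 10.1.2.3"] * 6 + \
      ["2000-05-01T09:05 FAILED_LOGON 10.9.9.9"]
suspects = flag_brute_force(log)
```

In practice a rule like this runs over logs that record all key security events; as the findings above note, detection fails first when those events are never logged at all.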
As part of our normal follow-up review of IRS’ implementation of the recommendations contained in this report, we will test the effectiveness of IRS’ recent improvement actions. Application controls should be designed and implemented to help ensure the reliability of data processed by the application. Such controls help make certain that transactions are valid, properly authorized, and completely and accurately processed. IRS believes that electronically filed tax returns are more accurate than paper returns for several reasons. For example, commercial software used by taxpayers to prepare electronic returns contains mathematical formulas and edit routines that can help to reduce computational errors on returns. Electronic returns also eliminate the data entry errors associated with typing paper return data into IRS’ tax processing systems. In addition, IRS has implemented many application controls that were designed to enhance the reliability of data processed by e-file computers. However, we identified additional opportunities to strengthen application controls for IRS’ processing of electronic tax return data. These opportunities are discussed below. A key control relating to the authenticity and accuracy of a tax return is the taxpayer’s signature and certification that the return is true, correct, and complete to the best of the taxpayer’s knowledge and belief. IRS requirements state that taxpayers who file tax Form 1040 electronically must submit this signature and certification to IRS on Form 8453. Certain taxpayers participating in an IRS pilot program may use an IRS-provided personal identification number to authenticate their electronically filed return in lieu of submitting Form 8453. IRS processed electronic tax returns and paid refunds even when it did not receive a signed Form 8453 or when a personal identification number was not used to authenticate the return. 
This practice is inconsistent with the IRS practice of withholding payments for refunds claimed on unsigned paper returns. Agency statistics through August 24, 2000, showed that IRS did not receive Forms 8453 for almost 1.2 million—about 3 percent—of tax returns filed electronically during the 2000 tax filing season and that about 93 percent of electronically filed returns were entitled to a refund or had no balance due. According to IRS, the average refund issued for the 1999 tax filing season for on-line filers was $2,041, and for practitioner-prepared electronic returns, $1,910. Based on these statistics, IRS paid refunds of about $2.1 billion on electronic tax returns that were not authenticated by taxpayers as of August 24, 2000. Further, according to agency criminal investigators, the absence of a signed Form 8453 may preclude perjury prosecutions against taxpayers who provide false information on electronic tax returns. Unless electronic returns are supported with a signed Form 8453 or a personal identification number before paying claimed refunds, IRS is vulnerable to paying improper refunds based on unauthenticated electronically filed tax returns. Another control activity involves identifying erroneous data at the point that it enters the application system, or at some later point in the processing cycle. This is accomplished through a process called data validation and editing. Programmed validation and edit checks are key to this process and are generally performed on transaction data entering the system (before the master files are updated) and on data resulting from processing. We identified several instances in which an e-file system did not detect erroneous or invalid data in our test transactions. For example, an e-file system did not detect several arithmetical errors and inconsistent data or amounts between related data fields on the Form 1040 and the attachments. 
As a result, there was an increased risk that IRS did not detect certain erroneous or inconsistent data on electronically filed tax returns. An essential control for ensuring the integrity of a computer application is to prevent software programmers and developers from having access to the application in the production environment. Denying such access to software programmers and developers can help to reduce the risk of unauthorized changes to production programs and data. However, a software developer was capable of viewing and modifying taxpayer data on production e-file computers during the 2000 tax filing season. In addition, software development tools were installed on those computers. As a result, there was an increased risk last year that the software developer could have introduced unauthorized programs, made unauthorized changes to production programs, and viewed or modified electronically filed taxpayer data on e-file computers. Taxpayers who electronically file tax returns may not have been aware that transmitters could view and modify taxpayers’ electronic tax return data and that such data are transmitted to IRS in clear text. Transmitters have this level of access because IRS decided (1) not to allow taxpayers to file most electronic returns directly with IRS, (2) to require taxpayers who elect to file electronically to use the services of a third-party transmitter, and (3) not to accept electronic tax returns in encrypted form. Also, taxpayers may not have been aware of other risks related to electronic filing. Links provided on the IRS Web site to certain IRS trading partners emphasized the use of state-of-the-art encryption when electronic filers send tax information over the Internet to IRS trading partners, but these links did not disclose that the trading-partner-to-IRS portion of the data transfer was sent in clear text.
Thus, taxpayers who prepared their electronic tax returns and sent the returns in an encrypted form to a transmitter for transmission to IRS may not have known that the transmitter could have viewed, modified, or copied their tax returns, or that their returns were transmitted to IRS in clear text. Similarly, taxpayers who used the services of an electronic return originator to prepare their electronic returns may not have realized that their returns were sent to IRS in clear text or that the electronic return originator may have sent their returns to a transmitter for transmission to IRS. As a result, taxpayers may not have been fully informed as to which businesses and individuals could have viewed, modified, and copied the personal and financial data contained on their electronically filed tax returns. IRS did not adequately inform taxpayers of other risks related to filing electronic tax returns. Although IRS noted that it did not endorse the products, services, or privacy or security policies of its electronic filing trading partners, IRS asserted in promotional materials on e-file that the security and privacy of tax return data filed electronically was “assured.” However, the security and privacy of such data were subject, in part, to the (1) effectiveness of the transmitters’ security controls over their computing environments and (2) character of the transmitters’ employees who had access to the taxpayer data. IRS had no assurance about the security of transmitter systems that contained or transmitted tax return data to IRS’ e-file systems, including whether users of such systems were properly authorized, and had only limited assurance about the character or background of the transmitters.
Other than providing guidance about protecting certain passwords, IRS did not prescribe minimum computer security requirements for transmitters and did not assess or require an independent assessment of the effectiveness of computer controls within the transmitters’ operating environment. IRS monitored transmitters for compliance with the applicable revenue procedure and e-file program requirements. According to IRS, monitoring may have included reviewing e-file submissions, investigating complaints, scrutinizing advertising material, visiting offices, examining files, observing office procedures, and conducting annual suitability checks. However, IRS did not assess computer security over transmitters’ computer systems as part of its monitoring efforts. In addition, although IRS stated it performed an annual suitability check of its trading partners, including e-file transmitters, most were not subjected to criminal background or fingerprint checks. The Treasury Inspector General for Tax Administration reported in September 1999 that although IRS improved the 1998 suitability screening process, the overall process was not completely successful in preventing inappropriate e-file trading partners from participating in the e-file program. For example, IRS had approved individuals to be e-file trading partners who had unpaid tax liabilities, filed tax returns late, filed false tax returns, or had been assessed Trust Fund Recovery penalties. Importantly, however, transmitters and electronic return originators may be subject to criminal or civil penalties if they improperly disclose or misuse tax return information. A number of serious control weaknesses in IRS’ electronic filing systems placed personal taxpayer data in IRS’ electronic filing systems at significant risk of unauthorized disclosure, use, and modification during last year’s tax filing season. 
IRS recognized the importance of promptly addressing these weaknesses and stated that it has taken steps to correct them prior to the current tax filing season. Ensuring that ongoing controls over electronic filing are effective requires top-management support and leadership, disciplined processes, and consistent oversight. IRS’ efforts to achieve the goal that 80 percent of all tax and information returns be filed electronically by 2007 must be balanced with the need to adequately ensure the security, privacy, and reliability of taxpayer and other sensitive information. Failure to maintain adequate security over IRS’ electronic filing systems could erode public confidence in electronically filing tax returns, jeopardize IRS’ ability to meet the 80 percent goal, and deprive IRS of the many benefits that electronic filing offers. The following recommendations are based on information security weaknesses identified during the 2000 tax filing season. As noted in this report, IRS has acted to correct critical weaknesses prior to the 2001 tax filing season. We will assess the effectiveness of these corrective actions as part of our normal follow-up review. We recommend that the IRS Commissioner direct the Chief Information Officer to complete efforts to implement an action plan for strengthening access controls over IRS electronic filing systems and networks. To assist in this effort, we have provided technical recommendations identifying specific access control weaknesses for IRS to address as part of its efforts. Because of the significance of the electronic filing systems to the future operations of IRS, we also recommend that the Chief Information Officer periodically report to the Commissioner on progress made to implement this action plan and on the results of efforts to continually monitor the risks and effectiveness of security controls over IRS electronic filing systems and electronically filed taxpayer data.
We also recommend that the IRS Commissioner direct the Chief Information Officer to complete actions required for the certification and accreditation of an e-file system; fully implement procedures to assess risks and monitor the effectiveness of security controls over IRS’ electronic filing systems on an ongoing basis; enhance the edit and data validation routines in an e-file system to detect erroneous or invalid data on electronically filed tax returns; and improve the integrity of the e-file production environment by removing software development tools from the production environment, if feasible, or restricting access to the tools to the minimum number of users who require it and disallowing developers access to production environments and taxpayer data. We recommend that the Commissioner direct the Director of Submission Processing to implement an alternative means for taxpayers to authenticate electronically filed returns or to strengthen procedures for receiving signed Forms 8453 for electronically filed tax returns. We recommend that the Commissioner direct the Director of Electronic Tax Administration to provide notice to taxpayers concerning (1) transmitter access to electronic tax return data in clear text and (2) electronic transmission of tax returns to IRS in clear text. In commenting on a draft of this report, the Commissioner of Internal Revenue stated that the report accurately identified areas that needed strengthening during last year’s filing season and that IRS initiated timely actions to strengthen important security controls when the audit findings were brought to its attention. He further indicated that IRS has completed actions for correcting all of the critical access control vulnerabilities we identified and for certifying the systems. 
As a result, the Commissioner stated, the electronic filing systems now satisfactorily meet critical federal information security requirements to provide strong controls to protect taxpayer data and that taxpayers can feel safe and secure using e-file during the 2001 filing season. The Commissioner added that the report’s findings and GAO’s assistance have been instrumental in supporting IRS’ continuing efforts to improve its computer security capabilities. The Commissioner’s written response indicated that IRS has taken or will take appropriate steps to implement eight of our nine recommendations. IRS’ Director of Security Evaluation and Oversight stated orally that IRS has taken corrective action to resolve the final recommendation. We will assess the effectiveness of IRS’ corrective actions as part of our normal follow-up review on recommendations. In addition to responding to our recommendations, the IRS Commissioner provided additional comments about IRS’ security program, our report, and other controls over electronic filing. We addressed these comments in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies to Senator Joseph Lieberman and other interested congressional committees. We will also send copies of this report to the Honorable Paul H. O’Neill, Secretary of the Treasury; the Honorable Charles O. Rossotti, Commissioner of Internal Revenue; and the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget. Copies will be made available to others upon request. If you have questions about this report, please contact me at (202) 512-3317 or by e-mail at DaceyR@gao.gov. Key contributors to this assignment were West Coile, Hal Lewis, Karlin Richardson, and Gregory Wilshusen, (202) 512-6244, WilshusenG@gao.gov. 
Our objective was to assess the effectiveness of key computer controls that were designed to ensure the security, privacy, and reliability of IRS’ electronic filing systems and electronically filed taxpayer data. To accomplish our objective, we applied appropriate sections of our Federal Information System Controls Audit Manual (GAO/AIMD-12.19.6), which describes our methodology for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized data. To assess the security over IRS’ electronic filing systems and privacy of electronically filed taxpayer data, we tested the effectiveness of key computer access controls over electronic filing systems, reviewed IRS policies and procedures, researched prior reports by IRS’ Internal Audit and the Treasury Inspector General for Tax Administration, interviewed system administrators and program officials, assessed the design and architecture of e-file systems, and examined the operating system configuration and control implementation for electronic filing systems’ host computers and network servers, routers, and control devices. In addition, we attempted to exploit identified control weaknesses to verify the vulnerabilities they presented. We also met with officials at IRS national offices to discuss the possible reasons for the vulnerabilities we identified and their plans for future improvement. To assess the reliability of electronically filed tax return data processed by e-file systems, we examined controls designed to ensure that electronically filed tax return data were valid, properly authorized, and accurately and completely processed. 
In addition, we reviewed IRS policy and procedures; examined application system documentation; interviewed system and security administrators, users, and officials at selected IRS facilities; observed procedures and controls in place; examined transaction source documents and control documents; processed test transactions and assessed results; and inspected application logs and reports. We also assessed IRS’ information system general controls and their impact on these applications. We performed our review at several IRS facilities and at our headquarters in Washington, D.C., at various times from July 1999 through August 2000, in accordance with generally accepted government auditing standards and our Federal Information System Controls Audit Manual. 1. We have previously reported that although IRS has made significant strides in improving computer security at certain facilities, an effective computer security management program had not yet been fully implemented across the service. 2. We agree that the report does not focus on the likelihood of the threats occurring and focuses on the risks associated with the threats. However, we do not believe the report’s message unreasonably promotes undue concern about the risks associated with electronic filing. In our view, we are presenting the facts about certain risks that accompany electronic filing: risks that may or may not be present in paper filing, and risks that the public is entitled to know about. Because of its role as the nation’s tax collector, IRS’ computer systems may be a target for certain individuals or groups. Our tests, which successfully identified and exploited weaknesses in IRS’ e-file computers, were not sophisticated. It is important to note that IRS immediately recognized the seriousness of the weaknesses we identified and said it has taken prompt action to correct all of the critical vulnerabilities. 3. 
We neither state nor imply that sending data unencrypted over public switched networks is an unacceptable risk. However, we continue to believe that the risk of unauthorized disclosure is greater when electronic tax returns are transmitted in clear text than in encrypted text. It is important to note that IRS regulations require encryption and secure dial-in for remote access from the public switched telephone network to any IRS system that contains sensitive data. 4. As noted in the report, although IRS did not have evidence that intruders accessed or modified taxpayer data on its e-file systems, its capabilities for detecting intrusions and misuse resulting from the exploitation of vulnerabilities on e-file systems during the 2000 tax filing season were not adequate. 5. The Treasury Inspector General for Tax Administration reported in September 1999 that although IRS improved the 1998 suitability screening process, the overall process was not completely successful in preventing inappropriate e-file trading partners from participating in the e-file program. 6. We recognize in the report that IRS performs annual suitability checks and monitors its trading partners. However, we continue to believe that taxpayers have a right to know that IRS does not subject most of its trading partners to criminal background or fingerprint checks and does not assess computer security over transmitters’ computer systems as part of its monitoring efforts. 7. We do not believe that “total assurance” is necessary, only full disclosure. At the time IRS asserted on its Web page that taxpayers’ “privacy and security are assured,” we identified serious access control weaknesses over IRS electronic filing systems that could have allowed unauthorized individuals to view and modify taxpayer data. 
Further, IRS had no assurance about the effectiveness of computer controls within the transmitters’ operating environments—environments that affect the privacy and security of electronically filed tax return data. We believe that IRS should inform taxpayers of the risks as well as the benefits of filing electronic tax returns so they can make informed decisions on the tax filing method that is appropriate for them. 
Most federal highway transportation funds are distributed as grants to states as part of the federal aid highway program through a set of complex formulas that take into account a number of factors, including the estimated share of taxes that highway users in each state contribute. The Highway Trust Fund is the principal source of funding for federal aid highway programs and is funded through motor fuel and other highway use taxes. Grants for transit projects are distributed as part of the federal transit program through a collection of formula-based and discretionary programs and are funded primarily by the Mass Transit Account of the Highway Trust Fund. Supplementing these federal programs is a collection of financing methods that allow project sponsors—such as state DOTs and transit agencies—to borrow money through bonds, loans, or other mechanisms. State DOTs and other project sponsors can raise money in the bond market through, for example, revenue bonds backed by anticipated project revenues like tolls; bonds backed by future federal transportation funds, such as Grant Anticipation Revenue Vehicles (GARVEE) or Grant Anticipation Notes (GAN); or general obligation bonds backed by the full faith and credit of a state or municipality. Project sponsors may also seek private investment through bank debt, private equity, or private activity bonds. Through TIFIA, DOT provides loans or other credit assistance to sponsors of surface transportation projects. Declining Highway Trust Fund revenues and states’ budget constraints, as well as the high cost and size of major transportation projects, have prompted project sponsors to seek alternative methods of funding transportation infrastructure. The TIFIA program’s primary goal is to leverage limited federal resources and stimulate private capital investment in transportation infrastructure by providing credit assistance to projects of national or regional significance. 
Underlying the TIFIA program is the notion that the federal government can perform a constructive role in financing large transportation infrastructure projects by supplementing, but not supplanting, existing capital finance markets. In this role, DOT identifies five key objectives for the TIFIA program: facilitate projects with significant public benefits; encourage new revenue streams and private participation; fill capital market gaps for secondary (subordinate) capital; be a flexible, “patient” investor willing to take on investor concerns about investment horizon, liquidity, predictability, and risk; and limit federal exposure by relying on market discipline. DOT provides TIFIA credit assistance in three forms: direct loans, loan guarantees, and standby lines of credit. The maximum maturity for all types of TIFIA credit assistance is 35 years after substantial completion of a project. Lines of credit can supplement project revenues during the first 10 years of project operations. In addition, DOT can defer the first TIFIA repayment until 5 years after substantial completion of a project, and most project sponsors avail themselves of this option. Other credit assistance terms include that (1) TIFIA assistance may provide no more than 33 percent of total project costs, (2) senior debt has an investment-grade credit rating (Baa3/BBB- or higher), and (3) TIFIA assistance can be subordinate to the project’s senior debt, meaning that senior creditors may receive project revenues ahead of DOT. According to DOT officials, the TIFIA program is one of the few federal credit programs in which federal assistance routinely takes a subordinate position to other, nonfederal lenders with respect to cash flows. However, to protect taxpayers, TIFIA loans may not be subordinated to the claims of other creditors with respect to the loan recipients’ bankruptcy, insolvency, or liquidation. 
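The credit terms above amount to a few simple numeric limits. As a minimal sketch (the function names and framing are ours, not DOT's), the 33 percent cap and the 35-year maturity limit could be expressed as:

```python
def tifia_share_ok(tifia_amount, total_project_cost):
    """TIFIA assistance may provide no more than 33 percent of total project costs."""
    return tifia_amount <= 0.33 * total_project_cost

def latest_maturity(substantial_completion_year):
    """Maximum maturity is 35 years after substantial completion of the project."""
    return substantial_completion_year + 35

# A $330 million loan on a $1 billion project sits exactly at the cap.
assert tifia_share_ok(330_000_000, 1_000_000_000)
assert not tifia_share_ok(400_000_000, 1_000_000_000)
assert latest_maturity(2012) == 2047
```

The actual statutory tests involve additional terms (for example, the senior-debt rating floor and subordination rules) that do not reduce to arithmetic this neatly.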
Both public and private entities are eligible to receive TIFIA assistance for a range of surface transportation-related projects including highway, transit, railroad, intermodal freight, and port access. Borrowers can include entities like state DOTs, toll authorities, transit agencies, and private concessionaires. Other eligibility requirements include that a project must have total costs of at least $50 million (or $15 million for intelligent transportation systems (ITS) projects); must be included in state and local transportation plans; and must have dedicated revenues, like tolls, user fees, or pledged taxes, for repayment. ITS encompasses a broad range of electronics and communication technologies to enhance the capacity and efficiency of surface transportation systems, including traveler information, public transportation, and commercial vehicle operations. SAFETEA-LU reauthorized the TIFIA program, authorizing budget authority of $122 million for each of fiscal years 2005-2009 from the Highway Trust Fund for the program’s credit subsidy cost and administrative expenses. The credit subsidy is the estimated long-term cost to the government of providing assistance. Extensions of SAFETEA-LU have authorized budget authority of $122 million for the TIFIA program for each subsequent fiscal year. Any uncommitted budget authority remains available for obligation in subsequent years, unless Congress chooses to reprogram or rescind these amounts. According to DOT, $10 million in TIFIA budget authority can generally be leveraged to provide $100 million in credit assistance. In fiscal year 2008, total requests for TIFIA assistance exceeded DOT’s available budgetary resources for the first time. Prior to this, when there was lower demand for the program, DOT allowed project sponsors to seek TIFIA assistance on a “first come, first served” basis defined by the sponsor’s schedule. Figure 1 shows the number and amount of credit assistance requested each fiscal year. 
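DOT's rule of thumb that $10 million of budget authority can support $100 million of credit assistance implies an average credit subsidy rate of roughly 10 percent. A back-of-the-envelope illustration (ours, not a DOT formula):

```python
def loan_capacity(budget_authority, subsidy_rate):
    """Face value of credit assistance a given budget authority can support,
    where subsidy_rate is the estimated long-term cost per dollar of assistance."""
    return budget_authority / subsidy_rate

# $10 million of budget authority at a roughly 10 percent subsidy rate
# supports $100 million in credit assistance, as the report notes.
assert round(loan_capacity(10_000_000, 0.10)) == 100_000_000

# At that rate, the $122 million annual authorization could support
# on the order of $1.2 billion in credit assistance.
assert round(loan_capacity(122_000_000, 0.10) / 1e9, 2) == 1.22
```

Because the subsidy rate varies by project risk, actual capacity in any year depends on the mix of projects selected.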
In its 2010 report to Congress on TIFIA, DOT attributed the increased demand for TIFIA assistance to several factors, including the growing demand for infrastructure investment relative to other sources of funding (like declining fuel tax receipts), the economic downturn and difficulty accessing capital markets, and the increasing use of innovative approaches, like public-private partnerships, to finance and deliver projects. After demand exceeded available budget resources, DOT terminated the open application process and instituted an annual, fixed-date solicitation process for sponsors to submit LOIs for credit assistance. In fiscal year 2010, DOT began evaluating and competitively selecting projects based on how well they align with the TIFIA selection criteria and the availability of budget resources. As shown in figure 2, there are four primary stages for securing TIFIA credit assistance. For each project, the amount of time needed to complete each stage varies. For instance, in its 2002 report to Congress, DOT stated that the length of credit agreement negotiations is affected by the complexities and uncertainties of large infrastructure projects as well as the learning curve of both project sponsors and DOT as they encounter unique legal and financial issues with projects. After determining that a project meets the eligibility requirements, DOT uses eight statutory criteria weighted by regulation to select projects to receive credit assistance. Beginning in fiscal year 2010, DOT defined the statutory criteria in the notice of funding availability (funding notice), which included clarification of the national or regional significance and environment criteria. (See table 1.) In addition, projects that received funding through DOT’s Transportation Investment Generating Economic Recovery (TIGER) grant program were eligible for TIFIA credit assistance. 
Specifically, project sponsors could use TIGER grant funds for TIFIA credit assistance, known as a TIGER TIFIA payment. In each of the four rounds through which DOT TIGER grants have been available, a portion of the funds could be used to support the credit subsidy and administrative costs of projects eligible for federal credit assistance. When a project sponsor is offered a TIFIA award through the TIGER program, the project goes through the regular TIFIA process, including the TIFIA credit evaluation, credit agreement negotiation, and oversight and monitoring. DOT implements and manages the program using internal staff and a pool of external financial and legal advisors. There are nine internal TIFIA office staff who are responsible for assisting with reviewing LOIs, selecting projects to apply for credit assistance, negotiating credit agreements, monitoring loan disbursements and the financial performance of executed credit agreements, and tracking credit subsidy calculations. Currently, DOT supplements its internal staff with a pool of five financial and legal advisors, contracted for 5-year periods, who assist in the review of applications and negotiation of credit agreements. Since the TIFIA program was created in 1998, DOT has executed 27 credit agreements for 26 projects. To date, assistance has been provided through 26 loans and one loan guarantee. Of the 26 projects, 17 are located in 5 states—California, Colorado, Florida, Texas, and Virginia (see fig. 3). Overall, sponsors from 33 states, the District of Columbia, and Puerto Rico have submitted LOIs for projects that vary by mode and purpose, but most have high total costs. Highway projects account for a majority of all LOIs submitted to the TIFIA program. According to DOT data, highway projects—such as building new roads and replacing bridges—accounted for about 60 percent of the 182 LOIs submitted to the TIFIA program from 1999 to 2012. 
Transit and intermodal projects—such as building new transit systems and constructing parking garages and facilities linking various transport modes near airports—account for 18 percent and 10 percent of all LOIs, respectively. In addition, rail, ferry, and ITS projects account for 4 percent, 2 percent, and 1 percent of LOIs during this time, respectively. However, no projects in these three modes have received TIFIA assistance to date. Over the history of the program, the average total cost of projects seeking TIFIA assistance has been $1.2 billion. Through fiscal year 2012, no sponsor in the other 17 states has submitted an LOI to the TIFIA program. (See fig. 4.) According to DOT data, TIFIA credit agreements have been used mostly for large, high-cost highway projects. Overall, DOT has provided TIFIA assistance to 17 highway projects. Some of these projects—like the President George Bush Turnpike-Western Extension (SH 161) in Texas— were to construct new roads, and others—like the I-595 Corridor Roadway Improvements project in Florida—to reconstruct and expand existing roads. Projects receiving credit assistance also tend to have high total costs. Of the 25 projects, 20 projects cost more than $500 million and 16 projects cost more than $1 billion. The average total cost of projects receiving TIFIA credit assistance is $1.4 billion. According to DOT, TIFIA assistance can help advance large-scale projects that otherwise might be delayed or deferred because of size or complexity, and as such, TIFIA projects to date have mainly been large-scale projects. On average, TIFIA assistance accounts for 24 percent of total project costs, about 9 percentage points less than the 33 percent currently permitted by law. To a lesser extent, TIFIA has also been used for transit and intermodal projects. 
Four intermodal projects have received credit assistance, including the Reno ReTRAC project in Nevada, which includes rail and roadway improvements to improve freight capacity and address environmental and safety concerns. Five transit projects have received TIFIA assistance. The Tren Urbano project in Puerto Rico, for example, constructed a new, fixed-guideway transit system to relieve congestion in the San Juan area. DOT officials told us that the balance of projects is becoming more diverse in terms of mode. They noted that sponsors of transit projects have been slower to use TIFIA assistance in the past, primarily because transit projects have access to low-cost municipal debt and do not generate revenue in excess of their operating costs to repay assistance. Moreover, it can be difficult to integrate TIFIA assistance with federal funding for transit provided through the New Starts program. However, in fiscal years 2010 and 2011, DOT invited the sponsors of 12 projects, 4 of which were transit projects, to submit applications, the next stage in securing TIFIA assistance. The Federal Transit Administration’s (FTA) New Starts program, part of the Capital Investment Grant program, is the federal government’s primary financial resource for supporting new major transit capital projects that are locally planned, implemented, and operated, such as light rail and bus rapid transit. For projects with active credit agreements that include private equity, the private equity accounts for about 17 percent of total project costs. Defining private participation more broadly, 17 projects with active credit agreements include either private equity or debt. The average private investment for projects with active credit agreements, including equity and debt, is 37 percent of total project costs. 
The North Tarrant Express, for example, is a public-private partnership between the Texas Department of Transportation and a private concessionaire—NTE Mobility Partners— to design, build, finance, operate, and maintain a 13-mile section of highway in the Dallas-Fort Worth area. Project funds include $426 million in equity from NTE Mobility Partners and $398 million from private activity bonds. These two sources of private investment account for about 40 percent of the project’s total cost. Projects with credit agreements typically pledge user fees or dedicated tax revenue to repay TIFIA assistance. For 16 credit agreements, user fees like tolls are pledged to repay assistance, while for 8 credit agreements, tax-backed revenue streams like local sales taxes are pledged to repay assistance. The remaining 3 credit agreements use other dedicated revenues, like availability payments, to repay assistance. As of April 2012, DOT reported that it has provided nearly $9.1 billion—$8.5 billion through direct loans and $600 million in loan guarantees—to projects at a budgetary cost of about $654 million. The budgetary cost of TIFIA assistance is the total credit subsidy for all projects, with the credit subsidy, as noted earlier, being the estimated long-term cost to the government of providing assistance calculated on a net present value basis, excluding administrative costs. As such, the credit subsidy reflects the estimated risk of the loan or assistance. According to DOT, the original credit subsidy cost for credit agreements ranges from less than 1 percent to over 15 percent of the amount of TIFIA assistance. Projects that pledge user fees tend to have higher subsidy costs and, thus, generally entail greater risk to the federal government because actual usage and fees for a project (such as traffic and toll revenue on a new road) may not meet projections, particularly early in its operation. 
In such cases, where repayment of TIFIA assistance relies solely on revenues from user fees, poor performance—such as less than projected use of a facility—could result in nonpayment. Project sponsors are actively drawing funds from DOT for about half of the projects with TIFIA credit agreements. Many of these projects—14 of 26 projects—are currently under construction. As a result, many sponsors are drawing and not yet repaying TIFIA loans. Six project sponsors have retired their TIFIA credit agreements through early repayment, by refinancing the loan, or because of expiration of the credit agreement in the case of a loan guarantee. For example, the Puerto Rico Highway and Transportation Authority refinanced its TIFIA loan for the Tren Urbano project with tax-exempt debt about 3 years after DOT fully disbursed the loan. The sponsor paid back the TIFIA loan 32 years ahead of schedule and anticipated saving about $31.7 million in interest payments to DOT by refinancing the TIFIA loan. The sponsor of one TIFIA project—the South Bay Expressway in San Diego County, California—declared bankruptcy in 2010 but has not defaulted on any TIFIA payments. At the time of the bankruptcy filing, the outstanding balance of the TIFIA loan was $172 million, including interest. The Plan of Reorganization ordered by the U.S. bankruptcy court reduced the value of the loan’s principal. DOT’s unsecured claim was $73 million, or 42 percent of the outstanding loan balance. Following the sale of the project to and the assumption of the TIFIA loan by the San Diego Association of Governments (SANDAG) in December 2011, DOT expects to recover the original loan value through higher interest rates charged on the restructured loan. 
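The credit subsidy discussed above, the estimated long-term cost of a loan on a net present value basis, can be illustrated with a simplified calculation. This is our sketch only; actual Federal Credit Reform Act estimates also model defaults, recoveries, fees, and Treasury discount rates:

```python
def credit_subsidy_rate(loan_amount, expected_repayments, discount_rate):
    """Subsidy cost as a share of face value: the amount disbursed minus the
    present value of the repayments the government expects to collect."""
    pv_collections = sum(cf / (1 + discount_rate) ** t
                         for t, cf in enumerate(expected_repayments, start=1))
    return (loan_amount - pv_collections) / loan_amount

# A 100-unit loan expected to return 50 per year for two years, discounted
# at 5 percent, carries a subsidy cost of about 7 percent of face value.
assert round(credit_subsidy_rate(100, [50, 50], 0.05), 2) == 0.07
```

Under this framing, a project whose expected user-fee repayments are more uncertain, and therefore lower in expectation, shows a higher subsidy rate, consistent with the pattern the report describes for toll-backed loans.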
While DOT tracks certain aspects of a project, such as monitoring whether the project is meeting its construction timeline and comparing actual and projected receipts of the revenue pledged to repay TIFIA assistance, the agency does not systematically assess whether its portfolio as a whole is achieving the program’s goals of leveraging federal funds and encouraging private co-investment. The Secretary of Transportation is required to report biennially to Congress on the financial performance of projects that are receiving or have received TIFIA assistance and whether the goals of the TIFIA program are best served by continuing the program under the authority of the Secretary, establishing another entity to administer the program, or phasing out the program. In the past, we also recommended that a Department of Energy credit assistance program develop performance measures to evaluate program progress. See GAO, Department of Energy: New Loan Guarantee Program Should Complete Activities Necessary for Effective and Accountable Program Management, GAO-08-750 (Washington, D.C.: July 7, 2008). Other DOT agencies have developed goals and measures to address program performance. For example, FRA set goals to reduce the rate of train accidents in its proposed fiscal year 2013 budget, and FRA tracks these goals and actual accident rates over time to measure whether or not it is meeting its safety goals. Also, for FHWA’s Express Lanes Demonstration program, DOT developed performance measures to evaluate projects’ performance along four program goals— such as travel, traffic, and air quality—and uses information collected annually from project sponsors to report to Congress on the projects’ performance. In its first report to Congress in 2002, DOT examined the extent to which projects approved to receive assistance collectively met key TIFIA goals and objectives. For instance, DOT calculated that the amount of private co-investment in projects totaled $3.1 billion, or about 20 percent of the projects’ total costs. 
DOT also calculated that TIFIA had a federal leverage ratio of 4.8, meaning that every dollar of federal investment in projects approved to receive assistance—including TIFIA as well as other federal funds—represented nearly $5 in total infrastructure investment. However, DOT has not presented similar data on its progress in meeting the program’s goals in any of the subsequent reports to Congress. In these reports, DOT provides only broad descriptive information on the financial status of the projects and highlights project innovations, such as reporting on the use of new revenue sources like availability payments to repay assistance. DOT provides information on each project with a TIFIA credit agreement that describes each project and lists funding sources, but DOT does not aggregate this information for the portfolio of projects on its website or in other program documents. Given that the agency collects such data on projects that received TIFIA assistance, it could use these data—in particular, the amount of federal and nonfederal funding and financing, as well as the amount of private equity and debt—to better evaluate the progress toward meeting program goals and objectives, like leveraging limited federal resources and stimulating private capital investment in transportation infrastructure. In response to increased program demand and the uncertain budget environment, DOT used a competitive, two-step process to assess LOIs and select projects to apply for credit assistance in fiscal years 2010 and 2011. DOT officials told us that they began using the current evaluation process—focusing on the LOIs to pre-assess a project’s alignment with TIFIA’s statutory criteria—to address the significant increase in demand for the program coupled with the current uncertain and limited budgetary environment due in part to a lack of a long-term surface transportation reauthorization bill. 
These circumstances, according to DOT, required the agency to establish a process that allows the agency to choose amongst best-qualified projects in each fiscal year instead of accepting eligible projects on a first-come, first-served basis as was the case when the program was undersubscribed. DOT's process for competitively selecting amongst LOIs involved two steps: first, DOT convened a multimodal team to assess, score, and group projects using statutory criteria, and second, DOT used a team of senior-level staff—called the executive leadership team—to review the multimodal team's assessments and invite select project sponsors to submit an application for credit assistance (see fig. 5). The multimodal team—composed of staff from different DOT modal administrations—individually assessed each LOI against the statutory criteria to assign preliminary scores. Multimodal team members read and assigned each LOI a numeric score of 0 to 4 for each of the six criteria, with 0 indicating that a project was not consistent with a criterion and 4 indicating that a project was most consistent with a statutory criterion. While the funding notices defined each of the statutory criteria, they did not describe specific project qualifications or benefits that would merit a higher or lower score. Additionally, multimodal team members did not use any guidance beyond the funding notices to delineate what the possible range of scores signified in terms of project qualifications and benefits. DOT officials said that evaluators relied primarily on content in the submitted LOIs, as well as their own modal expertise as necessary, to evaluate projects. Each LOI contained information to describe the project and its proposed financial plan, identify the proposed borrower, explain how the TIFIA statutory selection criteria are met, and describe the benefits of the proposed project and its use of TIFIA assistance.
To finalize their individual scores for each LOI, multimodal team members compared LOIs with one another to determine the relative merits of each project when assigning scores. For example, if an LOI received a preliminary score of 3.5 for the private participation criterion, but when compared with other projects in its cohort appeared less well aligned with the private participation criterion, its score would be lowered to reflect its relative rank among the LOIs. Also, multimodal team members met over several weeks to discuss and compare LOIs in an effort to help ensure reliability in scoring across team members. To arrive at a final score for each LOI, individual team members’ final scores were combined for each criterion and a cumulative weighted total score based on assigned weights in regulation was calculated for each project. Last, the multimodal team rank ordered the LOIs by the total score and grouped them into three categories—A, B, and C. DOT officials said that the multimodal team grouped LOIs based on natural breaks in the numerical scores. LOIs placed in category A were those that scored the highest numerically, and thus were considered to be the most consistent with the statutory criteria. Table 2 details the category grouping for LOIs by fiscal year. As shown in table 2, there was a threefold increase in the number of category A LOIs from fiscal year 2010 to fiscal year 2011. DOT officials attributed this increase to higher-quality LOI submissions as well as improved project readiness of resubmitted projects. For projects that submitted an LOI in fiscal year 2011, 10 of the 34 had previously submitted an LOI in 2010. For projects that submitted an LOI in fiscal year 2012, 16 of 26 had applied in either fiscal year 2010 or 2011. 
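The combine-and-rank step described above can be sketched as follows. The criterion names, weights, and project scores below are hypothetical placeholders: the actual weights are assigned in regulation, and actual scores came from individual evaluators.

```python
# Sketch of combining per-criterion LOI scores (0-4) into a weighted total
# and rank-ordering projects. Weights and scores here are hypothetical.

# Hypothetical weights for the six statutory criteria (sum to 1.0).
WEIGHTS = {
    "national_or_regional_significance": 0.20,
    "private_participation": 0.20,
    "environment": 0.20,
    "project_acceleration": 0.125,
    "credit_enhancement": 0.125,
    "use_of_technology": 0.15,
}

def weighted_total(scores):
    """Cumulative weighted total from a project's per-criterion scores."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

lois = {
    "Project X": {"national_or_regional_significance": 4, "private_participation": 3.5,
                  "environment": 3, "project_acceleration": 4,
                  "credit_enhancement": 2, "use_of_technology": 3},
    "Project Y": {"national_or_regional_significance": 2, "private_participation": 1,
                  "environment": 2, "project_acceleration": 2,
                  "credit_enhancement": 1, "use_of_technology": 2},
}

# Rank-order LOIs by total score; grouping into categories A, B, and C would
# then follow natural breaks in these totals.
ranked = sorted(lois, key=lambda name: weighted_total(lois[name]), reverse=True)
print(ranked)
```

In this sketch, comparative adjustments across LOIs (the relative-ranking step the team performed before finalizing scores) would happen before the totals are computed.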
After the multimodal team grouped LOIs and provided a briefing about its assessment of all projects to the executive leadership team, this second team reviewed the projects and selected a subset of the category A projects to advance. According to DOT officials, in its review of projects, the executive leadership team was not aware of the scoring or ranking distinctions amongst LOIs in the category because numerical scores assigned by the multimodal team were removed. Instead, only basic project information, including high-level project summaries and category groupings (A, B, or C), was provided to the executive leadership team. Similar to the multimodal team evaluation, the executive leadership team did not use any guidance beyond the funding notice in its review of LOIs and relied primarily on content in the LOIs to score projects. However, in some cases, the team sought clarification from DOT staff, including FHWA division offices in various states, to gather additional information on a project's readiness, like its status in the environmental review process. According to DOT officials, project readiness—which encompasses factors like the project's progress in completing environmental review requirements—was an important consideration in picking among projects that were consistent with the statutory criteria, particularly in 2011, when there were a higher number of category A LOIs. (In the fiscal year 2012 funding notice, DOT further clarified that it would use factors like budget authority and geographic dispersion to select from amongst highly rated projects.) As a result of its review, the executive leadership team invited 4 projects in fiscal year 2010 and 8 projects in fiscal year 2011 to submit a full TIFIA application. (See table 3.) Overall, this two-step process ensured that projects invited to apply were from among the highest-scoring LOIs overall—that is, from category A—but did not ensure that the projects selected were those that scored the highest numerically by the multimodal team.
According to DOT officials, relying on numerical scores alone could provide a false sense of precision in selecting projects to advance. To date, because only category A projects are forwarded, no category B or C projects from the multimodal team evaluation have advanced over category A projects in the executive leadership team evaluation. DOT officials said that while there are no specific requirements to do so, the executive leadership team has only considered advancing LOIs in category A. After the executive leadership team has invited the sponsors of projects to apply for credit assistance, project sponsors must submit a full TIFIA application, after which DOT conducts a full evaluation of the application and makes a recommendation to the Credit Council. Then, the Secretary of Transportation makes the final decision on whether to approve a project to receive TIFIA credit assistance. Six of the 12 project sponsors that were invited to apply in fiscal years 2010 and 2011 have not yet submitted an application to DOT, but all are still pursuing TIFIA loans. (See table 4.) DOT officials and project sponsors that had executed TIFIA credit agreements said that the amount of time it takes for sponsors to complete the application and negotiation process varies by project. Several factors—such as the status of a project's environmental review, the complexity of the project's finance and delivery approach, and changes to the project—can influence the length of these processes. According to DOT, sponsors of four invited projects that have not yet submitted an application are completing work to comply with federal environmental requirements. Also, some project sponsors we spoke with said that the TIFIA application and negotiation processes can be longer for projects that have more complex financial plans, such as having a less frequently used revenue stream or relying on future state appropriations.
For example, after being invited to apply, one sponsor we interviewed had to complete the process to select a private concessionaire for the project; then, since submitting the TIFIA application, the sponsor has been working with the TIFIA office regarding uncertainty around appropriations from the state legislature before beginning the negotiation process. For projects not invited to apply, staff from the TIFIA office provided feedback to sponsors on their LOIs upon request. According to DOT officials, the primary aim of feedback is to explain how the LOI performed against each criterion. In addition, feedback included information on how a project sponsor could improve an LOI for resubmission, such as explaining that it needs to provide more details on specific project benefits. However, through this feedback, a project sponsor is not informed about the numeric scoring or ranking of its LOI relative to other LOIs. In some cases, DOT officials said that the feedback provided indicated that there was nothing “wrong” with a project’s LOI but that it was not invited to apply given the strength of the pool of LOIs submitted in that round. While project sponsors and other stakeholders we interviewed were satisfied with many aspects of DOT’s selection process, they cited two areas of the TIFIA selection process that they found to be less satisfactory—DOT’s application of selection criteria and the uncertainty of the timing of the process. Twenty-seven of the 36 recent TIFIA applicants that responded to our survey indicated that they were satisfied with DOT’s explanation of the application process in funding notices, and 28 of the recent applicants reported that the LOI format allowed them to provide sufficient detail about their project. 
In addition, several applicants told us that the TIFIA selection process was fairly simple to understand and not overly burdensome, and many applicants and advisors we interviewed told us that they found the TIFIA staff to be very cooperative and helpful. Moreover, many recent applicants told us that they appreciated that DOT gave feedback to the sponsors of unsuccessful LOIs. However, some recent applicants said that it is unclear how one qualified project is selected over another in the competitive process. In addition, one recent applicant we interviewed said that it does not know what characteristics DOT looks for or uses to determine if a project does or does not meet a criterion, particularly for the livability clarification in national or regional significance. (DOT provided these clarifications beginning in fiscal year 2010 in the annual TIFIA funding notices. The livability, economic competitiveness, and safety clarifications are part of the National or Regional Significance criterion, and the sustainability and state of good repair clarifications are part of the Environment criterion.) Some recent applicants also indicated that the LOI evaluation and selection process remained unclear, even after receiving feedback from DOT. Of the 21 recent applicants that indicated they received feedback, 8 reported that it was slightly or not at all useful in understanding the scoring of their LOIs—the primary aim of feedback. Several financial and legal advisors as well as private concessionaires we interviewed also said that there is a lack of transparency in the application of the criteria. These advisors indicated that DOT could be more transparent about the selection criteria and scoring process it uses to select projects. As we reported previously regarding competitively selected funding programs, were DOT to make additional information on its selection decisions publicly available, potential applicants would have better information on how to create and submit well-developed projects.
When such information is not made available, DOT may invite speculation that projects were selected for reasons other than merit. In addition, recent applicants and financial and legal advisors we interviewed said the timing of the LOI evaluation and selection process is inconsistent from year to year and therefore creates uncertainty. Specifically, several applicants and advisors we interviewed told us that the inconsistent timing in both the dates of the release of DOT's funding notice and LOI submission deadline, as well as the announcement of the outcome of the selection process, contributes to this uncertainty. Because TIFIA projects are typically high-cost projects with multiple funding and financing streams, the uncertainty about when a project can submit an LOI and, more importantly, when a project can count on a TIFIA credit agreement to fill a funding gap can affect the financial feasibility of these projects. For example, one financial advisor said that because the current LOI process occurs only once per year, it makes it difficult to plan and to coordinate with other vital project planning pieces, like state budget cycles, environmental reviews, and private investors' timelines. If a project sponsor misses the solicitation for a particular year, it has to wait another year to submit an LOI. The uncertainty as to when the outcome of the selection process will be announced can also affect projects. According to one financial advisor, project delays can affect construction costs or public support for the project, among other things. DOT officials said that the timing of the annual solicitation is due, in part, to receiving budget authority for the TIFIA program on a year-to-year rather than a multiyear basis. On the basis of feedback from fiscal year 2010 applicants, DOT has also tried to shorten the LOI evaluation and selection process in subsequent years so that applicants learn outcomes sooner.
For fiscal year 2010, this process, measured from the date LOIs were due to the announcement of which LOIs were invited to apply, took about 6 months, but for fiscal years 2011 and 2012, the process took 5 months and 4 months, respectively. DOT has made changes to try to improve the LOI evaluation process since returning to a competitive fixed-date selection process. In particular, DOT officials said that they are applying best practices from other DOT discretionary programs such as the TIGER program and learning from past rounds of TIFIA solicitations. The changes include the following:

In fiscal year 2011, DOT increased its documentation of key decisions for the LOI evaluation and selection process. For each LOI, the multimodal team summarized its deliberations on the extent to which a project met each statutory criterion in a standard form. In addition, to aid in providing feedback to unsuccessful applicants, TIFIA office staff produced an internal memo to document the multimodal team's rationale for LOI scores and grouping as well as the executive leadership team's concurrence with these evaluations.

In fiscal year 2012, DOT made changes to the LOI evaluation process at the multimodal team level. Specifically, the team assigned qualitative scores—"not aligned," "somewhat aligned," "well aligned," and "very well aligned"—rather than numeric scores to LOIs for each criterion. According to DOT officials, these changes facilitate discussion within the team and accelerate progress to consensus on project scores and impressions. In addition, DOT officials said the qualitative scores are more reflective of the actual evaluation process than the numeric scoring system used in past rounds of solicitation and prevent the team from focusing too heavily on the numeric scores.

In fiscal year 2012, DOT further clarified the TIFIA funding notice.
In particular, DOT included the two statutory selection criteria that had not been considered as part of the LOI selection process for fiscal year 2010 or 2011—creditworthiness and consumption of budget authority. It also stated that in selecting LOIs to advance, it may give priority to projects that enhance the geographic diversity of the TIFIA portfolio and may consider the project's readiness and timeline to proceed to financial close. DOT officials said it did so as part of its efforts to improve its communication of the criteria and selection process to applicants through the funding notices over time. For fiscal year 2012, DOT invited the sponsors of five projects to apply for TIFIA credit assistance. In addition, in response to concerns raised by project sponsors as well as the lack of certainty about future funding levels associated with the TIFIA program because of the absence of a long-term surface transportation reauthorization, DOT officials said that an expedited review process would be created for additional highly rated projects if TIFIA budgetary resources are significantly increased based on the President's Budget Request for fiscal year 2013. Since many of DOT's changes to the selection process occurred in the fiscal year 2012 TIFIA solicitation, it is too soon to know whether these changes will address the transparency and uncertainty concerns raised by recent applicants and financial and legal advisors. DOT officials said that they will continue to explore other changes to the process, such as creating additional internal guidance on scoring projects or changing feedback. Additionally, DOT officials said that the variety of TIFIA projects by size and mode could make it difficult for DOT to specify how particular benefits translate to a score for an LOI.
For instance, in 2010 sponsors submitted LOIs for projects that varied greatly in terms of benefits, size, and mode, as exemplified by the $360 million Southeast Waterfront project—a 5-mile bus, auto, bicycle, and pedestrian corridor that is part of a redevelopment project in San Francisco—and the $1.5 billion Goethals Bridge project—the replacement of an existing bridge connecting New York and New Jersey. DOT officials said that the current LOI selection process was developed in response to the combination of high demand and uncertain budgetary environment, and indicated that it would likely modify the evaluation and selection process in response to an increase in TIFIA's budget authority. The TIFIA program's flexibility and low interest rates are the predominant reasons why sponsors seek TIFIA assistance. TIFIA's flexibility extends to both repayment terms and debt structuring. For states that have not sought TIFIA assistance, state DOTs indicated that a variety of factors contributed to their decision not to use TIFIA, such as a lack of projects that met the eligibility requirements or the availability of other financing options. Looking ahead, future demand for TIFIA is difficult to gauge because it is influenced by a number of factors such as changes to interest rates or state fiscal conditions. As shown in table 5, most recent applicants we surveyed cited TIFIA's repayment terms and options, low interest rate, and ability to serve as subordinate debt as very or somewhat important in their decision to seek assistance in fiscal years 2010 and 2011. In addition to recent applicants we surveyed, other project sponsors, financial and legal advisors, and private concessionaires we interviewed consistently cited TIFIA's flexible terms as a major benefit of the program. According to DOT officials, the major benefits of TIFIA are that it can be a patient, flexible lender and can help a sponsor secure a portion of the project's lending to attract other financing.
For example, one project sponsor said that deferring payment for 5 years after substantial completion is very important for new toll road projects to allow time for usage to grow and thus revenues to ramp up after opening. Beyond favorable repayment terms, TIFIA assistance can be subordinate to other debt, meaning that this other debt may receive project revenue ahead of the TIFIA debt, except in the case of bankruptcy, insolvency, or liquidation. All six of the private concessionaires we interviewed said that this structure is a key benefit of the program, as it can help improve their ability to raise senior debt. Many project sponsors we spoke to also cited TIFIA's relatively low interest rate as a main benefit of the program. The interest rate of TIFIA assistance is based on U.S. Treasury securities of a similar maturity and, since 2008, these Treasury rates have been lower than municipal bond interest rates. To a lesser extent, recent applicants we surveyed cited several other factors as important in their decision to seek TIFIA assistance in fiscal years 2010 and 2011. (See table 6.) For instance, survey responses indicate that the TIFIA program can provide financing to projects that is unavailable in the financial markets, particularly for projects with unproven revenue streams. For example, one project sponsor we interviewed that received a TIFIA loan said that obtaining subordinate debt in the financial markets would have been prohibitively expensive, since the project was a new toll road. In addition, the ability of TIFIA to help accelerate the delivery of projects was also important among recent applicants. Officials from the Florida Department of Transportation estimated that its TIFIA loan helped accelerate completion of the Miami Intermodal Center by 10 years.
Two other project sponsors we interviewed said TIFIA plays an important role in accelerating not only the projects for which they received assistance but other major capital projects too, as TIFIA assistance helps free up funds for other projects. However, our survey also indicated that some states have neither sought nor plan to seek TIFIA assistance. In states where sponsors have never sought TIFIA assistance, the extent to which certain factors affected state DOTs' decisions not to seek credit assistance varied, but many of these state DOTs indicated that they have not submitted LOIs because they (1) do not have projects that meet the eligibility requirements, including the required cost threshold, (2) get financing from other source(s), or (3) have state restrictions on borrowing funds for transportation projects. (See table 7.) Regarding lack of eligible projects, one state DOT indicated in the survey that a lack of dedicated revenues or private investment prevents TIFIA from being a viable option for rural states now and in the future. Several financial and legal advisors we interviewed also said that some states lack projects that are large enough to benefit from the TIFIA program. For the TIFIA program, a sponsor must pay a $50,000 application fee if invited to apply after the LOI stage. Then, if selected to receive assistance, the TIFIA borrower must pay a transaction fee, typically between $300,000 and $400,000, to cover the costs incurred by DOT to negotiate and execute the credit agreement, like costs for external advisors. Borrowers can also incur additional costs from hiring their own advisors and obtaining a credit rating for the project. Therefore, the cost of applying for TIFIA may outweigh the benefits of TIFIA for lower-cost projects.
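The cost-versus-benefit point above can be sketched by comparing roughly fixed upfront fees against interest savings that scale with loan size. The fee levels are drawn from the ranges described above; the market rate, TIFIA rate, and advisor/rating cost are hypothetical assumptions:

```python
# Rough screen of when TIFIA's fixed upfront costs outweigh its benefit.
# Rates and the advisor/rating cost are hypothetical assumptions.

def upfront_costs(application_fee=50_000, transaction_fee=350_000,
                  advisor_and_rating_costs=250_000):
    """One-time costs a sponsor incurs to pursue and close a TIFIA loan."""
    return application_fee + transaction_fee + advisor_and_rating_costs

def annual_interest_savings(loan_amount, market_rate=0.045, tifia_rate=0.030):
    """Yearly savings from borrowing at an assumed lower TIFIA rate."""
    return loan_amount * (market_rate - tifia_rate)

# Savings scale with loan size while fees are roughly fixed, so a small loan
# may not recoup the fees even over time, while a large one does quickly.
for loan in (5_000_000, 100_000_000):
    covered = annual_interest_savings(loan) >= upfront_costs()
    print(f"${loan:,}: first-year savings cover fees? {covered}")
```

Under these assumed rates, a $5 million loan saves only $75,000 per year against roughly $650,000 in fees, while a $100 million loan saves $1.5 million in the first year alone, which is consistent with the observation that lower-cost projects may not benefit.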
Moreover, DOT officials as well as several financial advisors, a private concessionaire, and an industry association we spoke to said that the TIFIA program may be better suited to states with more urban populations and a greater need for large-scale projects. States with sponsors that have never sought TIFIA assistance tend to have a smaller portion of their population living in urban areas—that is, areas with a total population of 50,000 or more—than states with sponsors that have sought TIFIA assistance. DOT officials said that to date, TIFIA projects have been located in states with large urban areas that have major transportation needs and can more easily charge tolls or generate other project revenues. Based on our survey, it is unlikely that many of the states that have not sought TIFIA assistance will seek such assistance in the future. Of the 16 state DOTs from states that have never sought assistance that responded to our survey, only 1 indicated that it anticipated seeking TIFIA assistance in the next 5 years. In addition, most of these state DOTs indicated that changes to the program—such as making more funds available for the program or increasing the portion of project costs that TIFIA assistance could cover—would only somewhat increase, or have little to no effect on, the likelihood that they would seek TIFIA assistance. According to our interviews with DOT, project sponsors, advisors, and private concessionaires, overall demand for the TIFIA program is likely to continue. However, the magnitude of this demand is difficult to estimate because it is influenced by a variety of external factors like changes to interest rates, use of public-private partnerships, and state fiscal conditions. Changes to the TIFIA interest rate relative to municipal debt interest rates could considerably affect the demand for TIFIA credit assistance.
For the last 3 fiscal years, sponsors submitted LOIs for credit assistance totaling more than 10 times what the program’s current budget authority can support. Several legal and financial advisors we interviewed said that many project sponsors sought TIFIA in recent years because of depressed market conditions and attractive TIFIA interest rates, relative to interest rates on municipal debt, and a few of these advisors and one industry association said that demand for TIFIA will likely decrease if TIFIA interest rates become less attractive relative to municipal debt interest rates. The relatively low TIFIA interest rates made the program attractive to a greater number of sponsors, even those with access to other financing options. For example, one recent applicant we interviewed said that in the past, TIFIA was a more expensive finance option than issuing its own debt, and its interest in TIFIA during the last few years is primarily driven by the program’s relatively low interest rates. The applicant noted that should interest rates on TIFIA loans increase in the future, it will likely seek financing in the private capital markets. Two other factors will influence the demand for TIFIA assistance. Greater use of public-private partnerships and other alternative project delivery approaches could result in a greater demand for TIFIA credit assistance. Many private concessionaires we interviewed said that TIFIA is an important financing tool for public-private partnerships. According to DOT officials, TIFIA credit assistance has been part of the financing package for most large-scale public-private partnership projects in the United States in recent years. In addition, some states, like Colorado and Virginia, have set up offices to facilitate public- private partnerships, so sponsors in such states may be more likely to use this approach given this support. State-specific conditions will also influence the demand for TIFIA assistance. 
As federal and state fuel taxes may not be a sustainable long-term source of transportation funding, state DOTs may make greater use of finance tools like TIFIA to deliver projects. We have previously reported that state and local governments face persistent and long-term fiscal pressures. At the same time, estimates to repair, replace, or upgrade aging transportation infrastructure—as well as expand capacity to meet increased demand—top hundreds of billions of dollars. As a result, DOT anticipates more demand for the TIFIA program as states and localities look to leverage limited funds. One state DOT we interviewed, for example, said that pay-as-you-go funding—a more traditional means of funding transportation infrastructure whereby a sponsor builds projects in phases or increments as funds are available—no longer keeps pace with infrastructure needs. Therefore, the state DOT has turned to TIFIA to help finance big, high-cost projects that need federal assistance to advance. Looking ahead, 15 of the 42 state DOTs that responded to our survey indicated that they have projects for which they will likely seek TIFIA in the next 5 years. Most of these state DOTs (13) have sought TIFIA assistance in the past and indicated that they are likely to seek TIFIA for 1-5 projects, while a few indicated they are likely to seek TIFIA for 6-10 projects. With the pending reauthorization of the surface transportation programs, the tight budgetary environment, and the increase in demand for TIFIA, government and industry officials have proffered options to modify the program. We reviewed two surface transportation reauthorization bills—H.R. 7, the American Energy and Infrastructure Jobs Act as reported by the House Committee on Transportation and Infrastructure, and S. 1813, Moving Ahead for Progress in the 21st Century Act (MAP-21) as adopted by the Senate—to identify proposed changes to the program.
Based on our interviews with select project sponsors, financial and legal advisors, and others, as well as our survey of state DOTs and recent applicants, we identified several recurring options that have been proposed to modify the TIFIA program. Some options require congressional action to implement, while others would require DOT to change program-level policies. Each option has advantages and disadvantages, and thus implementing any of these options would require policy trade-offs. Moreover, some options could affect the overall demand for the program and the sphere of projects that could apply for or benefit from TIFIA. Table 8 provides a list of proposed options to modify the TIFIA program in the surface transportation reauthorization bills and the President's fiscal year 2013 budget. Two proposed changes, increasing the amount of authorized budget authority and allowing project sponsors to pay fees to contribute to the credit subsidy cost, could potentially allow the TIFIA program to provide more assistance to projects. Increase amount of authorized budget authority. Members of Congress, DOT, and others have proposed increasing the amount of authorized budget authority to cover the subsidy costs for the TIFIA program. Proposals vary from increasing this amount to $1 billion, as in the reauthorization bills, to a smaller increase of $500 million proposed in DOT's fiscal year 2013 budget. Congressional support for an increase in authorized budget authority for TIFIA is rooted in the program's ability to leverage funds and stretch federal dollars further than a traditional grant program. These proposals represent significant increases to TIFIA's current annual authorized budget authority of $122 million. Proposals to increase the amount of authorized budget authority for the TIFIA program occur during an austere federal budget environment. The Budget Control Act of 2011 places limits on discretionary spending for the next 10 fiscal years.
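The relationship between budget authority and the volume of assistance it can support follows from credit subsidy accounting: the appropriated amount must cover only the estimated subsidy cost of the loans, not their face value. A minimal sketch, assuming a hypothetical 10 percent subsidy rate (actual subsidy rates vary by project risk):

```python
# Under credit reform accounting, budget authority covers only the estimated
# credit subsidy cost, so each dollar of authority supports a larger face
# amount of loans. The 10 percent subsidy rate is a hypothetical assumption.

def supportable_credit(budget_authority, subsidy_rate=0.10):
    """Approximate face value of loans a given budget authority can support."""
    return budget_authority / subsidy_rate

# Current authorization and the two proposed levels, in dollars.
for authority in (122e6, 500e6, 1e9):
    print(f"${authority / 1e6:,.0f}M authority -> "
          f"${supportable_credit(authority) / 1e9:,.2f}B in loans")
```

Under this assumed rate, the current $122 million authorization would support roughly $1.2 billion in loan face value, which is why proposals to raise the authorization to $500 million or $1 billion imply a much larger program.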
As a result, an increase in one area of discretionary spending, like the TIFIA program, requires a decrease in another area of discretionary spending. Increasing the amount of authorized budget authority is strongly supported by recent applicants we surveyed as well as legal and financial advisors we interviewed. For example, 32 out of 36 recent applicants that responded to our survey strongly support expanding funding for the TIFIA program. An increase in funding would likely allow the program to provide more credit assistance, in terms of the number of projects receiving credit assistance or the amount of credit assistance provided to each project. An increase in funding could also allow the program to come closer to meeting the current demand for the program, which is more than 10 times what the current budget authority could support. However, DOT officials and other stakeholders told us that an increase in funding would need to be accompanied by an increase in administrative resources. According to project sponsors and other stakeholders, the TIFIA office has been very responsive and helpful, but a few said that response time has slowed in recent years. With increased funding, DOT would likely see an increase in the number of applications to review, credit agreements to negotiate, and credit agreements to monitor. DOT officials said they are prepared to adjust staffing levels in the event that Congress provides the TIFIA program with an increase in authorized budget authority as is proposed in the surface transportation reauthorization bills. Further, DOT officials said that an increase in TIFIA funding may require DOT to reexamine how it manages the program—such as how it selects projects and negotiates credit agreements—and issue new regulations. Allow sponsors to pay fees to contribute to the credit subsidy cost of assistance. H.R. 
7 would mandate that DOT allow project sponsors to pay fees to reduce the credit subsidy cost of assistance if DOT funds run out. According to DOT, current law allows but does not require DOT to let the approved sponsor pay a fee to reduce the credit subsidy cost of the project in the event that there is insufficient budget authority to fund credit assistance for a selected TIFIA project. Over the life of the TIFIA program, three project sponsors have paid fees to reduce the credit subsidy cost of their TIFIA assistance; all three cases occurred after the program became oversubscribed in fiscal year 2008. Among recent applicants we surveyed and project sponsors we interviewed, many supported this program change. Supporters of this option said that given the high demand for TIFIA credit assistance and limited budget authority, allowing project sponsors to pay fees to cover the credit subsidy cost when DOT’s budget authority runs out would allow more eligible projects to be built and reduce the oversubscription of the program. However, DOT previously decided against instituting this option more broadly through a pilot program in 2010. DOT officials told us that while allowing project sponsors to pay fees to cover the credit subsidy cost provides flexibility, especially when demand outpaces budget authority, it complicates the negotiation of credit agreements. While DOT would have to follow its subsidy estimation methodology to determine a project sponsor’s fee, the project sponsor may want to negotiate the fee. Project sponsors we interviewed said that for this option to work, DOT would need to provide them with more information on how the credit subsidy cost is calculated. Under FCRA, OMB is responsible for subsidy cost estimates. OMB may delegate this authority to the agency providing credit assistance, but the delegation should be based on the written guidelines or criteria developed by OMB. 
OMB retains the responsibility for and final approval of subsidy cost estimates. Given these complexities, this option may be difficult to implement, though it could be done relatively quickly. In addition, to the extent that DOT underestimates the initial subsidy costs and does not collect enough fees from borrowers, taxpayers will ultimately have to pay for any shortfalls. Allowing project sponsors to pay fees to cover the credit subsidy cost could remove the congressional limit on the size of the TIFIA program and thus increase the federal government’s exposure. According to DOT, SAFETEA-LU removed the cap on the amount of credit assistance the TIFIA program could provide each year, so the only limit on the TIFIA program’s size currently is the budget authority provided by Congress. DOT officials said that allowing project sponsors to pay the subsidy cost could allow the program to grow larger than Congress authorized through budget authority. DOT officials told us that for other DOT credit programs, such as the Railroad Rehabilitation and Improvement Financing (RRIF) program, project sponsors are required to pay fees toward the credit subsidies for loans because those programs do not have budget authority for this purpose; however, the RRIF program has a statutory limit on total outstanding credit assistance, which limits the government’s exposure to financial losses. Moreover, if this proposed change were adopted in combination with other proposed changes to the program—requiring the Secretary to approve all qualifying applications—the total size and exposure of the TIFIA program could expand dramatically. Increase the portion of eligible project costs TIFIA can cover. Another option in the reauthorization bills would increase the portion of eligible project costs TIFIA assistance could cover from 33 percent to 49 percent. Among think tank and industry group proposals, project sponsors, and other stakeholders we interviewed, support for this option varied.
Those that support increasing the TIFIA share said it would reduce the burden on sponsors to find nonfederal sources of debt and allow them to borrow more funds on favorable terms. For example, several project sponsors said that for very large infrastructure projects, finding a combination of federal, state, and private financing can be difficult. Those that do not support this option expressed concern that it would reduce the incentive to find private and other nonfederal financing and potentially reduce market discipline that comes from other lenders to projects. For example, several stakeholders we interviewed said that increasing the percentage of total project costs that TIFIA can finance could result in project sponsors substituting TIFIA credit assistance for private debt or private equity investments. Others expressed concern that increasing the share of costs TIFIA covers would potentially reduce the availability of TIFIA assistance, especially if Congress does not increase budget authority for the TIFIA program. Moreover, increasing the portion of costs covered by TIFIA could decrease the program’s ability to achieve one of its key goals—leveraging federal funds. DOT officials told us that changing the statute to increase the TIFIA share could reduce the number of projects supported (for a given amount of budget authority) and reduce the leveraging of federal funds as project sponsors seek more financing through TIFIA rather than other sources. Currently, DOT estimates that each $10 million in budget authority can provide up to $100 million in TIFIA credit assistance and leverage $300 million in transportation infrastructure investment. If the limit on TIFIA assistance were increased to 49 percent, this same amount of budget authority could leverage about $200 million in transportation infrastructure investment. 
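The leverage arithmetic DOT describes can be reproduced in a short sketch. This is an illustration only, not DOT's estimation methodology: it assumes a 10 percent credit subsidy rate (consistent with $10 million in budget authority supporting up to $100 million in loans) and that TIFIA covers the full statutory share of each project's cost.

```python
def leveraged_investment(budget_authority, subsidy_rate, tifia_share):
    """Estimate the credit assistance and total investment a given amount of
    budget authority can support, under simplified assumptions.

    subsidy_rate: credit subsidy cost as a fraction of the loan amount
                  (assumed 10 percent here; DOT's actual rates vary by project).
    tifia_share:  maximum fraction of total project cost TIFIA may cover.
    """
    credit_assistance = budget_authority / subsidy_rate  # loans supported
    total_investment = credit_assistance / tifia_share   # total project cost financed
    return credit_assistance, total_investment

loans, invest_33 = leveraged_investment(10e6, 0.10, 1 / 3)
_, invest_49 = leveraged_investment(10e6, 0.10, 0.49)
print(f"Credit assistance: ${loans / 1e6:.0f}M")              # $100M
print(f"Investment at a 33% share: ${invest_33 / 1e6:.0f}M")  # $300M
print(f"Investment at a 49% share: ${invest_49 / 1e6:.0f}M")  # $204M, roughly $200M
```

Under these assumptions, raising the maximum share from 33 to 49 percent cuts the investment leveraged per dollar of budget authority by about a third, which is the trade-off DOT officials describe.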
This change could also increase the exposure of the federal government to the risk of loan defaults if the size of the credit assistance for each project increases. Allow exceptions to the nonsubordination clause. The reauthorization bills propose exceptions to the nonsubordination clause. For example, the Senate reauthorization bill, S. 1813, allows exceptions to the nonsubordination clause for certain types of borrowers. Specifically, public agencies that are financing ongoing capital programs and have senior bonds outstanding could be exempt from the nonsubordination clause if (1) the outstanding bonds are rated A or higher, (2) the TIFIA assistance and outstanding bonds are secured by revenues not affected by project performance (e.g., sales tax), and (3) the TIFIA assistance is 33 percent or less of the total project costs. Among recent applicants we surveyed, 22 out of 36 strongly or moderately support allowing waivers to the nonsubordination clause. Several legal and financial advisors and other project sponsors we interviewed support removing the nonsubordination clause altogether. While there is general support for allowing waivers to or eliminating the nonsubordination clause, many of those we interviewed indicated that the clause does not pose an insurmountable challenge to negotiating a credit agreement, and that it provides needed protection for the federal government. Eliminating or waiving the nonsubordination clause could address some issues identified by financial advisors and credit rating agencies we interviewed. For example, the TIFIA nonsubordination clause can be difficult to integrate with existing terms for outstanding bonds secured by the same revenue stream. If the nonsubordination clause is triggered due to project bankruptcy or insolvency, project sponsors must make special arrangements to ensure this bond covenant is not violated.
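The springing-lien behavior discussed above can be illustrated with a toy priority model. This is purely a sketch of the mechanics, with hypothetical claim amounts; actual intercreditor arrangements are set out in each credit agreement.

```python
def claim_priority(tifia_claim, senior_claims, in_bankruptcy):
    """Toy model of the TIFIA springing lien.

    Normally the TIFIA lien is subordinate to senior lenders' liens on
    project revenues. On bankruptcy, insolvency, or liquidation of the
    obligor, the TIFIA claim "springs" to parity with senior creditors.
    Returns claims grouped into priority tiers, highest priority first.
    """
    if in_bankruptcy:
        # TIFIA rises to parity: a single tier shared with senior debt.
        return [senior_claims + [tifia_claim]]
    # Otherwise senior debt is paid first; TIFIA sits in a junior tier.
    return [senior_claims, [tifia_claim]]

# Hypothetical claims (in $ millions): two senior bonds and one TIFIA loan.
print(claim_priority(100, [200, 150], in_bankruptcy=False))  # [[200, 150], [100]]
print(claim_priority(100, [200, 150], in_bankruptcy=True))   # [[200, 150, 100]]
```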
However, despite these issues, many project sponsors and legal and financial advisors said that the nonsubordination clause provides an important protection to taxpayers. Moreover, few if any could point to instances where it prevented the closing of a credit agreement. DOT officials said that the nonsubordination clause helps protect the federal government and taxpayers. For the TIFIA program, the nonsubordination clause is used to lessen the risk to the federal government. While the nonsubordination clause can cause issues for borrowers, DOT officials said that they can work with borrowers to try to address financial difficulties before they must legally invoke the clause. For example, DOT can defer invoking the nonsubordination clause for up to a year after a missed payment, but to date no sponsor has missed a payment. In addition, DOT officials told us that removing the nonsubordination clause would increase the federal government’s risk because it would lower the likelihood of recovering funds. According to DOT officials, the nonsubordination clause facilitated its involvement in bankruptcy discussions for the South Bay Expressway and, as a result, DOT expects to recover, through the restructuring of the project’s debt and assumption of the loan by SANDAG, up to 100 percent of the original loan value. Further, DOT officials said that without the nonsubordination clause, the credit subsidy cost required for a project would increase significantly, because of the increased risk to the federal government, and thus reduce the amount of assistance the TIFIA program could provide. Modify selection criteria. Both reauthorization bills propose eliminating TIFIA’s selection criteria and adding to the current eligibility requirements. H.R. 7 would expand eligibility requirements to include creditworthiness, regional significance, beneficial effects, and project readiness. S. 1813 adds creditworthiness to the program’s current eligibility requirements.
Twenty-three of 36 recent applicants that responded to our survey support modifying the TIFIA selection criteria, but when asked how the criteria should be modified, these respondents most often indicated that they want more transparency in how selection criteria are applied. Project sponsors and advisors we interviewed said they would prefer more transparency in the evaluation of LOIs and a better explanation of how selection criteria are applied. Several project sponsors and advisors expressed concern about the definitions of some criteria—in particular, the livability and sustainability considerations—as well as how the criteria are applied to LOIs. However, altering or eliminating the selection criteria could modify the nature of the TIFIA program, changing it from a discretionary program where select projects receive assistance to more of an eligibility-based program where all eligible, creditworthy projects can receive assistance. DOT supports retaining the statutory criteria to use in selecting projects to receive credit assistance. In this way, the TIFIA program would continue to provide assistance to projects that meet DOT’s national transportation goals. DOT officials added that a project’s creditworthiness alone does not ensure that it will have positive transportation benefits. Further, a modification or elimination of the selection criteria could be implemented in several ways, each entailing different trade-offs. For example, one of the new eligibility requirements in H.R. 7 is “beneficial effects,” which collapses some existing statutory criteria and program goals—specifically, fostering public-private partnerships, attracting private debt or equity investment, enabling a project to proceed faster than without the credit assistance, and reducing federal grant assistance—into one category.
While fostering public-private partnerships, for example, is one of the selection criteria for the current TIFIA program, projects without a public-private component are still eligible to apply. Depending on how the beneficial effects eligibility requirement, if enacted, is defined and implemented, it could render some projects—including some that recently received credit assistance—ineligible. Return to an open application cycle. The reauthorization bills propose returning to an open application process and prohibiting a fixed-date solicitation. Several project sponsors as well as financial and legal advisors we interviewed support a return to an open application cycle. Some project sponsors said this would allow sponsors to seek TIFIA credit assistance according to a project’s schedule, rather than trying to alter this schedule to fit the annual TIFIA solicitation. One state DOT said that the projects applying for TIFIA credit assistance are very complex and must manage multiple timelines for various financing stakeholders, which is further complicated by TIFIA’s once-a-year solicitation. A few financial advisors and one project sponsor we spoke with also indicated that increasing the number of solicitations per year would be an improvement if DOT did not return to an open application cycle. Moreover, due to the fixed-date solicitation process, some project sponsors may be submitting LOIs for projects not yet ready to use TIFIA assistance. DOT previously reported that, based on its use of an annual, fixed-date application process from 1999 to 2001, project sponsors may have been applying for assistance prematurely in response to the limited application window. DOT switched to an open application process to allow sponsors to apply based on a project’s schedule.
For example, one recent applicant told us it submitted an LOI early, as it planned and obtained permits for the project, to familiarize DOT with the project and improve its chances of obtaining TIFIA credit assistance in the next few years. DOT officials told us the fixed-date application cycle is currently a necessity because of limited resources; however, if they had more funds to pay the credit subsidy costs for credit assistance, they would prefer an open application system that allows a sponsor to seek TIFIA assistance when it best fits a project’s schedule. Importantly, returning to an open application cycle would remove the competitive nature of the TIFIA program. If the TIFIA program’s authorized budget authority remains at current levels or does not meet total demand, a project’s order in line would determine whether it receives assistance, not its relative merit. Further, if this option were adopted, DOT would have to reconsider its current two-step selection process and determine the extent to which it has the discretion to distribute assistance based on geographic location, project readiness, or other factors not included in the statutory eligibility requirements. Until recently, the innovative credit assistance offered by the TIFIA program to finance the construction of large-scale surface transportation projects was underutilized. However, demand for the program surged, in part because of the tightening of commercial credit markets and low federal Treasury interest rates. TIFIA is becoming a more widely recognized approach for filling funding and financing gaps for complex transportation projects that can help to mitigate mobility and other transportation issues in many congested urban areas in the United States.
DOT, project sponsors, legal and financial advisors, and other stakeholders in the transportation industry have expressed strong support for the program, and Members of Congress have recently developed several reauthorization proposals aimed at greatly increasing the authorized budget authority for the program and modifying other aspects of the program to make it more accessible. DOT has taken some steps to monitor and assess the program through its project oversight and credit monitoring of individual TIFIA credit agreements and, early in the program’s tenure, by tracking and reporting on the private investment and leveraging effect of TIFIA to gauge its progress in meeting program goals. However, since that time, DOT has not publicly reported on these or other measures to assess the program as a whole. Without other measures in place going forward, Congress will not have the complete and aggregated data needed to make informed decisions about the program’s size and structure. Additionally, in response to increased demand for the program and multiple extensions of the surface transportation reauthorization over the last 3 years, DOT has had to adapt its process for selecting projects, focusing its review of projects on applicants’ LOIs and selecting projects based on their relative merits. The new process, whereby DOT balances a limited program budget authority with selecting projects that are most consistent with the statutory selection criteria, is a work in progress. In response to feedback from applicants and lessons from this and other discretionary programs, DOT has taken steps to make the TIFIA selection process transparent by publicizing the selection criteria and other factors that contribute to project selection and providing feedback to unsuccessful applicants, and many think these steps have been useful. 
However, many recent applicants and financial and legal advisors that assist applicants in developing projects still feel that the process lacks transparency, making it difficult for them to advance well-developed LOIs. While federal agencies rarely publicly disclose the reasons for their selection decisions in a competitive review process, the considerable demand for TIFIA and changes to the selection process suggest that publicly disclosing additional information about how selection decisions are made would better enable potential applicants to identify how DOT is using the statutory criteria to select projects and develop effective LOIs. To improve the implementation of the TIFIA program and enable Congress and DOT to better assess program performance, we recommend that the Secretary of Transportation further develop and define performance measures to monitor and evaluate progress toward meeting the program’s goals and objectives. To ensure that future project selections in the TIFIA program are transparent to Congress, applicants, and the public, we recommend that the Secretary of Transportation better disclose information, through notices of funding availability or other program guidance, regarding how DOT evaluates and selects projects. We provided a draft of this report to DOT for review and comment. In response, DOT said it would carefully consider the results of our review but did not take a position on whether it agreed with our recommendations. DOT told us that it objectively evaluates applications for TIFIA participation using comprehensive, data-driven processes to identify the most highly qualified projects, and that DOT encourages strong communication with applicants and offers transparent discussion of applicants’ submittals to ensure they are fully informed of the basis for program participation decisions. Further, the agency stated that it is continuously reevaluating its processes to ensure they are as effective as possible. 
The agency also provided technical comments, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation, the Administrator of the Federal Highway Administration, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or flemings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix IV. To address our objectives, we reviewed Department of Transportation (DOT) program guidance for the Transportation Infrastructure Finance and Innovation Act (TIFIA) program, relevant legislation and regulations, and DOT’s biennial reports to Congress on the TIFIA program. To describe the characteristics and results of the TIFIA program, we analyzed DOT data on past letters of interest (LOI) and applications for assistance to describe the projects that sought credit assistance. We also analyzed data on the projects receiving TIFIA credit agreements through April 2012 to describe these projects, including mode of transport, total cost, amount of TIFIA assistance, amount of private investment, and geographic location. For mode of transport, we used DOT’s available characterizations for all projects with credit agreements and for LOIs for fiscal year 2011, while for the remaining LOIs we determined the mode for projects by applying DOT’s characterization scheme. When considering the amount of private investment, we followed DOT’s convention established in its 2002 report to Congress on TIFIA. 
Namely, (1) the project must feature investor-held debt or equity and (2) the investment return must be derived from project-generated revenues or other revenues levied specifically to support the project. We only included active credit agreements—those for which sponsors had not repaid or refinanced their credit agreements—as we did not have complete information on the funding sources for all the retired credit agreements. We assessed the reliability of the data by reviewing DOT’s data documentation, interviewing knowledgeable officials, and conducting independent validation through use of our web survey. We found the data to be sufficiently reliable for our purposes. In addition, we interviewed DOT officials to learn about the program’s goals and the tools DOT uses or plans to use to track and evaluate the performance of credit agreements and the program. To describe and assess DOT’s process for evaluating and selecting projects to invite to apply in fiscal years 2010 and 2011, we examined legislation, regulations, and agency guidance, including notices of funding availability, to describe the statutory and regulatory criteria DOT uses to select projects for credit assistance. We also analyzed and summarized data and documents provided by DOT—including scores assigned and reviewers’ assessments of project letters of interest—and interviewed DOT officials to describe the decision-making processes used by the agency to select projects for credit assistance. We focused on federal fiscal years 2010 and 2011, the years for which DOT used a fixed-date competitive solicitation for projects after demand for credit assistance exceeded the program’s budget authority and for which the evaluation and selection processes were complete. 
To assess DOT’s process for selecting projects, we compared DOT’s process with statute, regulations, and guidance; GAO’s Standards for Internal Control in the Federal Government; and, as appropriate, past GAO work on federal credit assistance and grant programs. In addition, we gathered and analyzed data on state-level characteristics, such as federal highway apportionments and whether states have legislative restrictions on borrowing, to determine whether such characteristics were correlated with past demand for the TIFIA program. To explore the potential future demand for TIFIA credit assistance, we analyzed data from DOT on interest in the program in the last 2 fiscal years. To identify the options proposed to modify the TIFIA program, we reviewed reauthorization proposals for surface transportation programs from congressional committees, DOT, and industry and research organizations. We also interviewed a variety of stakeholders to inform our objectives. We interviewed select current and potential project sponsors (such as state DOTs and transit agencies) to learn about their experiences with the TIFIA selection process and the factors that influenced whether they sought TIFIA assistance. In particular, we interviewed project sponsors in the five states—California, Colorado, Florida, Virginia, and Texas—that constitute the majority of TIFIA awards to date, as well as two states that have had little or no experience with the program—North Carolina and Iowa—that varied in terms of geographic location and legislative authority to borrow and use public-private partnerships. In each of these states, we interviewed the state DOT and all project sponsors that received TIFIA credit assistance as of April 1, 2012.
In addition, we interviewed legal and financial advisors that help sponsors apply for TIFIA credit assistance and private concessionaires that invest in large infrastructure projects to learn about their experiences with the TIFIA program, including the selection process. We also interviewed credit rating agencies and industry associations, such as the American Association of State Highway and Transportation Officials (AASHTO) and the American Road and Transportation Builders Association (ARTBA), to learn about their experiences with the TIFIA program and to gain additional information about the types of projects that have sought or received TIFIA credit assistance. In our interviews, we also asked about the factors that would influence future demand for the program as well as options to modify the program and the potential trade-offs of implementing such changes to the TIFIA program. Table 9 lists the organizations we interviewed. In order to gather opinions of the TIFIA program from the users’ standpoint, we designed and administered a web-based survey. The survey was administered to the state DOTs in all 50 states, the District of Columbia, and Puerto Rico, as well as to all recent applicants that submitted an LOI to the TIFIA program in fiscal years 2010 and 2011. The survey population consisted of four unique groups of respondents: state DOTs from states from which no sponsor had ever applied to the TIFIA program; state DOTs from states from which a sponsor had applied to the TIFIA program but not in recent years—that is, 2010 and 2011; state DOTs that had recently applied to the TIFIA program; and other, non-state DOT organizations that had recently applied to the TIFIA program. Survey respondents were presented with different questions in the survey depending on their past experience with the TIFIA program and whether or not they were from a state DOT.
In general, the survey topics included the following: factors contributing to organizations’ decision to seek, or not to seek, TIFIA assistance; satisfaction with the process for submitting an LOI to the TIFIA program; opinions on proposed modifications to the TIFIA program; potential future demand for the TIFIA program; and characteristics of the state DOTs. In developing the survey, we took steps to ensure the accuracy and reliability of responses. We cognitively tested the survey with representatives from five state DOTs and one other organization included in the respondent population to ensure that questions were clear, comprehensive, and unbiased, and to minimize the burden the survey placed on respondents. On the basis of feedback from the six pretests we conducted, we made changes to the content and format of some survey questions. We obtained contact information for the survey recipients from two sources. First, we obtained contact information for the state DOTs from AASHTO, specifically, from its Standing Committee on Finance and Administration. Second, we obtained contact information for recent applicants from DOT. We also contacted all of the survey recipients in advance, by e-mail, to ensure that we had identified the correct respondents and to request their completion of the questionnaire. The survey was administered between January 25, 2012, and April 4, 2012. We distributed a link for the survey to the 83 organizations by e-mail and also subsequently e-mailed and telephoned nonrespondents to encourage a higher response rate. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data were analyzed can introduce unwanted variability into the survey results.
We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. Most of the survey questions included close-ended response categories; however, a few survey questions asked respondents to provide a written response to an open-ended question. When analyzing written responses, one analyst read the responses and assigned them to different categories, while a second analyst reviewed this categorization. We received completed surveys from 66 respondents for an overall response rate of 80 percent. The survey response rates for the four groups of respondents are presented in table 10 below: We conducted this performance audit from July 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOT has awarded 27 TIFIA credit agreements to projects through 26 loans and one loan guarantee. Table 11 provides information on each credit agreement, including the name and location of the project receiving assistance, the amount of credit assistance, and the status of the credit agreement. We distributed a survey to all state departments of transportation, as well as to the organizations that submitted a letter of interest for TIFIA assistance during federal fiscal years 2010-2011, to gain insight into their experience with and opinions regarding the TIFIA program. In total, the survey went to 83 recipients, and we received completed surveys from 66 of 83 recipients for a response rate of 80 percent. Tables 12-28 below show responses to questions from the survey related to the TIFIA program and project finance.
We also provided examples and definitions for certain terms used in the questions, which are reprinted below. Survey respondents were presented with different questions in the survey depending on their past experience with the TIFIA program, and whether or not they were from a state DOT. For example, we only asked organizations that submitted an LOI in 2010 or 2011 (recent applicants) about their experience with the TIFIA evaluation and selection process. For more information about our methodology for designing and distributing the survey, see appendix I. ITS stands for intelligent transportation system. The nonsubordination clause (also known in the context of the TIFIA program as the springing lien) means that the TIFIA lien on project revenues can be subordinated to those of senior lenders except in the event of bankruptcy, insolvency, or liquidation of the obligor. In such an instance, the TIFIA lien would rise to parity with senior creditors. This provision can be effected through a master trust agreement, an intercreditor agreement, or other agreement entered into at the time of execution of the credit agreement. Examples of other dedicated revenue stream(s) to repay TIFIA credit assistance may include pledged sales taxes, tax increment financing, and availability payments. 
The TIFIA eligibility requirements are (1) the project shall be consistent with the state transportation plan, if located in a metropolitan area shall be included in that area’s metropolitan transportation plan, and shall appear in an approved state transportation improvement program before the DOT and the project sponsor execute a term sheet or credit agreement that results in the obligation of funds; (2) the state, local servicer, or other entity undertaking the project shall submit a project application to the Secretary of Transportation; (3) a project shall have eligible project costs that are reasonably anticipated to equal or exceed the lesser of $50 million or 33 1/3 percent of the amount of federal aid highway funds apportioned for the most recently completed fiscal year to the state in which the project is located (in the case of a project principally involving the installation of intelligent transportation systems (ITS), eligible project costs shall be reasonably anticipated to equal or exceed $15 million); (4) project financing shall be repayable, in whole or in part, from tolls, user fees or other dedicated revenue sources; and (5) in the case of a project that is undertaken by an entity that is not a state or local government or an agency or instrumentality of a state or local government, the project that the entity is undertaking shall be included in the state transportation plan and an approved State Transportation Improvement Program. The TIFIA selection criteria are (1) national or regional significance (including consideration of livability, economic competitiveness, and safety), (2) private participation, (3) environment (including consideration of sustainability and state of good repair), (4) project acceleration, (5) credit worthiness, (6) use of new technology, (7) consumption of budget authority, and (8) reduced federal grant assistance. 
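Eligibility requirement (3) above reduces to a simple threshold formula. The sketch below is illustrative only: the apportionment figures in the example are hypothetical, and the function ignores other details of the statute.

```python
def tifia_cost_threshold(state_apportionment, its_project=False):
    """Minimum eligible project cost under TIFIA eligibility requirement (3).

    state_apportionment: federal-aid highway funds apportioned to the state
    for the most recently completed fiscal year (hypothetical values below).
    ITS projects instead face a flat $15 million floor.
    """
    if its_project:
        return 15_000_000
    # Lesser of $50 million or 33 1/3 percent of the state's apportionment.
    return min(50_000_000, state_apportionment / 3)

# Hypothetical apportionments for illustration:
print(tifia_cost_threshold(600_000_000))          # 50000000 (capped at $50M)
print(tifia_cost_threshold(90_000_000))           # 30000000.0 ($30M threshold)
print(tifia_cost_threshold(0, its_project=True))  # 15000000 (ITS floor)
```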
Examples of user fees to repay TIFIA credit assistance may include tolls and rental car customer facility charges. In addition to the contact named above, Susan Zimmerman, Assistant Director; Sarah Arnett; Carl Barden; Marcia Carlsen; Carol Henn; Bert Japikse; David Lin; Joanie Lofgren; Ruben Montes de Oca; Josh Ormond; Amy Rosewarne; Andrew Von Ah; and Elizabeth Wood made key contributions to this report.
Created in 1998, the TIFIA program is designed to fill market gaps and leverage substantial nonfederal investment by providing federal credit assistance to help finance surface transportation projects including highway, transit, rail, and intermodal projects. Since 2008, demand for the program has surged, annually exceeding budget resources for the program by a factor of more than 10 to 1. Given the increased demand and recent proposals to expand and modify the program, GAO was asked to review (1) the characteristics of TIFIA projects and how DOT tracks progress toward the program’s goals, (2) the process DOT used to evaluate and select projects that submitted LOIs to apply for credit assistance in fiscal years 2010 and 2011, (3) the factors that affect project sponsors’ decisions about whether to seek TIFIA credit assistance, and (4) the options proposed to modify the program. GAO reviewed laws and program guidance; interviewed DOT officials, project sponsors, and advisors involved in procuring credit assistance; and surveyed all state departments of transportation and other recent applicants about the TIFIA program. Projects that received credit assistance through the Transportation Infrastructure Finance and Innovation Act (TIFIA) program, administered by the Department of Transportation (DOT), tend to be large, high-cost highway projects. As of April 2012, DOT has executed 27 TIFIA credit agreements for 26 projects with project sponsors such as state DOTs and transit agencies.
Overall, DOT has provided nearly $9.1 billion in credit assistance through 26 loans and one loan guarantee. By mode, there are 17 highway, 5 transit, and 4 intermodal projects. Most projects have a total cost of over $1 billion. DOT monitors individual credit agreements but does not systematically assess whether its TIFIA portfolio as a whole is achieving the program’s goals of leveraging federal funds and encouraging private co-investment. DOT has identified goals and objectives for the TIFIA program, but its limited use of performance measures makes it difficult to determine the degree to which the program is meeting these goals and objectives. Given that DOT already collects project data, it could use these data to better evaluate the program’s overall progress toward meeting its goals. In fiscal years 2010 and 2011, DOT used a competitive two-step process to evaluate and invite projects to apply for TIFIA credit assistance to address the considerable increase in demand for the program. First, a multimodal team scored and grouped letters of interest (LOI) using statutory criteria. Second, a group of senior DOT staff reviewed the LOIs based on the criteria and other factors, like available budget authority, and invited a subset to apply—the next step in securing TIFIA assistance. While recent applicants were satisfied with many aspects of the process, they also indicated, along with legal and financial advisors, that the selection process lacks transparency and creates uncertainty in their ability to implement projects. For example, some recent applicants told us it is difficult to understand what characteristics DOT uses to measure how well a project meets each criterion. DOT officials said the agency is taking steps to improve its evaluation process, but since many of the changes were initiated in 2012, it is too soon to tell if they will address recent applicants’ concerns. Several factors influence whether project sponsors seek TIFIA assistance.
More than 30 of 36 recent applicants we surveyed cited TIFIA’s repayment options (like deferring repayments for 5 years after project completion), low interest rate, and flexible structure (i.e., ability to subordinate TIFIA repayment) as important in their decision to seek assistance. To date, sponsors from 17 states have never sought TIFIA assistance. State DOT respondents from these states cited various reasons for this, including lack of eligible projects and state-imposed borrowing restrictions. Many of these state DOTs indicated that regardless of options for modifying the program, they have no plans to seek TIFIA assistance. Several options to change the TIFIA program have been proposed by, among others, Congress and DOT; these options include increasing the program’s funding, increasing the portion of costs that may be covered by TIFIA from 33 percent to 49 percent of project costs, and modifying the selection process. Each option has advantages and disadvantages and, if adopted, some could alter the original goals of the program—to leverage public funds and encourage private co-investment. GAO recommends that DOT develop and use program performance measures to better assess progress in meeting TIFIA’s goals and objectives. DOT should better disclose information on how it selects projects to apply for TIFIA assistance through program guidance or other means to help ensure that the program is more transparent to Congress, applicants, and the public. DOT said it would consider the study’s results.
In recent years, the Congress passed two pieces of legislation intended, in part, to foster greater coordination between education, welfare, and employment and training programs. The Workforce Investment Act was passed in 1998 to consolidate services of many employment and training programs, mandating that states and localities use a centralized service delivery structure—the one-stop center system—to provide most federally funded employment and training assistance. The Temporary Assistance for Needy Families block grant, which was created 2 years earlier by the 1996 Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) and replaced the Aid to Families with Dependent Children (AFDC) Program, gave states greater flexibility to design employment and training services for clients receiving cash assistance. While TANF is not one of the federal programs mandated to provide services through the one-stop system, states and localities have the option to include TANF as a partner. For over 30 years prior to TANF and WIA implementation, states’ welfare and workforce development systems collaborated at some level to provide employment and training services to welfare clients. These efforts began in 1967 with the Work Incentive (WIN) Program’s requirement that states administer employment and training programs for their welfare clients. WIN’s successes were limited, according to critics, largely because the program lacked coordination between welfare agencies and local employment and training agencies. WIN was replaced in 1988 when the Family Support Act created the Job Opportunities and Basic Skills (JOBS) Program to provide welfare clients with a broad range of services, including education and training services.
Unlike WIN, which had a clear and formal role for the workforce development system, JOBS was to be administered or supervised by the welfare agency, but could be coordinated with existing employment, training, and education programs within each state. Our previous work shows that workforce development programs like the one created by the Job Training Partnership Act (JTPA) played a key role in providing services to welfare recipients. In fact, welfare agencies could contract with these existing programs to provide JOBS services, which some state welfare agencies did. Collaboration efforts continued between 1987 and 1996, a period during which states were allowed to further experiment with their AFDC and JOBS programs as HHS began allowing waivers to provide states with more flexibility. States often used these waivers to strengthen work requirements for welfare clients and to try new ways of delivering services to welfare clients, sometimes using the workforce development system. With the enactment of PRWORA and the creation of the TANF block grant in 1996, states were given more flexibility than they had under predecessor programs to determine the nature of financial assistance, the structure of their cash assistance programs, the types of client services provided, and how services are delivered. TANF also established new accountability measures for states—focused in part on meeting work requirements— and a 5-year lifetime limit on federally funded TANF cash assistance. These measures heighten the importance of helping TANF clients find work quickly and retain employment. As states have focused more on this goal of helping TANF clients obtain employment, the importance of coordinating services has received increased attention. To help clients get and retain jobs, states need to address clients’ work-related needs through services such as job search and job readiness, as well as child care and transportation assistance. 
Frequently, addressing these issues requires those who work directly with welfare clients to draw on other programs to provide a wide array of services. While local welfare agencies administer cash assistance and sometimes Food Stamps and Medicaid, housing authorities, education agencies, and state Employment Services offices often administer other programs that provide key services to TANF clients. In addition, PRWORA broadened both the types of TANF services that could be contracted and the types of organizations that could serve as TANF contractors, and therefore nongovernmental agencies are often involved in the provision of services to TANF clients. During welfare reform, states were also experimenting with better ways to coordinate employment and training services, often using one-stop centers. Labor’s efforts to coordinate service delivery began in fiscal year 1994, when it awarded One-Stop Planning and Implementation grants to some states. These grants required that most Labor-funded programs be included in one-stop centers, which were intended to integrate services in order to create a customer-driven system that was accountable for outcomes and available to all job seekers. When WIA was enacted, all local areas nationwide were required to use the one-stop system to provide the majority of federally funded employment and training services. WIA extended the one-stop concept beyond Labor programs, requiring states and localities to form partnerships with other agencies offering employment and training services. Seventeen categories of programs, funded through four federal agencies—the Departments of Labor, Education, Health and Human Services, and Housing and Urban Development—must provide services through the one-stop center system under WIA. While TANF is not one of the 17 federal programs mandated to provide services through the one-stop system, states and localities have the option to include TANF as a partner.
WIA emphasizes state and local flexibility and does not require that all program services be provided on site, as they may be provided through electronic linkages with partner agencies or by referral, but WIA does require that the relationships and services be spelled out in a Memorandum of Understanding (MOU). Other recent legislation has also attempted to strengthen the relationship between welfare and workforce development agencies. For example, in the Balanced Budget Act of 1997, the Congress authorized welfare-to-work (WtW) grants to be administered through the workforce development system. These grants were awarded by Labor to states and were intended to help hard-to-employ persons receiving TANF cash assistance and noncustodial parents of minor children in families receiving TANF cash assistance obtain employment. Forty-four states have received formula grants and 191 competitive grants have been awarded to 189 entities. States have until fiscal year 2004 to spend these funds. WtW’s inclusion as one of the mandatory partners in one-stop centers under WIA encourages welfare and workforce agencies to coordinate. Nearly all states reported coordinating TANF and WIA services at the state or local level, and some of these coordination efforts increased between 2000 and 2001. Coordination between state TANF and WIA agencies increased slightly in 2001 and ranged from formal methods, such as MOUs, to informal methods, such as information sharing. In addition to these methods, states increasingly used TANF funds to support the operations or the infrastructure of their one-stop systems. Some of the largest gains in coordination occurred at the local level, particularly in the use of informal linkages, such as periodic program referrals. Other methods used by local areas included both formal linkages, such as financial agreements between a local TANF agency and the one-stop center, and coordinated planning. 
In addition, many localities coordinated the provision of services for TANF clients through one-stop centers, either by colocation or electronic linkages and client referrals, and these efforts increased in 2001. Although many states and localities coordinate TANF services with one-stop centers to some extent, some still provide services for TANF clients outside of one-stop centers. Most states reported some level of coordination between state agencies administering TANF and WIA, and coordination efforts increased slightly between 2000 and 2001. Coordination methods used by the states ranged from formal linkages, such as MOUs, to informal methods, such as information sharing. Twenty-eight states reported that they made extensive use of formal linkages, such as MOUs and state-level formal agreements, between the agencies administering TANF and WIA in 2001, compared with 27 states in 2000. Similarly, there was a slight increase in the states’ use of coordinated planning in 2001, with 19 states reporting that they used it to a great extent, compared with 18 states in 2000 (see fig. 1). In addition, 17 states reported using more coordination methods to a great extent in 2001. Moreover, 9 states used all five of the coordination methods that we analyzed—up from 7 states in 2000. Increased coordination between TANF and WIA programs was also seen in the use of TANF funds to support one-stop center infrastructure or operations. The number of states using TANF funds to support one-stop centers increased to 36 in 2001 from 33 in 2000. In addition, the number of states ranking TANF as one of the three largest funding sources for their one-stop centers rose to 15 in 2001 from 12 in 2000. Sometimes TANF employment and training funds were completely transferred to the state workforce agency to provide all employment and training services to TANF clients in the state. 
For example, in both Michigan and Connecticut, all TANF employment and training funds were allocated to the state workforce agencies, which took responsibility for providing all employment and training services to TANF clients through the one-stops. In other states, the state TANF agency retained responsibility for TANF employment and training funds, transferring only a portion to the workforce agency, sometimes on a contractual basis. For example, in New Jersey, the state TANF and WIA agencies established a contract that directed a portion of TANF funds to the state Department of Labor to be used for providing employment-related services to TANF clients at the one-stops; the remaining funds were retained by the TANF agency and distributed to local areas at the TANF agency’s discretion. In addition, states sometimes established other formal or informal relationships to further strengthen the coordination between TANF and WIA agencies. For example, in Texas, the Texas Workforce Commission and the Health and Human Services Commission are required to jointly develop and adopt a formal MOU, providing for coordinated case management of hardest-to-serve TANF clients. In California, the relationship between the two agencies often took more informal forms, with TANF and WIA agencies participating in joint planning efforts, workgroups that focused on service duplication, and policy groups that addressed pertinent operational issues affecting both agencies. Local-level coordination of TANF-related services with one-stop centers also increased between 2000 and 2001, with some of the most dramatic changes occurring in the use of informal linkages between local TANF agencies and one-stop centers. In addition to these methods, local one-stops were increasingly providing services to TANF clients by colocation or electronic linkages and referrals. Besides TANF- and WIA-funded services, many local areas also provided WtW services to TANF clients through the one-stop system.
Some of the largest gains in program coordination between 2000 and 2001 were seen at the local level, with the most dramatic changes occurring between local TANF agencies and one-stop centers in informal linkages, such as periodic program referrals or information services. Forty-four states reported that most of their one-stop centers had informal linkages with their TANF programs in 2001, compared with 35 states in 2000 (see fig. 2). Similarly, 16 states reported that most of their one-stop centers had shared intake or enrollment systems in 2001—up from 13 in 2000, and 15 states reported in 2001 that they used an integrated case management system in most of their one-stop centers—an increase of 1 state from our 2000 results. Also, more coordination methods were in use at local one-stops. The number of states that reported that most of their one-stop centers used all seven methods of local-level coordination increased to 10 states in 2001 from 7 in 2000. Increases in coordination between TANF services and one-stop centers were also seen in the use of the one-stop system to provide services to TANF clients. Localities increasingly coordinated the provision of services to TANF clients through local one-stop centers—either through colocation of services at the one-stop or through electronic linkages and client referrals to providers outside the one-stop. Moreover, the number of states with services colocated in at least some of their local one-stop centers increased between 2000 and 2001 (see fig. 3). For example, the number of states with TANF work services colocated in at least some of their one-stops increased to 39 in 2001 up from 32 in 2000. Moreover, of the 18 states in 2000 that did not have TANF work services colocated in any of their one-stops, 8 had colocated TANF work services at some or all of their one-stops by 2001.
While the same number of states—24—reported in both 2000 and 2001 that TANF work services were colocated at the majority of their one-stops, the use of electronic linkages or referrals increased. Fifteen states reported in 2001 that work-related services for TANF clients were either electronically linked to the majority of their one-stop centers or provided by referring clients from the one-stop to services located outside the one-stop, while 11 states reported these types of linkages in 2000. A variety of TANF work services were available at the one-stops. These services included job search and registration, skills enhancement, vocational training, assistance in developing individual employability plans, and case management geared toward addressing barriers to employment. For example, in local areas that we visited in New Jersey, clients came to the one-stop to participate in job readiness courses and self-paced adult education curricula, or to receive assistance with résumé writing and job interviewing skills. A local area in Connecticut provided TANF clients at the one-stops with an opportunity to take part in on-site recruitment by local employers. Sometimes states instituted policies to further strengthen the relationships between the programs and ensure that clients were connected to work services at the one-stop centers. In Michigan and Texas, for example, TANF clients were required to attend an orientation session at the one-stop before they could receive cash assistance. Similarly, in Connecticut, because of low participation rates for TANF clients at one-stop centers, the legislature enacted a law requiring TANF clients to use one-stop centers as a condition of receiving cash assistance. In addition to TANF work services, states also increasingly coordinated TANF cash assistance, Food Stamps, and Medicaid programs with the one-stop centers.
Colocation of cash assistance increased in 2001—16 states reported that they provided cash assistance services at least part time at the majority of their one-stop centers, compared with 9 states in 2000. Colocation of Food Stamps and Medicaid also increased. For example, although 7 states in both years reported that they conducted Medicaid eligibility at the majority of their one-stops, the number of states reporting that Medicaid eligibility was conducted in at least some of their one-stops increased to 20 in 2001 from 14 in 2000. For Food Stamp eligibility, 10 states reported providing this service at the majority of their one-stops in 2001, up from 7 states in 2000. Moreover, the number of states with Food Stamp eligibility conducted in at least some of their one-stops was 26 in 2001, up from 16 states in 2000. When states did not colocate services, they sometimes coordinated them by using electronic linkages or by referral. About half of the states coordinated their TANF cash assistance or Food Stamps or Medicaid programs with the one-stop centers, electronically or by referral, in 2000 and 2001. In 2001, Food Stamp eligibility was available electronically or by referral at the majority of one-stops in 29 states, and Medicaid eligibility was available in the same manner at the majority of one-stops in 27 states—up from 26 and 24 states, respectively, in 2000. For example, state officials in both Connecticut and New Jersey reported that even though one-stop staff did not determine eligibility for Medicaid and Food Stamps at the one-stops, the staff were expected to refer clients to appropriate support services outside of one-stop centers. Although colocation, electronic linkages, and referrals were all used to serve TANF clients through the one-stops, in general, the form of coordination between TANF programs and one-stop centers varied depending on particular services provided. 
For example, when TANF work services were coordinated through the one-stop centers, they were more likely to be colocated. TANF cash assistance and the Food Stamps and Medicaid programs were more likely to be connected with one-stop centers electronically or by referral (see fig. 4). We also saw wide variation in the degree to which other support services, such as child care and transportation, were provided through the one-stop system. For child care assistance, the forms of coordination included colocation of child care programs at the one-stop as well as the provision of information on child care services available elsewhere. In New Jersey, for example, representatives from child care assistance programs were colocated at some of the one-stop centers, whereas in Arizona, coordination was limited to child care information brochures on display at one-stop centers. Officials reported that in one county in New York, WIA funds were used to provide daycare vouchers to TANF clients. Many of the one-stops that we visited provided some kind of transportation assistance, although the nature of the services and whether or not the services were reserved for TANF clients varied from locality to locality. For example, in one location in New Jersey that we visited, the one-stop center reimbursed any low-income client attending training for transportation expenses, whether or not the client was covered under TANF. Another New Jersey one-stop provided van services to transport former TANF clients to and from job interviews and, once clients were employed, to and from their jobs, even during evening and night shifts. Similarly, in a one-stop in Connecticut, current and former TANF clients could receive mileage reimbursement for their expenses associated with going to and from their jobs. And in Louisiana, a one-stop we visited contracted with a nonprofit agency to provide van services to transport TANF clients to and from work-related activities.
Other support services were sometimes provided through the one-stop as well. For example, under an agreement between human service and WIA officials in one local area of Tennessee, TANF clients are referred to the workforce agency where caseworkers work with them to identify needed support services, such as dental care and auto repair, and connect the TANF clients with providers of those services. In some states, TANF clients were served at the one-stops through the use of Labor’s WtW grant program—a mandatory partner at the one-stops under WIA. Some state and local officials said that the WtW program helped promote local-level coordination between welfare and workforce agencies, a finding that we reported in our earlier work. Although work-related services for TANF clients were available both through the one-stop centers and outside of them—sometimes using a variety of funding streams—the hardest-to-employ TANF clients were increasingly accessing services at the one-stops through the WtW program. In 2001, 42 states had WtW services colocated in at least some of their local one-stops, compared with 34 states in 2000. In addition, states reported that WtW services were physically located at the majority of one-stop centers in 31 states in 2001, up from 27 states in 2000. Some WtW services included assistance given to clients in developing Personal Responsibility Plans, helping the hardest-to-serve clients prepare for job interviews, or following up with TANF clients who recently entered the workforce. Through the WtW program, local areas in Louisiana placed state Labor staff administering the program in social services offices across the state to assess TANF clients’ eligibility for WtW and refer eligible clients to the one-stops for appropriate services. Sometimes WtW grants were also used to provide support services to current or former TANF clients at the one-stops, including child care, transportation, and other assistance.
For example, a local one-stop that we visited in Arizona used the WtW grants for a Sick Child Care Program, an initiative that, under a contract with a local nonprofit organization, provides for nurses to be sent to the homes of TANF clients with sick children, thus enabling them to participate in work-related or training activities. A local one-stop that we visited in New Jersey used WtW funds to establish an Individual Development Account Program whereby clients transitioning into the workforce could save money matched by the one-stop for a work-related purpose, such as purchasing a car to get to the workplace. The same one-stop also used WtW funds in employing an outside financial services company to help those who recently left TANF for employment apply for their Earned Income Tax Credit. Some officials expressed concerns about the ability of local one-stops to continue providing work-related services to TANF clients once all states’ WtW funds expire. For example, officials reported that in one state, where local TANF offices previously referred TANF clients to the one-stops as part of the state’s WtW program, few referrals have been made since the depletion of WtW funds. In California, where WtW funds are sometimes the only funding source available to serve TANF clients at the one-stops, one county is currently developing a formal transition plan to provide services to TANF clients at the one-stops using WIA funds after WtW funds expire. A California state official told us, however, that the expectation in other areas is that no other funding sources will be available to serve this population and that clients will have to be sent back to be served by separate TANF agencies. Despite increased coordination of TANF work services through the one-stops, many states and localities still provided services to TANF clients outside of one-stop centers at separate TANF offices.
However, the number of states not coordinating any work services to TANF clients through the one-stops—either by colocation or electronic linkages and referrals—declined between 2000 and 2001. While 12 states in 2000 reported that they were not providing TANF work services through any of their one-stop centers, the number had declined to only 4 states in 2001. Some states—Indiana, Maryland, and Mississippi, for example—offered a full range of employment and training services to clients through their local TANF agencies, which were located in every local area. In other states, separate TANF agencies were maintained even though some work services were still coordinated through the one-stops. For example, in Alabama, where work services were available through the one-stops by means of electronic linkages or referrals, clients received all employment and training programs at county welfare offices where they could also access all needed support services. Similarly, in Louisiana, each parish had an Office of Family Support where TANF clients received employment and training assessments, counseling, and referrals. A variety of conditions—including historical relationships, geographic considerations, adequate facilities, and different perspectives on how best to serve TANF clients—influence how states and localities choose to coordinate services with one-stop centers. States are affected differently by these conditions. While these conditions sometimes facilitated states’ coordination efforts, other states faced with similar conditions found coordination difficult. Although research has shown that a variety of conditions influence coordination efforts, it has not clearly examined how coordinated service delivery through one-stops affects TANF clients’ outcomes. A variety of conditions continue to affect how states and localities coordinate TANF services with one-stop centers.
The nature of historical relationships between welfare and workforce agencies at the state and local level, specifically agencies’ experience in working with each other in the past, often sets the stage for the level of present coordination. Geographic considerations, such as variations in layout of agency service districts, physical distance between one-stop centers and welfare offices, and the number of TANF clients in a given area, can also affect how states and localities coordinate services. The availability of adequate facilities can also influence state and local coordination efforts. In addition, welfare and workforce agencies often have different perspectives on how to best serve TANF clients. While some states and localities have had success in using the flexibility afforded them under WIA and TANF to coordinate in spite of these conditions, others lack information on the coordination efforts of other states and localities. Although there is some “promising practices” information currently available on selected websites, it is not generally organized in a way that allows readers to readily obtain information on coordinating services. The existing level of coordination between TANF services and one-stop centers is often a reflection of how state and local agencies have worked with each other in the past. Some officials said that their efforts to coordinate TANF services with one-stop centers have been complicated by state and local agencies’ lack of experience working together, which sometimes resulted in a lack of trust between agencies. For example, some officials reported that coordination was difficult because, historically, there has been little cooperation between workforce and welfare agencies in their state. Some states that had previously coordinated other employment and training programs among multiple agencies, noted that this experience made coordination of TANF services with one-stop centers easier. 
For example, in Idaho, the state Department of Labor invited the state’s welfare agency to join a focus group on coordination as early as 1992, and a TANF representative has served on the state management team for workforce development since their earliest one-stop implementation efforts. Also, officials in Illinois reported that TANF staff regularly attended JTPA meetings in the past and have been involved with WIA since it was implemented, laying the groundwork for coordinating TANF services with one-stop centers. Local areas sometimes have found ways to creatively coordinate services even in states where state agencies had little experience working together. For example, although TANF clients in Louisiana access TANF services outside of one-stop centers, staff at a local one-stop we visited reported that they worked closely with parish welfare staff to ensure that TANF clients were aware of the full range of services available at the one-stop. According to local officials, the mutual commitment between welfare and workforce officials enabled them to work together to meet the needs of all clients. In Arizona, where state welfare and workforce agencies operate services for TANF clients outside the one-stops, a local one-stop has regularly organized job fairs in conjunction with welfare staff since the implementation of WIA. Various geographical considerations can affect how TANF services are coordinated with one-stop centers. In some states, the layout of agency service districts, physical distance between one-stop centers and welfare offices, and the number of TANF clients in a given area have affected the extent of coordination. For example, HHS regional office personnel reported that West Virginia social service agencies were reluctant to coordinate with one-stop centers because service districts for TANF and WIA were not the same, and TANF officials did not always know what local workforce investment areas encompassed their agency. 
Other states’ efforts to coordinate services were limited by the lack of one-stop centers within the state. For example, officials in Alabama reported that, although welfare agencies were located in every county, one-stop centers were not. For this reason, they believed that the existing one-stops could not accommodate all TANF clients in the state. In addition, other state efforts to coordinate services were limited by the decline of the TANF population, which resulted in a small number of TANF clients in some areas. For example, in Illinois, where caseload declines had left few TANF recipients in some areas, state officials stressed the importance of allowing local areas the flexibility to determine when and how to coordinate TANF-related services with one-stop centers. These geographic considerations can also encourage state and local coordination efforts. HHS regional office personnel reported that smaller states with only one local workforce investment area believed that the small size of the state encouraged the coordination of services. Existing research has confirmed that locating one-stop centers near facilities where other TANF services are offered to clients facilitated coordination. In addition, officials at a local one-stop in Connecticut reported that having a social service office and the one-stop center located on different floors in the same facility made it easier for agencies to communicate with each other and for clients to get services. Other states have located one-stop centers in areas that are more accessible to TANF clients in order to make coordination beneficial for them. Both New Jersey and Louisiana have established plans to create satellite one-stop centers in public housing areas. The New Jersey Department of Labor has a contract with a local housing authority to establish an on-site employment center for serving WtW-eligible TANF clients residing on the premises of the housing authority. 
The New Orleans workforce investment board is also in the process of locating seven satellite one-stop centers in housing projects within the city limits. Both efforts were undertaken to improve TANF clients’ access to one-stop centers, which in turn encourages greater coordination between the local workforce and welfare agencies. Availability of adequate facilities can shape how states and localities coordinate TANF services with one-stop centers. Officials in several states reported that coordination efforts were hampered because available space at one-stop centers was limited and the centers could not house additional programs or service providers. For example, in a local Louisiana one-stop, staff were unable to colocate more partners because they did not have space to accommodate additional providers. In addition, state officials explained that long-term leases often prevented relocation of TANF services to one-stop centers because agencies administering those services could not afford to incur the cost of breaking those leases in order to move to one-stops. Other states facing similar limitations in facilities have developed alternatives, such as rotating welfare staff to one-stop centers or locating workforce staff in welfare offices. For example, in order to help TANF clients access employment and training and to link them to one-stop centers, the Louisiana Department of Labor located a WtW representative in most local welfare offices. WtW staff provided key information to TANF clients about services available at one-stop centers. Officials’ perspectives on how best to serve TANF clients can affect whether TANF services will be offered in one-stop centers. While some believe TANF clients are best served in separate social service facilities, others consider that coordination through the one-stop is more beneficial. 
Some officials argued that TANF clients who have multiple barriers to employment might not receive priority of service in a one-stop center environment. As a result, officials in some states were hesitant to coordinate services for TANF clients with one-stop centers because they believed that the needs of TANF clients were better served in social service facilities by staff trained to meet their specific needs. For example, HHS regional representatives reported that Rhode Island social service officials believe TANF clients often need exposure to pre-employment experiences such as English language services—not always available at one-stop centers—before they can fully benefit from the work-related programs at the one-stops. Also, state officials in Washington reported that TANF clients need a higher level of supervision and more structured assistance than they believe one-stops can provide in order to help clients maintain participation in the program and achieve desired outcomes. According to several HHS regional officials, some states are concerned that it may be difficult for TANF clients to access all support services (especially child care, substance abuse counseling, and transportation) through the one-stops. Other states told the HHS regional officials that they were hesitant to coordinate TANF services with one-stop centers as long as other needed support services continued to be provided outside that structure. HHS regional officials said that officials in other states reported that coordinating TANF services with one-stop centers was beneficial to TANF clients, and services were structured accordingly. HHS regional officials reported that some state officials believe that, because workforce staff have more experience in getting people into jobs, exposing TANF clients to one-stops would better prepare them for work. 
For example, welfare officials in Georgia supported the coordination of TANF services with one-stop centers because they believed that TANF clients would benefit from the workforce expertise of one-stop staff. HHS officials said that other states also agree that TANF clients have access to a greater array of employment and training services at one-stop centers and that early contact with these services can help ensure continued access to services once TANF clients no longer receive cash assistance. Other officials reported that provision of services for TANF clients through one-stop centers encourages program staff to be more aware of other services available in both welfare and workforce systems. While research shows that a variety of conditions influence whether, and how, states and localities choose to coordinate TANF services, limited research is available on the effectiveness of coordinated service delivery on TANF clients’ outcomes. In our analysis of the literature, we did not find a national study that compared the effectiveness of coordinated service delivery to that of other service delivery methods in supporting successful outcomes for welfare clients. Without research on the effectiveness of coordinated service delivery, states and localities must make decisions without the benefit of thorough evaluation and analysis. In general, we found few recent research studies on the coordination of welfare and workforce development services. Although the research is limited, findings from existing research address conditions that promote or inhibit coordination between agencies. Some of the conditions identified in the research as promoting coordination included a history of working together and good working relationships between agency officials. Conditions identified as inhibiting coordination included agency space limitations and different geographic boundaries. All of these conditions are similar to those we found and previously mentioned in this report. 
Research has not shown that there is any one method or model of coordination that works best or that could be consistently applied in all settings. (See appendix I for a listing of reviewed research studies and their relevant findings.) Although limited research focused on welfare and workforce coordination efforts, no study compared the effectiveness of coordinated service delivery to that of other service delivery structures for welfare clients. One study examined outcomes for welfare clients who received services at five one-stop centers in five states, but the study did not compare outcomes of welfare clients receiving services in one-stop centers to those who received them through different delivery structures. Both HHS and Labor have research authority and, since the enactment of TANF and WIA, both have used this authority to encourage various evaluations of policy changes influenced by the legislation. However, federal research efforts on the effectiveness of coordinated service delivery on welfare recipients’ outcomes have been limited. To examine the effectiveness of various employment and training strategies, HHS and Labor are currently co-sponsoring a 5-year experimental study on employment retention and advancement to identify how best to provide post-employment services to the welfare population and which interventions work best in promoting retention and advancement of welfare recipients. Though this study will focus on local areas where services are delivered through one-stops and local areas where services are delivered through other structures, the current study design does not focus on how the different service delivery structures—one-stop centers and welfare agencies—affect the outcomes of welfare recipients. 
In addition, little evaluation of the effects of different service delivery structures on welfare clients’ outcomes has occurred, although Labor’s 2000-2005 research plan identifies research on interventions to assist welfare clients as a high-priority research area. Several challenges—including different program definitions, complex reporting requirements between TANF and WIA, and different information systems that do not share data—inhibit state and local coordination efforts. Although HHS and Labor have each provided some assistance to the states on how to coordinate services, the available guidance has not specifically addressed the challenges that many continue to face. Moreover, HHS and Labor have not addressed differences in program definitions and reporting requirements under TANF and WIA. However, a recent legislative proposal has called for Labor and HHS to jointly address the commonalities or differences in data elements, definitions, performance measures, and reporting requirements between TANF and WIA. Different program definitions and reporting requirements in TANF and WIA constrain the flexibility that states and localities have to coordinate TANF services through one-stop centers. The overall difference in how the success of TANF and WIA is measured, as defined by program definitions and reporting requirements, challenges states and localities in their efforts to coordinate services. As states and localities attempt to coordinate services for TANF clients with one-stop centers, they encounter challenges to harmonizing different program definitions within TANF and WIA. Although both TANF and WIA focus on work, different program definitions—such as what constitutes work or what income level constitutes self-sufficiency—make coordination between the programs difficult. 
While many definitions are established by legislation and cannot be readily changed, a few can be locally determined, and two states we contacted found ways to harmonize their locally determined definitions. For example, Connecticut developed a self-sufficiency standard that could be uniformly applied across TANF and WIA so that both programs could place clients in jobs with similar wage levels. Having one self-sufficiency standard enables welfare and workforce staff to use one process to determine suitable job training programs and identify appropriate jobs. Similarly, one local one-stop center we visited in Arizona worked to accommodate what qualifies as a work activity for TANF clients. At this center, welfare and one-stop officials worked together to develop training for both programs that enabled TANF clients to meet the requirement of a TANF work activity. However, officials in other states reported that definition differences between TANF and WIA programs, including dissimilar self-sufficiency standards, made coordination efforts more difficult. In addition, differences in reporting requirements, resulting from how the success of each program is measured, also hinder coordination efforts. Each program has its own separate measures of success that drive program design and use of funds. While WIA’s performance measures focus on participant outcomes, such as average earnings change and employment retention rates, TANF measures focus on the overall caseload, such as work participation rates and caseload reductions. States can also measure the success of TANF through the use of indicators required for high performance bonus reporting, similar to WIA’s performance measures. But data for the measures are not tracked uniformly across states, the measures are not defined in the same way, and participation in the TANF high performance bonus is voluntary. 
Because the mandatory federal measures for both programs evaluate very different things, officials found that tracking performance for the TANF and WIA programs together was difficult. Consequently, these differences lead to different program designs and hamper state and local ability to coordinate TANF services with one-stop centers. In addition, similar to a finding in our prior report on WIA performance measures, several state officials expressed concern that, when WIA funds were used to serve TANF clients, the reporting requirements could lead one-stop staff to serve only those TANF clients they believed stood a better chance of meeting WIA’s outcome-based performance measures. Welfare and workforce agencies often use different information systems, complicating efforts to coordinate TANF services with one-stop centers. Efforts to increase coordination require greater data sharing across organizations. However, as we reported in the past, some of the systems used by agencies providing services to TANF clients do not readily share data with other systems, hampering the case manager’s ability to deliver services to the client in a timely manner. In some cases, this may mean that data needed to determine what services should be provided to a client are not readily available to the case manager. In other cases, having multiple systems may mean that agency workers have to enter the same data multiple times. In addition, antiquated information systems of both welfare and workforce agencies have made it difficult for agencies to take advantage of new technologies, such as Web-based systems. During our site visits and telephone interviews, some local officials said that they could not merge or share data and were not equipped to collect information on clients in different programs. 
TANF clients are often tracked separately from clients of other programs, and even the One-Stop Operating System (OSOS), funded by Labor, does not allow one-stop centers to include TANF programs. In addition, other officials expressed concerns that sharing data across programs would violate client confidentiality protections. Some states have been able to overcome this challenge to coordination by developing ways to merge data across multiple information systems. As reported in our previous work, we found that many states are extracting and consolidating data from multiple systems in data warehouses and other specialized databases. For example, the agency that administers TANF in Kansas developed a data warehouse to allow one-stop partners to access the data they needed on TANF clients without having to breach clients’ confidentiality. Other localities have created their own information management systems. To compensate for the limitations of OSOS, a New Jersey one-stop opted to use its own system, which allows the center’s staff to manually input all information on any client that is served through any program—including dates, work activities, and outcomes. Though some states have been able to merge information systems, the issues of incompatible computer systems are not easily resolved. Officials from two states we visited said that their states’ TANF and WIA agencies were exploring the development of a shared information system but that cost estimates were too high for it to be implemented at this time. Although TANF is not a mandatory partner in the one-stops under WIA, it is clear that TANF and WIA coordination is increasing, especially at the point of service delivery—the local level. 
It appears that, as the systems have matured and their shared purposes and goals have become evident, many states and localities have found it advantageous to coordinate their TANF and WIA services—linking TANF clients with one-stop centers that are positioned to help them throughout their lifetime, long after they leave time-limited cash assistance. This move toward service coordination is not happening everywhere—it has been left to state and local discretion. Many officials use the flexibility in the programs to coordinate services for TANF clients, but their efforts continue to be hampered by lack of accessible information on state and local coordination efforts and lack of clear research on the effectiveness of coordinated service delivery on TANF clients’ outcomes. Labor and HHS have made efforts to work together to address some of the obstacles that states and localities have faced, but their efforts have not produced clear information on ways to improve coordination for states wishing to do so. And, while some states have been successful at developing strategies to overcome obstacles to coordination, others have not been. Without a mechanism to share successful approaches, states and localities that have met with success in their coordination efforts will remain an untapped resource. The information they could share may help other states and localities struggling in their efforts to design more coordinated service delivery approaches. In addition, though many states and localities have chosen to coordinate welfare and workforce services, research has yet to help state and local decision makers determine whether and how coordinated service delivery can be an effective method for improving TANF clients’ employment success. It is unknown whether promoting coordinated service delivery will result in improved outcomes for TANF clients because limited research exists on this topic. 
Clear research findings would help guide federal, state, and local officials in developing service delivery approaches that work best for TANF clients and make the best use of available resources. To help states more effectively address some of the obstacles to coordination, we recommend that Labor and HHS work together to jointly develop and distribute information on promising approaches for coordinating services for TANF clients through one-stops. To enable states and localities to determine whether coordinated service delivery is the most effective method for improving TANF clients’ employment success, we recommend that Labor and HHS promote research that would examine the role of coordinated service delivery on outcomes of TANF clients. We provided a draft of this report to Labor and HHS for their review and comment. Formal comments from Labor and HHS appear in appendixes II and III, respectively. In addition to the comments discussed below, HHS provided technical comments that we incorporated where appropriate. Labor and HHS generally agreed with our findings and recommendations, and Labor noted that the report, in their opinion, contained an accurate portrayal of the extent of current collaboration between TANF and WIA services. Labor and HHS stated that they support efforts to share promising practices. Labor noted that they have awarded a contract to develop a comprehensive website for this purpose. We are hopeful that, once fully developed, it will be a ready source of information on many promising practices, including the coordination of TANF and WIA services. HHS noted that ongoing research, in which they have both informal and formal linkages with Labor, would likely provide information on successful service delivery models. 
HHS commented that our recommendation to promote research that would examine the role of coordinated service delivery on outcomes of TANF clients could require an experimental research design, which is not compatible with the delivery of human service programs in the real world. We recognize the difficulty in setting up a rigorous comparison, and do not suggest that experimental research design is the only type of research that would fulfill our recommendation. Our recommendation is to have Labor and HHS encourage and support research that focuses a portion of the analysis on how the structure of service delivery affects outcomes for TANF clients. HHS studies have provided some information on the success of different service models in serving TANF clients, and we are hopeful that future research will focus on how service delivery structures affect outcomes for TANF clients. We are sending copies of this report to the Secretaries of HHS and Labor, relevant congressional committees, and others who are interested. Copies will also be made available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. Please contact me on (202) 512-7215 if you or your staff have any questions about this report. Other major contributors to this report are listed in appendix IV.

Grubb, W. Norton, et al. Toward Order from Chaos: State Efforts to Reform Workforce Development Systems. Berkeley, CA: National Center for Research in Vocational Education, 1999. This study began in 1997 and analyzed data obtained from officials interviewed in 10 states and in 2 localities within each of the states. Though findings from this study primarily focused on workforce development reform efforts, the study also addressed factors that promote service coordination and challenges to service coordination. 
Researchers found that good personal relationships among administrators and consistency of efforts over time promoted workforce development reform and service coordination, and that conflicts between the welfare system’s “work first” philosophy and the workforce development system’s education and training philosophy presented a challenge to service coordination. Martinson, Karin. Literature Review on Service Coordination and Integration in the Welfare and Workforce Development Systems. Washington, D.C.: Urban Institute, 1999. This literature review, written by the Urban Institute and released by the Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation, summarized 16 studies released between 1989 and 1998 that addressed service coordination between welfare and workforce systems. The review summarized both barriers to coordination and factors that promoted coordination efforts between welfare and workforce agencies. Barriers to coordination included incompatible management information systems and different program performance measures; factors that promoted coordination included the federal strategy of providing information on successful examples of coordination and the local strategy of documenting and evaluating coordination efforts. The review concluded that studies do not suggest that one method of coordination was consistently successful in bringing together welfare and workforce systems. McIntire, James L., and Amy F. Robins. Fixing to Change: A Best Practices Assessment of One-Stop Job Centers Working With Welfare Recipients. Washington: Fiscal Policy Center, University of Washington, 1999. This study, released by HHS’s Office of the Assistant Secretary for Planning and Evaluation, examined outcomes for TANF clients who received services at five one-stop centers in five states. 
Data collection occurred in 1997, and data analyzed included administrative data and focus group discussions with one-stop management and staff, employers of welfare clients, and both current and former welfare clients. Welfare clients examined were both AFDC clients and TANF clients—depending on the one-stop examined—because data collection occurred during the period of initial TANF implementation. This study found that these five one-stop centers produced partially successful outcomes for welfare clients, as evidenced by employment rates, wage rates, and hours worked. This study did not compare outcomes of welfare clients receiving services at the one-stop centers to outcomes of welfare clients receiving services provided through different delivery structures, such as local welfare agencies or other service providers. Pindus, Nancy, et al. Coordination and Integration of Welfare and Workforce Development Systems. Washington, D.C.: Urban Institute, 2000. This study, released by HHS’s Office of the Assistant Secretary for Planning and Evaluation in 2000 and written by the Urban Institute, examined recent state and local coordination efforts of welfare and workforce agencies. Data analyzed included interviews with officials from TANF and workforce agencies in 12 localities within 6 states that occurred in the summer of 1999. Findings included factors that generally promoted coordination between welfare and workforce agencies and those that created barriers to coordination. In the study, a prior history of coordination between agencies, the availability of flexible funding sources, and other factors were found to promote coordination between welfare and workforce agencies. In contrast, agency space limitations that hindered colocation and different program goals were identified as some of the challenges to coordination. This study concluded that there is not one ideal model, schedule, or set of guidelines that will result in successful service delivery coordination. 
Suzanne Lofhjelm, Natalya Bolshun, Kara Finnegan Irving, and Rachel Weber made significant contributions to this report. In addition, Jessica Botsford and Richard Burkard provided legal support and Corinna Nicolaou provided writing assistance. Workforce Investment Act: Coordination of TANF Services Through One-Stops Has Increased Despite Challenges. GAO-02-739T. Washington, D.C.: May 16, 2002. Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413. Washington, D.C.: April 5, 2002. Workforce Investment Act: Coordination between TANF Programs and One-Stop Centers Is Increasing, but Challenges Remain. GAO-02-500T. Washington, D.C.: March 12, 2002. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Human Services Integration: Results of a GAO Cosponsored Conference on Modernizing Information Systems. GAO-02-121. Washington, D.C.: January 31, 2002. Means-Tested Programs: Determining Financial Eligibility Is Cumbersome and Can Be Simplified. GAO-02-58. Washington, D.C.: November 2, 2001. Workforce Investment Act: New Requirements Create Need for More Guidance. GAO-02-94T. Washington, D.C.: October 4, 2001. Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO-01-368. Washington, D.C.: March 15, 2001. Multiple Employment Training Programs: Overlapping Programs Indicate Need for Closer Examination of Structure. GAO-01-71. Washington, D.C.: October 13, 2000. Welfare Reform: Work-Site Based Activities Can Play an Important Role in TANF Programs. 
GAO/HEHS-00-122. Washington, D.C.: July 28, 2000. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000. Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients. GAO/HEHS-99-22. Washington, D.C.: February 26, 1999.

The 1998 Workforce Investment Act (WIA) required states to provide most federally funded employment-related services through one-stop centers. Two years earlier, welfare reform legislation created the Temporary Assistance for Needy Families (TANF) block grant, which provided flexibility to states to focus on helping needy adults with children find and maintain employment. Nearly all states reported some coordination of their TANF and WIA services at the state or local level, and the use of some of these coordination methods increased between 2000 and 2001. Historical relationships, geographic considerations, adequacy of facilities, and different perspectives on how best to serve TANF clients influenced how states and localities chose to coordinate services with one-stop centers. Several challenges, including program differences between TANF and WIA and different information systems used by welfare and workforce agencies, inhibit state and local coordination efforts. Though some states and localities have found creative ways to work around these issues, the differences remain barriers to coordination for many others.
IRS develops its tax gap estimate by measuring the rate of taxpayer compliance—the degree to which taxpayers fully comply with their tax obligations. IRS uses such compliance data, along with other data and assumptions, to estimate the dollar amount of taxes not paid accurately and on time. For tax year 2001, IRS estimated that from 83.4 percent to 85 percent of owed taxes were paid voluntarily and on time, and that from $312 billion to $353 billion in taxes that should have been paid were not. IRS also estimates the amount of the gross tax gap that it will recover through enforcement and other actions and subtracts that amount to estimate the net annual tax gap. For tax year 2001, IRS estimated that it would eventually recover about $55 billion, for a net tax gap of $257 billion to $298 billion. As we have reported in the past, closing the entire gap may not be feasible since it could entail more intrusive recordkeeping or reporting than the public is willing to accept or more resources than IRS is able to commit. However, given the size of the tax gap, even modest reductions would yield very significant financial benefits. IRS has estimated the tax gap on multiple occasions, beginning in 1979. IRS’s earlier tax gap estimates relied on the Taxpayer Compliance Measurement Program (TCMP), through which IRS periodically performed line-by-line examinations of randomly selected tax returns. TCMP started with tax year 1963 and examined individual returns most frequently—generally every 3 years—through tax year 1988. IRS contacted all taxpayers selected for TCMP studies. IRS did not implement any TCMP studies after 1988 because of concerns about costs and burdens on taxpayers. Under NRP, a program that we have encouraged, IRS recently completed its initial review of about 46,000 randomly selected individual tax returns from tax year 2001 (see app. I for a list of conducted TCMP and NRP surveys). 
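The relationship between the gross and net tax gap figures cited above is simple subtraction. The following sketch is our own illustration, not IRS methodology; the function name and structure are assumed for clarity, and all dollar amounts are in billions, taken from the figures reported in the text.

```python
# Illustrative arithmetic only: how the tax year 2001 figures
# quoted above fit together. Dollar amounts are in billions.

def net_tax_gap(gross_gap: int, expected_recoveries: int) -> int:
    """Net tax gap = gross tax gap minus what IRS expects to
    eventually recover through enforcement and other actions."""
    return gross_gap - expected_recoveries

EXPECTED_RECOVERIES = 55  # IRS's estimated eventual recoveries

# Low and high ends of IRS's gross tax gap range for 2001.
low = net_tax_gap(312, EXPECTED_RECOVERIES)
high = net_tax_gap(353, EXPECTED_RECOVERIES)

print(f"Net tax gap: ${low} billion to ${high} billion")
```

Running the sketch reproduces the $257 billion to $298 billion net range reported for tax year 2001.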
Unlike with TCMP studies, IRS did not need to contact taxpayers for every tax return selected under NRP, handled some taxpayer contacts through correspondence rather than face-to-face examinations, and generally only asked taxpayers to explain information that it was otherwise unable to verify through IRS and third-party databases. In addition, unlike operational examinations, NRP examinations were randomly selected and used to measure compliance rather than target suspected noncompliance. IRS has a strategic planning process through which it supports decisions about strategic goals, program development, and resource allocation. Under GPRA, agencies are to develop strategic plans as the foundation for results-oriented management. GPRA requires that agency strategic plans identify long-term goals, outline strategies to achieve the goals, and describe how program evaluations were used to establish or revise the goals. GPRA requires federal agencies to establish measures to determine the results of their activities. To provide information on the estimated amount that each major type of noncompliance contributed to the 2001 tax gap, we reviewed IRS’s tax gap estimate for 2001. To determine IRS’s views on the certainty of its estimate, we reviewed IRS studies about tax gap estimation and interviewed IRS research officials to understand the data and methodologies used. We also spoke with IRS officials regarding planned changes to the data sources and estimation methodologies for the tax gap estimate. We determined that the tax gap estimates presented in this report are sufficiently reliable for the specific purposes of our engagement, particularly since IRS already has publicly released its tax gap estimates and disclosed their weaknesses. These purposes include discussing the major tax gap components and the orders of magnitude for various components, IRS’s concerns about the certainty of its estimates, and our recommendations on IRS’s compliance data and efforts. 
We reviewed IRS, academic, and our own reports and interviewed IRS officials to identify the various reasons for noncompliance. We talked with IRS officials to determine the extent and reliability of data and coding on the reasons for noncompliance, and reviewed IRS's Examination Operational Automation Database, which is a database of tax return examination results that includes examiners' determinations of the reasons for any noncompliance. We also talked with IRS officials to determine any plans to develop better data on reasons for noncompliance. To determine IRS's approach to reducing the tax gap and whether the approach incorporates established results-oriented planning principles, we reviewed IRS strategic and performance plans and interviewed IRS strategic planning officials at the agency and operating division levels. We asked IRS to identify its key efforts to reduce the tax gap as well as the related rationales, goals, and results. As part of our work on whether the approach incorporates established results-oriented planning principles, we used what we learned about IRS's approach to determine the extent to which it incorporated selected planning principles consistent with GPRA's requirements. For purposes of this review, we focused on elements of results-oriented planning that, previously, we found common to leading organizations successfully pursuing results-oriented management: defining desired results, measuring performance, and using performance information to support agency missions. IRS estimates that underreporting of taxes accounted for about $250 billion to $292 billion of the $312 billion to $353 billion tax gap for 2001, while underpayment and nonfiling accounted for about $32 billion and $30 billion, respectively. The actual tax gap could be higher or lower due to various factors that affect the certainty of the estimate, such as old compliance data. 
IRS is taking some steps designed to improve portions of its compliance measurement efforts and its preliminary tax gap estimate and plans to release a revised tax gap estimate by the end of 2005. While IRS has proposed a schedule for NRP studies over the next several years, IRS has no approved plans to regularly measure tax compliance, which it could use to update the tax gap estimate and guide its compliance efforts. As table 1 indicates, underreporting of individual income taxes represented about half of the tax gap for 2001: IRS attributed about $150 billion to $187 billion of the gross tax gap estimate of $312 billion to $353 billion to individual income tax underreporting, including underreporting of business income, such as sole proprietor, informal supplier, and farm income (about $83 billion to $99 billion); nonbusiness income, such as wages, interest, and capital gains (about $42 billion to $57 billion); overstated income adjustments, deductions, and exemptions (about $14 billion to $16 billion); and overstated credits (about $11 billion to $14 billion). Underreporting of corporate income tax contributed an estimated $30 billion, or about 10 percent, to the 2001 tax gap, which included both small corporations (those reporting assets of $10 million or less) and large corporations (those reporting assets of over $10 million). (For a more detailed table of IRS's estimates for the various components of the 2001 tax gap, see app. II.) Employment tax underreporting accounted for an estimated $66 billion to $71 billion, or about 20 percent, of the 2001 tax gap and included several taxes that must be paid by self-employed individuals and employers. Self-employed individuals are generally required to calculate and remit Social Security and Medicare taxes to the U.S. Treasury each quarter. 
Employers are required to withhold these taxes from their employees’ wages, match these amounts, and remit withholdings to Treasury at least quarterly. Underreported self-employment and employer-withheld employment taxes respectively contributed an estimated $51 billion to $56 billion and $14 billion to IRS’s tax gap estimate. The employment tax underreporting estimate also includes underreporting of federal unemployment taxes (about $1 billion). Although a significant portion of IRS’s new tax gap estimate is based on recent compliance data, IRS has concerns with the certainty of the overall tax gap estimate in part because of incomplete and old data, outdated methodologies, and measurement difficulties. Table 2 shows IRS’s certainty level in the estimates, as well as the underlying data sources. As table 2 shows, IRS’s estimate for the 2001 tax gap does not include estimates of excise tax underreporting and nonfiling. According to IRS, the reason for this omission is that numerous federal excise taxes, many of which have specific exclusions or varying applications, complicate excise tax computations. Further, data on excise tax transactions are typically maintained at the state level and are often incomplete. Also, according to an IRS research official, the estimate does not include corporate income tax and employment tax nonfiling because IRS lacks good, representative data for corporate and employment tax nonfilers. Further, data from IRS’s operational programs to identify nonfilers exclude those whom IRS does not know about and do not include the full tax liability of nonfilers whom IRS has identified. The 2001 tax gap estimate also does not include any estimates for taxes due from illegal source income, as the magnitude of such income is difficult to estimate. Moreover, the federal government seeks to eliminate most illegal activities altogether, rather than derive revenue from these activities. 
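The component figures above can be cross-checked against the gross tax gap range. A simple sketch, using the rounded dollar figures IRS published for tax year 2001; the small discrepancy at the high end reflects rounding in the published numbers.

```python
# Cross-check: underreporting, underpayment, and nonfiling should sum
# to roughly the gross tax gap range (tax year 2001, $ billions).

underreporting = (250, 292)  # low and high ends of IRS's estimate
underpayment = 32
nonfiling = 30

low = underreporting[0] + underpayment + nonfiling
high = underreporting[1] + underpayment + nonfiling
print(low, high)  # 312 354 -- vs. IRS's reported $312B-$353B gross gap
```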
Old data also contribute to IRS’s “weaker” level of certainty for certain segments of the underreporting portion of its 2001 tax gap estimate. For example, IRS used data from the 1970s and 1980s to estimate underreporting of corporate income taxes and employer-withheld employment taxes. For large corporate income tax underreporting, IRS based its estimate on the amount of tax recommended from operational examinations rather than the tax ultimately assessed as part of the total tax liability. According to IRS officials, IRS relies on the amount of tax recommended because it is difficult to determine the true tax liability of large corporations due to complex and ambiguous tax laws that create opportunities for differing interpretations and that complicate the determination. These officials further stated that because these examinations are not randomly selected and are not focused on identifying all tax noncompliance, the estimate produced from the examination data is not representative of the tax gap for all large corporations. They also explained that due to these complexities and the costs and burdens of collecting complete and accurate data, IRS has not systematically measured large corporation tax compliance through statistically valid studies, even though the officials acknowledged that such studies would be useful in estimating the related tax gap. Further, some methodologies IRS used to estimate the tax gap are based on older data and contribute to the uncertainty surrounding the tax gap estimate. For example, because IRS knew that it would not detect all underreporting noncompliance, IRS multiplied the detected amounts of underreporting to help calculate a total estimate for underreported individual income tax. 
IRS officials explained that they used a number of “multipliers,” including one derived from the 1976 TCMP study of individual tax returns, which was before IRS expanded and improved its computer matching programs to better detect various types of underreported income. In addition, IRS estimated individual income tax nonfiling based on the assumption that the relationship between individual income nonfiling and underreporting has been constant since the 1988 TCMP survey was conducted. Finally, it is inherently difficult for IRS to observe and measure some types of underreporting or nonfiling. For example, underreporting of income or nonfiling of tax returns by informal suppliers can be hard for IRS to detect because the tax laws generally do not require third parties to withhold income tax or file information returns on payments made to informal suppliers, as are required with other types of individuals such as wage earners. Similarly, academic studies have discussed the difficulty in tracking cash payments that businesses make to their employees, as businesses may not report these payments to IRS in order to avoid paying employment taxes and employees may not report these payments on their income tax return to avoid paying income taxes. IRS is taking several steps that could improve the preliminary tax gap estimate for tax year 2001. IRS intends to publish a revised tax gap estimate by the end of 2005 based on the results of these steps. For example, IRS officials stated that IRS plans to further analyze the preliminary NRP results in an attempt to improve the certainty of the estimate. NRP is a significant achievement and its data should be valuable in improving IRS operations and for other uses. However, those officials added that because IRS is still assessing the quality of the NRP data, it has not yet finalized the certainty levels for the preliminary estimates for individual income tax and self-employment tax underreporting. 
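The multiplier approach described above can be sketched as follows. The detection rate used here is invented purely for illustration; it is not IRS's actual 1976-derived multiplier.

```python
# Hypothetical illustration of the "multiplier" method: examinations
# detect only part of true underreporting, so detected amounts are
# scaled up by a multiplier equal to 1 / assumed_detection_rate.

def multiplier_estimate(detected_underreporting, assumed_detection_rate):
    if not 0 < assumed_detection_rate <= 1:
        raise ValueError("detection rate must be in (0, 1]")
    return detected_underreporting / assumed_detection_rate

# If examiners detect $40B and are assumed (hypothetically) to find
# one dollar in four of actual underreporting:
print(multiplier_estimate(40, 0.25))  # 160.0
```

The uncertainty IRS describes follows directly from this structure: the estimate is only as good as the assumed detection rate, which in this case dates from the 1970s.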
Likewise, we cannot yet be certain about the quality of the NRP data collected because IRS is still assessing the data. IRS plans to implement three changes to its estimation methodology for its revised tax gap estimate. Although it is too soon to know whether these changes will improve the estimate, IRS expects that the changes will help address known methodological weaknesses. According to IRS, these changes include the following: IRS plans to replace the multiplier it derived in the 1970s and used to estimate individual income tax underreporting. IRS is developing a new methodology, known as detection controlled estimation (DCE). DCE is a regression-based model that will use 2001 NRP data and control for variables that could affect the amount of underreporting detected. IRS plans to develop a new technique as well as replace the data from the 1981 and 1985-1986 University of Michigan surveys to estimate the individual income tax underreporting portion of the tax gap attributable to informal suppliers. IRS intends to update its estimate of individual income tax nonfiling, which is currently based on 1988 nonfiler TCMP data, by using “Exact Match” data provided by the U.S. Census Bureau. Census will match data from its Current Population Survey against the IRS Master Files to identify the extent of nonfiling by individual taxpayers. The Census data to be provided to IRS will be aggregated and not contain information on specific individuals. In addition, IRS research officials are planning a compliance measurement study that will allow IRS to update underreporting estimates involving flow-through entities. This study, which IRS intends to begin in October 2005, would take 2 to 3 years to complete. Because individual taxpayers or corporations may be recipients of income (or losses) from flow-through entities, this study could affect IRS’s underreporting estimates for individual and corporate income tax. 
While these data and methodology updates could improve the tax gap estimates, IRS has no approved plans to periodically collect more and better compliance data over the long term beyond the study of flow-through entities. IRS research officials said that they recently proposed a schedule for additional NRP studies over the next several years. However, these officials said the proposal is under consideration and has not been finalized. IRS has indicated that given its current research priorities, it could not begin another NRP study of individual income tax returns before 2008, at the earliest, and would not complete such a study until at least 2010. According to IRS officials, IRS has not committed to regularly collecting compliance data because of the associated costs and burdens. Taxpayers whose returns are examined through compliance studies such as NRP bear costs in terms of time and money. Also, IRS incurs costs, including direct costs and opportunity costs (or revenue that IRS potentially forgoes by examining randomly selected returns, which are more likely to include returns from compliant taxpayers than returns selected because they are likely to contain noncompliance that would produce additional tax assessments). Regularly measuring compliance can offer many benefits, including helping IRS identify new or growing types of noncompliance, identify changes in tax laws and regulations that may improve compliance, more effectively target examinations of tax returns, understand the effectiveness of its programs to promote and enforce compliance, and determine its resource needs and allocations. For example, by analyzing 1979 and 1982 TCMP data, IRS identified significant noncompliance with the number of dependents claimed on tax returns and justified a legislative change to address the noncompliance. As a result, for tax year 1987, taxpayers claimed about 5 million fewer dependents on their returns than would have been expected without the change in law. 
Tax compliance data are useful outside of IRS as well. Other federal agencies and offices use compliance data for tax policy analysis, revenue estimating, and research. For example, the Department of Commerce’s Bureau of Economic Analysis had used TCMP data to adjust its national income and product accounts. Additionally, state tax authorities have used IRS compliance data to develop state compliance programs and estimate state tax gaps. Also, policy makers in the executive branch and Congress can use the results from compliance measurement studies to help decide on appropriate funding levels for IRS. As we have reported in the past, the longer the time between compliance measurement surveys, the less useful they become given changes in the economy and tax law. According to IRS, without current compliance data, it has limited capability to determine key areas of noncompliance to address and actions to take to maximize the use of its limited resources. For example, the formulas that IRS creates from compliance data to select returns for examination have enabled IRS to focus examination resources on noncompliant returns rather than burdening compliant taxpayers. When IRS updated the formulas in the early 1990s with compliance data from the 1988 TCMP, IRS selected a lower percentage of compliant tax returns for examination. However, after 3 years of using formulas based on the 1988 data, the percentage of compliant tax returns examined increased each year through 1998, placing additional burdens on compliant taxpayers and leaving less time for IRS to examine noncompliant returns that resulted in an additional tax assessment. Historically, IRS has varied how frequently it measured compliance for particular types of taxpayers and taxes. As appendix I shows, the period between measurements of individual income tax reporting compliance, which consistently has accounted for the largest portion of the tax gap, never exceeded 4 years between 1963 and 1988. 
In planning the 2001 NRP to measure individual income tax compliance, IRS envisioned doing the NRP on a 3-year cycle. Appendix I also shows that IRS measured compliance less frequently for other types of taxpayers and taxes, such as for small corporation income taxes, and that IRS never measured compliance for large corporations or for excise taxes. Although regularly measuring tax compliance can be beneficial, how often measurements should be made is a judgment that depends on many potential criteria including (1) the amount that a particular type of noncompliance is thought to contribute to the tax gap, (2) whether IRS has reason to believe that compliance may have changed (e.g., due to tax law changes), and (3) costs, particularly when IRS officials said that resources to conduct operational examinations are already limited. Using these criteria, IRS would likely vary the frequency of compliance measurement studies. Based on these criteria as well as our previous reports, decisions about compliance measurement would also be affected by the following factors. Precision. The costs and benefits of measuring compliance can vary with how precisely IRS wishes to measure compliance to achieve an intended use (e.g., tax gap estimation or examination return selection). Obtaining more precise and more detailed compliance data for more detailed populations of taxpayers or tax issues (e.g., types of income or deductions) would likely be more costly but potentially more useful. Capacity. Each compliance measurement study requires having enough resources such as staffing, training, tools, and systems to capture the data. Regular compliance measurement through smaller efforts targeted at particular types of taxpayers or taxes and sampling designs that collect data across consecutive tax years rather than for one year could help reduce costs and sustain long-term compliance measurement. 
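The precision-versus-cost tradeoff discussed above can be illustrated with a standard margin-of-error calculation. This uses the textbook simple-random-sample formula, not NRP's actual (stratified) sampling design, so the figures are indicative only.

```python
import math

def margin_of_error(p, n, z=1.96):
    # Approximate 95 percent margin of error for an estimated
    # compliance rate p from a simple random sample of n returns.
    return z * math.sqrt(p * (1 - p) / n)

# Larger samples buy precision at higher examination cost.
for n in (1_000, 10_000, 46_000):  # 46,000 ~ the 2001 NRP sample size
    print(n, round(margin_of_error(0.84, n), 4))
```

The pattern is the familiar square-root law: quadrupling the sample only halves the margin of error, which is one reason detailed estimates for narrow taxpayer subpopulations are so much more expensive to obtain.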
Several factors concern IRS about its data on the reasons for noncompliance, which can be unintentional or intentional. Although IRS is developing a system intended to capture better examination data, IRS does not have firm or specific plans to develop better data on the reasons for noncompliance, even though the lack of such data makes it harder to decide whether it should address specific areas of noncompliance through nonenforcement efforts, such as designing clearer forms or publications, or enforcement efforts. IRS has concerns with its data on the unintentional and intentional reasons for noncompliance. Various types of unintentional or intentional reasons could explain why taxpayers fail to comply with the tax laws. Unintentional reasons can include being unaware of recordkeeping requirements, accidentally entering an item on the wrong line of a tax return, or following inaccurate advice from a tax practitioner. Intentional reasons for noncompliance can include intentionally omitting income from a tax return or interpreting vague tax laws to evade tax liability. IRS collects data on the reasons for noncompliance for specific tax issues during its operational examinations of tax returns. In many of these cases, it is difficult for examiners to determine a taxpayer's intent: whether the noncompliance is unintentional or intentional. Unless the evidence clearly points to the reason, the examiner would have to make subjective judgments about why the noncompliance occurred. IRS has a number of other concerns with the data: the database is incomplete because not all examination results, including data on reasons for noncompliance, were being entered into it; and IRS has not tested the adequacy of the controls for data entry or the reliability of the data being collected. 
IRS has found instances where examiners close examinations without assigning a reason for noncompliance or by assigning the same reason to all instances of noncompliance, regardless of the situation. IRS has not trained all examiners to ensure consistent understanding and use of the various codes to indicate the reason for noncompliance. The data do not represent the population of noncompliant taxpayers but rather only those who had their tax returns examined. According to IRS officials, the agency does not have firm or specific plans to develop better data on the reasons for noncompliance. One official explained that IRS decided not to improve the consistency of its current reason data because it is devoting its limited resources to other efforts, such as developing the Examination Desktop Support System (EDSS). Although this system is intended to allow examiners to capture better examination data, specific system features have not yet been identified to improve examiners’ selection of reason codes. IRS officials said that the system could be enhanced in the future to improve the reason data and that they plan to consider such enhancements. As the National Taxpayer Advocate recently testified, data on whether taxpayers are unintentionally or intentionally noncompliant with specific tax provisions are critical to IRS for deciding whether its efforts to address specific areas of noncompliance should focus on nonenforcement activities, such as improved forms or publications, or enforcement activities to pursue intentional noncompliance. For example, taxpayers may unintentionally claim the Earned Income Tax Credit (EITC) because they do not understand the child residency requirements for this credit (i.e., a dependent must live with the taxpayer for more than half of the year). This type of unintentional noncompliance may require IRS to more clearly explain the EITC requirements within related forms and publications. 
However, other taxpayers may file false EITC claims with the intent of evading tax liability, which may suggest a strategy that relies on IRS’s enforcement programs and tools. Similar situations could exist for other tax code provisions. If IRS is to develop better data on the reasons for noncompliance, it will be important for IRS to consider factors in data collection such as the following. Data reliability. To minimize examiner subjectivity and ensure that the data are complete and accurate, IRS would need to refine the reason categories, provide adequate training, establish system and data entry controls, and provide supervisory oversight. Scope. IRS would need to decide whether the reason categories are to be captured for selected types of noncompliance or all types of noncompliance. Examination selection. IRS currently collects reason data annually through hundreds of thousands of operational examinations. IRS also collected reason data through NRP. In the future, IRS would need to decide whether to collect reason data (1) during all operational examinations, (2) for a statistical sample of operational examinations, or (3) for examinations performed through periodic compliance studies such as NRP. Collecting data for a sample of examinations or through periodic compliance studies might be done with a smaller cadre of examiners specially trained and overseen to maximize consistency of decisions about the reasons why taxpayers are noncompliant. Also, data from samples of examinations could be used to generalize reasons for noncompliance for all examinations, and data from compliance studies of all taxpayers could be used to generalize these reasons for the population of taxpayers. Our past reports have supported the concept of rigorously researching the causes of noncompliance. Recognizing the benefits of better compliance data, the National Taxpayer Advocate has also urged IRS to consider performing additional research into causes of noncompliance. 
IRS approaches tax gap reduction through improving service to taxpayers and enforcing tax laws and has established two broad strategic goals and related key efforts that are intended to support this approach. However, IRS has not established long-term, quantitative compliance goals and regularly collected data to track progress in reducing the tax gap, which would complement its current important compliance efforts. Establishing clear compliance goals and measuring progress towards them benefits both IRS and external stakeholders and is consistent with the results-oriented performance management principles set forth in GPRA. Although IRS has lacked such data in the past and faces other challenges, NRP and EITC data provide an improved base for setting compliance goals and reexamining existing programs intended to reduce the tax gap. IRS's overall approach to reducing the tax gap consists of improving service to taxpayers and enhancing enforcement of the tax laws. Through efforts such as education and outreach programs, IRS seeks to improve voluntary compliance with the tax system by helping people understand their tax obligations. In addition, IRS attempts to simplify the tax process, such as by revising forms and publications to make them more easily understood by diverse taxpayer communities and electronically accessible. In conjunction with taxpayer service, IRS uses its enforcement authority to ensure that taxpayers are reporting and paying the proper amount of taxes. Through efforts such as examining tax returns and collaborating with state governments to share leads on abusive tax avoidance transactions, IRS seeks to detect and deter noncompliance. Two of IRS's three strategic goals, along with their associated objectives and strategies, are intended to directly support this approach. Goal 1—Improve Taxpayer Service—is intended to promote voluntary compliance. 
This goal has three objectives: (1) improve service options for the taxpaying public, (2) facilitate participation in the tax system by all sectors of the public, and (3) simplify the tax process. Goal 2—Enhance Enforcement of the Tax Law—is intended to ensure, through IRS's enforcement authority, that taxpayers are meeting their tax obligations. The four objectives for this goal are (1) discourage and deter noncompliance with emphasis on corrosive activity by corporations, high-income individual taxpayers, and other contributors to the tax gap; (2) ensure that attorneys, accountants, and other tax practitioners adhere to professional standards and follow the law; (3) detect and deter domestic and off-shore-based tax and financial criminal activity; and (4) deter abuse within tax-exempt and governmental entities and misuse of such entities by third parties for tax avoidance or other unintended purposes. To achieve these objectives, IRS has 15 strategies, such as "re-examine and adjust audit processes to target likely areas of noncompliance." In addition to these goals, IRS's service and enforcement efforts outlined in its strategic plan are also intended to support tax gap reduction. IRS's strategic plan mentions over 60 service and enforcement efforts targeted at improving taxpayer compliance. Because the plan did not prioritize these efforts, we asked IRS officials to identify the key efforts in reducing the tax gap. In response, IRS provided over 40 key efforts. Enforcement efforts included pursuing high-income nonfilers (taxpayers with income over $100,000 who have not filed a tax return) through direct enforcement actions and identifying higher priority collection cases through analytical models. Service, or nonenforcement, efforts included a taxpayer education program on tip reporting. (See app. III for a summary of the key efforts provided.) IRS has developed a strategic planning and budgeting process to help the agency comply with GPRA requirements. 
However, IRS’s strategies for improving compliance generally lack a clear focus on long-term, quantitative goals and results measurement. IRS has established broad qualitative goals and strategies for improving taxpayer service and enhancing enforcement of the tax laws. IRS has also identified measures, such as compliance rates for tax reporting, filing, and payment as well as the percentage of Americans who think it is acceptable to cheat on their taxes, which are intended to gauge the progress of its strategies toward its broad goals. However, IRS does not collect recent data to update all of these compliance measures and has not established quantitative goals against which to compare the measures and judge any progress made through its compliance strategies. Although IRS has not focused on quantitative, results-oriented goals for improving voluntary compliance, IRS has established many output-related goals and measures that track activity level, such as the number of taxpayers contacted, collection cases closed, or returns examined. In contrast, IRS has fewer outcome-related goals and measures that track results, such as refund timeliness or examination quality. In the past, IRS had set a long-term goal of improving overall compliance to 90 percent by 2001. This goal was to be achieved through a research approach rooted in IRS’s Compliance 2000 philosophy. The Compliance 2000 philosophy envisioned using nonenforcement efforts to correct unintentional noncompliance and reserving enforcement efforts for intentional noncompliance. To carry out this philosophy, in the early 1990s, IRS initiated many research projects across IRS’s 63 district offices to identify noncompliant market segments, root causes for the noncompliance, and innovative ways to improve compliance. However, the lack of objective compliance data, among other factors, limited the success of this approach. 
Recently, external stakeholders, such as the IRS Oversight Board, have supported the concept of setting a numeric, long-term goal for increasing the voluntary compliance rate. In response to a President's Management Agenda initiative to better integrate budget and performance information, IRS officials said that they are considering various long-term goals for the agency. IRS has not yet set a time frame for publicly releasing the goals. Nor have IRS officials indicated whether any goals will be related to improving taxpayer compliance or whether they will be quantitative and results-oriented. Focusing on outcome-oriented goals and establishing measures to assess the actual results, effects, or impact of a program or activity compared to its intended purpose can help agencies improve performance and stakeholders determine whether programs have produced desired results. As such, long-term, quantitative compliance goals offer several benefits for IRS, as discussed below. Perhaps most important, compliance goals coupled with periodic measurements of compliance levels would provide IRS with a better basis for determining to what extent its various service and enforcement efforts contribute to compliance. Additionally, long-term, quantitative goals may help IRS consider new strategies to improve compliance, especially since these strategies could take several years to implement. For example, IRS's progress toward the goal of having 80 percent of all individual tax returns electronically filed by 2007 has required enhancement of its technology, development of software to support electronic filing, education of taxpayers and practitioners, and other steps that could not be completed in a short time frame. 
Focusing on intended results can also promote strategic and disciplined management decisions that are more likely to be effective because managers who use fact-based performance analysis are better able to target areas most in need of improvement and select appropriate interventions. Likewise, agency accountability can be enhanced when both agency management and external stakeholders such as Congress can assess an agency’s progress toward meeting its goals. Finally, setting long-term, quantitative goals would be consistent with results-oriented management principles that are associated with high-performing organizations and incorporated into the statutory management framework Congress has adopted through GPRA.

Not unlike other agencies we have reported on in the past, IRS faces challenges in implementing a results-oriented management approach, such as identifying and collecting the necessary data to make informed judgments about what goals to set and to subsequently measure its progress in reaching its goals. However, having completed the NRP review of income underreporting by individuals, IRS now has an improved foundation for setting goals for improving taxpayers’ compliance.

IRS’s effort to address noncompliance with the EITC provides an example of how a more data-driven planning approach can help IRS become more results-oriented over time. IRS’s most recent EITC compliance study estimated that between $8.5 billion and $9.9 billion, or between 27 percent and 32 percent, respectively, of the EITC claims filed for tax year 1999 should not have been paid. Following the release of this study, a task force of IRS and Treasury officials determined the three leading types of errors that accounted for about $7 billion annually in overclaims. On the basis of compliance data and other research, IRS started an initiative to improve service, fairness, and compliance and designed specific corrective actions targeting the three types of errors.
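The study’s paired dollar and percentage figures can be cross-checked with simple arithmetic: dollars overclaimed divided by the overclaim rate implies the size of the total EITC claims base. A minimal sketch of that back-of-the-envelope check (the variable names and the check itself are ours, not IRS’s):

```python
# Back-of-the-envelope check of the EITC compliance study figures cited above.
# Overclaimed dollars / overclaim rate implies the total claims base.
low_overclaims, high_overclaims = 8.5e9, 9.9e9   # dollars that should not have been paid
low_rate, high_rate = 0.27, 0.32                 # corresponding overclaim rates

implied_base_low = low_overclaims / low_rate     # roughly $31.5 billion
implied_base_high = high_overclaims / high_rate  # roughly $30.9 billion
```

Both endpoints imply a total claims base of about $31 billion for tax year 1999, so the dollar and percentage figures are mutually consistent.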
IRS is evaluating these actions to determine their effectiveness at reducing the overclaim rate in each of the three errors. Because IRS targeted its EITC effort based on data on the sources and extent of taxpayer errors, it was better able to determine what actions to take and how well, using systematic data collection and program evaluation, the effort is meeting its intended purpose.

Measuring progress toward any goals that may be set could be challenging. For example, IRS researchers have found it difficult to determine the extent to which its enforcement actions deter noncompliance or its services improve compliance among taxpayers who want to comply. Although widespread agreement exists that IRS enforcement programs generally increase voluntary tax compliance, challenges such as collecting reliable compliance data, developing reasonable assumptions about taxpayer behavior, and accounting for factors outside of IRS’s actions that can affect taxpayer compliance, such as changes in tax law, make it difficult to estimate the effect of IRS’s enforcement and service activities. Even if IRS is unable to empirically estimate the extent to which its actions directly affected compliance rates, periodic measurements of compliance levels can indicate the extent to which compliance is improving or declining and provide a basis for reexamining existing programs and triggering corrective actions if necessary.

Recently, several research studies have offered insights to better understand the direct tax revenue effects of IRS’s activities as well as the indirect effects on voluntary tax compliance. IRS researchers have hypothesized that the indirect effect of an examination varies among taxpayer segments. Further, a recent study concluded that criminal investigations have positive direct and indirect tax effects.
Although these studies generally indicate that IRS activities have positive tax effects, the magnitude of these effects is not yet known with a high level of confidence given compliance measurement challenges, as mentioned earlier. According to IRS, these studies serve as a valuable baseline for further research, but it has not yet determined how it will use these studies to make operational decisions.

As discussed in our recent testimony on the tax gap before the Senate Committee on Finance, and underscored by IRS, periodic tax compliance measurement is critically important to IRS’s ability to estimate the tax gap and design compliance programs intended to reduce the tax gap. Without current, reliable compliance data, it can be difficult for IRS to monitor trends or identify new types of noncompliance, determine its compliance resource needs and how to allocate such resources, and justify budget and staffing requests to policy makers in Congress and the executive branch.

Consequently, completion of NRP, which covered the largest portion of the tax gap and was designed and implemented with an eye to reducing the costs and burdens of data collection, is a substantial achievement. However, although IRS has recently proposed a schedule for future NRP studies, it has no approved plans to repeat this study or periodically measure compliance across the various components of the tax gap. Doing periodic compliance studies in areas that have previously been measured, such as individual income tax underreporting, would provide valuable information to support a more data-driven and risk-based approach towards improving compliance and reducing the tax gap. Although it may not be feasible or necessary to measure compliance for all components of the tax gap at the same frequency or with the same level of investment, where practical methodologies exist, periodic measurements should be taken.
Where practical methodologies do not yet exist, such as for excise tax or for large corporations, looking for ways to overcome compliance measurement difficulties would be worthwhile.

The tax gap is a measure both of the burden and frustration of taxpayers who want to comply but are tripped up by tax code complexity and of willful tax cheating by a minority who do not wish to pay their fair share to support government programs. As such, collecting data on the reasons why noncompliance occurs can help IRS more effectively tailor its efforts to improve compliance. It can be difficult for IRS examiners to consistently determine the reasons why taxpayers have failed to comply with the tax laws. However, IRS has no specific plans to address this issue and, as a result, is missing opportunities to gather better data than it already collects. Certain immediate steps, like improving reason codes, better training of examiners in applying the codes, and possibly reducing the number of examiners who would be responsible for making judgments on the reasons taxpayers are noncompliant, may improve the data IRS currently collects. Nevertheless, given the difficulty of consistently determining why taxpayers are noncompliant, sustained research on these reasons also would be needed to develop a better understanding.

Reducing the tax gap will be a challenging task given persistent levels of noncompliance and will not likely be achieved through a single solution. Rather, the tax gap must be attacked on multiple fronts and with multiple strategies over a sustained period of time. Without long-term, quantitative voluntary compliance goals and related performance measures, it will be more difficult for IRS to determine the success of its strategies, adjust its approach when necessary, and remain focused on results, especially since factors that affect compliance change over time.
Having compliance goals, coupled with recently collected NRP data, would provide a solid base upon which IRS can develop a more strategic, results-oriented approach to reducing the tax gap. Taken together, these steps—periodically measuring compliance, determining the reason taxpayers are noncompliant, and setting results-oriented long-term goals—can help IRS build a foundation to understand how its taxpayer service and enforcement efforts affect compliance, improve its efforts, and make progress on reducing the tax gap.

To establish a stronger foundation for improving IRS’s efforts to reduce the tax gap, the Commissioner of Internal Revenue should do the following.

Develop plans to periodically measure tax compliance for areas that have been previously measured, such as for individual income tax underreporting, and study ways to cost effectively measure compliance for other components of the tax gap that have not been measured, such as for excise tax and large corporations. Those plans and that study should take into account risk management factors such as the amount the component contributes to the gap, changes that may have affected compliance levels since a measurement was last taken, and the cost of measuring compliance.

Take steps to ensure that IRS regularly collects complete, accurate, and consistent data, to the extent possible, on the reasons taxpayers are noncompliant and that sufficient broader research is undertaken to continue learning about the reasons why noncompliance occurs.

Establish a long-term, quantitative voluntary compliance goal for individual income tax underreporting and for tax underpayment, as well as for other areas of noncompliance as data become available.

The Commissioner of Internal Revenue provided written comments on a draft of this report in a letter dated July 6, 2005, which is reprinted in appendix IV. In the letter, the Commissioner agreed with our recommendations.
In response to the recommendation that IRS develop plans to periodically measure tax compliance, the Commissioner recognized the need for and value of developing and regularly updating compliance measures for various taxpayer populations and said that IRS will continue to consult with stakeholders to develop and refine its compliance measurement plans. In response to our recommendation that IRS take steps to regularly collect complete, accurate, and consistent data on the reasons for noncompliance, the Commissioner agreed that a better understanding of taxpayer noncompliant behavior would be useful in shaping strategic priorities and defining efforts to improve compliance. He further said that the operating divisions will continue to partner with the IRS research community to identify and better understand specific reasons for noncompliance and that IRS will ensure that auditors are trained to properly apply reason codes in the new report-writing system IRS is developing. In response to the recommendation that IRS develop long-term quantitative compliance goals, the Commissioner agreed with the concept of developing such goals and discussed factors that make goal-setting challenging. We appreciate IRS’s current actions related to our recommendations and recognize the challenges involved in balancing a number of complex issues related to obtaining and using tax compliance data.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Minority Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/.
If you or your staff have any questions, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V.

The following table summarizes the Internal Revenue Service’s (IRS) efforts to measure voluntary compliance using TCMP surveys and the National Research Program (NRP) survey of individual income tax returns for tax year 2001. Years provided for individual income tax surveys refer to tax years. Years provided for surveys for all other types of tax refer to return processing years.

The following table shows estimates for the various portions of the preliminary 2001 tax gap, the sources, including the age, of the data the Internal Revenue Service (IRS) used for these estimates, IRS’s level of certainty for each estimate, and areas for which IRS could not develop an estimate because of insufficient data.

The Internal Revenue Service’s (IRS) strategic plan outlines, but does not prioritize, service and enforcement efforts to improve compliance. Therefore, we asked IRS officials to identify IRS’s key efforts to reduce the tax gap. IRS’s divisions provided lists that totaled 47 efforts, which are described in the following examples.

The Small Business/Self-Employed Division identified 15 efforts such as models to identify higher priority collection cases to pursue, a computer matching program to identify underreported income, initiatives on high income nonfilers, attempts to improve tip income reporting, and efforts to identify abusive tax avoidance transactions.

The Wage and Investment Division identified 7 efforts including various initiatives on tax collection, Earned Income Tax Credit, and using private contractors to collect certain types of tax debts.
The Large and Mid-Sized Business Division identified 5 efforts such as identifying compliance risks, starting examinations sooner and doing them faster, and improving the treatment of abusive tax avoidance transactions.

The Tax Exempt and Government Entities Division identified 8 efforts including abusive tax avoidance transactions in employee plans, abuses in tax-exempt bond financing, pension plan noncompliance, and abuses by credit counseling organizations.

The Criminal Investigation Division identified 12 efforts including those involving questionable refunds, nonfilers, employment tax evasion, corporation fraud, and offshore abusive tax schemes.

In addition to the contact named above, Jeff Arkin, Ralph Block, Elizabeth Curda, Elizabeth Fan, Evan Gilman, Shannon Groff, George Guttman, Michael Rose, Sam Scrutchins, and Tom Short made key contributions to this report.

According to the Internal Revenue Service (IRS), a gap arises each year between what taxpayers pay accurately and on time in taxes and what they should pay under the law. The tax gap is composed of underreporting of tax liabilities on tax returns, underpaying of taxes due from filed returns, and nonfiling of required tax returns altogether or on time. GAO was asked to provide information on (1) the estimated amount that each major type of noncompliance contributed to the 2001 tax gap and IRS's views on the certainty of its tax gap estimates, (2) reasons why noncompliance occurs, and (3) IRS's approach to reducing the tax gap and whether the approach incorporates established results-oriented planning principles.

IRS estimates that underreporting of taxes accounted for about $250 billion to $292 billion of the $312 billion to $353 billion tax gap for 2001, while underpayment and nonfiling accounted for about $32 billion and $30 billion, respectively. Although IRS has collected recent compliance data, it still has concerns with some outdated methodologies and data used to estimate the tax gap.
IRS is taking laudable steps intended to improve the estimate, which it plans to revise by the end of 2005. IRS has also developed a proposed schedule of compliance studies, but it has no approved plans to periodically measure compliance for the tax gap components. While it may not be feasible or necessary to measure compliance for all components at the same frequency or level of investment, periodic compliance studies would support a more data-driven and risk-based approach to reducing the tax gap.

IRS recently began to capture data on the reasons why taxpayers are noncompliant. However, IRS has concerns about the data, such as examiners assigning the same reason for noncompliance regardless of situation. Also, it is often difficult for examiners to determine a taxpayer's intent--whether the noncompliance is unintentional or intentional. Collecting better data on reasons can help IRS focus its activities on taxpayer service or enforcement. Although IRS is developing a system intended to capture better examination data, IRS does not have firm or specific plans to develop better reason data.

IRS approaches tax gap reduction through improving taxpayer service and enforcing tax laws and has two broad strategic goals and related key efforts that are intended to support this approach. However, IRS has not established long-term, quantitative compliance goals and regularly collected data to track its progress, which would complement its current, important compliance efforts. Establishing clear goals and measuring progress towards them would be consistent with results-oriented management principles. IRS has begun to consider additional goals, but it is not yet clear if they will be compliance related. Long-term, quantitative compliance goals, coupled with updated compliance data, would provide a solid base upon which to develop a more strategic, results-oriented approach to reducing the tax gap.
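The tax gap component estimates above can be sanity-checked by summing them against the quoted totals; because the figures are rounded to the nearest billion, the upper bound matches only within rounding. A hedged sketch of that check (the variable names are ours, not IRS's):

```python
# Rough consistency check of the preliminary 2001 tax gap figures cited above.
# All figures are rounded estimates, so sums match the quoted range only
# approximately at the upper end.
underreporting_low, underreporting_high = 250e9, 292e9
underpayment = 32e9
nonfiling = 30e9

low = underreporting_low + underpayment + nonfiling    # 312e9, matching the quoted lower bound
high = underreporting_high + underpayment + nonfiling  # 354e9, vs. the quoted 353e9 (rounding)
```

The lower bound sums exactly to the quoted $312 billion; the upper bound differs from the quoted $353 billion by about $1 billion, consistent with component-level rounding.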
Systems engineering and test and evaluation are critical parts of the weapon system acquisition process, and how well these activities are conducted early in the acquisition cycle can greatly affect program outcomes. Systems engineering translates customer needs into specific product requirements for which requisite technological, software, engineering, and production capabilities can be identified through requirements analysis, design, and testing. Early systems engineering provides the knowledge that weapon system requirements are achievable with available resources such as technologies, time, people, and money. It allows a product developer to identify and resolve performance and resource gaps before product development begins by reducing requirements, deferring them to the future, or increasing the estimated cost for the weapon system’s development. Systems engineering plays a fundamental role in the establishment of the business case for a weapon acquisition program by providing information to DOD officials to make tradeoffs between requirements and resources. Systems engineering is then applied throughout the acquisition process to manage the engineering and technical risk in designing, developing, and producing a weapon system. The systems engineering processes should be applied prior to the start of a new weapon acquisition program and then continuously throughout the life-cycle.

Test and evaluation provides information about the capabilities of a weapon system and can assist in managing program risk. There are generally two broad categories of testing: developmental and operational. Developmental testing is used to verify the status of technical progress, substantiate achievement of contract technical performance, and certify readiness for initial operational testing.
Early developmental testing reduces program risks by evaluating performance at progressively higher component and subsystem levels, thus allowing program officials to identify problems early in the acquisition process. Developmental testing officials in the Office of the Secretary of Defense and the military services provide guidance and assistance to program managers on how to develop sound test plans. The amount of developmental testing actually conducted, however, is controlled by the program manager and the testing requirements explicitly specified in the development contract.

In contrast, operational testing determines if a weapon system provides operationally useful capability to the warfighter. It involves field testing a weapon system, under realistic conditions, to determine the effectiveness and suitability of the weapon for use in combat by military users, and the evaluation of the results of such tests. DOD’s Director of Operational Test and Evaluation conducts independent assessments of programs and reports the results to the Secretary of Defense and Congress.

In 2008, the Defense Science Board reported that operational testing over the previous 10 years showed that there had been a dramatic increase in the number of weapon systems that did not meet their suitability requirements. The board found that failure rates were caused by several factors, notably the lack of a disciplined systems engineering process early in development and a robust reliability growth program. The board also found that weaknesses in developmental testing, acquisition workforce reductions and retirements, limited government oversight, increased complexity of emerging weapon systems, and increased reliance on commercial standards (in lieu of military specifications and standards) all contributed to these failure rates.
For example, over the last 15 years, all service acquisition and test organizations experienced significant personnel cuts, including the loss of many of their most experienced technical and management personnel and subject matter experts, without an adequate replacement pipeline. The services now rely heavily on contractors to help support these activities.

Over the past two decades, the prominence of the developmental testing and systems engineering communities within the Office of the Secretary of Defense has continuously evolved, as the following examples illustrate.

In 1992, a systems engineering directorate did not exist and the developmental test function was part of the Office of the Director of Test and Evaluation, which reported directly to the Under Secretary of Defense for Acquisition. At that time, the director had direct access to the Under Secretary on an array of issues related to test policy, test assets, and the workforce.

In 1994, the Development Test, Systems Engineering and Evaluation office was formed. This organization effectively expanded the responsibilities of the former testing organization to formally include systems engineering. The organization had two deputy directors: the Deputy Director, Development Test and Evaluation, and the Deputy Director, Systems Engineering. This organization was dissolved in 1999.

From 1999 to 2006, systems engineering and developmental testing responsibilities were aligned under a variety of offices. The responsibility for managing test ranges and resources, for example, was transferred to the Director of Operational Test and Evaluation. This function was later moved to the Test Resource Management Center, which reports directly to AT&L, where it remains today.

In 2004, a Director of Systems Engineering was re-established and then in 2006 this became the System and Software Engineering Directorate. Developmental testing activities were part of this directorate’s responsibilities.
As a result, systems engineering and developmental testing issues were reported indirectly to AT&L through the Deputy Under Secretary for Acquisition and Technology.

Congress passed the Weapon Systems Acquisition Reform Act of 2009 (Reform Act)—the latest in a series of congressional actions taken to strengthen the defense acquisition system. The Reform Act establishes a Director of Systems Engineering and a Director of Developmental Test and Evaluation within the Office of the Secretary of Defense and defines the responsibilities of both offices. The Reform Act requires the services to develop, implement, and report on their plans for ensuring that systems engineering and developmental testing functions are adequately staffed to meet the Reform Act requirements. In addition, it requires the directors to report to Congress on March 31 of each year on military service and major defense acquisition program systems engineering and developmental testing activities from the previous year. For example, the report is to include a discussion of the extent to which major defense acquisition programs are fulfilling the objectives of their systems engineering and developmental test and evaluation master plans, as well as provide an assessment of the department’s organization and capabilities to perform these activities. Figure 1 shows some of the major reorganizations over the past two decades, including the most recent change where DOD decided to place the two new directors’ offices under the Director of Defense Research and Engineering.

DOD has made progress in implementing the systems engineering and developmental test and evaluation provisions of the Reform Act, but has not yet developed performance criteria that would help assess the effectiveness of the changes. Some requirements, such as the establishment of the two new offices, have been fully implemented.
The implementation of other requirements, such as the review and approval of systems engineering and developmental test and evaluation plans, has begun but requires sustained efforts. The department has not fully implemented other requirements. For example, DOD has begun development of joint guidance that will identify measurable performance criteria to be included in the systems engineering and developmental testing plans. DOD initially decided that one discretionary provision of the act—naming the Director of Developmental Test and Evaluation also as the Director of the Test Resource Management Center—would not be implemented. However, the Director of Defense Research and Engineering is currently examining the implications of this organizational change. It will be several years before the full impact of the Reform Act provisions is known.

The offices of the Director of Systems Engineering and Developmental Test and Evaluation were officially established by the Under Secretary of Defense for AT&L in June 2009 to be his principal advisors on systems engineering and developmental testing matters. The directors took office 3 months and 9 months later, respectively, and are working on obtaining the funding, workforce, and office space needed to accomplish their responsibilities. The directors have also completed evaluations of the military services’ organizations and capabilities for conducting systems engineering and developmental testing, and identified areas for improvement. These evaluations were based on reports provided by the services that were also required by the Reform Act. As shown in table 1, many of the requirements that have been implemented will require ongoing efforts.
The directors have the responsibility for reviewing and approving systems engineering and developmental test and evaluation plans as well as the ongoing responsibility to monitor the systems engineering and developmental test and evaluation activities of major defense acquisition programs. During fiscal year 2009, the Director of Systems Engineering reviewed 22 systems engineering plans and approved 16, while the Director of Developmental Test and Evaluation reviewed and approved 25 developmental test and evaluation plans within the test and evaluation master plans. Both offices are monitoring and reviewing activities on a number of major acquisition programs, including the Virginia Class Submarine, the Stryker Family of Vehicles, and the C-130 Avionics Modernization Program. Once their offices are fully staffed, the directors plan to increase efforts in reviewing and approving applicable planning documents and monitoring the activities of about 200 major defense acquisition and information system programs.

Evaluations of 42 weapon systems were included in the directors’ first annual joint report to Congress. The individual systems engineering program assessments were consistent in that they typically included information on 10 areas, including requirements, critical technologies, technical risks, reliability, integration, and manufacturing. In some cases, the assessments also included an overall evaluation of whether the program was low, medium, or high risk; the reasons why; and a general discussion of recommendations or efforts the director has made to help program officials reduce any identified risk. Examples include the following.

In an operational test readiness assessment of the EA-18G aircraft, the Director of Systems Engineering found multiple moderate-level risks related to software, communications, and mission planning and made recommendations to reduce the risks.
The program acted on the risks and recommendations identified in the assessment and delayed the start of initial operational testing by 6 weeks to implement the fixes. It has completed initial operational testing and was found to be effective and suitable by Navy testers. The Director of Operational Test and Evaluation rated the system effective but not suitable, and stated that follow-on testing has been scheduled to verify correction of noted deficiencies. The program received approval to enter full rate production and is rated as a low risk in the joint annual report.

The systems engineering assessment of the Global Hawk program rated it high risk pending the determination of actual system capability; it also stated that there is a high probability that the system will fail operational testing. The assessment cited numerous issues, including questions regarding the system’s ability to meet mission reliability requirements, poor system availability, and the impact of simultaneous weapon system block builds (concurrency). Despite the director’s concerns and efforts to help the program office develop a reliability growth plan for Global Hawk, no program funding has been allocated to support reliability improvements.

The Expeditionary Fighting Vehicle assessment did not include an overall evaluation of risk. The assessment noted that the program was on track to meet the reliability key performance parameter of 43.5 hours mean time between operational mission failure. Problems related to meeting this and other reliability requirements were a primary reason why the program was restructured in 2007. However, the assessment did not address the high degree of concurrency between development and production, which will result in a commitment to fund 96 low-rate initial procurement vehicles prior to demonstrating that the vehicle can meet the reliability threshold value at initial operational test and evaluation, currently scheduled for completion by September 2016.
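The 43.5-hour parameter cited above is a mean time between operational mission failure: total operating time divided by the number of operational mission failures observed. A minimal illustrative sketch of that computation (the function name and sample numbers are hypothetical, not drawn from program data):

```python
def mean_time_between_failures(operating_hours: float, mission_failures: int) -> float:
    """Mean time between operational mission failures (MTBOMF):
    total operating hours divided by the count of failures observed."""
    if mission_failures <= 0:
        raise ValueError("at least one failure is needed to compute a mean interval")
    return operating_hours / mission_failures

# Illustrative only: 870 test hours with 20 operational mission failures
# would exactly demonstrate the 43.5-hour threshold cited for the program.
demonstrated = mean_time_between_failures(870.0, 20)
```

A demonstrated value at or above the 43.5-hour threshold during initial operational test and evaluation would indicate the requirement was met; fewer hours or more failures would fall short.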
Developmental testing assessments covered fewer programs and were not as structured as those provided by the systems engineering office in that there were no standard categories of information that were included in each assessment. Part of the reason is that the Director of Developmental Test and Evaluation’s office was just developing the necessary expertise to review and provide formal assessments of programs. For the programs that were reviewed, the assessments included a status of developmental testing activities on programs and in some cases an assessment of whether the program was low, medium, or high risk. For example, the Director of Developmental Test and Evaluation supported an assessment of operational test readiness for the C-5 Reliability Enhancement and Reengining Program. The assessment stated that due to incomplete testing and technical issues found in developmental testing, there is a high risk of failure in operational testing. The assessment recommended that the program resolve these issues before beginning operational testing.

The Reform Act also requires that the Director of Systems Engineering develop policies and guidance on, among other things, the use of systems engineering principles and best practices and the Director of Developmental Test and Evaluation develop policies and guidance on, among other things, the conduct of developmental testing within DOD. The directors have issued some additional policies to date, such as expanded guidance on addressing reliability and availability on weapon programs and on incorporating test requirements in acquisition contracts. The directors plan to update current guidance and issue additional guidance in the future. According to DOD officials, there are over 25 existing documents that provide policy and guidance for systems engineering and developmental testing.
The directors also have an ongoing responsibility to advocate for and support their respective DOD acquisition workforce career fields, and have begun examining the training and education needs of these workforces. Two provisions, one of which is discretionary, have not been completed. The Reform Act requires that the directors, in coordination with the newly established office of the Director for Program Assessments and Root Cause Analysis, issue joint guidance on the development of detailed, measurable performance criteria that major acquisition programs should include in their systems engineering and testing plans. The performance criteria would be used to track and measure the achievement of specific performance objectives for these programs, giving decision makers a clearer understanding of each program’s performance and progress. The offices have begun efforts to develop these policies and guidance, but specific completion dates have not been identified. At this time, it is unclear whether the guidance will include specific performance criteria that should be consistently tracked on programs and any risks associated with these programs, such as ones related to technology maturity, design stability, manufacturing readiness, concurrency of development and production activities, prototyping, and adequacy of program resources. Finally, the Reform Act gives DOD the option of permitting the Director of Developmental Test and Evaluation to serve as the Director of the Test Resource Management Center. DOD initially decided not to exercise this option. However, the Director of Defense Research and Engineering recently stated that his organization is examining the possibility of consolidating the offices.
The director stated that it makes sense to combine the two offices because it would merge test oversight and test resource responsibilities under one organization, but the ultimate decision will be based on whether there are any legal obstacles to combining the two offices. While most of the Reform Act’s requirements focus on activities within the Office of the Secretary of Defense, the military services are ultimately responsible for ensuring that their weapon systems start off with strong foundations. To that end, in November 2009, the services, in reports to the Directors of Systems Engineering and Developmental Test and Evaluation, identified plans for ensuring that appropriate resources are available for conducting systems engineering and developmental testing activities. The individual reports also highlighted management initiatives undertaken to strengthen early weapon acquisition activities. For example, the Army is establishing a center at Aberdeen Proving Ground that will focus on improving reliability growth guidance, standards, methods, and training for Army acquisition programs. The Navy has developed criteria, including major milestone reviews and other gate reviews, to assess the “health” of testing and evaluation at various points in the acquisition process. The Air Force has undertaken an initiative to strengthen requirements setting, systems engineering, and developmental testing activities prior to the start of a new acquisition program. Air Force officials believe this particular initiative will meet the development planning requirements of the Reform Act. Experts provided different viewpoints on the proper placement of the new systems engineering and developmental test and evaluation offices, with some expressing concern that as currently placed, the offices will wield little more power or influence than they had prior to the passage of the Reform Act. 
According to the Director of Defense Research and Engineering, the Under Secretary of Defense for AT&L placed the new offices under his organization because the department wanted to put additional emphasis on systems engineering and developmental testing prior to the start of a weapons acquisition program. The director believes this is already occurring and that both offices will continue to have a strong relationship with acquisition programs even though they do not report directly to an organization with significant involvement with major defense acquisition programs. However, many current and former DOD systems engineering and developmental testing officials we spoke with believe the offices should be closely linked to weapon acquisition programs because most of their activities are related to those programs. Similarly, the Defense Science Board recommended that a developmental testing office be established and report directly to an organization that has significant involvement with major defense acquisition programs. In addition, officials we spoke with believe several other significant challenges, including those related to staffing and the culture of the Defense Research and Engineering organization, are already negatively affecting the offices’ effectiveness. DOD has not established any performance criteria that would help gauge the success of the new directors’ offices, making it difficult to determine if the offices are properly aligned within the department or if the Reform Act is having an impact on program outcomes. After the passage of the Reform Act, DOD considered several options on where to place the new offices of the Director of Systems Engineering and Director of Developmental Test and Evaluation. According to an official who helped evaluate potential alternatives, DOD could have aligned the offices under AT&L in several different ways (see fig. 2). 
For example, the offices could have reported directly to the Under Secretary of AT&L or indirectly to the Under Secretary of AT&L through either the Assistant Secretary of Defense (Acquisition) or the Director of Defense Research and Engineering. DOD decided to place the offices under the Director of Defense Research and Engineering, an organization that previously focused primarily on science and technology issues. The Director of Defense Research and Engineering is aware of the challenges of placing the offices under an organization whose primary mission is to develop and transition technologies to acquisition programs, but believes that the current placement makes sense given congressional and DOD desires to place more emphasis on activities prior to the start of a new acquisition program. He stated that the addition of systems engineering and developmental testing not only stretches the role and mission of his organization, but also strengthens the organization’s role in acquisitions because it gives the organization’s research staff another point of view in thinking about future technologies and systems. He plans for the offices to perform both assessment and advisory activities, including:
- providing risk assessments of acquisition programs for the Defense Acquisition Board;
- continuing to help programs succeed by providing technical insight and assisting the programs in the development of the systems engineering plan and the test and evaluation master plan; and
- educating and assisting researchers to think through new concepts or technologies using systems engineering to inform fielding and transition strategies.
According to the Director of Defense Research and Engineering, the offices are already performing some of these functions. For example, the new directors have provided technical input to the Defense Acquisition Board on various weapons programs.
The director stated that the systems engineering organization is reviewing manufacturing processes and contractor manufacturing readiness for weapons programs such as the Joint Strike Fighter. In addition, a developmental testing official stated that the office is assisting the Director of Defense Research and Engineering Research Directorate in conducting technology readiness assessments and helping programs identify the trade space for testing requirements while reviewing the test and evaluation master plan. The director believes the value of having the offices perform both assessment and advisory activities is that they can look across the acquisition organization and identify programs that are succeeding from a cost, schedule, and performance perspective and identify common threads or trends that enable a program to succeed. Conversely, they could identify common factors that make programs fail. The Director of Defense Research and Engineering identified three challenges that he is trying to address in order for systems engineering and developmental testing to have a more positive influence on weapon system outcomes. First, the director would like to improve the technical depth of the systems engineering and developmental testing offices. Both functions have atrophied over the years and need to be revitalized. This will require the offices to find highly qualified people to fill the positions, which will not be easy. Second, the director wants to improve the way the Defense Research and Engineering organization engages with other DOD organizations that are involved in weapon system acquisition. The director noted that there are a lot of players and processes involved in weapon acquisition and that the systems engineering office can play a large role in facilitating greater interaction. Third, the director would like the Defense Research and Engineering organization to find better ways to shape, engage with, contract with, and get information from the defense industrial base.
In addition to the three challenges, it will also be difficult to determine whether the two new offices are having a positive impact on weapon system outcomes. The Directors of Systems Engineering and Developmental Test and Evaluation are not reporting the number of recommendations implemented by program managers or the impact the recommendations have had on weapon programs, which would allow senior leaders to gauge the success of the two offices. This type of information could help the Under Secretary of AT&L determine if the offices need to be placed under a different organization, if the offices need to place more emphasis on advisory or assessment activities, and if the Reform Act is having an impact on program outcomes. The vast majority of current and former DOD systems engineering and test officials we spoke with were opposed to the placement of the offices under the Director of Defense Research and Engineering. Their chief concern is that the mission of the Director of Defense Research and Engineering organization is primarily focused on developing new technologies and transitioning those technologies to acquisition programs. While they recognize that the systems engineering and developmental testing offices need to be involved in activities prior to the official start of a new weapons program, they believe the offices’ expertise should be focused on helping DOD acquisition programs establish doable requirements given the current state of technologies, not on the technologies themselves. Therefore, they believe the offices would be more appropriately placed under the newly established offices of the Principal Deputy Under Secretary of Defense for AT&L or the Assistant Secretary of Defense for Acquisition, whose missions are more closely aligned with acquisition programs. 
Some officials we spoke with believe that a cultural change involving the focus and emphasis of the office of the Director of Defense Research and Engineering will have to take place in order for that organization to fully support its role in overseeing acquisition programs and improving the prominence of the two new offices within the department. However, these same officials believe that this cultural change is not likely to occur and that the Director of Defense Research and Engineering will continue to focus primarily on developing and transitioning new technologies to weapon programs. Therefore, the offices may not get sufficient support and resources or have the clout within DOD to effect change. One former systems engineering official pointed out that the historic association of systems engineering with the Director of Defense Research and Engineering does not bode well for the systems engineering office. Based upon his experience, the Director of Defense Research and Engineering’s focus and priorities resulted in a fundamental change in philosophy for the systems engineering mission, the virtual elimination of a comprehensive focus on program oversight or independent identification of technical risk, and a reduction in systems engineering resources. In short, he found that the Director of Defense Research and Engineering consistently focused on science and technology, in accordance with the organization’s charter, with systems engineering being an afterthought. Likewise, current and former developmental testing officials are concerned about the Director of Defense Research and Engineering’s support for developmental testing activities. They identified several staffing issues that they believe are key indicators of a lack of support. 
First, they pointed out that it took almost 9 months from the time the Director of Developmental Test and Evaluation office was established before a new director was in place, compared to 3 months to place the Director of Systems Engineering. If developmental testing were a priority, officials believe that the Director of Defense Research and Engineering should have filled the position earlier. Second, test officials believe the Director of Developmental Test and Evaluation office needs to have about the same number of staff as the offices of the Director of Systems Engineering and the Director of Operational Test and Evaluation. According to officials, DOD currently plans to have about 70 people involved with developmental testing activities, 180 people for systems engineering, and 250 for operational testing. However, testing officials believe the offices should be roughly the same size given that developmental testing will cover the same number of programs as systems engineering and operational testing and that roughly 80 percent of all testing activities are related to developmental tests, with the remaining 20 percent being for operational tests. Third, even though the Director of Developmental Test and Evaluation expects the office to grow to about 70 people by the end of fiscal year 2011, currently there are 30 people on board. The director believes there are a sufficient number of qualified people seeking positions and therefore the office could be ramped up more quickly. Finally, the Director of Developmental Test and Evaluation stated that his office has only one senior-level executive currently on staff who reports to him and that there are no plans to hire more for the 70-person organization. The director believes it is crucial that the organization have more senior-level officials because of the clout they carry in the department.
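The testing officials’ size argument reduces to simple arithmetic: if combined test staffing tracked the reported 80/20 developmental-to-operational workload split, the developmental testing office would be far larger than planned. The sketch below contrasts the planned headcounts cited in the report with a purely hypothetical workload-proportional allocation (the reallocation is illustrative, not a DOD plan).

```python
# Hedged illustration of the testing officials' staffing argument. Planned
# headcounts come from the report; the proportional reallocation is purely
# hypothetical and assumes staffing should track workload share.

planned = {
    "developmental_test": 70,    # planned Director of Developmental T&E staff
    "systems_engineering": 180,  # planned Director of Systems Engineering staff
    "operational_test": 250,     # planned Director of Operational T&E staff
}

test_staff_total = planned["developmental_test"] + planned["operational_test"]

# Reallocate the combined test staff in proportion to the reported 80/20 split
# between developmental and operational testing activity.
proportional_dt = round(test_staff_total * 0.80)
proportional_ot = test_staff_total - proportional_dt

print(f"Planned:      DT={planned['developmental_test']:>3}, OT={planned['operational_test']:>3}")
print(f"Proportional: DT={proportional_dt:>3}, OT={proportional_ot:>3}")
```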
The director believes that the lack of an adequate number of senior executives in the office weakens its ability to work effectively with or influence decisions made by other DOD organizations. Further, officials from other testing organizations, as well as the systems engineering office, indicated they have two or more senior executive-level employees. A May 2008 Defense Science Board report, which was focused on how DOD could rebuild its developmental testing activities, recommended that developmental testing be an independent office that reports directly to the Deputy Under Secretary of Defense (Acquisition and Technology). At that time, according to the report, there was no office within the Office of the Secretary of Defense with comprehensive developmental testing oversight responsibility, authority, or staff to coordinate with operational testing. In addition, the existing residual organizations lacked the clout to provide development test guidance and developmental testing was not considered to be a key element in AT&L system acquisition oversight. According to the study director, placing the developmental testing office under the Director of Defense Research and Engineering does not adequately position the new office to perform the oversight of acquisition programs. The military services, the Directors of Systems Engineering and Developmental Test and Evaluation, and we have identified a number of workforce and resource challenges that the military services will need to address to strengthen their systems engineering and developmental testing activities. For example, it is unclear whether the services have enough people to perform both systems engineering and developmental testing activities. Even though the services reported to the directors that they have enough people, they do not have accurate information on the number of people performing these activities. 
The Director of Developmental Test and Evaluation disagreed with the services’ assertions, but did not know how many additional people are needed. Service officials have also expressed concern about the department’s ability to train individuals who do not meet requisite certification requirements on a timely basis and being able to obtain additional resources to improve test facilities. The military services were required by the Reform Act to report on their plans to ensure that they have an adequate number of trained systems engineering and developmental testing personnel and to identify additional authorities or resources needed to attract, develop, train, and reward their staff. In November 2009, the military services submitted their reports to the respective directors within the Office of the Secretary of Defense on their findings. In general, the services concluded that even with some recruiting and retention challenges, they have an adequate number of personnel to conduct both systems engineering and developmental testing activities (see table 2 below). According to service officials, this determination was based on the fact that no program offices identified a need for additional staffing to complete these activities. The reports also stated the services generally have sufficient authorities to attract and retain their workforce. In DOD’s first annual joint report to Congress, the Director of Developmental Test and Evaluation did not agree with the military services’ assertion that they have enough staff to perform the full range of developmental testing activities. The director does not know how many more personnel are needed, but indicated that the office plans to work with the services to identify additional workforce needs. The Director of Systems Engineering agreed with the services’ reports that they have adequate staffing to support systems engineering activities required by current policy. 
According to the director, this was based on the 35,000 current personnel identified in the Systems Planning, Research, Development, and Engineering workforce—a generic workforce category that includes systems engineering activities—as well as the services’ plans to hire over 2,500 additional personnel into this same workforce category over the next several years. Although not clearly articulated in the services’ reports, military service officials acknowledged that the personnel data in their reports may not be entirely accurate. For example, officials believe the systems engineering numbers identified in table 2 overstate the number of people actually performing systems engineering activities because that particular career field classification is a generic category that includes all types of engineers. The developmental test workforce shown in the table also does not completely reflect the number of people who actually perform developmental testing activities because the information provided by the military services identifies only personnel in the test and evaluation career field. Service officials told us that many other people performing these activities are identified in other career fields. The Director of Developmental Test and Evaluation believes these other people may not be properly certified and that, in the case of contractors, they do not possess certifications equivalent to the certification requirements for government personnel. This director plans to request another report from the services in fiscal year 2010. That report will address overall workforce data, covering current staffing assigned to early test and evaluation activities; training and certification concerns related to in-sourcing staff; rapid acquisition resource plans; and infrastructure needs for emerging technologies. The Director of Systems Engineering does not intend to request another report from the services.
Nevertheless, each of the military services plans to increase its systems engineering workforce over the next several years. The exact number of personnel is uncertain because the services’ hiring projections relate to a general engineering personnel classification, not a specific systems engineering career field. The directors also identified challenges they believe the services will face in strengthening systems engineering and developmental testing activities. The Director of Systems Engineering pointed out that the services need to put greater emphasis on development planning activities, as called for by the Reform Act. The services are currently conducting these activities to some extent, but the director believes a more robust and consistent approach is needed. The Director of Developmental Test and Evaluation highlighted two other challenges facing the military services. First, the director would like to increase the number of government employees performing test and evaluation activities. The services experienced significant personnel cuts in these areas in the mid-1990s and have had to rely on contractors to perform the work. DOD’s joint report to Congress noted that the Air Force in particular relies heavily on prime contractor evaluations and that this approach could lead to test results that are inaccurate, misleading, or not qualified, resulting, in turn, in premature fielding decisions, since prime contractors would not be giving impartial evaluations of results. The director believes there are a number of inherently governmental test and evaluation functions that produce a more impartial evaluation of results and that a desired end state would be one where there is an appropriate amount of government and contractor testing. Second, the director is concerned that DOD does not have the capacity to train and certify an estimated 800 individuals expected to be converted from contractor to government employees within the required time frame.
While most of the contractors are expected to have some level of training and experience performing test activities, they probably will not meet certifications required of government employees because they have not had the same access to DOD training. In addition to those challenges recognized by the directors, we have identified other challenges we believe the services may face in implementing more robust systems engineering and developmental testing, including the following:
- According to the military services, they plan to meet hiring targets primarily through the conversion of contractors who are already performing those activities, but do not have plans in place to ensure that they have the right mixture of staff and expertise both now and in the future. DOD officials acknowledge that they do not know the demographics of the contractor workforce. However, they believe many contractors are retired military with prior systems engineering experience. Therefore, while the services may be able to meet short-term needs, there could be a challenge in meeting long-term workforce needs.
- Army test officials indicated that they have experienced a significant increase in their developmental testing workload since the terrorist attacks of September 2001, with no corresponding increase in staffing. As a result, personnel at their test ranges are working longer hours and extra shifts, which testing officials are concerned may affect their retention rates.
- Army officials also indicated that test ranges are deteriorating more quickly than expected and they may not have the appropriate funding to upgrade and repair the facilities and instrumentation. Test personnel are often operating in obsolete and outdated facilities that cannot meet test requirements, resulting in safety issues, potential damage to equipment, and degraded quality of life.
- DOD’s increased emphasis on fielding rapid acquisition systems may require the services to tailor their approach to systems engineering.
According to an Air Force official, efforts that normally take months to complete for a more traditional acquisition program have to be completed in a matter of weeks for rapid acquisition programs. DOD efforts to implement Reform Act requirements are progressing, but it will take some time before the results of these efforts can be evaluated. Current and former systems engineering and developmental testing officials offer compelling insights concerning the placement of the new directors’ offices under the Office of the Director of Defense Research and Engineering, but it is still too soon to judge how effective the offices will be at influencing outcomes on acquisition programs. The current placement of the offices may present several challenges that could hinder their ability to effectively oversee weapon system acquisition programs and ensure that risks are identified, discussed, and addressed prior to the start of a new program or the start of operational testing. Foremost among these potential challenges is the ability of the Director of Defense Research and Engineering to change the focus of the organization to effectively assimilate the roles and missions of the two new offices and then ensure that the offices are properly staffed and have the appropriate number of senior leaders. The mission of the office of the Director of Defense Research and Engineering has been to develop technology for weapon programs; its focus has not been to manage the technical aspects of weapon system acquisition programs. Ultimately, the real proof of whether an organization outside of the major defense acquisition program arena can influence acquisition program decisions and outcomes should be based on results. The directors’ offices have started to assess and report on the systems engineering and developmental testing activities on some of the major defense acquisition programs.
They have also made recommendations and worked with program officials to help reduce risks on programs such as the EA-18G, Global Hawk, and the C-5 Reliability Enhancement and Reengining programs. However, guidance on the development and tracking of performance criteria that would provide an indication of how much risk is associated with a particular weapon system—such as those related to technology maturity, design stability, manufacturing readiness, concurrency of development and production activities, prototyping, and adequacy of program resources—has yet to be developed. Further, the directors are not reporting to Congress on the extent to which programs are implementing recommendations and the impact recommendations are having on weapon programs, which would provide some insight as to the impact the two offices are having on acquisition programs. Although not required by the Reform Act, this type of information could be useful for Congress to gauge the effectiveness of the directors’ offices. The military services, which face increasing demands to develop and field more reliable weapon systems in shorter time frames, may need additional resources and training to ensure that adequate developmental testing and systems engineering activities are taking place. However, DOD’s first joint annual report to Congress, which was supposed to assess the department’s organization and capabilities for performing systems engineering and developmental testing activities, did not clearly identify the workforce performing these activities, future workforce needs, or specific hiring plans. In addition, DOD’s strategy to provide the necessary training within the required time period to the large number of staff it plans to hire is unclear. Therefore, workforce and training gaps are unknown. 
In order to determine the effectiveness of the newly established offices, we recommend that the Secretary of Defense direct the Directors of Systems Engineering and Developmental Test and Evaluation to take the following five actions:
- Ensure development and implementation of performance criteria for systems engineering plans and developmental test and evaluation master plans, such as those related to technology maturity, design stability, manufacturing readiness, concurrency of development and production activities, prototyping, and the adequacy of program resources.
- Track the extent to which program offices are adopting systems engineering and developmental testing recommendations.
- Work with the services to determine the appropriate number of government personnel needed to perform the scope of systems engineering and developmental testing activities.
- Develop plans for addressing the training needs of the new hires and contractors who are expected to be converted to government personnel.
- Report to Congress on the status of these efforts in future joint annual reports required by the Reform Act.
DOD provided us with written comments on a draft of this report. DOD concurred with each of the recommendations, as revised in response to agency comments. DOD’s comments appear in appendix I. Based upon a discussion with DOD officials during the agency comment period, we revised the first recommendation. Specifically, instead of recommending that the Directors of Systems Engineering and Developmental Test and Evaluation develop a comprehensive set of performance criteria that would help assess program risk, as stated in the draft report, we now recommend that the directors ensure the development and implementation of performance criteria for systems engineering plans and developmental test and evaluation master plans.
The wording change clarifies the nature and scope of performance criteria covered by our recommendation and is consistent with Reform Act language that requires the directors to develop guidance on the development of detailed, measurable performance criteria that major acquisition programs should include in their systems engineering and developmental testing plans. According to DOD officials, the military services are then responsible for developing the specific criteria that would be used on their respective programs. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, and interested congressional committees. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or sullivanm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Bruce Thomas, Assistant Director; Cheryl Andrew; Rae Ann Sapp; Megan Hill; and Kristine Hassinger. | In May 2009, Congress passed the Weapon Systems Acquisition Reform Act of 2009 (Reform Act). The Reform Act contains a number of systems engineering and developmental testing requirements that are aimed at helping weapon programs establish a solid foundation from the start of development. GAO was asked to examine (1) DOD's progress in implementing the systems engineering and developmental testing requirements, (2) views on the alignment of the offices of the Directors of Systems Engineering and Developmental Test and Evaluation, and (3) challenges in strengthening systems engineering and developmental testing activities. 
In conducting this work, GAO analyzed implementation status documentation and obtained opinions from current and former DOD systems engineering and testing officials on the placement of the two offices as well as improvement challenges. DOD has implemented or is implementing the Reform Act requirements related to systems engineering and developmental testing. Several foundational steps have been completed. For example, new offices have been established, directors have been appointed for both offices, and the directors have issued a joint report that assesses their respective workforce capabilities and 42 major defense acquisition programs. Many other requirements that have been implemented will require sustained efforts by the directors' offices, such as approving systems engineering and developmental testing plans, as well as reviewing these efforts on specific weapon programs. DOD is studying the option of allowing the Director, Developmental Test and Evaluation, to serve concurrently as the Director of the Test Resource Management Center. The directors have not yet developed joint guidance for assessing and tracking acquisition program performance of systems engineering and developmental testing activities. It is unclear whether the guidance will include specific performance criteria that address long-standing problems and program risks, such as those related to concurrency of development and production activities and adequacy of program resources. Current and former systems engineering and developmental testing officials offered varying opinions on whether the new directors' offices should have been placed under the Director of Defense Research and Engineering organization--an organization that focuses primarily on developing and transitioning technologies to acquisition programs. 
The Director of Defense Research and Engineering believes aligning the offices under his organization helps address congressional and DOD desires to increase emphasis on and strengthen activities prior to the start of a new acquisition program. Most of the officials GAO spoke with believe the two offices should report directly to the Under Secretary for Acquisition, Technology and Logistics or otherwise be more closely aligned with acquisition programs because most of their activities are related to weapon programs. They also believe cultural barriers and staffing issues may limit the effectiveness of the two offices under the current organizational structure. Currently, DOD is not reporting to Congress on how successfully the directors are effecting program changes, making it difficult to determine if the current placement of the offices makes sense or if the Reform Act is having an impact. The military services face a number of challenges as they try to strengthen systems engineering and developmental testing activities on acquisition programs. Although the services believe they have enough staff to perform both of these activities, they have not been able to clearly identify the number of staff that are actually involved. The Director of Developmental Test and Evaluation does not believe the military services have enough testing personnel and is concerned that DOD does not have the capacity to train the large influx of contractors that are expected to be converted to government employees.
The Drug-Free Media Campaign Act of 1998, 21 U.S.C. 1801 et seq., required the Office of National Drug Control Policy to conduct a national media campaign to reduce and prevent drug abuse among America’s youth. The act specified certain uses of funds provided for the media campaign to include (1) the purchase of media time and space; (2) out-of-pocket advertising production costs; (3) testing and evaluation of advertising; (4) evaluation of effectiveness; (5) partnerships with community, civic, and professional groups and with government organizations; (6) collaboration with the entertainment industry to incorporate anti-drug messages in movies, television, Internet media projects, and public information; (7) news media outreach; and (8) corporate sponsorship and participation, among other uses. The act also mandated a matching requirement. To implement this requirement, ONDCP developed a pro bono match program requiring media vendors who sell advertising time or space to the media campaign to provide (1) an equivalent amount of free public service time or space or (2) an equivalent in-kind contribution. Congress has appropriated over $1 billion for ONDCP’s media campaign since it was initiated in 1998. However, the media campaign’s annual appropriations have declined since Congress initially funded the program. ONDCP’s 2005 appropriation provides $120 million for the media campaign, which represents a $25 million decline from the 2004 appropriation and a $75 million decline from the first-year funding in 1998. The media campaign employs an iterative three-phase advertising development and research process. The first phase, the exploratory research phase, occurs before advertisements are created. 
For example, before developing the “Monitoring/Love” advertisement series—a message targeting parents, promoting awareness of their children’s whereabouts—extensive research was conducted to help ad creators understand methods of communicating effectively with parents of teens. The second phase consists of creating advertisements and subjecting them to research and expert review. For example, in the “Monitoring/Love” series, focus groups were used to assess parents’ reactions to a set of advertising concepts. The concepts were subsequently revised in response to the feedback. Once the concepts were approved by ONDCP, the actual advertisements were produced and tested for effectiveness. The third and final phase begins after the advertisements have been determined to meet ONDCP’s effectiveness standards and involves the strategic placement of the advertisements in television, radio, and print media. For example, the “Monitoring/Love” series advertisements were aired during television shows and radio programs most popular with the target audience, the parents of teens. This phase also involves measuring the effectiveness of specific advertisements over time within target audiences. See figure 1 for a depiction of the three-phase process. Appendix II provides a more detailed description of the campaign’s advertising development and research process. ONDCP uses advertising contractors to supplement its in-house capabilities regarding the development, production, and placement of paid advertisements on television, radio, print, and the Internet. The media campaign also used a contractor to provide assistance with public communications and outreach for the campaign, for example, encouraging the entertainment industry to portray the negative consequences of drug use in movies and television. 
In addition to developing advertisements and conducting public outreach, ONDCP is required to assess whether the media campaign’s efforts have been effective in changing American youths’ behavior regarding drug use. During fiscal years 2002 through 2004, ONDCP used four prime contractors with varying responsibilities to carry out the campaign’s requisite tasks: Ogilvy & Mather, The Advertising Council, Inc. (The Ad Council), Fleishman-Hillard, Inc. (Fleishman-Hillard), and Westat, Inc. (Westat). These contractors used funds from their contracts to secure additional specialized expertise from subcontractors. During fiscal years 2002 through 2004, the four major prime contractors were responsible for a variety of services that generally fall into three broad categories—advertising, public communications and outreach, and evaluation. According to our analysis, an estimated $520 million was awarded to the prime contractors, of which an estimated $373 million—72 percent—was committed to purchasing media time and space for advertisements. The remaining $147 million—28 percent—was for the services provided by the prime contractors. Tasks associated with advertising and advertisement development were performed by prime contractors Ogilvy & Mather and the Ad Council. Ogilvy & Mather was responsible for managing the creative development and production of advertising that is targeted toward changing drug beliefs and behaviors among America’s youth and parents. More specifically, Ogilvy & Mather’s tasks included (1) media planning, placement, and purchase; (2) qualitative and quantitative research for advertising creation; and (3) advertising assessment and review. The total estimated amount awarded to Ogilvy & Mather for these services was about $97 million. 
The Ad Council was responsible for implementing several specific aspects of the advertising component of the media campaign, including (1) overseeing the use of media match space and time for public service announcements that are not part of the media campaign, (2) creating and managing a community-based anti-drug strategy advertising campaign, and (3) administering reviews of media campaign advertisement production costs. The total estimated amount awarded to the Ad Council for these services was about $5 million. The purpose of public communications and outreach, which was implemented by Fleishman-Hillard, was to extend the reach and influence of the campaign through nonadvertising forms of marketing communications. To achieve this end, Fleishman-Hillard’s tasks included (1) conducting media outreach—for example, submitting articles relating to key campaign messages such as effective parenting or the effects of marijuana on teen health to newspapers and magazines; (2) building partnerships and alliances—for example, coordinating positive activities for teens with local school and community groups; (3) creating Web sites and exploring other alternative media approaches—for example, designing and hosting message-oriented Web sites such as theantidrug.com; and (4) entertainment industry outreach—for example, encouraging the entertainment industry to portray the negative consequences of drug use in movies and television. The total estimated amount awarded to Fleishman-Hillard for these services was about $27 million. To evaluate the effects of the campaign, ONDCP entered into an interagency agreement with the National Institute on Drug Abuse (NIDA). NIDA, in turn, contracted with Westat to design, develop, and implement an evaluation of the outcome and impact of the media campaign in reducing illegal drug use among youth. 
To accomplish this, Westat designed a multiphase study to measure the attitudes and behavior of critical target audiences—preteens, teenagers, and parents. The total estimated amount awarded to Westat for these services was about $18 million. To fulfill their responsibilities, the prime contractors retained the expertise and services of 102 subcontractors for approximately $14 million. Table 1 shows the estimated award amounts for subcontractors during fiscal years 2002 through 2004. Ogilvy & Mather retained 20 subcontractors for nearly $5 million to provide two types of services: (1) multicultural media planning and buying agencies and (2) substance use behavioral change experts, who constituted the Behavioral Change Expert Panel (BCEP). The multicultural subcontractors received more than $4 million (about 90 percent of the nearly $5 million awarded by Ogilvy & Mather to subcontractors) for providing marketing services and strategies with regard to specific minority audiences. For example, one subcontractor, Bromley Communications, was responsible for strategically purchasing media time and space for advertisements targeting Hispanic parents and youth. Bromley Communications also provided advice on how to develop effective advertising for Hispanic audiences. The BCEP received less than $500,000 (about 10 percent of the $5 million awarded by Ogilvy & Mather to subcontractors) for applying behavioral science expertise to several aspects of the campaign. For example, one behavioral change expert provided consulting services related to developing drug use prevention messages targeted to parents by reviewing advertising concepts and recommending revisions to enhance effectiveness. See appendix III for a more comprehensive description of these services. The Ad Council retained one subcontractor, Madison Advertising Management, LTD., (MAM), to provide advertising production cost review services for about $636,000. 
MAM was responsible for tracking, analyzing, and managing estimates and invoices detailing the production costs for media campaign advertisements to ensure that production costs were reasonable and adhered to ONDCP guidelines. MAM’s goals were to work with the pro bono advertising agencies, their production companies, ONDCP, The Partnership for a Drug-Free America (PDFA), and the Ad Council to minimize production costs without infringing on the creative process and to maximize the cost efficiency of the media campaign. Fleishman-Hillard awarded about $8 million of its total contract award to 80 subcontractors for public communications and outreach services. These subcontractors provided a wide range of services, including photography and video services, research services, Internet technology services, and an assortment of speaker and panelist services. See appendix IV for a complete description of all services provided by Fleishman-Hillard subcontractors and the associated award amounts for these services. Of the estimated $8 million awarded by Fleishman-Hillard to subcontractors, the vast majority—89 percent—went to 14 subcontractors that provided campaign message promotion services. These services were designed to extend the reach and influence of the media campaign beyond the paid advertisements by using a variety of marketing techniques to publicize the media campaign’s anti-drug messages. For example, Rogers & Associates was responsible for promoting the campaign’s message by encouraging the entertainment industry to incorporate specific media campaign messages—such as the negative consequences of drug use—into television show and movie plots. 
Another campaign message promotion subcontractor, Campbell & Company, was responsible for using its social marketing and public health experience to conduct public outreach to the African American community—for example, developing partnerships with school and community organizations to lend credibility to and extend the reach of the media campaign. Westat retained one subcontractor—the Annenberg School of Communication at the University of Pennsylvania (Annenberg)—for an estimated $785,000. Although Annenberg was responsible for providing overall support to Westat with respect to the entire scope of work detailed in the prime contract, it was specifically directed to provide particular support for the following six tasks: (1) project management, (2) development of the campaign evaluation plan, (3) instrument development, (4) data analysis and report generation, (5) preparation of contract reports, and (6) modification of the campaign evaluation plan. To determine the full range of subcontractor services, we reviewed the agreements between the prime contractors and their 102 subcontractors. From our analysis, we identified 16 distinct categories of services. Table 2 contains definitions and examples for each category. We provided a draft of this report to the Director of the Office of National Drug Control Policy for comment. In a March 14, 2005, letter, the Director commented on the draft. His written response is presented in its entirety in appendix V. In its comments, ONDCP generally agreed with our report’s findings, and we incorporated its technical comments where appropriate. At the same time, ONDCP expressed some concerns about our definition of consulting services as it had done throughout our review. Specifically, ONDCP argued that the “common use of the term” defines consultants as providing advice only, not services. 
As discussed with ONDCP officials throughout this review, we defined “consultants” as the prime contractors and their subcontractors that provided services, including expert advice, to implement the media campaign. Although the Senate committee report that mandated our review did not define the term “consultants,” through our consultations and its previous hearings, the committee expressed concerns about the use of contractors and their subcontractors for the media campaign. We used our definition of consultants to comprehensively account for how campaign funds were being used to supplement ONDCP’s in-house capabilities regarding the advertising, public communications and outreach, and evaluation aspects of the media campaign. ONDCP also commented on a footnote in appendix IV of this report, which cites a GAO appropriations law decision holding that ONDCP violated publicity or propaganda prohibitions and the Anti-Deficiency Act when it used appropriated funds to produce several prepackaged news stories which failed to disclose that ONDCP produced them for video news releases (VNRs) used in the media campaign. ONDCP commented that it has not produced a VNR since well before May 19, 2004, when GAO issued its first decision, B-302710, on VNRs and prepackaged news stories. ONDCP also said that it has no further plans to produce any VNRs, stating that GAO’s guidance on prepackaged news stories provided in our Circular Letter, B-304272, February 17, 2005, is “inherently incompatible with contemporary news gathering methods, thus rendering VNRs impracticable.” However, the guidance in the Circular Letter addresses the lack of attribution in prepackaged news stories, which are only one part of VNRs. 
The Circular Letter advises agencies that prepackaged news stories can be utilized without violating the law, so long as there is clear disclosure to the television viewing audience that this material was prepared by or in cooperation with the government department or agency. We are sending copies of this report to the Director, Office of National Drug Control Policy, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on this report, please call Glenn Davis at (202) 512-4301 or me at (202) 512-8777. Our review of contractor services and contract award amounts associated with the Office of National Drug Control Policy’s (ONDCP) National Youth Anti-Drug Media Campaign covered fiscal years 2002 through 2004. To develop background critical to describing and evaluating key aspects of the campaign, we conducted our work at the headquarters of ONDCP, The Partnership for a Drug-Free America (PDFA), and media campaign prime contractors in Washington, D.C., and New York City. We reviewed the legislation authorizing the campaign—The Drug-Free Media Campaign Act of 1998—and subsequently enacted campaign legislation, as well as reports, testimony, interagency agreements, contracts, subcontracts, invoices, and vouchers. In addition, to obtain information on the media campaign process, we interviewed officials from ONDCP and PDFA. We also interviewed officials from the four prime contractors: Ogilvy & Mather, Fleishman-Hillard, the Ad Council, and Westat. To supplement our understanding of some of the kinds of services provided by subcontractors, we also interviewed officials from three of the subcontractors. In addition, we reviewed guidelines, reports, and other background documents relevant to the media campaign process provided by the officials we interviewed. 
Finally, we reviewed the contracts between the prime contractors and ONDCP, which laid out the objectives, strategies, and processes of the campaign, as well as the subcontracts issued under those prime contracts. While we reviewed the contract and subcontract documents, we did not review any of the products resulting from those contracts or subcontracts to determine whether they complied with any applicable laws. To describe the services provided by contractors and their subcontractors in support of the media campaign, we analyzed the contracts of the four prime contractors and the subcontracts of the 102 subcontractors. We obtained information about the roles and responsibilities of each of the four prime contractors from the background, scope of work, and task description sections of their respective contracts. Additionally, to describe services provided by the 102 subcontractors, we developed a data collection instrument (DCI) to allow us to analyze these services uniformly by capturing the following information: (1) the subcontract agreement date(s), (2) the prime contractor issuing the subcontract(s), and (3) the task categories that captured the tasks listed in the subcontract agreement(s). We supplemented our analysis of the prime contracts and subcontracts with information from interviews with officials from ONDCP and PDFA and representatives from several prime contractors and subcontractors. We estimated the amounts awarded to each of the four prime contractors based on the award data contained in their contracts and any subsequent modifications to these contracts related to awards. Each of the four prime contracts was a “cost plus fixed fee” contract, meaning that with the exception of a fixed fee, payments were disbursed in the form of reimbursements for invoiced costs. Therefore, the award amounts listed in the contract agreements were estimates of the amounts the contractors would actually receive in reimbursements. 
Because these estimates were constantly revised based on the status of campaign projects or other information, contract modifications were used to update the contract award data. For the purposes of this review, we used the latest contract modifications to estimate the prime contractors’ awards as they contained the most recent information. Each of these contracts covered multiple years. Awards for each year of the contract were estimated at the beginning of the contract, and those yearly estimates were modified throughout the life of the contract. The year time frames established by the contracts (with the exception of the Westat contract) did not correspond to government fiscal years and differed with each contractor. For example, Ogilvy & Mather’s contract year was from January to January and Fleishman-Hillard’s contract year was from December to December. In order to estimate the prime contractors’ award amounts by fiscal year, it was necessary to prorate the award data listed in the contracts and modifications. By prorating the award data, we obtained estimated award data for each month and were then able to calculate estimated award amounts by fiscal year. An example of this type of calculation appears below. The major limitation of this method of analysis is that it assumes an equal distribution of the total estimated award over the term of the contract, which may not reflect the actual schedule of reimbursements to the contractor. Another limitation of our analysis is that it relies on estimates of the actual costs (i.e., estimated award amounts). We decided to use estimated award data instead of the expenditure data provided by ONDCP because the expenditure data were not complete. We estimated the amounts awarded to each of the 102 subcontractors based on the award data contained in their subcontracts and modifications to these subcontracts. 
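The monthly proration just described can be sketched in code. The sketch below is an illustrative reconstruction of the stated method, not the report's own worked example; the function names and the even monthly split of the estimated award are assumptions.

```python
def contract_months(start, end):
    """List the (year, month) pairs covered by a contract term given
    as inclusive (year, month) endpoints."""
    months = []
    year, month = start
    while (year, month) <= end:
        months.append((year, month))
        month += 1
        if month > 12:
            month, year = 1, year + 1
    return months

def fiscal_year(year, month):
    """U.S. federal fiscal year: October of one calendar year through
    September of the next."""
    return year + 1 if month >= 10 else year

def prorate_by_fiscal_year(total_award, start, end):
    """Spread the estimated contract award evenly across the months of
    the contract term, then total the monthly shares by fiscal year."""
    months = contract_months(start, end)
    monthly_share = total_award / len(months)
    by_fy = {}
    for year, month in months:
        fy = fiscal_year(year, month)
        by_fy[fy] = by_fy.get(fy, 0.0) + monthly_share
    return by_fy

# A hypothetical January-December 2002 contract year with a $12 million
# estimated award: nine months (Jan-Sep) fall in fiscal year 2002 and
# three months (Oct-Dec) fall in fiscal year 2003.
print(prorate_by_fiscal_year(12_000_000, (2002, 1), (2002, 12)))
# {2002: 9000000.0, 2003: 3000000.0}
```

With real contract data, the same routine would simply be re-run against each contract modification's updated estimate.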
In 18 cases where subcontract award data were insufficient, we used invoices and vouchers provided by the prime contractors to estimate expenditure data. Subcontract award data were determined to be insufficient if (1) the subcontract did not contain any estimated award data or (2) the subcontract listed a rate of compensation for services but did not specify a maximum term or compensation. We classified the award data contained in the subcontracts of the 102 subcontractors into five types: (1) cost-reimbursable, (2) cost plus fixed fee, (3) indefinite quantity/indefinite delivery, (4) firm fixed price, and (5) rate-based. We analyzed each type of award data differently to produce estimated award data for the 102 subcontractors for fiscal years 2002 through 2004. We analyzed the subcontracts containing cost-reimbursable, cost plus fixed fee, and indefinite quantity/indefinite delivery award data using the same method used to analyze the prime contractor award data. We analyzed the subcontracts containing firm fixed price award data using the prorating method described above only if the term of the subcontract covered multiple fiscal years. Many of these subcontracts had terms that fell completely within a single fiscal year, in which case we assigned the total award amount listed in the subcontract to the appropriate fiscal year. Firm fixed price subcontracts are agreements in which the subcontractor receives a fixed amount for the services it provides. Regardless of the time the subcontractor requires to complete its assigned tasks or whether the subcontractor incurs additional unexpected costs in the completion of its assigned tasks, it will not receive any additional funds without a subsequent modification to the subcontract. Consequently, the award data contained in the firm fixed price subcontracts represents the actual amount the subcontractor should have received. 
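As a sketch of how the five types of award data route to different analyses, the following example dispatches on award type. The function names are assumptions, and proration is simplified to an even split across fiscal years rather than the month-by-month proration the report describes; rate-based data, which follow a separate two-step analysis, are excluded.

```python
def fiscal_years_covered(start_fy, end_fy):
    """Inclusive list of the fiscal years a subcontract term touches."""
    return list(range(start_fy, end_fy + 1))

def estimate_subcontract_award(award_type, total, start_fy, end_fy):
    """Dispatch the estimate by the type of award data in the subcontract."""
    prorated_types = {"cost-reimbursable", "cost plus fixed fee",
                      "indefinite quantity/indefinite delivery"}
    years = fiscal_years_covered(start_fy, end_fy)
    if award_type == "firm fixed price" and len(years) == 1:
        # Firm fixed price data are the actual amounts due, so a term
        # within a single fiscal year is assigned the whole award.
        return {years[0]: total}
    if award_type in prorated_types or award_type == "firm fixed price":
        # Simplified even split per fiscal year; the report's method
        # prorates month by month instead.
        share = total / len(years)
        return {fy: share for fy in years}
    raise ValueError("rate-based data use the separate two-step analysis")

# A hypothetical single-year firm fixed price subcontract keeps its
# full award in that fiscal year.
print(estimate_subcontract_award("firm fixed price", 90_000, 2003, 2003))
# {2003: 90000}
```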
We analyzed subcontracts containing rate-based awards in a two-step process to produce estimated awards by fiscal year. Subcontracts containing rate-based data contain (1) a rate of compensation for the subcontractor (for example, $200 per hour), (2) a maximum term (such as 10 hours) or maximum compensation (such as $2,000), and (3) a term or period of performance (i.e., the period of time during which the subcontractor will provide its service, such as between June 1, 2002, and June 30, 2002). We calculated the maximum possible award by multiplying the rate of compensation by the maximum term (unless the subcontract specified a maximum compensation). We considered this calculation of maximum possible awards as the total estimated award amounts for all rate-based subcontracts. If the term (period of performance) of the subcontract fell within a single fiscal year, then the total estimated award of the contract was assigned to the appropriate fiscal year. If the term (period of performance) of the subcontract covered multiple fiscal years, then the total estimated award was prorated as previously described, and total estimated awards for each fiscal year were calculated. An example of this type of analysis appears below. In the 18 cases where we used invoices and vouchers to estimate expenditure data because subcontract award data were insufficient, we grouped the invoices and vouchers of each subcontractor by fiscal year and totaled the invoice/voucher amounts for each fiscal year. The methods of analysis used to produce estimated award data for subcontractors for fiscal years 2002 through 2004 have many of the same limitations as the method used to analyze the prime contract award data (i.e., much of the subcontract award data had to be prorated and some of the subcontract award data represented estimated reimbursements). In addition, we had to substitute expenditure data in the case of 18 subcontracts that did not contain sufficient award data. 
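The two-step rate-based analysis can be sketched as follows, using the illustrative terms from the text ($200 per hour, a 10-hour maximum term, and a June 1-30, 2002, period of performance). The function names are assumptions, and only the single-fiscal-year assignment from that example is implemented; a multi-year term would be prorated by month instead.

```python
def rate_based_total(rate, max_term=None, max_compensation=None):
    """Step 1: the maximum possible award is rate x maximum term, unless
    the subcontract specifies a maximum compensation, which is taken
    directly as the total estimate."""
    if max_compensation is not None:
        return max_compensation
    return rate * max_term

def assign_to_fiscal_years(total, fiscal_years):
    """Step 2: a period of performance within a single fiscal year
    receives the whole estimate; multi-year terms would instead be
    prorated by month, which this sketch omits."""
    if len(fiscal_years) == 1:
        return {fiscal_years[0]: total}
    raise NotImplementedError("multi-year terms are prorated by month")

# $200 per hour with a 10-hour maximum term and a June 1-30, 2002
# period of performance, which falls entirely within fiscal year 2002.
total = rate_based_total(200, max_term=10)
print(assign_to_fiscal_years(total, [2002]))  # {2002: 2000}
```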
Consequently, we based some of our calculations related to total subcontractor estimates on different types of data (expenditure or award). We decided to use estimated award data whenever possible to ensure data consistency (i.e., to avoid comparing contractor awards based on estimated award data with subcontractor awards that were based on expenditure data). We conducted our work from March 2004 through February 2005 in accordance with generally accepted government auditing standards. To develop anti-drug television, print, Internet, and radio ads, the media campaign employs a three-phase advertising development and research process. The three phases of the advertising development and research process are (1) the exploratory research phase (pre-ad creation); (2) the qualitative and quantitative research and expert review phase (during ad creation); and (3) the media planning, placement, and tracking phase (post-ad creation). The initial exploratory phase consists of extensive research to understand the subject matter and covers many sources of information, including (1) consumer insights, (2) national studies, (3) behavioral change experts, and (4) subject matter experts. PDFA is a major source for this background research. In addition, the Behavioral Change Expert Panel (BCEP), assembled by Ogilvy & Mather, is composed of a number of individuals possessing specialized expertise relevant to specific aspects of the media campaign, such as the sociology of behavior change in youth or communicating with minority audiences. The BCEP is responsible for developing a Behavioral Brief, which is a background document that describes the major insights of research and literature to consider when developing advertising intended to reach youth audiences. The final goal of the exploratory research phase is for Ogilvy & Mather and PDFA to produce a Creative Brief for each advertising series. 
The Creative Brief is a compilation of information provided by subject matter experts, including (1) information relevant to the specific messages of the campaign and (2) relevant portions of the qualitative research provided by PDFA regarding consumer insights and national studies. The pro bono agencies responsible for the creative development of a given advertising series use the Creative Brief and the Behavioral Brief to inform their efforts. The second phase involves ad creation and qualitative and quantitative research. PDFA is responsible for soliciting pro bono advertising agencies that create the advertising concepts using the Creative and Behavioral Briefs. The media campaign uses multiple pro bono advertising agencies to develop advertisements. One example of a media campaign advertising series is the “Monitoring/Love” series of advertisements—a message targeting parents, promoting awareness of their children’s activities. A single pro bono ad agency developed all of the advertisements within this series. After initial advertising concepts are developed, the Formative Creative Evaluation Panel (FCEP) and the BCEP review these initial concepts. Next, feedback from FCEP and BCEP is used to revise the advertising concepts. Any recommendations or observations that may be relevant to future campaign efforts are to be kept for possible applications to new Creative Briefs during the initial exploratory research phase (i.e., the feedback loop in this iterative process). Once the advertising concepts have been reviewed and revised, production estimates are calculated and reviewed for maximum cost efficiency. Once this process is completed, ONDCP is responsible for reviewing the ad concepts and approving funding for production of the advertisements. After advertisements are produced, they are submitted for copytesting, a process used to determine whether advertisements meet effectiveness standards for distribution. 
In the copytesting process, large sample audiences (usually consisting of 300 youths and 150 parents per copytest session) view the ads and are surveyed regarding their responses to the advertisement, drug attitudes, beliefs, and behaviors. Copytesting relies on a comparison of exposed audiences and nonexposed control audiences to determine effectiveness of advertisements. According to Ogilvy & Mather (the contractor responsible for implementing copytesting), the audience is split evenly across ethnic, gender, and age categories. One-half of the audience is exposed to the advertisement and the other half is not. Copytesting researchers then survey and compare the drug beliefs and intentions of each group to determine the effectiveness of the advertisement. If an advertisement does not meet effectiveness standards set by ONDCP, the advertisement is not aired. To successfully pass the copytesting process, an advertisement must significantly strengthen anti-drug beliefs or weaken intentions to use marijuana without creating any adverse effects. Copytesting questions are designed so that the information provided by the responses can be used to revise advertisements that fail to meet effectiveness standards. Media planning (determining where, when, and for how long to air or print the advertisements) occurs concurrently with the advertising development and assessment process. The media plan is finalized and executed (the advertisements are distributed to media vendors) once the advertisements have successfully completed the copytesting phase and the advertisements have undergone a final review by ONDCP. After the advertisements air, audience reactions are to be tracked through an evaluative process that measures the effectiveness of specific ads over time within specific audience populations. 
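The exposed-versus-control comparison at the heart of copytesting can be illustrated with a minimal sketch. The belief scores, the 1-to-5 scale, and the comparison function here are all hypothetical, since the report does not specify the actual survey instrument, scoring, or statistical test ONDCP's effectiveness standards use.

```python
import statistics

def copytest_comparison(exposed_scores, control_scores):
    """Compare mean anti-drug belief scores for the half of the sample
    audience shown the advertisement against the non-exposed control
    half. A positive difference indicates stronger anti-drug beliefs
    among the exposed group; ONDCP's actual effectiveness threshold
    and significance test are not detailed in the report."""
    return statistics.mean(exposed_scores) - statistics.mean(control_scores)

# Hypothetical 1-to-5 belief ratings from the two halves of one session.
exposed = [4, 5, 4, 3, 5, 4]
control = [3, 4, 3, 3, 4, 3]
print(round(copytest_comparison(exposed, control), 2))  # 0.83
```

In practice, a session of roughly 300 youths and 150 parents would be split as described in the text, with separate comparisons for beliefs and intentions.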
During fiscal years 2002 through 2004, Ogilvy & Mather retained the services of two groups of subcontractors: (1) multicultural media planning and buying agencies and (2) substance use behavioral change experts—the Behavioral Change Expert Panel. Ogilvy & Mather awarded nearly $5 million to its 20 subcontractors. Six multicultural subcontractors provided services in support of Ogilvy & Mather’s media planning, placement, and purchase responsibilities. Each multicultural subcontractor provided marketing services and strategies with regard to a specific minority audience. Each multicultural subcontractor was responsible for planning and buying media advertising time and space targeting its minority audience, managing the pro bono match activity that accompanied its media purchases, and trafficking advertising to media vendors. The multicultural subcontractors also assisted Ogilvy & Mather with its advertising creation and assessment responsibilities by providing strategic input with regard to marketing to minority audiences, particularly at the preliminary qualitative research and initial ad concept review phases. Ogilvy & Mather awarded more than $4 million to the multicultural subcontractors, constituting about 90 percent of the nearly $5 million awarded by Ogilvy & Mather to subcontractors during fiscal years 2002 through 2004. The awards received by multicultural subcontractors covered only the cost of labor, overhead, and fees and did not include any funding specifically designated for the purchase of media advertising time and space. BCEP subcontractors mainly applied their specialized expertise to three aspects of the advertising development and research process: (1) the development of the Behavioral Brief, (2) the review and revision of initial advertising concepts, and (3) the evaluation of ad effectiveness in the postproduction and postdistribution phases of the campaign.
During the initial exploratory research phase, the BCEP developed the Behavioral Brief and contributed to the development of the Creative Brief. The pro bono advertising agencies engaged by PDFA used the Behavioral and Creative Briefs to develop initial advertising concepts and preliminary ads. During the qualitative research and expert review portion of the ad creation phase, the BCEP reviewed the initial advertising concepts and preliminary ads and contributed to the qualitative research process by recommending improvements and revisions to the ads to foster behavior changes in the target audiences. After the final production of the ads, the BCEP worked with PDFA and Ogilvy & Mather to develop the questions used during the copytesting and postdistribution evaluation processes to determine the nature and extent of the effect of the ads on audience beliefs and intentions. At any point during the advertising development and research process, BCEP subcontractors were to provide strategic input and advice to any media campaign partner on an as-needed basis. Ogilvy & Mather awarded less than $500,000 to the BCEP subcontractors, constituting about 10 percent of the nearly $5 million awarded by Ogilvy & Mather to all of its subcontractors during fiscal years 2002 through 2004. To support its public communications and outreach efforts, Fleishman-Hillard retained the services of 80 subcontractors, which we categorized in the following 10 groups: (1) campaign message promotion, (2) photography and video production, (3) campaign message development, (4) contracting management, (5) research, (6) Internet technology, (7) Marijuana & Kids Briefings panelists and speakers, (8) Library Working Group experts, (9) Asian American and Pacific Islander Marijuana Media Roundtable panelists and speakers, and (10) Teen Advisor Program experts. Fleishman-Hillard awarded about $8 million to its 80 subcontractors.
Approximately 89 percent of the estimated $8 million that Fleishman-Hillard awarded was provided to a single category of subcontractor—those responsible for campaign message promotion. Table 3 depicts award amounts within the remaining 11 percent (about $900,000), which was awarded to nine categories of subcontractors. Eleven photography and video production subcontractors provided a wide array of services, including photographing media campaign promotional events and creating audiovisual materials promoting media campaign messages. For example, one photography subcontractor was responsible for photographing the media campaign’s Boston Parent Wake-Up Rally and processing the photographs for Web display and digital reproductions. Gourvitz Communications, Inc., was responsible for producing a number of videos for the media campaign, including the Marijuana Initiative Video News Release and the Marijuana Community Coalition Video. Fleishman-Hillard awarded an estimated total of nearly $345,000 to photography and video production subcontractors during fiscal years 2002 through 2004. Within this group, the two largest awards went to video production subcontractor Gourvitz Communications, Inc. (an estimated $262,000) and to Court TV (an estimated $77,000). The remaining nine awards were each for an estimated $1,500 or less. Fourteen campaign message development subcontractors provided a wide array of services, including planning and implementing promotional events and researching and drafting feature articles for submission to print and online media venues. For example, one campaign message development subcontractor, Students Against Destructive Decisions, Inc.
(SADD), was responsible for raising public awareness of the risks of marijuana use by planning and executing five guerrilla “Wake-Up” student rallies in which students, dressed in distinctive clothing designed by ONDCP and SADD, distributed media campaign materials in highly public urban sites during rush hour. Another campaign message development subcontractor answered “Ask the Expert” questions submitted through the media campaign’s “theantidrug.com” Web site and researched and wrote feature articles on media campaign key messages that were placed on the Web site and submitted to print media venues. Fleishman-Hillard awarded an estimated $214,000 to campaign message development subcontractors during fiscal years 2002 through 2004. Within this group, the four largest awards went to SADD (an estimated $44,000), to Pride Youth Programs (an estimated $30,000), and to two individual experts (estimated amounts of $54,000 and $25,500). The remaining 10 awards were each for an estimated $14,000 or less. The sole subcontractor providing contract management services was a temporary placement agency. This subcontractor provided temporary personnel staff to Fleishman-Hillard to assist with the preparation of invoices to be submitted to ONDCP regarding Fleishman-Hillard projects. Fleishman-Hillard awarded an estimated $174,000 to this subcontractor during fiscal years 2002 through 2004. Five research subcontractors provided a wide array of services, including analyzing media campaign marketing strategies and reporting on the kinds of drug-related messages currently influencing America’s youth. For example, one research contractor, MarketBridge, was responsible for demonstrating and quantifying the value of corporate partnerships to the media campaign.
Another research subcontractor, Mediascope, was responsible for conducting a study on the prevalence and context of substance use and abuse in the 150 most popular music videos for the purposes of identifying the negative and positive substance-related messages targeting youth audiences. Fleishman-Hillard awarded an estimated $83,000 to research subcontractors during fiscal years 2002 through 2004. Within this group, the largest award, an estimated $56,000, went to MarketBridge. The remaining four awards were each for an estimated $10,000 or less. Four Internet technology subcontractors provided a wide range of services, including e-mail distribution and Web site development. For example, an Internet technology subcontractor, Experian eMarketing Services, was responsible for creating and sending e-mail messages to recipient lists created by Fleishman-Hillard, using content provided by Fleishman-Hillard. Another Internet technology subcontractor, TestPros, assessed the usability of two media campaign Web sites. Fleishman-Hillard awarded an estimated $35,000 to Internet technology subcontractors during fiscal years 2002 through 2004. Within this group, the largest award, an estimated $17,500, went to Experian eMarketing Services. The remaining three awards were each for an estimated $11,000 or less. Twelve Marijuana & Kids Briefings subcontractors served as panelists and speakers in roundtable discussions addressing the latest science on marijuana’s neurological, health, and developmental effects on youth. Fleishman-Hillard awarded an estimated $15,000 to these panelists and speakers during fiscal years 2002 through 2004. All of the Marijuana & Kids Briefings subcontractors were individual experts, rather than firms. Most of these subcontractors were paid at a daily rate of $500, with a maximum term of service of 1 day. Within this group, the largest award went to an individual expert for an estimated $9,000.
The remaining 11 awards were each for an estimated $1,000 or less. The purpose of the Library Working Group was to explore how librarians and other adults can help kids find accurate, high-quality information about drugs on the Internet. Five Library Working Group subcontractors provided a range of services, including advising on common library and Internet issues; assisting in the development of instructional products about cyberliteracy and illicit drugs; and recommending strategies, vehicles, and partnerships to accomplish program goals. Fleishman-Hillard awarded an estimated $5,000 to Library Working Group subcontractors during fiscal years 2002 through 2004. All of the Library Working Group subcontractors were individual experts, rather than firms. Each of the five subcontractors received a total estimated award of $1,000. Ten Asian American and Pacific Islander Marijuana Media Roundtable subcontractors served as panelists and speakers in roundtable discussions to address the latest scientific findings on marijuana’s neurological, health, and developmental effects on youth. Fleishman-Hillard awarded an estimated $5,000 to Asian American and Pacific Islander Marijuana Media Roundtable subcontractors during fiscal years 2002 through 2004. All of the Asian American and Pacific Islander Marijuana Media Roundtable subcontractors were individual experts, rather than firms. Each of these subcontractors received a total estimated award of $500. Four Teen Advisor Program subcontractors were responsible for providing insight and feedback on the campaign’s youth-oriented strategies in order to guide the development of teen programs, events, and Web site content. Fleishman-Hillard awarded an estimated $800 to Teen Advisor Program subcontractors during fiscal years 2002 through 2004. All of the Teen Advisor Program subcontractors were individual experts, rather than firms. Each of the four subcontractors received a total estimated award of $200.
In addition to those named above, the following individuals contributed to this report: David Alexander, Leo Barbour, R. Rochelle Burns, Christine Davis, Wendy C. Johnson, Weldon McPhail, Jean McSween, Brenda Rabinowitz, Tami Weerasingha, Bill Woods, and Kathryn Young.

The Office of National Drug Control Policy (ONDCP) was required by the Drug Free Media Campaign Act of 1998 (21 U.S.C. 1801 et seq.) to conduct a national media campaign to reduce and prevent drug use among America's youth. Since 1998, Congress has appropriated over $1 billion for the media campaign. However, a 2003 report by the Senate Committee on Appropriations expressed some concerns about the media campaign, including concern that a large portion of the campaign's budget had been used for consulting services rather than the direct purchase of media time and space. The report, therefore, directed GAO to review the use of consultants to support the media campaign. This report describes the services provided by consultants (defined by GAO as the prime contractors and their subcontractors) in support of the media campaign, along with the estimated award amounts for these services. Our analysis of contracts covering ONDCP's National Youth Anti-Drug Media Campaign from fiscal years 2002 through 2004 revealed that four contractors provided many of the services required to execute the campaign. These four prime contractors provided an array of services that fell within three broad categories: (1) advertising, (2) public communications and outreach, and (3) evaluation services to gauge the campaign's effectiveness. The prime contractors also acquired additional specialized expertise from 102 subcontractors.
Some of the specific tasks performed by the contractors and their subcontractors included conducting qualitative and quantitative research for advertising creation, working with the entertainment industry to portray the negative consequences of drug use in television and movies, and conducting an evaluation intended to measure the effectiveness of the media campaign. Based on our analysis of contracts covering fiscal years 2002 through 2004, we estimated that $520 million was awarded to the four prime contractors, of which an estimated $373 million--72 percent--was committed to purchasing media time and space for campaign advertisements. The remaining $147 million--28 percent--was for the services provided by the prime contractors. Contractors, in turn, awarded $14 million of that amount to their subcontractors. |
The federal government has enriched uranium for use by commercial nuclear power plants and for defense-related purposes for more than 40 years at three plants, located near Oak Ridge, Tennessee; Paducah, Kentucky; and Portsmouth, Ohio (see fig. 1). The Oak Ridge plant, known as East Tennessee Technology Park, is located on 1,500 acres of land; the oldest of the three plants, it has not produced enriched uranium since 1985. The Paducah plant, located on about 3,500 acres, continues to enrich uranium for commercial nuclear power plants under a lease to a private company, the United States Enrichment Corporation (USEC). The Portsmouth plant, a 3,700-acre site, ceased enriching uranium in May 2001 because of reductions in the commercial market for enriched uranium. Later that year, the plant was placed on cold standby (an inactive status that maintains the plant in a usable condition), so that production at the facility could be restarted in the event of a significant disruption in the nation’s supply of enriched uranium. USEC was awarded the contract to maintain the plant in cold standby, a condition that continues today. Yet because of newer, more efficient enrichment technologies and the globalization of the uranium enrichment market, all three uranium enrichment plants have become largely obsolete. Therefore, DOE now faces the task of decontaminating, decommissioning, and undertaking other remedial actions at these large and complex plants, which are contaminated with hazardous industrial, chemical, nuclear, and radiological materials. In 1991, at the request of the House Subcommittee on Energy and Power, GAO analyzed the adequacy of a $500 million annual deposit into a fund to pay for the cost of cleanup at DOE’s three uranium enrichment plants. 
We reported that a $500 million deposit indexed to inflation would likely be adequate, assuming that deposits would be made annually into the fund as long as cleanup costs were expected to be incurred, which, at the time of our study, was until 2040. Additionally, in a related report, we concluded that the decommissioning costs at the plants should be paid by the beneficiaries of the services provided by DOE—in this case, DOE’s commercial and governmental customers. In 1992, the Congress passed the Energy Policy Act, which established the Uranium Enrichment Decontamination and Decommissioning Fund to pay for the costs of decontaminating and decommissioning the nation’s three uranium enrichment plants. The Energy Policy Act also authorized the Fund to pay remedial action costs associated with the plants’ operation, to the extent that funds were available, and to reimburse uranium and thorium licensees for the portion of their cleanup costs associated with the sale of these materials to the federal government. The act authorized the collection of revenues for 15 years, ending in 2007, to pay for the authorized cleanup costs. Revenues to the Fund are derived from (1) an assessment, of up to $150 million annually, on domestic utilities that used the enriched uranium produced by DOE’s plants for nuclear power generation and (2) federal government appropriations amounting to the difference between the authorized funding under the Energy Policy Act and the assessment on utilities. Congress specified that any unused balances in the Fund were to be invested in Treasury securities and any interest earned made available to pay for activities covered under the Fund. DOE’s Office of Environmental Management is responsible for managing the Fund and plant cleanup activities, which, through fiscal year 2003, were mostly carried out by DOE contractor Bechtel Jacobs. 
The department’s Oak Ridge Operations Office in Oak Ridge, Tennessee, had historically provided day-to-day Fund management and oversight of cleanup activities at all three plants. In October 2003, however, DOE established a new office in Lexington, Kentucky, to directly manage the cleanup activities at the Paducah and Portsmouth plants. The Oak Ridge Operations Office continues to manage the Fund and the cleanup activities at the Oak Ridge plant. Currently, the Fund is used to pay for the following activities:

Reimbursements to uranium and thorium licensees. The Energy Policy Act provides that the Fund be used to reimburse licensees of active uranium and thorium processing sites for the portion of their decontamination and decommissioning activities, reclamation efforts, and other cleanup costs attributable to the uranium and thorium they sold to the federal government. From fiscal year 1994, when the Fund began incurring costs, through fiscal year 2003, $447 million was used from the Fund for uranium and thorium reimbursements (in 2004 dollars).

Cleanup activities at the uranium enrichment plants. Cleanup activities at the plants include remedial actions, such as assessing and treating groundwater or soil contamination; waste management activities, such as disposing of contaminated materials; the surveillance and maintenance of the plants, such as providing security and making general repairs to keep the plants in a safe condition; the decontamination and decommissioning of inactive facilities by either cleaning them up so they can be reused or demolishing them; and other activities, such as covering litigation costs at the three plants and supporting site-specific advisory boards. From fiscal year 1994 through fiscal year 2003, a total of $2.7 billion from the Fund was used for these cleanup activities (in 2004 dollars).

Under a variety of models using DOE’s projected costs and revenues, the Fund will be insufficient to cover all of its authorized activities.
Using DOE’s projections that 2044 would be the most likely date for completion of cleanup at the plants, we estimated that cleanup costs would exceed Fund revenues by $3.8 billion to $6.2 billion (in 2007 dollars). Because DOE had not determined when decontamination and decommissioning work would begin at the Paducah and Portsmouth plants, and because federal contributions to the Fund have been less than the authorized amount, we developed several alternative models to assess the effects of different assumptions on the Fund’s sufficiency. Specifically, we developed the following models:

Baseline model. This model was developed in consultation with DOE and its contractor officials about what the most likely cleanup time frames would be and used cost estimates assuming that cleanup at all plants would be completed by 2044.

Accelerated model. Because DOE had not determined when the final decontamination and decommissioning would begin at its Paducah and Portsmouth plants, we developed the accelerated model under the assumption that cleanup work could be completed faster than under the baseline model, given unconstrained funding. DOE and its contractor officials provided additional cost estimates, where Paducah’s final work would begin in 2010 and be completed by 2024 and Portsmouth’s final decontamination and decommissioning work would begin in 2007 and be completed by 2024.

Deferred model. This model was developed under the assumption that, given current funding constraints, it may not be realistic for two major decontamination and decommissioning projects to be done concurrently. Thus, deferred time frames were determined by DOE, assuming that all work would be completed at the Portsmouth plant first and then initiated at the Paducah plant. For the deferred model, Portsmouth’s final decontamination and decommissioning work was estimated to be completed from 2010 to 2037 and Paducah’s from 2038 to 2052.

Revenue-added model.
This model was developed to assess the effect of the government’s meeting its total authorized annual contributions on the balance of the Fund, which by the start of fiscal year 2004, was $707 million less than authorized under the Energy Policy Act. For the revenue-added model, we used baseline time frames but assumed that government contributions to the Fund would continue annually at the 2004 authorized level until all government contributions as authorized by law had been met, which would occur in fiscal year 2009.

Revenue-added-plus-interest model. For this model, we built on the revenue-added model to include the effect of forgone interest that the Fund could have earned had the government contributed the full authorized amount. We assumed that these additional payments would be made to the Fund in the same amounts as the 2004 annual authorized amount and extended payments through fiscal year 2010.

Irrespective of which model we used, we found that the Fund would be insufficient to cover the projected cleanup costs at the uranium enrichment plants (see table 1). At best, assuming no additional funding is provided beyond the 2007 authorized amount, Fund costs could exceed revenues by $3.8 billion (in 2007 dollars). Even with current authorized amounts extended out through fiscal year 2010, the Fund could still be insufficient by close to $0.46 billion (in 2007 dollars). Although our analysis was able to capture several uncertainties potentially affecting the Fund—including interest rates, inflation rates, cost and revenue variances, and the timing of decontamination and decommissioning—additional uncertainties exist that we could not capture. These uncertainties included possible changes to the scope of the cleanup; whether the Fund would be required to pay for additional activities, such as long-term water monitoring once the plants were closed; and the extent of potential future litigation costs that the Fund would have to support.
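Mechanically, each of these models reduces to a year-by-year ledger in which the Fund earns interest on its invested balance, collects revenues through the authorized assessment period, and pays projected cleanup costs until the completion date. The sketch below illustrates that arithmetic only; the starting balance, revenue and cost streams, and interest rate are invented round numbers, not DOE's actual projections.

```python
def project_fund(balance, annual_revenue, revenue_years,
                 annual_cost, cost_years, interest=0.04):
    """Roll the Fund forward one year at a time: interest accrues on
    the balance, revenues stop after revenue_years, and cleanup costs
    run for cost_years. Returns the final balance (negative = shortfall)."""
    for year in range(cost_years):
        balance *= 1 + interest          # interest on invested balance
        if year < revenue_years:
            balance += annual_revenue    # utility assessment + appropriations
        balance -= annual_cost           # projected cleanup outlays
    return balance

# Illustrative figures (in millions of dollars): a $1,500M starting
# balance, $420M/yr in revenues for 4 more years, and $180M/yr in
# cleanup costs over 40 years of remaining work.
final = project_fund(1500, 420, 4, 180, 40)
print(round(final))  # a negative result indicates a projected shortfall
```

The simulation models described above layered uncertainty ranges for interest rates, inflation, costs, and timing on top of this basic ledger; varying those inputs across the models is what produced the range of estimated shortfalls.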
For example, a risk analysis completed by DOE in 2004 for the Paducah plant indicated that changes in the scope of cleanup could increase cleanup costs by more than $3 billion and extend the time frame for cleanup to more than 30 years past the original scheduled date of 2019. In addition, when they developed their cleanup cost estimates, DOE officials assumed that the costs of long-term stewardship activities—such as groundwater monitoring, which may continue after all necessary cleanup costs have been completed—would be covered by a separate funding source. DOE officials acknowledged, however, that if another funding source were not available, they may be required to use resources from the Fund. Uncertainty over the extent of the Fund’s insufficiency remains because DOE has not issued plans that identify the most probable time frames and costs for the decontamination and decommissioning of the Paducah and Portsmouth plants. DOE was required to develop a report to Congress containing such information, but because DOE was significantly revising its cost estimates, it determined the report would not be accurate and did not finalize it. According to DOE officials, it is now in the process of finalizing a report that contains new schedule and cost information for both plants and addresses the sufficiency of the Fund. This report was due to Congress in October 2007 but has yet to be issued by DOE. Because the report has not been finalized, DOE officials were unwilling to provide us with updated information on current schedule and cost estimates. As a result, we are unable to assess how any new information may affect the Fund’s sufficiency. Until DOE resolves uncertainties surrounding the plants’ cleanup, including when cleanup activities are expected to both begin and end, it is not possible to more precisely determine the total funding needed to cover the authorized cleanup activities. 
If, however, closure and cleanup time frames extend past the originally projected schedules at the plants, then the total costs the Fund is authorized to support may increase, particularly costs for maintenance, safety, and security activities and other fixed costs that must be maintained until cleanup work at the plants is complete. In closing, we believe that an extension to the Fund may be necessary to cover cleanup costs at the nation’s three uranium enrichment plants. The information currently available on the projected costs and revenues authorized by the Fund suggests that it may be insufficient by up to several billion dollars. DOE appears to be taking steps to develop new, detailed time frames and cost estimates for the decontamination and decommissioning of its uranium enrichment plants. However, until this detailed information is made available, we cannot assess how DOE’s updated time frames and cost estimates may affect the Fund’s sufficiency. As a result, we believe that DOE should finalize plans for the Paducah and Portsmouth plants so that it can better determine the extent to which Fund extensions may be needed. Unless the Fund is extended beyond its current expiration in 2007, cleanup activities that could not be paid for from the Fund because of a shortfall may have to be financed entirely by the federal government and could add an additional fiscal burden at a time when our government is facing already significant long-term fiscal challenges. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information, please contact Robin M. Nazzaro at (202) 512-3841 or nazzaror@gao.gov. Sherry L. McDonald, Assistant Director; Ellen W. Chu, Alyssa M. Hundrup, Mehrzad Nadji, and Barbara Timmerman made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. 
The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Cleaning up the nation's three uranium enrichment plants will cost billions of dollars and could span decades. These plants--located near Oak Ridge, Tenn.; Paducah, Ky.; and Portsmouth, Ohio--are contaminated with radioactive and hazardous materials. In 1992, the Energy Policy Act created the Uranium Enrichment Decontamination and Decommissioning Fund (Fund) to pay for plant cleanup. Fund revenues come from an assessment on domestic utilities and federal government appropriations. In 2004, GAO reported on the Fund's sufficiency to cover authorized activities. GAO recommended that Congress consider reauthorizing the Fund for 3 more years, to 2010, and require the Department of Energy (DOE) to reassess the Fund's sufficiency before it expired to determine if further extensions were needed. Because decisions not yet made by DOE could affect the cost of cleanup and the Fund's sufficiency, GAO also recommended that DOE develop decontamination and decommissioning plans for the Paducah and Portsmouth plants that would identify the most probable time frames and costs for completing the cleanup work. This testimony is based on GAO's 2004 report. It summarizes the extent to which the Fund may be sufficient to cover authorized activities and the status of DOE's progress in developing decontamination and decommissioning plans for the Paducah and Portsmouth plants. GAO's analysis showed that the Fund will be insufficient to cover all authorized activities.
Using DOE's estimates for the cleanup costs at the three plants and current and likely revenue projections, GAO developed a number of simulation models that factored in annual cost and revenue projections and uncertainties surrounding inflation rates, costs, revenues, and the timing of the final cleanup work at the Paducah and Portsmouth plants. Specifically, GAO's baseline model demonstrated that by 2044, the most likely date for completing all cleanup activities at the plants, cleanup costs will have exceeded revenues by $3.8 billion to $6.2 billion (in 2007 dollars). Importantly, GAO found that the Fund would be insufficient irrespective of which estimates were used or what time frames were assumed. DOE has not yet issued plans for the decontamination and decommissioning of the Paducah and Portsmouth plants as GAO recommended. According to DOE officials, the department is developing a report to Congress that will contain updated information for both plants. DOE did not make that information available to GAO, however, and hence GAO was unable to assess how any new schedule or cost estimates may affect the Fund's sufficiency. Until DOE issues plans that provide the most probable time frames and costs for completing decontamination and decommissioning at the Paducah and Portsmouth plants, it is not possible to more precisely determine the total funding needed to cover the authorized cleanup activities.
The Fifth and Fourteenth Amendments prohibit law enforcement officers from engaging in discriminatory behavior on the basis of individuals’ race, ethnicity, or national origin. The Fifth Amendment protects against discrimination by federal law enforcement officers, and the Equal Protection Clause of the Fourteenth Amendment protects against discrimination by state and local law enforcement officers. Two federal statutes also prohibit discrimination by law enforcement agencies that receive federal financial assistance. Title VI of the Civil Rights Act of 1964 prohibits discrimination on the basis of race, color, or national origin by all recipients of federal financial assistance. The Omnibus Crime Control and Safe Streets Act of 1968 prohibits discrimination on the basis of race, color, national origin, sex, or religion by law enforcement agencies that receive federal funds pursuant to that statute. In addition, a 1994 statute grants the Attorney General the authority to seek injunctive relief when a state or local law enforcement agency engages in a pattern or practice of conduct that violates the Constitution or federal law, regardless of whether the agency is a recipient of financial assistance. The Fourth Amendment guarantees the rights of people to be secure from unreasonable searches and seizures. The temporary detention of individuals during the stop of an automobile by police constitutes a seizure of persons within the meaning of the Fourth Amendment. The Supreme Court recently held that regardless of an officer’s actual motivation, a stop of an automobile is reasonable and permitted by the Fourth Amendment when the officer has probable cause to believe that a traffic violation occurred.
The Court noted, however, that the Constitution prohibits selective enforcement of the law based on considerations such as race; the constitutional basis for objecting to intentionally discriminatory application of laws lies in the equal protection provisions of the Constitution, not in the Fourth Amendment. Some have expressed concern that the escalation of this country’s war on drugs has placed minorities at increased risk of discriminatory treatment by law enforcement. The allegation is that law enforcement officers stop minority motorists for minor traffic violations when, in reality, the stop is a pretext to search for drugs or other contraband in the vehicle. In 1986, the Drug Enforcement Administration (DEA) established Operation Pipeline, a highway drug interdiction program that trains federal, state, and local law enforcement personnel on indicators that officers should look for that would suggest possible drug trafficking activity among motorists. In a 1999 report, the American Civil Liberties Union (ACLU) stated that Operation Pipeline fostered the use of a racially biased drug courier profile, in part by using training materials that implicitly encouraged the targeting of minority motorists. DEA’s position is that it did not and does not teach or advocate using race as a factor in traffic stops. Further, according to DEA officials, a 1997 review of Operation Pipeline by the Justice Department’s Civil Rights Division, which is responsible for the enforcement of statutory provisions against discrimination, concluded that Operation Pipeline did not instruct trainees to use race as a factor in traffic stops. Representatives of organizations representing law enforcement officers have stated that racial profiling is unacceptable. 
The National Association of Police Organizations, representing more than 220,000 officers nationwide, has expressed opposition to pulling over an automobile, searching personal property, or detaining an individual solely on the basis of the individual’s race, ethnicity, gender, or age. The International Association of Chiefs of Police, one of the largest organizations representing police executives, stated that stopping and searching an individual simply because of race, gender, or economic level is unlawful and unconstitutional and should not be tolerated in any police organization. Neither group supports federally mandated collection of data on motorist stops. Lawsuits alleging racial profiling have been filed in a number of states, including Oklahoma, New Jersey, Maryland, Illinois, Florida, Pennsylvania, and Colorado. For example, in Colorado, a class action suit filed on behalf of 400 individuals asked the court to halt racially based stops by a Sheriff’s Department highway drug interdiction unit. Traffic infractions were cited as the reason for stopping the motorists, but tickets were not issued. The court ruled that investigatory stops based solely on motorists’ match with specified drug courier indicators violated the Fourth Amendment’s prohibitions against unreasonable seizures. A settlement was reached that awarded damages to the plaintiffs and disbanded the drug unit. In another case, a class action lawsuit filed by ACLU against the Maryland State Police resulted in a settlement that included a requirement that the state maintain computer records of motorist searches. These records are intended to enable the state to monitor for any patterns of discrimination. In yet another case, a Superior Court in New Jersey ruled that the New Jersey State Police engaged in discriminatory enforcement of the traffic laws. 
The Justice Department’s Civil Rights Division has recently completed investigations in New Jersey and Montgomery County, MD, which included reviewing complaints of discriminatory treatment of motorists. In the New Jersey case, Justice filed suit in U.S. District Court alleging that a pattern or practice of discriminatory law enforcement had occurred. The parties filed a joint application for entry of a consent decree, which the judge approved in December 1999. Under the consent decree, state troopers in New Jersey will be required to collect data on motorist stops and searches, including the race, ethnicity, and gender of motor vehicle drivers. In the Maryland case, the Justice Department and Montgomery County signed a Memorandum of Understanding in January 2000 that resolved the issues raised in Justice’s investigation. The agreement included the requirement that the Montgomery County Police Department document all traffic stops, including information on the race, ethnicity, and gender of drivers. Lack of empirical information on the existence and prevalence of racial profiling has led to calls for local law enforcement to collect data on which motorists are stopped, and why. To support local data collection efforts, the Bureau of Justice Assistance plans to release a Resource Guide in spring of 2000. The guide is expected to focus on how data can be collected to monitor for bias in traffic stops, with specific “lessons learned” and implementation guidance from communities that have begun the data collection process. Our objectives were to provide information on (1) analyses that have been conducted on racial profiling of motorists by law enforcement; and (2) federal, state, and local data currently available, or expected to be available soon, on motorist stops. 
To obtain information on analyses that have been conducted on racial profiling of motorists, we did a search of on-line databases and reviewed all of the quantitative analyses that we identified that attempted to address whether law enforcement officers stop motorists on the basis of race. We also contacted the authors of the analyses and obtained references to any other analysis or research sources they considered to be pertinent. Our criterion for selecting analyses to be included in this report was that they provide quantitative information on motorist stops, although these analyses might have also measured searches, arrests, and/or other activities. We used social science research principles to assess the methodological adequacy of the available analyses and to discuss factors that should be considered in collecting stronger empirical data. Our review is not intended to constitute a statement regarding the legal standard for proving discrimination in this context. To obtain information on the federal government’s efforts to collect data on racial profiling of motorists, we reviewed published and electronic literature and discussed data sources with officials at the Justice Department’s Bureau of Justice Statistics (BJS), officials in the office of the Attorney General, academic experts, the American Civil Liberties Union (ACLU), and several police associations. To obtain information on states’ efforts to collect data on racial profiling of motorists, we conducted Internet searches and reviewed the literature. We also held discussions with academic experts, state officials, ACLU officials, and representatives of the National Conference of State Legislatures. To obtain information on local efforts to collect data on racial profiling of motorists, we reviewed the literature and held discussions with academic experts, interest groups, local police officials, and knowledgeable federal officials. 
On the basis of these discussions, we judgmentally selected several communities that had voluntarily decided to require their police departments to collect motorist stop data. In September 1999, we visited four police departments in California—in San Diego, San Jose, Alameda, and Piedmont. We selected these police departments because they appeared to be furthest along in their plans for collecting data, could provide examples of different data collection methods, and varied greatly in size. We performed this work from August 1999 through February 2000 in accordance with generally accepted government auditing standards. We found no comprehensive, nationwide source of information on motorist stops to support an analysis of whether race has been a key factor in law enforcement agencies’ traffic stop practices. We identified five quantitative analyses on racial profiling that included data on motorist stops. The quantity and quality of information that these analyses provided varied, and the findings are inconclusive for determining whether racial profiling occurred. Although inconclusive, the cumulative results of the analyses indicate that in relation to the populations to which they were compared, African Americans in particular, and minorities in general, may have been more likely to be stopped on the roadways studied. A key limitation of the available analyses is that they did not fully examine whether the rates and/or severity of traffic violations committed by different groups may have put them at different levels of risk for being stopped. Such data would help determine whether minority motorists are stopped at the same level that they commit traffic law violations that are likely to prompt stops. Most analyses either compared the proportion of minorities among stopped motorists to their proportion in a different population (e.g., the U.S. population, the driving age population of a state) or did not use a benchmark comparison at all. 
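The kind of benchmark comparison described above amounts to simple arithmetic. The sketch below (our own illustration in Python; the function name and all counts are invented, not drawn from any of the five analyses) computes the basic disparity ratio on which such comparisons rest.

```python
# Sketch of the benchmark comparison used in the analyses discussed
# above. All figures below are hypothetical, for illustration only.

def disparity_ratio(minority_stops: int, total_stops: int,
                    benchmark_share: float) -> float:
    """Ratio of the minority share among stopped motorists to the
    minority share in the chosen benchmark population."""
    stop_share = minority_stops / total_stops
    return stop_share / benchmark_share

# Example: 290 of 1,000 stopped drivers are minorities (29 percent),
# against a benchmark in which minorities are 18 percent of violators.
ratio = disparity_ratio(290, 1000, 0.18)
print(round(ratio, 2))  # 1.61 -- stopped at about 1.6 times the benchmark share
```

The choice of denominator is the crux: the same 29-percent stop share yields a very different ratio against a census benchmark than against a violator benchmark.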
There appears to be little comparative research on traffic violations committed by different racial groups, including possible differences in the type or seriousness of traffic violations. Therefore, there are no firm data indicating either that the types and seriousness of driving violations committed by whites and minorities are comparable or that they are not. Although we have no reason to expect that such differences exist, collecting research data on this issue (though difficult to do) could help eliminate this as a possible explanation for racial disparities in the stopping of motorists. The studies with the best research design collected data on the population of travelers on sections of interstate highways and on the portion of those travelers who violated at least one traffic law. The studies compared the racial composition of these groups against that of motorists who were stopped. However, the studies made no distinction between the seriousness of different traffic violations. Although violating any traffic law makes a driver eligible to be stopped, it is not clear that all violations are equally likely to prompt a stop. One related survey, the National Survey of Speeding and Other Unsafe Driving Actions (U.S. Department of Transportation, National Highway Traffic Safety Administration, September 15, 1998), asked respondents in an interview whether they had committed a series of specific unsafe actions while driving. Demographic data, including race and ethnicity, were obtained on each respondent, but although answers to the unsafe or aggressive driving behavior questions were analyzed by some demographic characteristics, no analyses by race or ethnicity of driver were conducted. Lamberth’s study of motorists on the New Jersey Turnpike is notable in that it attempted to determine the percentage and characteristics of drivers who put themselves at risk for being stopped; in that study, drivers exceeding the speed limit by at least 6 miles per hour were counted as violators. 
However, we are uncertain whether traveling over the speed limit by at least 6 miles per hour on a major highway is the violation for which most police stops occurred. In a similar analysis of motorists traveling along a segment of Interstate 95 in northeastern Maryland, Lamberth found the following: (1) 17 percent of the cars had an African American driver; (2) 18 percent of cars exceeding the speed limit by at least 1 mile per hour or violating another traffic law had an African American driver; (3) 29 percent of the motorists stopped by the Maryland State Police were African American. This study also found that 92 percent of all motorists were violating the speeding law, 2 percent were violating another traffic law, and 7 percent were not violating any traffic law. However, we are uncertain whether Lamberth’s criteria for traffic violations were the basis for which most police stops were made. Another analysis examined motorist stops in Florida. Using data that were first presented in 1992 in two Florida newspaper articles, Harris reported that more than 70 percent of almost 1,100 motorists stopped over a 3-year period in the late 1980s along a segment of Interstate 95 in Volusia County, FL, were African American or Hispanic. In comparison, African Americans made up 12 percent of Florida’s driving age population and 15 percent of Florida drivers convicted of traffic offenses in 1991. Harris also reported that African Americans and Hispanics made up 12 percent and 9 percent, respectively, of the U.S. population. The findings reported by Harris were based on videotapes of almost 1,100 motorist stops made by Volusia County Sheriff deputies. However, videotapes of stops were not made for much of the 3-year period, and sometimes deputies taped over previous stops. Because no information was provided on other motorist stops made by the deputies over the 3-year period, we do not know whether the videotaped stops were representative of all stops made during that period. 
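Lamberth's Maryland figures (18 percent of violators versus 29 percent of stopped motorists being African American) can be subjected to a conventional significance check. The sketch below is our own illustration of a simplified one-sample test against a fixed benchmark; the number of stops (n = 500) is hypothetical, since the study's sample size is not given here.

```python
import math

def one_sample_z(p_hat: float, p0: float, n: int) -> float:
    """z statistic for testing whether an observed stop share p_hat
    differs from a fixed benchmark share p0, given n observed stops."""
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the benchmark
    return (p_hat - p0) / se

# Maryland figures: 18 percent of violators vs. 29 percent of stops
# were African American. n = 500 stops is our invented sample size.
z = one_sample_z(0.29, 0.18, 500)
print(round(z, 1))  # 6.4 -- well above conventional significance thresholds
```

A statistically significant disparity of this kind still does not establish racial profiling by itself; as discussed above, it leaves open whether the benchmark captured the violations that actually prompt stops.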
In addition, no information was provided on drivers who put themselves at risk for being stopped. The Philadelphia ACLU reported that motorists stopped by Philadelphia police in selected districts during 2 weeks in 1997 were more likely to be minority group members than would be expected from their representation in census data. Limitations of this analysis included the use of census data as a basis for comparison and an absence of information on drivers who put themselves at risk for being stopped. In addition, there were substantial amounts of missing data. The race of the driver was not recorded for about half of the approximately 1,500 police stops made during the 2 weeks. The New Jersey Attorney General’s Office reported that African Americans and Hispanics, respectively, represented 27 percent and 7 percent of the motorists stopped by New Jersey State Police on the New Jersey Turnpike. Interpreting these results is difficult because no benchmark was provided for comparison purposes. Because of the limited number of analyses and their methodological limitations, we believe the available data do not enable firm conclusions to be made from a social science perspective about racial profiling. For example, we question the validity of comparing the racial composition of a group of stopped motorists on a given roadway in a given location with the racial composition of a population that may be vastly different. It would be more valid to compare the racial characteristics of stopped motorists with those of the traveling population who violated similar traffic laws but were not stopped. This is what Lamberth did, although we are not certain that the traffic violations committed by the motorists observed in his studies were the same as those that prompted police stops. 
Nonetheless, Lamberth’s analyses went furthest by attempting to determine the racial composition of motorists at risk of being stopped by police as a function of traveling on the same roadways and violating traffic laws. We believe that the state of knowledge about racial profiling would be greater if Lamberth’s well-designed research were augmented with additional studies looking at the racial characteristics of persons who commit the types of violations that may result in stops. Other significant limitations of the available analyses were that the results of some analyses may have been skewed by missing data and may not have been representative of roadways and locations other than those reviewed. These limitations notwithstanding, we believe that in order to account for the disproportion in the reported levels at which minorities and whites are stopped on the roadways, (1) police officers would have to be substantially more likely to record the race of a driver during motorist stops if the driver was a minority than if the driver was white, and (2) the rate and/or severity of traffic violations committed by minorities would have to be substantially greater than those committed by whites. We have no reason to expect that either of these circumstances is the case. Appendix II contains a discussion of some of the methodological considerations and information needs involved in getting stronger original data from empirical research on the racial profiling of motorists. These include the need for high-quality data from multiple sources, such as from law enforcement records, surveys of motorists and police, and empirical research studies. By high quality, we mean data that are complete, accurate, and consistent and that provide specific information on the characteristics of the stop and the individuals involved in the stop in comparison to those who are not stopped. 
The accumulation of these data would form a better foundation for assessing whether, and to what extent, racial profiling exists on the roadways. Although the federal government has a limited role in making motorist stops, several federal activities currently planned or under way represent the first efforts to collect national level information. The Police Public Contact Survey conducted by BJS will include information on the characteristics of individuals reporting they were subject to traffic stops and other information about the stop. BJS is also conducting surveys of state and local law enforcement agencies to determine what motorist stop data they maintain. In addition, to help determine whether federal law enforcement agencies engage in racial profiling, three federal departments are under a presidential directive to collect information on the race, ethnicity, and gender of individuals whom they stop or search. A national household survey now under way asks respondents to discuss their contacts with police during motorist stops. As part of BJS’ 1999 Police Public Contact Survey, BJS is conducting interviews with 90,000 people aged 16 or older to ask them up to 36 questions pertaining to the most recent occasion (if any) during the prior 12 months that their motor vehicles were stopped by police officers. For example, the interview questions ask for information on the race of the motorist and police officer, the reason for the stop, whether a search was conducted, and whether the officer asked what the person was doing in that area. (See app. III for the survey questions to be asked.) BJS completed the survey in December 1999, and expects the results to be available in September 2000. BJS is conducting two surveys in an effort to determine whether law enforcement agencies collect stop data that can be used to address the question of racial profiling. 
One survey targets state police agencies; the other survey targets both state and local law enforcement agencies. In April 1999, BJS administered a survey of all state police agencies in the nation. The Survey of State Police Agencies asked, in general, whether the agency required its officers to report demographic information on the driver or other occupants of every vehicle stopped for a routine traffic violation. If the agency reported that it did collect such information, then more detailed questions were to be answered, such as whether individual records were kept detailing the driver’s race and immigration status and whether a search was conducted. BJS issued the results of the state police survey in February 2000. BJS found that 3 of the nation’s 49 state law enforcement agencies whose primary duties included highway patrol reported that they required officers to collect racial/ethnic data for all traffic stops. Of the three states, Nebraska and New Mexico reported storing the racial/ethnic data electronically, and New Jersey reported that it did not store the data electronically. BJS administers the Law Enforcement Management and Administrative Statistics (LEMAS) survey to a sample of state and local law enforcement agencies every 3 to 4 years. The survey collects information on the budget, salaries, and administrative practices of the agencies. The 1999 survey included a single question asking if the agencies collected data on traffic stops. The survey was sent to a sample of about 3,000 police/sheriff departments and was to include all agencies with 100 or more employees. The 1999 survey results are expected to be available during the summer of 2000. According to a BJS official, the 2000 LEMAS survey will contain more questions about what records are kept on motorist stops and whether they contain information on race. 
Pursuant to a presidential directive, three federal departments are to collect data on contacts between their law enforcement officers and the public. The directive did not instruct the departments to focus solely on motorist stops, but data on motorist stops are to be included. In June 1999, the President issued a memorandum on fairness in law enforcement that addressed the issue of racial profiling. The memorandum directed the Departments of Justice, the Interior, and the Treasury to design and implement a system for collecting and reporting statistics on the race, ethnicity, and gender of individuals who are stopped or searched by law enforcement. The three departments were tasked with developing data collection plans within 120 days and implementing field tests within 60 days of finalizing the plans. After 1 year of field testing, the departments are to report on complaints received that allege bias in law enforcement activities, the process for investigating and resolving complaints, and their outcome. The memorandum also required a report to the President within 120 days of the directive concerning each department’s training programs, policies, and practices regarding the use of race, ethnicity, and gender in law enforcement activities, as well as recommendations for improvement. The departments submitted data collection plans and proposed locations for the field tests to the White House in October 1999. (See app. IV for the list of data elements to be collected and all federal data collection test sites.) Federal law enforcement offices and proposed locations likely to be involved in motorist stops included the following: INS inspectors at the land border crossing at Del Rio, TX; INS border patrol agents from San Diego, CA; Yuma, AZ; and El Paso, TX; National Park Service officers at eight national parks; and National Park Service officers on three federally maintained memorial highways. 
According to Department of Justice plans, officials will also pursue a variety of techniques at some sites to try to determine if the characteristics of those stopped differed from populations encountered at the field site in general. Most traffic stops are made by state and local law enforcement officers. Consequently, state and local agencies are in the best position to collect law enforcement data on the characteristics of stopped motorists. Several states have introduced legislation that would require their state and/or local police departments to collect data on motorists’ traffic stops. However, few bills have passed. As of October 15, 1999, at least 15 states had taken some action to address concerns about racial profiling of motorists. Two of the 15 states—North Carolina and Connecticut—enacted legislation requiring the collection and compilation of data on motorist traffic stops. Similar legislation requiring the collection of specific stop data was introduced in 11 states. The legislation was pending in 7 of those 11 states and was either not carried over to the next legislative session or vetoed in 4. The two remaining states, New Jersey and Virginia, issued resolutions. New Jersey’s resolution calls for the investigation of racial profiling, and Virginia’s resolutions call for data on traffic stops to be compiled and analyzed. See table 1 for a list of the states that had proposed or enacted traffic stop bills or resolutions and their status as of October 15, 1999. All 13 states with data collection legislation proposed to collect data on driver’s race or ethnicity, the alleged traffic violation that resulted in a motorist stop, and whether an arrest was made. Most of these states also proposed to collect data on age, on whether a search was conducted, and on whether an oral warning or citation was issued. The number of data elements that each state proposed to collect ranged from 6 to 16. 
For a list of data elements that each of the 13 states proposed to collect, see appendix V. North Carolina passed legislation in April 1999 that called for the collection of statistics on a variety of law enforcement actions. Part of the legislation detailed what information on routine traffic stops by state law enforcement officers should be collected, maintained, and analyzed. All of the state’s approximately 40 state law enforcement agencies are to collect the data, although about 90 to 95 percent of all traffic stops are made by the North Carolina State Highway Patrol. Connecticut’s legislation passed in June 1999 and requires collection of certain traffic stop data on stops made by state as well as local police departments. In addition, Connecticut’s legislation bans the practice of racial profiling and calls for the collection of data on complaints that were generated as a result of law enforcement officer actions at traffic stops. North Carolina and Connecticut were both in the process of developing specifications for data collection. They planned to begin data collection on January 1, 2000. We visited four California police departments—San Diego, San Jose, Alameda, and Piedmont—to learn about local efforts to collect traffic stop data. These departments had either begun or planned to begin to voluntarily collect traffic stop data. Some officials told us that their departments were interested in collecting traffic stop data because they wanted to address community concerns about racial profiling. San Jose began collecting data in June 1999, Alameda and Piedmont began collecting data in October 1999, and San Diego began collecting data in January 2000. The departments generally planned to collect similar data; however, their data collection methods and plans for analyzing the data differed. 
All four police departments planned to collect data on five data elements: race or ethnicity, age, and gender of the driver; the reason for the traffic stop; and whether the stop resulted in a warning or citation or an arrest. In addition, Alameda, Piedmont, and San Diego planned to collect data on searches conducted during traffic stops. San Diego planned to collect six additional pieces of information. Table 2 summarizes the data that the four police departments will collect. In San Jose, officers use their police radios to report traffic stop information to the dispatcher, who then enters the data into a computer system. Officers can also use mobile computers located in their patrol cars to report traffic stop information, and this can be transmitted directly to the computer system. In San Diego, officers initially are collecting vehicle stop data using manually completed forms, and plan later to use a wireless system to transmit information to the department’s database. The Alameda police department also planned to use its computer-assisted dispatch system to collect data, but only on stops where citations are not issued, such as stops resulting in warnings or arrests. For stops in which the motorist receives a citation, traffic stop data are to be abstracted from patrol officers’ ticket books and from motor officers’ hand-held computer printouts and input into a citations database. Police officials in Piedmont, a police department consisting of 21 officers, decided that manually recording traffic stop information on paper forms would work best for its small department. Three of the four departments indicated that they expect to analyze their traffic stop data. 
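As a rough illustration of the common data elements summarized in table 2, a single stop record might be structured as follows. The field names and types are our own assumptions, not any department's actual schema.

```python
# Minimal sketch of a traffic stop record covering the five data
# elements all four departments collect, plus the search indicator
# collected by three of them. Field names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrafficStop:
    driver_race: str                         # race or ethnicity of driver
    driver_age: int
    driver_gender: str
    stop_reason: str                         # reason for the traffic stop
    disposition: str                         # "warning", "citation", or "arrest"
    search_conducted: Optional[bool] = None  # omitted where not collected

stop = TrafficStop("white", 34, "M", "moving violation", "citation", False)
print(stop.disposition)  # citation
```

Note that, as the report observes, a schema this minimal omits the specific violation cited, so it could not support analyses of whether some groups are stopped for less serious violations.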
A preliminary report, issued in December 1999 and providing analysis results on data collected between July and September 1999 in San Jose, indicated some racial disparity in traffic stops. According to the San Jose Police Department, the differences were due to socioeconomic factors rather than ethnicity. The report noted that more police were assigned to areas of San Jose that generated more police calls, and those neighborhoods tended to have more minorities. Because more police were available in these areas to make traffic stops, more stops were made there than in districts with a lower police presence. Within each police district, the stops reportedly reflected the demographics of the district. In the report, the San Jose Police Chief emphasized that more data were needed, along with the cooperation of the community to analyze what the data mean. Alameda officials told us they had no current plans to analyze their data, but the data will be available should there be a public request. None of the four departments planned to independently validate the accuracy of the data provided by the police officers. They said they rely on the integrity of the officers and supervisory oversight to ensure that the data are correct. Officials from two of the departments reported that the amount of data to be collected was limited so as not to be burdensome for officers. However, a lack of information may limit the types of analyses possible. For example, the data collection efforts do not require data on the specific violation for which a motorist was stopped, so questions about whether minorities were stopped more often for less serious violations cannot be answered. None of the four localities planned to collect this information. Officials noted, however, that trade-offs needed to be considered: police officers would be more likely to record motorist data if the data collection requirements imposed on them were not overly detailed or burdensome. 
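The within-district comparison described in the San Jose report can be sketched as a simple stratified tally: compare each district's minority share of stops against that district's own demographics rather than citywide figures. All districts, counts, and benchmark shares below are hypothetical.

```python
# Sketch of a stratified (per-district) stop analysis. Every figure
# here is invented for illustration; none comes from San Jose's data.
from collections import Counter

stops = [("A", "minority"), ("A", "white"), ("A", "minority"),
         ("B", "white"), ("B", "white"), ("B", "minority")]
district_minority_share = {"A": 0.60, "B": 0.30}  # hypothetical benchmarks

minority_stops = Counter()
totals = Counter()
for district, race in stops:
    totals[district] += 1
    if race == "minority":
        minority_stops[district] += 1

for d in sorted(totals):
    share = minority_stops[d] / totals[d]
    print(d, round(share, 2), "vs benchmark", district_minority_share[d])
```

Stratifying this way separates differences in where police are deployed from differences in whom they stop within a given area, which is the distinction the San Jose report invokes.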
For a more detailed discussion on each of the four police departments’ traffic stop data collection plans, see appendix VI. The five quantitative examinations of racial profiling that we identified did not produce conclusive findings concerning whether and to what extent racial profiling exists. Although methodologically limited, their cumulative results indicate that in relation to the populations to which they were compared, African Americans in particular, and minorities in general, may have been more likely than whites to be stopped on the roadways studied. Because of methodological weaknesses in the existing analyses, we cannot determine whether the rate at which African Americans or other minorities are stopped is disproportionate to the rate at which they commit violations that put them at risk of being stopped. Although definitive studies may not be possible, we believe that more and better research data on the racial characteristics of persons who commit the types of violations that may result in stops could be collected. To date, little empirical information exists at the federal, state, or local levels to provide a clear picture of the existence and/or prevalence of racial profiling. Data collection efforts that are currently planned or under way should provide more data in the next few years to help shed light on the issue. These efforts are steps in the right direction. However, it remains to be seen whether these efforts will produce the type and quality of information needed for answering questions about racial profiling. We requested comments on a draft of this report from the Justice Department. Based on a January 18 meeting with a Deputy Associate Attorney General and other Justice officials, and technical comments provided by Justice, we made changes to the text as appropriate. In addition, Justice’s Acting Assistant Attorney General for Civil Rights provided us with written comments, which are printed in full in appendix VII. 
Justice agreed with us that there is a paucity of available data for assessing whether and to what extent racial profiling of motorists may exist. Justice also agreed that current data collection efforts by law enforcement agencies, as well as additional research studies, could generate information that may help answer questions about racial profiling. Justice felt, however, that our report set too high a standard for proving that law enforcement officers discriminate against minority motorists. We believe that Justice’s letter mischaracterized the conclusion of our report. Justice states that it disagrees with the “draft report’s conclusion that the only ‘conclusive empirical data indicating’ the presence of racial profiling would be data that proved the use of race to a scientific certainty.” Our conclusion, however, was that the “available research is currently limited to five quantitative analyses that contain methodological limitations; they have not provided conclusive empirical data from a social science standpoint to determine the extent to which racial profiling may occur” (page 1). We also noted that to account for the disproportion in the reported levels at which minorities and whites are stopped on roadways, (1) police officers would have to be substantially more likely to record the race of a driver during motorist stops if the driver was a minority than if the driver was white, and (2) the rate and/or severity of traffic violations committed by minorities would have to be substantially greater than those committed by whites. We do not believe that our approach to reviewing the research studies was so rigorous that we required “scientific certainty” in the data to draw conclusions about the occurrence of racial profiling. And we make clear in the report that our review was not intended to comment on the legal standard for proving discrimination in this context (see our Scope and Methodology section). 
With respect to Justice’s suggestion that we required research studies to provide scientific certainty of racial profiling, we would note that the concept of scientific certainty is generally not applicable to social science research. Social science research data are generally imperfect because they are collected in the “real world” rather than under controlled laboratory conditions. A fundamental, universally accepted social science research principle that we did incorporate into our assessment of study results was whether the studies ruled out plausible alternative explanations for findings. We found that the available research on the racial profiling of motorists did not sufficiently rule out factors other than race—that is, other factors that may place motorists at risk of being stopped—that may have accounted for differences in stops.

We observed that the two studies by Professor Lamberth were well-designed and went further than others in attempting to determine whether race was related to traffic violations that increased the risk of being stopped. But Lamberth established a criterion in each study that cast the net so wide that virtually the entire population of motorists was eligible to be stopped (i.e., traveling at least 1 and 6 miles above the speed limit, respectively, on two major interstate highways), and his studies provided little information about why motorists actually were stopped. Although law enforcement officers can use their discretion in deciding whom to stop, more information is needed on the actual reasons why they stop motorists before a firm conclusion can be made that the reason was race. As we indicate in the report, current data collection efforts by local, state, and federal law enforcement agencies may provide information on the reasons for stops that may help answer this question.
The question of what kind of data would be needed to “prove” the use of race in motorist stops was outside the scope of our work. We recognize that the evidentiary standards that a court may apply in ruling on an allegation of race-based selective enforcement of the law may be different from the social science principles that we used to review these studies. It was not our intention to express or imply anything about legal standards to prove discrimination.

Justice also criticized our work for failing “to recognize or comment on the extensive scholarly debate on the subjects of the degree of statistical certainty, and the extent to which potential variables must be examined in order to demonstrate discrimination from a social science perspective.” We did not comment on the matter of statistical certainty because it was not the basis for our determination that the available research on racial profiling is inconclusive. The problems that we identified with the research studies dealt primarily with the design of the studies; that is, using inappropriate or questionable benchmarks to isolate race from other factors. More and better data are needed on what traffic violations trigger stops and whether race is related to them.

Justice agrees that it is important to use an appropriate benchmark against which to compare the racial composition of stopped motorists. Justice disagrees, however, about the importance of examining whether certain driving behaviors or characteristics of vehicles may affect the likelihood of being stopped. In this context, Justice suggests that we make the unwarranted assumption in our report that severe traffic violations account for such a large proportion of traffic stops that they have a significant effect on the data. We did not intend, nor do we believe, that the report makes any assumptions about the reasons for which motorists are stopped.
We simply believe that if the objective is to determine whether minority motorists are disproportionately more likely to be stopped than whites, then it is important to know what portion of the driving population on that roadway or in that jurisdiction commits the traffic offenses for which motorists are actually stopped—as opposed to being eligible to be stopped. This is the type of benchmark information that would isolate, to the extent possible, race from other variables that could influence traffic stops.

As arranged with your office, unless you publicly announce the contents of this letter earlier, we plan no further distribution until 15 days after the date of this report. At that time, we will send a copy to other appropriate congressional parties, the Honorable Janet Reno, the Attorney General, and to others upon request. If you or your staff have any questions concerning this report, please contact me or Evi L. Rezmovic, Assistant Director, on 202-512-8777. Other key contributors to this report are listed in appendix VIII.

As part of our work, we reviewed all available quantitative analyses that we could identify pertaining to the use of race as a factor in motorist stops. This appendix provides a summary of the design, results, and limitations for each of the five analyses.

Lamberth, J.L. (1994, unpublished). Revised Statistical Analysis of the Incidence of Police Stops and Arrests of Black Drivers/Travelers on the New Jersey Turnpike Between Exits or Interchanges 1 and 3 From 1988 Through 1991.

This analysis, done as part of a research study for a court case, provided a comparison of the races of vehicle occupants who were involved in traffic stops and arrests, drivers who violated traffic laws, and motorists in general who traveled along a segment of the southern end of the New Jersey Turnpike.
The study involved three types of data collection: (1) direct observation of motorists from fixed observation points along the side of the road; (2) a moving survey in which an observer drove on the roadway and noted the races of drivers and whether they were speeding; and (3) obtaining law enforcement records from the New Jersey State Police (NJSP).

In the first data collection effort, observers were stationed beside the road. Using binoculars, they noted the number of cars that passed the observation point, the race of the driver and/or any other occupant, and the vehicle’s state of registration. One observer was assigned to each lane of traffic, and a data recorder was present to record their observations. Observations were made in 18 randomly selected 3-hour blocks of time at 4 locations between 8 a.m. and 8 p.m. over a 2-week period in June 1993. The author noted that “most if not all” of the 26 pending cases in Gloucester County Superior Court arose between these hours. Observers were reported to have been between 14 and 45 feet from the roadway. According to the observations, 42,706 cars were counted as traveling on the turnpike, and the race(s) of the occupants were recorded for nearly 100 percent. An African American driver and/or other occupant was in 14 percent of the cars. Seventy-six percent of the cars were registered out of state.

In the second data collection effort, a moving survey was conducted to identify the racial distribution of all drivers on the road who violated the speed limit. In this phase, one observer drove at a constant 60 miles per hour (5 miles per hour above the speed limit at the time), and he recorded onto a tape recorder the race of each driver who passed him and whom he passed. The observer noted all cars that passed him as violators and all cars that he passed as nonviolators. In the moving survey, 1,768 cars were counted.
More than 98 percent were speeding and classified as “violators.” Fifteen percent of the cars observed speeding had an African American driver or other occupant.

A third data collection effort involved gathering data from NJSP. The data included the race of drivers who were stopped or arrested on randomly selected days between April 1988 and May 1991 along the section of the turnpike covered by the traffic surveys and an additional section of the roadway. These data included 1,128 arrest reports from turnpike stops; 2,974 stops from patrol activity logs from 35 randomly selected days; and police radio logs from 25 of the selected days. (The 1988 radio logs had been destroyed.) Of the 2,974 stops, 870 were from the section covered by the traffic surveys. Data were not provided on the number of arrests from this section.

Of 1,128 NJSP reports, the race of the driver/occupants was noted in 1,059 of them. According to these 1,059 reports, 73 percent of those arrested were African American. The patrol logs and radio logs noted 2,974 events as “stops.” Of the 2,974 stops, all but 78 noted the state of the registration of the car. Twenty-three percent of the stops were of New Jersey cars. Lamberth noted that race was “rarely if ever” noted on the patrol activity logs and that in the radio logs, race appears about one-third of the time for the records that had not been destroyed. (Out of 2,974 stops, race was not noted in 2,041, or 69 percent of the stops. Of the 870 stops that were in the sections covered by the traffic surveys, race was not recorded in 649, or 75 percent of them.) According to the available race data on all stops, 35 percent of drivers stopped were African American; 29 percent of all race-identified stops involved out-of-state African Americans; and 6 percent of the same stops involved in-state African Americans. Of the 221 race-identified stops from the section covered by the traffic surveys, 44 percent of the drivers were African American.
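The gap between these figures (about 15 percent African American among observed violators versus 44 percent among the 221 race-identified stops in the surveyed section) can be quantified with a standard two-proportion z-test. The counts below are hypothetical, reconstructed from the reported percentages for illustration; such a test measures the size of a disparity, but, for the reasons discussed throughout this appendix, it cannot by itself establish that race caused the disparity.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled proportion under the null of equal rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical counts reconstructed from reported percentages:
# benchmark: ~15% of the 1,768 moving-survey cars had an African
# American driver or occupant; stop sample: 44% of the 221
# race-identified stops involved an African American driver.
z = two_proportion_z(x1=265, n1=1768, x2=97, n2=221)
print(f"z = {z:.1f}")  # far beyond the conventional 1.96 threshold
```

The large z value shows only that the two percentages are very unlikely to differ by chance; whether the violator benchmark is the right comparison group is the separate, unresolved question discussed above.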
In a separate analysis, Lamberth examined the race of individuals who were ticketed by three different units of the New Jersey State Police’s Moorestown barracks. He compared the proportion of tickets issued to African Americans by the (1) Radar Unit, which used a remote van and left no discretion in the hands of patrol officers; (2) Tactical Patrol Unit, which concentrated on traffic problems at specific locations on the roadway and exercised more discretion on whom to stop than the Radar Unit; and (3) Patrol Unit, which was responsible for general law enforcement and exercised the most discretion among the three units. Lamberth found that African Americans received 18 percent of the tickets issued by the Radar Unit, about 24 percent of the tickets issued by the Tactical Patrol Unit, and about 34 percent of the tickets issued by the Patrol Unit. These results suggested that increasing levels of trooper discretion translated into increasing percentages of African American stops.

Although the data suggest that African Americans may have been disproportionately represented among motorists stopped and arrested, because of several limitations in the study’s methodology, this study does not provide clear evidence of racial profiling of African American drivers. First, the percentage of drivers violating traffic laws was measured by determining the percentage of drivers who were driving at least 6 miles per hour over the posted speed limit. The study did not attempt to distinguish motorists who were driving 6 miles per hour over the speed limit from those who were speeding more excessively. On the basis of the criterion used to indicate a speeding violation, the report concluded that 98 percent of the cars were violating at least one traffic law. We are uncertain whether this is an adequate indication of the type or seriousness of traffic violations that put motorists at risk for being stopped by police. We also do not know the reasons for which motorists were stopped.
Second, the traffic surveys and the data on police stops and arrests were not from comparable time periods. The police data were from about 2 to 5 years prior to when the traffic surveys were conducted—the traffic surveys were done in June 1993, and the police data were from randomly selected days from April 1988 to May 1991. Third, the observed differences in the percentage of African Americans ticketed by Radar, Tactical Patrol, and general Patrol units may or may not have been due to discriminatory practices on the part of law enforcement officers. For the Tactical and general Patrol units, we do not know the reasons why tickets were issued, nor do we know if different groups may have been at different levels of risk for being stopped because they differed in their rates and/or severity of committing traffic violations. Fourth, among stopped vehicles, the occupants’ race was not recorded for three-fourths of cases along the portion of the highway where the traffic surveys were conducted; race was not recorded for two-thirds of cases along a larger portion of the highway. Therefore, the race of most motorists stopped is unknown. Statisticians performed calculations to determine the implications of the missing data for drawing conclusions about racial disparities in stops. The calculations revealed that if the probability of having race recorded if one was African American and stopped was up to three times greater than if one was white and stopped, then African Americans were stopped at higher rates than whites. Because we do not know what factors affected officers’ decisions to record race, the true extent to which officers tended to record race for African Americans versus whites is unknown. 
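The statisticians’ missing-data sensitivity calculation can be illustrated with a short sketch. The counts below are hypothetical, reconstructed from the reported percentages (44 percent African American among the 221 race-identified stops in the surveyed section), and the recording-probability ratio is the quantity being varied; this is an illustration of the logic, not a reproduction of the statisticians’ analysis.

```python
# Sensitivity analysis for missing race data in stop records.
# Hypothetical counts reconstructed from reported percentages:
# of 221 race-identified stops, ~44% involved African American drivers.
recorded_black = 97    # assumption: ~44% of 221 race-identified stops
recorded_white = 124   # remaining race-identified stops

def implied_black_share(ratio):
    """Implied share of African Americans among ALL stops, assuming
    officers were `ratio` times as likely to record race when the
    driver was African American as when the driver was white."""
    # Inflate each observed count by the inverse of its recording
    # probability; the white recording probability cancels out.
    true_black = recorded_black / ratio
    true_white = recorded_white
    return true_black / (true_black + true_white)

for r in (1.0, 2.0, 3.0):
    print(f"recording ratio {r:.0f}: implied share {implied_black_share(r):.1%}")
# Even if race was recorded 3 times as often for African American
# drivers, the implied share (~21%) would still exceed the ~15% of
# violators observed to be African American in the moving survey.
```

This is what the report’s finding amounts to: the conclusion of disproportionate stops survives a recording bias of up to three, but because the actual recording behavior of officers is unknown, the true share remains unknown.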
This analysis, done as part of a research study for a court case, provided a comparison between the racial distribution of motorists stopped by the Maryland State Police (MSP) on I-95 in northeastern Maryland, motorists whose cars were searched by MSP, all motorists on the roadway, and motorists on the roadway who violated traffic laws. The study involved two types of data collection: (1) a moving survey in which a team of researchers drove on the roadway and noted the race of drivers and whether they were speeding, and (2) obtaining law enforcement records from the Maryland State Police.

In the first data collection effort, a moving survey was conducted to determine the races of highway motorists and the races of highway motorists who violated traffic laws. A team of observers drove separately at the posted speed limit (either 55 or 65 miles per hour) and recorded the race of each driver who passed him or her and whom he or she passed. The observer noted all cars that passed him or her as violators and all cars that he or she passed as nonviolators (unless they were observed violating some other traffic law). Twenty-one observation sessions were conducted on randomly selected days between 8 a.m. and 8 p.m. during the period June to July 1996. In the moving survey, over 5,700 cars were counted. The author reported that driver’s race was identified for 97 percent of cars. Seventeen percent of cars had African American drivers, and 76 percent had white drivers. Ninety-three percent of cars were observed violating traffic laws. Eighteen percent of the violators were African American, and 75 percent were white.

In the second data collection effort, data on motorists traveling a segment of I-95 were obtained from MSP.
These data included information on (1) motorist stops made between May and September 1997 in Baltimore, Cecil, and Harford counties; (2) searches conducted between January 1995 and September 1997; (3) searches by MSP on roadways outside this corridor; and (4) drug arrests resulting from these searches. The MSP data indicated that along the I-95 segment studied, 11,823 stops were made by MSP between May and September 1997. Of the 11,823 vehicles stopped, it was reported that 29 percent had an African American driver, 2 percent had a Hispanic driver, 64 percent had a white driver, and 5 percent had a driver of another race/ethnicity. With respect to searches, 956 motorists were searched between January 1995 and September 1997. It was reported that 71 percent were African American, 6 percent were Hispanic, 21 percent were white, and 2 percent were of another race/ethnicity. The proportion of searched cars in which contraband was found was the same for whites and African Americans and the same for I-95 as compared to the rest of Maryland. In comparison, there were 1,549 motorist searches outside the I-95 segment. Of the motorists searched, 32 percent were African American, 4 percent were another minority, and 64 percent were white.

Although the data suggest that African Americans may have been disproportionately represented among motorists stopped and/or searched, because of several limitations in the study’s methodology, this study does not provide clear evidence of racial profiling of African American drivers. First, we are uncertain whether the study adequately measured the type or seriousness of traffic violations that put motorists at risk for being stopped by police. For example, motorists who greatly exceed the speed limit, commit certain types of violations, or commit several violations simultaneously may be more likely to be stopped than others.
The measure used to determine whether a car was speeding was whether it was traveling at any speed over the posted limit. As with the New Jersey study by the same researcher, this study did not attempt to distinguish between motorists who drove 1 mile over the speed limit and those who sped more excessively. Furthermore, this study recorded whether traffic violations other than speeding were committed but treated them as equal in seriousness and equally likely to prompt a stop. This may or may not have been a valid assumption. In addition, we do not know the reasons for which motorists were stopped.

Second, the data on police stops and police searches were not from comparable time periods. The data for stops were from May through September of 1997, and the data on searches were from January 1995 through September 1997. Lamberth noted in a correspondence to us that the stop data were not provided in time for his initial report. These problems do not necessarily indicate a systematic bias, however.

Harris, David A. “Driving While Black” and All Other Traffic Offenses: The Supreme Court and Pretextual Traffic Stops. The Journal of Criminal Law and Criminology 87(2): 1997.

The analysis provides quantitative data from Florida and Maryland. The Florida data first appeared in two Florida newspaper articles in 1992. The Maryland data were obtained by the author from lawyers involved in a Maryland lawsuit. The journal article compares the racial characteristics of drivers involved in videotaped stops on a segment of I-95 in Volusia County, FL, over 3 years in the late 1980s (obtained from the County Sheriff’s Department by the Orlando Sentinel) with population and observational data. It was reported that videotapes of stops were not made for much of the 3-year period and sometimes deputies taped over previous stops. More than 70 percent of the persons stopped among nearly 1,100 videotaped stops on I-95 were African American or Hispanic.
African Americans, however, made up 12 percent of the driving age population in Florida, 15 percent of the traffic offenders in Florida in 1991, and 12 percent of the U.S. population. (Hispanics were 9 percent of the U.S. population.) Moreover, according to the Orlando Sentinel’s observations of 1,120 vehicles on I-95, about 5 percent of the drivers were dark-skinned. The article also noted that of the nearly 1,100 stops, 243 were made for swerving, 128 for exceeding the speed limit by more than 10 mph, 71 for burned-out tag lights, 46 for improper license tags, 45 for failure to signal, and a smattering of other offenses. Roughly half of the cars stopped were searched, 80 percent of the cars searched belonged to African American or Hispanic drivers, and African American and Hispanic drivers were detained for twice as long as whites. Only 9 of the 1,100 drivers stopped received tickets. In Maryland, the only data provided in the article are the percentages of African Americans and Hispanics among 732 motorists stopped and searched by 12 Maryland State Police officers with drug-sniffing dogs between January 1995 and June 1996. The article stated that 75 percent of the persons searched were African American; and 5 percent were Hispanic. Of the 12 officers involved, 2 stopped only African Americans. Over 95 percent of the drivers stopped by one officer were African American and 80 percent of the drivers stopped by six officers were African American. Because of several methodological limitations, this analysis does not provide clear evidence of racial profiling of African American or Hispanic drivers. For the Florida data, the validity of the comparisons made is questionable. For example, the data from the videotaped stops combined African Americans and Hispanics, but the comparison data for the driving age population of Florida included African Americans only. 
More importantly, no information was provided on the percentage of African Americans and Hispanics among traffic offenders. It is also not clear how accurately information on “dark-skinned” drivers was captured. In addition, there was an unknown amount of missing data because videotapes of stops were not made for much of the period. Therefore, we do not know whether the videotaped stops were representative of all stops. For the Maryland data, no comparative data are provided on the percentage of African Americans and Hispanics among motorists generally, among stopped motorists, or among motorists who violated traffic laws. The data for drivers in Maryland included only motorists who were stopped and consented to being searched.

Plaintiffs’ Fourth Monitoring Report: Pedestrian and Car Stop Audit, Philadelphia Office of the American Civil Liberties Union, July 1998.

This was an analysis of the racial characteristics of motorists and pedestrians stopped by the Philadelphia Police Department in selected districts and persons stopped by the department’s Narcotics Strike Force. All police incident reports recording interactions between police and civilians that involved stops and investigations of pedestrians or automobiles in the 8th, 9th, 18th, and 25th Police Districts for the week of October 6, 1997, were obtained. Hardcopy and computerized records were reviewed and coded according to whether tickets or arrests resulted from the stops and, if not, whether the record indicated any legal explanation for the stop. Previously unreported data were also provided on pedestrian and automobile stops in the 9th, 14th, and 18th Police Districts for the week of March 7, 1997. All reports filed by the Narcotics Strike Force for incidents in the 4th, 12th, 17th, 25th, and 35th Police Districts that involved a pedestrian or a vehicle stop during August 1997 were obtained. Records were coded in the same way as described above.
Demographic data for all Philadelphia residents from a 1995 census were provided as a benchmark for the city as a whole, and demographic data by census tract from the 1990 U.S. census were provided as benchmarks for the district-specific analyses. (The report mentions that Philadelphia Police Districts approximately encompass specific census tracts.)

For the week of March 7, there were police records of 516 motorist stops in the 3 districts. Overall, the race of the driver was recorded for only 51 percent of these stops, with race being recorded for between 40 and 58 percent of the stops in the three districts. For the week of October 6, there were police records of 1,083 motorist stops in the 4 districts. Overall, race of the driver was recorded for only 48 percent of these stops, with race being recorded for between 44 and 46 percent of the stops in three of the districts. (No separate data were provided for the 25th District, and no explanation was given for this omission.) In both weeks in each district, for stops with race of driver recorded, the driver was more likely to be a member of a minority group than would be expected on the basis of racial characteristics of the district as indicated by 1990 census tract data. Additionally, for stops with race recorded, the report indicated that minorities were more likely than whites to be involved in stops that were judged as not having a legally sufficient explanation than in stops judged to have a legally sufficient explanation for the March data, but not for the October data.

There were records of 214 stops by the Narcotics Strike Force in August 1997. (Strike Force data were not presented separately for motorist and pedestrian stops.) However, the race of the individual stopped was recorded for only 68 percent of the stops.
For stops with race recorded, the report indicated that minorities were more likely to be involved in stops judged not to have a legally sufficient explanation—43 percent African American, 39 percent Hispanic, and 18 percent white—than in stops judged to have a legally sufficient explanation—33 percent African American, 47 percent Hispanic, and 20 percent white.

Because of several methodological limitations, this analysis does not provide clear evidence of discriminatory targeting of minority drivers. First, data on the racial characteristics of most motorists covered in the study were not available. The absence of these data is a severe limitation because the race of most drivers stopped is unknown. Second, 1990 census tract data were used as benchmarks for the racial characteristics of the residents of the selected police districts. However, as the study notes, these census tract data were several years old at the time the study was conducted, and it is unknown how well these 1990 census data portrayed the 1997 population of these parts of Philadelphia. More importantly, no information was provided on the race of drivers who put themselves at risk for being stopped.

Interim Report of the State Police Review Team Regarding Allegations of Racial Profiling, New Jersey Attorney General’s Office, April 20, 1999.

The report provides the racial characteristics of drivers stopped, searched, and arrested by the New Jersey State Police (NJSP) along the New Jersey Turnpike. Data were obtained from NJSP on the numbers of stops and searches made by troopers assigned to the Moorestown and Cranbury police barracks—two of three barracks assigned to the turnpike. Motorist stop data were from April 1997 through November 1998 (except February 1998). Data on motorist searches resulting from stops were from the same two barracks. Only data on searches for which motorists gave their consent for the search were available.
Motorist search data were from selected months in 1994, all months in 1996 except February, and every month from April 1997 to February 1999. Data were obtained on motorist arrests made by troopers assigned to the Cranbury, Moorestown, and Newark barracks. Data on these arrests were from January 1996 through December 1998.

Over 87,000 motorists were stopped by NJSP. Twenty-seven percent of motorists stopped were African American, 7 percent were Hispanic, 7 percent were another minority, and 59 percent were white. Little difference was reported between the two NJSP barracks in the racial characteristics of motorists stopped. Only 627, or less than 1 percent, of these stops involved a search, but the racial characteristics of the motorists searched were not reported separately. Racial characteristics were available for 1,193 motorists who gave consent for searches. Fifty-three percent of motorists searched were African American, 24 percent were Hispanic, 1 percent were another minority, and 21 percent were white. Little difference was reported between the two NJSP barracks in the racial characteristics of motorists searched. Approximately 2,900 motorists were identified in the state’s Computerized Criminal History Database as being arrested by troopers assigned to all three barracks. Sixty-two percent of motorists arrested were African American, 6 percent were of another minority, and 32 percent were white. Little difference between the three NJSP barracks in the racial characteristics of motorists arrested was reported.

Because of several methodological limitations, this analysis does not provide clear evidence of racial profiling of minority drivers. First, direct comparisons between the racial characteristics of drivers stopped, drivers searched, and drivers arrested are problematic because comparable data for stops, searches, and arrests were not reported.
Although there is some overlap, data for stops, searches, and arrests were reported for different time periods. Second, search data were provided for consent searches only. Data on instances when motorists denied troopers’ search requests were not available. Without data on denied search requests, it is not possible to know the racial characteristics of all motorists from whom nonwarrant and nonprobable cause searches were requested. Overall, as the report acknowledges, it is difficult to interpret the significance of the study’s results because of the absence of any benchmark data, such as data from a survey to determine the racial or ethnic characteristics of turnpike motorists or the racial characteristics of motorists who put themselves at risk for being stopped.

Determining whether and to what extent racial profiling may occur on the nation’s roadways is a complicated task that would require collecting more and better data than are currently available. Additional studies using comparison groups that are similar to the stopped motorist group in terms of their risk of being stopped for a traffic violation would contribute to our understanding of this issue. Federal, state, and local data collection efforts currently under way should augment the available information provided that the data are complete, accurate, consistent, and specific. To the extent that such data are gathered by a number of jurisdictions, a more complete picture of which motorists are stopped and why may emerge. Surveys of motorists and police officers and reviews of police protocols and training guides can also contribute to the state of knowledge about racial profiling. In our judgment, such a multifaceted examination of the issues is the means for developing a full and meaningful answer to questions about racial profiling.

We have noted that some of the existing analyses may have made comparisons that were not valid.
These analyses generally compared the racial characteristics of motorists who were stopped with the racial characteristics of a larger population. The larger population may have been a state’s driving age population or the U.S. population as a whole, among others. The limitation of such analyses is that they do not address whether different groups may have been at different levels of risk for being stopped because they differed in their rates and/or severity of committing traffic violations. Although discretion may play a part in an officer’s decision to pull over a driver, the justification for initiating a stop is a violation or infraction committed by drivers. The available research on racial profiling, however, has given very little attention to potential differences across groups in the relative risk of being stopped. Lamberth’s studies have been important steps in the direction of estimating the relative risks of being stopped, but they did not provide conclusive results. In both studies, Lamberth found that more than 9 out of 10 motorists violated a traffic law and were thus legally eligible for being stopped by the police. However, it is not clear that the driving violations that made motorists legally eligible for being stopped were the same violations that would prompt actual stops by law enforcement officers. For example, one of Lamberth’s studies considered only speeding, although this type of infraction is not the only reason that motorists are stopped. The extent to which motorists exceed the speed limit and/or the number of violations they commit simultaneously may also affect their likelihood of being stopped. Lamberth’s other study considered speeding plus other traffic law violations. However, this study also did not differentiate between the type or seriousness of different violations. 
For example, motorists who greatly exceeded the speed limit, committed certain types of violations, or committed several violations simultaneously may have been more likely to be stopped than others. None of the analyses that we identified examined whether there may be racial disparities in motorist stops that are related to the type or seriousness of the traffic violation committed. We recognize that it is difficult to determine which traffic violations specifically prompt a law enforcement officer to stop one motorist rather than another. Different jurisdictions and officers may use different criteria, and candid information on the criteria may be difficult to obtain. Nonetheless, to understand the extent to which motorist stops may have a discriminatory basis, data are needed on traffic violations— including the type and seriousness of those violations—that produce stops and the relative rates at which different groups of drivers in a particular jurisdiction commit those violations. Although we have no reason to expect that there are racial differences in committing traffic violations, such data would enable the most appropriate comparisons to be made in order to answer a key question; that is, how do the racial characteristics of motorists who are stopped for a particular traffic violation compare with the racial characteristics of all drivers who commit the same violation but are not stopped? Both observational studies and driver surveys may be useful in developing such comparative information. Federal, state, and local efforts to collect data on motorist stops should increase the amount of information on law enforcement practices on the roadways. However, the usefulness of such data for addressing research questions about racial profiling will depend on the extent to which the data are complete, accurate, consistent, and sufficiently specific to provide meaningful information. 
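The comparison posed above, between motorists stopped for a particular violation and all drivers who commit that violation, can be sketched with invented numbers. Everything below (the group labels, counts, and resulting rates) is hypothetical, not data from any study cited in this report.

```python
from collections import Counter

# Hypothetical observational records: (race, was_stopped) for drivers all
# observed committing the same traffic violation. Every number is invented.
observed = (
    [("white", True)] * 40 + [("white", False)] * 960 +
    [("black", True)] * 30 + [("black", False)] * 470
)

violators = Counter(race for race, _ in observed)
stopped = Counter(race for race, was_stopped in observed if was_stopped)

# Stop rate among violators, by group: the comparison the text calls for.
rates = {race: stopped[race] / violators[race] for race in violators}

# Relative risk of being stopped, given the same violation; a value far
# from 1 would point to a disparity not explained by the violation itself.
relative_risk = rates["black"] / rates["white"]

print(rates)
print(relative_risk)
```

A real analysis would also need to hold constant the severity of the violation and the time and place of observation, for the reasons discussed above.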
Although we recognize that no empirical data are likely to be perfect, it would be difficult to draw conclusions about racial profiling if (1) stop data were selectively recorded, (2) race or other stop information was inaccurately recorded, (3) different jurisdictions captured different information, and/or (4) the information recorded was too broad to understand what happened. For example, recording “vehicle code violation” as the reason for the stop—when such a code can represent anything from failing to signal a lane change within a designated distance to a serious speeding offense—could make it difficult to discern whether and how the traffic violations for which motorists are stopped differ between racial groups. In addition, confidence in the quality of data would be enhanced if provisions were made to validate the accuracy and completeness of data that are collected. Also, it would be constructive to have a mechanism in place for agencies to communicate and coordinate with one another to ensure that they are collecting comparable information, at a sufficient level of specificity, to be useful for answering questions about racial profiling in a meaningful way. It could also be instructive to examine whether there was a correlation between the race of the law enforcement officer and that of the stopped motorist. Information is also needed on the extent to which officers exercise discretion in the process of stopping, citing, and searching drivers. Toward this end, a review of established police protocols and training guides could be useful. A survey of officers could likewise provide information on what observations and judgments they factor into their decisions to make stops. Although survey data of this sort would be subject to response biases, including the possibility that respondents would offer socially acceptable responses, well-designed surveys of police officers could be a useful supplement to official data.
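Some of the completeness and specificity checks discussed above lend themselves to simple automation. The sketch below is illustrative only: the field names, race codes, and the "too broad" reason category are invented for this example, not any agency's actual schema.

```python
# Minimal data-quality checks for a motorist-stop record. All field names
# and allowed values here are invented for illustration.
REQUIRED = {"date", "race", "reason", "outcome"}
VALID_RACE = {"white", "black", "hispanic", "asian", "other", "unknown"}
TOO_BROAD = {"vehicle code violation"}  # reasons too vague to analyze

def validate_stop_record(record):
    """Return a list of data-quality problems found in one stop record."""
    problems = []
    missing = REQUIRED - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if record.get("race") not in VALID_RACE:
        problems.append(f"unrecognized race code: {record.get('race')!r}")
    if record.get("reason", "").lower() in TOO_BROAD:
        problems.append("reason recorded too broadly to be analyzable")
    return problems

record = {"date": "2000-01-15", "race": "black",
          "reason": "vehicle code violation", "outcome": "warning"}
print(validate_stop_record(record))
```

Checks of this kind flag individual records; validating accuracy (whether the recorded race or reason matches what actually happened) would still require supervisory review or audits.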
Further, in addition to querying drivers about the frequency with which they were stopped, cited, and searched, driver surveys could also ask about how many miles the drivers typically drove and how often they committed infractions that were likely to prompt stops. Data from driver surveys could then be compared with data from police records.

We estimate that it will take from 5 to 10 minutes to complete this interview, with 10 minutes being the average time. If you have any comments regarding these estimates or any other aspect of this survey, send them to the Associate Director for Management Services, Room 2027, Bureau of the Census, Washington, DC 20233, or to the Office of Information and Regulatory Affairs, Office of Management and Budget, Washington, DC 20503.

NOTICE – Your report to the Census Bureau is confidential by law (U.S. Code, Title 42, Sections 3789g and 3735). All identifiable information will be used only by persons engaged in and for the purposes of the survey, and may not be disclosed or released to others for any purpose. (5-14-99)

Line no.

FIELD REPRESENTATIVE – Complete a PPCS-1 for all persons 16+ in all interviewed households. Complete a PPCS-1 through Item D for each NCVS Type Z person or NCVS proxy interview. DO NOT complete any PPCS-1 forms if the household is a Type A.

Personal (Self)
Telephone (Self)
Noninterview – FILL ITEM D

FIELD REPRESENTATIVE – Read introduction

INTRO 1 – Now I have some additional questions about any contacts you may have had with the police at any time during the last 12 months, that is, any time since ______________ 1, 1998. Exclude contacts with private security guards, police officers you see on a social basis, police officers related to you, or any contacts that occurred outside the United States. Include contacts which occurred as a result of being in a vehicle that was stopped by the police. However, please exclude those contacts which occurred because your employment or volunteer work brought you into regular contact with the police.
1a. Did you have any contact with a police officer during the last 12 months, that is, any time since ______________ 1, 1998?

1b. Were any of these contacts with a police officer in person, that is, face-to-face?

CONTACT SCREEN QUESTIONS – Continued

1c. How would you best describe the reason or reasons for these in-person contacts with the police during the last 12 months, that is, any time since ______________ 1, 1998? As I read some reasons, tell me if any of the contacts occurred once, more than once, or not at all. box 2 to the FLAP on page 11. Mark (X) all that apply.

A motor vehicle stop:
(1) You were in a motor vehicle stopped by the police
You contacted a police officer:
(2) To report a crime
(3) To report a crime you had witnessed
(4) To ask for assistance or information
(5) To let the police know about a problem in the neighborhood
(6) To tell the police about a traffic accident you had witnessed
(7) For some other reason – Please specify
A police officer contacted you because:
(8) You were involved in a traffic accident
(9) You were a witness to a traffic accident
(10) You were the victim of a crime which someone else reported to the police
(11) The police thought you might have been a witness to a crime
(12) The police asked you questions about a crime they thought you were involved in
(13) The police had a warrant for your arrest
(14) The police wanted to advise you about crime prevention information
(15) Some other reason we haven’t mentioned – Please specify

Was the motor vehicle stopped only once? (Is box 1 marked in Item 1c(1)?)
Was the motor vehicle stopped more than once? (Is box 2 marked in Item 1c(1)?) Yes – Ask Item 1d / No – SKIP to Item 37

1d.
You said that you were in a motor vehicle that was stopped by the police on more than one occasion in the last 12 months. How many different times were you stopped? (Record actual number.)

FIELD REPRESENTATIVE – Read introduction

INTRO 2 – You reported that you were in a motor vehicle that was stopped by the police on more than one occasion. For the following questions, please tell me about the most recent occasion.

2. How many people age 16 or over, INCLUDING YOURSELF, were in the vehicle?

3. Were you the driver? Yes / No – SKIP to Item 37

4. How many police officers were present during (this/the most recent) incident? One – SKIP to Item 6 / More than one (Record actual number.)

FORM PPCS-1 (5-14-99)

MOTOR VEHICLE STOPS – Continued

5. Were the police officers White, Black, or some other race? All White / All Black / All of some other race / Mostly White / Mostly Black / Mostly some other race / Equally mixed / Don’t know race of any/some

6. Was the police officer White, Black, or some other race? White / Black / Some other race / Don’t know

7. Were you arrested? Yes – SKIP to Item 9 / No / Don’t know

8. Did the police officer(s) threaten to arrest you? Yes / No / Don’t know

9. Did the police officer(s) search the vehicle? Yes – Ask Item 10 / No / Don’t know

10. At any time during (this/the most recent) incident did the police officer(s) ask permission to search the vehicle? Yes – Ask Item 11 / No / Don’t know

11. Did you give the police officer(s) permission to search the vehicle? Yes / No / Don’t know

12. Did the police officer(s) find any of the following items in the vehicle? (Read answer categories.) Mark (X) all that apply. Other evidence of a crime – Please specify

13. Do you think the police officer(s) had a legitimate reason to search the vehicle? Yes / No / Don’t know

14. At any time during (this/the most recent) incident, did the police officer(s) search you, frisk you, or pat you down? Yes – Ask Item 15 / No / Don’t know

15.
At any time during (this/the most recent) incident, did the police officer(s) ask permission to search you, frisk you, or pat you down? Yes – Ask Item 16 / No / Don’t know

16. At any time during (this/the most recent) incident, did you give the police officer(s) permission to search you, frisk you, or pat you down? Yes / No / Don’t know

17. Did the police officer(s) find any of the following items on or near you? (Read answer categories.) Mark (X) all that apply. Other evidence of a crime – Please specify

18. Do you think the police officer(s) had a legitimate reason to search you, frisk you, or pat you down?

19. Did the police officer(s) give a reason for stopping the vehicle? Yes – Ask Item 20 / No / Don’t know

20. What was the reason or reasons? Anything else? Mark (X) all that apply.
A vehicle defect, such as a burned out tail light or an expired license plate
Roadside check for drunk drivers
To check the respondent’s license plate, driver’s license, or vehicle registration
The police officer suspected the respondent of something
Some other reason – Please specify

21. Would you say that the police officer(s) had a legitimate reason for stopping you?

(Read answer categories.) Mark (X) all that apply. Given a warning? / Given a traffic ticket? / Tested for drunk driving? / Charged with driving while under the influence of drugs or alcohol? / Questioned about what you were doing in the area?

23. Not including anything just mentioned, were you charged with any of the following? (Read answer categories.) Mark (X) all that apply. Something else – Please specify

24. At any time during (this/the most recent) incident were you handcuffed?

USE OF FORCE IN TRAFFIC STOPS

25a. During (this/the most recent) incident, did the police officer(s) for any reason use or threaten to use physical force against you, such as grabbing you or threatening to hit you? Yes – SKIP to Item 26 / No / Don’t know

25b.
Aside from being handcuffed, did the police officer(s) for any reason use or threaten to use physical force against you, such as grabbing you or threatening to hit you?

26. (Read answer categories.) Mark (X) all that apply.
Actually push or grab you in a way that did not cause pain?
Actually push or grab you in a way that did cause pain?
Actually kick you or hit you with the police officer’s hand or something held in the police officer’s hand?
Actually unleash a police dog that bit you?
Actually spray you with a chemical or pepper spray?
Actually point a gun at you but did not shoot?
Actually fire a gun at you?
Actually use some other form of physical force? – Please specify
Threaten to push or grab you?
Threaten to kick you or hit you with the police officer’s hand or something held in the police officer’s hand?
Threaten you with a police dog?
Threaten to spray you with a chemical or pepper spray?
Threaten to fire a gun at you?
Threaten to use some other form of physical force? – Please specify

27. Do you feel that any of the physical force used or threatened against you was excessive? Yes – Ask Item 28 / No / Don’t know

28. FIELD REPRESENTATIVE – Mark without asking when ONLY ONE box is marked in Item 26. Specifically, what type of physical force do you feel was excessive? (Read items marked in Item 26.) Mark (X) all that apply.
Actually pushing or grabbing the respondent in a way that did not cause pain?
Actually pushing or grabbing the respondent in a way that did cause pain?
Actually kicking the respondent or hitting the respondent with the police officer’s hand or something held in the police officer’s hand?
Actually unleashing a police dog that bit the respondent?
Actually spraying the respondent with a chemical or pepper spray?
Actually pointing a gun at the respondent but did not shoot?
Actually firing a gun at the respondent?
Actually using some other form of physical force? – Please specify
Threatening to push or grab the respondent?
Threatening to kick the respondent or hit the respondent with the police officer’s hand or something held in the police officer’s hand?
Threatening the respondent with a police dog?
Threatening to spray the respondent with a chemical or pepper spray?
Threatening to fire a gun at the respondent?

USE OF FORCE IN TRAFFIC STOPS – Continued

29a. Were you injured as a result of (this/the most recent) incident? Yes / No – SKIP to Item 30

29b. Did your injuries include any of the following? (Read answer categories.) Mark (X) all that apply.
Broken bones or teeth knocked out
Bruises, black eyes, cuts, scratches, or swelling
Any other injury – Please specify

29c. What type of care did you receive for your (injury/injuries)? No care received / Respondent treated self / Emergency services only / Hospitalization / Other – Please specify

30. Do you think any of your actions during (this/the most recent) incident may have provoked the police officer(s) to use or threaten to use physical force? Yes / No / Don’t know

31. At any time during (this/the most recent) incident did you: (Read answer categories.) Mark (X) all that apply.
Argue with or disobey the police officer(s)?
Curse at, insult, or call the police officer(s) a name?
Say something threatening to the police officer(s)?
Resist being handcuffed or arrested?
Resist being searched or having the vehicle searched?
Try to escape by hiding, running away, or being in a high-speed chase?
Grab, hit, or fight with the police officer(s)?
Use a weapon to threaten the police officer(s)?
Use a weapon to assault the police officer(s)?
Do anything else that might have caused the police officer(s) to use or threaten to use physical force against you? – Please specify

32. Were you drinking at the time of (this/the most recent) incident? Yes / No / Don’t know

33. Were you using drugs at the time of (this/the most recent) incident? Yes / No / Don’t know

34.
Looking back at (this/the most recent) incident, do you feel the police behaved properly or improperly? Properly – SKIP to Check Item B1 / Improperly / Don’t know – SKIP to Check Item B1

35. Did you take any formal action, such as filing a complaint or lawsuit? Yes – Ask Item 36 / No / Don’t know

36. With whom did you file a complaint or lawsuit? (Read answer categories.) Mark (X) all that apply. Law enforcement agency employing the police officer(s)

Was respondent the driver during the traffic stop? (Is box 1 marked in Item 3?)
Was physical force used or threatened? (Is box 1 marked in Item 25a OR 25b?)
Other than a motor vehicle stop, did the respondent have any other in-person contacts with the police? (Are there any entries marked in categories (2) through (15) on the FLAP on page 11?)

USE OF FORCE IN OTHER FACE-TO-FACE CONTACTS

37. Earlier you reported you had a face-to-face contact with the police for the following reason(s): (Read items marked on the FLAP on page 11.) Did (this/any of these) contact(s) result in the police handcuffing you or using or threatening to use physical force against you, such as by grabbing you or threatening to hit you, during the last 12 months, that is, any time since ______________ 1, 1998? Yes – Ask Item 38 / No / Don’t know

38. On how many different occasions did the police handcuff you or use or threaten to use physical force against you?

FIELD REPRESENTATIVE – Read Introduction

INTRO 3 – You reported that, on more than one occasion, you had contact with the police in which the police handcuffed you or used or threatened to use physical force against you. For the following questions, please tell me about the most recent occasion.

39. FIELD REPRESENTATIVE – Mark without asking when ONLY ONE box is marked on the FLAP on page 11. Which of these contacts that you reported earlier resulted in a police officer using or threatening to use physical force?
Respondent contacted a police officer:
To report a crime respondent had witnessed
To ask for assistance or information
To let the police know about a problem in the neighborhood
To tell the police about a traffic accident respondent had witnessed
For some other reason – Please specify
A police officer contacted the respondent because:
Respondent was involved in a traffic accident
Respondent was a witness to a traffic accident
Respondent was the victim of a crime which someone else reported to the police
The police thought the respondent might have been a witness to a crime
The police asked the respondent questions about a crime they thought the respondent was involved in
The police had a warrant for the respondent’s arrest
The police wanted to advise the respondent about crime prevention information
For some other reason – Please specify

40. How many police officers were present during (this/the most recent) incident? Record actual number.

President Clinton directed the Attorney General, Secretary of the Treasury, and Secretary of the Interior in a June 9, 1999, memorandum to design and implement a system to collect and report statistics relating to race, ethnicity, and gender for law enforcement activities in their departments. Within 120 days of the directive, in consultation with the Attorney General, the departments were to develop proposals for collecting the data; and within 60 days of finalizing the proposals, the departments were to implement a 1-year field test. This appendix presents the field locations and data elements that the Attorney General’s October 1999 proposal indicated would be collected during the field test. Five agencies in three federal departments are to be involved in collecting data on individuals who are stopped or searched by law enforcement.
The agencies include the Department of Justice’s Drug Enforcement Administration and the Immigration and Naturalization Service; the Department of the Interior’s National Park Service; and the Department of the Treasury’s U.S. Customs Service and uniformed division of the Secret Service. Between six and nine of the following Drug Enforcement Administration Operation Jetway sites are to be included in the field test: Detroit Metropolitan Airport; Newark International Airport; Chicago-O’Hare International Airport; George Bush Intercontinental Airport (Houston); Miami International Airport; Charleston, SC, bus station; Cleveland, OH, train station; Albuquerque, NM, train station; and Sacramento, CA, bus station. The following Immigration and Naturalization Service sites are to be included in the field test: John F. Kennedy International Airport (New York City); George Bush Intercontinental Airport (Houston); Seattle/Tacoma Airport; El Cajon, CA, Station; Yuma, AZ, Station; El Paso, TX, Station; and Del Rio, TX, land-border crossing. The National Park Service was the only agency identified by the Department of the Interior with regular public contact. The following Park Service sites are to be included in the field test: Lake Mead National Recreation Area (Nevada and Arizona); Yosemite National Park (California); Grand Canyon National Park (Arizona); Glen Canyon National Recreation Area (Arizona and Utah); Jefferson National Expansion Memorial (Missouri); Indiana Dunes National Lakeshore (Indiana); Natchez Trace Parkway (Mississippi and Tennessee); Blue Ridge Parkway (Virginia and North Carolina); Valley Forge National Historical Park (Pennsylvania); Delaware Water Gap National Recreation Area (Pennsylvania and New Jersey); and Baltimore-Washington Parkway (Washington, D.C., and Maryland). The Department of the Treasury identified the U.S. Customs Service and the uniformed division of the Secret Service as the agencies with regular public contact.
The following sites are to be included in the field test: Chicago O’Hare International Airport; JFK International Airport (New York City); Newark International Airport; Miami International Airport; and Los Angeles International Airport. The Secret Service uniformed division will collect data in Washington, D.C. Agencies are to collect data describing demographic characteristics, such as gender, race, ethnicity, national origin, and date of birth, based on the agent’s observation or from official documents, such as a driver’s license, when available. All participating agencies are to collect a core set of data elements, but they may collect additional data as they deem appropriate. Following is the core set of data elements contained in the data collection proposal: date of encounter, start time of contact, motorist’s gender, motorist’s race and ethnicity, motorist’s national origin, location of contact, motorist’s suspected criminal activity, reason for contact, external sources of information on person contacted, law enforcement action taken, and end time of contact.

Proposed data elements: race or ethnicity; age; gender; reason for stop/violation; search conducted; who/what searched; legal basis of search; oral warning or citation issued; arrest made; contraband (type, amount); property seized; resistance to arrest; officer use of force; resulting injuries; location and time of stop; investigation led to stop; officer demographics; passenger demographics; auto description and license number; number of individuals stopped for routine traffic violations; and total number of data elements to be collected. (Including nature of offense for which arrest was made, whether felony or misdemeanor, and whether occupants were checked for prior criminal record, outstanding warrants, or other criminal charges.)
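One way to carry the core data elements listed above as a single record is sketched below. The field names are paraphrases of the listed elements, and the types and sample values are assumptions made for illustration.

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class StopRecord:
    """One stop, holding the core data elements named in the proposal."""
    date: str                             # date of encounter
    start_time: str                       # start time of contact
    gender: str
    race_ethnicity: str
    national_origin: str
    location: str
    suspected_activity: Optional[str]     # motorist's suspected criminal activity
    reason_for_contact: str
    external_info_sources: Optional[str]  # external sources of information
    action_taken: str                     # law enforcement action taken
    end_time: str

# Invented sample values for illustration.
r = StopRecord("2000-03-01", "14:05", "M", "hispanic", "United States",
               "I-95, mile 42", None, "speeding", None, "citation", "14:17")
print(asdict(r)["action_taken"])
```

A fixed record type of this kind is one way to address the consistency concern raised earlier: every participating agency would capture the same fields at the same level of specificity.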
The San Diego Police Department initiated its program to collect vehicle stop data as a result of concerns about police racial profiling that were expressed by community groups, such as the Urban League and the National Association for the Advancement of Colored People. On January 1, 2000, San Diego’s police force, with 1,300 patrol and 60 motor officers, is to begin using forms to manually collect stop data. Later, the department plans to use laptop or hand-held computers to collect information that would be sent to a department database via a new wireless system. Initial officer concerns about the data collection effort were addressed through departmental assurances that data would be collected in the aggregate, keeping officers’ and motorists’ names anonymous. In addition, the new data collection system is to track when a stop was initiated for a special assignment, such as when targeting African American gang members. For each stop, officers are to capture the following information: motorist’s race/ethnicity; motorist’s age; motorist’s gender; reason for the stop; whether a search was conducted and who/what was searched; legal basis for the search; whether a consent form was obtained; whether an oral warning or citation was issued; whether an arrest was made; whether property was seized; whether contraband was found; and whether the officer was on special assignment. San Diego police officials said that they plan to enlist the assistance of a statistical expert in analyzing the data. They hope to obtain an initial analysis after the first 6 months of data collection. The department is also working with community-based organizations to address questions they have about the project and how data will be interpreted. San Diego has no plans to validate data submitted by officers. However, officials noted that actions by officers could always be reviewed and scrutinized by their supervisors.
The San Jose Police Department also began its program to collect traffic stop data in response to community concerns about racial profiling by police. According to police officials, the data collection will allow them to learn more about the types of stops being made and to demonstrate the department’s commitment to working with all members of the community. In addition, if analysis of the data reveals a pattern suggesting that race was a factor in motorist stops, then additional training and supervision will be considered to ensure fair treatment for all. San Jose began collecting motorist stop data on June 1, 1999, and plans to continue the effort until May 31, 2000. For each stop, officers are to capture the following information: motorist’s race/ethnicity; motorist’s age; motorist’s gender; reason for the stop; and what action was taken during the stop, for example whether a citation was issued or whether an arrest was made. Identities of the officer and motorist involved in each stop will be kept anonymous and not included in any reports. San Jose officers call in traffic stop information by police radio to a radio dispatcher or by keying the information into a mobile computer terminal located in patrol cars. Dispatchers enter the radioed information into the computer-aided dispatch (CAD) system, and information entered into the mobile terminal is automatically entered into the CAD system. Officers use single digit alpha codes to identify traffic stop data elements. San Jose’s code system has been in place since the 1970s; however, what is new is the addition of three new data elements to the existing code system. In addition to gender and traffic stop disposition, San Jose now collects reason for stop, race, and age information. The hardware and software cost to implement the data collection system was less than $10,000. 
According to a police official, costs were minimal because the department was able to modify its existing automated system, thereby avoiding the need to design a new, potentially costly, one. The department’s Crime Analysis Unit is to compile the statistics and prepare two formal reports: one summarizing results for the first 6 months of data collection and the other summarizing results for the full year. An initial review of the data from July 1, 1999, to September 30, 1999, was released by the San Jose Police Department in December 1999. Aggregate figures indicate that Hispanic citizens in particular were stopped at a rate above their representation in the population. A spokesman for the department stated that the results do not support this conclusion when the figures are disaggregated by police district, although population figures by police district are not available. The official explained that more officers are assigned to areas with higher calls for service, and thus more stops are made in these areas, which tend to have higher minority populations. More analysis is forthcoming. If results suggest that race may be a factor in motorist stops, the department may decide to collect data beyond 1 year. San Jose does not plan to check the validity of the data being submitted by officers, except to see if officers have entered the correct number of codes. However, a police official told us that supervisors have access to data submitted by officers, and they can “stop-in” on an officer call at any time. According to Alameda Police Department officials, most of Alameda County’s police departments began to voluntarily collect motorist stop data in anticipation of state and federal legislation requiring the collection of such data. The Alameda Police Department began collecting motorist stop data on October 1, 1999. Alameda police officials told us that stop data are recorded on written or automated citations, if issued.
For all noncitation stops, such as warnings or arrests, officers use the CAD system to call in each of the required data elements. For each stop, officers are to capture the following information: motorist’s race/ethnicity, motorist’s age, motorist’s gender, reason for the stop, who/what was searched, whether an oral warning was given, and whether an arrest was made. Alameda police officials said that information patrol officers write on citations will be keyed into an automated citations database. In addition, motorcycle officers have hand-held computers that they use to input and store traffic stop information. These data will be printed out and keyed into the automated citations database as well. A separate database is to contain the CAD-collected data for noncitation stops. Although officers’ and motorists’ information will be captured in the data system, the department has no plans to generate any reports from the data collected. According to Alameda police officials, the police department does not plan to analyze, validate, or publish its data. They said that the data would be made available to the public if requested. The Piedmont Police Department, located in Alameda County, began voluntary collection of motorist stop data in anticipation of pending state and federal legislation. Piedmont began collecting motorist stop data on October 1, 1999. According to a Piedmont police official, Piedmont is a small department with 21 officers who record motorist stop data manually. For each traffic stop, the officer is to fill out an index card that contains data fields for recording the motorist’s race, sex, and age. At the bottom of the card, the officer is to record the reason for stop, whether the vehicle was searched, whether an oral warning or citation was issued, and whether an arrest was made. No officer or motorist names will be included on the cards. A department official indicated that she expects a volume of no more than 400 cards per month. 
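A monthly tally of the kind the card fields above support could look like the following. The records and counts are invented for illustration, not actual Piedmont data.

```python
from collections import Counter

# Hypothetical card records with fields like those described above; every
# value is invented for illustration.
cards = [
    {"race": "white", "sex": "M", "age": 34, "reason": "speeding",
     "searched": False, "outcome": "citation"},
    {"race": "black", "sex": "F", "age": 27, "reason": "equipment",
     "searched": True, "outcome": "warning"},
    {"race": "white", "sex": "F", "age": 51, "reason": "speeding",
     "searched": False, "outcome": "warning"},
]

# Tallies of the kind a monthly spreadsheet might report.
stops_by_race = Counter(card["race"] for card in cards)
searches_by_race = Counter(card["race"] for card in cards if card["searched"])

print(stops_by_race)     # Counter({'white': 2, 'black': 1})
print(searches_by_race)  # Counter({'black': 1})
```

As discussed earlier in this report, raw tallies like these only become interpretable when paired with a benchmark for who was at risk of being stopped.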
Information from the cards is to be input into an Excel spreadsheet for analysis, and results are to be tallied on a monthly basis. The department reportedly has no planned effort to validate the information that officers record on the cards. Piedmont police officials said that the watch commander can monitor the activity of officers by listening to interactions between the officers and motorists over the dispatch system. The watch commander can then compare the information overheard on the dispatch system with that recorded on the index cards submitted by the officers.

Laurie E. Ekstrand (202-512-8777)
Evi L. Rezmovic (202-512-8777)

In addition to those named above, David P. Alexander, Carla D. Brown, Ann H. Finley, Monica Kelly, Anne K. Rhodes-Kline, Jan B. Montgomery, and Douglas M. Sloane made key contributions to this report.
Pursuant to a congressional request, GAO provided information on the racial profiling of motorists, focusing on the: (1) findings and methodologies of analyses that have been conducted on racial profiling of motorists; and (2) federal, state, and local data available, or expected to be available, on motorist stops. GAO noted that: (1) GAO found no comprehensive, nationwide source of information that could be used to determine whether race has been a key factor in motorist stops; (2) the available research is limited to five quantitative analyses that contain methodological limitations; (3) they have not provided conclusive empirical data from a social science standpoint to determine the extent to which racial profiling may occur; (4) however, the cumulative results of the analyses indicate that in relation to the populations to which they were compared, African American motorists in particular, and minority motorists in general, were proportionately more likely than whites to be stopped on the roadways studied; (5) data on the relative proportion of minorities stopped on a roadway, however, is only part of the information needed from a social science perspective to assess the degree to which racial profiling may occur; (6) a key limitation of the available analyses is that they did not fully examine whether different groups may have been at different levels of risk for being stopped because they differed in their rates or severity of committing traffic violations; (7) although GAO has no reason to expect that this occurred, such data would help determine whether minority motorists are stopped at the same level that they commit traffic law violations that are likely to prompt stops; (8) several analyses compared the racial composition of stopped motorists against that of a different population, but the validity of these comparison groups was questionable; (9) federal, state, and local agencies are in various stages of gathering data on motorist stops, and these efforts should augment the empirical data available from racial profiling studies; (10) the federal government, which has a limited role in making motorist stops, is undertaking several efforts to collect data; (11) in accordance with a presidential directive, three federal departments are preparing to collect data on the race, ethnicity, and gender of individuals whom they stop or search; (12) state and local agencies are in the best position to provide law enforcement data on motorist stops because most motorist stops are made by state and local law enforcement officers; (13) a number of state legislatures are considering bills to require state or local police to collect race and other data on motorist stops; (14) several local jurisdictions are also making efforts to collect motorist stop data; and (15) whether the efforts that are underway will produce the type and quality of information needed to answer the questions about racial profiling remains to be seen.
The Federal Acquisition Regulation (FAR) Part 15 allows the use of several best value competitive source selection techniques to meet agency needs. Within the best value continuum, DOD may choose an approach that it considers the most advantageous to the government, including the lowest price technically acceptable (LPTA) process and the tradeoff process. DOD may elect to use the LPTA process in acquisitions where the requirement is clearly definable and the risk of unsuccessful contract performance is minimal. In such cases, DOD may determine that cost or price should play a dominant role in source selection. When using the LPTA process, DOD specifies its minimal technical requirements in the solicitation. Once DOD determines that the contractors meet or exceed the technical requirements, no tradeoffs between cost or price and non-cost factors are permitted and the award is made based on the lowest price offered to the government. By contrast, DOD may elect to use a tradeoff process in acquisitions where the requirement is less definitive, more development work is required, or the acquisition has greater performance risk. In these instances, non-cost evaluation factors, such as technical capabilities or past performance, may play a dominant role in the source selection, and tradeoffs among price and non-cost factors allow DOD to accept other than the lowest priced proposal. This report focuses on DOD's use of the tradeoff process, and specifically on cases in which non-cost factors, when combined, were considered more important than cost or price. When using a tradeoff process, the FAR requires that evaluation factors and significant subfactors that affect contract award, and their relative importance, be clearly stated in the solicitation; the solicitation must also state whether all evaluation factors other than cost or price, when combined, are significantly more important than, approximately equal to, or significantly less important than cost or price.
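The difference between the two processes can be sketched as two selection rules: LPTA filters out technically unacceptable offers and takes the lowest price, while a tradeoff scores non-cost factors alongside price and may select a higher-priced offer. The offers, scores, and weighting below are invented for illustration; actual evaluations are qualitative and far more involved:

```python
# Hypothetical offers: price in $M, a pass/fail acceptability finding,
# and a 0-100 non-cost (technical/past performance) rating.
offers = [
    {"name": "A", "price": 10.0, "acceptable": True,  "tech_score": 70},
    {"name": "B", "price": 11.5, "acceptable": True,  "tech_score": 95},
    {"name": "C", "price":  9.0, "acceptable": False, "tech_score": 40},
]

def lpta(offers):
    """LPTA: once offers meet the minimum requirements, only price matters."""
    acceptable = [o for o in offers if o["acceptable"]]
    return min(acceptable, key=lambda o: o["price"])

def tradeoff(offers, tech_weight=0.8):
    """Tradeoff: non-cost factors weighted more heavily than price.
    Price is normalized to 0-100 (cheapest = 100) purely for illustration."""
    lo = min(o["price"] for o in offers)
    hi = max(o["price"] for o in offers)
    def score(o):
        price_score = 100.0 * (hi - o["price"]) / (hi - lo) if hi > lo else 100.0
        return tech_weight * o["tech_score"] + (1 - tech_weight) * price_score
    return max(offers, key=score)

print(lpta(offers)["name"])      # A: cheapest acceptable offer (C is cheaper but fails)
print(tradeoff(offers)["name"])  # B: higher price accepted for a stronger rating
```

The contrast shows why the tradeoff process demands more business judgment: under LPTA the decision is mechanical once acceptability is settled, whereas the tradeoff outcome turns on how non-cost strengths are weighed against price.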
Additionally, the FAR requires that each factor represent key areas of importance and emphasis to be considered in the source selection decision and that the factors support meaningful comparison and discrimination between and among competing proposals. The FAR also requires that the source selection authority document the perceived benefits of the higher priced proposal and the rationale for tradeoffs in the contract file. The resulting source selection decision should be based on a comparative assessment of proposals against all source selection criteria in the solicitation. The decision must also include the rationale for any business judgments and tradeoffs made or relied on by the source selection official, including benefits associated with additional costs. Although the rationale for the source selection decision must be documented, the documentation need not quantify the tradeoffs that led to the decision. In fiscal year 2009, DOD obligated about $380 billion on contracts for goods and services. Our analysis of data reported by DOD to FPDS-NG indicates that $69.9 billion, or 18 percent, of DOD's obligations were made on new contracts competitively awarded in fiscal year 2009 (see figure 1). By contrast, about $176 billion were modifications to or orders issued under contracts that were awarded prior to fiscal year 2009 and $133 billion were awarded non-competitively, which in combination totaled nearly 82 percent of DOD's reported contract obligations in fiscal year 2009. Properly managing the acquisition of goods and services requires an acquisition workforce with the right skills and capabilities. In March 2009, however, we reported that DOD lacked complete information on the skill sets of the current acquisition workforce and whether these skill sets were sufficient to accomplish its missions. In April 2009, the Secretary of Defense announced his intent to grow the acquisition workforce by 15 percent by fiscal year 2015.
As part of this strategy, DOD indicated that it intends to grow its contracting career field by more than 6,400 personnel, an increase of more than 28 percent from fiscal year 2008 staffing levels. DOD relies heavily on the use of the best value process to evaluate offers from potential contractors. DOD chose a best value process for approximately 95 percent of its new, competitively awarded contracts on which it had obligated $25 million or more in fiscal year 2009. Almost half of DOD's contracts—47 percent—were awarded using a tradeoff process in which non-cost evaluation factors, when combined, were more important than price. Figure 2 shows how often DOD used the different best value processes and other source selection approaches. In 69 percent of the contract awards, DOD used the best value tradeoff process. When doing so, it acquired services approximately four times as often as it acquired products. Over half of these procurements were for building or civil engineering construction services, including projects for troop housing, administrative facilities, and hurricane protection systems. Other services procured using the tradeoff process were equipment maintenance and professional management services. For example, in fiscal year 2009, the Army Corps of Engineers awarded a contract worth more than $963 million to construct one of the largest pumping stations in the world, along with floodgates and floodwalls for hurricane protection. Similarly, the Air Force awarded the Contract Field Team program multiple-award contract, with an estimated base value of $2.6 billion for modification, maintenance and repair of systems including aircraft and missile defense for the departments of the Army, Navy, Air Force and several federal agencies.
Small arms and electronic countermeasure equipment were among the products most frequently procured using a tradeoff process, including the contracts for the Squad Automatic Weapons—lightweight, automatic rifles issued to each Army and Marine rifle squad—and an Army contract to procure devices that counteract radio-controlled improvised explosives. Our analysis of selected characteristics of contracts awarded using a best value tradeoff process in fiscal year 2009 is shown in figure 3. (Percentages in the figure may not sum to 100 percent due to rounding; "combination" refers to contracts that allow for orders to be placed using more than one pricing arrangement.) As part of our work, we reviewed 10 IDIQ contracts that had been awarded using a best value tradeoff process and 23 task and delivery orders under these contracts, which had obligations ranging from $11 million to over $319 million. In most cases, DOD did not issue the task or delivery orders we reviewed using a tradeoff process. For example, an Air Force official explained that the initial task orders for the Contract Field Team program, including 13 orders in our sample, were issued under an LPTA process because data needed to assess contractor performance and timeliness were not yet available given the short time between award of the base contract and issuance of the first orders. An Air Force official indicated that once they had obtained sufficient performance data, they intended to issue task orders using a tradeoff process when possible. DOD officials issued six other task orders on the basis of negotiating with a contractor who had been awarded a single award IDIQ contract. The four remaining orders were awarded using a tradeoff process. For example, the Army wanted infrared vision enhancement equipment for nighttime and battlefield use in Iraq and Afghanistan to be delivered as quickly as possible.
Consequently, the Army used a contractor's ability to meet delivery requirements as the principal evaluation factor in selecting the contractor for delivery order award. Some DOD officials noted that the use of various source selection evaluation methods can change over time. For example: Defense Logistics Agency (DLA) officials noted that they have recently transitioned from principally using the tradeoff process to using the LPTA process for most fuel purchases because the majority of their procurements were for a commercial product in relatively stable domestic and international markets. They noted, however, that they still use the tradeoff process in less stable areas, such as Iraq and Afghanistan, where they require more information about vendors' past performance and technical capability when operating in war zones. Conversely, Army Corps officials in New Orleans reported that they have been using the tradeoff process more frequently since the increase in civil works construction projects following Hurricane Katrina. While they typically used sealed bids in the past, they told us that use of the tradeoff process enabled them to better assess contractors' ability to meet safety and schedule requirements. DOD officials tended to use a best value tradeoff process with non-cost factors weighted more important than price when they were willing to accept a higher price if a contractor could demonstrate certain advantages, such as meeting a deadline, demonstrating an understanding of complex technical issues, or proposing an innovative approach. DOD often indicated in tradeoff solicitations that non-cost factors would be significantly more important than price in making award decisions, but our analysis indicated that DOD selected a lower priced proposal among those offerors remaining in the final competition almost as often as it selected a higher technically rated, but more costly, proposal.
Overall, DOD paid a price differential—the difference in the price of the offeror awarded the contract and the price of the offeror next in line for award—in 21 of the 68 contracts in which a price differential was considered. Most differentials were less than 5 percent. While DOD officials told us that the tradeoff process provides an essential tool to obtain desired capabilities, they rely on the case-by-case judgment of contracting and program officials to determine the best acquisition approach suited to program requirements and do not specifically track whether use of the tradeoff process is in DOD’s interest. The FAR and DOD guidance generally provide acquisition staff flexibility to develop evaluation factors that meet their procurement needs and does not indicate which evaluation factors should be most important. The FAR requires that DOD officials consider, among other things, past performance on all negotiated competitive acquisitions exceeding $100,000, but DOD officials have broad discretion in selecting other non- cost factors and their relative importance. The factors are intended to provide meaningful discriminators to evaluate proposals. Army, Navy, and Air Force officials told us that they formed interdisciplinary teams that developed evaluation factors and the factors’ relative importance by consensus. We found that 88 of the 129 contracts we reviewed used a best value tradeoff process. Our analysis shows that DOD considered past performance and technical evaluation factors as the most important among the non-cost factors. Figure 4 shows the five most frequently used non-cost evaluation factors for the 88 contracts in our review in which a tradeoff was conducted and how often the technical and past performance factors were most important among the non-cost factors. 
DOD officials told us that the selection of these evaluation factors and their relative importance was based on specific acquisition requirements, such as the ability to meet production deadlines, ensure compatibility with existing ship and aircraft systems, or provide needed security for delivery of goods in war zones. Our analysis found that DOD considered non-cost factors more important than price in 60 of the 88 contracts awarded using a tradeoff process. The following examples illustrate instances where DOD's acquisition needs led it to make non-cost factors the principal criteria for source selection. Army officials had to quickly meet surge requirements based on a Joint Urgent Operational Needs Statement for roadside bomb detectors, as well as services to provide training and support the system once fielded, and accordingly made technical capability the most important evaluation factor. The acquisition plan specified that deliveries of the critical technology and support services needed to be made within 6 months of contract award. Army officials sought contractors with innovative approaches and a superior understanding of how to counter the threat of roadside bombs in awarding a professional services contract for a range of training programs to be used within the Military Service Combat Training Centers. According to the Army, the selected contractor provided a proposal that was superior in nearly all the technical categories sought by the Army. The Navy considered contractors' proposed technical approach the most important evaluation factor for a helicopter upgrade kit procurement because the design had to be compatible with existing helicopters. In this case, the timing of fleet deployment was also critical, and the Navy sought a contractor that could meet its schedule. DLA used a tradeoff process primarily for commercial fuel contracts in dangerous areas, such as Iraq and Afghanistan, due to the heightened need for contractor reliability in these war zones.
In these situations, DLA officials explained, the tradeoff process allowed them to emphasize security and past performance in their evaluations to mitigate acquisition risks, especially since they do not know the vendors well. Army Corps of Engineers officials who needed to procure construction services for barracks for wounded soldiers made the technical and performance capability factors most important because they needed to be responsive to new schedule and price targets. These officials used the tradeoff process to incentivize timeliness and price reductions, and they were also able to obtain better features, such as more durable materials. DOD officials also told us they used these non-cost factors to encourage contractors to provide innovative solutions to meet DOD's needs. For example, Army officials expressed a need for technological innovation in a solicitation for equipment, field support services, and associated maintenance needed to intercept enemy communications. The Army encouraged the contractors to develop a system that would enable them to upgrade the equipment frequently over the life of the contract. The statement of work clarified that these upgrades would be essential to maintain relevancy on the battlefield and keep pace with technology advancements. Similarly, Marine Corps officials we spoke to about an urban warfare training system told us that they used the tradeoff process to seek innovative designs when awarding a 5-year, $1 billion contract. Marine Corps officials indicated that they had a system that worked, but wanted to push industry to come up with a solution that allowed the Marines to reconfigure building structures more quickly and to provide more realistic and current combat scenarios prior to deployment. In the winning design, the offeror proposed using modular building sets that Marines could assemble more quickly to maximize the training opportunities available in the field.
In contrast, our analysis found that the 28 cases in which DOD officials considered non-cost factors as equal to or less important than price were nearly all related to construction projects. For example, in 15 cases we reviewed, the Army Corps of Engineers considered non-cost factors such as management and technical, experience, and past performance as equal to price to address less complex project requirements such as building a new runway for aircraft and constructing a maintenance facility. In these instances, the contracting and program officials were able to request and review information from potential contractors and conduct a tradeoff process that would not be available through an LPTA approach, but still considered price of equal importance to non-cost factors in the award decision. For the 88 contracts awarded using a best value tradeoff process, DOD considered whether to pay a price differential in 68 contracts. Our analysis indicated that DOD selected the lower priced option nearly as often as it selected the highest rated, but more costly, proposal. In the 18 cases in which DOD officials decided not to pay a price differential, they determined that the lower price outweighed the advantages of the offeror with the higher technical rating. In doing so, DOD officials decided not to pay over $800 million in price differentials. In 29 other cases, DOD awarded contracts to the offerors that had both the lowest price and the highest non-cost factor rating. DOD accepted a higher price in 21 of the 68 contracts in which a price differential was considered, for a combined difference of more than $230 million. Most differentials paid were less than 5 percent above the price submitted by the offeror next in line for award. The largest price differential from the contracts in our sample was 48 percent higher, or roughly $13.6 million more, than the next in line offeror’s price. 
In this case, Marine Corps officials determined that the product—burn resistant clothing for use by soldiers in Iraq—was worth the price difference because it provided substantially greater 2nd and 3rd degree burn protection than the product proposed by the other offeror. Figure 5 shows the frequency with which DOD elected to pay or not pay a price differential for the 68 contracts in which a price differential was considered, as well as the value of the price differentials either paid or not paid. DOD contracting and program officials believed that the use of best value tradeoffs provides DOD an essential tool, allowing them to obtain better insights into contractors' capabilities, their understanding of the government's needs, and the reasonableness of their approach. DOD and military department officials stated that they do not specifically track use of the tradeoff process to determine if DOD's interests are met. Instead, they rely on the judgment of contracting and program officials to select the best acquisition approach suited to program requirements on a case-by-case basis. Further, DOD officials stated that they do not track whether the solicitation approach used correlated with whether the contractor successfully met the terms of the contract and noted that many factors ultimately contribute to the success or failure of an individual acquisition that may not have been foreseeable when awarding the contract. For example, DOD officials noted that DOD would often use a best value tradeoff process to award a contract to develop a major weapons system. As our work has found, DOD often encounters cost increases, schedule delays, and performance shortfalls on its major systems.
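The price-differential arithmetic used throughout this discussion is simple; a sketch using figures consistent with the burn-resistant-clothing example (the next-in-line price is back-computed from the reported $13.6 million and 48 percent, so both prices are approximations):

```python
def price_differential(awarded_price, next_in_line_price):
    """Return the dollar differential and its share of the next-in-line price."""
    diff = awarded_price - next_in_line_price
    return diff, diff / next_in_line_price

# Approximate prices implied by a ~$13.6M differential of ~48 percent.
diff, pct = price_differential(awarded_price=41.9e6, next_in_line_price=28.3e6)
print(f"${diff / 1e6:.1f}M differential, {pct:.0%} above next in line")
```

Note the convention: the percentage is measured against the next-in-line offeror's price, not the awarded price, which is why a $13.6 million gap on a roughly $28 million offer reads as 48 percent.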
DOD officials acknowledged several challenges in using the best value tradeoff process such as the difficulties in developing meaningful evaluation factors, the additional time investment needed to conduct best value procurements, and the business judgment required of acquisition staff when compared to other acquisition approaches. DOD officials also noted that the complexity of the tradeoff process increases the risk of bid protests. To help address source selection challenges, DOD is drafting a source selection guide to improve consistency and standardize source selection procedures for competitively awarded negotiated procurements. DOD officials told us that developing non-cost factors that meaningfully discriminate between offers is a challenging part of the tradeoff process. They noted that as the complexity of the acquisition increases, so does the need for individuals with the expertise to help develop the evaluation factors. For example, Army Corps of Engineers officials told us that the contract for one of the world’s largest flood pump stations required experts with experience in issues ranging from water flow management to real estate to develop evaluation factors. Further, Navy officials explained that while they often use past performance as a non-cost discriminator, it can be difficult to identify differences between contractor proposals because contractors often provide their best performance examples and the government often lacks data to evaluate additional contractor projects. Our past work has also identified governmentwide challenges in obtaining needed past performance information to support contract award decisions. Further, the absence of meaningful non-cost discriminators can result in offerors receiving equal scores on the factors that were identified as being significantly more important than price. As such, the decision may default simply to a consideration of price alone. 
For example, Air Force officials noted that they are considering updating factors used to award task orders under the Contract Field Team contract because contractors tend to receive the highest ratings for each non-cost factor reviewed, so price is typically the only discriminator. DOD officials also noted that using the best value tradeoff process is often far more time-consuming than other approaches. Navy officials told us that the tradeoff process is administratively burdensome and requires a large time investment from program staff, which can make it challenging to keep the same acquisition team together for an entire procurement. During our site visits, many contract and program staff told us that the tradeoff process often takes between 18 and 24 months. In addition, in Afghanistan and Iraq, the challenges of conducting a tradeoff process have contributed to decisions by the CENTCOM Joint Theater Support Contracting Command and the Army Corps of Engineers to discourage its use. For example, recent Army Corps of Engineers projects in Afghanistan have emphasized using simpler, less complex designs or requirements that are more suitable for the use of a lowest price technically acceptable approach. The complex nature of the best value tradeoff process, including decisions on whether to pay a price differential, requires much greater business judgment when compared to other acquisition approaches. DOD officials stated that making tradeoff decisions, particularly when to pay a price differential, is among the most difficult aspects of the tradeoff process, which will become more challenging with less experienced staff coming into the acquisition workforce. DOD officials indicated that DOD intends to increase the size of its contracting career field by more than 6,400 personnel through fiscal year 2015. With the influx of new staff, many of the contracting officers we met with noted challenges in preparing staff to conduct the tradeoff process.
For example, a Navy contracting officer told us that guidance and training only go so far to prepare acquisition staff to conduct best value tradeoff procurements. Instead, acquisition staff need to be involved in a number of best value tradeoff procurements to develop the business judgment necessary to conduct a successful acquisition. DOD officials stated that the complexity of the tradeoff process also increases the risk of bid protests. Of the 88 contracts we reviewed that used a tradeoff process, 15 were the subject of a bid protest to GAO. While most of the protests were denied, DOD took corrective actions in 5 cases, including 4 cases in which DOD terminated the contract or made a new source selection decision when it determined that it failed to adhere to the solicitations' requirements. Some of the services have developed initiatives to address these challenges. For example, the Air Force set up an Acquisition Center of Excellence (ACE) at Tinker Air Force Base, which provides pre-award source selection assistance to contract and program staff. Air Force officials stated that ACE reviews the evaluation factors within individual source selection plans, serves in an advisory capacity on source selection teams, and holds workshops for contracting officers. Similarly, Army officials at Ft. Monmouth's Communications-Electronics Command have developed an online business tool—the ASSIST tool—that shepherds contracting officers through the solicitation process. For example, the tool provides a list of steps that must be completed for best value tradeoff procurements and automatically routes documents through source selection evaluation boards and other participating officials for review, as required. DOD is also drafting a departmentwide source selection guide to improve consistency and standardize source selection procedures for competitively awarded negotiated procurements.
Given the influx of new acquisition staff, DOD officials stated they wanted to develop a more prescriptive guide for best value procurements. While the DOD draft source selection guidance contains information on various aspects of the best value process, such as the source selection decision document, DOD officials told us it does not address price differentials. Numerous DOD officials underscored the importance of training in the use of the best value process, particularly training that addresses the tradeoff decision that acquisition staff must make. For example, one Army Corps of Engineers official told us that source selection officials would benefit by training that contains real life lessons on how other officials have made price differential decisions during the tradeoff process. Similarly, Marine Corps and Army officials told us that while decisions are made on a case-by-case basis, informal rules of thumb regarding price differentials can come into play and indicated that additional guidance or training, especially case studies or scenarios, would be helpful. The Defense Acquisition University (DAU) is responsible for providing training to the DOD acquisition workforce. According to DAU officials, they offer more than 10 courses that contain elements of the best value tradeoff process, but none of the current courses provide case studies or scenarios that focus on reaching price differential decisions during source selection. They noted that once the new source selection guidance is implemented, which is anticipated for January 2011, they plan to augment existing contracting courses to reflect the new guidance. The best value tradeoff process underlies the vast majority of DOD competitively awarded contracts, and effective use of this process hinges on making sound tradeoffs between price and non-cost factors. 
By focusing on non-cost factors, DOD anticipates that it will obtain technical solutions that are innovative and address complex and time-sensitive program requirements. Applying a tradeoff process, however, does not guarantee successful acquisitions, nor is it without other challenges. In particular, using a tradeoff process can be more complex and take more time than other source selection methods, and requires that acquisition staff have proper guidance, needed skills, and sound business judgment. With the anticipated influx of more than 6,400 DOD contracting personnel over the next few years, providing a firm foundation for use of the tradeoff process is essential. While DOD and the military departments have taken steps to improve source selection procedures, acquisition personnel noted a lack of training to assist them in deciding whether or not a price differential is warranted when making tradeoff decisions. For example, while DOD’s new source selection guide provides insights on the source selection process, it is silent on how to reach decisions on when to pay a price differential, as is DOD’s current training curriculum. DOD has an opportunity as it updates its training curriculum to provide acquisition staff with better insights using real life examples on reaching tradeoff decisions. Taking this step can help DOD minimize the risk of paying a price differential when not warranted or losing the benefit of a technically superior solution. To help DOD effectively employ the best value tradeoff process, we recommend that the Secretary of Defense direct the Director of Defense Procurement and Acquisition Policy to work with the Defense Acquisition University to develop training elements, such as case studies or scenarios that focus on reaching tradeoff decisions, including consideration of price differentials, as it updates the source selection curriculum. DOD provided written comments on a draft of this report. 
DOD concurred with our recommendation and intends to request the Panel on Contracting Integrity—comprised of senior DOD leaders tasked, in part, to help improve DOD's performance—to assist the Defense Acquisition University in developing training case studies and scenarios that focus on reaching tradeoff decisions. DOD’s letter is reprinted in appendix II. We are sending copies of this report to interested congressional committees and the Secretary of Defense. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions on the matters covered in this report, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Section 845 of the National Defense Authorization Act for Fiscal Year 2010 directed GAO to report on the Department of Defense’s (DOD) use of the best value tradeoff process, and specifically for cases in which DOD evaluated contractors’ proposals on factors other than cost or price, if these non-cost factors, when combined, were considered more important than cost or price. To respond to the mandate, we determined (1) how often and for what types of contracts DOD used the best value tradeoff process; (2) why and how DOD used the best value tradeoff process; and (3) what challenges, if any, DOD faces in using the best value tradeoff process. To determine how often and for what types of contracts DOD used the best value tradeoff process, we used data from the Federal Procurement Data System-Next Generation (FPDS-NG) as of January 2010 to identify a population of contracts based on the following criteria: (1) newly awarded by DOD in fiscal year 2009; (2) competitively awarded, and (3) had obligations of $25 million or more in fiscal year 2009. 
We established the $25 million threshold because the Defense Federal Acquisition Regulation Supplement (DFARS) requires written acquisition plans, which contain information about the source selection approach, for contracts with total estimated costs of $25 million or more in any fiscal year. This analysis identified 363 contracts. From this population, we selected a probability sample of 160 contracts, including 60 indefinite delivery contracts, and reviewed associated solicitations, source selection decision documents, and other contract documents to determine the solicitation approach DOD used. We verified the obligations and contract award fields in FPDS-NG with contract data to ensure that the contracts within our sample were within scope. Thirty-one contracts from our initial sample of 160 contracts were outside the scope of our review because they were incorrectly coded in key parameters, such as being coded as competitively awarded when they were not or having misreported amounts of obligations made on the contract or task order. We excluded these contracts from our sample and determined that FPDS-NG was sufficiently reliable for the purposes of our review after adjusting for these errors. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 8 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, percentage estimates based on our sample have 95 percent confidence intervals that are within plus or minus 8 percentage points of the estimate itself. Confidence intervals for other numeric estimates are reported along with the estimate itself. 
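The sampling precision described above can be illustrated with a standard formula. The sketch below is our own illustration, not GAO's actual estimator: it computes the half-width of a 95 percent confidence interval for a proportion, using the report's figures of 129 in-scope sample contracts and a population of 363, with a finite population correction; the proportion of 0.5 is a worst-case assumption.

```python
import math

def ci_half_width_95(p, n, N):
    """Half-width of a 95% confidence interval for a proportion p,
    estimated from a simple random sample of n drawn from a
    population of N, with a finite population correction."""
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return 1.96 * se

# Worst-case proportion (p = 0.5) maximizes the interval width.
half_width = ci_half_width_95(0.5, 129, 363)
print(round(half_width * 100, 1))  # about 6.9 percentage points
```

Under these assumptions the half-width stays within the plus or minus 8 percentage points the report cites; GAO's actual design, which included indefinite delivery contracts, may have used a different estimator.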
Table 1 summarizes the estimated percentage of contracts that used each source selection approach we reviewed. Based on our analysis of the remaining 129 sample contracts, we estimate that the total number of best value tradeoff, lowest price technically acceptable, or sealed bid award decisions (in-scope contracts) in the full population of interest was about 293. We categorized contracts that used a best value tradeoff process based on the relative importance placed on price. In addition, we determined contract type, the type of procurement (product versus services), and the type of product or service for our sample contracts using FPDS-NG data and verified this information with the contract documents. To determine why and how DOD used the best value tradeoff process and, in particular, when non-cost factors were considered more important than price, we obtained and reviewed DOD and service level acquisition guidance related to source selection policies and procedures that describe how and when the tradeoff process may be used, including those used for issuing task orders. In addition, for each of the contracts within our sample, we obtained contract documentation including the acquisition plan, solicitation, and source selection decision memorandum and reviewed them in preparation for interviews with DOD officials. In several cases, the solicitation was unclear as to which type of tradeoff process was used. In these cases, we relied on the source selection decision document to categorize the tradeoff process used. We judgmentally selected buying activities to visit based on factors including the number of contracts awarded on a best value tradeoff basis, contract type, and goods or services procured. Buying activities included at least one command from each military department as well as a defense agency. We reviewed 27 contracts and 23 task orders through our site visits. 
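The estimate of about 293 in-scope contracts cited above is consistent with simple proportional scaling of the sample's in-scope fraction to the full population. This is a sketch of the arithmetic only; GAO's estimator for a sample that included an indefinite delivery stratum may differ.

```python
# Figures from the report: 363 contracts in the population;
# 129 of the 160 sampled contracts were in scope.
population = 363
sample_size = 160
in_scope_in_sample = 129

estimated_in_scope = population * in_scope_in_sample / sample_size
print(round(estimated_in_scope))  # about 293, matching the report
```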
Specifically, we judgmentally selected 23 of 48 task orders for review by compiling all task orders issued on indefinite delivery/indefinite quantity contracts obligating over $10 million that were administered by the officials at the sites we visited. We chose this dollar threshold to exceed the FAR requirement to provide fair opportunity notices for task orders valued at $5 million or more. Results from these selected contracts or task orders cannot be generalized beyond the specific contracts or task orders selected. During the course of our review, we interviewed officials from the following commands: Department of the Army, U.S. Army Corps of Engineers, New Orleans District Office, Louisiana, and Afghanistan Engineering District, Kabul and Kandahar, Afghanistan; Department of the Army, Armament Research, Development and Engineering Center, Picatinny Arsenal, New Jersey; Department of the Army, Communications–Electronics Command, Fort Monmouth, New Jersey; Department of the Navy, Marine Corps Combat Development Command; Department of the Navy, Naval Air Systems Command, Patuxent River, Maryland, and Lakehurst, New Jersey; Department of the Air Force, Air Force Materiel Command, Tinker Air Force Base, Oklahoma; Defense Logistics Agency Energy, Ft. Belvoir, Virginia; and Joint Theater Support Contracting Command, U.S. Central Command, Kabul and Kandahar, Afghanistan, and Baghdad, Iraq. We interviewed DOD acquisition and contracting officials to identify their rationale for the selected source selection approach (e.g., the thought process behind why a best value approach was chosen over other approaches). For award decisions that used a best value tradeoff process, we discussed why the evaluation factors were chosen and how their relative weights were assigned. We also interviewed officials about the process used and the underlying rationale when issuing selected task orders. 
We also interviewed officials to determine what the expected outcomes were from using the best value tradeoff process. We reviewed applicable DOD source selection decision documents and related memoranda to determine how often DOD paid a price differential, the amount of the price differential, and the reasons given for the decision to pay a higher price. We defined a price differential as a positive difference in price between the offeror who received the award and the offeror next in line for award. To determine what challenges, if any, DOD faces in using the best value tradeoff process, we reviewed DOD guidance and interviewed officials from Defense Procurement and Acquisition Policy, the military departments, and defense agencies. We conducted this performance audit from March 2010 through October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Timothy DiNapoli, Assistant Director; William Russell, Katheryn Hubbell, Paige Muegenburg, Jodi Munson, Anna Russell, Sylvia Schatz, Roxanna Sun, and Bob Swierczek made key contributions to this report. | The Department of Defense (DOD) obligated about $380 billion in fiscal year 2009 to acquire products and services. One approach DOD can take to evaluate offerors' proposals is the best value tradeoff process in which the relative importance of price varies compared to non-cost factors. The National Defense Authorization Act for Fiscal Year 2010 required GAO to review DOD's use of the best value tradeoff process, specifically when non-cost factors were more important than price. 
In response, GAO determined (1) how often and for what types of contracts DOD used the best value tradeoff process; (2) why and how DOD used such an approach; and (3) challenges, if any, DOD faces in using the best value tradeoff process. GAO identified a probability sample of new, competitively awarded fiscal year 2009 contracts in which DOD obligated $25 million or more. GAO reviewed guidance, solicitations, source selection decisions, and other documents for 129 contracts and interviewed DOD contracting and program staff about the use of the best value tradeoff process. In fiscal year 2009, DOD used best value processes for approximately 95 percent of its new, competitively awarded contracts in which $25 million or more was obligated. Almost half of DOD's contracts--47 percent--were awarded using a tradeoff process in which non-cost evaluation factors, when combined, were more important than price. DOD used best value tradeoffs principally to acquire services, such as construction of troop housing, as well as for professional management services. DOD used the best value tradeoff process in 88 of the 129 contracts GAO reviewed. For 60 of the 88 contracts, DOD weighted non-cost factors as more important than price. In these cases, DOD was willing to pay more for a contractor that demonstrated it understood complex technical issues more thoroughly, could provide a needed good or service to meet deadlines, or had a proven track record in successfully delivering products or services of a similar nature. In making tradeoff decisions, GAO found that DOD selected a lower priced proposal nearly as often as it selected a higher technically rated, but more costly proposal. 
Overall, GAO found that DOD paid a combined total of more than $230 million in price differentials--the difference in price between the awardee and the offeror next in line for award--on 21 contracts, but chose not to pay more than $800 million in proposed costs by selecting a lower priced offer over a higher technically rated offer in 18 contracts. DOD does not track whether the use of best value tradeoff processes correlates with the contractor successfully meeting the terms of the contract and noted that many factors ultimately contribute to an acquisition's success or failure. DOD officials identified several challenges in using the best value tradeoff process, including the difficulty in determining meaningful evaluation factors and the business judgment of acquisition staff required. DOD officials also noted that the complexity of the tradeoff process increases the risk of bid protests. For example, GAO found that 15 of the 88 contracts awarded using a best value tradeoff process reviewed were protested to GAO, resulting in 4 cases in which DOD terminated the contract or made a new source selection decision when DOD determined that it failed to adhere to the solicitations' requirements. Such concerns are heightened given the expected influx of more than 6,400 new contracting personnel over the next few years. According to DOD officials, making sound tradeoff decisions, and in particular, deciding whether or not a price differential is warranted, is one of the most difficult aspects of using a best value tradeoff process. DOD is developing a new departmentwide source selection guide and intends to subsequently revise its training curriculum, but neither the guide nor DOD's current training curriculum provides agency personnel with information on assessing price differentials when performing tradeoff analyses. 
GAO recommends that to help DOD effectively employ best value tradeoff processes, DOD develop training elements, such as case studies, that focus on reaching tradeoff decisions, as it updates its training curriculum. DOD concurred with this recommendation. |
The introduction of significantly redesigned currency began in March 1996, when the newly designed $100 note entered circulation. Redesigned lower denomination notes were expected to be introduced into circulation at subsequent 9- to 12-month intervals, but the introduction of the $50 note has been delayed because of efforts to make the denomination easier to read by the visually impaired. The note is now expected to be introduced later this month. The redesigned currency includes several new security features. Some of these features are overt; that is, they are designed to be recognized by the public. The other features are covert; that is, they are intended to be used by the banking system. One of the overt security features on the $50 note is a set of concentric fine lines printed in the oval shape that is behind Ulysses S. Grant’s portrait on the front of the note. During the initial production of the newly designed $50 notes, BEP detected flaws in some of the notes, specifically a gap, or white space, between some of the concentric lines surrounding Grant’s portrait. Neither BEP nor the Federal Reserve knows specifically how many flawed notes are among the 217.6 million redesigned notes produced before September 8, 1997. Although both BEP and the Federal Reserve have done some inspections to identify flawed notes, neither has done a complete count or a statistically projectable sample. BEP said it is not prepared to estimate the number of flawed notes without more thorough sampling, which it plans to do. In Philadelphia, Federal Reserve officials looked at 200 of the $50 notes and estimated that 50 to 60 percent were flawed. On September 30, 1997, we and Federal Reserve officials jointly reviewed judgmentally selected samples of newly redesigned $50 notes that had been shipped to the Philadelphia and Richmond Federal Reserve banks. 
We jointly determined that 56 percent of the 1,200 notes we reviewed that were produced before September 8, 1997, and were shipped to Philadelphia did not meet the Federal Reserve’s standards for circulation concerning the clarity of the concentric lines surrounding President Grant’s portrait. At Richmond, we jointly inspected 1,000 $50 notes produced before September 8, 1997, and found that 45 percent contained similar flaws. We also jointly inspected 1,000 $50 notes at Richmond that were printed after September 7, 1997, and found that 2 percent were flawed. On September 30, 1997, we independently inspected 1,664 $50 notes at BEP headquarters that were printed after September 7, 1997, and found that 12 percent were flawed. A better estimate of the number of flawed notes at BEP and the Federal Reserve banks cannot be made until more rigorous and scientific sampling procedures are used for the note inspections. The flaws consisted of one or more concentric lines not printing completely. These gaps were inconsistently distributed throughout the notes, thus making them difficult to correct. BEP viewed the problem as a start-up issue to be expected with production of a completely new note design. BEP officials told us that although they viewed the new notes as acceptable for distribution to the Federal Reserve and for circulation, they believed that the quality of the concentric lines needed to be improved. Accordingly, they made a number of changes in their production, including adjustments to printing presses, changes in the ink, and changes to the printing plates used to create the face of the new note. For example, BEP made modifications to the printing plates by cutting small horizontal grooves into the concentric lines, called dams, that permit ink to be deposited more successfully on the paper. According to BEP, these changes reduced, but did not eliminate, the concentric line gaps in some of the $50 notes. 
In September, Federal Reserve and BEP officials, at a regularly scheduled meeting, discussed the importance of note quality. Immediately after that meeting, BEP invited the Federal Reserve to view some of the new $50 notes that it had produced to get its customer’s input on the quality of the notes. According to Federal Reserve officials, this was the first time they were informed of the problems with the concentric lines surrounding President Grant’s portrait. BEP officials said they did not tell the Federal Reserve about the problem earlier because they believed the notes were of acceptable quality and that the production problems were typical of those that could be expected in producing a newly designed note. According to Federal Reserve and BEP officials, the printing problems with the concentric lines did not appear in test notes that BEP supplied to the Federal Reserve prior to full scale production of the notes. BEP officials stated that printing difficulties often appear only in the production process. They said that test currency is produced under more carefully controlled conditions and is not produced at the same press speeds and volumes. After viewing the notes, Federal Reserve officials said they wanted to inspect the concentric lines behind the portrait to be certain that they are clear. In mid-September, Federal Reserve officials met with BEP, U.S. Secret Service, and other Treasury representatives who agreed with the Federal Reserve’s concerns and also agreed on quality standards for determining note acceptability. These standards were then programmed into BEP’s automated currency inspection equipment. BEP and the Federal Reserve refer to notes produced before the dams were cut as phase I notes, and those produced after the dams were cut as phase II notes. They refer to notes produced after BEP’s currency inspection devices were recalibrated as phase III notes. 
BEP and Federal Reserve officials believe phase II notes are of higher quality than phase I notes, and that the quality of phase III notes is higher than that of both phase I and II notes. Beginning in June 1997, BEP produced a total of 160 million phase I notes, of which about 59.5 million were shipped to 16 Federal Reserve banks and 100.5 million are stored at BEP headquarters. Beginning around August 1, 1997, BEP produced 57.6 million phase II notes, all of which are stored at BEP. Production of phase III notes began around September 8, 1997, and as of September 24, 1997, BEP reported having shipped about 11.7 million of the phase III notes to Federal Reserve banks and storing about 4.3 million of the phase III $50 notes in its inventory. Secret Service, Federal Reserve, and BEP officials said the flaws in the notes did not increase the risk of counterfeiting or further delay the notes’ introduction. According to a Secret Service official, issuing the flawed notes would not make them more susceptible to counterfeiting or impede counterfeiting detection. However, the official noted that the flaw in the concentric lines could result in increased public questions about the note’s authenticity. Federal Reserve officials voiced similar concerns, particularly in regard to foreign countries where U.S. currency is often more closely scrutinized. Much of their concern stemmed from the emphasis given to the concentric lines in the promotional material being disseminated on the new $50 note. Federal Reserve and BEP officials stated that the flawed notes would not cause a further delay in the issuance of the new note to the public because the $50 note represents a relatively small portion of BEP’s total production, and it does not take long for it to make enough notes to meet the public demand. As of September 29, 1997, Federal Reserve officials told us that they had not decided what to do with the flawed notes but expect to decide by the end of the year. 
According to Federal Reserve officials, there is no need to rush to make a decision because the newly designed $50 notes are not scheduled to be released for circulation until October 27, 1997, and they believe that they will have enough of the good notes to put into circulation. The Federal Reserve has identified three options that it is considering: destroy all 217.6 million phase I and phase II $50 notes and replace them; inspect the 217.6 million phase I and phase II $50 notes and destroy and replace only those notes that are found to be flawed; or circulate the 217.6 million phase I and phase II $50 notes after the higher quality new notes have been in circulation for a few years. Before decisions can be made on which option to select, Federal Reserve officials described several steps that they planned to take. First, they said they would determine costs of developing and installing sensors on their currency processing equipment to inspect the phase I and phase II $50 notes. The Federal Reserve said that although its equipment—normally used to inspect recirculating notes—has the capability to check certain aspects of individual notes, it does not have the sensors needed to detect the gaps in the background of the portrait. According to BEP, its equipment can detect the gaps in the background of the portrait but only in its normal production format—that is, in sheets of 32 notes. Since all the phase I and phase II notes have been cut into individual notes, BEP’s detection equipment cannot be used for such an inspection. Thus, sensors that have the capability to detect such gaps would need to be developed by a vendor and then purchased by the Federal Reserve. The second planned step would be to determine how much it would cost to identify the acceptable notes and reprint only those that were unacceptable. 
The third planned step would entail the Federal Reserve and BEP conducting scientific samples of the entire inventory to identify what portion is acceptable and unacceptable. Finally, the fourth step would be to use the data obtained in the first three steps to determine the most cost beneficial option between destroying and replacing all the notes or identifying and destroying and replacing only the flawed notes. According to Federal Reserve officials, they do not believe that there is a high probability that they would choose the third option of distributing all 217.6 million phase I and phase II notes at a later time. The Federal Reserve has not estimated the complete costs of reproducing the flawed $50 notes. As an example to provide perspective on the costs of the options under consideration, according to BEP and Federal Reserve officials, if the Federal Reserve were to decide to destroy all 217.6 million of the $50 notes and replace them, it would cost approximately $7.2 million for printing replacement notes plus an additional $360,000 to destroy the notes at the Federal Reserve banks and BEP and to ship the replacement notes. This amount is about $1 million less than the $8.7 million the Federal Reserve initially paid for the phase I and phase II $50 notes because the replacement production costs do not include charges for capital equipment and fixed costs that were already included in the charges for the original production runs. The Federal Reserve was not able to estimate the costs associated with option two because the costs of obtaining and installing the sensor equipment are not known at this time; nor does it yet know what proportion of the 217.6 million notes are acceptable or what the costs of inspecting them would be. According to the Federal Reserve, the costs associated with the third option would probably be minimal and would be mostly storage costs. 
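The cost comparison for the first option can be checked with simple arithmetic; the figures below are the report's approximations, in millions of dollars.

```python
# Option one: destroy all 217.6 million phase I and II notes and replace them.
reprint_cost = 7.2       # printing replacement notes
destroy_and_ship = 0.36  # destroying the notes and shipping replacements
replacement_total = reprint_cost + destroy_and_ship

original_payment = 8.7   # what the Federal Reserve initially paid
difference = original_payment - replacement_total

print(round(replacement_total, 2))  # 7.56
print(round(difference, 2))         # 1.14, i.e., "about $1 million less"
```

The replacement total is lower than the original payment because, as the report notes, it excludes capital equipment and fixed costs already charged on the original production runs.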
All costs incurred by the Federal Reserve due to the replacement of the flawed notes would result in a reduction in the amount of money the Federal Reserve returns to the Department of the Treasury after it subtracts its operating expenses from its revenues. Mr. Chairman, while our review of this matter has not been extensive, we have made two observations that should prove useful in the future production of redesigned currency. These observations relate to (1) the Federal Reserve’s and other stakeholders’ involvement in inspecting BEP production and limiting the number of notes produced until production problems are resolved and (2) resolving the problems with printing fine concentric lines before new denominations are produced. In the past, the Federal Reserve has not been closely involved in monitoring currency production, primarily because BEP has generally produced high quality currency; the currency designs have not significantly changed for many years; and BEP experienced no major problems with the printing of the newly designed $100 note last year. Federal Reserve officials said that they are now reassessing their approach to monitoring the quality of currency production. Both BEP and Federal Reserve officials said that they agree that early inspection of BEP production would be worthwhile after the experience with the production of the newly designed $50 note, but said they have not yet agreed on the specifics of the Federal Reserve’s early involvement. Once BEP and the Federal Reserve reach agreement on the details, we believe it would be helpful for them to formalize their agreement in writing. In addition, BEP and the Federal Reserve may wish to include Secret Service and other Treasury officials in their discussions and agreements. Based on the problems encountered with the newly designed $50 note, BEP and the Federal Reserve might also want to limit the production of newly designed currency until all production problems are resolved and to include such a limitation in their written agreement. 
Our second observation deals with the resolution of problems in printing concentric fine lines surrounding the portrait on denominations lower than the $50 note, which the Treasury Department and the Federal Reserve plan to introduce at 9- to 12-month intervals following the introduction of the $50 note. According to BEP, the fine concentric line design on the face of the new $50 note poses particularly difficult challenges to print clearly, and the fine concentric lines will be somewhat different for each denomination because the configuration of the portraits will vary. For example, BEP officials said that printing the fine concentric lines on the newly designed $100 note, which has a portrait of Benjamin Franklin with long hair taking up a large area of the oval surrounding Franklin’s portrait, has not been as difficult as printing the lines on the newly designed $50 note, which has a portrait of Ulysses S. Grant with relatively shorter hair taking up a smaller area of the surrounding oval. It may prove helpful for BEP to explore whether design changes would lessen the chances of production problems for future denominations. During our very limited observations of $50 note production this week, we observed some imperfect concentric line backgrounds, but it is important to note that our sampling was not statistically representative and we cannot make any projections on the overall rate of imperfection. 
In view of the experience with the early production of the redesigned $50 note, we recommend that the Secretary of the Treasury and the Board of Governors of the Federal Reserve: Formalize an agreement to have Federal Reserve, BEP, Secret Service, and other relevant Treasury officials involved early in the currency production process for future redesigned notes to inspect production and agree on an acceptable level of quality; Limit initial production of newly designed currency to the number that would be necessary to provide reasonable assurance that all production problems are resolved, and include such a limitation in their written agreement; and Explore the feasibility of design changes that might lessen the potential for production problems for future redesigned denominations. Mr. Chairman, that concludes my prepared statement and I will be happy to answer any questions that the Subcommittee may have. | GAO discussed issues related to the Treasury's recent production of flawed, newly redesigned $50 notes. 
GAO noted that: (1) neither the Bureau of Engraving and Printing (BEP) nor the Federal Reserve knows specifically how many flawed notes are among the 217.6 million redesigned notes produced before September 8, 1997; (2) BEP views the problem as a start-up issue to be expected with production of a completely new note design; and (3) Federal Reserve officials have not decided what to do with the flawed notes, but have identified three options: (a) destroy all 217.6 million redesigned notes and replace them; (b) inspect the 217.6 million notes and destroy and replace only those notes that are found to be flawed; or (c) circulate the 217.6 million notes after the higher quality new notes have been in circulation for a few years. |
Runway safety is a longstanding major aviation safety concern; prevention of runway incursions, which are precursors to aviation accidents, has been on NTSB’s list of most wanted transportation improvements since 1990 because runway collisions can be catastrophic. Recent data indicate that runway incursions are growing and may become even more numerous as the volume of air traffic increases. The number and rate of incursions declined from a peak in fiscal year 2001 and remained relatively constant for the next 5 years. However, from fiscal years 2006 through 2007, the number and rate of incursions increased by 12 percent and nearly regained the 2001 peak (see fig. 1). Additionally, data for the first quarter of fiscal year 2008 show that the number of incursions increased substantially after FAA began using a definition of incursions developed by the International Civil Aviation Organization (ICAO), a United Nations specialized agency. Using the ICAO definition, FAA is now counting some incidents as incursions that had been formerly classified as surface incidents. During the first quarter of fiscal year 2008, using the ICAO definition, FAA counted 230 incursions. If FAA had continued to use its previous definition, it would have counted 94 incursions. According to an FAA official, by adopting the ICAO definition, FAA expects to report about 900 to 1,000 incursions this year. Fig. 2 shows the number and rate of incursions, by quarter, during fiscal year 2007 and during the first quarter of fiscal year 2008. Moreover, the number and rate of serious incursions—where collisions were narrowly or barely avoided—increased substantially during the first quarter of fiscal year 2008, compared to the same quarter in fiscal year 2007. During the first quarter of fiscal year 2008, 10 serious incursions occurred, compared to 2 serious incursions during the first quarter of fiscal year 2007. (See fig. 3.) Most runway incursions involve general aviation aircraft. 
According to FAA, 72 percent of incursions from fiscal years 2003 through 2006 involved at least one general aviation aircraft. However, about one-third of the most serious incursions from fiscal years 2002 through 2007—about 9 per year—involved at least one commercial aircraft that can carry many passengers. That number includes two serious incursions that occurred just two months ago, in December 2007. (See table 3 in the appendix for additional information on recent serious incursions.) Figure 4 shows the number of serious incursions involving commercial aircraft from fiscal years 2001 through 2007. In the United States, most incursions have occurred at major commercial airports, where the volume of traffic is greater. Los Angeles International Airport and Chicago O’Hare International Airport had the greatest number of runway incursions from fiscal years 2001 through 2007, as shown in fig. 5. The primary causes of incursions, as cited by experts we surveyed and some airport officials, include human factors issues, such as miscommunication between air traffic controllers and pilots, a lack of situational awareness on the airfield by pilots, and performance and judgment errors by air traffic controllers and pilots. According to FAA, 57 percent of incursions during fiscal year 2007 were caused by pilot errors, 28 percent were caused by air traffic controller errors, and 15 percent were caused by vehicle operator or pedestrian errors (see fig. 6). FAA, airports, and airlines have taken steps to address runway safety, but the lack of leadership and coordination, technology challenges, lack of data, and human factors-related issues impede further progress. To improve runway safety, FAA has deployed and tested technology designed to prevent runway collisions; promoted changes in airport layout, markings, signage, and lighting; and provided training for pilots and air traffic controllers. 
In addition, in August 2007, following several serious incursions, FAA met with aviation community stakeholders and agreed on a short-term plan to improve runway safety. In January 2008, FAA reported on the status of those actions, which included accelerating the upgrading of airport markings at medium and large airports, which were originally required to be completed by June 30, 2008; upgrading markings at smaller commercial airports, which had not been previously required; completing a runway safety review of 20 airports that were selected on the basis of runway incident data; and requiring that nonairport employees, such as airline mechanics, receive recurrent driver training at 385 airports. According to FAA, since the August 2007 meeting, all 112 active air carriers have reported that they are (1) providing pilots with simulator or other training that incorporates scenarios from aircraft pushback through taxi, and (2) reviewing procedures to identify and develop a plan to address elements that contribute to pilot distraction while taxiing. FAA also indicated that it had completed an analysis of air traffic control procedures pertaining to taxi clearances and found that more explicit taxi instructions are needed, and that it had signed a partnership agreement with the National Air Traffic Controllers Association to create a voluntary safety reporting system for air traffic controllers. In our November 2007 report, we found that FAA’s Office of Runway Safety had not carried out its leadership role to coordinate and monitor the agency’s runway safety efforts. For the previous 2 years, the office did not have a permanent director, and staffing levels declined. FAA took a positive step by hiring a permanent director at the Senior Executive Service level for the office in August 2007. 
The new director has indicated he is considering several initiatives, including establishing a joint FAA-industry working group to analyze the causes of incursions and track runway safety improvements. In our November 2007 report, we also found that FAA had not updated its national runway safety plan since 2002, despite agency policy that such a plan be prepared every 2 to 3 years. The lack of an updated plan resulted in uncoordinated runway safety efforts by individual FAA offices. For example, in the absence of an updated national runway plan, each FAA office is expected to separately include its runway safety initiatives in its own business plan. However, this practice does not provide the same national focus and emphasis on runway safety that a national plan provides. Furthermore, not all offices with runway safety responsibilities included efforts to reduce incursions in their business plans. Until the national runway safety plan is updated, the agency lacks a comprehensive, coordinated strategy to provide a sustained level of attention to improving runway safety. The deployment of surface surveillance technology to airports is a major part of FAA’s strategy to improve runway safety, but it has presented challenges. To provide ground surveillance, FAA has deployed the Airport Movement Area Safety System (AMASS), which uses the Airport Surface Detection Equipment-3 (ASDE-3) radar, at 34 of the nation’s busiest airports and is deploying an updated system, ASDE-X, at 35 major airports. The current deployment schedule will result in a total of 44 airports having AMASS and/or ASDE-X (see table 5 in the appendix). Both systems are designed to provide controllers with alerts when they detect a possible collision on the ground. As of January 2008, ASDE-X was commissioned at 11 of the 35 airports scheduled to receive it. 
FAA is also testing runway status lights at the Dallas-Ft. Worth International Airport and the San Diego International Airport; these lights, embedded in the runway, give pilots a visible warning when a runway is not clear to enter, cross, or depart on. The agency made an initial investment decision last year to deploy the system at 19 airports, starting in November 2009, and is planning to make a final investment decision in June 2008. In addition, FAA is testing the Final Approach Runway Occupancy Signal at the Long Beach-Daugherty Field airport in California, which activates a flashing light visible to aircraft on approach as a warning to pilots when a runway is occupied and hazardous for landing. However, FAA risks not meeting its current ASDE-X cost and schedule plans, which have been revised twice since 2001, and the system is experiencing operational difficulties with its alerting function. Although it took about 4 years for ASDE-X to be commissioned at 11 airports, FAA plans to deploy the system at the remaining 24 airports by 2010. In addition, not all 11 ASDE-X airports have key safety features of the system. For example, as of January 2008, two ASDE-X airports did not have safety logic, which generates a visible and audible alert to an air traffic controller regarding a potential runway collision. Furthermore, the ASDE-X airports are experiencing problems with false alerts, which occur when the system incorrectly predicts an impending collision, and false targets, which occur when the system incorrectly identifies something on the airfield as an aircraft or vehicle and could generate a false alert. Moreover, most airports in the United States have no runway safety technology to supplement a controller’s vision of the airfield and will not have such technology even after FAA completes its plan to deploy ASDE-X at 35 major airports. 
While FAA is testing additional technology to prevent runway collisions, such as the Final Approach Runway Occupancy Signal, these systems are years away from deployment. Another technology, runway status lights, has had positive preliminary test evaluations but needs a surface surveillance system such as ASDE-3/AMASS or ASDE-X to operate. In addition, FAA is still testing a low-cost surface surveillance system that is already being used at 44 airports outside of the United States. Furthermore, systems that provide direct collision warnings to flight crews, which NTSB and experts have recommended, are still being developed. FAA lacks reliable runway safety data and the mechanisms to ensure that the data are complete. Although FAA collects information about runway incursions and classifies their severity, its tabulation of the number of incursions does not reflect the actual number of incidents that occur. FAA counts only incursions that occur at airports with air traffic control towers, so the actual number of incursions, which includes those that occurred at airports without air traffic control towers, is higher than FAA reports. While the change in the definition of incursions that FAA adopted at the beginning of fiscal year 2008 will increase the number of incursions counted, it will not address this problem. In addition, an internal agency audit of 2006 incursion data questioned the accuracy of some of the incursion severity classifications. FAA plans to start a nonpunitive, confidential, voluntary reporting program for air traffic controllers similar to a program that FAA has already established for pilots and others in the aviation community. The new program will enable air traffic controllers to report anything that they perceive could contribute to safety risks in the national airspace system. 
The benefit of such a program is that the information obtained might not otherwise be reported and could increase the amount of data collected on the causes and circumstances of runway incursions. However, FAA has not indicated when such a program would be implemented. FAA has also taken some steps to address human factors issues through educational initiatives, such as developing simulated recreations of actual incursions to enhance air traffic controller training. However, air traffic controller fatigue, which may result from regularly working overtime, continues to be a human factors issue affecting runway safety. NTSB, which investigates transportation accidents, has identified four instances from 2001 through 2006 when tired controllers made errors that resulted in serious incursions. We found that, as of May 2007, at least 20 percent of the controllers at 25 air traffic control facilities, including towers at several of the country’s busiest airports, were regularly working 6-day weeks. (See table 7 in the appendix for additional information.) Experts we surveyed indicated that the actions FAA could take with the greatest potential to prevent runway incursions, considering costs, technological feasibility, and operational changes, were measures that provide information or alerts directly to pilots. Experts believed that lighting systems that guide pilots as they taxi at the airport, and technology that provides enhanced situational awareness on the airfield and alerts of potential incursions, would be of particular importance. In our November 2007 report, we recommended that FAA (1) prepare a new national runway safety plan, (2) develop an implementation schedule for establishing a nonpunitive voluntary safety reporting program for air traffic controllers, and (3) develop a mitigation plan for addressing controller overtime. The agency agreed to consider our recommendations. 
In closing, although FAA has taken many actions to improve runway safety, the number of serious incursions that are continuing to occur—many of which involved aircraft carrying hundreds of passengers—suggests that this country continues to face a high risk of a catastrophic runway collision. FAA must provide sustained attention to improving runway safety through leadership, technology, and other means. As the volume of air traffic continues to increase, providing sustained attention to runway safety will become even more critical. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions from you or other members of the Subcommittee. For further information on this testimony, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or dillinghamg@gao.gov. Individuals making key contributions to this testimony include Teresa Spisak, Bob Homan, and David Goldstein.

While aviation accidents in the United States are relatively infrequent, recent incidents have heightened concerns about safety on airport runways. As the nation's aviation system becomes more crowded every day, increased congestion at airports may exacerbate ground safety concerns. This statement discusses (1) the trends in runway incursions, (2) what FAA has done to improve runway safety, and (3) what more could be done. This statement is based on GAO's November 2007 report issued to this committee on runway safety. GAO's work on that report included surveying experts on the causes of runway incidents and accidents and the effectiveness of measures to address them, reviewing safety data, and interviewing agency and industry officials. This statement also contains information from FAA on recent incursions and actions taken since November 2007. Recent data indicate that runway incursions, which are precursors to aviation accidents, are growing. 
Although the number and rate of incursions declined after reaching a peak in fiscal year 2001 and remained relatively constant for the next 5 years, they show a recent upward trend. From fiscal year 2006 through fiscal year 2007, the number and rate of incursions increased by 12 percent and both were nearly as high as their 2001 peak. Furthermore, the number of serious incursions--where collisions are narrowly or barely avoided--increased from 2 during the first quarter of fiscal year 2007 to 10 during the same quarter in fiscal year 2008. FAA has taken steps to address runway safety, but further progress has been impeded by the lack of leadership and coordination, technology challenges, lack of data, and human factors-related issues. FAA's actions have included deploying and testing technology designed to prevent runway collisions and promoting changes in airport layout, markings, signage, and lighting. However, until recently, FAA's Office of Runway Safety did not have a permanent director. Also, FAA has not updated its national runway safety plan since 2002, despite agency policy that such a plan be prepared every 2 to 3 years, resulting in uncoordinated efforts within the agency. Moreover, runway safety technology currently being installed, which is designed to provide air traffic controllers with the position and identification of aircraft on the ground and alerts of potential collisions, is behind schedule and experiencing cost increases and operational difficulties with its alerting function. FAA also lacks reliable runway safety data and the mechanisms to ensure that the data are complete. Furthermore, air traffic controller fatigue, which may result from regularly working overtime, continues to be a matter of concern for the National Transportation Safety Board (NTSB) and others. FAA could take additional measures to improve runway safety. 
These measures include implementing GAO's recommendations to prepare a new national runway safety plan, address controller overtime and fatigue, and start a nonpunitive, confidential, voluntary program for air traffic controllers to report safety risks in the national airspace system, which would be similar to a program that FAA has already established for pilots and others in the aviation community. Such a program could help the agency to understand the causes and circumstances regarding runway safety incidents. Additional improvements, suggested by experts and NTSB, include developing and deploying technology to provide alerts directly to pilots.
The radio-frequency spectrum is the part of the natural spectrum of electromagnetic radiation lying between the frequency limits of 9 kilohertz and 300 gigahertz. It is the medium that makes wireless communications possible and supports a vast array of commercial and governmental services. Commercial entities use spectrum to provide a variety of wireless services, including mobile voice and data, paging, broadcast radio and television, and satellite services. Additionally, some companies use spectrum for private tasks, such as communicating with remote vehicles. Federal, state, and local agencies also use spectrum to fulfill a variety of government missions. For example, state and local police departments, fire departments, and other emergency services agencies use spectrum to transmit and receive critical voice and data communications, and federal agencies use spectrum for varied mission needs such as national defense, law enforcement, weather services, and aviation communication. Spectrum is managed at the international and national levels. The International Telecommunication Union (ITU), a specialized agency of the United Nations, coordinates spectrum management decisions among nations. Spectrum management decisions generally require international coordination, since radio waves can cross national borders. Once spectrum management decisions are made at the ITU, regulators within each nation, to varying degrees, will follow the ITU decisions. In the United States, responsibility for spectrum management is divided between two agencies: FCC and NTIA. FCC manages spectrum use for nonfederal users, including commercial, private, and state and local government users under authority provided in the Communications Act. NTIA manages spectrum for federal government users and acts for the President with respect to spectrum management issues. 
FCC and NTIA, with direction from the Congress, jointly determine the amount of spectrum allocated to federal and nonfederal users, including the amount allocated to shared use. Historically, concern about interference or crowding among users has been a driving force in the management of spectrum. FCC and NTIA work to minimize interference through two primary spectrum management functions—the “allocation” and the “assignment” of radio spectrum. Specifically: Allocation involves segmenting the radio spectrum into bands of frequencies that are designated for use by particular types of radio services or classes of users. For example, the frequency bands between 88 and 108 megahertz (MHz) are allocated to FM radio broadcasting in the United States. In addition to allocation, FCC and NTIA also specify service rules, which include the technical and operating characteristics of equipment. Assignment, which occurs after spectrum has been allocated for particular types of services or classes of users, involves providing a license or authorization to use a specific portion of spectrum to users, such as commercial entities or government agencies. FCC assigns licenses for frequency bands to commercial enterprises, state and local governments, and other entities, while NTIA makes frequency assignments to federal agencies. When FCC assigns a portion of spectrum to a single entity, the license is considered exclusive. When two or more entities apply for the same exclusive license, FCC classifies these as mutually exclusive applications—that is, the grant of a license to one entity would preclude the grant to one or more other entities. For mutually exclusive applications, FCC has primarily used three assignment mechanisms—comparative hearings, lotteries, and auctions. FCC historically used comparative hearings, which gave competing applicants a quasi-judicial forum in which to argue why they should be awarded a license instead of other applicants. 
In 1981, partially in response to the administrative burden of the comparative hearing process, the Congress authorized the use of lotteries, which allowed FCC to award licenses by random selection from the qualified applicant pool. The Congress provided FCC with authority to use auctions to assign mutually exclusive licenses for certain subscriber-based wireless services in the Omnibus Budget Reconciliation Act of 1993. Auctions are a market-based mechanism in which FCC assigns a license to the entity that submits the highest bid for specific bands of spectrum. As of November 30, 2005, FCC had conducted 59 auctions for over 56,000 licenses to select between competing applications for the same license, which have generated over $14.5 billion for the U.S. Treasury. However, only a very small portion of total licenses has been auctioned. (See fig. 1.) In some frequency bands, FCC authorizes unlicensed use of spectrum—that is, users do not need to obtain a license to use the spectrum. Rather, an unlimited number of unlicensed users can share frequencies on a non-interference basis. Thus, the assignment process does not apply to the use of unlicensed devices. However, manufacturers of unlicensed equipment must receive authorization from FCC before operating or marketing an unlicensed device. To promote the more efficient use of spectrum, FCC is incrementally adopting market-based approaches to spectrum management. For instance, FCC has introduced some flexibility in the spectrum allocation process, although it remains largely a command-and-control process. In addition, in 1994, FCC instituted auctions to assign certain spectrum licenses. According to industry stakeholders, FCC’s use of auctions is seen as an improvement over comparative hearings and lotteries, the primary assignment mechanisms employed in the past. 
Finally, FCC has taken steps to facilitate greater secondary market activity, which may provide an additional mechanism to promote the more efficient use of spectrum. FCC currently employs largely a command-and-control process for spectrum allocation. That is, FCC applies regulatory judgments to determine and limit what types of services—such as broadcast, satellite, or mobile radio—will be offered in different frequency bands by geographic area. In addition, for most frequency bands FCC allocates, the agency issues service rules to define the terms and conditions for spectrum use within the given bands. These rules typically specify eligibility standards as well as limitations on the services that relevant entities may offer and the technologies and power levels they may use. These decisions can constrain users’ ability to offer services and equipment of their choosing. However, FCC has provided greater operational and technical flexibility within certain frequency bands. For example, FCC’s rules for Commercial Mobile Radio Service (CMRS), which include cellular and Personal Communications Services (PCS), are considered less restrictive. Under these rules, wireless telephony operators are free to select technologies, services, and business models of their choosing. FCC has not provided comparable flexibility in other bands. For example, spectrum users have relatively little latitude for making similar choices in frequency bands allocated to broadcast television services. Further, the Spectrum Policy Task Force Report, a document produced by FCC staff, identified two alternatives to the command-and-control model: the “exclusive, flexible rights” model, and the “open-access” model. The exclusive, flexible rights model provides licensees with exclusive, flexible use of the spectrum and transferable rights within defined geographic areas. 
This is a license-based approach to spectrum management that extends the existing allocation process by providing greater flexibility regarding the use of spectrum, and the ability to transfer licenses or to lease spectrum usage rights. The open-access model allows a potentially unlimited number of unlicensed users to share frequency bands, with usage rights governed by technical standards, but with no rights to interference protection. This approach does not require licenses, and as such is similar to FCC’s Part 15 rules (which govern unlicensed use in the 900 MHz, 2.4 GHz, and 5.8 GHz bands)—where cordless phones and Wi-Fi technologies operate. Both models allow flexible use of spectrum, so that users of spectrum, rather than FCC, play a larger role in determining how spectrum is ultimately used. FCC’s Spectrum Policy Task Force recommended a balanced approach to allocation—utilizing aspects of the command-and-control; exclusive, flexible rights; and open-access models. FCC is currently using elements of these two alternative models, although it primarily employs the command-and-control model. In 1994, FCC began using auctions—a market-based mechanism that assigns a license to the entity that submits the highest bid for specific bands of spectrum. FCC’s implementation of auctions mitigates a number of problems associated with comparative hearings and lotteries—the two primary assignment mechanisms employed until 1993. For example: Auctions are a relatively quick assignment mechanism. With auctions, FCC reduced the average time for granting a license to less than 1 year from the initial application date, compared to an average time of over 18 months with comparative hearings. Auctions are administratively less costly than comparative hearings. 
Entities seeking a license can reduce expenditures for engineers and lawyers arising from preparing applications, litigating, and lobbying; and FCC can reduce expenditures associated with reviewing and analyzing applications. Auctions are a transparent process. FCC awards licenses to entities submitting the highest bid rather than relying on possibly vague criteria, as was done in comparative hearings. Auctions are effective in assigning licenses to entities that value them the most. Alternatively, with lotteries, FCC awarded licenses to randomly selected entities. Auctions are an effective mechanism for the public to realize a portion of the value of a national resource used for commercial purposes. Entities submitting winning bids must remit the amount of their winning bid to the government, which represents a portion of the value that the bidder believes will arise from using the spectrum. As we reported in December 2005, many industry stakeholders we contacted, and panelists on our expert panel, stated that auctions are more efficient than previous mechanisms used to assign spectrum licenses. For example, among our panelists, 11 of 17 reported that auctions provide the most efficient method of assigning licenses; no panelist reported that comparative hearings or lotteries provided the most efficient method. Of the remaining panelists, several suggested that the most efficient mechanism depended on the service that would be permitted with the spectrum. While FCC’s initial assignment mechanisms provide one means for companies to acquire licenses, companies can also acquire licenses or access to spectrum through secondary market transactions. Through secondary markets, companies can engage in transactions whereby a license or use of spectrum is transferred from one company to another. These transactions can incorporate the sale or trading of licenses. 
In some instances, companies acquire licenses through the purchase of an entire company, such as Cingular’s purchase of AT&T Wireless. Ultimately, FCC must approve transactions that result in the transfer of licenses from one company to another. Secondary markets can provide several benefits. First, secondary markets can promote more efficient use of spectrum. If existing licensees are not fully utilizing the spectrum, secondary markets provide a mechanism whereby these licensees can transfer use of the spectrum to other companies that would utilize the spectrum. Second, secondary markets can facilitate the participation of small businesses and introduction of new technologies. For example, a company might have a greater incentive to deploy new technologies that require less spectrum if the company can profitably transfer the unused portion of the spectrum to another company through the secondary market. Also, several stakeholders with whom we spoke noted that secondary markets provide a mechanism whereby a small business can acquire spectrum for a geographic area that best meets the needs of the company. In recent years, FCC has undertaken actions to facilitate secondary-market transactions. FCC authorized spectrum leasing for most wireless radio licenses with exclusive rights and created two categories of spectrum leases: Spectrum Manager Leasing—where the licensee retains legal and working control of the spectrum—and de Facto Transfer Leasing—where the licensee retains legal control but the lessee assumes working control of the spectrum. FCC also streamlined the procedures that pertain to spectrum leasing. For instance, the Spectrum Manager Leases do not require prior FCC approval and de Facto Transfer Leases can receive immediate approval if the arrangement does not raise potential public interest concerns. While FCC has taken steps to facilitate secondary market transactions, some hindrances remain. 
For example, some industry stakeholders told us that the lack of flexibility in the use of spectrum can hinder secondary market transactions. In some countries, spectrum managers have adopted market-based mechanisms to encourage the efficient use of spectrum by government agencies. In the United States, NTIA has not adopted incentive-based fees for federal government users of spectrum; rather, NTIA applies fees that recover only a portion of the cost of administering spectrum management. Additionally, adopting market-based mechanisms for government use of spectrum might be difficult or undesirable in some contexts because of the primacy of certain government missions, the lack of flexibility in use of spectrum for some agencies, and the lack of financial incentives for government users. Spectrum managers in some countries have adopted market-based mechanisms for government users of spectrum. For example, in Australia, Canada, and the United Kingdom, spectrum managers have implemented incentive-based fees for government users of spectrum. Incentive-based fees are designed to promote the efficient use of spectrum by compelling spectrum users to recognize the value to society of the spectrum that they use. In other words, these fees mimic the functions of a market. These incentive-based fees differ from other regulatory fees that are assessed only to recover the cost of the government’s management of spectrum. In the United States, NTIA has not adopted incentive-based fees, or other market-based mechanisms, for federal government users of spectrum. Currently, NTIA charges federal agencies spectrum management fees, which are based on the number of assignments authorized to each agency. In our 2002 report, we noted that, according to NTIA, basing the fee on the number of assignments, rather than the amount of spectrum used per agency, better reflects the amount of work NTIA must do for each agency. 
Moreover, NTIA stated that this fee structure provides a wider distribution of costs to agencies. However, NTIA’s fee does not reflect the value of the spectrum authorized to each agency, and thus it is not clear how much this encourages the efficient use of spectrum by federal agencies. The fee also recovers only a portion of the cost of administering spectrum management. NTIA does not currently have the authority to impose fees on government users that exceed its spectrum management costs. Applying market-based mechanisms might be difficult or undesirable for federal government users in some situations. The purpose of market-based mechanisms is to provide users with an incentive to use spectrum as efficiently as possible. However, the characteristics of government use of spectrum impose challenges to the development and implementation of market-based mechanisms for federal government users, and in some situations, make implementation undesirable. For example: Primacy of certain federal government missions. Because of the primacy of certain federal government missions—such as national defense, homeland security, and public safety—imposition of market-based mechanisms for use of the spectrum to fulfill these missions might not be desirable. In fact, NTIA officials have told us that the agency rarely revokes the spectrum authorization of another government agency because doing so could interfere with the agency’s ability to carry out important missions. Lack of flexibility in use of spectrum. Market-based mechanisms can create an incentive to use spectrum more efficiently only if users can actually choose to undertake an alternative means of providing a service. In some situations, federal government agencies do not have a viable alternative to their current spectrum authorization. For example, spectrum used for air traffic control has been allocated internationally for the benefit of international air travel. 
Thus, the Federal Aviation Administration has little ability to use spectrum differently than prescribed in its current authorizations. In situations such as this, market-based mechanisms would likely prove ineffective.

Lack of financial incentives. If federal government users can obtain any needed funding for spectrum-related fees through the budgetary process, market-based mechanisms are not likely to be effective. However, imposing fees will make the cost visible to agency managers, thus providing them information they need if they are to manage spectrum use more efficiently. Whether more efficient spectrum use actually occurs will depend in part on whether agencies receive appropriations for the full amount of the fees or only for some portion. If agencies do not receive appropriations for the full amount, some pressure will be created, but it will not be as strong as the private sector's profit motive.

As we reported in December 2005, industry stakeholders and panelists on our expert panel offered a number of options for improving spectrum management. The most frequently cited options include (1) extending FCC's auction authority, (2) reexamining the distribution of spectrum—such as between commercial and government use—to enhance the efficient and effective use of this important resource, and (3) ensuring clearly defined rights and flexibility in commercially licensed spectrum bands. There was no consensus on these options for improvements among stakeholders we interviewed and panelists on our expert panel, except for extending FCC's auction authority. Panelists on our expert panel and industry stakeholders with whom we spoke overwhelmingly supported extending FCC's auction authority. For example, 21 of 22 panelists on our expert panel indicated that the Congress should extend FCC's auction authority beyond September 2007—the date auction authority was set to expire at the time of our expert panel.
Given the success of FCC's use of auctions and the overwhelming support among industry stakeholders and experts for extending FCC's auction authority, we suggested that the Congress consider extending FCC's auction authority. In February 2006, the Congress extended FCC's auction authority to 2011 with the passage of the Deficit Reduction Act of 2005. While panelists on our expert panel overwhelmingly supported extending FCC's auction authority, a majority also suggested modifications to enhance the use of auctions. However, there was little consensus on the suggested modifications. The suggested modifications fall into the following three categories:

Better define license rights. Some industry stakeholders and panelists indicated that FCC should better define the rights accompanying spectrum licenses, as these rights can significantly affect the value of a license being auctioned. For example, some industry stakeholders expressed concern with FCC assigning overlay and underlay rights to frequency bands when a company holds a license for the same frequency bands.

Enhance secondary markets. Industry stakeholders we contacted and panelists on our expert panel generally believed that modifying the rules governing secondary markets could lead to more efficient use of spectrum. For example, some panelists on our expert panel said that FCC should increase its involvement in the secondary market. These panelists thought that increased oversight could help both ensure transparency in the secondary market and promote its use. Additionally, a few panelists said that adoption of a "two-sided" auction would support the efficient use of spectrum. With a two-sided auction, FCC would offer unassigned spectrum, and existing licensees could make available the spectrum usage rights they currently hold.

Reexamine existing small business incentives.
The opinions of panelists on our expert panel and industry stakeholders with whom we spoke varied greatly regarding the need for and success of FCC’s efforts to promote economic opportunities for small businesses. For example, some panelists and industry stakeholders do not support incentive programs for small businesses. These panelists and industry stakeholders cited several reasons for not supporting these incentives, including (1) the wireless industry is not a small business industry; (2) while the policy may have been well intended, the current program is flawed; or (3) such incentives create inefficiencies in the market. Other industry stakeholders suggested alternative programs to support small businesses. These suggestions included (1) having licenses cover smaller geographic areas, (2) using auctions set aside exclusively for small and rural businesses, and (3) providing better lease options for small and rural businesses. Finally, some industry stakeholders with whom we spoke have benefited from the small business incentive programs, such as bidding credits, and believe that these incentives have been an effective means to promote small business participation in wireless markets. Panelists on our expert panel suggested a reexamination of the use and distribution of spectrum to ensure the most efficient and effective use of this important resource. One panelist noted that the government should have a good understanding of how much of the spectrum is being used. To gain a better understanding, a few panelists suggested that the government systematically track usage, perhaps through a “spectrum census.” This information would allow the government to determine if some portions of spectrum were underutilized, and if so, to make appropriate allocation changes and adjustments. 
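The "two-sided" auction suggested by some panelists is, in essence, a double auction: buy bids from prospective spectrum users are matched against sell offers from incumbent licensees. The sketch below is a minimal uniform-price clearing routine with hypothetical valuations; it illustrates the general concept only, since no such FCC mechanism has been specified.

```python
def clear_double_auction(bids, asks):
    """Match buy bids against sell offers; return (trades, uniform price).

    bids: prices prospective users would pay for a spectrum block.
    asks: prices incumbent licensees would accept to release a block.
    """
    bids = sorted(bids, reverse=True)   # most eager buyers first
    asks = sorted(asks)                 # cheapest sellers first
    trades = 0
    while trades < min(len(bids), len(asks)) and bids[trades] >= asks[trades]:
        trades += 1
    if trades == 0:
        return 0, None
    # Uniform clearing price: midpoint of the last matched bid/ask pair.
    price = (bids[trades - 1] + asks[trades - 1]) / 2
    return trades, price

# Hypothetical valuations (dollars in millions per block).
trades, price = clear_double_auction(bids=[90, 70, 40], asks=[30, 60, 80])
print(trades, price)  # 2 65.0
```

In this example, two of the three prospective buyers trade because their bids exceed the two lowest incumbent asks, and all trades clear at a single price between the marginal bid and ask.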
A number of panelists on our expert panel also suggested that the government evaluate the relative allocation of spectrum for government and commercial use as well as the allocation of spectrum for licensed and unlicensed purposes. While panelists thought the relative allocation between these categories should be examined, there was little consensus among the panelists on the appropriate allocation. For instance, as shown in figure 2, 13 panelists indicated that more spectrum should be dedicated to commercial use, while seven thought the current distribution was appropriate; no panelists thought that more spectrum should be dedicated to government use. Similarly, as shown in figure 3, nine panelists believed that more spectrum should be dedicated to licensed uses, six believed more should be dedicated to unlicensed uses, and five thought the current balance was appropriate. Similar to a suggested modification of FCC's auction authority, some panelists on our expert panel suggested better defining users' rights and increasing flexibility in the allocation of spectrum. Better defining users' rights would clarify the understanding of the rights awarded with any type of license, whether the licensees acquired the license through an auction or other means. In addition, some panelists stated that greater flexibility in the type of technology used—and service offered—within frequency bands would help promote the efficient use of spectrum. In particular, greater flexibility would allow the licensee to determine the efficient and highly valued use, rather than relying on FCC-based allocation and service rules. However, some panelists on our expert panel and industry stakeholders with whom we spoke noted that greater flexibility can lead to interference, as different licensees provide potentially incompatible services in close proximity. Thus, panelists on our expert panel stressed the importance of balancing flexibility with interference protection.
Under the current management framework, neither FCC nor NTIA has been given ultimate decision-making authority over all spectrum use or the authority to impose fundamental reform, such as increasing the reliance on market-based mechanisms. FCC manages spectrum for nonfederal users while NTIA manages spectrum for federal government users. As such, FCC and NTIA have different perspectives on spectrum use. FCC tends to focus on maximizing public access to and use of the spectrum. In contrast, NTIA tends to focus on protecting the federal government's use of the spectrum from harmful interference, especially in areas critical to national security and public safety. Further, despite increased communication between FCC and NTIA, the agencies' different jurisdictional responsibilities appear to result in piecemeal efforts that lack the coordination to facilitate major spectrum reform. For example, FCC's and NTIA's recent policy evaluations and initiatives—the FCC Spectrum Policy Task Force and the Federal Government Spectrum Task Force, respectively—tend to focus on the issues applicable to the users under their respective jurisdictions. Major spectrum reform must ultimately address multidimensional stakeholder conflicts. One source of conflict relates to balancing the needs of government and private-sector spectrum users. Government users have said that because they offer unique and critical services, a dollar value cannot be placed on the government's provision of spectrum-based services. At the same time, private-sector users have stated that their access to spectrum is also critical to the welfare of society, through its contribution to a healthy and robust economy. A second source of conflict relates to balancing the needs of incumbent and new users of spectrum. Since most useable spectrum has been allocated and assigned, accommodating new users of spectrum can involve the relocation of incumbent users.
While new users of spectrum view relocations as essential, incumbent users often oppose relocations because the moves may impose significant costs and disrupt their operations. A third source of conflict relates to existing technology and emerging technology. Some new technologies, such as ultra wideband, may use the spectrum more efficiently, thereby facilitating more intensive use of the spectrum. However, users of existing technology, both commercial and government, have expressed concern that these new technologies may create interference that compromises the quality of their services. The current spectrum management framework may pose a barrier to spectrum reform because neither FCC nor NTIA has ultimate authority to impose fundamental reform and these stakeholder conflicts cross the jurisdictions of both FCC and NTIA. As such, contentious and protracted negotiations arise over spectrum management issues. We previously made two recommendations to help further the reform process. First, we recommended that the Secretary of Commerce and FCC establish and carry out formal, joint planning activities to develop a national spectrum plan to guide decision making. Second, we recommended that the relevant administrative agencies and congressional committees work together to develop and implement a plan for the establishment of an independent commission that would conduct a comprehensive examination of current spectrum management. To date, neither recommendation has been implemented. With authorization from Congress, FCC has taken several steps to implement a more market-oriented approach to spectrum management. In recent years, FCC has taken actions to facilitate secondary-market transactions. FCC authorized spectrum leasing for most wireless radio licenses with exclusive rights and also streamlined the procedures that pertain to spectrum leasing.
In addition, FCC has conducted 59 auctions for a wide variety of spectrum uses, including personal communications services and broadcasting. FCC's auctions have contributed to a vibrant commercial wireless industry. The Congress's recent decision to extend FCC's auction authority was, in our opinion, a positive step forward in spectrum reform. However, more work is needed to ensure the efficient and effective use of this important national resource. To help reform spectrum management, we have previously recommended that (1) the Secretary of Commerce and FCC should establish and carry out formal, joint planning activities to develop a national spectrum plan to guide decision making; and (2) the relevant administrative agencies and congressional committees work together to develop and implement a plan for the establishment of a commission that would conduct a comprehensive examination of current spectrum management. To date, these recommendations have not been implemented. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For questions regarding this testimony, please contact JayEtta Z. Hecker on (202) 512-2834 or heckerj@gao.gov. Individuals making key contributions to this testimony include Amy Abramowitz, Michael Clements, Nikki Clowers, Eric Hudson, and Mindi Weisenbloom. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The radio-frequency spectrum is used to provide an array of wireless communications services that are critical to the U.S. economy and various government missions, such as national security.
With demand for spectrum exploding, and most useable spectrum allocated to existing users, there is growing concern that the current spectrum management framework might not be able to respond adequately to future demands. This testimony, which is based on previous GAO reports, provides information on (1) the extent to which the Federal Communications Commission (FCC) has adopted market-based mechanisms for commercial use, (2) the extent to which market-based mechanisms have been adopted for federal government users of spectrum, (3) options for improving spectrum management, and (4) potential barriers to spectrum reform. FCC is incrementally adopting market-based approaches for managing the commercial use of spectrum. Market-based mechanisms can help promote the efficient use of spectrum by invoking the forces of supply and demand. For example, although FCC currently employs largely a command-and-control process for spectrum allocation, it has provided greater flexibility within certain spectrum bands. In addition, FCC began using auctions to assign spectrum licenses for commercial uses in 1994. Finally, FCC has taken steps to facilitate greater secondary market activity, which may provide an additional mechanism to promote the efficient use of spectrum. While some countries have adopted market-based mechanisms to encourage the efficient use of spectrum by government agencies, the Department of Commerce's National Telecommunications and Information Administration (NTIA) has not adopted similar mechanisms for federal government use in the United States. NTIA imposes fees designed to recover only a portion of its cost to administer spectrum management, rather than fees that would more closely resemble market prices and thus encourage greater spectrum efficiency among government users; currently, NTIA does not have authority to impose fees that exceed its spectrum management costs. 
However, adopting market-based mechanisms for federal government use of spectrum might be difficult or undesirable in some contexts because of the primacy of certain government missions, the lack of flexibility in use of spectrum for some agencies, and the lack of financial incentives for government users. Industry stakeholders and experts have identified a number of options for improving spectrum management. The most frequently cited options include (1) extending FCC's auction authority, (2) reexamining the use and distribution of spectrum, and (3) ensuring clearly defined rights and flexibility in commercial spectrum bands; there was no consensus on these options, except for extending FCC's auction authority. Given the success of FCC's use of auctions and the overwhelming support for extending FCC's auction authority, GAO suggested that the Congress consider extending FCC's auction authority beyond 2007. Congress extended FCC's auction authority to 2011 with the passage of the Deficit Reduction Act of 2005. The current spectrum management framework may pose barriers to reform, since neither FCC nor NTIA has been given ultimate decision-making authority over all spectrum use, or the authority to impose fundamental reform, such as increasing the reliance on market-based mechanisms. Under the divided management framework, FCC manages spectrum for nonfederal users, including commercial uses, while NTIA manages spectrum for federal government users. As such, FCC and NTIA have different perspectives on spectrum use. Further, spectrum management issues and major reform cross the jurisdictions of both agencies. Thus, contentious and protracted negotiations arise over spectrum management issues.
Established in 1965, Head Start is a federally funded early childhood development program that served over 900,000 children at a cost of $6.8 billion in 2004. Head Start offers low-income children a broad range of services, including educational, medical, dental, mental health, nutritional, and social services. Children enrolled in Head Start are generally between the ages of 3 and 5 and come from varying ethnic and racial backgrounds. Head Start is administered by HSB within ACF. HSB awards Head Start grants directly to local grantees. Grantees may develop or adopt their own curricula and practices within federal guidelines. Grantees may contract with other organizations—called delegate agencies—to run all or part of their local Head Start programs. Each grantee or delegate agency may have one or more centers, each containing one or more classrooms. In this report, the term "grantee" is used to refer to both grantees and delegate agencies. Figure 1 provides information on the numbers of Head Start grantees, delegate agencies, centers, and classrooms. Since the inception of Head Start, questions have been raised about the effectiveness of the program. In 1998, we reported that Head Start lacked objective information on the performance of individual grantees, and Congress enacted legislation requiring HSB to establish specific educational standards applicable to all Head Start programs and allowing the development of local assessments to measure whether the standards are met. HSB implemented this legislation by developing the Child Outcomes Framework to guide Head Start grantees in their ongoing assessment of the progress of children.
The Framework covers a broad range of child skill and development areas and incorporates each of the legislatively mandated goals, such as that children "use and understand an increasingly complex and varied vocabulary" and "identify at least 10 letters of the alphabet." Since 2000, HSB has required every Head Start grantee to include each of the areas in the Framework in the child assessments that each grantee adopts and implements. The eight broad areas included in the Framework are language development, literacy, mathematics, science, creative arts, social and emotional development, approaches to learning, and physical health and development. Grantees are permitted to determine how to assess children's progress in these areas. These assessments are to align with the grantee's curriculum; as a result, the specific assessments vary across grantees. The assessments occur three times each year and generally involve observing the children during normal classroom activities. The results of the assessments are used for the purposes of individual program improvement and instructional support and are not aggregated across grantees or systematically shared with federal officials. The NRS, prompted by the April 2002 announcement of President Bush's Good Start, Grow Smart initiative, builds on the 1998 legislation by requiring all Head Start programs to administer the same assessment, twice a year, to all 4- and 5-year-old Head Start participants who will attend kindergarten the following year. The initiative called for full implementation in fall 2003; as a result, the NRS was developed and preparations for implementation occurred within an 18-month period. See figure 2. Shortly after the President announced this initiative, HSB hired a contractor to assist it in developing and implementing the NRS.
The contractor, working closely with HSB, was responsible for the design and field testing of the NRS, including developing training materials to support national implementation of the reporting system by grantees. HSB also worked with the Technical Work Group and others throughout implementation of the NRS. The Technical Work Group includes 16 experts in such areas as child development, educational testing, and bilingual education. They advised HSB on the selection of assessments, the appropriateness of the assessments in addressing the mandated indicators, the technical merit of the assessments, and the overall design of the NRS. While the Technical Work Group members offered advice, the group members were not always in agreement with each other and HSB was not obligated to act on any of the advice it received. A list of the Technical Work Group members and their professional affiliations is included in appendix I. Through focus groups, teleconferences, and various correspondences, HSB officials communicated to Head Start grantees the purpose of the NRS and their plans for administering the assessment. Focus groups and discussions were held with various interested parties, including Head Start managers and directors and experts from universities and the public sector, on issues ranging from strengths and limitations of various assessment tools to strategies for assessing non-English speaking children. HSB also received input through a 60-day public comment period, from mid-April to June 2003. Another contractor developed a Computer-Based Reporting System (CBRS) for the NRS. Local Head Start staff use the CBRS to enter descriptive information about their grantees, centers, classrooms, teachers, and children, as shown in table 1, as well as to keep track of which children have been assessed. HSB analyzes the descriptive information from the CBRS in conjunction with the child assessment data to develop reports on the progress of specific subgroups of children. 
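Mechanically, subgroup reporting of this kind amounts to grouping linked assessment records by a descriptive field and averaging. The sketch below illustrates the idea with invented records and field names; it does not reflect the actual CBRS data layout.

```python
from collections import defaultdict

# Hypothetical child records: (program schedule from the descriptive
# data, assessment score). Field values are made up for illustration.
records = [
    ("part-day", 12), ("part-day", 18),
    ("full-day", 20), ("full-day", 14), ("full-day", 17),
]

# Group scores by schedule, then average each group.
groups = defaultdict(list)
for schedule, score in records:
    groups[schedule].append(score)

averages = {s: sum(v) / len(v) for s, v in groups.items()}
print(averages)  # {'part-day': 15.0, 'full-day': 17.0}
```

The same group-and-average step, applied to any descriptive field collected in the CBRS, yields a separate report line for each subgroup.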
For example, HSB can report separately on the average scores of children enrolled in part-day programs and those enrolled in full-day programs. HSB, with assistance from the contractors, worked to ensure local staff received adequate training on administering the assessment and using the CBRS, and provided guidance on how to obtain consent from parents. Training and certification of all assessors was required so that all assessors would administer the NRS in the same way. Two-and-a-half-day training sessions were held at eight sites throughout the U.S. and Puerto Rico during July and August 2003. Roughly 2,800 individuals completed the training, of whom 484 were certified in both English and Spanish. In turn, these certified trainers held training sessions locally to train and certify additional staff who would be able to administer assessments. The development of educational tests is a science in itself, to which university departments, professional organizations, and private companies are devoted. Among the most important concepts in test development are validity and reliability. Validity refers to whether the test results mean what they are expected to mean and whether evidence supports the intended interpretations of test scores for a particular purpose. Reliability refers to whether or not a test yields consistent results. Validity and reliability are not properties of tests; rather, they are characteristics of the results obtained using the tests. For example, even if a test designed for 4th graders were shown to produce meaningful measures of their understanding of geometry, this would not necessarily mean that it would do so when administered to 2nd or 6th graders or with a change in directions allowing use of a compass and ruler. Test developers typically implement "pilot" tests that represent the actual testing population and conditions and they use data from the pilot to evaluate the reliability and validity of a test.
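One standard way test developers quantify reliability from pilot data is an internal-consistency statistic such as Cronbach's alpha, which rises as a test's items measure the same underlying skill consistently. The sketch below uses made-up right/wrong item scores; the source does not say which reliability statistics were computed for the NRS, so this is illustrative only.

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Internal-consistency reliability for a score matrix.

    scores: one row per child, one column per test item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)).
    """
    k = len(scores[0])                 # number of items
    items = list(zip(*scores))         # column view: one tuple per item
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical right(1)/wrong(0) scores for four children on three items.
pilot = [
    [1, 1, 1],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 0],
]
print(round(cronbach_alpha(pilot), 2))  # 0.6
```

Pilot data of this kind, drawn from the actual testing population and conditions, is what allows developers to judge whether a test's results are consistent enough to support its intended uses.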
This process generally takes more than 1 year, especially if the test is designed to measure changes in performance. In the remainder of the report, we will discuss how the focus of the NRS was determined and the assessment was developed, HSB's response to problems in initial implementation as well as some implementation issues that remain unaddressed, and the extent to which the assessment meets the professional and technical standards to support specific purposes identified by HSB. The NRS assesses vocabulary, letter recognition, and simple math skills, and screens for understanding of spoken English. As initially conceived by HSB, the NRS was to gauge the progress of Head Start children in 13 congressionally mandated indicators of learning. However, time constraints and technical matters precluded HSB from assessing children on all of the indicators and led HSB to consider, and eventually adopt, portions of other assessments for use in the NRS. The 18 months from announcing the Good Start, Grow Smart initiative, of which the NRS is a part, to implementing the assessment was not enough time for HSB to develop a completely new assessment. Therefore, HSB, with the advice of its contractor and the Technical Work Group, chose to borrow material from existing assessments. Concerns raised by Technical Work Group members and the contractor about the length and complexity of the assessment and the technical adequacy of individual components eventually led to limiting the areas assessed in the NRS from 13 skills to 6.
The six legislatively mandated skills that HSB targeted included whether children in Head Start:

use increasingly complex and varied spoken vocabulary;
understand increasingly complex and varied vocabulary;
identify at least 10 letters of the alphabet;
know numbers and simple math operations, such as addition and subtraction;
for non-English speaking children, demonstrate progress in listening to and understanding English; and
for non-English speaking children, show progress in speaking English.

In April and May of 2003, an assessment that included 5 components covering the 6 skills was field tested with 36 Head Start programs to examine the basic adequacy of the NRS, as well as the method for training assessors, and the use of the CBRS. The field test also included a Spanish version of the NRS. Based on the field test, one component—phonological awareness, or one's ability to hear, identify, and manipulate sounds—was eliminated. While this component examined an area that experts have linked to prevention of reading difficulties, the test used to assess it was problematic. HSB moved forward with the other components of the NRS. The four components of the NRS each measure one or more of the six legislatively mandated indicators. The four components that comprise the NRS are from the following tests:

Oral Language Development Scale (OLDS) of the Pre-Language Assessment Scale 2000 (Pre-LAS 2000),
Third Edition of the Peabody Picture Vocabulary Test (PPVT-III),
Head Start Quality Research Centers (QRC) letter-naming exercise, and
Early Childhood Longitudinal Study of a kindergarten cohort (ECLS-K) math assessment.

Some or all of each test was previously used for other studies, and the PPVT and letter naming were previously used in studies of Head Start children. Three of the four tests were modified from their original version, as shown in table 2. Figures 3 and 4 are examples from the letter naming and early math skills components of the NRS.
Figure 5 is an example of the type of item used in the vocabulary (PPVT) component of the NRS. The letter-naming example reproduces the assessor's script:

Here are some letters of the alphabet. GESTURE WITH A CIRCULAR MOTION AT LETTERS AND SAY: Point to all the letters that you know and tell me the name of each one. Go slowly and show me which letter you're naming. INDICATE ONLY CORRECTLY NAMED LETTERS ON ANSWER SHEET. WHEN CHILD STOPS NAMING LETTERS, SAY: Look carefully at all of them. Do you know any more? KEEP ASKING UNTIL CHILD DOESN'T KNOW ANY MORE.

In September 2003, the Head Start Bureau, in the Department of Health and Human Services (HHS) Administration for Children and Families (ACF), implemented the National Reporting System (NRS), the first nationwide skills test of over 400,000 4- and 5-year-old children. The NRS is intended to provide information on how well Head Start grantees are helping children progress. Given the importance of the NRS, this report examines: what information the NRS is designed to provide; how the Head Start Bureau has responded to concerns raised by grantees and experts during the first year of implementation; and whether the NRS provides the Head Start Bureau with quality information. The Head Start Bureau developed the NRS to gauge the extent to which Head Start grantees help children progress in specific skill areas, including understanding spoken English, recognizing letters, vocabulary, and early math. Due to time constraints and technical matters, the Head Start Bureau adapted portions of other assessments for use in the NRS. Head Start Bureau officials have responded to some concerns raised during the first year of NRS implementation, but other issues remain. For example, the Head Start Bureau has modified training materials and is exploring the feasibility of sampling. However, it is not monitoring whether grantees are inappropriately changing instruction to emphasize areas covered in the NRS.
Head Start Bureau officials have said NRS results will eventually be used for program improvement, targeting training and technical assistance, and program accountability; however, the Head Start Bureau has not stated how NRS results will be used to realize these purposes. Currently, results from the first year of the NRS are of limited value for accountability purposes because the Head Start Bureau has not shown that the NRS meets professional standards for such uses, namely that (1) the NRS provides reliable information on children's progress during the Head Start program year, especially for Spanish-speaking children, and (2) its results are valid measures of the learning that takes place. The NRS also may not provide sufficient information to target technical assistance to the Head Start centers and classrooms that need it most.
In 2000, mass transit systems provided over 9 billion passenger trips and employed about 350,000 people in the United States. The nation's transit systems include all multiple-occupancy-vehicle services designed to transport customers on local and regional routes, such as bus, trolley bus, commuter rail, vanpool, ferry boat, and light rail services, and are valued at a trillion dollars. As figure 1 shows, buses are the most widely used form of transit, providing almost two-thirds of all passenger trips. A number of organizations are involved in the delivery of transit services in the United States, including federal, state, and local governments and the private sector:

FTA provides financial assistance to transit agencies to plan and develop new transit systems and operate, maintain, and improve existing systems. FTA is responsible for ensuring that the recipients of federal transit funds follow federal mandates and administrative requirements. FTA's Office of Safety and Security is the agency's focal point for transit safety (freedom from unintentional danger) and security (freedom from intentional danger).

State and local governments also provide a significant amount of funding for transit services. As figure 2 shows, state and local governments provide funding for over 40 percent of transit agencies' operating expenses and about a quarter of their capital expenses. According to statute, states are also responsible for establishing State Safety Oversight Agencies to oversee the safety of transit agencies' rail systems.

Transit agencies, which can be public or private entities, are responsible for administering and managing transit activities and services. Transit agencies can directly operate transit service or contract for all or part of the total transit service provided. About 6,000 agencies provide transit services in the United States, and the majority of these agencies provide more than one mode of service.
Although all levels of government are involved in transit security, the primary responsibility for securing transit systems has rested with the transit agencies. FTA administers a number of programs, both discretionary and formula based, that provide federal funding support to transit agencies. The largest of these programs is the urbanized area formula grant program, which provides federal funds to urbanized areas (jurisdictions with populations of 50,000 or more) for transit capital investments, operating expenses, and transportation-related planning. As figure 3 shows, the urbanized area formula grant program accounts for almost one-half of the total authorized funds for all transit programs under the Transportation Equity Act for the 21st Century (TEA-21). Recipients of urbanized area formula funds are required to spend at least 1 percent of these funds to improve the security of existing or planned mass transportation systems unless the transit agencies certify that such expenditures are unnecessary. Restrictions on the use of urbanized area formula funds for operating expenses have changed over the years. When the urbanized area formula program was created in 1982, funds could be used by transit agencies, regardless of an area’s population, for operating expenses with certain limitations. However, during fiscal years 1995 to 1997, an overall cap was placed on the total amount of these formula grants that could be used for operating expenses. In fiscal year 1995, the cap was $710 million, and in fiscal years 1996 and 1997 it was $400 million. With the passage of TEA-21 in 1998, the restrictions on urbanized area formula funds were again changed. Specifically, TEA-21 prohibits transit agencies that serve urbanized areas with populations of 200,000 or more from using urbanized area formula funding for operating expenses. 
According to FTA officials, the prohibition was instituted because policymakers believed the federal government should only pay for the construction of mass transit systems, not their operations. The legislative history of TEA-21 indicates that the Congress allowed transit agencies serving urban areas with populations of less than 200,000 to continue to use urbanized area formula funds for operating expenses so that they would have sufficient funding flexibilities. Throughout the world, public surface transportation systems have been targets of terrorist attacks. For example, the first large-scale terrorist use of a chemical weapon occurred in 1995 in the Tokyo subway system. In this attack, a terrorist group released sarin gas on a subway train, killing 11 people and injuring about 5,500. In addition, according to the Mineta Transportation Institute, surface transportation systems were the target of more than 195 terrorist attacks from 1997 through 2000. As figure 4 illustrates, buses were the most common target during this period. Transit agencies face significant challenges in making their systems secure. Certain characteristics of transit systems, such as their high ridership and open access, make them both vulnerable to attack and difficult to secure. The high cost of transit security improvements, coupled with tight budgets, competing needs, and a restriction on using federal funds for operating expenses (including security-related operating expenses such as additional security patrols) in large urban areas creates an even greater challenge for transit agencies. Moreover, because of the numerous stakeholders involved in transit security, coordination can become a problem. According to transit officials and transit security experts, certain characteristics of mass transit systems make them inherently vulnerable to terrorist attacks and difficult to secure. 
By design, mass transit systems are open (i.e., have multiple access points and, in some cases, no barriers) so that they can move large numbers of people quickly. In contrast, the aviation system is housed in closed and controlled locations with few entry points. The openness of mass transit systems can leave them vulnerable because transit officials cannot monitor or control who enters or leaves the systems. In addition, other characteristics of some transit systems—high ridership, expensive infrastructure, economic importance, and location (e.g., large metropolitan areas or tourist destinations)—also make them attractive targets because of the potential for mass casualties and economic damage. Moreover, some of these same characteristics make transit systems difficult to secure. For example, the number of riders that pass through a mass transit system—especially during peak hours—makes some security measures, such as metal detectors, impractical. In addition, the multiple access points along extended routes make the costs of securing each location prohibitive. Further complicating transit security is the need for transit agencies to balance security concerns with accessibility, convenience, and affordability. Because transit riders often can choose another means of transportation, such as a personal automobile, transit agencies must compete for riders. To remain competitive, transit agencies must offer convenient, inexpensive, and quality service. Therefore, security measures that limit accessibility, cause delays, increase fares, or otherwise cause inconvenience could push people away from transit and back into their cars. Our discussions with transit agency officials and our survey results indicate that striking the right balance between security and these other needs is difficult.
For example, as shown in figure 5, 9 percent of survey respondents reported that the most significant barrier to making their transit systems as safe and secure as possible is balancing riders’ need for accessibility with security measures. Funding security improvements is a key challenge for transit agencies. Our survey results and our interviews with transit agency officials indicate that insufficient funding is the most significant challenge in making transit systems as safe and secure as possible. Moreover, our survey results indicate that the most common reason for not addressing items identified as needing attention through safety and security assessments is insufficient funding. Factors contributing to funding challenges include high security costs, tight budgets, competing budget priorities, and a provision prohibiting transit agencies in large urbanized areas from using federal urbanized area formula funds for operating expenses, such as security training. Transit security investments can be quite expensive. While some security improvements are inexpensive, such as removing trash cans from subway platforms, most require substantial funding. For example, one transit agency estimated that an intrusion alarm and closed circuit television system for only one of its portals would cost approximately $250,000—an amount equal to at least a quarter of the capital budgets of more than half the transit agencies we surveyed. According to our survey results, the top three safety and security funding priorities of transit agencies regardless of size are enhanced communication systems, surveillance equipment, and additional training. The transit agencies we visited have identified or are identifying needed security improvements, such as upgraded communication systems, additional fencing, surveillance equipment, and redundant or mobile command centers. Of the 10 transit agencies we visited, 8 had developed cost estimates of their identified improvements. 
The total estimated cost of the identified security improvements at the 8 agencies is about $711 million. The total cost of all needed transit security improvements throughout the country is unknown; however, given the scope of the nation’s transit systems and the cost estimate for 8 agencies, it could amount to billions of dollars. Transit agency officials told us that they are facing tight budgets, which make it more difficult for their agencies to pay for expensive security improvements. According to most of the agencies we visited, the weakened economy has negatively affected their revenue base by lowering ridership, tax revenues dedicated to transit, or both. In particular, 8 of the 10 agencies we visited reported that ridership has dropped this year, primarily because of the slow economy. The decreased ridership levels have lowered fare box revenue. In addition, state and local sales taxes, which provide revenue for many transit agencies, have declined with the weakened economy and reduced the transit agencies’ revenue, according to a number of transit agency officials. Other competing funding needs also present a challenge for transit agencies. Given the tight budget environment, transit agencies must make difficult trade-offs between security investments and other needs, such as service expansion and equipment upgrades. For example, an official at one transit agency stated that budget shortfalls and expenditures for security improvements have delayed some needed capital projects and reduced the budgets for all departments—except the safety and security budget. Similarly, an official at another agency reported that his agency is funding security improvements with money that was budgeted for nonsecurity projects. According to our analysis, 16 percent of the agencies we surveyed view balancing safety and security priorities against other priorities as the most significant challenge to making their systems as safe and secure as possible. 
Officials from some transit agencies we visited also reported that the funding challenges are exacerbated by the current statutory limitation on using urbanized area formula funds for operating expenses. The urbanized area formula program provides federal funds to urbanized areas (jurisdictions with populations of 50,000 or more) for transit capital investments, operating expenses, and transportation-related planning. The program is the largest source of federal transit funding. As mentioned earlier, TEA-21 prohibits transit agencies in large urbanized areas (jurisdictions with populations of 200,000 or more) from using urbanized area formula funding for most operating expenses. This prohibition limits many agencies’ ability to use FTA funds for security-related operating expenses. For example, transit agencies in large urbanized areas cannot use their urbanized area formula funds to pay for security training or salaries for security personnel, among other uses. Officials from a number of agencies we visited said this prohibition was a significant barrier to funding needed security improvements, although several agency officials also noted that the elimination of this prohibition would be helpful only if additional funding were provided. Given the declining revenue base of some transit agencies, however, the prohibition compounds the budgetary challenges of securing transit systems. Coordination among all stakeholders is integral to enhancing transit security, but it can create additional challenges. Numerous stakeholders are typically involved in decisions that affect transit security, such as decisions about its operations and funding. As we noted in our testimony before the Subcommittee on Transit and Housing in September and in previous reports, coordination among all levels of government and the private sector is critical to homeland security efforts, and a lack of coordination can create problems, such as duplication of effort. 
In addition, the national strategy for homeland security recognizes the challenges associated with intergovernmental coordination but emphasizes the need for such coordination. According to our site visits and our survey results, coordination of emergency planning is generally taking place between transit agencies and local governments, despite some challenges; however, such coordination appears to be minimal between transit agencies and governments at the regional, state, and federal levels. We found that transit agencies and local governments are coordinating their emergency planning efforts. Our survey results indicate that 77 percent of transit agencies have directly coordinated emergency planning at the local level; moreover, 65 percent of agencies surveyed believe they have been sufficiently integrated into their local government’s emergency plans. Likewise, 9 of the 10 transit agencies we visited said they are integrated to at least a moderate extent into their local government’s emergency planning. Officials from these 9 transit agencies noted that their agencies are included in their local government’s emergency planning activities, such as emergency drills, tabletop exercises, planning meetings, and task forces. For example, when Minneapolis held an emergency drill that simulated a biological attack on the city, Metro Transit transported “victims” to hospitals, even taking some victims to out-of-state hospitals because the local hospitals were at capacity. Transit agency and local government officials said their past experiences with weather emergencies and special events, like Super Bowl celebrations, had helped establish good working relationships. According to the officials, these past experiences have demonstrated the types of support services transit agencies can provide during emergencies, including evacuations, triage centers, victim transport, and shelters. However, officials said these working relationships are usually informal and undocumented. 
For example, the majority of the transit agencies we visited did not have a memorandum of understanding with their local government. Although transit agencies are generally active participants in emergency planning at the local level, they nevertheless face some coordination challenges. According to our survey results, some of the most significant challenges in coordinating emergency planning at the local level are insufficient funding, limited awareness of terrorist threats to transit, and lack of time. Similar concerns were often raised during our meetings with transit agencies. For example, one agency official noted that his agency operates in over 40 jurisdictions and that coordinating with all of these local governments is very time consuming. In contrast to the local level, coordination of emergency planning among transit agencies and governments at the regional, state, and federal levels appears to be minimal. Most of the transit agencies we visited reported limited coordination with governments other than their local government. Our survey results reveal a similar pattern. For example, 68 percent of transit agencies we surveyed have not directly coordinated emergency planning at the regional level; 84 percent have not directly coordinated emergency planning at the state level; and 87 percent have not directly coordinated emergency planning at the federal level. As we have noted in past reports on homeland security, the lack of coordination among stakeholders could result in communication problems, duplication, and fragmentation. Without coordination, transit agencies and governments also miss opportunities to systematically identify the unique resources and capacities that each can provide in emergencies. Prior to September 11, all 10 transit agencies we visited and many of the transit agencies we surveyed were implementing measures to enhance transit safety and security, such as revising emergency plans and training employees on emergency preparedness. 
Transit agency officials we interviewed often noted that the 1995 sarin gas attack on the Tokyo subway system or their agency’s experiences during natural disasters had served as catalysts for focusing on safety and security. Although safety and security were both priorities, the terrorist attacks on September 11 elevated the importance of security. (See app. III for select survey results, which include information on the emergency planning and preparedness of the transit agencies we surveyed, as well as differences and similarities between transit agencies in large urbanized areas and those in small urbanized areas.) Since September 11, transit agencies have taken additional steps to improve transit safety and security. Officials from the agencies we visited told us their agencies have been operating at a heightened state of security since September 11. According to agency officials and our survey results, many transit agencies in large and small urbanized areas have implemented new safety and security measures or increased the frequency or intensity of existing activities, including the following:

Vulnerability or security assessments: Many transit agencies have conducted vulnerability or security assessments. For example, all 10 of the agencies we visited and 54 percent of the agencies we surveyed said they had conducted a vulnerability or security assessment since September 11. The purpose of these assessments is to identify potential vulnerabilities and corrective actions or needed security improvements. Improved communication systems, more controlled access to facilities, and additional training were some of the needs identified in the assessments of the agencies we visited.

Fast-track security improvements: Security improvements planned or in process prior to September 11 were moved up on the agenda or finished early. For example, one agency, which was putting alarms on access points to the subway ventilation system before September 11, completed the process early.

Immediate, inexpensive security improvements: Removing bike lockers and trash cans from populated areas, locking underground restrooms, and closing bus doors at night were among the immediate and inexpensive improvements that agencies made.

Intensified security presence: Many agencies have increased the number of police or security personnel who patrol their systems. Surveillance equipment, alarms, or security personnel have been placed at access points to subway tunnels, bus yards, and other nonpublic places. Employees have also been required to wear identification cards or brightly colored vests for increased visibility. For example, 41 percent of the transit agencies we surveyed have required their personnel to wear photo identification cards at all times since September 11.

Increased emergency drills: Many agencies have increased the frequency of emergency drilling—both full-scale drills and tabletop exercises. For example, one agency we visited has conducted four drills since September 11. Agencies stressed the importance of emergency drilling as a means to test their emergency plans, identify problems, and develop corrective actions. Figure 6 is a photograph from an annual emergency drill conducted by the Washington Metropolitan Area Transit Authority.

Revised emergency plans: Agencies reviewed their emergency plans to determine what changes, if any, needed to be made. For example, 48 percent of the agencies we surveyed, regardless of the size of urbanized area served, created or revised their emergency plans after September 11. In addition, some agencies we visited updated their emergency plans to include terrorist incident protocols and response plans.

Additional training: Agencies participated in and conducted additional training on antiterrorism.
For example, all 10 of the agencies we visited had participated in the antiterrorism seminars sponsored by FTA or the American Public Transportation Association. Similarly, 59 percent of all transit agencies we surveyed reported having attended security seminars or conferences since September 11. Some of the agencies we visited have also implemented innovative practices in recent years to increase their safety, security, and preparedness in emergency situations. Through our discussions with transit agencies, we identified some innovative safety and security measures, including the following:

Police officers trained to drive buses: Capital Metro in Austin, Texas, trained some of the city police officers to drive transit buses during emergencies. The police officers received driver training and were licensed to drive the buses. If emergencies require buses to enter a dangerous environment, these trained police officers, instead of transit agency employees, will drive the buses.

Training tunnel constructed: The Washington Metropolitan Area Transit Authority constructed an off-site duplicate tunnel, complete with railcars, tracks, and switches, to simulate an emergency environment for training purposes. (See fig. 7.)

Employee suggestion program implemented: New York City Transit implemented an employee suggestion program to solicit security improvement ideas. If an employee’s suggestion is adopted, he or she receives a day of paid leave.

The federal government’s role in transit security is evolving. FTA has expanded its role in transit security since September 11 by launching a multipart security initiative and increasing the funding for its safety and security activities. In addition, the Aviation and Transportation Security Act gave TSA responsibility for transit security; however, TSA’s role and responsibilities have not yet been defined.
Although the transit agencies we visited were generally pleased with FTA’s assistance since September 11, they would like the federal government to provide more assistance, including more information and funding. As the federal government’s role in transit safety and security initiatives evolves, policymakers will need to address several issues, including (1) the roles of stakeholders in funding transit security, (2) federal funding criteria, (3) goals and performance indicators for the federal government’s efforts, and (4) the appropriate federal policy instrument to deliver assistance deemed appropriate. FTA has limited authority to regulate and oversee safety and security at transit agencies. According to statute, FTA cannot regulate safety and security operations at transit agencies. FTA may, however, institute nonregulatory safety and security activities, including safety- and security-related training, research, and demonstration projects. In addition, FTA may promote safety and security through its grant-making authority. Specifically, FTA may stipulate conditions of grants, such as certain safety and security statutory and regulatory requirements, and FTA may withhold funds for noncompliance with the conditions of a grant. For example, transit agencies must spend 1 percent of their urbanized area formula funds on security improvements. FTA is to verify that agencies comply with this requirement and may withhold funding from agencies that it finds are not in compliance. FTA officials stated that FTA’s authority to sponsor nonregulatory activities and to stipulate the conditions of grants is sufficient for the safety and security work they need to accomplish. Despite its limited authority, FTA had established a number of safety and security programs before September 11.
For example, FTA offered voluntary security assessments, sponsored training at the Transportation Safety Institute, issued written guidelines to improve emergency response planning, and partially funded a chemical detection demonstration project, called PROTECT, at the Washington Metropolitan Area Transit Authority. Although FTA maintained both safety and security programs before September 11, its primary focus was on the safety rather than the security programs. This focus changed after September 11. In response to the terrorist attacks on September 11, FTA launched a multipart transit security initiative last fall. The initiative includes security assessments, planning, drilling, training, and technology:

Security assessments: FTA deployed teams to assess security at 36 transit agencies. FTA chose the 36 agencies on the basis of their ridership, vulnerability, and the potential consequences of an attack. Each assessment included a threat and vulnerability analysis, an evaluation of security and emergency plans, and a focused review of the agency’s unified command structure with external emergency responders. FTA completed the assessments in late summer 2002.

Emergency response planning: FTA is providing technical assistance to 60 transit agencies on security and emergency plans and emergency response drills.

Emergency response drills: FTA offered transit agencies grants of up to $50,000 for organizing and conducting emergency preparedness drills. According to FTA officials, FTA has awarded $3.4 million to over 80 transit agencies through these grants.

Security training: FTA is offering free emergency preparedness and security training to transit agencies through its Connecting Communities Forums. These forums are being offered throughout the country and are designed to bring together small- and medium-sized transit agency personnel with their local emergency responders, like local firefighters and police officers. The purpose of the forums is to give the participants a better understanding of the roles played by transit agencies and emergency responders and to allow the participants to begin developing the plans, tools, and relationships necessary to respond effectively in an emergency. In addition, FTA is working with the National Transit Institute and the Transportation Safety Institute to expand safety and security course offerings. For example, the National Transit Institute is now offering a security awareness course to front line transit employees free of charge.

Research and development: FTA increased the funding for its safety- and security-related technology research and has accelerated the deployment of the PROTECT system.

FTA also increased expenditures on its safety and security activities after the attacks of September 11. To pay for its multipart security initiative, FTA reprioritized fiscal year 2002 funds from its other programs and used a portion of the Department of Defense and Emergency Supplemental Appropriations Act of 2002 (DOD supplemental), which provided $23.5 million for transit security purposes. Specifically, FTA will put about $18.7 million of the DOD supplemental toward its multipart security initiative. As a result of these actions, FTA’s expenditures on its safety and security activities have increased significantly in recent years. As figure 8 shows, if FTA receives the amount of funding it requested for fiscal year 2003, FTA’s expenditures on safety and security activities will more than double since fiscal year 2000—increasing from $8.1 million to $17.9 million. TSA is responsible for the security of all modes of transportation, including transit. The Aviation and Transportation Security Act created TSA within the Department of Transportation and defined its primary responsibility as ensuring security in all modes of transportation. The act also gives TSA regulatory authority over transit security, which FTA does not possess.
Since its creation last November, TSA has primarily focused on improving aviation security in order to meet the deadlines established in the Aviation and Transportation Security Act. As a result, TSA has not yet fully assumed responsibility for security in other modes of transportation, such as transit. TSA’s role in transit security is evolving. For transit security, the Aviation and Transportation Security Act does not specify TSA’s role and responsibilities as it did for aviation security. For example, the act does not set deadlines for TSA to implement certain transit security requirements. Similarly, although the President’s National Strategy for Homeland Security states that the federal government will work with the private sector to upgrade security in all modes of transportation and utilize existing modal relationships and systems to implement unified, national standards for transportation security, it does not outline TSA’s or the Department of Homeland Security’s role in transit security. TSA will be transferred to the new Department of Homeland Security as part of the recently passed Homeland Security Act (HR 5005). To define its roles and responsibilities in transit security, TSA is currently working with FTA to develop a memorandum of agreement. According to FTA and TSA officials, the memorandum of agreement will define the roles and responsibilities of each agency as they relate to transit security and address a variety of issues, including separating safety and security activities, establishing national standards, interfacing with transit agencies, and establishing funding priorities. For example, TSA officials said they expect to mandate a set of national standards for transit security. Consequently, the memorandum of agreement would articulate the roles and responsibilities of TSA and FTA in establishing these standards. TSA and FTA have not finalized the timetable for issuing the memorandum of agreement.
TSA and FTA officials originally planned to issue the memorandum of agreement in September 2002. However, according to FTA officials, the issuance was delayed so that the memorandum could incorporate and reflect the administration’s fiscal year 2004 budget request. According to TSA officials, FTA and TSA would like to issue the memorandum of agreement by January 2003. Although TSA and FTA are informally coordinating transit security issues, the memorandum of agreement will formalize their relationship, help prevent duplication of effort, and help TSA manage the shared responsibilities involved in securing the nation’s transportation system. The transit agencies we visited were generally pleased with the assistance FTA has provided since September 11. Officials from these agencies added, however, that the federal government could do more in helping them secure their transit systems. They suggested, for example, that the federal government provide additional information on a number of issues, invest more in security-related research and development, help obtain security clearances, and supply increased funding for security improvements. Officials from the transit agencies we visited reported a need for the federal government to disseminate additional information on topics ranging from available federal grants to appropriate security levels for individual agencies. A recurring theme was for the federal government to establish a clearinghouse or similar mechanism that maintains and disseminates this type of information. Specifically, officials expressed a need for the federal government to provide additional information on the following topics:

Intelligence: Transit officials from a number of agencies stated that the federal government should provide additional information on threats to their transit agencies or cities. Officials also commented that “real time” information on attacks against other transit agencies would be useful.

Best practices: A number of officials said that information on transit security best practices would be beneficial. According to FTA officials, the assessments of the 36 transit agencies are helping them identify best practices.

Federal grants: Officials from several transit agencies suggested that information on available grants that can be used for transit safety and security improvements would be useful, noting that locating these grants is challenging and time consuming. For example, an assistant general manager stated that she spends too much of her time searching the Internet for grants available for transit.

Level of security: Transit officials from a few agencies told us that it would be helpful for the federal government to provide information on the appropriate level of security for their agencies. For example, officials at one agency questioned whether they needed to continue to post guards—24 hours a day, 7 days a week—at the entrance and exit of their tunnel, a practice instituted when the Department of Transportation issued a threat advisory to the transit industry in May 2002. Similarly, our survey results indicate that determining the appropriate level of security is a challenge for transit agencies.

Cutting-edge technology: Officials from a number of agencies said that the federal government should provide information on the latest security technologies. For example, officials from one agency said that such information is needed because they have been bombarded by vendors selling security technology since September 11; however, the officials said they were unsure about the quality of the products, whether the products were needed, or whether the products would be outdated next year.

Decontamination practices: Several transit agency officials expressed a need for information on decontamination protocols.
For example, one agency official noted that information is needed on how to determine if the system is “clean” after a chemical or biological attack. According to FTA officials, FTA is developing two mechanisms to better disseminate information on intelligence, best practices, and security- related issues to transit agencies. First, FTA is launching a new secure Web site to post best practices and allow for the exchange of security-related information. In September 2002, FTA invited 100 transit agencies to register to use this Web site, which utilizes the Federal Bureau of Investigation (FBI) secure Web site technology called Infragard. Second, FTA is funding the transit Information Sharing and Analysis Center, which will disseminate intelligence information to transit agencies. The Center will initially be available for the largest 50 agencies. The schedules for launching or expanding the Center to other transit agencies have not been established. Officials from several of the agencies we met with also said that the federal government should be investing more in security-related research and development. Agency officials noted that individual transit agencies do not have the resources to devote to research and development. Moreover, the officials said this is an appropriate role for the federal government, since the products of research and development endeavors should benefit the entire transit community, not just individual agencies. FTA’s Office of Technology is currently the agency’s focal point for research and development and is responsible for identifying and supporting technological innovations, including safety and security innovations. According to FTA documents, the Office of Technology’s obligations for safety and security technologies have increased from $680,000 in fiscal year 2000 to an estimated $1.1 million in fiscal year 2002. 
FTA’s fiscal year 2003 budget request includes about $4.2 million for the Office of Technology’s safety and security technologies, representing a 272-percent increase from fiscal year 2002. FTA is also conducting 13 research projects on a variety of security-related issues, such as updating its guide for security planning, developing material for a security awareness campaign, and working on decontamination procedures for public transportation. A number of transit officials also expressed a need for the federal government to help them obtain security clearances. As we have reported in our previous work on homeland security, state and local officials have characterized their lack of security clearances as a barrier to obtaining critical intelligence information. The inability to receive any classified threat information could hamper transit agencies’ emergency preparedness capability as it apparently did at one of the transit agencies we visited. In this agency’s city, a bomb threat was made against a major building, but because the transit agency officials did not have the necessary security clearances, the FBI did not inform them of this threat until about 40 minutes before the agency was asked to help evacuate the building. According to transit agency officials, the lack of advance notice negatively affected their agency’s ability to respond, even though, in this case, the threat was not carried out. Proposed legislation (H.R. 3483) provides that the Attorney General expeditiously grant security clearances to governors who apply for them and to state and local officials who participate in federal counterterrorism working groups or regional task forces. FTA has offered to help transit agencies join their local FBI Joint Terrorism Task Force to better access intelligence information, but it has not made assisting transit agencies with security clearances part of their security activities. 
Officials from the transit agencies we visited also said that additional federal funding is needed. As noted earlier, many of the transit agencies we visited are experiencing tightened budgets, which make it more difficult for them to fund safety and security needs. Moreover, according to our survey results, insufficient funding is the most significant obstacle agencies face in trying to make their systems safer and more secure. The Congress has already made additional funding available for transit security purposes—about $23.5 million through the fiscal year 2002 DOD supplemental. FTA’s fiscal year 2003 budget request also includes $17.9 million for safety and security expenditures. Important funding decisions for transit safety and security initiatives remain. Due to the expense of security enhancements and transit agencies’ tight budget environments, the federal government is likely to be viewed as a source of funding for at least some of these enhancements. These improvements join the growing list of security initiatives competing for federal assistance. Based on our past work on homeland security issues, site visits to transit agencies, and survey results, we believe that several issues will need to be addressed when the federal government’s role in funding transit safety and security initiatives is considered. These issues include (1) determining the roles of stakeholders in funding transit security, (2) developing an approach to distribute federal funds, (3) establishing goals and performance indicators for the federal government’s efforts, and (4) selecting the appropriate federal policy instrument to deliver assistance. The roles and responsibilities of stakeholders in funding transit safety and security need to be determined. Since all levels of government and the private sector are concerned about transit safety and security, determining who should finance security activities may be difficult.
Some of the benefits of transit systems, such as employment and reduced congestion, remain within the locality or region. In addition, private companies that own transit systems could directly benefit from security measures because steps designed to thwart terrorists could also prevent others from stealing goods or causing other kinds of economic damage. Given the importance of transit to our nation’s economic infrastructure, some have argued that the federal government should help pay for protective measures for transit. Transit officials we spoke with said that the federal government should provide additional funding for security needs. Fifty-nine percent of transit agencies in large- and small-urbanized areas responding to our survey said they plan to use federal funds to pay for their top three security priorities. Additionally, TSA and FTA officials said they would seek additional resources for transit security. The current authorizing legislation for federal surface transportation programs, TEA-21, expires on September 30, 2003. The reauthorization of TEA-21 provides an opportunity to examine stakeholders’ roles and responsibilities for transit security, including federal funding responsibilities. Since requests for funding transit security improvements will likely exceed available resources, an approach for distributing the federal dollars is needed. Transit agency officials we met with identified a number of possible federal funding criteria, including ridership levels, the population of the city the transit agency serves, identified vulnerabilities of the agency, the potential for mass casualties, and assets of the agency (e.g., tunnels and bridges). In general, the transit agency officials we spoke with believed the federal government should direct its dollars to agencies that are most at risk or most vulnerable to a terrorist attack—a criterion consistent with a risk management approach. 
A risk management approach is a systematic process to analyze threats, vulnerabilities, and the criticality (or relative importance) of assets to better support key decisions linking resources with prioritized efforts for results. Figure 9 illustrates that the highest risks and priorities emerge where the three parts of a risk management approach overlap. For example, transit infrastructure that is determined to be a critical asset, vulnerable to attack, and a likely target would be at most risk and therefore would be a higher priority for funding compared with infrastructure that was only vulnerable to attack. We have advocated using a risk management approach to guide federal programs and responses to better prepare against terrorism and other threats and to better direct finite national resources to areas of highest priority. FTA and TSA have not developed funding criteria or an approach to distribute funding for transit security. However, the agencies have the needed information to apply a risk management approach. For example, FTA obtains threat information from a variety of sources, including the FBI, and is in the process of identifying the most critical transit infrastructure. In addition, FTA has vulnerability information from the security assessments it recently performed. Moreover, according to TSA officials, TSA used a risk management approach to recently distribute grants to seaports and is researching best practices for using risk management assessments. In addition to a funding approach, goals and performance indicators need to be established to guide the federal government’s efforts in transit security. These critical components can influence all decisions—from launching new initiatives to allocating resources—as well as measure progress and ensure accountability. 
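The prioritization logic of a risk management approach can be sketched in code. The following Python sketch uses invented asset names and scores and a simple multiplicative scoring rule; it illustrates the general approach, not FTA's or TSA's actual methodology:

```python
# Illustrative sketch of a risk management approach to prioritizing
# transit security funding. Asset names, scores, and the multiplicative
# scoring rule are assumptions for illustration only.

def risk_score(threat, vulnerability, criticality):
    """Composite risk on a 0-1 scale; risk is highest where all
    three factors overlap."""
    return threat * vulnerability * criticality

# Hypothetical assets scored 0 (low) to 1 (high) on each factor.
assets = [
    # (name,               threat, vulnerability, criticality)
    ("Downtown tunnel",      0.8,   0.7,           0.9),
    ("Suburban bus depot",   0.3,   0.6,           0.4),
    ("Rail control center",  0.6,   0.5,           0.9),
]

# Rank assets so scarce federal dollars go to the highest risks first.
ranked = sorted(assets, key=lambda a: risk_score(*a[1:]), reverse=True)
for name, t, v, c in ranked:
    print(f"{name}: risk = {risk_score(t, v, c):.3f}")
```

Under a multiplicative rule like this, an asset must score high on all three factors to rank at the top, mirroring the overlap of threat, vulnerability, and criticality described above.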
The Congress has long recognized the need to objectively assess the results of federal programs, passing the Government Performance and Results Act of 1993 (commonly referred to as the Results Act). The Results Act required agencies to set strategic and annual goals, measure performance, and report on the degree to which goals are met. However, goals or outcomes of where the nation should be in terms of transit security or other national security programs have yet to be defined. For example, as we reported this summer, the National Strategy for Homeland Security does not establish a baseline set of performance goals and measures for assessing and improving preparedness. Moreover, the goals and measures for transit safety and security in the Department of Transportation’s current strategic plan were developed before September 11 and focus more on safety and crime than on terrorism. Consequently, they do not reflect today’s realities or the changing role of the federal government in transit security. Given the recent and proposed increases in security funding, such as the DOD supplemental that provided about $23.5 million for transit security, as well as the need for real and meaningful improvements in preparedness, establishing clear goals is critical to ensuring both a successful and a fiscally responsible effort. Moreover, performance indicators are needed to track progress toward these established goals. Another important consideration is the design of policy instruments to deliver assistance. Our previous work on federal programs suggests that the choice and design of policy instruments have important consequences for performance and accountability. The federal government has a variety of policy tools, including grants, loan guarantees, regulations, tax incentives, and partnerships, to motivate or mandate state and local governments or the private sector to help address security concerns. 
The choice and design of policy tools can enhance the government’s capacity to (1) target the areas of highest risk to better ensure that scarce federal resources address the most pressing needs, (2) promote the sharing of responsibilities among all parties, and (3) track and assess progress toward achieving national goals. Regardless of the tool selected, specific safeguards and clear accountability requirements, such as documentation of the terms and conditions of federal participation, are needed to protect federal interests. Securing the nation’s transit system is not a short-term or easy task. Many challenges must be overcome. FTA and the transit agencies we visited have made a good start in enhancing transit security, but more work is needed. Transit agencies’ calls for increased federal funding for security needs join the list of competing claims for federal dollars and, as a result, difficult trade-offs will have to be made. Since requests for federal assistance will undoubtedly exceed available resources, criteria will be needed for determining which transit security improvements merit any additional federal funds. To ensure that finite resources are directed to the areas of highest priority, the criteria should be in line with a risk management approach. In addition to helping distribute funds, establishing a risk-based funding approach would inform congressional decision making and demonstrate to the Congress that the funds will be managed efficiently. Moreover, as the federal government’s role in transit security expands—whether through additional funding or the setting of national standards by TSA—it is important that goals and performance indicators are established to guide the government’s efforts in transit security. These components are needed to ensure accountability and results. 
The upcoming reauthorization of the surface transportation authorizing legislation provides an opportunity to examine the role of the federal government, including its funding responsibilities, in transit security. However, transit agencies cannot wait for the new authorizing legislation to implement transit security improvements and are moving forward with improvements to enhance the security of their systems and passengers. The federal government could assist transit agencies as they press forward with their security initiatives by allowing all transit agencies, regardless of the size of the population they serve, to use urbanized area formula funds for security-related operating expenses. Although eliminating the prohibition on urbanized area funds would not provide additional funding, it would give agencies increased flexibility in financing transit security enhancements so that they could decide, for example, to use their federal dollars to pay for additional security patrols instead of a new rail car. This additional flexibility would be especially helpful given the high costs of transit security improvements and the declining revenues of many agencies. Additionally, the Department of Transportation could help transit agency officials obtain timely intelligence information so that they can make better informed decisions about their agencies’ emergency planning and response. The transit Information Sharing and Analysis Center is a positive step in providing some transit agencies timely intelligence information. The Department of Transportation could take other steps as well, including helping transit agency officials obtain security clearances, to further enhance the sharing of critical intelligence information with transit agencies.
To provide transit agencies greater flexibility in paying for transit security improvements, we recommend that the Secretary of Transportation consider seeking a legislative change to allow all transit agencies, regardless of the size of the urbanized area they serve, to use federal urbanized area formula funds for security-related operating expenses. To discourage the replacement of state and local funds with federal funds, any legislative change should include a requirement that transit agencies maintain their level of previous funding. To help transit agencies enhance transit security, to guide federal dollars to the highest priority, and to ensure accountability and results of the federal government’s efforts in transit security, we also recommend that the Secretary of Transportation take the following actions: Develop and implement strategies to help transit agency officials obtain timely intelligence information, including helping transit agency officials obtain security clearances. Develop clear, concise, transparent criteria for distributing federal funds to transit agencies for security improvements. The criteria should correspond to a risk management approach so that federal dollars are directed to the areas of highest priority. Establish goals and performance indicators for the department’s transit security efforts in order to promote accountability and ensure results. We provided the Department of Transportation with a draft of this report for review and comment. Department of Transportation officials, including the Deputy Administrator of the Federal Transit Administration, provided oral comments on the draft on November 22, 2002. The officials generally concurred with the report’s findings and conclusions. Moreover, they stated that the Department of Transportation will carefully consider our recommendations as it continues working to improve transit security. 
The officials also provided two minor clarifications on TSA’s authority over transit security and the expected issuance date of the memorandum of agreement between TSA and FTA, which we incorporated into the report. We conducted our review from May through October 2002 in accordance with generally accepted government auditing standards. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Secretary of Transportation, the Administrator of the Federal Transit Administration, the Director of the Office of Management and Budget, and interested congressional committees. We will make copies available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me on (202) 512-2834. Individuals making key contributions to this report are listed in appendix IV.

This appendix presents our survey instrument and overall results. Unless otherwise noted, we report the number of respondents for each question and the weighted percentage of respondents who selected each answer for each question.

… Angeles at (213) 830-1039 or dresbenm@gao.gov.

… transit agencies as well as conducting site visits at selected agencies. We recognize that there are great demands on your time; however, your cooperation is critical to our ability to provide current and complete information to Congress. Thank you in advance for your cooperation.

… statistical purposes, and our report will present results in summary form.

… property’s safety and security activities and needs. Please complete and mail your questionnaire by July 25, 2002. A pre-addressed postage-paid return envelope has been included. This questionnaire asks for information about your transit property’s safety and security activities.
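The weighted percentages reported for each survey question are the standard estimator for a stratified sample: each respondent's answer counts in proportion to the number of agencies its stratum represents in the population. A minimal Python sketch, with invented strata sizes and responses rather than GAO's actual sample design:

```python
# Weighted percentage estimate from a stratified sample.
# Strata sizes, sample counts, and "yes" counts are invented
# for illustration; they are not GAO's actual survey data.

strata = [
    # (population size, sampled, answered "yes")
    (100, 50, 40),   # e.g., large-urbanized-area agencies
    (400, 50, 10),   # e.g., small-urbanized-area agencies
]

# Each sampled agency carries a weight of population/sampled, so the
# weighted percentage estimates the share of ALL agencies answering "yes".
weighted_yes = sum(pop / n * yes for pop, n, yes in strata)
total_pop = sum(pop for pop, _, _ in strata)
weighted_pct = 100 * weighted_yes / total_pop

unweighted_pct = 100 * sum(y for _, _, y in strata) / sum(n for _, n, _ in strata)

print(f"weighted: {weighted_pct:.0f}%, unweighted: {unweighted_pct:.0f}%")
```

In this invented example the unweighted share (50 percent) overstates the population share (32 percent) because the smaller stratum was sampled at a much higher rate.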
Please use the following definitions for terms used throughout this questionnaire. Acts of extreme violence: Sabotage; the use of bombs, chemical or biological agents, or nuclear or radiological materials; or armed assault with firearms or other weapons by a terrorist or another actor that causes or may cause substantial damage or injury to persons or property in any manner. Emergency plan: Document that details an organization’s operating procedures, including the responsibilities of professionals for any event, human-caused or natural, that requires responsive action to protect life or property. Transit property: Also known as a transit agency, transit system, or transit authority. Includes all transit assets such as facilities, stations, and rolling stock. Total number of unlinked passenger trips: The number of passengers who board public transportation vehicles. Passengers are counted each time they board a vehicle no matter how many vehicles they use to travel from their origin to their destination. Section: Transit Property Characteristics 1. What transit services does your agency provide? (Check all that apply.) N=146 2. Rail other than subway (e.g., commuter or light rail) 5. Customized Community Transport (e.g., demand response or paratransit) 6. Other- Please specify: ________________________________________ 2. Please provide the total number of unlinked passenger trips your transit property provided (for all modes) in FY 2000 and FY 2001. (Enter number of trips. See definition of “total number of unlinked passenger trips” on page 1.) Section: Transit Properties and Acts of Extreme Violence 4. Which of the following, if any, has your transit property experienced in the past 5 years? (Check all that apply.) N=146 1. Reported bomb threat on transit property 2. Reported chemical or biological substance on transit property 3. Explosive device on transit property 4. Chemical or biological substance on transit property 5. Nuclear device on transit property 6. 
Detonation of explosive on transit property 8. Attempted or actual sabotage by employee or nonemployee 9. Breach of essential computer system 10. Shooting with multiple victims on transit property 11. Other - Please specify: _______________________________________ ------------------------------------ 12. Experienced none of the above 5. In your opinion, what is the likelihood of an act of extreme violence occurring on your transit property in the next 5 years? (Check one. See definition of “acts of extreme violence” on page 3. As likely as not 6. Which of the following assessments of safety and security, if any, has been carried out for your transit property during the last 5 years? (Check all that apply.) N=146 1. Assessment of transit system’s vulnerabilities to an act of extreme violence 2. Assessment of system’s ability to sustain operations during an act of extreme violence 3. Assessment of threat of extreme violence to key transit infrastructure (i.e. stations, power stations, bridges, tunnels, control centers, vehicles) 4. Assessment of safety and security but not specifically for acts of extreme violence 5. Other - Please specify: ________________________________________ ------------------------------------ 6. Have not assessed safety and security Please skip to Question 9. 7. Have the safety and security assessments identified items needing action? (Check one.) N=107 1. Yes Continue. 2. No Skip to Question 10. 8. Which of the following factors, if any, have limited your ability to complete or resolve action items identified by the assessment(s)? (Check all that apply.) N=82* 1. Lack of available technology or information on technology 2. [14%] Inadequate information on terrorist threats 3. Balancing security and safety priorities against other priorities 4. Insufficient staff time or availability to complete 5. Insufficient time since assessment 6. Balancing riders’ needs for accessibility with safety and security measures 7. Limited staff knowledge 8. 
Lengthy process to gain approval for action 10. Other - Please specify: _____________________ -------------------- 11. No limiting factors, all action items are completed or resolved *Because not all respondents answered this question, the estimates have larger sampling errors than for other questions. For this question, sampling errors are less than plus or minus 12 percent. If a safety and security assessment has been carried out, skip to question 10; if not, answer question 9. 9. For which of the following reasons has your transit property not yet conducted a safety and security assessment? (Check all that apply.) N=37 1. Do not think the transit system is at risk 2. Did not think the transit system was at risk in the past 3. Low priority given to assessments 4. Inadequate information on how to assess safety and security 5. Limited staff knowledge 6. Lack of staff time or availability 8. Limited availability of consultants 9. Other - Please describe: ________________________________________ *Because only 25 percent of respondents had not yet conducted a safety and security assessment, we cannot provide representative data for this question. 11. To what extent, if at all, have the local governments you serve incorporated your agency into their emergency plan(s)? (Check one. See definition of “emergency plan” on page 1.) N=146 1. Very great extent 5. Little or no extent 6. No basis to judge/Don’t know 12. Has your agency directly coordinated emergency planning at the local level (e.g., coordinated with local government emergency management agency or local law enforcement)? (Check one.) 1. Yes Continue. 2. No Skip to question 14. 13. To what extent, if at all, has your transit property encountered the following challenges when trying to coordinate emergency planning at the local level, including with law enforcement? (Check one box in each row.) 
(1) (2) (3) (4) (5) Lack of information sharing (N=111) Difficulty establishing joint emergency protocol (N=111) Inadequate information to identify appropriate counterparts (N=111) Lack of interest to coordinate (N=112) Lack of time to coordinate (N=111) Disagreement on funding priorities (N=111) Limited awareness of terrorist threat to transit (N=112) Lack of coordination among various local agencies (N=111) Insufficient funding (N=110) Other - Please describe: _________________________ ___________________________________________ 14. Has your agency directly coordinated emergency planning at the state level (e.g., coordinated with state emergency management agency or state law enforcement)? (Check one.) N=146 1. Yes Continue. 2. No Skip to question 16. 15. To what extent, if at all, has your transit property encountered the following challenges when trying to coordinate emergency planning at the state level, including with law enforcement? (Check one box in each row.) (1) (2) (3) (4) (5) Lack of information sharing (N=21) Difficulty establishing joint emergency protocol (N=22) Inadequate information to identify appropriate counterparts (N=22) Lack of interest to coordinate (N=22) Lack of time to coordinate (N=22) Disagreement on funding priorities (N=22) Limited awareness of terrorist threat to transit (N=22) Lack of coordination among various state agencies (N=22) Insufficient funding (N=21) Other - Please describe: _________________________ *Because most respondents had not coordinated emergency planning at the state level, we cannot provide representative data for this question. 16. Has your agency directly coordinated emergency planning at the regional level (e.g., coordinated with government entities or law enforcement agencies in your region)? (Check one.) N=146 1. Yes Continue. 2. No Skip to question 18. 3. Not applicable Skip to question 18. 17. 
To what extent, if at all, has your transit property encountered the following challenges when trying to coordinate emergency planning at the regional level, including with law enforcement? (Check one box in each row.) (1) (2) (3) (4) (5) Lack of information sharing (N=43) Difficulty establishing joint emergency protocol (N=43) Inadequate information to identify appropriate counterparts (N=43) Lack of interest to coordinate (N=43) Lack of time to coordinate (N=43) Disagreement on funding priorities (N=43) Limited awareness of terrorist threat to transit (N=43) Lack of coordination among various regional agencies (N=43) Insufficient funding (N=43) Other - Please describe: _________________________ *Because most respondents had not coordinated emergency planning at the regional level, we cannot provide representative data for this question. 18. Has your agency directly coordinated emergency planning at the federal level (e.g., coordinated with federal emergency management agency or federal law enforcement)? (Check one.) N=146 1. Yes Continue. 2. No Skip to question 20. 19. To what extent, if at all, has your transit property encountered the following challenges when trying to coordinate emergency planning at the federal level, including with law enforcement? (Check one box in each row.) (1) (2) (3) (4) (5) Lack of information sharing (N=18) Difficulty establishing joint emergency protocol (N=18) Inadequate information to identify appropriate counterparts (N=18) Lack of interest to coordinate (N=18) Lack of time to coordinate (N=18) Disagreement on funding priorities (N=18) Limited awareness of terrorist threat to transit (N=18) Lack of coordination among various federal agencies (N=17) Insufficient funding (N=17) Other - Please describe: _________________________ *Because most respondents had not coordinated emergency planning at the federal level, we cannot provide representative data for this question. 20. 
Does your transit property have an emergency plan(s) or emergency operating procedures? (Check one. See definition of “emergency plan” on page 1.) N=146 1. Yes Continue. 2. No Skip to Question 26. 21. Which of the following situations does your transit property’s emergency plan(s) specifically address? (Check all that apply.) N=96* 1. Hostage barricade situation 2. Control center defense 3. Reported bomb threat on transit property 4. Reported chemical or biological substance on transit property 5. Explosive device on transit property 6. Chemical or biological substance on transit property 7. Nuclear device on transit property 8. Detonation of explosive on transit property 10. Attempted or actual sabotage by employee or nonemployee 11. Breach of essential computer system 12. Shooting with multiple victims on transit property 14. Other - Please describe: __________________________________ *Because not all respondents answered this question, the estimates have larger sampling errors than for other questions. For this question, sampling errors are less than plus or minus 11 percent. 22. About what proportion of your agency’s personnel have received formal training, such as in-class training, on the emergency plan? (Check one box in each row.) (1) (2) (3) (5) (4) b. All other personnel *Because not all respondents answered this question, the estimates have larger sampling errors than for other questions. For this question, sampling errors are less than plus or minus 11 percent. 23. In general, about how often do agency personnel receive refresher training or updates on new procedures concerning your emergency plan? (Check one box in each row.) (1) (2) (3) (4) (5) (6) (N=96) b. All other personnel (N=95) 24. Does your transit property’s emergency plan specify coordination with any of the following agencies? (Check all that apply.) N=96* 1. Local police departments 2. Local fire/emergency medical service 3. 
Local government (e.g., mayor’s or city administrator’s office) 5. Local support/charity services 6. Other transit agencies 7. Other local transportation providers 8. State law enforcement 9. State/local emergency management agencies 10. State/local environmental protection agencies 11. Federal law enforcement (e.g., FBI) 12. Federal emergency management agencies 13. Federal transportation agencies (e.g., Federal Railroad Administration, Federal Transit Administration) 15. Other - Please describe: ___________________ ------------------------------- 16. As of this date, have not specified coordination with other agencies *Because not all respondents answered this question, the estimates have larger sampling errors than for other questions. For this question, sampling errors are less than plus or minus 11 percent. 25. Have you shared your transit property’s emergency plans with any of the following entities? (Check all that apply.) N=96 1. Local police departments 2. Local fire/emergency medical service 3. Local government (e.g., mayor’s or city administrator’s office) 5. Local support/charity services 6. Other transit agencies 7. Other local transportation providers 8. State law enforcement 9. State/local emergency management agencies 10. State/local environmental protection agencies 11. Federal law enforcement (e.g., FBI) 12. Federal emergency management agencies 13. Federal transportation agencies (e.g., Federal Railroad Administration, Federal Transit Administration) 15. Other - Please describe: ___________________ ---------------------------------- 16. As of this date, have not shared plans with any other entities If your transit property has an emergency plan, skip to question 27; if not, answer question 26. 26. For which of the following reasons has your transit property not yet developed an emergency plan? (Check all that apply.) N=50* 1. Do not think the transit system is at risk 2. Did not think transit system was at risk in the past 3. 
Low priority given to emergency planning
4. Inadequate information on how to do an
5. Limited staff knowledge
6. Lack of staff time or availability
8. Limited availability of consultants
9. Transit agency covered by local government plan
10. Other - Please describe: ___________________
*Because not all respondents answered this question, the estimates for this question have larger sampling errors than for other questions. For this question, sampling errors are less than plus or minus 15 percent.

Section: Funding Sources for Safety and Security Activities

27. Is your transit property allowed to use Federal Transit Administration (FTA) funds for operations? (Check one.) N=145

28. Please indicate the cycle of your agency’s fiscal year. (Check one.) N=146
1. January 1 to December 31
2. April 1 to March 31
3. July 1 to June 30
4. October 1 to September 30
5. Other - Specify: ______/_______ to ______/_______ (MM/DD) (MM/DD)

29. Please provide the following information about your total operating expenses and total operating funds spent on safety and security activities (e.g., administrative costs and personnel). (Round amount to the nearest dollar. If an estimate is provided, please check box.)
Total operating funds spent on safety and security
$0-1,000,000: 15%; $1,000,000-10,000,000: 52%; $10,000,000-25,000,000: 15%; $25,000,000-100,000,000: 10%; $100,000,000-1,000,000,000: 7%; $1,000,000,000 and above: 1%
$0-1,000,000: 15%; $1,000,000-10,000,000: 51%; $10,000,000-25,000,000: 15%; $25,000,000-100,000,000: 11%; $100,000,000-1,000,000,000: 7%; $1,000,000,000 and above: 1%
$0-1,000,000: 13%; $1,000,000-10,000,000: 52%; $10,000,000-25,000,000: 14%; $25,000,000-100,000,000: 13%; $100,000,000-1,000,000,000: 7%; $1,000,000,000 and above: 2% (projected)
*Because 40 percent or more of respondents were only able to provide estimates, we are unable to present reliable data for these questions. In addition, subsequent analyses raised other questions about data reliability.

30.
What sources does your transit agency use to fund your safety and security operating expenses? (Check all that apply.) N=146
2. Other federal funds (i.e., non-FTA funds)
5. Other (e.g., fare box revenue, loans) - Specify: ________________________________

31. What FTA programs, if any, does your transit property currently use to fund safety and security operating expenses? (Check all that apply.) N=144
1. Do not use FTA programs for safety and security operating expenses
-------------------------------
2. Urbanized Area Formula Program
3. Nonurbanized Area Formula Program
4. Elderly and Persons with Disabilities Program
5. Clean Fuels Formula Program
6. Over the Road Bus Accessibility Program
7. Alaska Railroad Program
8. Bus and Bus-Related Program
9. Fixed Guideway Modernization Program
10. New Starts Program
11. Job Access and Reverse Commute Program
12. Metropolitan Planning Program
13. State Planning and Research Program
14. National Planning and Research Program
15. Rural Transit Assistance Program
16. Other - Please describe: _____________________________________

32. Please provide the following information about your total capital expenses and total capital funds spent on safety and security activities (e.g., surveillance equipment and fencing). (Round amount to the nearest dollar. If an estimate is provided, please check box.)

What FTA programs, if any, does your transit property currently use to fund safety and security capital expenses? (Check all that apply.) N=144
1. Do not use FTA programs for safety and security capital expenses
-------------------------------
2. Urbanized Area Formula Program
3. Nonurbanized Area Formula Program
4. Elderly and Persons with Disabilities Program
5. Clean Fuels Formula Program
6. Over the Road Bus Accessibility Program
7. Alaska Railroad Program
8. Bus and Bus-Related Program
9. Fixed Guideway Modernization Program
10. New Starts Program
11. Job Access and Reverse Commute Program
12. Metropolitan Planning Program
13.
State Planning and Research Program
14. National Planning and Research Program
15. Rural Transit Assistance Program
16. Other - Please describe: _____________________________________

35. Has your transit property identified funding needed for safety and security projects in the near future? (Check one.) N=146
1. Yes (Continue.)
2. No (Skip to Question 37.)

36. What is the estimated total dollar amount of these identified needs over the next 3 years? N=73
$ * or Do not know**
*Because about 40 percent of the respondents could not estimate a total dollar amount for their identified needs, we cannot provide representative data for this question.
**Because not all respondents answered this question, the estimates have larger sampling errors than for other questions. For this question, sampling errors are less than plus or minus 12 percent.

37. Currently, how much of a funding priority is each of the following safety and security needs? (Check one box in each row.) (1) (2) (3) (4) (5) (6)
a. Enhanced communication system(s) (e.g., 2-way radios)
c. Chemical, biological, or radiological detection systems
d. Clear, impact-resistant sheeting for transit vehicle windows
e. Trespasser intrusion detection systems for tunnel environments
f. Application of Crime Prevention Through Environmental Design (CPTED) engineering concepts into new facilities and
i.

Section: Transit Agency Preparation

41. Please provide an answer in each column:
a. Prior to September 11, 2001, what steps had your transit property taken to improve its safety and security?
b. Since September 11, 2001, what steps has your transit property taken to improve its safety and security?
(Check “yes” or “no” in each column.)
Coordinated with local and state government entities, including
Coordinated with other transit agencies
Conducted background checks on all employees
Increased visibility of facility personnel (e.g., personnel wear brightly colored vests)
Required staff to display photo ID at all times
Tracked employee sick days as an indicator of potential hazards
Purchased security technology (e.g., surveillance equipment)
Purchased security infrastructure (e.g., fencing, lighting)
Made computer system more secure (“hardened” computer system)
Conducted public education/awareness campaign for transit
Developed after-event media relations protocol
Tracked reports of sick riders as an indicator of potential
Other(s) - Please describe: ____________________________________________________

42. Please use the space below to provide any additional comments regarding the survey or your system’s transit safety and security.

Thank you very much for your assistance.

To address our objectives, we visited 10 transit agencies across the country, including the Capital Metropolitan Transportation Authority in Austin; Chicago Transit Authority; Central Florida Regional Transit Authority in Orlando; Los Angeles County Metropolitan Transportation Authority; Minneapolis-St. Paul Metropolitan Council; New York City Transit; Regional Transportation District in Denver; San Francisco Bay Area Rapid Transit; San Francisco Municipal Railway; and Washington Metropolitan Area Transit Authority in the District of Columbia. We selected these agencies because they represent different geographical areas and operate transit systems of different sizes and modes. (See fig. 10 and table 1.) During our site visits, we interviewed key officials from the transit agencies and the respective city governments and reviewed the transit agencies’ emergency plans. In addition to our site visits, we surveyed a sample of 200 transit agencies.
The population from which we drew our sample consisted of all transit agencies throughout the nation that are eligible to receive federal urbanized area formula funding, according to the most up-to-date list of eligible agencies provided by the National Transit Database. The results of our mail survey are generalizable to this population, which we refer to as our sample population. We stratified our sample population into two groups—agencies that serve urbanized areas with a population of 200,000 or more (large urbanized areas) and agencies that serve urbanized areas with a population of 50,000 to 199,999 (small urbanized areas). We distinguished between these two strata because agencies that operate in large urbanized areas are prohibited from using federal urbanized area formula funds for operating expenses, whereas agencies in small urbanized areas are not. We randomly selected 100 agencies from each stratum to survey. Our overall survey response rate was 78 percent. However, we excluded 9 surveys from our analysis after determining that these transit agencies were outside the scope of our review for one of the following reasons: they had gone out of business (3); they were subsidiaries of other agencies included in our sample (2); or they did not provide bus, customized community transport, rail, subway, or ferryboat services (e.g., they only provide vanpool service) (4). The reported survey results are based on the responses of the subpopulation of 146 agencies within the scope of our review. To help design our survey instrument, we reviewed surveys on transit safety and security conducted by FTA, the American Public Transportation Association (APTA), and the Transit Cooperative Research Program. We also obtained input from Department of Transportation, FTA, and transit agency officials and representatives from APTA and the Mineta Transportation Institute.
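The two-stratum design described above can be sketched as follows. The sampling frame here is synthetic (the actual frame was the National Transit Database list of eligible agencies), and the agency names and populations are invented for illustration.

```python
import random

# Synthetic sampling frame of (agency name, urbanized-area population).
# The actual frame was the National Transit Database list of agencies
# eligible for urbanized area formula funding.
frame = [(f"Small-{i}", 50_000 + 1_000 * i) for i in range(150)] + \
        [(f"Large-{i}", 200_000 + 10_000 * i) for i in range(150)]

# Stratify: large urbanized areas (population 200,000 or more) vs.
# small urbanized areas (population 50,000 to 199,999).
large = [a for a in frame if a[1] >= 200_000]
small = [a for a in frame if 50_000 <= a[1] < 200_000]

# Randomly select 100 agencies from each stratum, as in the survey design.
random.seed(1)
sample = random.sample(large, 100) + random.sample(small, 100)
print(len(sample))  # 200
```

Stratifying before sampling guarantees that both groups are represented in fixed numbers, which a single simple random sample of 200 would not.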
After developing the survey instrument, we pretested the content and format of the survey with officials from several transit agencies and made necessary revisions. All returned questionnaires were reviewed, and we called respondents to obtain information when questions were not answered or clarification was needed. All data were double-keyed and verified during data entry, and computer analyses were performed to identify any inconsistencies or other indications of error. A copy of the mail questionnaire is included in appendix I. All sample surveys are subject to sampling error—that is, the extent to which the survey results differ from what would have been obtained if the whole population had been observed. Measures of sampling error are defined by two elements, the width of the confidence intervals around the estimate (sometimes called the precision of the estimate) and the confidence level at which the intervals are computed. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Moreover, because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval (e.g., plus or minus 5 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95-percent confident that the confidence intervals for each of the mail survey questions include the true values in the sample population. All percentage estimates from the mail survey have sampling errors of plus or minus 10 percentage points or less, unless otherwise noted. In addition, other potential sources of error associated with surveys, such as misinterpretation of a question and nonresponse, may be present, although nonresponse errors should be minimal.
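As a rough sketch of the sampling-error arithmetic described above, a 95-percent confidence interval for a proportion estimated from a simple random sample of a finite population can be computed as follows. The population size of 500 is an assumed figure for illustration, not the actual size of the survey frame, and this sketch ignores the stratified design.

```python
import math

def proportion_ci(p_hat, n, N, z=1.96):
    """Two-sided 95% confidence interval for a proportion estimated
    from a simple random sample of n units drawn from a finite
    population of N, using the finite population correction."""
    fpc = (N - n) / (N - 1)
    se = math.sqrt(p_hat * (1 - p_hat) / n * fpc)
    return p_hat - z * se, p_hat + z * se

# Illustrative: 66 percent of 146 in-scope respondents reported having
# an emergency plan; the population size of 500 is assumed.
low, high = proportion_ci(0.66, 146, 500)
print(f"95% CI: {low:.3f} to {high:.3f}")
```

With these assumed inputs the half-width comes out well under the 10 percentage points quoted for the survey's estimates.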
In addition to our site visits and survey, we analyzed agency documents and interviewed transit agency officials, industry representatives, and academic experts. We analyzed FTA budget data, safety and security documents, and applicable statutes and regulations. We reviewed research on terrorism and attended transit security forums sponsored by APTA and FTA. Finally, we interviewed FTA, TSA, and Department of Transportation officials and representatives from APTA, the National Governors Association, the Mineta Transportation Institute, RAND Corporation, the University of California at Los Angeles, and the Amalgamated Transit Union. We conducted our review from May through October 2002 in accordance with generally accepted government auditing standards. This appendix provides our analysis of the responses we received to selected questions from our survey of 200 transit agencies in the United States. (See app. I for the overall survey results and our survey instrument.) This analysis provides information about the characteristics, including both general and safety- and security-related characteristics, of the transit agencies surveyed. Differences in the characteristics of transit agencies in large urbanized areas (populations of 200,000 or more) and transit agencies in small urbanized areas (populations between 50,000 and 199,999) are also presented. The transit agencies we surveyed provide a variety of transit services, including bus, rail, and ferryboat. Although a mix of services is provided by the surveyed transit agencies, bus is by far the most common transit service provided. (See fig. 11.) Our survey results also indicate that there are some differences between transit agencies in large urbanized areas and transit agencies in small urbanized areas. For example, transit agencies in large urbanized areas offer more types of services than transit agencies in small urbanized areas.
Additionally, transit agencies in large urbanized areas were more likely to provide rail services than transit agencies in small urbanized areas and were the only agencies to provide subway service. The transit agencies we surveyed reported that they provided almost 10 billion unlinked passenger trips in fiscal years 2000 and 2001. Specifically, according to the agencies, they provided a total of 4.7 billion unlinked passenger trips in fiscal year 2000 and 4.9 billion trips in fiscal year 2001. Our survey results also indicate that transit agencies in large urbanized areas carry more passengers than transit agencies in small urbanized areas. For example, the majority of transit agencies in small urbanized areas reported that they provided fewer than 1 million passenger trips in fiscal year 2001, while the majority of transit agencies in large urbanized areas provided more than 1 million passenger trips. Moreover, 7 percent of the transit agencies in large urbanized areas stated that they provided more than 100 million passenger trips in fiscal year 2001. No transit agency that we surveyed in a small urbanized area served that number of passengers. (See fig. 12.) According to our survey results, transit agencies in large urbanized areas typically have bigger operating and capital budgets than transit agencies in small urbanized areas. (See fig. 13.) In particular, 57 percent of the transit agencies in large urbanized areas have operating budgets of more than $10 million, while 10 percent of transit agencies in small urbanized areas have operating budgets of comparable size. Additionally, 32 percent of the transit agencies in large urbanized areas have capital budgets of more than $10 million. In comparison, none of the transit agencies in small urbanized areas that we surveyed had capital budgets of that magnitude. 
Most transit agencies we surveyed contract with a security service (35 percent), have established agreements with local or state police (34 percent), or both, to provide security for their property. However, our survey did reveal some differences between transit agencies in large and small urbanized areas in terms of their transit properties’ security, as shown in figure 14. For example, of the transit agencies we surveyed, only those agencies in large urbanized areas had their own transit police officers. Our survey results show that all transit agencies we surveyed rely on a variety of federal, state, and local sources to fund safety and security expenses. As figure 15 shows, transit agencies in large and small urbanized areas identified local funds as the most common source of funding for safety and security operating expenses. A notable difference between transit agencies in large and small urbanized areas appears in their use of FTA funds. In particular, 62 percent of agencies in small urbanized areas identified FTA funds as a source of funds for safety and security operating expenses, while 23 percent of agencies in large urbanized areas identified this as a source. In contrast to safety and security operating expenses, we found that the most common source of funds for safety and security capital expenses is FTA funds. (See fig. 16.) The majority of the transit agencies we surveyed do not believe they are likely targets for acts of extreme violence. In particular, 62 percent of transit agencies we surveyed believe they are unlikely or very unlikely to be the target of an act of extreme violence in the next 5 years. By contrast, 6 percent of the transit agencies we surveyed consider the likelihood of an act of extreme violence on their property likely or very likely. Thirty-one percent of the transit agencies we surveyed believe they are as likely as not to experience an act of extreme violence on their property in the next 5 years.
In addition, the majority of the transit agencies we surveyed have not experienced an act of extreme violence on their property in the past 5 years. Specifically, 66 percent of the transit agencies we surveyed said that they have not experienced acts of extreme violence on their systems. However, the agencies that have experienced acts of extreme violence have encountered a variety of situations. (See fig. 17.) Seventy-five percent of the transit agencies we surveyed have conducted an assessment of their transit system. As figure 18 shows, the majority of the assessments have focused on general safety and security issues, not necessarily on the transit systems’ vulnerability to a terrorist threat or act of extreme violence. Seventy-seven percent of the agencies reported that their assessments have identified items needing action; however, the majority of these agencies indicated that a variety of factors have limited their ability to resolve the identified problems. According to these transit agencies, insufficient funding, the need to balance security and safety priorities with other priorities, and insufficient staff time or availability to complete action items were the top reasons why identified needs have not been addressed. Sixty-six percent of all surveyed agencies have emergency plans. In general, our survey results indicate that the majority of the agencies’ emergency plans describe protocols for a number of emergency situations, such as natural disasters, reported bomb threats, and explosive devices. Moreover, our survey results also indicate that the majority of all agencies’ plans specify coordination with other entities, such as local police departments, and most agencies have shared their plans with other entities.
However, our survey results reveal that transit agencies in large urbanized areas have more comprehensive emergency plans than agencies in small urbanized areas, in terms of both the level of coordination with other entities and the number of scenarios addressed by the plans. For example, as figure 19 shows, the emergency plans of agencies in large urbanized areas specify coordination with the media more often than plans of agencies in small urbanized areas. Furthermore, as figure 20 shows, the emergency plans of agencies in large urbanized areas address more emergency situations—such as an explosive device on the transit property—than the emergency plans of agencies in small urbanized areas. In addition to those named above, Karin Bolwahnn, Nikki Clowers, Michelle Dresben, Elizabeth Eisenstadt, Michele Fejfar, David Hooper, Wyatt R. Hundrup, Hiroshi Ishikawa, and Sara Ann Moessbauer made key contributions to this report.
About one-third of terrorist attacks worldwide target transportation systems, and transit systems are the mode most commonly attacked. In light of the history of terrorism against mass transit and the terrorist attacks on September 11, GAO was asked to examine challenges in securing transit systems, steps transit agencies have taken to improve safety and security, and the federal role in transit safety and security. To address these objectives, GAO visited 10 transit agencies and surveyed a representative sample of transit agencies, among other things. Transit agencies have taken a number of steps to improve the security of their systems since September 11, such as conducting vulnerability assessments, revising emergency plans, and training employees. Formidable challenges, however, remain in securing transit systems. Obtaining sufficient funding is the most significant challenge in making transit systems as safe and secure as possible, according to GAO survey results and interviews with transit agency officials. Funding security improvements is problematic because of high security costs, competing budget priorities, tight budget environments, and a provision precluding transit agencies that serve areas with populations of 200,000 or more from using federal urbanized area formula funds for operating expenses. In addition to funding challenges, certain characteristics of transit agencies make them both vulnerable to attack and difficult to secure.
For example, the high ridership and open access of some transit systems make them attractive for terrorists but also make certain security measures, like metal detectors, impractical. Moreover, because all levels of government and the private sector are involved in transit decisions, coordination among all the stakeholders can pose challenges. While transit agencies are pursuing security improvements, the federal government's role in transit security is expanding. For example, the Federal Transit Administration (FTA) launched a multipart security initiative and increased funding of its safety and security activities after September 11. In addition, the Aviation and Transportation Security Act gave the Transportation Security Administration (TSA) responsibility for the security of all transportation modes, including transit. TSA anticipates issuing national standards for transit security. As the federal government's role expands, goals, performance indicators, and funding criteria need to be established to ensure accountability and results for the government's efforts.
The 7(a) loan program, which is authorized by Section 7(a) of the Small Business Act, is SBA’s largest business loan program. It is intended to serve small business borrowers who otherwise cannot obtain financing under reasonable terms and conditions from the private sector. In administering the 7(a) program, SBA has evolved from making guaranteed loans directly to depending on lending partners, primarily banks. Under 7(a), SBA provides guarantees of up to 85 percent on loans made by participating lenders. Within 7(a), there are three classifications of lenders—regular, certified, and preferred. SBA evaluates and grants preferred lender status to 7(a) lenders after receiving nominations and reviews from its 70 district offices and a regional processing center. Of the three categories, preferred lenders have the most autonomy in that they can make loans without prior SBA review or approval. Most preferred lenders are banks that have their own safety and soundness regulators, such as the Office of the Comptroller of the Currency. Those regulators, however, may not focus on the 7(a) loans that SBA guarantees when they examine the bank. The other preferred lenders, which are SBLCs, have no regulator other than SBA—making SBA oversight more critical. As of August 2002, SBA had over 400 preferred lenders. To give you an idea of this program’s scope, in fiscal year 2002, 7(a) loan approvals totaled approximately $12.2 billion, of which preferred lenders approved $6.7 billion. However, preferred lending activity is concentrated in a few larger institutions. Less than 1 percent of 7(a) lenders account for more than 50 percent of 7(a) dollar volume outstanding. According to SBA, most of these lenders are preferred lenders. Two offices within SBA have primary responsibility for 7(a) lender oversight—the Office of Lender Oversight (OLO) and the Office of Financial Assistance (OFA). 
OLO is responsible for many oversight functions, such as managing all headquarters and field office activities regarding lender reviews. However, OFA has retained some oversight responsibilities. OFA’s current role in lender oversight is to provide final approval of lenders’ PLP status. Lenders are granted PLP status in specific SBA districts for a period of 2 years or less. OFA collects information about the lender prepared by the Sacramento Processing Center, with input from one or more of SBA’s 70 district offices, and decides whether to renew a lender’s PLP status or to grant status in an additional district. OFA may also discontinue a lender’s PLP status. Other lenders participating in the 7(a) program are subject to a different oversight regime. Specifically, SBA divides SBLC program functions between OLO and OFA. OLO is responsible for SBLC on-site examination, and OFA handles day-to-day program management and policymaking. Ultimate responsibility for enforcement of corrective actions rests with the Office of Capital Access (OCA). As participants in the 7(a) program, SBLCs are subject to the same review requirements as other 7(a) lenders, and they are also subject to safety and soundness oversight by SBA. SBA has identified goals for its lender oversight program that are consistent with appropriate standards for an oversight program; however, SBA has not yet established a program that is likely to achieve them. Since our last review, SBA has made progress in developing its lender oversight program, but there are still areas in need of improvement if SBA is to develop a successful program. SBA has highlighted risk management in its strategy to modernize the agency; however, PLP reviews are not designed to evaluate financial risk, and the agency has been slow to respond to recommendations made for improving its monitoring and management of financial risk—posing a potential risk to SBA’s portfolio.
PLP reviews are designed to determine lender compliance with SBA regulations and guidelines, but they do not provide adequate assurance that lenders are sufficiently assessing eligibility and creditworthiness of borrowers. Although SBA has identified problems with preferred lender and SBLC lending practices, it has not developed clear policies that would describe enforcement responses to specific conditions. Thus, it is not clear what actions SBA would take to ensure that preferred lenders or SBLCs address lending program weaknesses. Although the process for certifying lenders for PLP status—another means by which SBA oversees lenders—has become better defined and more objective, some lenders told us they continue to experience confusing and inconsistent procedures during this process due to varying recommendations from field offices. Since our June 1998 report, SBA has responded to a number of recommendations for improving lender oversight by developing guidance, establishing OLO, and conducting more reviews. SBA developed “Standard Operating Procedures” (SOP) for oversight of SBA’s lending partners and the “Loan Policy and Program Oversight Guide for Lender Reviews” in October 1999. SBA established OLO in fiscal year 1999 to coordinate and centralize lender review processes for PLP and SBLC oversight. OLO created a “Reviewer Guide” for personnel engaged in PLP reviews and conducts training for all SBA staff involved in preferred lender reviews. OLO officials said that to effectively oversee and monitor SBA lenders, they also evaluate lender-generated risk to the SBA portfolio, work with SBA program offices to manage PLP oversight operations, and plan to conduct regular and systematic portfolio analysis using a new loan monitoring system. Additionally, to minimize the number of visits SBLCs receive during a year, OLO combined PLP reviews with SBLC examinations performed by the Farm Credit Administration (FCA).
In another effort to improve the lender review process, SBA developed an automated, 105-item checklist that is designed to make its analysis more objective. The questionnaire addresses lender organizational structure, policies, and controls, but the answers are provided in a “yes-no” format and generally refer to the presence or absence of specific documents. SBA noted that the format makes assessments of lenders more consistent and objective. However, we note that without a more substantive method of evaluating lender performance, this approach does not provide a meaningful assessment. SBA also has increased the number of PLP reviews performed. In June 1998, we reported that SBA had not reviewed 96 percent of 7(a) lenders, including preferred lenders, in the districts we visited. SBA conducted 385 reviews of its 449 preferred lenders in its 2001-2002 review year. While elements of SBA’s oversight program touch on the financial risk posed by preferred lenders, including SBLCs, weaknesses in the program limit SBA’s ability to focus on, and respond to, current and future financial risk to its portfolio. Neither the PLP review process nor SBA’s off-site monitoring efforts adequately focus on the financial risk posed by preferred and other lenders to SBA. SBA oversight of SBLCs is charged with monitoring how SBLCs administer their credit programs, identifying potential problems, and keeping SBA losses to an acceptable level. However, SBA’s slow progress in reporting examination results and implementing other program improvements limits the effectiveness of its SBLC oversight. SBA officials stated that PLP reviews are strict compliance reviews that are not designed to measure the lenders’ financial risk. Our review and that of SBA’s Inspector General (IG) confirmed this.
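The limitation of the yes/no checklist format described above can be made concrete with a small sketch; the checklist items and the scoring function are invented for illustration, not SBA's actual instrument.

```python
# Invented compliance checklist in the yes/no style discussed above.
# A binary instrument can report what share of documents or controls
# exist, but nothing about how well they actually work.
checklist = {
    "written_credit_policy_on_file": True,
    "loan_officer_training_documented": True,
    "internal_loan_review_function_exists": False,
}

def compliance_rate(checklist):
    """Share of checklist items answered 'yes'."""
    return sum(checklist.values()) / len(checklist)

print(round(compliance_rate(checklist), 2))  # 0.67
```

Two lenders with identical rates could pose very different financial risk, which is the gap a more substantive evaluation would close.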
The PLP review serves as SBA’s primary internal control mechanism to determine whether preferred lenders are processing, servicing, and liquidating loans according to SBA standards and whether such lenders should participate in the programs. While the review has questions that touch on the financial risk of a given loan, review staff are not required to answer them, and SBA guidance explicitly states that the answers to the questions are for research purposes only and are not to be considered in making any determinations about the lender. By not including an assessment of the financial risk posed by individual lenders during PLP reviews, SBA is missing an opportunity to gather information that could help predict PLP lenders’ future performance, thereby better preparing SBA to manage the risk to its portfolio. The SBA IG also suggested that financial risk and lender-based risk should be considered as part of a comprehensive oversight program. SBA’s off-site monitoring efforts do not adequately assess the financial risk posed by PLP and other lenders. SBA currently uses loan performance benchmarking and portfolio analysis as its primary off-site monitoring tools. While SBA officials stated that loan performance benchmarks are based on financial risk and serve as a measure to address a lender’s potential risk to the SBA portfolio, we found that the benchmarks were not consistently used for this purpose. In addition, we found that OLO does not perform routine analysis of SBA’s portfolio to assess financial risk. At the time of our review, staff produced ad hoc reports to analyze aggregate lending data to look for trends and to try to anticipate risk. Currently, FCA staff responsible for SBLC safety and soundness examinations also perform PLP reviews at SBLCs—these reviews are the same ones that SBA contractors perform at preferred lenders and employ the same review checklist.
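In simplified form, the loan performance benchmarking described above might flag lenders whose delinquency rate exceeds a portfolio benchmark. The lender data and the 8 percent threshold below are invented for illustration, not actual SBA figures.

```python
# Invented lender-level loan data; not actual SBA portfolio figures.
portfolio = {
    "Lender A": {"loans": 120, "delinquent": 3},
    "Lender B": {"loans": 80, "delinquent": 12},
    "Lender C": {"loans": 200, "delinquent": 9},
}

BENCHMARK = 0.08  # assumed delinquency-rate threshold, not an SBA figure

def flag_lenders(portfolio, threshold):
    """Return (lender, rate) pairs whose delinquency rate exceeds the
    benchmark -- the kind of off-site screen that could feed a
    follow-up on-site review."""
    flagged = []
    for name, data in portfolio.items():
        rate = data["delinquent"] / data["loans"]
        if rate > threshold:
            flagged.append((name, round(rate, 3)))
    return flagged

print(flag_lenders(portfolio, BENCHMARK))  # [('Lender B', 0.15)]
```

A screen like this only has value if it is run routinely and its flags consistently trigger follow-up, which is the gap the text identifies.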
Upon the completion of its examinations, FCA provides a draft report to SBA for comment, incorporates any changes, and then provides a final report to SBA, which, in turn, issues a final report to the SBLC. SBA has not eliminated weaknesses in SBLC oversight that we and the SBA IG have cited. Both we and the SBA IG found that final SBLC examination reports were not issued in a timely manner. SBA’s IG reported that final reports for fiscal year 2001 SBLC examinations were not issued until February 2002, 10 months after OLO received the first draft report from FCA. Our work confirmed these findings. We found that OLO does not maintain standards for the timely issuance of examination reports. OLO has recently developed draft customer service goals calling for SBLC examination reports to be finalized within 90 days of receipt of a draft report from FCA; however, as of August 2002, none of the examination reports from fiscal year 2002 had been issued. According to the IG, because of the delays in finalizing the reports and SBA’s policy to delay any necessary enforcement actions until final reports are issued, two SBLCs were allowed to continue operating in an unsafe and unsound manner, despite early identification of material weaknesses during fiscal year 2001 examinations. The effectiveness of any examination program is measured, to a large degree, by its ability to identify and promptly remedy unsafe and unsound conditions. By delaying reporting and remedial action, SBA has significantly limited the effectiveness of its SBLC oversight program. SBA also has been slow to implement recommendations from FCA for improving the SBLC examination program. In addition to examining SBLCs, FCA was asked by SBA to provide recommendations for changes in the SBLC program. Each year FCA provides its views in a comprehensive report. FCA’s September 1999 report made 15 recommendations, 12 of which SBA agreed to implement.
We reviewed the reports for fiscal years 2000 and 2001, in which FCA made additional recommendations with which SBA agreed. Yet the 2001 report still lists 8 recommendations from the 1999 report and 2 from the 2000 report as unimplemented. SBA officials explained that limited resources have contributed to the delay in implementing many of these recommendations. Assessing whether a borrower is eligible for 7(a) assistance is difficult because the requirements are broad and variable, making a qualitative assessment of a lender’s decision by a trained reviewer all the more important. SBA regulations require a lender to attest to the borrower’s demonstrated need for credit by determining that the desired credit is unavailable to the borrower on reasonable terms and conditions from nonfederal sources without SBA assistance. These “credit elsewhere” provisions are particularly difficult to assess and must be determined prior to assessing other credit factors. SBA guidance also requires preferred lenders to certify that credit is not otherwise available and to retain the explanation in the borrower file. SBA does provide guidance on factors that may contribute to a borrower being unable to receive credit elsewhere. Factors that lenders should consider include the following: the business requires a loan with a longer maturity than the lender’s policy permits; the requested loan exceeds either the lender’s legal limit or policy limit regarding amounts loaned to one customer; the lender’s liquidity depends upon selling the guaranteed portion of the loan on the secondary market; the collateral does not meet the lender’s policy requirements because of its uniqueness or low value; the lender’s policy normally does not allow loans to new ventures or businesses in the applicant’s industry; and any other factors relating to the credit that, in the lender’s opinion, cannot be overcome except by receiving a guaranty.
Based on these criteria, the credit elsewhere test could always be satisfied by structuring an SBA-guaranteed loan so that its terms and conditions differ from those available on the commercial market. As a result, these loans could be made available to businesses that could obtain credit elsewhere on reasonable market terms and conditions, although not the same terms and conditions offered with the SBA guarantee. SBA officials stated that the credit elsewhere requirements are designed to be broad so as not to limit a lender’s discretion and to allow flexibility, depending upon geographic region, economic conditions, and type of business. For example, SBA officials said that when credit is more readily available, businesses that require SBA assistance might be held to a different standard, thereby making it more difficult to obtain the SBA guarantee than when credit is tighter. Nonetheless, the flexibility that lenders have, along with the difficulty in assessing lenders’ credit elsewhere decisions, further supports the need for developing specific criteria for a credit elsewhere standard. Such criteria would facilitate a more qualitative assessment of eligibility decisions made by preferred lenders. Moreover, because it is a cursory review of documents in the file, the PLP review also does not qualitatively assess a lender’s credit decision. Preferred lenders are required to perform a thorough and complete credit analysis of the borrower and establish repayment terms on the loan in the form of a credit memorandum. SBA guidance requires, at a minimum, discussion in the credit memorandum of a borrower’s capitalization or proof that the borrower will have adequate capital for operations and repayment, as well as capable management ability. SBA officials said that lender review staff focus on the lender’s process for making credit decisions rather than the decision itself.
SBA officials said that it is unlikely that the review would result in a determination that the loan should not have been made. An SBA official stated that review staff would not perform an in-depth financial analysis to assess the lender’s credit decision and that a lender’s process would only be questioned in the case of missing documentation. For example, review staff would cite a lender if it did not document the borrower’s repayment ability. Some lenders we interviewed criticized the lack of technical expertise of contract review staff. The lenders stated that review staff were unable to provide additional insight into material compliance issues during the review because they lacked technical knowledge of the underwriting process and requirements. For example, one lender said he was cited for not signing a credit elsewhere statement, but the reviewer did not evaluate a financial statement in the file substantiating the credit elsewhere assessment. To improve PLP and SBLC oversight, we recommended that SBA incorporate strategies into its review process to adequately measure the financial risk lenders pose to SBA, develop specific criteria to apply to the credit elsewhere standard, and perform qualitative assessments of lender performance and lending decisions. SBA stated that it believes the existing statutes, regulations, policies, and procedures provide sufficient guidance to lenders. These are the same sources we analyzed and found to be broad, making a qualitative assessment of a lender’s decisions difficult. SBA has responded that it does measure financial risk of SBLCs through the safety and soundness examinations conducted by FCA and that the PLP lender reviews do estimate some degree of financial risk. We had noted both of these measures in our December 9, 2002, report. We also noted that SBA had not acted on suggestions that FCA had made to enhance SBA’s oversight of SBLCs.
Only 3 of the 15 preferred lender review reports that we reviewed provided any evidence of such an assessment, and SBA’s review guidance does not require such a review. Thus, our recommendations remain open. SBA has authority to suspend or revoke a lender’s PLP status for reasons that include unacceptable loan performance; failure to make enough loans under SBA’s expedited procedures; and violations of statutes, regulations, or SBA policies. However, SBA has not developed policies and procedures that describe the circumstances under which it will suspend or revoke PLP authority or how it will do so. SBA guidance does not include specific follow-up procedures for PLP lenders that receive poor review ratings, although it does describe recommended patterns of follow-up. SBA officials said that, in practice, they request action plans to address deficiencies for any ratings of “minimally in compliance” and “not in compliance.” In addition, lenders with ratings of not in compliance are to receive follow-up reviews. SBA officials explained that because they want to encourage lenders to participate in PLP, they prefer to work out problems with lenders and therefore rarely terminate PLP status. Where a lender persists in noncompliance, SBA generally allows the lender’s PLP status to expire rather than terminating it. However, without clear enforcement policies, PLP lenders cannot be certain of the consequences of particular ratings, and they may not take the oversight program seriously. In November 2000, we reported that the SBLC examination program could be strengthened by clarifying SBA’s regulatory and enforcement authority regarding SBLCs. Although it has the authority to do so, SBA has yet to develop, through regulation, clear policies and procedures for taking supervisory actions. By not expanding the range of its enforcement actions—which it can do by promulgating regulations—SBA is limited in the actions it can take to remedy unsafe and unsound conditions in SBLCs.
SBA regulations provide only for revocation or suspension of an SBLC license for a violation of law, regulation, or any agreement with SBA. Without less drastic measures, SBA has a limited capability to respond to unsatisfactory conditions in an SBLC. Unlike SBA, federal bank and thrift regulators use an array of statutorily defined supervisory actions, short of suspension or revocation of a financial institution’s charter or federal deposit insurance, if an institution fails to comply with regulations or is unsafe or unsound. We recommended that SBA provide, through regulation, clear policies and procedures for taking enforcement actions against preferred lenders and SBLCs in the event of continued noncompliance with SBA’s regulations. Most recently, SBA has responded that it does have clear policies and procedures but that it intends to expand upon them. We will continue to follow up on and monitor SBA’s response to this recommendation. SBA’s preferred lender certification process begins when a district office serving the area in which a lender’s office is located nominates the lender for preferred status or when a lender requests a field office to consider it for PLP status. The district office then requests performance data regarding the lender from SBA’s Sacramento Processing Center. The processing center provides the district office with data required to fill in part of a worksheet developed for the nomination process. The district office sends the completed worksheet, along with other required information, back to the processing center. The processing center analyzes the nomination and sends it, with a recommendation, to OFA for final decision. According to SBA’s SOP, in making its decision, OFA considers whether the lender (1) has the required ability to process, close, service, and liquidate loans; (2) has the ability to develop and analyze complete loan packages; and (3) has a satisfactory performance history with SBA.
OFA also considers whether the lender shows a substantial commitment to SBA’s “quality lending goals,” has the ability to meet the goals, and demonstrates a “spirit of cooperation” with SBA. OFA and district office staff said that although district offices do not provide final approval of PLP status for lenders in their districts, they generally play an important role, and district input is given significant weight. Most of the district office staff we interviewed believed that they had considerable influence on OFA’s decision regarding a lender’s PLP status. A PLP lender may request an expansion of the territory in which it can process PLP loans by submitting a request to the Sacramento Processing Center. The processing center will obtain the recommendation of each district office in the area into which the PLP lender would like to expand its PLP operations. The processing center will forward the district recommendations to OFA for a final decision. Lenders we interviewed had varying experiences in gaining and maintaining their PLP status. While some lenders expressed general satisfaction with the process and their understanding of it, others cited problems. For example, several PLP lenders we interviewed said that they had requests for PLP status declined in a specific district even though they had already achieved PLP status in other districts. In some instances, lenders said that they did not understand why they had been turned down, in light of their proven performance. These lenders commented that some district offices were not open to working with lenders from outside their districts while others were. In our interviews with district offices, we sometimes heard differing descriptions from district office officials of the level of commitment required of a lender wishing to gain PLP status in their district. Some district officials said that a lender had to maintain a physical presence in the district, while others disagreed.
However, all district office officials expressed the need for some regular discussion with a lender to understand the lender’s commitment to the district. Larger lenders, as well as the National Association of Government Guaranteed Lenders (NAGGL), noted the administrative burden of maintaining relationships with many of the 70 district offices to maintain PLP status. The lenders noted that to receive and maintain PLP status in a given district, it is generally necessary to meet at least annually with district office staff to discuss status and plans for future lending. For some large national lenders, this can amount to 40 or more visits per year. In response to this concern, NAGGL has recommended a national PLP status based on a uniform national standard to ease the administrative burdens on large national lenders, which account for the largest volume of PLP lending. District office officials that we interviewed generally acknowledged that they want to understand a lender’s plans for their district before agreeing to endorse a lender that wishes to gain PLP status there. District officials explained that PLP status is an important marketing tool for lenders. As advocates for the credit needs of small businesses in their districts, district office officials see PLP status as a “carrot” to encourage lenders to make a sufficient volume of loans in their district. They suggest that a “national” PLP lender might make a large volume of PLP loans nationwide, but none in their district. The officials reason that without a district-by-district PLP status, district offices would lose an important tool for encouraging lenders to respond to credit needs in their districts. To hold lenders to a uniform national standard while maintaining individual district offices’ preferences and reinforcing their relationships with PLP lenders, SBA developed a formula-driven lender evaluation worksheet to facilitate the nomination, expansion, and renewal processes.
The worksheet replaces the former procedure, which involved written recommendations from district officials; however, the worksheet continues to award points based on sometimes subjective criteria, such as the district office’s assessment of the lender’s SBA marketing and outreach efforts, rather than on the formulas in the spreadsheet. Where this is the case, district office staff are required to provide written justification for the points awarded. SBA has a Lender Liaison program, managed by its Office of Field Operations (OFO), to assist large national lenders in managing relationships with SBA. The program involves the assignment of a single SBA official, generally a district director, to act as a liaison to a large national lender. In the event that a large lender should experience difficulty in managing its PLP status, it would have a single SBA official to call to assist in resolving any problems. OFO staff said that feedback they have received from lenders indicated that they like the program, finding it useful for resolving difficulties. Two of the lenders we interviewed participated in the program, and both expressed satisfaction with it. SBA has designated lender liaisons for 20 PLP lenders and, at the time of our review, intended to expand the program to 50 additional lenders. OLO identified 70 lenders that have PLP status in 6 or more districts and could benefit from the program. We recommended that SBA continue to explore ways to assist large national lenders to participate in the PLP. SBA has indicated that it is reviewing the issues we identified with regard to large national lenders and considering the best approach to address them. We will continue to follow up with SBA and monitor its response on this matter.
In our past work analyzing organizational alignment and workload issues at SBA and other agencies’ efforts to improve management and performance, we have described the importance of tying organizational alignment to a clear and comprehensive mission statement and strategic plan. By organizational alignment, we mean the integration of organizational components, activities, core processes, and resources to support efficient and effective achievement of outcomes. For example, we noted how agency operations can be hampered by unclear linkage between an agency’s mission and structure, but greatly enhanced when they are tied together. We have identified human capital management challenges in key areas, which include undertaking strategic human capital planning and developing staffs whose size, skills, and deployment meet agency needs. We have also noted the importance of separating safety and soundness regulation and mission evaluation from the function of mission promotion. While SBA’s role regarding PLP lenders is slightly different from that of a safety and soundness regulator, two principles still apply to SBA. First, oversight and program evaluation functions should be organizationally separate and maintain an arm’s-length relationship from program promotion. And second, in evaluating program compliance, SBA needs to weigh the financial risks to the federal government along with the 7(a) program’s mission to provide credit to those who cannot get it elsewhere. SBA officials have said and written that lender oversight is becoming an increasing priority for SBA; however, the function is not housed in an independent office with the exclusive role of providing lender oversight. OLO was created within OCA in fiscal year 1999 to ensure consistent and appropriate supervision of SBA’s lending partners; however, OCA has other objectives, including the promotion of PLP to appropriate lenders. 
OFA, also part of OCA, is responsible for providing overall direction for the administration of SBA’s lending programs, including 7(a); its duties include working with lenders to deliver those programs and developing loan policies and standard operating procedures. OFA’s lender oversight role is to provide final approval of lenders’ PLP status and to take necessary enforcement actions against SBLCs. Yet, in its promotion role, OFA works with lenders to deliver lending programs. Thus the only explicit enforcement authority—the authority to revoke PLP status—resides with OFA rather than OLO. The presence of both OFA and OLO within OCA does not afford the oversight function an arm’s-length position from the promotion function. The organizational arrangement presents a potential conflict, or at least the appearance of one, between the desire to encourage lender participation in PLP and the need to evaluate lender performance (with the potential for discontinuing lenders’ participation in PLP). Evidence of overlapping responsibilities and poorly aligned resources also can be seen in delays SBA has experienced in completing certain tasks associated with lender oversight. As noted previously, these delays could hamper effective PLP and SBLC oversight by postponing corrective action that might arise from review findings. Since some, but not all, responsibility for the lender oversight function migrated from OFA to OLO, both offices continue to share responsibilities for certain functions. The division of responsibility between OFA and OLO has created the need for more interoffice coordination to complete certain tasks. For example, we found substantial delays in finalizing PLP review reports and, as noted earlier, in SBLC examination reports.

The Small Business Administration (SBA) is responsible for oversight of its 7(a) loan program lenders, including those who participate in the Preferred Lenders Program, or PLP.
SBA delegates full authority to preferred lenders to make loans without prior SBA approval. In fiscal year 2002, preferred lenders approved 55 percent of the dollar value of all 7(a) loans--about $7 billion. Small businesses are certainly a vital part of the nation's economy. According to SBA, they generate more than half of the nation's gross domestic product and are the principal source of new jobs in the U.S. economy. In turn, SBA's mission is to maintain and strengthen the nation's economy by aiding, counseling, assisting, and protecting the interests of small businesses. Providing small businesses with access to credit is a major avenue through which SBA strives to fulfill its mission. Strong oversight of lenders by SBA is needed to protect SBA from financial risk and to ensure that qualified borrowers get 7(a) loans. SBA has a total portfolio of about $46 billion, including $42 billion in direct and guaranteed small business loans and other guarantees. Because SBA guarantees up to 85 percent of the 7(a) loans made by its lending partners, there is risk to SBA if the loans are not repaid. SBA must ensure that lenders provide loans to borrowers who are eligible and creditworthy to protect the integrity of the 7(a) program. Our statement today is based on the report we issued December 9, 2002, Small Business Administration: Progress Made but Improvements Needed in Lender Oversight (GAO-03-90). The report and our remarks will focus on our evaluation of (1) SBA's 7(a) lender oversight program and (2) SBA's organizational alignment for conducting oversight of preferred lenders and Small Business Lending Companies (SBLC). In addition, we will comment on SBA's latest response to our findings and recommendations. Our overall objective is to provide the Senate Committee on Small Business and Entrepreneurship with information and perspectives to consider as it moves forward on SBA reauthorization. 
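The financial stake described above can be made concrete with a short calculation. The loan amount and recovery assumption below are hypothetical; only the 85 percent maximum guarantee share comes from the discussion above.

```python
# Illustrative calculation of SBA's exposure under a 7(a) guarantee.
# The loan amount and recovery rate are hypothetical examples.

def sba_exposure(outstanding_balance, guarantee_share, recovery_rate=0.0):
    """Amount SBA would pay the lender if the borrower defaults,
    net of any recoveries (e.g., from collateral)."""
    guaranteed_portion = outstanding_balance * guarantee_share
    return guaranteed_portion * (1.0 - recovery_rate)

# A $500,000 loan with the maximum 85 percent guarantee and no recoveries:
print(sba_exposure(500_000, 0.85))         # 425000.0
# The same loan assuming half of the guaranteed portion is recovered:
print(sba_exposure(500_000, 0.85, 0.5))    # 212500.0
```

The calculation shows why lender oversight matters to the Treasury: on a defaulted $500,000 loan, the government, not the lender, bears up to $425,000 of the loss.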
SBA has made progress in developing its lender oversight program, but there are still areas in need of improvement. While SBA has identified appropriate elements for an effective lender oversight program, it has been slow to change programs and procedures to fully incorporate all of these elements. In addition, financial risk management issues have become more critical for SBA, as its current loan programs focus on partnering with lenders, primarily banks, that make loans guaranteed up to 85 percent by SBA. However, our work showed that SBA had not yet consistently incorporated adequate measures of financial risk into the PLP review process or the SBLC examination program. The current PLP review process, which SBA uses to ensure compliance with the program mission, rules, and regulations, involves a cursory review of documentation maintained in lenders' loan files rather than a qualitative assessment of borrower creditworthiness or eligibility. SBA's standards for borrower eligibility (the "credit elsewhere" requirement) are broad and therefore subject to interpretation. SBA had not developed clear enforcement policies for preferred lenders or SBLCs that would specifically describe its response in the event that reviews discover noncompliance or safety and soundness problems. SBA had been slow to finalize and issue SBLC examination reports. In addition, SBA had been slow to respond to recommendations for improving the SBLC examination program. Without continued improvement to better enable SBA to assess the financial risk posed by 7(a) loans and to ensure that its lending partners are making loans to eligible small businesses, SBA will not have a successful lender oversight program. Although SBA has listed the oversight of its lending partners as an agency priority, the function does not have the necessary organizational independence or resources to accomplish its goals.
In our past work analyzing organizational alignment and workload issues, we have described the importance of (1) tying organizational alignment to a clear and comprehensive mission statement and strategic plan and (2) providing adequate resources to accomplish the mission. However, two different offices--Lender Oversight and Financial Assistance, both of which are in the Office of Capital Access (OCA)--carry out SBA's lender oversight functions. OCA also promotes and implements SBA's lending programs. This alignment presents a possible conflict because PLP promotion and operations are housed in the same office that assesses lender compliance with SBA safety and soundness and mission requirements. Additionally, split responsibilities within OCA and limited resources have impeded SBA's ability to complete certain oversight responsibilities, which could result in heightened risk to its portfolio or lack of comprehensive awareness of portfolio risk. |
Our April 2014 report noted that VA has experienced substantial delays in executing new outpatient-clinic lease projects; nearly all of the delays occurred in the planning stages prior to entering into a lease agreement with the developer. Specifically, we found that 39 of the 41 outpatient-clinic projects for which a prospectus was submitted experienced schedule delays, ranging from 6 months to 13.3 years, with an average delay of 3.3 years, while 2 projects experienced schedule time decreases. Our data analysis showed that 94 percent of these delays occurred in the planning stages prior to entering into the lease agreement, and for all but one of the projects that experienced a delay, the delay occurred during the pre-lease agreement stage. We also compared the length of delays that occurred during the pre-lease agreement stage to the length of delays that occurred once a lease agreement was entered into with the development firm. We found that the average delay during the pre-lease agreement stages for all 41 projects totaled nearly 3.1 years. Conversely, the average project delay once a lease agreement was finalized totaled approximately 2.5 months, and 11 outpatient-clinic projects actually experienced schedule decreases during this stage. VA officials at 6 of the 11 outpatient-clinic projects selected for detailed review mentioned that the large majority of schedule delays occur during the planning stages prior to entering into a lease agreement. For the 41 lease projects we reviewed, we found that several factors contributed to delays. VHA’s late or changing requirements: According to data we analyzed and VA officials we interviewed, late or changing VHA requirements were the most common reasons for delays. Requirements can pertain to facility size, types of treatment rooms, types of medical equipment, electrical voltage needs, and other details.
We found, in many instances, that CFM either did not receive VHA’s requirements on time or that VHA changed its requirements during the solicitation of offers, necessitating a redesign that affected the schedule. In evaluating VA data, we found that 23 of the 41 leasing projects (56 percent) experienced delays because VHA was late in submitting space requirements to CFM or because VHA changed space requirements and thus the scope of the project. For example, the size of the Jacksonville outpatient clinic had increased by 29 percent, and that of the Austin outpatient-clinic site we visited by 36 percent, from the time the prospectuses for these projects were submitted to Congress to the time they were completed. Site Selection Challenges: In analyzing VA data, we found that 20 of the 41 outpatient-clinic projects we reviewed (49 percent) experienced delays due to difficulties in locating or securing a suitable site. For example, an increase in scope to the Jacksonville project resulted in a larger building design that then required more land. To accommodate these changes, the landowner worked to acquire additional properties around the already selected site. Although the developer was ultimately successful in obtaining additional land for the project, this process led to delays. According to VA officials, prior to entering into the lease agreement, there were delays associated with difficult negotiations with the developer. However, the officials said that the negotiations resulted in keeping project costs lower. In addition, there were significant environmental clean-up requirements at the site that needed to be satisfied before construction began. The original site’s location was obtained in December 2002, but the larger site was not obtained until December 2009, a delay of 7 years. Outdated Guidance: At the sites we reviewed, we found that outdated policy and guidelines made it challenging for VA staff to complete leasing projects on time.
For example, officials from the four Las Vegas outpatient sites we visited stated that VA’s policies for managing leases seem to change for each project, creating uncertainty regarding CFM job responsibilities. In addition to substantial delays, our April 2014 report noted that VA also experienced cost increases to its outpatient-clinic projects when compared to the costs in the projects’ prospectuses. VA provided cost data for its outpatient-clinic lease projects in January 2014. For the 31 projects with complete cost data, we found that “total first-year costs,” when compared to the prospectus costs, increased from $153.4 million to $172.2 million, an increase of nearly $19 million (12 percent). However, for the 31 projects, the total “prospectus first-year rent” was estimated at $58.2 million, but the total awarded first-year rent for these projects equaled $92.7 million as of January 2014, an increase of $34.5 million (59 percent). Such increases in rent have long-term implications for VA because the department must pay the higher rent over the lifetime of the lease agreement. For example, all 31 VA lease projects included in this cost analysis have lease terms of 20 years, and the increase in rent must be paid for the duration of the contract. Although first-year rents increased for the 31 projects—increasing overall total costs—VA’s total “build-out” costs were lower than reported in the projects’ prospectuses. Build-out costs are one-time, lump-sum payments VA makes to developers for special purpose, medically related improvements to buildings when VA accepts the projects as completed. VA officials said the decrease in build-out costs from those originally estimated in the prospectuses was due to the national downturn in the commercial real estate market starting in 2008. The downturn created more competition among developers and helped VA realize more competitive pricing on its medical build-out requirements than was anticipated in the prospectuses.

The causes of the total cost increase can be attributed primarily to increases in the projects’ awarded first-year rent due to the schedule delays and changes to the design or scope of a project that we discussed previously. Schedule delays can increase costs because of changes in the local leasing market during the period of the delay. Therefore, when VA estimates costs as part of the prospectuses submitted to Congress in the annual budget request, an automatic annual escalation is applied to each project to account for rising costs and market forces that make construction and leased space more expensive over time. VA officials said the escalation ensures that the authorized cost of the project is in line with the realities of the real estate and construction markets. Because VA adjusts a project’s cost upward by 4 percent for each year the project is delayed, project delays directly result in cost increases. Additionally, we found that the projects we reviewed increased in total size by 203,000 square feet. Changes in a project’s size expand the scope of the project, requiring design changes, which can result in schedule delays, further adding to costs. Our April 2014 report found that VA has made some progress in addressing issues with its major medical-facilities leasing program. Specifically, in April 2012, VA formed a high-level council, the Construction Review Council, to oversee the department’s capital asset program, including leasing. Based on the findings of the council and our work for the April 2012 report on VA’s major leased outpatient clinics, VA is planning several improvements to the major medical-facilities-leasing program to help avoid the delays, scope changes, and cost increases discussed above.
Requiring detailed design requirements earlier in the facility-leasing process. VA issued a guidance memorandum in January 2014 directing that, beginning with fiscal year 2016, VA should develop detailed space and design requirements before submitting the prospectus to Congress. Developing a process for handling scope changes. In August 2013, VA approved a new concept to better address scope changes to both major construction and congressionally authorized lease projects. According to VA officials, among other improvements, this process ensures a systematic review of the impact of any ad-hoc changes to projects in scope, schedule, and cost. Planning to provide Congress with clearer information on the limitations associated with costs of proposed projects. VA’s 2014 budget submission did not clarify that its estimates for future lease projects included only one year’s rent, which does not reflect the total costs over the life of the leases, costs that VA states cannot be accurately determined in early estimates. VA officials clarified this estimate beginning with VA’s 2015 budget submission. However, these improvements were in the early stages, and their success will depend on how quickly and effectively VA implements them. We also found that while VA has updated and refined some guidance for specific aspects of lease projects, including design guidance for the construction of outpatient clinics, to better support VA’s leasing staff and prevent project delays, it has not updated its VHA guidance for clinic leasing (used by staff involved with projects) since 2004. We reviewed VHA’s 2004 Handbook 1006.1, Planning and Activating Community-Based Outpatient Clinics, VHA’s overall guidance for leasing outpatient clinics. This Planning Handbook is intended to establish consistent planning criteria and standardized expectations.
The Planning Handbook is widely used by VA officials and provides important guidance, in particular, clarifying the differing responsibilities of officials and departments and the legal authorities of the leasing process. However, this guidance is out of date and no longer adequately reflects the roles and responsibilities of the various VA organizations involved in major medical-facilities-leasing projects. According to VA officials, the close collaboration of these organizations is necessary for a successful lease project. As of November 2013, VHA’s leasing program had a long-term liability of $5.5 billion, yet its guidance on outpatient clinics was a decade old and no longer reflected current practice. Standards for Internal Control in the Federal Government calls for federal agencies to develop and maintain internal control activities, which include policies and procedures, to enforce management’s directives and help ensure that actions are taken to address risks. Such activities are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and for achieving effective results. The lack of updated guidance can affect coordination among stakeholders and could contribute to schedule delays and cost increases. Using outdated guidance can lead to miscommunications and errors in the planning and implementation of veterans’ leased clinics. Furthermore, the policy, planning criteria, and business plan format in the Planning Handbook were developed based on an old planning methodology that VA no longer uses; thus, the guidance does not reflect VA’s current process. In our April 2014 report, we recommended that the Secretary of Veterans Affairs update VHA’s guidance for leasing outpatient clinics to better reflect the roles and responsibilities of all VA staff involved in leasing projects.
VA concurred with our recommendation and reported that it had created a VHA Lease Handbook that was in the concurrence process to address the roles and responsibilities of staff involved in leasing projects. In October 2014, VA reported that it had revised its clinic leasing guidance in response to GAO’s recommendation, that its leasing authority was now under the General Services Administration (GSA), and that the handbook was undergoing further revisions to incorporate GSA leasing processes. Chairman DeSantis, Ranking Member Lynch, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions about matters discussed in this testimony, please contact Dave Wise, (202) 512-2834 or WiseD@gao.gov. Other key contributors to this testimony include Ed Laughlin, Assistant Director; Nelsie Alcoser; George Depaoli; Jessica Du; Raymond Griffith; Amy Rosewarne; and Crystal Wesco. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

VA operates one of the nation’s largest health-care delivery systems. To help meet the changing medical needs of the veteran population, VA has increasingly leased medical facilities to provide health care to veterans. In April 2014, GAO reported that VHA’s leasing program had a long-term liability of $5.5 billion and was growing.
This statement discusses VA outpatient clinic lease issues, specifically, (1) the extent to which schedule and costs changed for selected VA outpatient clinics’ leased projects since they were first submitted to Congress and factors contributing to the changes and (2) actions VA has taken to improve its leasing practices for outpatient clinics and any opportunities for VA to improve its project management. It is based on GAO’s April 2014 report (GAO-14-300) along with selected updates conducted in August and October 2014 to obtain information from VA on actions it has taken to address GAO’s prior recommendation. For that report, GAO reviewed all 41 major medical leases that were associated with outpatient clinic projects for which a prospectus had been submitted to Congress, as required by law. In its April 2014 report, GAO found that schedules were delayed and costs increased for the majority of the Department of Veterans Affairs’ (VA) leased outpatient projects reviewed. As of January 2014, GAO found that 39 of the 41 projects reviewed, with a contract value of about $2.5 billion, experienced schedule delays, ranging from 6 months to 13.3 years, with an average delay of 3.3 years. The large majority of delays occurred prior to entering into a lease agreement, in part due to VA’s Veterans Health Administration (VHA) (1) providing project requirements late or changing them or (2) using outdated guidance. Costs also increased for all 31 lease projects for which VA had complete cost data, primarily due to delays and changes to the scope of a project. For example, first-year rents increased a total of $34.5 million, an annual cost that will extend for 20 years (the life of these leases). GAO’s report also found that VA had taken some actions to address problems managing leased clinic projects. First, it established the Construction Review Council in April 2012 to oversee the department’s capital asset programs, including the leasing program.
Second, consistent with the council’s findings and previous GAO work, VA was planning the following improvements: Requiring detailed design requirements earlier in the facility-leasing process. VA issued a guidance memorandum in January 2014 directing that, beginning with fiscal year 2016, VA should develop detailed space and design requirements before submitting the prospectus to Congress. Developing a process for handling scope changes. In August 2013, VA approved a new concept to better address scope changes to both major construction and congressionally authorized lease projects. According to VA officials, among other improvements, this process ensures a systematic review of the impact of any ad-hoc changes to projects in scope, schedule, and cost. Planning to provide Congress with clearer information on the limitations associated with costs of proposed projects. VA’s 2014 budget submission did not clarify that its estimates for future lease projects included only one year’s rent, which does not reflect the total costs over the life of the leases, costs that VA states cannot be accurately determined in early estimates. VA officials clarified this estimate beginning with VA’s 2015 budget submission. However, these improvements were in the early stages, and their success will depend on how quickly and effectively VA implements them. Finally, GAO reported that VA was also taking steps to refine and update guidance on some aspects of the leasing process, for example, VA’s design guides, but VHA has not updated the overall guidance for clinic leasing (used by staff involved with projects) since 2004. In October 2014, VA reported that it was in the process of revising its clinic leasing guidance in response to GAO’s recommendation, that its leasing authority was now under the General Services Administration (GSA), and that the handbook was undergoing further revisions to incorporate GSA leasing processes.
In its April 2014 report, GAO recommended that VA update VHA’s guidance for the leasing of outpatient clinics. VA concurred with GAO’s recommendation and is taking actions to implement the recommendation.
We reported in 2006 that DOD had established force health protection and surveillance policies aimed at assessing and reducing or preventing health risks for its deployed federal civilian personnel; however, at the time of our review, the department lacked a quality assurance mechanism to ensure the components’ full implementation of its policies. In reviewing DOD federal civilian deployment records and other electronic documentation at selected component locations, we found that these components lacked documentation to show that they had fully complied with DOD’s force health protection and surveillance policy requirements for some federal civilian personnel who deployed to Afghanistan and Iraq. As a larger issue, DOD’s policies did not require the centralized collection of data on the identity of its deployed civilians, their movements in theater, or their health status, further hindering its efforts to assess the overall effectiveness of its force health protection and surveillance capabilities. In August 2006, DOD issued a revised policy that became effective in December 2006, outlining procedures to address its lack of centralized deployment and health-related data. However, at the time of our review, the procedures were not comprehensive enough to ensure that DOD would be sufficiently informed of the extent to which its components fully comply with its requirements to monitor the health of deployed federal civilians. Our 2006 report noted that DOD components included in our review lacked documentation to show that they always implemented force health protection and surveillance requirements for deployed federal civilians. 
These requirements included completing (1) pre-deployment health assessments to ensure that only medically fit personnel deploy outside of the United States as part of a contingency or combat operation; (2) pre-deployment immunizations to address possible health threats in deployment locations; (3) pre-deployment medical screenings for tuberculosis and human immunodeficiency virus (HIV); and (4) post-deployment health assessments to document current health status, experiences, environmental exposures, and health concerns related to their work while deployed. DOD’s force health protection and surveillance policies required the components to assess the medical condition of federal civilians to ensure that only medically fit personnel deploy outside of the United States as part of a contingency or combat operation. At the time of our review, the policies stipulated that all deploying civilian personnel were to complete pre-deployment health assessment forms within 30 days of their deployments, and health care providers were to review the assessments to confirm the civilians’ health readiness status and identify any needs for additional clinical evaluations prior to their deployments. While the components that we included in our review had procedures in place that would enable them to implement DOD’s pre-deployment health assessment policies, it was not clear to what extent they had done so. Our review of deployment records and other documentation at the selected component locations found that these components lacked documentation to show that some federal civilian personnel who deployed to Afghanistan and Iraq had received the required pre-deployment health assessments. For those deployed federal civilians in our review, we found that, overall, a small number of deployment records (52 out of 3,771) were missing documentation to show that they had received their pre-deployment health assessments, as reflected in table 1.
As shown in table 1, the federal civilian deployment records we included in our review showed wide variation by location regarding documentation of pre-deployment health assessments, ranging from less than 1 percent to more than 90 percent. On an aggregate component-level basis, at the Navy location in our review, we found that documentation was missing for 19 of the 52 records in our review. At the Air Force locations, documentation was missing for 29 of the 37 records in our review. In contrast, all three Army locations had hard copy or electronic records which indicated that almost all of their federal deployed civilians had received pre-deployment health assessments. In addition to completing pre-deployment health assessment forms, DOD’s force health protection and surveillance policies stipulated that all DOD deploying federal civilians receive theater-specific immunizations to address possible health threats in deployment locations. Immunizations required for all civilian personnel who deployed to Afghanistan and Iraq included: hepatitis A (two-shot series); tetanus-diphtheria (within 10 years of deployment); smallpox (within 5 years of deployment); typhoid; and influenza (within the last 12 months of deployment). As reflected in table 2, based on the deployment records maintained by the components at locations included in our review, the overall number of federal civilian deployment records lacking documentation of only one of the required immunizations for deployment to Afghanistan and Iraq was 285 out of 3,771. However, 3,313 of the records we reviewed were missing documentation of two or more immunizations. At the Army’s Fort Bliss, our review of its electronic deployment data determined that none of its deployed federal civilians had documentation to show that they had received immunizations. Officials at this location stated that they believed some immunizations had been given; however, they could not provide documentation as evidence of this. 
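The record checks behind these immunization figures amount to counting, for each deployment record, how many required immunizations lack documentation. A simplified sketch (the record structure and field names here are hypothetical, not DOD's actual data format):

```python
# The five immunizations required for deployment to Afghanistan and Iraq.
REQUIRED = {"hepatitis A", "tetanus-diphtheria", "smallpox", "typhoid", "influenza"}

def missing_count(record):
    """Number of required immunizations with no documentation in a record."""
    return len(REQUIRED - set(record.get("documented", [])))

records = [
    {"id": 1, "documented": ["hepatitis A", "tetanus-diphtheria",
                             "smallpox", "typhoid", "influenza"]},  # complete
    {"id": 2, "documented": ["hepatitis A", "tetanus-diphtheria",
                             "smallpox", "typhoid"]},               # one missing
    {"id": 3, "documented": []},                                    # none documented
]

one_missing = sum(1 for r in records if missing_count(r) == 1)
two_or_more = sum(1 for r in records if missing_count(r) >= 2)
print(one_missing, two_or_more)  # 1 1
```

In the review itself, the same kind of tally over 3,771 records produced the 285 and 3,313 figures reported above.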
DOD policies required deploying federal civilians to receive certain screenings, such as for tuberculosis and HIV. Table 3 indicates that, at the time of our review, 55 of the 3,771 federal civilian deployment records included in our review were lacking documentation of the required tuberculosis screening, and approximately 35 were lacking documentation of HIV screenings prior to deployment. DOD’s force health protection and surveillance policies also required returning DOD federal civilian personnel to undergo post-deployment health assessments to document current health status, experiences, environmental exposures, and health concerns related to their work while deployed. At the time of our review, the post-deployment process began within 5 days of civilians’ redeployment from the theater to their home or demobilization processing stations. DOD’s policies required civilian personnel to complete a post-deployment assessment that included questions on health and exposure concerns. A health care provider was to review each assessment and recommend additional clinical evaluation or treatment as needed. As reflected in table 4, our review of deployment records at the selected component locations found that these components lacked documentation to show that most deployed federal civilians (3,525 out of 3,771) who deployed to Afghanistan and Iraq had received the required post-deployment health assessments upon their return to the United States. At the time of our review, federal civilian deployment records lacking evidence of post-deployment health assessments ranged from 3 at the U.S. Army Corps of Engineers Transatlantic Programs Center and Wright-Patterson Air Force Base, respectively, to 2,977 at Fort Bliss.
Beyond the aforementioned weaknesses found in the selected components’ implementation of force health protection and surveillance requirements for deploying federal civilians, as a larger issue, we noted in our 2006 report that DOD lacked comprehensive, centralized data that would enable it to readily identify its deployed civilians, track their movements in theater, or monitor their health status, further hindering efforts to assess the overall effectiveness of its force health protection and surveillance capabilities. The Defense Manpower Data Center is responsible for maintaining the department’s centralized system that currently collects location-specific deployment information for military servicemembers, such as grid coordinates, latitude/longitude coordinates, or geographic location codes. However, at the time of our review, DOD had not taken steps to similarly maintain centralized data on its deployed federal civilians. In addition, DOD had not provided guidance that would require its components to track and report data on the locations and movements of DOD federal civilian personnel in theaters of operations. In the absence of such a requirement, each DOD component collected and reported aggregated data that identified the total number of DOD federal civilian personnel in a theater of operations, but each lacked the ability to gather, analyze, and report information that could be used to specifically identify individuals at risk for occupational and environmental exposures during deployments. In previously reporting on the military services’ implementation of DOD’s force health protection and surveillance policies in 2003, we highlighted the importance of knowing the identity of servicemembers who deployed during a given operation and of tracking their movements within the theater of operations as major elements of a military medical surveillance system. 
We further noted the Institute of Medicine’s finding that documentation on the location of units and individuals during a given deployment is important for epidemiological studies and appropriate medical care during and after deployments. For example, this information allows epidemiologists to study the incidences of disease patterns across populations of deployed servicemembers who may have been exposed to diseases and hazards within the theater, and health care professionals to treat their medical problems appropriately. Without location-specific information for all of its deployed federal civilians and centralized data in its department-level system, DOD limits its ability to ensure that sufficient and appropriate consideration will also be given to addressing the health care concerns of these individuals. At the time of our review, DOD also had not provided guidance to the components that would require them to forward completed deployment health assessments for all federal civilians to the Army Medical Surveillance Activity, where these assessments are supposed to be archived in the Defense Medical Surveillance System, integrated with other historical and current data on personnel and deployments, and used to monitor the health of personnel who participate in deployments. The overall success of deployment force protection and surveillance efforts, in large measure, depends on the completeness of health assessment data. In our report, we noted that the lack of such data may hamper DOD’s ability to intervene in a timely manner to address health care problems that may arise from DOD federal civilian deployments to overseas locations in support of contingency operations. 
With increases in the department’s use of federal civilian personnel to support military operations, we noted in our report that DOD officials have recognized the need for more complete and centralized location-specific deployment information and deployment-related health information on its deployed federal civilians. In this regard, we further noted that in August 2006, the Office of the Under Secretary of Defense for Personnel and Readiness issued revised policy and program guidance that generally addressed the shortcomings in DOD’s force health protection and surveillance capabilities. The revised policy and guidance, which became effective in December 2006, require the components, within 3 years, to electronically report (at least weekly) to the Defense Manpower Data Center location-specific data for all deployed personnel, including federal civilians. In addition, the policy and guidance require the components to submit all completed health assessment forms to the Army Medical Surveillance Activity for inclusion in the Defense Medical Surveillance System. Nonetheless, in our 2006 report we noted that DOD’s new policy is not comprehensive enough to ensure that the department will be sufficiently informed of the extent to which its components are complying with existing health protection requirements for its deployed federal civilians. Although the policy requires DOD components to report certain location-specific and health data for all of their deployed personnel, including federal civilians, we noted that it does not establish an oversight and quality assurance mechanism for assessing and ensuring the full implementation of the force health protection and surveillance requirements by all DOD components that our prior work has identified as essential in providing care to military personnel.
To strengthen DOD’s force health protection and surveillance for its deployed federal civilians, in our 2006 report, we recommended that DOD establish an oversight and quality assurance mechanism to ensure that all components fully comply with its requirements. In February 2007, the Office of the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness published a new instruction on force health protection quality assurance. This policy applies to military servicemembers as well as applicable DOD and contractor personnel. The new policy requires the military services to implement procedures to monitor key force health protection elements such as pre- and post-deployment health assessments. In addition, the policy requires each military service to report its force health protection and quality assurance findings to the Assistant Secretary of Defense (Health Affairs) through the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness. In our June 2007 report on DOD’s compliance with the legislative requirement to perform pre- and post-deployment medical examinations on military servicemembers, we noted that DOD lacked a comprehensive oversight framework to help ensure effective implementation of its deployment health quality assurance program, which included specific reporting requirements and results-oriented performance measures to evaluate the services’ adherence to deployment health requirements. Also, we noted in our 2007 report that the department’s new instruction and planned actions indicate that DOD is taking steps in the right direction. We stated and still believe that if the department follows through with its efforts, it will be responsive to several of our reports’ recommendations to improve DOD’s force health protection and surveillance for the Total Force.
In our 2006 report, we found that DOD had established medical treatment policies that cover its federal civilians while they are deployed to support contingency operations in Afghanistan and Iraq, and available workers’ compensation claims we reviewed confirmed that those deployed federal civilians received care consistent with the policies. These policies state that DOD federal civilians who require treatment for injuries or diseases sustained during overseas hostilities may be provided care under the DOD military health system. DOD’s military health system provides four levels of medical care to personnel who are injured or become ill while deployed, as shown in figure 1. Medical treatment during a military contingency begins with level one care, which consists of basic first aid and emergency care at a unit in the theater of operation. The treatment then moves to a second level of care, where, at an aid station, injured or ill personnel are examined and evaluated to determine their priority for continued movement outside of the theater of operation and to the next (third) level of care. At the third level, injured or ill personnel are treated in a medical installation staffed and equipped for resuscitation, surgery, and postoperative care. Finally, at the fourth level of care, which occurs far from the theater of operation, injured or ill personnel are treated in a hospital staffed and equipped for definitive care. Injured or ill DOD federal civilians deployed in support of contingency operations in Afghanistan and Iraq who require level four medical care are transported to DOD’s Regional Medical Center in Landstuhl, Germany. In our 2006 report, we found that injured or ill DOD federal civilians who cannot be returned to duty in theater are evacuated to the United States for continuation of medical care. 
In these cases (or where previously deployed federal civilians later identify injuries or diseases and subsequently request medical treatment), DOD’s policy provides for its federal civilians who require treatment for deployment-related injuries or occupational illnesses to receive medical care through either the military’s medical treatment facilities or civilian facilities. The policy stipulates that federal civilians who are injured or become ill as a result of their deployment must file a FECA claim with DOD, which then files a claim with the Department of Labor’s Office of Workers’ Compensation Programs (OWCP). The Department of Labor’s OWCP is responsible for making a decision to award or deny medical benefits. OWCP must establish—based on evidence provided by the DOD civilian—that the employee is eligible for workers’ compensation benefits due to the injury or disease for which the benefits are claimed. To obtain benefits under FECA, as noted in our report, DOD federal civilians must show that (1) they were employed by the U.S. government, (2) they were injured (exposed) in the workplace, (3) they have filed a claim in a timely manner, (4) they have a disabling medical condition, and (5) there is a causal link between their medical condition and the injury or exposure. Three avenues of appeal are provided for employees in the event that the initial claim is denied: (1) reconsideration by an OWCP claims examiner, (2) a hearing or review of the written record by OWCP’s Branch of Hearings and Review, and (3) a review by the Employees’ Compensation Appeals Board. DOD’s medical treatment process and the OWCP’s claims process are shown in figure 2. Overall, the claims we reviewed showed that the DOD federal civilians who sustained injuries or diseases while deployed had received care that was consistent with DOD’s medical treatment policies. 
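The five showings listed above act as conjunctive conditions: a claim fails if any one of them is not established. A minimal sketch of that logic (illustrative only; actual determinations are made by OWCP claims examiners on the evidence):

```python
def feca_eligible(claim):
    """True only if all five showings required under FECA are established."""
    return all([
        claim["us_government_employee"],  # (1) employed by the U.S. government
        claim["injured_in_workplace"],    # (2) injured (exposed) in the workplace
        claim["filed_timely"],            # (3) claim filed in a timely manner
        claim["disabling_condition"],     # (4) disabling medical condition
        claim["causal_link"],             # (5) condition linked to the injury/exposure
    ])

claim = {
    "us_government_employee": True,
    "injured_in_workplace": True,
    "filed_timely": True,
    "disabling_condition": True,
    "causal_link": False,  # causal link not established
}
print(feca_eligible(claim))  # False: one failed showing denies the claim
```

A denied claim can then move through the three avenues of appeal described above.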
Specifically, in reviewing a sample of seven workers’ compensation claims (out of a universe of 83) filed under the Federal Employees’ Compensation Act by DOD federal civilians who deployed to Iraq, we found that in three cases where care was initiated in theater the affected federal civilians had received treatment in accordance with DOD’s policies. For example, in one case, a deployed federal civilian was treated for traumatic injuries at a hospital outside of the theater of operation and could not return to duty in theater because of the severity of the injuries sustained. The civilian was evacuated to the United States and received medical care through several of the military’s medical treatment facilities as well as through a civilian facility. Further, in all seven claims that we reviewed, DOD federal civilians who requested medical care after returning to the United States, had, in accordance with DOD’s policy, received initial medical examinations and/or treatment for their deployment-related injuries or illnesses and diseases through either military or civilian treatment facilities. While OWCP has primary responsibility for processing and approving all FECA claims for medical benefits, the scope of our review did not include assessing actions taken by the Department of Labor’s OWCP in further processing workers’ compensation claims for injured or ill civilians and authorizing continuation of medical care once their claims were submitted for review. Our 2006 report found that DOD provides a number of special pays and benefits to its federal civilian personnel who deploy in support of contingency operations, which are generally different in type and in amount from those provided to deployed military personnel. It should be noted that while DOD federal civilian and military personnel are key elements (components) of the Total Force, each is governed by a distinctly different system. Both groups receive special pays, but the types and amounts differ. 
DOD federal civilian personnel also receive different types and amounts of disability benefits, depending on specific program provisions and individual circumstances. In 2003, we designated federal disability programs as a high-risk area because of continuing challenges with modernizing those programs. Importantly, our work examining federal disability programs has found that the major disability programs are neither well aligned with the 21st century environment nor positioned to provide meaningful and timely support. Further, survivors of deceased DOD federal civilian and military personnel generally receive comparable types of cash survivor benefits—lump sum, recurring, or both—but benefit amounts differ for the two groups. Survivors of DOD federal civilian personnel, however, almost always receive lower noncash benefits than military personnel. DOD federal civilian and military personnel are both eligible to receive special pays to compensate them for the conditions of deployment. As shown in table 5, some of the types of special pays are similar for both DOD federal civilian and military personnel, although the amounts paid to each group differ. Other special pays were unique to each group. In the event of sustaining an injury while deployed, DOD federal civilian and military personnel are eligible to receive two broad categories of government-provided disability benefits—disability compensation and disability retirement. However, the benefits applicable to each group vary by type and amount, depending on specific program provisions and individual circumstances. Within these broad categories, there are three main types of disability: (1) temporary disability, (2) permanent partial disability, and (3) permanent total disability.
Both DOD federal civilian and military personnel who are injured in the line of duty are eligible to receive continuation of their pay during the initial period of treatment and may be eligible to receive recurring payments for lost wages. However, the payments to DOD federal civilian personnel are based on their salaries and whether the employee has any dependents, regardless of the number, which can vary significantly, whereas disability compensation payments made by the Department of Veterans Affairs (VA) to injured military personnel are based on the severity of the injury and their number of dependents, as shown in table 6. DOD federal civilian personnel are eligible to receive continuation of pay (salary) for up to 45 days, followed by a recurring payment for wage loss which is based on a percentage of salary and whether they have any dependents, up to a cap. In contrast, military personnel receive continuation of pay of their salary for generally no longer than a year, followed by a recurring VA disability compensation payment for wage loss that is based on the degree of disability and their number of dependents, and temporary DOD disability retirement for up to 5 years. When a partial disability is determined to be permanent, DOD federal civilian and military personnel can continue to receive recurring compensation payments, as shown in table 7. For DOD federal civilian personnel, these payments are provided for the remainder of life as long as the impairment persists, and can vary significantly depending upon the salary of the individual and the existence of dependents.
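The civilian wage-loss formula described above can be sketched as follows. The two-thirds and three-fourths fractions are the commonly cited FECA compensation rates; the cap value below is a placeholder, not the actual statutory maximum:

```python
def feca_wage_loss(monthly_salary, has_dependents, monthly_cap=8000.0):
    """Recurring FECA-style wage-loss payment: a fixed fraction of salary
    that rises if the employee has any dependents (the count does not
    matter), limited by a cap. The cap here is a placeholder value."""
    if has_dependents:
        amount = monthly_salary * 3 / 4
    else:
        amount = monthly_salary * 2 / 3
    return min(amount, monthly_cap)

print(feca_wage_loss(6000, False))  # 4000.0
print(feca_wage_loss(6000, True))   # 4500.0 (any dependents, regardless of number)
print(feca_wage_loss(20000, True))  # 8000.0 (capped)
```

This contrasts with the VA payments to military personnel, which scale with the rated severity of the disability and the number of dependents.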
Military personnel are also eligible to receive recurring VA disability compensation payments for the remainder of their lives, and these payments are based on the severity of the servicemember’s injury and the number of dependents. In addition, both groups are eligible to receive additional compensation payments beyond the recurring payments just discussed, based on the type of impairment. DOD federal civilians with permanent partial disabilities receive a schedule of payments based on the specific type of impairment (sometimes referred to as a schedule award). Some impairments may result in benefits for a few weeks, while others may result in benefits for several years. Similarly, military personnel receive special monthly VA compensation payments depending on the specific type and degree of impairment. When an injury is severe enough to be deemed permanent and total, DOD federal civilian and military personnel may receive similar types of benefits, such as disability compensation and retirement payments; however, the amounts paid to each group vary. For civilian personnel, the monthly payment amounts for total disability are generally similar to those for permanent partial disability described earlier, but unlike with permanent partial disabilities, the payments do not take into account any wage earning capacity. Both groups are eligible to receive additional compensation payments beyond the recurring payments similar to those for permanent partial disability. DOD federal civilians with permanent disabilities receive a schedule award based on the specific type of impairment. In addition, DOD federal civilian personnel may be eligible for an additional attendant allowance—up to $1,500 per month during 2006— if such care is needed. Military personnel receive special monthly VA compensation payments for particularly severe injuries, such as amputations, blindness, or other loss of use of organs and extremities. 
The payments are designed to account for attendant care or other special needs deriving from the disability. Survivors of deceased DOD federal civilian and military personnel generally receive similar types of cash survivor benefits—either as a lump sum, a recurring payment, or both—through comparable sources. However, the benefit amounts generally differ for each group. Survivors of civilian and military personnel also receive noncash benefits, which differ in type and amounts. As shown in table 8, survivors of deceased DOD federal civilian and military personnel both receive lump sum benefits in the form of Social Security, a death gratuity, burial expenses, and life insurance. Survivors of deceased DOD federal civilian and military personnel are also eligible for recurring benefits, some of which are specific to each group, as shown in table 9. In addition to lump sum and recurring benefits, survivors of deceased DOD federal civilians and military personnel receive noncash benefits. As shown in table 10, survivors of deceased military personnel receive more noncash benefits than do those of deceased DOD federal civilian personnel, with few benefits being comparable in type. DOD currently has important policies in place that relate to the deployment of its federal civilians. Moreover, DOD’s issuance of its new instruction on force health quality assurance further indicates that DOD is taking steps in the right direction. If the department follows through with its efforts, we believe it will strengthen its force health protection and surveillance oversight for the Total Force. Mr.
Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have. If you or your staffs have any questions about this testimony, please contact Brenda S. Farrell at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this testimony include Sandra B. Burrell, Assistant Director; Julie C. Matta; and John S. Townes. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

As the Department of Defense (DOD) has expanded its involvement in overseas military operations, it has grown increasingly reliant on its federal civilian workforce to support contingency operations. GAO was asked to discuss DOD's (1) force health protection and surveillance policies, (2) medical treatment policies that cover federal civilians while they are deployed to support contingency operations in Afghanistan and Iraq, and (3) differences in special pays and benefits provided to DOD's deployed federal civilian and military personnel. For this statement, GAO primarily drew on its September 2006 report that addressed these objectives. For its report, GAO analyzed over 3,400 deployment-related records at eight component locations for deployed federal civilians and policies related to defense health care; reviewed claims filed under the Federal Employees' Compensation Act (FECA); and examined major provisions of special pays and disability and death benefits provided to DOD's deployed federal civilians and military personnel.
In 2006, GAO reported that DOD had established force health protection and surveillance policies to assess and reduce or prevent health risks for its deployed federal civilians, but it lacked procedures to ensure implementation. GAO's review of over 3,400 deployment records found that components lacked documentation that some federal civilians who deployed to Afghanistan and Iraq had received, among other things, required pre- and post-deployment health assessments and immunizations. Also, DOD lacked centralized data to readily identify its deployed civilians and their movement in theater, thus hindering its efforts to assess the overall effectiveness of its force health protection and surveillance capabilities. GAO noted that until DOD establishes a mechanism to strengthen its oversight of this area, it would not be effectively positioned to ensure compliance with its policies, or the health care of deployed federal civilians. GAO also reported that DOD had established medical treatment policies for its deployed federal civilians, which provide those who require treatment for injuries or diseases sustained during overseas hostilities with care under the DOD military health system. GAO reviewed a sample of seven workers' compensation claims (out of a universe of 83) filed under FECA by DOD federal civilians who deployed to Iraq. GAO found in three cases where care was initiated in theater that the affected civilians had received treatment in accordance with DOD's policies. In all seven cases, DOD civilians who requested care after returning to the United States had, in accordance with DOD's policies, received medical examinations and/or treatment for their deployment-related injuries or diseases. GAO reported that DOD provides certain special pays and benefits to its deployed federal civilians, which generally differ in type and/or amount from those provided to deployed military personnel. 
For example, in cases where injuries are sustained while deployed, both DOD federal civilian and military personnel are eligible to receive government-provided disability benefits; however, the type and amount of the benefits vary, and some are unique to each group. Importantly, continuing challenges with modernizing federal disability programs have been the basis for GAO's designation of this as a high-risk area since 2003. In addition, while the survivors of deceased DOD federal civilian and military personnel generally receive similar types of cash survivor benefits for Social Security, burial expenses, and death gratuity, the comparative amounts of these benefits differ. However, survivors of DOD federal civilians almost always receive lower noncash benefits than military personnel. GAO does not take a position on the adequacy or appropriateness of the special pays and benefits provided to DOD federal civilian and military personnel. Any deliberations on this topic should include an examination of how such changes would affect ensuring adequate and appropriate benefits for those who serve their country, as well as the long-term fiscal well-being of the nation.
Terrestrial television service—also known as over-the-air broadcast television—is transmitted from television towers through the radiofrequency spectrum to rooftop antennas or antennas attached directly to television sets inside of homes. With traditional analog technology, pictures and sounds are converted into “waveform” electrical signals for transmission, while digital technology converts these pictures and sounds into a stream of digits consisting of zeros and ones. Digital transmission of television signals provides several advantages compared with analog transmission, by enabling better quality picture and sound reception as well as other new services. In addition, digital transmission uses the radiofrequency spectrum more efficiently than analog transmission. This increased efficiency makes multicasting, where several digital television signals are transmitted in the same amount of spectrum as one analog television signal, and HDTV services possible. But, to implement digital transmission, upgrades to transmission facilities, such as television towers, are necessary, and consumers must purchase a digital television or a set-top box that will convert digital signals into an analog form for viewing on existing analog televisions. Both the United States and Germany have programs in place to complete the transition from analog to digital television. In the United States, the Congress and FCC provided television stations with additional spectrum to transmit both an analog and digital signal, and set a deadline for the shutoff of the analog signal at the end of 2006, or when 85 percent of households can receive the digital signal, whichever is later. In Germany, the federal government set a deadline of 2010 for the shutoff of analog signals and did not provide spectrum for an extended simulcast period. Each Media Authority (there are a total of 15 throughout Germany) decides on the specific timing of the terrestrial transition. 
The city of Berlin, Germany, and its surrounding metropolitan area initiated digital terrestrial transmissions in November 2002 and shut off all analog signals in August 2003. We were told that regulation of the German television market is primarily the responsibility of state government, with the federal government exercising only limited authority to regulate this market. Television broadcasting in Germany is commonly characterized as a “dual system” in which public and private broadcasting coexist, with each market segment consisting of two dominant broadcasting entities. Both segments are subject to the broadcasting laws passed by the respective German states. Although terrestrial broadcasting was once the only means by which German households received television program signals, today only 5 to 7 percent of these households rely on terrestrial broadcasting, with the remainder using cable or satellite service for the reception of television signals. The federal government exercises important but limited authority in regulating television broadcasting, leaving the state (called Länder) governments with the primary responsibility for broadcasting regulation. At the federal government level, the Ministry of Economics and Labour is responsible for establishing and advancing general objectives in the telecommunications sector, such as the promotion of new technologies and innovation, and ensuring competition among providers of telecommunications services. In the context of the DTV transition, the Ministry led the effort in Germany to develop and recommend a strategy for the transition from analog to digital radio and television broadcasting.
A separate federal entity, the Regulatory Authority for Telecommunications and Posts (RegTP), established in 1998, is responsible for technical aspects in the provision of telecommunications services, including management of Germany’s radiofrequency spectrum allocations, the development of standards for the distribution and use of telecommunications systems, and testing of electronics equipment. RegTP is playing a key role in the DTV transition in Germany by establishing procedures for and assigning frequency allocations to roll out digital video broadcasting service. Federal and state government officials told us that the authority to directly organize and regulate broadcasting services rests with each of the regional governments as part of their jurisdiction over educational and cultural matters. In each of the German states, a “Media Authority” serves as the primary regulatory authority over radio and television broadcasting services. Charged with implementation of their respective state-enacted broadcasting laws, the 15 Media Authorities are independent agencies and are not considered to be part of the state government administrations. Among the most important functions of the Media Authorities is the establishment of procedures for assigning broadcast frequencies allocated by RegTP to public and private broadcasters. The Media Authorities also have a significant role in overseeing the transition to digital television. Broadcasting laws and regulations in Germany are affected to some extent by actions of the European Union (EU). Although Germany and other EU member states manage their own broadcasting policies, rules and guidelines are set at the EU level on matters that involve common interests, such as open borders, fair competition, and a commitment to public broadcasting.
In the EU’s Action Plan to stimulate advanced services, applications, and content, EU member states are encouraged to have a strategy for the DTV transition with an assessment of market conditions, a date for the switchoff of analog terrestrial broadcasting, and a platform-neutral approach that takes into account the competing cable, satellite, and terrestrial delivery platforms. Terrestrial, or over-the-air, television in Germany is commonly characterized as a “dual system” in which public and private broadcasting coexist, with each market segment consisting of two dominant broadcasting entities. Public broadcasting corporations are the creation of the states, but operate largely as self-regulated entities. At the regional level, the German states have formed regional public broadcasters that operate their own television channels with regional-specific programming. The regional public broadcasters also formed a national network in 1950 known as ARD. ARD provides a nationwide broadcast channel (Channel 1), with some of its programming supplied by these regional broadcasters. A second nationwide public broadcasting channel, ZDF, was formed directly by the German states in 1961 as an independent, nonprofit corporation. In addition to their own channels, ARD and ZDF jointly operate four additional public television channels that are broadcast in various parts of Germany. We were told that approximately 40 percent of television viewing in Germany is of the various public channels provided by ARD and ZDF. The public broadcasters are given one frequency each by the Media Authorities for the terrestrial broadcast of their programming channel. Their primary source of revenue derives from a compulsory monthly fee paid by owners of radios and television sets. The amount of the fee is set jointly by the states, based on a recommendation of an independent panel, and is set at 16 Euro ($19.68) per month for each household.
We were told that this amounts to about 6 billion Euro ($7.38 billion) annually. ARD receives slightly less than two-thirds of the fee revenues and allocates shares among its regional broadcasters, while ZDF receives about one-third of the total fee revenues. Two percent of the total fee revenue is distributed to the 15 Media Authorities. ARD and ZDF generate additional revenues from limited on-air advertisements. However, they are restricted to a maximum of 20 minutes of advertising per day before 8:00 p.m. and are precluded from any advertising on Sundays and holidays. The introduction of private television broadcasting in Germany is a relatively recent development. In the early 1980s, additional spectrum frequencies were made available for the opening of private television broadcasting. Today, two broadcasting groups—RTL Group and ProSiebenSat.1 Media—dominate this segment of the television broadcasting market, each operating multiple channels. Unlike their public broadcasting counterparts, private broadcasters must obtain licenses from relevant Media Authorities. Because frequencies are limited, not all private broadcasters operate nationally, and with the growth of cable and satellite systems, some have chosen not to renew terrestrial licenses in all locations. In particular, private broadcasters often do not provide terrestrial service in rural areas. Private broadcasters generate all of their revenues from advertising and receive no payments from the fees paid by owners of radios and television sets. Although terrestrial broadcasting as described above was once the only means by which German households could receive television program signals, there are currently three methods for television delivery—terrestrial broadcasting, cable television service, and satellite service.
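The license-fee allocation just described can be checked with a short sketch. All figures are the report's approximations (ARD's share is "slightly less than two-thirds"; exactly two-thirds is used here as an approximation, not the statutory split):

```python
# License-fee revenue split, using the approximate figures cited above.
total_fee_revenue_eur = 6_000_000_000  # roughly 6 billion Euro per year

# Two percent of the total is distributed among the 15 Media Authorities.
media_authorities_total = total_fee_revenue_eur * 2 / 100
per_media_authority = media_authorities_total / 15

# ARD receives slightly less than two-thirds of fee revenues; ZDF about
# one-third (approximations only).
ard_share = total_fee_revenue_eur * 2 / 3
zdf_share = total_fee_revenue_eur * 1 / 3

print(f"Media Authorities, total: {media_authorities_total:,.0f} Euro")
print(f"Per Media Authority:      {per_media_authority:,.0f} Euro")
print(f"ARD (approx.):            {ard_share:,.0f} Euro")
print(f"ZDF (approx.):            {zdf_share:,.0f} Euro")
```

On these figures, each Media Authority's share of the fee works out to about 8 million Euro per year, which is the pool mabb later drew on to fund its part of the Berlin transition.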
Terrestrial broadcasting, in fact, is now the method least relied upon by German television households for receiving program signals—only about 5 to 7 percent of German households rely exclusively on terrestrial television. Some German households that receive their primary television signals by satellite or cable may have a second or third set in the household that is used only for terrestrial reception. Households relying on analog terrestrial broadcasting receive between 3 to 12 channels, with an average of 5 to 6 channels. The primary transmitter networks that transmit television broadcast signals from various towers throughout the country are owned and operated by Deutsche Telekom. Broadcast stations pay Deutsche Telekom to transmit their terrestrial signals. ARD also owns a network of terrestrial broadcast towers for its own operations. Introduced in the early 1980s, cable television service is now the dominant method for the delivery of television programming in Germany: about 60 percent of the households subscribe to a cable system. As with terrestrial broadcasting, the 15 Media Authorities regulate cable television service in their respective areas. The state media laws set forth the must-carry requirements in each region, which specify the broadcast stations that cable providers are required to carry on their systems. We were told that these regulations vary considerably by region, with some areas requiring cable systems to carry nearly all public and private stations, and other areas imposing significantly fewer must-carry responsibilities on cable systems. To be carried by a cable operator, however, public and private broadcasters must pay a carriage fee to the cable operator, which is negotiated directly between the parties. Typical cable systems in Germany were constructed for the provision of analog service, provide about 30 to 33 channels of analog programming, and cost subscribers less than 15 Euro ($18.45) per month.
It is often the case that this fee is included in the household’s rent. The third method of distribution of television programming is through satellite service, which today is received by an estimated 35 percent of German television households. According to RegTP, to provide satellite television service in Germany, a license to use the necessary spectrum is required by the agency. Also, any broadcast station that wants to be carried on a satellite system must obtain authorization to do so from one of the Media Authorities. The predominant provider of satellite television service in Germany is ASTRA, a Luxembourg-based company that provides satellite service throughout Europe. In order for a broadcast channel—whether a public station or a private station—to be carried by a satellite provider, a contractual agreement is reached between the broadcaster and the satellite provider that gives the right to the satellite provider to rebroadcast the signal, but requires the broadcast station to pay a fee for that carriage. For viewers, satellite service is available free of charge; however, viewers must purchase the equipment needed in order to receive programming. In addition, they must be able to situate the satellite dish toward the southern sky to receive the transmission signal from the geostationary satellite. The costs for a satellite dish and related equipment are estimated at less than 200 Euro ($246.04). Satellite television service provides viewers in Germany with approximately 125 channels, about 60 of which are in German. In Germany, government officials and industry participants are implementing the DTV transition to improve the viability of terrestrial television in the face of a low and declining share of households that rely solely on terrestrial television. 
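The dollar equivalents quoted throughout this report (for example, 16 Euro as $19.68 and 200 Euro as $246.04) all imply a single conversion rate of roughly 1.2302 dollars per Euro. The helper below is hypothetical, not part of the report, and simply reproduces that implied conversion:

```python
# Rate implied by the report's own Euro/dollar pairs (an assumption
# inferred from the quoted figures, not an official rate).
EUR_TO_USD = 1.2302

def eur_to_usd(euros: float) -> float:
    """Convert a Euro amount to dollars at the report's implied rate."""
    return round(euros * EUR_TO_USD, 2)

print(eur_to_usd(16))   # monthly license fee
print(eur_to_usd(15))   # typical monthly cable charge
print(eur_to_usd(200))  # satellite dish and related equipment
```

Each call reproduces the parenthetical dollar figure the report gives for the corresponding Euro amount.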
Several elements of the DTV transition will apply throughout Germany, including an island-based approach, where the DTV transition will occur separately in different metropolitan areas, and the adoption of standard-definition digital television. In Berlin, extensive planning facilitated the rapid DTV transition. Important elements of the Berlin DTV transition included a short simulcast period, financial and nonfinancial support provided to private broadcasters, subsidies provided to eligible low-income households for set-top boxes, and an extensive consumer education effort. While the Berlin DTV transition is generally viewed as successful, it is unclear whether a full DTV transition will occur throughout Germany. A primary rationale for the German DTV transition was to preserve terrestrial television in the face of a low and declining share of households that rely solely on this method of television reception. As mentioned previously, fewer than 10 percent of German households rely solely on terrestrial television, and the share has been rapidly declining in recent years. Since broadcasters reach over 90 percent of German households through cable and satellite service, concerns arose about the continued costs associated with the transmission of terrestrial television relative to the number of viewers. By increasing the number of television channels delivered terrestrially, the DTV transition was seen as a means to improve the viability of terrestrial television. Because there was concern that terrestrial viewership would continue to decline, German regulators decided that any DTV transition would need to occur relatively quickly. Some industry participants in Germany suggested that a switch-off of terrestrial television might be the better course. These parties argued that terrestrial television is costly and that German households have both cable and satellite as alternatives.
Further, cable service is offered at reasonably low prices and satellite service is completely free of charge once the satellite dish and receiver have been installed. Ultimately, however, German regulators decided to proceed with a DTV transition. The transition provided benefits for both consumers and broadcasters. For consumers, the presence of digital terrestrial television ensures that they maintain a choice of three mechanisms to receive television service. We were told that this choice is important in cities such as Berlin, where many people cannot receive satellite service and, without terrestrial television, would be dependent on cable service. Further, one consumer group noted that digital terrestrial television allows consumers to avoid paying a fee for cable service while receiving a similar number of channels as they would with cable service. For broadcasters, the presence of terrestrial television provides a third mechanism for the transmission of their signals. We were told that this helps keep the fees that broadcasters must pay to cable companies to carry their signals lower than would be the case if broadcasters were reliant solely on cable and satellite for the transmission of their signals. In Germany, the Digital Broadcasting Initiative (the Initiative) establishes a nationwide framework for digital broadcasting. The federal government established the Initiative in 1997; the federal Ministry of Economics and Labour chairs the Initiative, with the Länder (or states) serving as deputy chair. Other members of the Initiative include representatives of the federal and state governments; public and private broadcasters; content providers; cable, satellite, and terrestrial operators; equipment manufacturers; and consumer groups. The Initiative develops strategies for digital broadcasting, including terrestrial television and radio, cable, and satellite service.
The Initiative set a deadline for the DTV transition of 2010; this date is a strategy or recommendation, and not set forth in German law. The Initiative developed different strategies for television and radio, cable, and satellite service, and the DTV transition occurring throughout Germany at this time only focuses on terrestrial television. Thus, only households that rely solely on terrestrial television—about 160,000 in Berlin—were required to purchase equipment in order to be able to continue to receive terrestrial television service on their existing analog televisions. Households that rely on cable or satellite service were unaffected by the DTV transition because cable and satellite providers converted the signals to ensure that households receiving their service could continue to view the signals without any additional equipment. However, households that receive cable or satellite service would still require equipment for any televisions in their homes that are not connected to the cable or satellite service. The Initiative determined that the German DTV transition would occur through an island-based approach, in which each island will transition independently to digital terrestrial television. Each island is a major metropolitan area, such as Berlin or Munich. Figure 1 illustrates the various islands in Germany and the actual or planned year for the DTV transition. We were told that Germany adopted this approach because the DTV transition could not be achieved throughout the entire country simultaneously; officials thought that a nationwide DTV transition would be too big to manage at one time. Additionally, by adopting the island approach, German officials gained experience with the DTV transition, and thereby were able to assess whether the public would accept terrestrial digital television. Several officials told us that the islands will eventually grow together, and the DTV transition will encompass the entire country.
However, we were also told that had the Berlin DTV transition not been a success, the transition in other areas may have been reevaluated and may not have gone forward. In addition to the island-based approach, Germany decided to adopt standard-definition digital television, instead of high-definition digital television. The government and industry officials with whom we spoke cited several advantages of standard-definition digital versus high-definition digital for Germany. First, the equipment that consumers must purchase for standard-definition digital is generally less expensive than the equipment necessary for high-definition digital. Second, with high-definition digital, broadcasters must install more costly equipment and incur higher transmission costs than would be the case with standard-definition digital. Finally, German officials believe that terrestrial television with a standard-definition digital signal is more competitive with cable and satellite than it would be with a high-definition digital signal. These officials noted that the increase in competitiveness of terrestrial television derives from its mobility and the increased channels available with standard-definition digital. In particular, officials we spoke with noted that standard-definition digital technology allows multiple channels to be shown with the same amount of spectrum that was previously used to transmit one analog terrestrial channel. Thus, terrestrial television in Berlin now offers nearly as many channels to viewers as they receive on their cable systems. This greater number of channels combined with the mobility of terrestrial television—a feature not available with cable or satellite that enables consumers to take their television to their boats and garden homes—was seen as a factor that would make terrestrial television more attractive relative to cable or satellite service. Finally, German officials did not plan for the return of spectrum following the DTV transition.
Germany has allocated a limited amount of spectrum for terrestrial television, and all the analog frequencies have been dedicated to digital television. As previously mentioned, broadcasters intend to use the spectrum for multiplexing—providing four digital channels in the same amount of spectrum that they previously provided one analog channel. However, if all multiplexes are not used, some spectrum could be returned to the government. But, it is not clear that this spectrum could or would be assigned to a different use, such as mobile telephone or Internet access. mabb, the Media Authority that regulates radio and television in the states of Berlin and Brandenburg, made several key decisions about how the DTV transition would occur in the area under its authority. When to undertake the DTV transition. Each of the 15 Media Authorities throughout Germany made decisions about when to undertake the DTV transition within their region. Berlin was the first of Germany’s islands to undertake the DTV transition. We were told that Berlin had several characteristics that made it favorable to serve as a test market for the DTV transition. First, the percent of households that rely solely on terrestrial television is relatively low in Berlin. Since the DTV transition in Germany requires only equipment modifications for terrestrial televisions, the number of households affected was relatively small—only about 160,000 households—and the transition more manageable. Second, Berlin had more spectrum dedicated to television because spectrum that had been used by both East and West Berlin was all still allocated to terrestrial television use. Third, because Berlin is not near other major cities, no signal interference concerns arose in the area, as they might for cities such as Bonn or Cologne, which are near other cities and the German border with other countries. 
Finally, Berlin also has fairly simple topography—it is basically flat—enabling easier transmission of television signals. Length of Simulcast. mabb and industry participants implemented the DTV transition in the Berlin area with a short simulcast period. The DTV transition agreement negotiated between mabb and the broadcasters specified a three-phase simulcast process: On November 1, 2002, the simulcast period commenced as digital signals for some of the stations of both public and commercial broadcasters began to be transmitted. Berlin officials dedicated two additional channels for the simulcast, with each of these channels carrying four multicast digital stations. Thus, eight of Berlin’s eleven analog stations were initially simulcast. On February 28, 2003, five previously analog channels were converted to digital channels, with each channel carrying multiple stations. Thus, the digital signals of more stations were turned on, including stations that were not previously available terrestrially in Berlin. The analog transmission of all national private broadcasters stopped, and public broadcasters transitioned their analog signals to lower-power analog frequencies. On August 8, 2003, all analog transmission stopped. The government and industry officials with whom we spoke cited several reasons for the short simulcast period. First, Germany does not have enough spectrum dedicated to television service to implement a long simulcast period while also providing additional stations; the spectrum used for analog transmission is the same spectrum that will be used for digital transmission. Second, an extended simulcast period is costly for broadcasters, who, as mentioned earlier, must pay for terrestrial transmission. Third, a quick and certain shutoff date provides an incentive for households to purchase the necessary set-top boxes.
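The three-phase schedule described above can be captured in a brief sketch (dates and details are those given in the text):

```python
from datetime import date

# Berlin's three-phase simulcast schedule, as described in the
# mabb-broadcaster agreement.
phases = [
    (date(2002, 11, 1), "simulcast begins: eight of eleven analog stations "
                        "also transmitted digitally on two added channels"),
    (date(2003, 2, 28), "five analog channels converted to digital; national "
                        "private broadcasters end analog transmission"),
    (date(2003, 8, 8),  "all analog transmission stops"),
]

for when, what in phases:
    print(f"{when}: {what}")

simulcast_days = (phases[-1][0] - phases[0][0]).days
print(f"Full simulcast period: {simulcast_days} days")
```

The full simulcast period thus ran just over nine months, which is short compared with the multiyear simulcast contemplated in the United States.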
German federal officials and other Media Authorities are generally encouraged by the success of the short simulcast period in Berlin. In the state of North-Rhine Westphalia, the Media Authority intends to implement a 6-month simulcast period for public broadcasters, with no simulcast period for private broadcasters, in the state’s two islands. Private broadcaster support. mabb made the decision to provide financial and nonfinancial support to private broadcasters. Public broadcasters were able to finance their transition costs through the radio and television license fee they receive. Private broadcasters, on the other hand, do not receive license fees, but were viewed as important participants in the DTV transition. Therefore, mabb decided to provide support to private stations, which consisted of three elements. First, for 5 years, mabb will pay the broadcasters’ incremental costs associated with digital transmission (i.e., mabb will pay the difference between the broadcasters’ former analog transmission costs and their digital transmission costs). In return, the private broadcasters agreed to provide digital terrestrial television for at least 5 years. Second, as incumbent broadcasters, the private broadcasters received authority to provide multiplexed service. That is, the private broadcasters were allowed to increase the number of terrestrial channels they provide in Berlin using the spectrum they were already assigned. Third, one broadcaster told us that in return for participating in the DTV transition in the Berlin island, it received favorable must-carry status throughout the region that mabb regulates—that is, mabb will require that its stations be carried on cable systems in the region. At this time, it is not clear whether and to what extent the other Media Authorities plan to provide similar support for private broadcasters’ DTV transition in other regions. 
One private broadcaster told us that it would be unwilling to participate in the DTV transition in other islands if it did not receive the multicast authority. Subsidy of set-top box for needy households. In addition to supporting private broadcasters, mabb provided support to certain households for the purchase of set-top boxes. According to mabb, the overriding principle was that households must pay for the set-top boxes necessary to watch terrestrial digital broadcast signals. However, mabb made provisions for low-income households. Households that were entitled to government aid could apply to the Social Welfare Office for assistance. If the household met the income eligibility criteria and relied solely on terrestrial television (i.e., the household did not receive cable or satellite service), the household received a voucher for a free set-top box. Qualifying households received their set-top box either from specified retailers, or the box was delivered to their home, whichever means was least costly. During the DTV transition period, mabb paid 75 percent of the subsidy cost and the Social Welfare Office paid the remaining 25 percent of the subsidy cost. mabb funded its share of the subsidy through the portion of the radio and television license fee that it receives, while the Social Welfare Office funded its share of the subsidy through its regular budget. Following the transition period, the Social Welfare Office began paying the entire cost of the subsidy, up to 129 Euro ($158.70). According to mabb, a total of 6,000 set-top boxes were provided to needy households at a total cost of 500,000 Euro ($615,100). Extensive consumer education. mabb and industry participants conducted an extensive consumer education effort. One official told us that a primary concern with the DTV transition is making sure that households that rely solely on terrestrial television understand that they must do something to be able to continue receiving television.
In Berlin, two important consumer education mechanisms were messages on terrestrial-only television signals and information sessions with retailers. On television signals received by terrestrial television, households saw a rolling scroll that informed them about the DTV transition. Deutsche TV- Plattform and the Berlin Chamber of Commerce also held information sessions with retailers. Other consumer education mechanisms included a direct mailing to every household, a consumer hotline, flyers and newsletters, an Internet Web site, and advertisements on buses and subways. One primary concern with the consumer education effort was to avoid confusing cable and satellite subscribers. Because the DTV transition only affected households relying solely on terrestrial television, the consumer education effort focused on means that would target only these households, and not households subscribing to cable and satellite service. We were also told that a short consumer education period was best for informing households about the DTV transition; in Berlin, the consumer education effort lasted approximately 4 weeks and cost approximately 800,000 Euro ($984,160). Relatively few consumer complaints and problems arose during the Berlin DTV transition. For example, a consumer organization that we spoke with told us that there were very few complaints, and that most complaints that did arise concerned the cost of the set-top box, which they said was approximately 100 to 125 Euro ($123.02 to $153.78). We were also told that there were minor technical problems and few reception problems. An mabb official with whom we spoke thought that reception had improved following the DTV transition, because the agency ensured a strong digital signal and because digital transmission is superior to analog transmission. 
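The cost figures cited in the preceding paragraphs can be cross-checked with simple arithmetic. The sketch below is illustrative only: the Euro amounts and the 160,000 terrestrial-only household count come from this statement, the dollar figures imply a single conversion rate of about 1.23, and the per-box and per-household averages are derived here rather than stated in the testimony.

```python
# Cross-check of the Berlin DTV transition cost figures cited in the text.
# Derived averages (per box, per household) are our own arithmetic,
# not figures reported by mabb.

EUR_EDUCATION = 800_000        # consumer education campaign cost, Euro
EUR_SUBSIDY = 500_000          # total set-top box subsidy cost, Euro
BOXES = 6_000                  # subsidized set-top boxes provided
TERRESTRIAL_HH = 160_000       # households relying solely on terrestrial TV

# Implied Euro-to-dollar rate from the $984,160 figure for the campaign
rate = 984_160 / EUR_EDUCATION                   # 1.2302

subsidy_usd = round(EUR_SUBSIDY * rate)          # matches the $615,100 in the text
subsidy_cap_usd = round(129 * rate, 2)           # matches the $158.70 reimbursement cap
avg_per_box = EUR_SUBSIDY / BOXES                # about 83 Euro per subsidized box
mabb_share = 0.75 * EUR_SUBSIDY                  # 375,000 Euro paid by mabb
welfare_share = 0.25 * EUR_SUBSIDY               # 125,000 Euro paid by the Social Welfare Office
per_household = EUR_EDUCATION / TERRESTRIAL_HH   # 5 Euro of outreach per targeted household
```

Notably, the average subsidy outlay of roughly 83 Euro per box is below both the 100 to 125 Euro retail price range and the 129 Euro reimbursement cap, suggesting that many subsidized boxes were obtained for less than the cap.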
The technical and reception problems that did arise included difficulties installing and using the set-top box; reception problems in some multiple-dwelling units, particularly ground-floor units and buildings with rooftop antennas and boosters; and interference problems for some cable subscribers because of the strength of the digital signal. During the Berlin DTV transition, some households changed the mechanism through which they receive television service. We were told that between one-third and one-half of households that previously relied solely on terrestrial television switched to either cable or satellite service, rather than purchase the set-top box. An official with mabb told us that the percentage of households switching from terrestrial television to cable and satellite was less than they had expected. On the other hand, more set-top boxes—over 200,000—were sold than the number of former terrestrial-only households, indicating that some households purchased multiple boxes, and that some cable and satellite households also purchased set-top boxes for a second or third television that only received terrestrial service. We were also told that relatively few cable subscribers switched to terrestrial television following the DTV transition. As previously mentioned, cable payments are often included in the household's rent payment and some cable contracts are long-term in nature, thereby reducing the incentive and flexibility that some households have to switch away from cable service. Some industry officials told us, however, that they expect some cable subscribers to switch to terrestrial service in the longer term. The government, industry, and consumer representatives with whom we spoke mentioned several factors as contributing to the success of the Berlin DTV transition. These factors include the following:
- The DTV transition provided enhanced consumer value for Berlin households. The number of channels available through terrestrial television increased from approximately 11 to 27 and included an electronic program guide.
- The government and broadcasters did not have to finance the new programs. The new channels available through terrestrial television following the DTV transition already existed on cable and satellite systems.
- There was good cooperation between the government officials and broadcasters, which helped ensure that consumers received additional channels.
- The transition affected a relatively small percentage of Berlin households; only households that relied solely on terrestrial television—less than 10 percent of Berlin households—had to take action to avoid losing their television service.
- The set-top boxes were relatively inexpensive, and the price fell throughout the transition period.
- There was a scheduled time line for the DTV transition and a firm shutoff date.
- There was good communication to consumers about the DTV transition.
While the Berlin DTV transition appears successful, a full DTV transition might not extend throughout Germany. Government and industry officials with whom we spoke said that private broadcasters will most likely not provide digital service in rural areas outside the islands, but that public broadcasters will provide digital service in these areas. This is not entirely different from the current situation with analog television, where the private broadcasters do not provide terrestrial television in all areas of the country. However, it does raise the possibility that a full DTV transition, including the digital terrestrial transmission of both public and private broadcasters, might not occur throughout Germany. Finally, some groups we spoke with identified problems with the Berlin DTV transition. The cable television industry in Germany mentioned several problems. Cable industry officials with whom we spoke objected to the use of the radio and television license fee for the DTV transition.
These officials told us that all German households pay the license fee, but only terrestrial households in the islands benefit from the DTV transition. In fact, the cable industry has petitioned the European Commission about the use of the license fee for the DTV transition. Other problems noted by the cable industry officials with whom we spoke include cable subscribers purchasing set-top boxes by mistake and the expense and problems cable operators incurred to upgrade their headend facilities to receive the digital signal. Regarding the set-top box subsidy, the Social Welfare Office thought that the process could have been handled a little better. In particular, it found that approximately 20 percent of the applications for subsidies were not handled adequately, most often because they were incomplete or missing signatures. Based on our examination of the DTV transition in Berlin and other areas of Germany, it is clear that the manner in which DTV is to be rolled out is considerably different from the approach in the United States. Nevertheless, we found that much of the focus in Berlin leading up to and during the simulcast period was on making sure that consumers who receive television solely through terrestrial means obtain the necessary set-top boxes so that they would be able to view DTV signals once the analog signals were turned off. Since the DTV transition in the United States is already in a simulcast phase—that is, most digital broadcast television signals are already being transmitted—the phase of encouraging consumers to adopt DTV equipment is upon us. FCC has yet to fully determine how cable and satellite households will count toward the 85 percent threshold. Ultimately, the Congress and FCC will need to turn their attention to providing information, incentives, and possibly assistance to those who need to purchase equipment in order for the transition—and the return of valuable spectrum—to be completed.
Ensuring that consumers understand the transition, how they will be affected by it, and what steps they need to take is critical not only for ensuring the transition moves forward, but for ensuring that consumers do not unexpectedly lose television reception or incur costs beyond what is necessary to successfully transition to digital television. The Berlin experience highlights a few factors related to consumers' purchase of set-top boxes that were very important for the success of the DTV transition in that city: The information provided focused a great deal on the need for a set-top box and the benefits of completing the transition. The Berlin authorities and broadcasters provided extensive information to the public, the media, and retailers about what the transition would entail, what consumers needed to do, how they would benefit by transitioning to digital television, and where to get assistance if there was confusion about what equipment was necessary or if there were problems with equipment or reception. This effort was planned and coordinated among many parties, adequate resources were dedicated to the information campaign, and nearly everyone we spoke with told us it was a critical factor in the success of the rapid DTV transition in Berlin. Set-top boxes were subsidized for needy households. Subsidies were provided to certain households that might have had difficulties affording the necessary set-top boxes. In particular, low-income households that rely on terrestrial television could apply for financial assistance for the purchase of a set-top box. Because of the low penetration of terrestrial television, only about 6,000 households required this subsidy at a cost of about half a million Euro ($615,100). Nevertheless, this may have helped in the management of the transition by ensuring that the transition would not be an undue burden for lower-income households. A near-term date certain for the transition deadline made clear when set-top boxes would need to be in place.
Finally, the Media Authority in Berlin set a date certain for the transition that required consumers to make decisions quickly about how they would adapt to the transition. This enabled all stakeholders to know what they needed to work toward: when set-top boxes needed to be available in the market; when education of consumers, hotlines, and TV scroll information would be required; and the date by which consumers needed to decide how to transition or lose their television service. To summarize my statement, Mr. Chairman, although the context of the transition differs considerably in Germany as compared with the United States, there may be interesting and helpful lessons for the Congress and FCC from the DTV transition in Berlin and other areas of Germany. This concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittee may have at this time. For questions regarding this testimony, please contact Mark L. Goldstein at (202) 512-2834 or goldsteinm@gao.gov. Individuals making key contributions to this testimony included Amy Abramowitz, Dennis Amari, and Michael Clements. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

In Berlin, Germany, the transition from analog to digital television (DTV), the DTV transition, culminated in the shutoff of analog television signals in August 2003. As GAO previously reported, the December 2006 deadline for the culmination of the DTV transition in the United States seems unlikely to be met. Failure to meet this deadline will delay the return of valuable spectrum for public safety and other commercial purposes.
Thus, the rapid completion of the DTV transition in Berlin has sparked interest among policymakers and industry participants in the United States. At the request of the Subcommittee on Telecommunications and the Internet, House Committee on Energy and Commerce, GAO examined (1) the structure and regulation of the German television market, (2) how the Berlin DTV transition was achieved, and (3) whether there are critical components of how the DTV transition was achieved in Berlin and other areas of Germany that have relevance to the ongoing DTV transition in the United States. The German television market is characterized by a central role of public broadcasting and is regulated largely at the state level. Although the federal government establishes general objectives for the telecommunications sector and manages allocations of the German radiofrequency spectrum, 15 media authorities organize and regulate broadcasting services within their areas of authority. The two public broadcasters are largely financed through a mandatory radio and television license fee of 16 Euro ($19.68) per household, per month, or about 6 billion Euro ($7.38 billion) in aggregate per year. Today, only 5 to 7 percent of German households rely on terrestrial television. Most households receive television through cable service, which typically costs less than 15 Euro ($18.45) per month, or satellite service, which is free once the household installs the necessary satellite equipment. Berlin officials and industry participants engaged in extensive planning for the rapid DTV transition in the Berlin test market. In Germany, government officials and industry participants are implementing the DTV transition largely for the purpose of improving the viability of terrestrial television; officials do not expect to recapture radio spectrum after the transition. Several elements of the DTV transition apply throughout Germany. 
For example, Germany is implementing the transition within specified "islands," which are typically larger metropolitan areas, because officials thought that a nationwide DTV transition would be too big to manage at one time. Also, the German DTV transition focuses exclusively on terrestrial television, not cable and satellite television. The Media Authority in Berlin specified other components of the DTV transition for the Berlin area, including a short (10-month) simulcast period, financial and nonfinancial support provided to private broadcasters, subsidies provided to low-income households, and an extensive consumer education effort. Certain aspects of the DTV transition in Berlin and other regions of Germany are relevant to the ongoing transition in the United States because, even though the television market and the transition are structured differently in the two countries, government officials face similar key challenges. We found that much of the focus of government officials leading up to and during the brief simulcast in Berlin was on ensuring that households who rely on terrestrial television received the necessary consumer equipment. In the United States, most television stations are providing a digital signal—that is, the United States is in the simulcast phase. Thus, the challenge facing the Congress and the Federal Communications Commission, as was the case in Berlin, is encouraging households to purchase set-top boxes or digital televisions. The key components of the Berlin DTV transition that enabled the rapid deployment of set-top boxes included (1) implementing an extensive consumer education effort; (2) providing subsidies to low-income households for set-top boxes; and (3) setting a relatively near-term date certain that all stakeholders understood would be the shutoff date for analog television.
Medicaid and CHIP are the nation’s largest health care financing programs for low-income children, accounting for about $79 billion in shared federal and state expenditures in 2009, the most recent year for which data are available. Medicaid is a federal-state program for certain categories of low-income children, families, and individuals. In fiscal year 2010, 34.4 million children had health coverage through Medicaid. CHIP is also a federal-state program and provides health care coverage to children 18 years of age and younger living in low-income families whose incomes exceed the eligibility requirements for Medicaid. In fiscal year 2010, 7.7 million children had health care coverage through CHIP. State Medicaid and CHIP programs are required to cover certain groups of individuals and offer a minimum set of services, including services provided by primary care and specialty care physicians, and services provided in hospitals, clinics, and other settings. States are also responsible for enrolling physicians as Medicaid and CHIP providers. For Medicaid programs, federal law establishes that state Medicaid payments to providers must be sufficient to enroll enough providers so that care and services are available to beneficiaries at least to the extent that they are available to the general population in the same geographic area. On May 6, 2011, CMS issued a proposed regulation regarding this requirement. The proposed regulation is intended to promote standardized and transparent methods for states to review and monitor Medicaid beneficiaries’ access to covered services delivered under a fee-for-service delivery model. Under the proposed regulation, state monitoring of Medicaid beneficiaries’ access is anticipated to be an ongoing and evolving process. The proposed regulation describes different approaches states may use to assess Medicaid beneficiaries’ access to care, and identifies different actions states may take to address access problems. 
In addition, the proposed regulation includes a requirement for states to annually assess Medicaid beneficiaries' access to a different subset of covered services and then make the results of these assessments available to the public. Figure 1 illustrates how the supply of primary care physicians varies among states and within states. Like all children, children in Medicaid and CHIP depend on physicians and other health care providers for regular health screenings to monitor their health, development, and growth. In addition to primary health care needs, these screenings are important in identifying conditions that may warrant specialty care and services. Medicaid programs are required to provide regular health screenings, under the benefit known as Early and Periodic Screening, Diagnostic and Treatment (EPSDT) services, for eligible children. We and others have reported, however, that many children in Medicaid and CHIP are not receiving well-child checkups, all required health screening services, or needed specialty services. For example:
- In August 2009, we reported that, on the basis of parents' reports in the Medical Expenditure Panel Survey (MEPS), about 40 percent of children in Medicaid and CHIP had not had a well-child checkup over a 2-year period.
- In May 2010, HHS's Office of Inspector General reported that in nine states, three of four children in Medicaid did not receive all required covered health screening services.
- In April 2011, on the basis of MEPS, we reported that for 12 percent of children in Medicaid and CHIP 17 years of age and younger who needed health care services, such as tests or treatments, their families had difficulties accessing those services. In addition, an estimated 24 percent of children in Medicaid and CHIP 17 years of age and younger who needed specialists had problems accessing specialty services.
We also reported, in April 2011, that monitoring access to specialty care for children in Medicaid and CHIP was important because the National Survey of Children’s Health—which is based on responses from parents or guardians—showed that these children had problems accessing needed services. We also found that the required state reports submitted to CMS regarding services provided to children in Medicaid lacked detail. For example, the reports do not indicate whether children referred to providers for treatment actually receive the services they need. We recommended that the Administrator of CMS work with states to identify additional improvements that could be made to the annual reports that states are required to submit to CMS, including options for capturing information on children’s receipt of the services for which they are referred. CMS agreed with our recommendations. On the basis of our survey of physicians, we estimate that nationally more than three-quarters of primary and specialty care physicians are enrolled as Medicaid and CHIP providers and serving children covered by these programs. These participating physicians are generally more willing to accept privately insured children as new patients than children in Medicaid and CHIP. In addition, the percentage of physicians accepting children in Medicaid and CHIP is similar to the percentage of physicians accepting uninsured children. Participating physicians do not appear to show a preference when scheduling appointments for new patients, as the reported wait times for new appointments are generally the same for privately insured children and children in Medicaid and CHIP. We also found that for most participating physicians, children in Medicaid and CHIP represent less than 20 percent of the children they serve. 
Physicians not enrolled or not serving children in these programs often cited administrative issues related to reimbursement and provider enrollment requirements as factors limiting their willingness to serve these children. On the basis of physicians' responses to our survey, we estimate that nationally 78 percent of physicians are enrolled as Medicaid and CHIP providers and serving children in these programs. A larger share of primary care physicians than specialty care physicians is participating in Medicaid and CHIP—that is, enrolled and serving children in Medicaid and CHIP. Among primary care physicians, participation in Medicaid and CHIP is higher in rural areas than in urban areas. Overall, the proportion of physicians participating in Medicaid and CHIP ranged from a low of 71 percent for specialty care physicians to a high of 94 percent for primary care physicians in rural areas. (See table 1.) Physicians who participate in Medicaid and CHIP do not appear to show a preference for a particular delivery model. In areas where both managed care and fee-for-service delivery models exist for these programs, 78 percent of participating physicians serve Medicaid and CHIP children in both delivery models. Among participating physicians, 10 percent serve children only under the fee-for-service model, and 8 percent serve children only in the managed care model. (For additional data on physicians' participation in Medicaid and CHIP, including estimates of the percentage of participating physicians serving children in Medicaid and CHIP by delivery model, and the lower and upper bounds of all estimates on physician participation, see app. II, tables 5-8.) Although most participating physicians are accepting children in Medicaid and CHIP as new patients, they are generally more willing to accept privately insured children as new patients.
For example, about 8 of 10 participating physicians are accepting all privately insured children, compared to less than 5 of 10 accepting all children enrolled in Medicaid and CHIP. About 1 of 10 participating physicians are not accepting any children in Medicaid and CHIP as new patients, compared to about 1 of 30 who are not accepting any privately insured children as new patients. (See fig. 2.) Participating physicians were generally more willing to accept privately insured children than Medicaid and CHIP children in each of the physician types we analyzed: primary care physicians, specialty care physicians, and primary care physicians in urban and rural areas. Both primary care physicians and specialty care physicians are more willing to accept privately insured children as new patients than children in Medicaid and CHIP. (See fig. 3.) For example, for both primary care physicians and specialty care physicians the percentage of participating physicians who accept all privately insured children as new patients is about 30 percent higher than the percentage who accept all children in Medicaid and CHIP. (For additional data on acceptance of new patients by child’s insurance and physician type, including estimates of physicians’ acceptance of uninsured children, and the lower and upper bounds of all estimates, see app. II, tables 9 and 10.) Similarly, a March 2011 report found that the percentage of primary care physicians who were accepting all or most Medicaid patients—adults and children—was considerably lower than the percentage accepting all or most privately insured patients. This study also found that the relative supply of primary care physicians did not affect physician willingness to accept new Medicaid patients. Specifically, primary care physicians in states with fewer primary care physicians per capita were as willing to accept new Medicaid patients as primary care physicians in states with more primary care physicians per capita. 
As illustrated in figure 4, primary care physicians in urban and rural areas are more willing to accept privately insured children as new patients than children in Medicaid and CHIP; however, rural primary care physicians are more willing than urban primary care physicians to accept children in Medicaid and CHIP as new patients. In rural areas, the percentage of participating primary care physicians who will accept all privately insured children as new patients is about 20 percent higher than the share willing to accept all children in Medicaid and CHIP. In urban areas, the difference is about 30 percent. Further, the percentage of primary care physicians in rural areas who are willing to accept all children in Medicaid and CHIP as new patients (62 percent) is much higher than the percentage in urban areas (43 percent). (For additional data on acceptance of new patients by child’s insurance and primary care physician’s geographic location, including estimates of physician acceptance of uninsured children, the lower and upper bounds of all estimates, and information on statistically significant differences, see app. II, table 11.) The percentage of physicians accepting uninsured children as new patients is similar to the percentage accepting children in Medicaid and CHIP. For example, 55 percent of all participating physicians accept all uninsured children as new patients, and 9 percent do not accept children without insurance, compared to 47 percent and 9 percent, respectively, for children in Medicaid and CHIP. (See app. II, tables 9-11.) Other research has found that physicians’ willingness to accept patients enrolled in Medicaid and uninsured patients is lower than willingness to accept privately insured patients. When accepting new Medicaid and CHIP patients, physicians who participate in Medicaid and CHIP do not appear to show a preference for children in a fee-for-service or managed care delivery model. 
In areas where both delivery models exist for these programs, 69 percent of participating physicians accept children in Medicaid and CHIP under both fee-for-service and managed care. The percentage of physicians who only accept these children under one type of delivery model is about the same for each delivery model—7 percent only accept Medicaid and CHIP children in a managed care delivery model, and 10 percent only accept these children in a program with a fee-for-service delivery model. (See app. II, table 12, for additional information regarding physician acceptance of children in Medicaid and CHIP by delivery model.) Participating physicians do not appear to have a preference for, or to give priority to, privately insured children when scheduling appointments for new patients. Nationally, physicians cited wait times for new patient appointments as largely the same for children in Medicaid and CHIP and privately insured children. For example, the most common wait time for a new appointment cited was less than 48 hours for both children in Medicaid and CHIP and privately insured children. Further, for both groups of children, more than half of the participating physicians could schedule a nonurgent visit in 6 days or fewer. Wait times for children in Medicaid and CHIP and privately insured children were similar for primary care physicians (national, urban, and rural) and specialty care physicians. For primary care physicians overall and those in urban and rural locations, more than half of participating physicians indicated that wait times are less than 1 week for children seeking new appointments, regardless of insurance coverage of the child. For specialty care physicians, more than half of physicians indicated that wait times for new appointments are 1 week or more for children with private insurance, as well as for children covered by Medicaid and CHIP. (See app. 
II, tables 13 through 15, for data on wait times by physician type and geographic location of primary care physicians.) A June 2011 report on children’s access to specialty services found that wait times in one large urban county differed for children in Medicaid and CHIP as compared to privately insured children. Using a methodology that entailed researchers calling clinics in Cook County, Illinois, and posing as mothers of children with Medicaid or CHIP coverage, and, in separate calls, as mothers of children with private insurance, the study found that among the clinics that accepted both Medicaid and CHIP and private insurance, the average wait time for children covered by Medicaid and CHIP was 22 days longer than that for children with private insurance. Children in Medicaid and CHIP represent a relatively small share of most participating physicians’ child patients. Although the percentage of children in Medicaid and CHIP served by participating physicians varies, for more than half (55 percent) of all participating physicians, children in Medicaid and CHIP represent less than 20 percent of the children they serve. The most common physician response was that children in Medicaid and CHIP represent less than 10 percent of the children they serve. The second most common response was that children in Medicaid and CHIP represent 60 percent or more of the children they serve. (See fig. 5.) The share of participating physicians’ child patients that are in Medicaid and CHIP was similar for primary care physicians, specialty care physicians, and urban primary care physicians. For the majority of participating physicians in each of these groups, children in Medicaid and CHIP accounted for less than 20 percent of the children they served. In contrast, for the majority of rural primary care physicians, these children accounted for 20 percent or more of all the children they served. (See app. 
II, tables 16 through 19, for data on the patient mix of participating physicians.) Our findings are similar to those from recent research in California, which found that for the majority of the physicians participating in the state’s Medicaid program—primary care, specialty care, and urban as well as rural physicians—adults and children enrolled in Medicaid accounted for 20 percent or less of their patients. Physicians not participating in the programs—that is, those not enrolled or not serving children in Medicaid and CHIP—often cited certain administrative issues related to reimbursement and enrolling as a provider as factors that limit their own willingness to serve children enrolled in these programs. Specifically, of 13 factors that physicians could identify on our survey as limiting their own willingness to serve children in Medicaid and CHIP, nonparticipating physicians most frequently identified 5 factors. For physicians not participating in Medicaid and CHIP, we estimate that nationally
1. 95 percent are influenced by low reimbursement,
2. 87 percent are influenced by burdens associated with billing,
3. 85 percent are influenced by delayed reimbursement,
4. 85 percent are influenced by burdens associated with enrolling and participating, and
5. 78 percent are influenced by difficulty referring patients to other providers.
In contrast, two factors were frequently cited as not limiting physicians’ own willingness to participate in Medicaid and CHIP: practice capacity and other patients’ perceptions of Medicaid and CHIP patients. Specifically, 64 percent of nonparticipating physicians said that practice capacity does not limit their own willingness to serve Medicaid and CHIP children, and 71 percent said other patients’ perceptions of Medicaid and CHIP patients do not limit their own willingness to serve these children. 
(For additional information on the degree to which certain factors influence participation for participating and nonparticipating physicians, see app. II, tables 20 and 21.) Other research has suggested that although physicians often cite administrative issues as limiting their own willingness to participate in Medicaid and CHIP, raising reimbursement rates may not increase their participation in these programs. For example, one study found that physicians’ negative perceptions of the program or its beneficiaries may cause them to be reluctant to participate. Other studies have shown that a number of factors unrelated to reimbursement can affect physician participation in these programs, including gender, the type of practice, whether the physician owns or is an employee in a practice, and the geographic area in which the physician practices. Recent provisions have been implemented to increase Medicaid reimbursement rates. Under PPACA, states are required to increase Medicaid payment rates for primary care services for 2013 and 2014. For these 2 years, states will be required to pay certain primary care physicians an amount equal to the amount Medicare pays for primary care services, and the federal government will pay 100 percent of the additional costs. However, one researcher noted that for states with the lowest levels of physician supply the increase in reimbursement rates may not increase the supply of Medicaid primary care providers to the levels necessary for the likely growth in the Medicaid population. On the basis of our national survey, most physicians participating in Medicaid and CHIP experience difficulty referring children in these programs to specialty care, but relatively few have difficulty referring privately insured children to specialty care. This difference is consistent for primary and specialty care physicians at the national level, as well as for primary care urban and primary care rural physicians. 
Physicians who responded to our survey told us that they experience difficulty referring children in Medicaid and CHIP to specialty care for a number of reasons, including specialty physician supply and long waiting lists for specialists willing to accept children covered by Medicaid and CHIP. The most frequently cited specialty types that are difficult referrals for children in Medicaid and CHIP were nearly identical to the types most frequently cited as difficult for privately insured children. On the basis of the results of our survey, more than three times as many physicians experience difficulty referring children in Medicaid and CHIP to specialty care as experience difficulty referring privately insured children. We estimate that nationally, 84 percent of participating physicians experience some or great difficulty referring children in Medicaid and CHIP, compared to 26 percent for privately insured children. Of further note, 34 percent of the physicians experience great difficulty for children in Medicaid and CHIP, compared to 1 percent for privately insured. At the same time, 75 percent experience no difficulty referring privately insured children to specialty care, compared to 16 percent for children in Medicaid and CHIP. (See fig. 6.) Physicians generally have more difficulty referring children in Medicaid and CHIP to specialty care than privately insured children regardless of physician type and geographic location. For each physician group— primary care physicians, specialty care physicians, and primary care urban and primary care rural physicians—a greater percentage of physicians experience difficulty referring children enrolled in Medicaid and CHIP to specialty care than experience difficulty referring privately insured children. (See figs. 7 and 8.) 
(For additional data on referrals to specialty care by child’s insurance and physician specialty type and geographic location, including estimates for uninsured children, the lower and upper bounds of all estimates, and information on statistically significant differences, see app. II, tables 22 through 24.) The June 2011 report examining children’s access to specialty services in one large urban county found disparities in provider acceptance of children in Medicaid and CHIP as compared to privately insured children. The study found that 66 percent of the calls for children covered by Medicaid and CHIP were denied an appointment compared to 11 percent for children with private insurance. The level of difficulty physicians experience in referring children in Medicaid and CHIP to specialty care is similar to the level of difficulty they experience in referring uninsured children. Specifically, the percentage of participating physicians that experience some or great difficulty referring uninsured children to specialty care (84 percent) was the same as the percentage that experience some or great difficulty referring children in Medicaid and CHIP to specialty care. These findings are consistent with the findings of our April 2011 report that children in Medicaid and CHIP and uninsured children were more likely to experience problems receiving needed specialty care than privately insured children. 
“Few specialists in this small geographic area will see children in the first place; if the risk is high and the reimbursement low, it gets harder.” Physicians who responded to our open-ended survey question requesting information on whether they experience difficulty referring children in Medicaid and CHIP to specialty care cited a variety of reasons, including the short supply of specialists in the area, long waiting lists for specialists, specialists not accepting or limiting the number of children covered by Medicaid and CHIP that they will accept, and low reimbursement rates and other administrative issues associated with the programs. The specialties cited by physicians as difficult to refer children to for specialty care were largely the same for children in Medicaid and CHIP and privately insured children. In our survey, we asked physicians who indicated that they face difficulty referring children to specialists to list the particular specialties for which making a referral is difficult. The most frequently cited specialties for children enrolled in Medicaid and CHIP and privately insured children were mental health specialties (such as psychiatry and psychology), dermatology, and neurology. Shortages in these specialty types are not unknown. For example, a 2010 survey of physicians in Michigan found that dermatology, neurology, and pediatric psychiatry were among the most difficult specialties for referrals. Similarly, a 2010 study of the physician workforce in Massachusetts classified the shortages of physicians in dermatology, neurology, and psychiatry as severe. HHS projects that, as for many specialties, the supply of psychiatrists, dermatologists, and neurologists will continue to grow for the next decade or so. However, HHS noted that demand for physician services—both primary and specialty care—is growing faster than supply, and that the resulting shortfall could impede national health care goals. 
Medicaid and CHIP have a significant role in addressing the preventive and specialty health care needs of tens of millions of children in the United States. In April 2011, we reported that children’s access to needed specialty care is an issue warranting closer monitoring. We recommended to CMS—a recommendation to which CMS agreed—that the agency work with states to identify ways to improve annual Medicaid and CHIP reports that states submit to CMS, including ways to capture information on children’s receipt of specialty care services for which they have been referred by a physician or other provider. Findings of our current review, capturing perspectives of physicians working to serve the medical needs of Medicaid and CHIP children, further suggest the need for monitoring of children’s receipt of needed specialty care in Medicaid and CHIP. In particular, our finding that more than three times as many physicians experience difficulty referring children in Medicaid and CHIP to specialty care as experience difficulty referring privately insured children lends importance to our April 2011 recommendation in that it gives the clearest indication to date of the extent of the referral problem for children in Medicaid and CHIP. We provided a draft of this report to HHS for its review and comment. HHS’s letter and general comments are reprinted in appendix III. HHS commented that CMS is committed to improving physician participation rates and that our report will be of significant value to CMS as it works with states and providers to ensure that beneficiaries have access to covered health care services. HHS also raised concerns about the report’s portrayal of the percentage of physicians accepting all Medicaid and CHIP children separately from the percentage accepting some, saying that when the report describes half of physicians as accepting all new children, the reader may assume the other half does not accept any new children. 
HHS suggested that we combine the percentages of physicians accepting some and all. We do not agree with HHS’s suggestion. The report consistently depicts the extent of physicians’ willingness to serve by providing the share accepting all, some, or no children in Medicaid and CHIP as new patients, and combining all and some would mask the important differences in physicians’ willingness to accept Medicaid and CHIP children. HHS also commented that we should provide qualifying statements about our sample of physicians, because the majority of physicians who responded to our survey do not serve a large percentage of children. We conducted statistical testing of the survey data to determine whether physician characteristics—including the percentage of the physician’s practice that is made up of children—influenced physicians’ responses. We found that the percentage of children in physicians’ practices did not affect physician responses to key questions in our survey. We revised our report to provide information about this additional statistical testing. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Administrator of CMS and other interested parties. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions or need additional information, please contact me at (202) 512-7114 or iritanik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix IV. 
We conducted a mixed-mode survey (mail and Web-based) of primary care and specialty care physicians to determine the extent to which nonfederal primary care and specialty care physicians are enrolled as Medicaid and Children’s Health Insurance Program (CHIP) providers and serve children in these programs; the extent to which they are accepting new Medicaid and CHIP patients; factors that may affect physicians’ own willingness to participate in Medicaid and CHIP; and the extent to which participating physicians experience difficulty referring children in Medicaid and CHIP for specialty care. We developed a questionnaire for surveying primary care and specialty care physicians. We pretested the questionnaire with a convenience sample of primary care and specialty care physicians in four states: Georgia, Illinois, Oregon, and Washington. On the basis of the pretest results, we revised the questionnaire for clarity. Most questions were closed-ended, in which physicians selected from a list of possible responses, answered yes or no questions, or selected responses on a three-point scale, such as none, some, or all. The questionnaire also included some open-ended questions to allow respondents to identify specific types of specialty care physicians to whom referrals were difficult and to provide other comments regarding serving children in Medicaid and CHIP. Using the questionnaire, we surveyed a nationally representative sample of primary care and specialty care physicians, including a representative sample of primary care physicians in rural and urban areas. We used the American Medical Association’s Physician Masterfile to select a random sample. We fielded the questionnaire from August 2010 through October 2010. Our random sample included 2,642 primary care and specialty care physicians who were eligible to participate. Eligible physicians were those who
1. work in an office- or hospital-based setting;
2. provide direct patient care to children (age 0-18);
3. have a primary specialty in one of our two groups of physicians;
4. are age 65 or younger; and
5. are not an employee of a federal agency.
We received complete responses from 932 eligible physicians, for an overall response rate of 35 percent. Based on the sampling frame and the results of our nonresponse bias analyses, we were able to generalize results nationally to primary care and specialty care physicians who serve children. Table 2 illustrates the response rates for each physician group surveyed. We analyzed survey results for four groups of physicians: primary care physicians, specialty care physicians, primary care physicians in urban areas, and primary care physicians in rural areas. We analyzed physician responses using standard descriptive statistics. In our analysis, we project results to the national level, and to areas where both managed care and fee-for-service delivery systems are available. All estimates are based on self-reported information provided by the survey respondents and have a margin of error of plus or minus 5 percent or less at the 95 percent confidence level, unless otherwise noted. For the open-ended questions related to difficulties making referrals to specialty care, we used a standard content review method to identify the types of specialists that physicians have difficulty referring children to for specialty care. Our coding process for these qualitative responses involved one independent coder and an independent reviewer who verified the coded comments. Of the 932 eligible physicians responding to our survey, two-thirds were male; over two-thirds worked in an office-based setting; and, for most, child patients represented less than 20 percent of the patients they served (see table 3 and figs. 9 and 10). On average, respondents were 50 years old, and had graduated from medical school 23 years earlier. Ninety-three percent provided at least 20 hours of patient care per week. 
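The response rate and margin-of-error figures above follow from standard formulas; as a minimal sketch (using the simple-random-sample approximation and ignoring the survey weights and design adjustments actually applied):

```python
import math

completes, eligible = 932, 2642
response_rate = completes / eligible
print(f"response rate: {response_rate:.0%}")  # 35 percent

# The worst-case margin of error for an estimated proportion at the
# 95 percent confidence level occurs at p = 0.5.
z95 = 1.96
margin_of_error = z95 * math.sqrt(0.5 * 0.5 / completes)
print(f"margin of error: +/- {margin_of_error:.1%}")  # about 3 percentage points
```

Because the worst-case margin for 932 respondents is below the 5 percent ceiling stated above, a sample of this size is consistent with the reported precision; subgroup estimates (for example, rural primary care physicians) rest on fewer respondents and therefore carry wider margins.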
Physicians responding to our survey were about evenly split between those who employ nurse practitioners or physician assistants and those who do not. About two-thirds of primary care rural physicians in our sample said they employ nurse practitioners or physician assistants (see table 4). The number of physicians who responded to our survey varied by region, with the highest numbers of physicians responding from the South, and the lowest from the Northeast (see fig. 11). We performed checks on survey responses to identify inconsistent answers. We also reviewed survey data for missing or ambiguous responses, and performed statistical testing to determine whether physician characteristics (such as age, gender, or percentage of children physicians reported serving) influenced physicians’ responses to key survey questions. We found that physician characteristics did not influence responses. We also conducted a nonresponse bias analysis to determine whether any bias was introduced in the results due to the absence of responses from some members of the sample. For the nonresponse bias analysis, we utilized data from our survey, the American Medical Association Physician Masterfile, and follow-up telephone interviews with physicians who did not respond to our paper or Web-based survey. Based on the results of our nonresponse bias analysis, we adjusted our survey analysis weights to ensure that physicians were appropriately represented in our study. Based on our systematic survey processes, follow-up procedures, and nonresponse bias analysis, we determined that the questionnaire responses were representative of the experience and perceptions of primary care and specialty care physicians nationally, and of primary care physicians in urban and rural areas. We determined that the data were sufficiently reliable for our purposes. This appendix contains additional data we collected from our 2010 national survey of physicians who serve children. 
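One common way to adjust survey weights for nonresponse, consistent in spirit with the adjustment described above, is to inflate each respondent's base weight by the inverse of the response rate within a weighting class. The sketch below is illustrative only; the class names, counts, and base weights are hypothetical, not the actual adjustment used in the study.

```python
# Hypothetical weighting classes (e.g., physician group by region), each
# with the number sampled, the number responding, and a design (base) weight.
classes = {
    "primary_urban": {"sampled": 800, "responded": 300, "base_weight": 40.0},
    "primary_rural": {"sampled": 600, "responded": 250, "base_weight": 25.0},
    "specialty":     {"sampled": 1242, "responded": 382, "base_weight": 55.0},
}

adjusted_weights = {}
for name, c in classes.items():
    class_rate = c["responded"] / c["sampled"]              # class response rate
    adjusted_weights[name] = c["base_weight"] / class_rate  # inflate by 1/rate

for name, weight in adjusted_weights.items():
    print(name, round(weight, 1))
```

Classes with lower response rates receive larger adjustments, so their respondents stand in for more of the nonrespondents; a nonresponse bias analysis like the one described above checks whether respondents and nonrespondents within a class are similar enough for that assumption to hold.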
The appendix includes the results from the closed-ended survey questions on our questionnaire, but does not include narrative responses that we received to the open-ended questions. Results are generally provided for physicians participating in state Medicaid and Children’s Health Insurance Program (CHIP) programs—that is, physicians who are enrolled in these programs and are also providing services to children in these programs. We report statistically significant differences only when comparing responses by (1) the child’s type of insurance (Medicaid and CHIP coverage and private insurance coverage); (2) physician type (all physicians, primary care physicians, and specialty care physicians); (3) geographic location (rural and urban) of primary care physicians; and (4) child’s type of insurance for each type of physician. We provide national estimates regarding the following:
physician participation—the extent to which physicians are participating, that is, enrolled in Medicaid and CHIP and serving children in these programs (tables 5 through 8);
acceptance of new patients—participating physicians’ acceptance of new child patients by insurance type, physician type, and Medicaid and CHIP delivery model (tables 9 through 12), and the length of time patients must wait for a new appointment, by insurance type (tables 13 through 15);
patient composition—children in Medicaid and CHIP as a share of all children served by participating physicians (tables 16 through 19);
factors limiting Medicaid and CHIP participation—factors cited by nonparticipating and participating physicians as limiting their own participation in these programs (tables 20 through 21); and
level of difficulty referring children for specialty care—the extent to which participating physicians experience difficulty referring children to specialty care (tables 22 through 24). 
In addition to the contact named above, Catina Bradley, Assistant Director; Martha Kelly, Assistant Director; Suzanne Worth, Assistant Director; Zhi Boon; Tim Bushfield; Sean DeBlieck; Laura Henry; Roseanne Price; Dan Ries; Hemi Tewarson; and Jennifer Whitworth made key contributions to this report.
Medicaid and CHIP: Reports for Monitoring Children’s Health Care Services Need Improvement. GAO-11-293R. Washington, D.C.: April 5, 2011.
Oral Health: Efforts Under Way to Improve Children’s Access to Dental Services, but Sustained Attention Needed to Address Ongoing Concerns. GAO-11-96. Washington, D.C.: November 30, 2010.
Health Care Delivery: Features of Integrated Systems Support Patient Care Strategies and Access to Care, but Systems Face Challenges. GAO-11-49. Washington, D.C.: November 16, 2010.
Medicaid Managed Care: CMS’s Oversight of States’ Rate Setting Needs Improvement. GAO-10-810. Washington, D.C.: August 4, 2010.
Medicaid Preventive Services: Concerted Efforts Needed to Ensure Beneficiaries Receive Services. GAO-09-578. Washington, D.C.: August 14, 2009.
Medicaid: Concerns Remain about Sufficiency of Data for Oversight of Children’s Dental Services. GAO-07-826T. Washington, D.C.: May 2, 2007.
Medicaid Managed Care: Access and Quality Requirements Specific to Low-Income and Other Special Needs Enrollees. GAO-05-44R. Washington, D.C.: December 8, 2004.
Medicaid and SCHIP: States Use Varying Approaches to Monitor Children’s Access to Care. GAO-03-222. Washington, D.C.: January 14, 2003.
Medicaid: Stronger Efforts Needed to Ensure Children’s Access to Health Screening Services. GAO-01-749. Washington, D.C.: July 13, 2001.
Medicaid and the Children's Health Insurance Program (CHIP)--two joint federal-state health care programs for certain low-income individuals--play a critical role in addressing the health care needs of children. 
The Children's Health Insurance Program Reauthorization Act of 2009 required GAO to study children's access to care under Medicaid and CHIP, including information on physicians' willingness to serve children covered by Medicaid and CHIP. GAO assessed (1) the extent to which physicians are enrolled and serving children in Medicaid and CHIP and accepting these and other children as new patients, and (2) the extent to which physicians experience difficulty referring children in Medicaid and CHIP for specialty care, as compared to privately insured children. GAO conducted a national survey of nonfederal primary and specialty care physicians who serve children, and asked about their enrollment in state Medicaid and CHIP programs, whether they served and accepted Medicaid and CHIP and privately insured children, and the extent to which they experienced difficulty referring children in Medicaid and CHIP and privately insured children to specialty care. GAO also interviewed officials with the Centers for Medicare & Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS) that oversees Medicaid and CHIP. Most physicians are enrolled in Medicaid and CHIP and serving children covered by these programs. On the basis of its 2010 national survey of physicians, GAO estimates that more than three-quarters of primary and specialty care physicians are enrolled as Medicaid and CHIP providers and serving children in those programs. A larger share of primary care physicians (83 percent) are participating in the programs--enrolled as a provider and serving Medicaid and CHIP children--than specialty physicians (71 percent). Further, a larger share of rural primary care physicians (94 percent) is participating in the programs than urban primary care physicians (81 percent). Nationwide, physicians participating in Medicaid and CHIP are generally more willing to accept privately insured children as new patients than Medicaid and CHIP children. 
For example, about 79 percent are accepting all privately insured children as new patients, compared to about 47 percent for children in Medicaid and CHIP. Nonparticipating physicians--those not enrolled or not serving Medicaid and CHIP children--most commonly cite administrative issues such as low and delayed reimbursement and provider enrollment requirements as limiting their willingness to serve children in these programs. Physicians experience much greater difficulty referring children in Medicaid and CHIP to specialty care, compared to privately insured children. On the basis of the physician survey, more than three times as many participating physicians--84 percent--experience difficulty referring Medicaid and CHIP children to specialty care as experience difficulty referring privately insured children--26 percent. For all children, physicians most frequently cited difficulty with specialty referrals for mental health, dermatology, and neurology. In its comments on a draft of this report, HHS stated that CMS is committed to improving physician participation and that this report will be of value as CMS works with the states to ensure beneficiary access to care. |
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to discuss the Federal Housing Finance Board’s (FHFB) regulatory oversight of the nation’s third largest government-sponsored enterprise (GSE), the Federal Home Loan Bank System (System). At your request, we recently issued a report on FHFB’s oversight. The specific objectives of our review were to evaluate (1) FHFB’s annual safety and soundness and mission compliance examinations of the Federal Home Loan Banks (Banks), (2) other aspects of FHFB’s oversight, and (3) the status of FHFB’s involvement in System business. We reached four primary conclusions about FHFB’s oversight, which I will discuss today. First, FHFB did not ensure that all parts of the annual examinations we reviewed met their internal standards for assessing safety and soundness. Second, weaknesses exist in FHFB’s off-site monitoring and supervisory enforcement programs. Third, FHFB does not have policies or procedures, outside of its reviews of the special affordable housing and community investment programs, to determine whether or the extent to which Banks are supporting their public mission of housing finance. Fourth, FHFB’s involvement in promoting System programs and projects that it subsequently evaluates for mission compliance and safety and soundness could complicate its primary duty as safety and soundness regulator and may prompt questions about FHFB’s objectivity. In addition, I will discuss the concept of a single regulator for all the housing GSEs. We have suggested in past work that Congress consider creating one regulator to oversee the safety and soundness and mission compliance of the three largest GSEs. Our recent work at FHFB and the other GSE regulators has strengthened our belief that this single-regulator concept would be more effective than the existing regulatory structure. FHFB is the only regulator that remains involved in the business of the System it regulates. 
In certain instances, the Federal Home Loan Bank Act (Bank Act) provides for FHFB’s involvement in System business. FHFB has devolved some business or governance and management activities to the Bank boards. However, FHFB continues to function as a promoter and coordinator for the System. To complete our objectives, we reviewed FHFB’s examination function and other relevant oversight activities, such as off-site monitoring and enforcement. This included a review of Bank examination reports and selected supporting work papers. We also reviewed off-site monitoring reports and related documents, as well as documents relevant to FHFB’s enforcement activities. Finally, we reviewed information relevant to FHFB’s managerial functions and the status of its devolution project. As part of our evaluation of FHFB’s examination program, we reviewed the 1996 and 1997 examinations and supporting work papers for a stratified, judgmental sample of six Banks whose assets represented 60 percent of System assets at year-end 1996. We found that examiners performed required examinations but failed to follow all the policies and procedures specified in their examination manual. Most notably, examiners did not always fully assess critical elements of Bank operations—such as internal controls, board of director and management oversight, and the reliability of internal audits—that FHFB, other financial regulators, and we have identified as vital in evaluating an institution’s risk-management capabilities. None of the examinations we reviewed fully assessed more than one of the areas. All failed to assess board of director oversight. While examiners generally assessed management of interest-rate and credit risk, the critical elements just mentioned should be reviewed during every on-site examination to ensure that operations risk is being adequately managed. Operations risk poses the potential for unexpected financial loss due to such problems as inadequate internal controls or fraud. 
Examiners told us that, due to limited staff resources in their office, they were unable to take a top-down examination approach. In addition, we found that examiners relied on the work of Bank internal auditors without any regular assessment of the adequacy of their work. In each of the 12 examinations we reviewed, more than half of the work in each area specified in FHFB’s examination manual was not conducted in accordance with the manual’s procedures. That is, examiners did not complete the examination program in the manual or use the manual’s examination questionnaires. The examiners explained that they often did not have time to complete the procedures described in the manual and that the manual’s procedures often were not useful for certain parts of the examination. In addition, we found that, for most areas covered in the examination, examiners did not document examination procedures or provide support for their conclusions, as required by FHFB standards. In all but 1 of the 12 examinations reviewed, some planned examination procedures were not completed during the course of the examination. In each of the cases, examiners indicated in the work papers that those procedures were not completed because of time constraints. In 2 of the 12 examinations, examiners curtailed the scope but provided no explanation for the change in the work papers. Office of Supervision (OS) officials told us that limited examination staff resources sometimes resulted in scope reductions, and that such reductions occurred in parts of the examination that examiners believed involved less risk. Examiners also failed to expand the examination scope when potentially serious problems were found. Examiners found potentially serious internal control problems at one Bank in consecutive examinations but did not expand their reviews to determine whether there were additional related problems. FHFB did not view those internal control weaknesses as significant. 
Both cases involved an inadequate segregation of duties in a Bank’s investment activities and were weaknesses that recurred at the same Bank. Although inadequate segregation of duties violates fundamental principles of internal control, FHFB did not believe it was necessary to expand its review of the Bank’s system of internal controls. To address these weaknesses, we recommended that FHFB ensure that examiners are (1) fully assessing critical elements of Bank operations, such as internal controls, board of director and management oversight, and the reliability of internal audits; (2) following the guidance and completing the appropriate examination procedures described in the examination manual; and (3) adequately documenting the work performed and conclusions drawn during examinations. In its almost 10 years of operation, FHFB has not developed a compliance program to ensure mission compliance, one of its statutory duties. Historically, mission compliance oversight included reviewing the Banks’ compliance with affordable housing program and community investment program requirements—two programs mandated by law in 1989 that represented less than 1 percent of the System’s total assets in 1997. More recently, FHFB’s mission compliance efforts have included promoting certain mission-related activities; however, FHFB continues to lack policies and procedures that lay out how it will effectively regulate mission compliance. FHFB has recently taken a number of steps to better ensure and assess mission compliance. Specifically, FHFB has (1) required that Banks submit annual reports that describe their new products, pricing, and investment partnerships; (2) commissioned a study to, among other purposes, assist in developing procedures to oversee Bank mission compliance; (3) tested draft examination procedures to ensure mission compliance; and (4) amended regulations for Bank member community support requirements, as well as FHFB’s oversight activities, to ensure member compliance with those requirements. FHFB has also begun to study the System’s investment activities and is considering whether it should limit non-mission related investments.
We view these as positive steps because a high level of non-mission related investments would raise questions about how Banks are fulfilling their mission. Investments at the individual Banks ranged from 17 to 58 percent of assets at year-end 1997. We encourage FHFB to continue its efforts to develop a regulatory framework for a mission compliance oversight program. To be effective, we believe such a program must be based on well-defined policies that delineate what constitutes mission compliance and prescribe the methods to be used to measure whether Banks have fulfilled their mission. We found additional weaknesses in FHFB’s off-site monitoring and enforcement programs that raise concerns about its regulatory effectiveness. Both functions are vital to ensure that any problems are identified promptly and that corrective action is taken when needed. Recognizing the need for timely monitoring, the Office of Supervision developed a regulatory oversight and off-site monitoring system in 1996 that required monthly reviews of Bank data, including minutes of board of directors meetings, internal audit reports, and financial data. In 1997, the Office of Supervision suspended its monthly off-site monitoring due to staff constraints. We found that examiners primarily reviewed the periodic data submitted by the Banks to FHFB as part of their annual preparation for examinations. The Office of Supervision also prepared several periodic reports on financial management policy compliance and interest-rate risk exposures, financial trends, and debt-issuance activities. In addition, the Office of Policy produced several periodic monitoring reports, such as a quarterly profile report that tracks Bank statistics (including Bank membership), the affordable housing program, and unsecured credit. Both offices shared their reports with the board of directors, but they generally did not coordinate their monitoring activities, which they viewed as having different purposes.
FHFB lacked policies and procedures for off-site monitoring, and there appeared to be no correlation between Bank size or scope of activities and the level or type of off-site monitoring performed by these offices. FHFB also lacks an enforcement program that clearly articulates policies and procedures for taking corrective action. We believe FHFB would be better prepared and assured of its ability to take forceful action if its statute enumerated the authorities granted other GSE regulators, such as cease-and-desist and civil money penalty powers. Therefore, we suggest that Congress consider granting FHFB the specific enforcement authority provided other GSE regulators. Mr. Chairman, our review of FHFB oversight would not be complete without a consideration of its unique role in some aspects of System business. We remain concerned, as we have noted in the past, that combining the roles of oversight and involvement in System business may undermine the independence necessary for FHFB to be an effective safety and soundness and mission regulator. We recognize that the responsibility for FHFB’s involvement in System business is, in part, due to statutory authorities carried over from FHFB’s predecessor, the Bank Board. For example, the Bank Act gives FHFB authority to issue the System’s consolidated obligations and requires that FHFB approve Bank dividends and bylaws. FHFB and System officials agree that a regulator should not be involved in the day-to-day operations of Banks, but the degree and type of involvement they consider appropriate varies. Since 1994, FHFB has identified and devolved certain business or governance and management activities, within specified limits, to Banks’ boards. These activities include the authority to establish presidents’ salaries and incentive plans, approve affordable housing program applications, determine the compensation of Bank directors, and set Bank performance targets.
Management activities identified by FHFB as yet to be devolved include the authority to approve dividends, handle certain general administrative matters, and set credit policies. Although FHFB has delegated some of these functions to the Bank boards, we suggest that Congress consider ensuring, through legislation, that FHFB not be involved in the business of the System. We are aware of and support the provisions of the legislation pending in the House and Senate that would begin to correct some of our concerns about FHFB’s involvement in System business. As the System’s regulator, FHFB also has the potential to provide central coordination and promotion for the System, and FHFB officials view promotion as part of FHFB’s role as a regulator. Its 5-year strategic plan, which FHFB says is integral to its budget and performance planning, illustrates the prominence of the promotion and coordination roles in agency operations. Of the plan’s nine objectives, one addresses the examination function, and five address changes FHFB advocates to enhance Bank performance, such as expanding the acceptable uses for advances and expanding acceptable collateral on advances to include small business loans. Of the other three objectives, two address the devolution effort, and one deals with disseminating public information about FHFB’s performance. We identified other examples of FHFB’s promotion and coordination activities during our review. For example, the FHFB chairman coordinates and participates in periodic meetings with Bank chairs and vice chairs that include coordinating congressional lobbying efforts. FHFB’s involvement with these Bank officials—whom it appoints—in lobbying for statutory changes illustrates the potential FHFB has for influence over these positions.
We believe FHFB should have regulatory authority over business functions to ensure safety and soundness and mission compliance, but we emphasize that having such regulatory authority differs from being a participant in System business on a regular basis and from promoting a particular program or activity over other mission-related activities. Further, mission promotion is not a substitute for mission regulation, which has to be built on measurable and enforceable regulations and policies. The last issue I want to address today is our suggestion that Congress consider creating a single regulator to oversee the safety and soundness and mission compliance of the three housing GSEs. In addition to the System, these include the Federal National Mortgage Association (Fannie Mae) and the Federal Home Loan Mortgage Corporation (Freddie Mac), which are regulated for safety and soundness by the Office of Federal Housing Enterprise Oversight (OFHEO), an independent regulator within the Department of Housing and Urban Development (HUD), and HUD itself, which has general regulatory authority and oversees Fannie Mae’s and Freddie Mac’s mission compliance. In past work on the housing GSEs, we discussed the advantages and disadvantages of creating a single housing GSE regulator. Since then, we have continued to monitor and evaluate the housing GSEs and their regulators. For example, we issued a report on OFHEO in October 1997 and updated that work in July 1998. We also reported on HUD’s mission oversight of Fannie Mae and Freddie Mac in July 1998. We found that OFHEO had not fully completed two important duties: establishing risk-based capital standards and implementing a comprehensive and timely examination program. At your request, Mr. Chairman, we provided new information to this subcommittee in July regarding OFHEO’s progress in implementing a comprehensive oversight program. 
We reported that OFHEO had made some progress but still faced challenges in completing those two important duties. Our work at HUD raised a number of issues about its oversight of Fannie Mae and Freddie Mac, some of which would be eliminated or at least mitigated if there were a single regulator for the housing GSEs. For example, HUD is required to establish goals for the GSEs’ purchase of mortgages serving targeted groups while also maintaining the GSEs’ financial soundness, because such purchases could increase credit risk. We found that HUD had adopted a conservative approach to setting the goals that placed a high priority on maintaining the GSEs’ financial soundness, but that HUD had not fully analyzed the financial consequences of setting higher goals. As a result of our work at OFHEO and HUD, we found no evidence that would cause us to alter our previous position regarding a single regulator. In addition, our current work at FHFB has strengthened our conclusion that FHFB’s, OFHEO’s, and HUD’s oversight of the housing GSEs would be more effective if combined. Thus, we continue to support our 1994 and 1997 positions that a single housing GSE regulator be created to oversee the safety and soundness and mission compliance of the housing GSEs. A single regulator would be better able to evaluate the trade-off between mission and safety and soundness as well as evaluate the financial aspects of new mortgage products and other GSE activities, such as nonmission investments, because it would combine expertise in housing and finance. A single regulator would be more independent and objective than separate agencies, because it would not be affiliated with one particular GSE, or dependent on that GSE for its continued existence and thus subject to its influence. A single regulator would be more prominent in government than either FHFB or OFHEO is alone.
This should further enhance the single regulator’s independence and make it more competitive in attracting and retaining staff with appropriate expertise and experience. In addition, a single regulator could capitalize on sharing staff expertise in such areas as examinations, risk monitoring, financial analysis, and economic research. The examinations staffing constraints we identified at FHFB and similar staffing concerns identified at OFHEO might be alleviated by combining FHFB, OFHEO, and HUD resources. Similarly, OFHEO’s work in setting capital standards and developing a stress test could be useful in oversight of the System. This concludes my prepared statement, Mr. Chairman. My colleagues and I would be pleased to answer any questions.
Pursuant to a congressional request, GAO discussed the Federal Housing Finance Board's (FHFB) regulatory oversight of the Federal Home Loan Bank System, focusing on: (1) FHFB's annual safety and soundness and mission compliance examinations of the Federal Home Loan Banks; (2) other aspects of FHFB's oversight; and (3) the status of FHFB's involvement in System business. GAO noted that: (1) FHFB's examination function did not ensure that annual examinations met FHFB's internal examination standards, including adequate documentation for work performed; (2) the examinations included reviews of interest-rate and credit risk, two of the primary types of risk faced by the Banks; (3) however, the examinations did not fully assess other areas that FHFB and others have identified as vital in evaluating an institution's risk management capabilities, such as management and board of directors oversight, internal control systems, and internal audit function; (4) weaknesses existed in FHFB's off-site monitoring and supervisory enforcement programs; (5) FHFB lacks a coordinated off-site monitoring system, which is an important part of effective safety and soundness oversight, because it can provide an early warning of potential problems; (6) FHFB also lacks an enforcement program that clearly articulates policies and procedures for taking corrective action; (7) the situation is further aggravated because the statute grants only general authority to enforce the statute and make orders; (8) the only authority delineated in the statute is the authority to remove or suspend Bank employees, directors, officers, or agents for cause; (9) FHFB does not have policies or procedures, outside of its reviews of the special affordable housing and community investment programs, to determine whether or the extent to which Banks are supporting housing finance; (10) FHFB recognized this omission and has begun to take steps to establish such a program, but no final actions have been taken to establish a regulatory framework to ensure mission compliance; (11) FHFB continues to be involved in System business; (12) many of the authorities that involve FHFB in System business are specified in statute or are carryover regulations from its predecessor agency; (13) FHFB began to devolve many of the functions in 1994, but it still plays a role in coordination and promotion of Banks; (14) GAO continues to believe that such involvement in the System's business functions may undermine FHFB's independence and lead to questions about its objectivity; and (15) GAO supports its position that a single housing GSE regulator be created to oversee the safety and soundness and mission compliance of the housing government-sponsored enterprises.
The SSI program was established in 1972 under Title XVI of the Social Security Act and provides payments to low-income aged, blind, and disabled persons—both adults and children—who meet the financial eligibility requirements. A disability is defined for adults as the inability to engage in any substantial gainful activity because of any medically determinable physical or mental impairment(s) that can be expected to result in death, or has lasted or can be expected to last for a continuous period of not less than 12 months. To meet financial eligibility requirements, in fiscal year 2014, an individual’s or married couple’s monthly countable income had to be less than the federal SSI benefit rate of $721 per month for an individual and $1,082 per month for a married couple. Further, countable resources (such as financial institution accounts) had to be $2,000 or less for individuals and $3,000 or less for married couples. Recipients are to report changes in their income and financial resources to SSA as soon as they occur, and a penalty may be deducted from the recipient’s benefit if the report is not made within 10 days after the close of the month in which they change. In addition, to determine an individual’s ongoing financial eligibility for SSI program payments, SSA conducts periodic “redeterminations.” During a redetermination, field office staff perform a variety of activities to verify recipients’ income, resources, living arrangements, and other factors to determine their continued SSI program eligibility. These activities may include querying internal and external databases, checking with employers and banks, and performing interviews with recipients to obtain current information. To ensure that only recipients who remain disabled continue to receive benefits, SSA is required to conduct periodic continuing disability reviews (CDR) in certain circumstances.
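As an illustration only, the two financial tests described above (income against the fiscal year 2014 federal benefit rate, and countable resources against the program limits) can be sketched as a simple check. This is a simplified sketch, not SSA's actual determination logic: real determinations apply income exclusions and other rules not modeled here, and the function and variable names are ours.

```python
# Simplified sketch of the SSI financial tests described above,
# using fiscal year 2014 figures. Illustrative only; actual SSA
# determinations involve income exclusions and other rules.

FBR_2014 = {"individual": 721, "couple": 1_082}          # monthly federal benefit rate ($)
RESOURCE_LIMIT = {"individual": 2_000, "couple": 3_000}  # countable resource limit ($)

def meets_financial_tests(countable_income, countable_resources, unit="individual"):
    """Return True if both the income test and the resource test are met."""
    income_ok = countable_income < FBR_2014[unit]          # must be below the benefit rate
    resources_ok = countable_resources <= RESOURCE_LIMIT[unit]  # at or below the limit
    return income_ok and resources_ok

print(meets_financial_tests(500, 1_500))            # True
print(meets_financial_tests(800, 1_500))            # False: income at or above $721
print(meets_financial_tests(500, 2_500, "couple"))  # True: couple limit is $3,000
```

A couple's higher benefit rate and resource limit are looked up from the same tables, which keeps the two tests and their thresholds in one place.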
These reviews assess whether recipients are still eligible for benefits based on several criteria, including their current medical condition. During the CDR process, SSA applies a medical improvement standard. Under this standard, SSA may discontinue benefits for an individual if it finds substantial evidence demonstrating both that a beneficiary’s medical condition has improved and that the individual is able to engage in substantial gainful activity. If SSA determines that these conditions have not been met in the course of conducting a CDR, the recipient may continue to receive benefits until the individual receives a subsequent CDR (which potentially could result in a discontinuation of benefits), dies, or transitions to Social Security retirement benefits. Multiple entities are involved in determining recipients’ initial and continued eligibility. After an SSA field office determines that an SSI applicant meets the program’s financial requirements, a state Disability Determination Services agency reviews the applicant’s medical eligibility. Similarly, SSA field offices conduct redeterminations of recipients’ financial eligibility, and state Disability Determination Services agencies assess continued medical eligibility. Complex eligibility rules and many layers of review with multiple handoffs from one person to another make the SSI program both complicated and costly to administer. During fiscal year 2014, SSA estimated that it made $5.1 billion in improper payments in the program. As our prior work has shown, improper payments, including overpayments, may result, in part, because eligibility reviews are not conducted when scheduled, information provided to SSA is incomplete or outdated, or errors are made in applying complex program rules.
Because CDRs are a key mechanism for ensuring continued medical eligibility, when SSA does not conduct them as scheduled, program integrity is affected and the potential for overpayments increases as some recipients may receive benefits for which they are no longer eligible. SSA reported in January 2014 that it is behind schedule in assessing the continued medical eligibility of its disability program recipients and has accumulated a backlog of 1.3 million CDRs. In recent years, SSA has cited resource limitations and a greater emphasis on processing other workloads as reasons for the decrease in the number of reviews conducted. From fiscal years 2000 to 2011, the number of adult and childhood CDRs fell approximately 70 percent, according to our analysis of SSA data. More specifically, CDRs for children under age 18 with mental impairments—a group that comprises a growing majority of all child SSI recipients—declined by 80 percent. Children make up about 15 percent of all SSI recipients, and we reported in 2012 that CDRs for 435,000 child recipients with mental impairments were overdue, according to our analysis of SSA data. Of these, nearly half had exceeded their scheduled CDR date by 3 years, and 6 percent exceeded their scheduled date by 6 years. Of the 24,000 childhood CDRs pending 6 years or more, we found that about 70 percent were for children who, at initial determination, SSA classified as likely to medically improve within 3 years of their initial determination. Twenty-five percent— over 6,000—of these pending CDRs were for children medically expected to improve within 6 to 18 months of their initial determination. Reviews of children who are expected to medically improve are more productive than reviews of children who are not expected to improve because they have a greater likelihood of benefit cessation and thus yield higher cost savings over time. 
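The backlog figures above fit together with simple arithmetic. A quick check, using the rounded figures as restated here (the variable names are ours):

```python
# Arithmetic check of the childhood CDR backlog figures cited above.
# All figures are the rounded values from our 2012 analysis as restated here.
pending_6plus_years = 24_000  # childhood CDRs pending 6 years or more

# "about 70 percent" were for children classified at initial determination
# as likely to medically improve within 3 years
likely_to_improve = pending_6plus_years * 70 // 100

# "Twenty-five percent" were for children expected to improve within
# 6 to 18 months of their initial determination
expected_6_to_18_months = pending_6plus_years * 25 // 100

print(likely_to_improve)         # 16800
print(expected_6_to_18_months)   # 6000
```

The 25-percent share works out to 6,000 on the rounded total of 24,000; the testimony's "over 6,000" reflects the unrounded underlying counts.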
SSA officials report that the agency has placed a higher priority on conducting CDRs for populations other than SSI children because it believes those reviews will result in greater savings over time. However, our analysis of SSA’s data showed that SSI child claims that received a CDR in fiscal year 2011 were ceased at a higher rate than other claims. In our June 2012 report, we recommended that SSA eliminate the existing CDR backlog of cases for children with impairments who are likely to improve and, on an ongoing basis, conduct CDRs at least every 3 years for these children. If this recommendation were implemented, SSA could potentially save $3.1 billion over 5 years by preventing overpayments to children with mental impairments, according to our analysis of fiscal year 2011 data. SSA generally agreed that it should complete more CDRs for SSI children but emphasized that it is constrained by limited funding and competing workloads. Moving forward, one of the goals in SSA’s Fiscal Year 2014-2018 Strategic Plan is to strengthen the integrity of the agency’s programs. In line with this goal, SSA requested additional program integrity funding for fiscal year 2015 to enable the agency to conduct more CDRs, and Congress made these funds available. SSA recently reported that in each year since 2012, it has increased the number of reviews conducted for SSI children, completing nearly 90,000 reviews in fiscal year 2014, in contrast to the 25,000 reviews it completed in fiscal year 2011, the year prior to our audit. The agency stated it will continue to work toward eliminating its CDR backlog for SSI children if it receives sustained and predictable funding. While additional funding may help address the backlog, we continue to have concerns about the agency’s ability to manage its resources in a manner that adequately balances its service delivery priorities with its stewardship responsibility.
Because SSA has noted that it considers SSI childhood CDRs to be a lower priority than other CDRs, it is unclear whether the agency will continue to use new increases in funding to review children most likely to medically improve—reviews that could yield a high return on investment. As a result of CDRs, disability recipients that SSA determines have improved medically may cease receiving benefits; however, several factors may hinder SSA’s ability to make this determination. In prior work, our analysis of SSA data showed that 1.4 percent of all people who left the agency’s disability programs between fiscal years 1999 and 2005 did so because SSA found that they had improved medically; however, recipients more commonly left for other reasons, including conversion to Social Security retirement benefits or death. At that time, we identified a number of factors that challenged SSA’s ability to assess disability program recipients using the medical improvement standard, including (1) limitations in SSA guidance for applying the standard; (2) inadequate documentation of prior disability determinations; (3) failure to abide by the requirement that CDR decisions be made on a neutral basis—without a presumption that the recipient remained disabled; and (4) the judgmental nature of the process for assessing medical improvement. For example, we noted that—based on a review of the same evidence—one examiner may determine that a recipient has improved medically and discontinue benefits, while another examiner may determine that medical improvement has not been shown and will continue the individual’s benefits. Furthermore, we concluded that the amount of judgment involved in the decision-making process increases for certain types of impairments, such as psychological impairments, which are more difficult to assess than others, such as physical impairments.
These issues have implications for the consistency and fairness of SSA’s medical improvement decision-making process, as well as program integrity, and in 2006, we recommended that SSA clarify several aspects of its policies for assessing medical improvement. Since then, SSA has taken some steps that may help address the issues we raised but has not fully implemented our recommendation. For example, SSA began implementing an electronic claims analysis tool for use during initial disability determinations to (a) document a disability adjudicator’s detailed analysis and rationale for either allowing or denying a claim, and (b) ensure that all relevant SSA policies are considered during the disability adjudication process. In addition, SSA reported in its fiscal year 2016 annual performance plan that it will continue to expand the use and functionality of this analysis tool to help hearing offices standardize and better document the hearing decision process and outcomes. However, SSA’s guidance for assessing medical improvement may continue to present challenges when applying the standard. As of May 2015, the guidance does not provide any specific measures for what constitutes a “minor” change in medical improvement, and it instructs examiners to exercise judgment in deciding how much of a change justifies an increase in the ability to work. We continue to believe that SSA should fully implement the actions we previously recommended to improve guidance in this area. In light of the questions that have been raised about SSA’s ability to conduct and manage timely, high-quality CDRs for its disability programs, we are currently undertaking a study of SSA’s CDR policies and procedures for this Subcommittee. More specifically, we are examining how SSA prioritizes CDRs, the extent to which SSA reviews the quality of CDR decisions, and how SSA calculates cost savings from CDRs. We look forward to sharing our findings once our audit work is complete. 
In addition to overpayments that result when CDRs are not conducted as scheduled, overpayments may result when financial information provided to SSA is incomplete or outdated. In December 2012, we reported that SSA lacks comprehensive, timely information on SSI recipients’ financial institution accounts and wages. For fiscal year 2011, the unreported value of recipients’ financial institution accounts, such as checking and savings accounts, and unreported wages were the leading causes of overpayments, accounting for about $1.7 billion (37 percent) of all SSI overpayments. Specifically, overpayments occurred because recipients did not report either the existence of financial institution accounts, increases in account balances, or monthly wages. SSA has developed tools in recent years to obtain more comprehensive and timely financial information for SSI recipients, but these tools have limitations: The Access to Financial Institutions initiative, which SSA implemented in all states in June 2011, involves electronic searches of about 96 percent of the financial institutions where SSI recipients have a direct deposit account. This initiative therefore provides SSA with independent data on a recipient’s financial institution accounts for use in periodically redetermining their eligibility for payments. However, in our December 2012 report, we found that this initiative does not capture all relevant financial institutions, and SSA staff were generally not required to conduct these searches for recipients who, for example, report a lesser amount of liquid resources or do not report any financial accounts. The Telephone Wage Reporting system, implemented in fiscal year 2008, allows recipients to call into an automated telephone system to report their monthly wages.
Agency officials reported that this system should ease the burden of reporting wages for some recipients and save time for SSA staff since wage data are input directly into SSA’s computer system. At the same time, the accuracy and completeness of information obtained through this system are limited because it relies on self-reported data and the system is unable to process wage information for individuals who work for more than one employer. SSA recently reported that it is continuing to gain experience using these tools and is studying the effects of recent expansions to the Access to Financial Institutions initiative. In May 2015, the SSA Office of the Inspector General (OIG) noted that despite SSA’s implementation of the Access to Financial Institutions initiative, the dollar amount of overpayments associated with financial account information has increased over the last few fiscal years. The OIG recommended that SSA continue (1) monitoring Access to Financial Institutions to ensure a positive return on investment and (2) researching other initiatives that will help to reduce improper payments in the SSI program. SSA agreed with the OIG’s recommendations and noted that it is studying the effects of recent expansions of the initiative, including an increase in the number of undisclosed bank account searches performed and inclusion of more recipients with lower levels of liquid resources. Over the years, we have also identified issues with inaccurate wage reporting by employers that have contributed to improper payments. We and the SSA OIG have previously identified patterns of errors and irregularities in wage reporting, such as employers using one Social Security number for more than one worker in multiple tax years. Inaccurate wage information can lead SSA to make either overpayments or underpayments to SSI recipients. In July 2014, we identified indications of possible Social Security number misuse in wage data used by SSA for the SSI program.
In one case, an individual in California had wages reported from 11 different employers in seven other states during the same quarter of calendar year 2010, suggesting that multiple individuals may have been using the SSI recipient’s Social Security number and name for work. According to SSA, Social Security number misuse can cause errors in wage reporting when earnings for one individual are incorrectly reported to the record of another person having a similar surname. However, we found that the prevalence of such Social Security number misuse in SSA’s wage data was unclear. When an SSI overpayment is identified, recipients are generally required to repay the overpaid amount, although they can request a waiver of repayment under certain circumstances. We reported in December 2012 that SSA increased its recovery of SSI overpayment debt by 36 percent from $860 million to $1.2 billion from fiscal year 2002 to fiscal year 2011. However, SSA grants most overpayment waiver requests, and waiver documentation and oversight was limited. Specifically, in fiscal year 2011, SSA approved about 76 percent of all SSI overpayment waivers requested by recipients. Claims representatives, who are located in SSA’s approximately 1,230 field offices, have the authority to approve such waivers, and SSA does not require supervisory review or approval for overpayment waivers of $2,000 or less. According to the standards for internal control in the federal government, agencies must have controls in place to ensure that no individual can control all key aspects of a transaction or event. We recommended that SSA review the agency’s policy concerning the supervisory review and approval of overpayment waiver decisions of $2,000 or less. SSA agreed with this recommendation and subsequently convened a workgroup to evaluate this policy and review the payment accuracy of a random sample of waiver decisions. 
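To make the internal-control point above concrete, the approval rule can be sketched as a simple routing check. This is our illustrative sketch, not SSA's actual system: the $2,000 threshold is from the testimony, while the function name and the stricter "strict" policy branch (reflecting the kind of control our recommendation asked SSA to review) are assumptions.

```python
# Illustrative sketch (not SSA's actual system) of the waiver-approval rule
# described above: under current policy, only waiver decisions above $2,000
# require supervisory review; under a stricter policy, every waiver decision
# would get a second approver, so no single person controls the transaction.
SUPERVISORY_REVIEW_THRESHOLD = 2_000  # dollars, per the policy described above

def requires_supervisory_review(waiver_amount, policy="current"):
    """Return True if a second approver must review the waiver decision."""
    if policy == "current":
        # A claims representative alone can approve at or below the threshold.
        return waiver_amount > SUPERVISORY_REVIEW_THRESHOLD
    # Hypothetical stricter policy: all waiver decisions get supervisory review.
    return True

print(requires_supervisory_review(1_500))                   # False under current policy
print(requires_supervisory_review(2_500))                   # True
print(requires_supervisory_review(1_500, policy="strict"))  # True
```

The sketch shows why the current rule leaves one person in control of every key aspect of sub-$2,000 waiver decisions, which is the internal-control standard at issue.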
SSA found that the dollar accuracy rate of the randomly selected waiver transactions it reviewed in the SSI program was nearly 99 percent. However, in a more recent review of 5,484 SSI waiver decisions of less than $2,000, SSA found that 50 percent of decisions were processed incorrectly. In light of this finding, we continue to believe that additional supervisory review may improve program integrity. However, as a result of its earlier study findings, SSA decided to continue its current policy for waiver decisions of $2,000 or less. Beyond SSA’s field offices, we also found limited oversight of the waiver process on a national basis. In our December 2012 report, we concluded that management oversight of the SSI overpayment waiver decision process is limited. Specifically, SSA did not analyze trends in the type, number, and dollar value of waivers granted, including those waivers below the $2,000 approval threshold that SSA staff can unilaterally approve, or determine whether there were waiver patterns specific to SSA offices, regions, or individual staff. Without such oversight and controls in place, SSA is unaware of trends in the waiver process that may jeopardize the agency’s ability to maximize its overpayment recovery efforts and safeguard taxpayer dollars. We recommended that SSA explore ways to strengthen its oversight of the overpayment waiver process. While the agency agreed with the intent of this recommendation, it cited resource constraints on creating and analyzing data at the level of detail specified in our recommendation. However, we continue to believe that without additional steps to better compile and track data on waiver patterns specific to SSA offices and individuals, SSA will be constrained in its efforts to recover identified overpayments. SSA faces management challenges that may constrain its ability to ensure program integrity. 
As mentioned above, SSA has cited challenges with balancing the demands of competing workloads, including CDRs, within its existing resources. In February 2015, we reported that SSA has taken a number of steps toward managing its workload and improving the efficiency of its operations, but capacity challenges persist, and delays in some key initiatives have the potential to counteract efficiency gains. SSA is also facing succession planning challenges in the coming years that could affect program integrity. In 2013, we reported that SSA projects that it could lose nearly 22,500 employees, or nearly one-third of its workforce, due to retirement—its primary source of attrition—between 2011 and 2020. An estimated 43 percent of SSA’s non-supervisory employees and 60 percent of its supervisors will be eligible to retire by 2020. During this same time, workloads and service delivery demands are expected to increase. The high percentage of supervisors who are eligible to retire could result in a gap in certain skills or institutional knowledge. For example, regional and district managers told us they had lost staff experienced in handling the most complex disability cases and providing guidance on policy compliance. SSA officials and Disability Determination Services managers also told us that it typically takes 2 to 3 years for new employees to become fully proficient and that new hires benefit from mentoring by more experienced employees. SSA’s Commissioner also noted that as a result of attrition, some offices could become understaffed, and that without a sufficient number of skilled employees, backlogs and wait times could significantly increase and improper payments could grow. Federal internal controls guidance states that management should consider how best to retain valuable employees, plan for their eventual succession, and ensure continuity of needed skills and abilities. 
Thus, we recommended that SSA update its succession plan to mitigate the potential loss of institutional knowledge and expertise and help ensure leadership continuity. In response to our recommendation, SSA published a human capital operating plan, detailing specific workforce management and succession planning steps SSA will take across the organization. We believe this is an important step in addressing the upcoming workload and workforce challenges. In our 2013 report, we also concluded that SSA’s long-term strategic planning efforts did not adequately address the agency’s wide-ranging challenges. For example, in the absence of a long-term strategy for service delivery, the agency would be poorly positioned to make decisions about its critical functions. Such decisions include how the agency will address disability claims backlogs while ensuring program integrity, how many and what type of employees SSA will need for its future workforce, and how the agency will more strategically use its information technology and physical infrastructure to best deliver services. Federal internal controls guidance states that federal agencies should comprehensively identify risks, analyze and decide how to manage these risks, and establish mechanisms to deal with continual changes in governmental, economic, industry, regulatory, and operating conditions. We recommended that SSA develop a long-term strategy for service delivery. We also noted that without a dedicated entity to provide sustained leadership, SSA’s planning efforts would likely remain decentralized and short-term. We recommended that SSA consider having an entity or individual dedicated to ensuring that SSA’s strategic planning activities are coordinated agency-wide. In response to these recommendations, SSA appointed a chief strategic officer responsible for coordinating agency-wide planning efforts. 
SSA has also recently taken a key step toward developing a long-range strategic plan to address wide-ranging management challenges. In April 2015, SSA published Vision 2025, which incorporates input from employees, advocates, members of Congress, and other stakeholders and articulates a vision of how SSA will serve its customers in the future. As a next step, SSA has indicated that it will create working groups representing a cross-section of SSA staff. Under the leadership of SSA’s Chief Strategic Officer, they will be charged with developing a strategic roadmap for the next 10 years that will define actions SSA will need to take and resources required to achieve SSA’s vision for 2025. Moving forward, SSA will need to implement the steps outlined in its long-term strategic plan—as well as those in its human capital plan—to ensure it has the capacity and resources needed to manage future workloads while making quality decisions. As stated in Vision 2025, SSA plans to realize its service delivery vision in part by simplifying and streamlining its policies and procedures, and in 2013, SSA formed an SSI Simplification Workgroup that is tasked with identifying promising proposals that could simplify the SSI program and reduce improper payments. Program complexity has been a long-standing challenge for SSI that contributes to administrative expenses and the potential for overpayments. In addition to collecting documentation of income and resources to determine SSI benefit amounts, SSA staff must also apply a complex set of policies to document an individual’s living arrangements and financial support being received. These policies depend heavily on recipients to accurately report a variety of information, such as whether they live alone or with others; the extent to which household expenses are shared; and exactly what portion of those expenses an individual pays. Over the life of the program, these policies have become increasingly complex. 
The complexity of SSI program rules pertaining to these areas of benefit determination is reflected in the program’s administrative costs. In fiscal year 2014, SSI benefit payments represented about 6 percent of benefits paid under all SSA-administered programs, but the SSI program accounted for 33 percent of the agency’s administrative expenditures. In our prior work, we noted that staff and managers we interviewed cited program complexity as a problem leading to payment errors, program abuse, and excessive administrative burdens. In December 2012, we also reported that the calculation of financial support received was a primary factor associated with SSI overpayments from fiscal year 2007 through fiscal year 2011. The SSI Simplification Workgroup is considering options for simplifying benefit determination policies as well as adding a sliding scale for multiple SSI recipients in the same family. In light of these long-standing issues, we have begun work for this Subcommittee that will provide information about SSI recipients who are often subject to complex benefit determination policies. Generally, if two members of a household receive SSI benefits, both members are eligible for the maximum amount of benefits, unless they are married. However, this benefit structure does not directly reflect savings that may result from multiple individuals sharing household expenses, and the policies SSA currently applies to address this issue are highly complex and burdensome. Over the last two decades, various groups have proposed applying a payment limit to the benefits received by multiple-recipient households, which could be used in place of the more complex calculations SSA currently performs. Our new study is examining such households and the potential administrative or other barriers to implementing a change in the amount of benefits received by households with multiple recipients. 
Another long-standing challenge for the SSI program is that once on benefits, few individuals leave the disability rolls, despite the fact that some may be able to do so through increased earnings and employment. Our prior work has noted that if even a small percentage of disability program recipients engaged in work, SSA’s programs would realize substantial savings that could offset program costs. To this end, the Ticket to Work and Work Incentives Improvement Act of 1999 provided for the establishment of the Ticket to Work and Self-Sufficiency Program (Ticket program), which provides eligible disability program recipients with employment services, vocational rehabilitation services, or other support services to help them obtain and retain employment and reduce their dependency on benefits. In May 2011, we reported that the Ticket program continued to experience low participation rates, despite revisions to program regulations that were designed to attract more disability program recipients and service providers. Further, although participants have a variety of differing needs, the largest service providers in the program focused on those who were already working or ready to work. One service provider told us that certain disability program recipients are often screened out because they lack the education, work experience, or transportation needed to obtain employment. We made several recommendations for improving program oversight in our May 2011 report, which the agency has since implemented. However, the number of individuals using the Ticket program who left the disability rolls because of employment remains low—under 11,000 in fiscal year 2014. Individuals who start receiving SSI as children often collect benefits for the long term, potentially because they do not receive interventions that could help them become self-sufficient. Approximately two-thirds of child recipients remain on SSI after their age 18 redeterminations. 
Research has found that children who remain on SSI benefits into early adulthood have higher school dropout rates, lower employment rates, and lower postsecondary enrollment rates in comparison to the general young adult population. Additionally, these youth participate in vocational services at a low rate. In light of this, concerns have been raised that SSA is not doing enough to inform youth on SSI who are approaching age 18 about available employment programs. At the request of this Subcommittee, we will soon begin work to examine SSA’s efforts to promote employment and self-sufficiency among youth on SSI. Chairman Boustany, Ranking Member Doggett, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this statement, please contact me at (202) 512-7215 or bertonid@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Rachel Frisk, Alexander Galuten, Isabella Johnson, Kristen Jones, Phil Reiff, and Walter Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The SSI program, administered by SSA, provides cash assistance to eligible aged, blind, and disabled individuals with limited financial means. In fiscal year 2014, the program paid nearly $56 billion in federally funded benefits to about 8.2 million individuals. The program has grown substantially in recent years, and is expected to grow more in the near future. 
SSA has a stewardship responsibility to guard against improper payments and to address program integrity issues that, if left unchecked, could increase the potential for waste, fraud, and abuse. SSA estimated that it made $5.1 billion in improper payments in fiscal year 2014. In addition, SSA's management concerns are wide-ranging and include ensuring its workforce is able to meet service delivery needs. In this statement, GAO describes SSA's challenges with (1) ensuring SSI program integrity and (2) managing the program. This testimony is primarily based on GAO products issued from 2002 to 2015, which used multiple methodologies, including analyses of SSI administrative data from fiscal years 2000 to 2011; reviews of relevant federal laws, regulations, and guidance; and interviews of SSA officials. In May 2015, GAO obtained current data on improper payments and updates from SSA reports and guidance on actions taken to address GAO's past recommendations. The Social Security Administration (SSA) faces challenges with ensuring the integrity of the Supplemental Security Income (SSI) program's processes for preventing, detecting, and recovering overpayments. For example, SSA is required in certain circumstances to periodically review SSI recipients' medical and financial eligibility, yet the lack of timely reviews and difficulty getting complete financial information hinder SSA's ability to prevent and detect overpayments to recipients. SSA estimated that $4.2 billion of the payments it administered to SSI recipients in fiscal year 2013 were overpayments. In June 2012, GAO found that SSA had accumulated a substantial backlog of recipients' medical eligibility reviews, including for over 23,000 children with mental impairments who were deemed likely to medically improve when initially determined eligible for benefits. 
GAO recommended that SSA eliminate its backlog for these children and conduct timely reviews going forward, estimating based on fiscal year 2011 data that these actions could save more than $3.1 billion over 5 years by preventing related overpayments. SSA recently reported that it has increased the number of medical eligibility reviews conducted for SSI children in each year since 2012, completing nearly 90,000 reviews in fiscal year 2014—in contrast to the 25,000 reviews completed in fiscal year 2011—and plans to continue these efforts. In December 2012, GAO also reported that a lack of comprehensive, timely information on SSI recipients' financial accounts and wages led to overpayments. GAO noted that SSA had recently developed electronic tools to address these issues, and SSA reported that the agency is gaining experience using them. However, despite these efforts, in May 2015, the SSA Office of the Inspector General found that overpayments associated with financial account information have increased in recent years and recommended SSA continue researching initiatives that will help to reduce improper payments in the SSI program. SSA agreed to this recommendation. SSA faces several management challenges in administering SSI related to workload, service delivery, and program complexity. In 2013, GAO reported that as a result of an ongoing retirement wave, SSA faced a loss of institutional knowledge and expertise, which may result in increased review backlogs and improper payments. GAO recommended that SSA update its succession plan, in line with federal internal controls guidance that states that management should plan for succession and ensure continuity of needed skills and abilities. In response, SSA published a human capital document detailing its succession plans. 
Federal internal controls guidance also states that agencies should comprehensively identify and manage risks, and GAO also recommended SSA develop a long-term service delivery plan to determine, among other things, how SSA will address both program integrity and other workloads. In response, SSA published an April 2015 description of its vision for future service delivery and indicated it plans to develop a strategy for achieving this vision moving forward. SSA also noted the importance of simplifying its policies and procedures to meet its service delivery goals, and SSA has plans to do so. Program complexity is a long-standing challenge that contributes to administrative expenses and potential overpayments. GAO is beginning work for this subcommittee related to how benefit amounts are calculated for multiple SSI recipient households, an area that SSA has considered for program simplification. GAO has previously made recommendations to help SSA strengthen its program oversight and address management challenges. In response, the agency has taken some steps and plans to do more.
FAA’s Federal Air Marshal program expanded the Sky Marshal program, which was established as part of the Customs Service in the 1970s to deter hijackings to and from Cuba. Shortly after TWA Flight 847 was hijacked in Athens, Greece, in June 1985, then President Ronald Reagan called for an expansion of the Sky Marshal program. On August 8, 1985, the Congress enacted the International Security and Development Cooperation Act, which established the statutory basis for the program within DOT, which further delegated the responsibility to FAA. Since then, the Federal Air Marshal program has provided specially trained, armed teams of civil aviation security specialists for deployment worldwide on antihijacking missions. As a result of the events of September 11, 2001, the President and the Congress decided to rapidly expand the Service. On September 17, 2001, FAA began to develop a plan to recruit federal air marshals in unprecedented numbers. Accordingly, FAA designed a process and put together a team of specialists, incorporating resources from its Human Resource Management, Aviation Medical, Civil Aviation Security, and Federal Air Marshal Training organizations to implement the recruitment process. The process was designed to ensure that each air marshal candidate met the medical entry standards, passed DOT’s drug-testing program, and was preliminarily judged suitable to obtain a top-secret clearance, which is required for permanent employment with the Service. As part of the assessment, each candidate was required to participate in a security interview with an investigator from FAA, OPM, or the U.S. Investigative Services (an OPM contractor), as well as interviews with representatives of FAA’s Office of Human Resource Management and the Service. 
In October 2001, FAA implemented this recruitment process, and the Deputy Secretary of Transportation also set July 1, 2002, as the deadline for recruiting, hiring, and training enough federal air marshals to provide coverage on flights that posed high security risks. In November 2001, after the Aviation and Transportation Security Act was passed, TSA assumed FAA’s responsibilities for aviation security and supported FAA’s recruitment effort through July 2002. Between October 2001 and July 2002, TSA received nearly 200,000 applications for federal air marshal positions. Thousands of applicants were assessed for employment, and TSA, through OPM, initiated full background investigations for top-secret clearances. Other federal agencies also made law enforcement officers available to augment the Service until TSA could hire, train, and deploy the first few classes of new air marshals. See appendix II for a demographic profile of the Service’s expanded workforce. With expansion, the Service’s annual budget grew from $4.4 million for fiscal year 2001 to $545 million for fiscal year 2003. Currently, the Service operates a headquarters office in Virginia, 21 field offices, and a specialized air marshal training and human resource facility in Atlantic City, New Jersey. Basic law enforcement training takes place at the Federal Law Enforcement Training Center in Artesia, New Mexico. See appendix III for a map of these facilities and appendix IV for a time line of the major organizational events affecting the Service since September 11, 2001. DHS brings together some 23 federal agencies comprising over 100 organizations, including the Federal Air Marshal Service, in what the department describes as the most significant transformation of the U.S. government since the merger in 1947 of the various branches of the armed forces into the Department of Defense. DHS is divided into five directorates, one of which, Border and Transportation Security, includes both TSA and ICE. 
Among other organizations, ICE includes a portion of the former Immigration and Naturalization Service, now called the Bureau of Citizenship and Immigration Services; the U.S. Customs Service, now called Customs and Border Protection; and, as of November 2, 2003, the Federal Air Marshal Service. To expedite the deployment of thousands of air marshals, the Service obtained preliminary background checks and provided abbreviated training before deploying air marshal candidates on flights. As a result, the Service was able to meet the Deputy Secretary’s deployment deadline and carry out its mission. To deploy its expanded workforce as quickly as possible between October 2001 and June 2002, the Service followed the same expedited background check procedures that federal agencies have used since 1995, when Executive Order 12968 authorized the temporary use of interim security clearances. Under these procedures, candidates who require security clearances and pass preliminary background checks may, within about 24 hours, obtain interim security clearances that allow them to work until their full background checks have been completed and they obtain their final clearances. A preliminary background check consists of an interview with a security specialist; a review of an applicant’s responses to a standard questionnaire for national security positions; a criminal history check, based on fingerprints and a review of biographical data from National Crime Information Center files; and credit reports. An interim security clearance may be revoked at any time if unfavorable information is identified during an investigation. Between October 2001 and July 2002, thousands of candidates were assessed for employment, and TSA, through OPM, initiated full background investigations for top-secret clearances. 
According to TSA management, the majority of the candidates passed the preliminary background checks and obtained interim security clearances that allowed them to work while their full background checks were being completed. Less than a quarter of the candidates did not pass the preliminary checks because of bankruptcy, bad credit, or other problems. TSA placed these candidates on a “pending/ready” list and did not allow them to work as air marshals, but it pursued full background investigations for them because many of the issues identified during preliminary background checks are minor and are favorably resolved during full background investigations. Full background checks for thousands of candidates identified a small number as unsuitable. In June 2003, the Service placed 80 air marshal candidates on administrative leave while TSA resolved issues that surfaced during full background investigations. By August 2003, 47 of these candidates had received their top-secret clearances and have since been returned to flight status. Of the 33 remaining candidates, 19 have been denied clearances, and the Service is taking steps to terminate their employment; 4 have been approved for, but have not yet received, top-secret clearances; 7 have resigned; and the remaining 3 are awaiting TSA’s approval of their top-secret clearances. The Service said it has continued to identify some candidates as unsuitable, and as of October 2003, 14 air marshals were on administrative leave because of issues that surfaced during full background checks. When definitive information for each of these cases is obtained, the Service said, the air marshal would be returned to flight status or steps would be taken to terminate the air marshal’s employment. During our review, we found that the background investigations used to grant top-secret clearances for air marshals were not being expedited as requested. 
According to TSA, an expedited background investigation costs $3,195 and should be completed within 75 days, whereas a regular background check costs $2,700 and should be completed within 120 days. Consequently, for every 1,000 background investigations, the Service paid a premium of about $495,000. TSA paid the expedited fees to OPM up front, as required, but as of July 2003, about 23 percent of the air marshals were still operating under interim security clearances. Some candidates had been awaiting final clearances for up to a year. The Service told us in April 2003 that it had, on numerous occasions, raised concerns about the delays in processing final security clearances but had met with little success. Additionally, the Service said that its efforts to reclaim the difference in cost were unsuccessful. DHS said that TSA’s Credentialing Office had taken steps since June 2003 to ensure that every active air marshal was operating under a top-secret clearance; and as of October 2003, about 3 percent of the active air marshals were operating under interim security clearances. According to OPM, the primary reason for these clearance-processing delays is that the agency has received an unprecedented number of requests for background investigations governmentwide since September 2001. For fiscal year 2002, OPM’s data indicated that the average processing time for 75-day expedited background checks was 96 days. OPM said that the expedited requests received higher-priority processing than the regular (120-day) background checks, resulting in faster turnaround for services related to the expedited requests. OPM added that its contractor charges premiums for expedited requests because the costs for these requests are higher. Consequently, according to OPM, no price adjustments are made when overall deadlines are missed. 
While the Service is not responsible for the delays in completing air marshals’ full background investigations, we found that it could have provided OPM with information for scheduling the investigations more efficiently. As candidates applied for positions between October 2001 and June 2002 and their preliminary background checks were completed, the Service offered conditional employment to some of the candidates and, as discussed, placed others on a “pending/ready” list. However, the Service did not make this information available to OPM. As a result, some potential employees received their top-secret clearances ahead of other candidates who were being trained or deployed on flights. We brought this issue to the attention of the Service in March 2003; and in May, the Service sent OPM a list of candidates and asked OPM to give highest priority to investigations of those who were already deployed on flights. In addition, the Service has asked OPM to schedule the investigations for senior managers first and then to schedule investigations for other applicants on a first-in, first-out basis. On May 28, 2003, the Service also detailed a liaison from its Office of Field Operations to assist TSA’s Office of Security in setting priorities for reviewing and adjudicating the backlog of background investigations. To deploy the requisite number of air marshals by the Deputy Secretary’s July 2002 deadline, the Service revised and abbreviated its training program. From October 2001 through July 2002, it modified the air marshal curriculum incrementally, eventually reducing the original 14-week program to about 5 weeks for candidates without prior law enforcement experience and about 1 week for candidates with such experience. 
The revised curriculum was designed to provide candidates with the basic law enforcement knowledge, skills, and abilities needed to perform their duties as air marshals, including knowledge of the Service’s rules and regulations, physical skills, and basic and advanced marksmanship. The curriculum no longer included certain elements of the original training program, such as driving skills and cockpit familiarization, because these were not deemed critical for air marshals to perform their duties. The curriculum also eliminated a 1-week visit to an airline and some instruction in the Service’s policies and procedures, which was to be provided on the job. Moreover, although the curriculum retained instruction in both basic and advanced marksmanship, air marshal candidates no longer had to pass an advanced marksmanship test to qualify for employment. Candidates were still required to pass a basic test with a minimum score of 255 out of a possible 300—the highest qualification standard for any federal law enforcement agency, according to the Service. To provide all the newly hired air marshals with needed skills, beyond the basic abilities the Service determined were critical for immediate deployment, the Service instituted a new 4-week advanced training course in October 2002. All air marshals hired from October 2001 through July 2002, regardless of their previous law enforcement experience, were required to complete the course by January 2004. This course includes some elements, such as emergency evacuation and flight simulator training, that the Service did not include in the 5-week course because, although it considered the elements important for air marshals to carry out their mission, it did not consider them critical for immediate deployment. In addition, the course provides further training in advanced marksmanship skills. Air marshals hired after August 2002 attend this advanced training course after completing their basic training. 
The Service has developed a centralized tracking system to ensure that all air marshals take this course. Although the Service is now providing additional marksmanship training, its decision not to restore the advanced marksmanship test as a qualification standard for employment has proved controversial. Passing this test would require candidates to demonstrate their speed and accuracy in a confined environment similar to the environment on board an aircraft. The DOT IG’s report suggested that the Service needed to adopt a firearms qualification standard that was more stringent and comprehensive than the basic firearms qualifying test. The Service disagreed, emphasizing that its minimum score is the most stringent in federal law enforcement and adding that its 4-week course provides further training in advanced firearms skills. Our review of the Service’s documentation confirmed that instruction in advanced marksmanship is a critical part of this training, even though passing this element is no longer a condition of employment. In August 2003, the Service reported that proposed cutbacks in its training funds would require it to extend the date for all air marshals hired from October 2001 through July 2002 to complete the 4-week advanced course from January 2004 to mid-2004. According to DHS, the Service’s transfer to ICE will not adversely affect either the funding for air marshals’ training or the schedule for newly hired air marshals to complete the 4-week training course, since a total of $626.4 million is being transferred from TSA to ICE. While this funding exceeds the $545 million that the Service received for fiscal year 2003, it is not clear how much of the funding will be allocated for training. Given the importance of training to ensure that air marshals are prepared to carry out their mission, we believe that maintaining adequate funding for training should remain a priority. 
Additionally, should reductions in the funding for training be required, our recent work on strategic training and development efforts provides alternatives to across-the-board cuts that an agency can consider—such as evaluating training needs, setting training priorities, developing alternative training requirement scenarios, and determining how much funding each of these scenarios would require. Our work further suggests that it is important for agencies to ensure that their training and development efforts are cost-effective, given the anticipated benefits, and to incorporate measures that can be used to demonstrate the contributions that training and development programs make to improving results. These principles are applicable at all times, but especially when funds are limited. Determining whether air marshals with prior law enforcement experience have the same training needs as those without such experience could help set cost-effective training priorities. We found that a cornerstone of human capital management is the ability to successfully acquire, develop, and retain talent. Investing in and enhancing the value of employees through training and development is a crucial part of addressing this challenge. This investment can include not only formal and on-the-job training but also other opportunities, such as rotational assignments. Our work further specifies that agencies should link their training curriculum to the competencies needed for them to accomplish their mission. The Service has begun developing a formal training curriculum beyond the basic and advanced training courses described above. This curriculum requires air marshals to participate in 5 days of recurrent training each quarter that, in addition to the quarterly weapons qualification, includes training in advanced firearms, operational tactics, defensive tactics, surveillance detection, emergency medicine, physical fitness, and legal and administrative elements. 
Additionally, the Service is developing rotational assignments for air marshals that allow them to participate in law enforcement task forces, as well as fill a variety of operational and training positions in headquarters and the field. The Service recognizes that such opportunities can not only enhance professional development but also help to prevent problems such as boredom and burnout. According to the Secretary of Homeland Security, one of the advantages of the Service's transfer to ICE is that it will enhance air marshals' professional development opportunities. As the Service grew from a small, centralized organization to an organization with 21 field offices and thousands of employees, its need for information, policies, and procedures to manage its expanded workforce and operations also grew. The Service collects several types of information that it can use to continually improve its operations and oversight and, in some instances, it has used the information to do so. In other instances, however, the Service lacks sufficiently detailed information for effective monitoring and oversight. The new, decentralized organization has also required new or revised policies and procedures to cover new situations and ensure that the same guidance is available to air marshals in all locations. According to DHS, it recognized that the Service would need to revise its existing policies or draft new ones, and it has been working to do so since March 2002. Nonetheless, its policy-development efforts sometimes responded to problems, rather than anticipating and preventing them. DHS told us that it is committed to proactively addressing policy issues and developing procedures. 
The Service collects information on air marshals’ work schedules and other issues, including potential security incidents documented in reports filed by air marshals after completing their missions, allegations of misconduct by air marshals, and reasons provided by air marshals for leaving the Service. Such information can be useful to managers in monitoring mission operations and retention. According to our Standards for Internal Control in the Federal Government, the information should be recorded and communicated to management and others within the agency who need it, and it should be provided in a form and within a time frame that enables them to carry out their responsibilities. For example, one way to do this would be to ensure that pertinent information is captured in sufficient detail to help management identify specific actions that need to be taken. Moreover, according to our human capital model, a fact-based, performance-oriented approach to human capital management is a critical success factor for maximizing the value of human capital. In addition, high-performing organizations use data to determine key performance objectives and goals, which enable them to evaluate the success of their human capital approaches. For example, obtaining employee input and suggestions can provide management with firsthand knowledge of the organization’s operations, which management can use to ensure ongoing effectiveness and continuous improvement. The Service has analyzed and made effective use of its mission reports and conduct data, but other management information that it currently collects is not sufficiently well defined or detailed for monitoring and managing the workforce. Although the Service initially had no systematic means of obtaining regular input from its employees, it has recently put processes in place to solicit air marshals’ opinions and suggestions. 
In addition, the Service is participating in an Office of Management and Budget program assessment project. As part of this effort, DHS said it has identified annual and long-term performance measures and related performance outcome targets to evaluate the Service’s organizational effectiveness along key strategic goals and objectives. Through this project and other strategic planning initiatives, DHS says it expects to systematically measure and analyze the Service’s organizational performance along human capital, mission scheduling, professional development, and quality of work-life dimensions. When the Service was first directed to expand its mission and operations, it was using a manual system to schedule air marshals for flight duty. This system was quickly overwhelmed as the number of air marshals and flights grew, leading to the concern that air marshals were being scheduled inconsistently for flight duty. The Service acknowledged that during this period, some air marshals were overworked while others were underutilized. In June 2002, the Service replaced the manual system with an automated system, which, according to Service officials, improved the agency’s ability to schedule and deploy its workforce. While the automated system expanded the Service’s scheduling capability, it did not provide the Service with all of the information it needed for effective monitoring. For example, it did not initially break down data on air marshals’ use of leave into enough categories for the Service to assess whether some air marshals were abusing sick leave in order to get a day off. Specifically, an article in USA Today reported that about 1,250 air marshals called in sick over an 18-day period. Eventually, the Service determined that the article was based on a report generated by the automated scheduling system that overrepresented the number of air marshals who were on sick leave. 
Although the report was labeled "Sick Leave," it included data on all air marshals who were unavailable for flight duty, not only for sickness but also for other reasons such as administrative leave, and it listed each day of unavailability for flight duty as a separate incident, although the same air marshal might have been unavailable for several days in a row for the same reason. In analyzing data from the scheduling system, we found that because the system reported all leave charges—sick, administrative, military, or other—as sick leave, the Service could not distinguish air marshals who were unavailable to fly because they were out sick from air marshals who were unavailable to fly because of injuries but were available for light field office duty. For example, an air marshal with an injured ankle might not be able to fly, but could perform administrative work in the field office. The Service has since modified the scheduling system to obtain better information on the type of leave—sick, military, or administrative—charged by air marshals who are unavailable to fly. The DOT IG also investigated the reported sick leave abuse and likewise found that the allegation was based on a misunderstanding of the report's contents stemming from its label. Although the automated scheduling system provides information for managers to monitor how many hours air marshals are scheduled for work, automated information is not available for comparing the number of hours actually worked with the number of hours initially scheduled. These numbers can differ when flights are delayed or cancelled because of bad weather or mechanical problems. Information on these differences is important for Service managers to consider because of their implications for both the Service's mission and air marshals' quality of life. 
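The overcounting described above, in which every day of unavailability was listed as a separate incident regardless of leave type, can be illustrated with a short sketch. The names, dates, and leave types here are hypothetical, not drawn from the Service's actual data:

```python
from datetime import date, timedelta

# Hypothetical day-level unavailability records like those the "Sick Leave"
# report aggregated: one row per marshal per day, regardless of leave type.
records = [
    ("marshal_a", date(2002, 8, 1), "sick"),
    ("marshal_a", date(2002, 8, 2), "sick"),   # same illness, next day
    ("marshal_a", date(2002, 8, 3), "sick"),
    ("marshal_b", date(2002, 8, 1), "administrative"),
    ("marshal_c", date(2002, 8, 2), "military"),
]

# Naive count: every day-row is treated as a separate "incident,"
# which is effectively what the original report did.
naive_incidents = len(records)

# Better: count only sick-leave rows, grouped into consecutive-day spells.
sick = sorted((m, d) for m, d, t in records if t == "sick")
spells = 0
prev = None
for marshal, day in sick:
    if prev is None or marshal != prev[0] or day - prev[1] > timedelta(days=1):
        spells += 1  # a new spell of sick leave starts here
    prev = (marshal, day)

print(naive_incidents)  # 5 day-rows counted as incidents
print(spells)           # 1 actual sick-leave spell
```

As the sketch shows, five day-level rows reduce to a single sick-leave spell once leave type and consecutive days are taken into account, which is the kind of distinction the modified scheduling system was intended to support.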
For example, if air marshals work too many hours, they may become too tired to concentrate on their mission, or if they spend too much time away from home, they may become dissatisfied with their jobs. Information on the number of hours flown will also be important for the Service to carry out a new long-term study, initiated by the Director in the summer of 2002, on the medical and physiological effects of flying. To date, the Service, in collaboration with FAA’s Civil Aviation Medical Institute and the Air Force, has identified a methodology and objectives for the study and completed a literature review to identify trends, possible risks, and other pertinent information. As part of the study, the Service plans to collect and analyze data from recurrent air marshal physical examinations and to compare these data with physiological data from the Civil Aviation Medical Institute. Although the Service is still awaiting funding approval to conduct the physical examinations and develop the database, Service officials plan to begin both efforts in the first quarter of fiscal year 2004. The study team has also developed a training course on human physiology as it relates to the aviation environment. The Service expects this course to be available early in fiscal year 2004. On the basis of some early findings from the study team’s literature search, the Service set limits in its automated flight-scheduling system to address mission, quality-of-life, and health concerns. The system limits scheduled “duty time” to 10 hours a day or 50 hours a week. Our analysis of schedules from the automated system for 37 weeks found that about 92 percent of the schedules were consistent with these controls. The Service added that further guidance has been developed that results in scheduling air marshals to fly an average of 4.2 hours per day, 18 days per month. 
Thus, air marshals should fly about 75 hours per month, which the Service said was within the aviation and military standards for pilots—90 and 100 hours per month, respectively. As part of implementing this guidance, the Service is conducting a detailed analysis of individual flight schedules to determine if the goals are being met. The Service reported on the basis of this analysis that, as of September 2003, scheduled flight time averaged 76.5 hours per month. The Service's analysis, however, focuses on flight schedules and not on actual hours worked by the air marshals. Information on the hours air marshals actually work is not available for automated comparison with the hours they are scheduled to work because the actual work hours are recorded manually on time and attendance sheets and are not transferred to the automated system. Without an automated way to compare actual hours worked with scheduled hours, the Service lacks a tool needed to determine if the automated flight-scheduling system is meeting its objectives related to mission, quality-of-life, and health concerns. DHS agreed that the information on actual hours should be automated and said that the Service intended to incorporate this capability via personal digital assistants (PDA) issued to all air marshals. Between September 2001 and September 2003, air marshals submitted reports of almost 2,100 incidents that occurred during their missions. A little over 40 percent of these mission reports describe passengers who exhibited suspicious behavior to the air marshals. About 18 percent of the reports discuss disagreements or conflicts between air marshals and airline or airport personnel over airport or airline procedures. The remaining mission reports cover a wide variety of incidents that the Service grouped into 17 other categories, as shown in appendix V. 
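The scheduled-versus-actual comparison discussed above can be sketched in a few lines. The names and hour figures are hypothetical; the 10-hour daily duty limit is the scheduling control the Service reported:

```python
# A minimal sketch of the kind of automated comparison the Service lacks:
# scheduled versus actual duty hours, flagging marshals whose actual hours
# (e.g., because of flight delays) exceed the 10-hour daily limit.
DAILY_DUTY_LIMIT = 10  # hours, per the Service's scheduling control

# (marshal, scheduled_hours, actual_hours) for one duty day; values hypothetical
day_records = [
    ("marshal_a", 10, 10),
    ("marshal_b", 10, 13),  # flight delays pushed actual hours over the limit
    ("marshal_c", 8, 8),
]

# Flag each marshal whose actual hours exceeded the limit, with the overage.
over_limit = [(m, actual - DAILY_DUTY_LIMIT)
              for m, scheduled, actual in day_records
              if actual > DAILY_DUTY_LIMIT]
print(over_limit)  # [('marshal_b', 3)]
```

A comparison of this kind depends on capturing actual hours in automated form; as long as they are recorded only on paper time and attendance sheets, no such check can be run.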
The Service has taken some action to follow up on the air marshals’ mission reports, but it has not addressed all of the issues the reports raise. For example, the Service established a liaison with the airlines in response to reports of disagreements and conflicts with the airlines. According to an official with the Air Transport Association, this action has improved relations between the air marshals and the airlines. Nevertheless, some coordination and communication issues remain. In October 2002, for instance, the Service purchased PDAs for distribution to all air marshals. Service officials told us that before making the purchase, they contacted FAA about obtaining approval to use the feature that would allow the air marshals to communicate with one another aboard aircraft. In August 2002, FAA advised the Service that it planned to approve this PDA feature for use by air marshals during flight. However, FAA’s approval was never finalized, and the airlines have not allowed the air marshals to use the PDAs for this purpose because of concerns about interference with flight control or navigational signals. According to Service officials, air marshals have stopped using their PDAs’ communication feature in flight until FAA approves its use, and the Service continues to work with FAA to obtain such approval. The Service reports that air marshals continue to use other features of the PDAs, such as their cell phone, pager, e-mail, surveillance, and photo-display capabilities. Between October 2001 and July 2003, the Service collected data on almost 600 reports of misconduct by air marshals, which it classified into over 40 categories. 
Among the categories with large numbers of reported cases were "insubordination or failure to follow orders," "loss of government property," and "abuse of government credit cards." According to Service officials, they have used the misconduct database to identify issues such as abuse of government credit cards and cell phones that need to be investigated. For example, during the Service's rapid expansion, management noted an unacceptable number of unauthorized charges and late payments associated with air marshals' use of the government-issued travel card. Further investigation revealed that the process of claiming reimbursement for travel was slow and burdensome and there were misunderstandings about what charges were proper. After corrective action, the delinquency rate dropped dramatically. Similarly, an analysis of the misconduct data indicated that a number of air marshals were accused of being abusive to airline personnel during the boarding process. A detailed review of the data pointed to differences in the Service's and the airlines' procedures for boarding aircraft. Subsequently, the Service negotiated a mutually agreeable solution with the airlines to resolve these differences. In these instances, the Service used misconduct reports to effectively refine its management controls. The Service maintains data on the number of air marshals who leave the Service and categorizes their reasons for leaving. However, these data are not detailed enough for management to identify and follow up on issues that could affect retention. Retention is important both to ensure the continued deployment of experienced personnel who can carry out the Service's security mission and to avoid the costs to recruit, train, and deploy new personnel, which, according to the Service, total about $40,275 per person. 
Our analysis of the Service’s data on separations indicates that from September 2001 through July 2003, about 10 percent of the thousands of newly hired air marshals left the Service. However, during August 2002, when the media reported a “flood” of resignations from the Service, our analysis indicated that slightly more than 4 percent of the newly hired air marshals had left. We found that the most frequently recorded reasons for air marshals separating from the Service were to take other jobs and personal reasons. Such reasons are not detailed enough for management to identify and target issues that may hinder retention. To gain greater insight into the reasons for separation, we examined the Service’s documentation for 95 selected cases. For 37 of these cases, the departing air marshals cited multiple reasons for leaving the Service. For example, one departing air marshal cited personal reasons and going back to his previous job. Even with this additional information, we could not identify management issues that might have led to the separations because the reasons documented by the Service were too general and vague. The Service’s method of collecting data on air marshals’ reasons for separation may be responsible, in part, for the generality and vagueness of the information. Specifically, the Service uses either an open-ended exit interview with the air marshal’s first-line supervisor, the air marshal’s resignation letter, or both to collect the data. The supervisor conducts and writes up the exit interview and an administrative official in the field forwards the interview write-up, resignation letter, or both to human resource officials in Service headquarters. A human resource specialist then reviews the documentation and determines which of the reasons cited is the primary reason for the separation. This method of collecting information has several limitations. 
First, the open-ended exit interview may not prompt responses that go beyond generalities, such as taking another job or personal reasons, to determine whether management issues, such as problems in transferring to a duty station closer to home or burdensome work schedules, contributed to the air marshal's resignation. Second, using the first-line supervisor to conduct the interview may discourage detailed responses, either because the air marshal may not want to reveal his or her concerns or reasons or because the supervisor may not want to report specific issues. Finally, using a human resource specialist to determine the primary reason for a separation means that the reason is filtered through another party rather than provided directly by the air marshal who is resigning. Our work on human capital has determined that feedback from exit interviews can guide workplace-planning efforts. If these exit interviews are constructed to collect valid and reliable data, they allow managers to spotlight areas for attention, such as employee retention. According to the DOT IG's report, air marshals interviewed by the IG's office were concerned about the way the air marshal program was being managed, which contributed to low morale in the Service. The air marshals the IG interviewed expressed dissatisfaction with the Service's work schedules, aircraft boarding procedures, and dress code policy. During the early stages of its expansion, the Service did not have processes or mechanisms in place to gather input and suggestions from its employees. Such processes and mechanisms would not only allow the Service to monitor air marshals' concerns about management issues, as the DOT IG's report also noted, but would also provide the Service with its employees' firsthand knowledge and insights that it could use to improve operations and policies. 
According to our work on human capital, leaders at agencies with effective human capital management seek out the views of employees at all levels and communication flows up, down, and across the organization, facilitating continuous improvement. Tools commonly used for obtaining employee input include employee satisfaction surveys, employee advisory councils, and employee focus groups. Recently, the Service began putting processes and mechanisms in place to gather input from its employees. The Service reports that all field offices have methods, such as advisory committees, for air marshals to ask questions or express concerns to senior field office management. Additionally, question and answer sessions are held when the Director, Deputy Director, or Assistant Director visits a field office and during the basic and advanced training classes. To obtain further employee input, the Service participated in an ombudsman program that TSA sponsors to improve its operations and customer service. According to the Service, it is also developing a lessons learned and best practices intranet site that will allow substantive communication on issues of interest and concern to all air marshals. Policies and procedures that were designed to support a small, centralized Service were not designed for and could not accommodate the needs of a vastly expanded and decentralized workforce. According to our Standards for Internal Control in the Federal Government, internal control should provide for an assessment of the risks an agency faces from both external and internal sources. For example, when an agency expands its operations to new geographic areas, it needs to give special attention to the risks that the expansion presents. 
In attempting to hire, train, and deploy its new workforce by the Deputy Secretary’s deadline and establish a new field organization to support its new domestic mission, the Service had little time to systematically assess the risks of expansion and ensure that its policies and procedures were appropriate and adequate. Efforts to develop new policies or modify existing ones to accommodate new circumstances took time, and during the transition, some air marshals voiced concerns to the media. Delays in hiring supervisors and the discretion they were given in interpreting policies may have contributed to air marshals’ confusion. Before its expansion, the Service was a centralized organization with one office and fewer than 50 air marshals. Because there were no field offices, the Service had no policy on transfers between field locations. The vacancy announcement used during the hiring process stated that field offices would be located in various major metropolitan areas, and a Service official stated that air marshal applicants were allowed to express their preferences for particular field locations. According to a media report, air marshals alleged that transfers to their preferred locations were promised but that those promises were not kept. Our review of a recruiting video and other documents related to the hiring process did not find any evidence that transfers were promised; however, the recruiting video indicated that opportunities for transfer existed. Service officials said that no transfers were promised and that as the Service hired air marshals and implemented its new field office structure, it assigned the newly hired marshals to the 21 field offices as needed. Service officials later added that except in hardship cases, the air marshals were expected to remain in the originally assigned field offices for 3 to 5 years. 
The DOT IG also investigated this issue and interviewed air marshals who alleged that promises of transfers made during the hiring process were not kept, but the IG did not determine whether there was a legitimate basis for the air marshals’ concerns. By June 2002, the Service had received over 500 applications for transfers. Until a policy was issued, the Service tried to respond to the air marshals’ requests and to address quality-of-life issues by developing guidance that provided for transferring any air marshal (1) who owned a home within 100 miles of an established field office and (2) whose immediate family resided in that location—provided that both of these conditions existed before the air marshal’s employment with the Service. While the Service communicated this guidance orally to field managers, some air marshals were reportedly confused about why their requests for transfers were denied. In January 2003, the Service postponed further action on transfer requests, officials said, until applicable policies—on hardship and transfers—were finalized. On May 29, 2003, the Service implemented a hardship transfer policy that established processes and criteria for approving transfer requests when an air marshal or an immediate family member incurs a medical or child-custody hardship. In developing the policy, the Service said it looked into other law enforcement agencies’ transfer programs to identify best practices. During the early months of the Service’s expansion, air marshals expressed confusion and dissatisfaction to the media about policies covering their attire. At that time, the Service had no written dress code policy. Instead, according to Service officials, the agency carried over an unwritten FAA policy that air marshals should dress appropriately for their missions and the air marshal in charge of a mission should determine what attire was appropriate for that mission. 
According to the Service, some airline personnel complained that air marshals were not dressed to blend in with the other passengers seated around them. The Service said that the lack of a written policy might have created confusion for some newly hired air marshals whose initial training did not cover the Service's policy on dress and whose field office supervisors had discretion in interpreting the policy. In May 2002, the Service issued a policy that directed air marshals to dress so as to present a professional image and blend into their environment. The Service believes that this policy enables air marshals to perform their duties without drawing undue attention to themselves. For example, an air marshal might wear a business suit on a morning flight to New York and a sports shirt on an afternoon flight to Orlando. To explain and ensure consistent application of the policy, the Director of the Service discussed this policy with supervisors and staff during his visits to many field offices and to the Service's training center. Air marshals also discussed concerns about the Service's workweek policy with the media. Some air marshals complained that they had been promised 4-day workweeks to compensate for the rigors of travel but were being required to work 5-day workweeks. Other air marshals reported being confused about the reasons for the change from a 4-day to a 5-day workweek and questioned whether this change was necessary. According to Service officials, the change in workweek policy occurred on July 1, 2002, when the Director of the Service brought the air marshals into compliance with the requirements of law enforcement availability pay (LEAP), a pay premium for unscheduled duty equaling 25 percent of a law enforcement officer's base salary. Under this pay computation method, air marshals are required to average 10 hours of overtime per week. 
LEAP became applicable to the Service with the passage of the Aviation and Transportation Security Act on November 19, 2001, but the Service initially continued to compute air marshals’ schedules according to the method it had previously used, called the “first forty” method. Under this method, the first 40 hours worked in a week constituted the basic workweek, and 4-day and even 3-day workweeks were allowed if air marshals accrued 40 hours within that time. However, Service officials determined, in consultation with TSA’s legal department and human resources office, that a change to a 5-day workweek was necessary for the Service to comply with LEAP. Accordingly, the Director ordered a 5-day basic workweek, effective July 1, 2002. The DOT IG reported that over 85 percent of the air marshals its staff interviewed expressed concern about working 5 consecutive 10-hour mission days (with 2 consecutive off-duty days), saying that it resulted in fatigue and illness. Service officials acknowledged that working 10-hour days can create fatigue, but said that such days are routine in the law enforcement community. Service officials also maintained that fatigue can be managed by applying scheduling controls and monitoring air marshals’ schedules. However, as noted, the Service lacks the data to ensure that air marshals’ actual work hours are consistent with the scheduling controls. The Service is likely to face challenges in implementing changes resulting from its mergers into DHS in March 2003 and into ICE in November 2003. While changes in the size of its workforce could eventually occur in light of the many recent improvements to aviation security and federal budget constraints, the plans announced to date point to changes in the roles, responsibilities, and training of ICE’s workforces; the Service’s coordination with TSA and other organizations; and administrative matters. 
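The LEAP computation described above can be shown with a minimal sketch: availability pay equals 25 percent of base pay, in exchange for averaging 10 unscheduled overtime hours per week. The salary figure below is hypothetical, used only to illustrate the arithmetic:

```python
# LEAP premium: 25 percent of base salary, as described in the report.
LEAP_RATE = 0.25
REQUIRED_AVG_OVERTIME_HOURS_PER_WEEK = 10

base_annual_salary = 60_000  # hypothetical figure for illustration

leap_premium = base_annual_salary * LEAP_RATE
total_pay = base_annual_salary + leap_premium

print(leap_premium)  # 15000.0
print(total_pay)     # 75000.0
```

The premium is a flat percentage rather than an hourly overtime rate, which is why compliance is defined in terms of averaging the required unscheduled hours rather than paying for each hour worked.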
DHS reported looking forward to the opportunities accompanying the Service's pending merger into ICE. Our recent work on mergers and organizational transformations proposes several key practices and implementation steps that could assist the Service and other departmental organizations as they face these challenges. One challenge for the Service will be to implement any changes in the size or in the roles and responsibilities of its workforce that the department determines are warranted after the Service is transferred to ICE. The right size of a security organization's workforce appears to depend, among other things, on the nature and scope of the terrorist threat and on the totality of measures in place to protect against that threat. When the Service was first directed to expand, there were fewer protective measures in place than there are today. Over the past 2 years, an entire "system of systems" has been established for aviation security alone. This system includes not only the expanded Federal Air Marshal Service, but also about 50,000 federal security screeners in the nation's airports, 158 airport security directors, explosives detection equipment for passengers and baggage, requirements for performing background checks on about 1 million airline and airport employees, reinforced cockpit doors on all passenger aircraft, and authorization for pilots to carry guns in the airplane cockpit. Now, as the department assesses the nation's homeland security risks, considers the constraints on federal resources, and sets priorities, it will need to determine the appropriate size of the Service's workforce. It has already begun to make changes in the federal security screener workforce by reducing the total number of full-time screeners by 6,000 in fiscal year 2003 and by planning a further reduction of 3,000 full-time screeners in fiscal year 2004, together with the hiring of part-time screeners to meet daily and seasonal periods of higher demand. 
In announcing the Service’s merger into ICE, the Secretary of Homeland Security did not propose a change in the size of the Service’s or of ICE’s other two law enforcement workforces, but his statement pointed to an expansion of their roles and responsibilities that would give the department greater flexibility to adjust its law enforcement resources according to varying threats. Through cross-training, the Secretary said, thousands more law enforcement agents would be available for deployment on flights, providing a surge capacity during times of increased aviation security threats. At the same time, air marshals may be assigned to other law enforcement duties, as threat information dictates. This planned expansion of the roles and responsibilities of air marshals, immigration agents, and customs agents will pose training challenges for ICE and its component organizations. According to the Secretary’s announcement, the training will be centralized, which could eventually produce some cost efficiencies, but initially a needs assessment will have to be conducted to identify each law enforcement workforce’s additional training requirements. Cross-training requirements and curriculums will also have to be established and approved. Finally, each component organization will have to coordinate the new training requirements with its other mission requirements and schedule its officers for the cross-training. The Service is also likely to face coordination challenges following its transfer from TSA to ICE. In part, the transfer is designed to improve coordination by unifying DHS’s law enforcement functions, but it also divides aviation security responsibilities that, for about 2 years, were under TSA. According to the Secretary, the transfer will facilitate the coordination and sharing of law enforcement information, thereby enhancing aviation security. 
However, TSA has raised questions about how air marshals’ flights will be scheduled, and the TSA Administrator has expressed a desire to influence the scheduling. Immigration agents have reportedly also wondered how ICE would juggle air marshal deployments with the bureau’s current investigative work. Finally, the Service’s transfer to ICE poses administrative challenges for the three component organizations. For example, the planned changes in the roles and responsibilities of the federal law enforcement officers could have implications for their performance evaluations and compensation. Currently, the three groups of law enforcement officers are under different pay systems and are compensated at different rates. Efforts are under way to resolve these challenges. On the basis of our work on mergers and organizational transformations, we identified nine key practices and 21 implementation steps that could assist DHS in successfully merging the roles, responsibilities, and cultures of the Service and the department’s other component organizations. While these practices will ultimately be important to a successful merger and we have previously recommended them for the department, there are three, we believe, that are particularly applicable to the Service, given the concerns about communication and other allegations reported in the media. These three practices emphasize communicating with employees and obtaining and using their feedback to promote continuous improvement. See appendix VI for a complete listing of the practices and implementation steps. One key practice in a merger or transformation is to set implementation goals and a time line to build momentum and show progress from day one. These goals and the time line are essential to pinpoint performance shortfalls and gaps and suggest midcourse corrections. Research indicates that people are at the heart of successful mergers and transformations. 
Thus, seeking and monitoring employee attitudes and taking appropriate follow-up actions is an implementation step that supports this practice. Our work suggests that obtaining employee input through pulse surveys, focus groups, or confidential hotlines can serve as a quick check of how employees are feeling about large-scale changes and the new organization. As discussed in this report, the Service did not initially have such tools in place—in large part because of the enormous demands it faced in recruiting, training, and deploying thousands of air marshals by the Deputy Secretary’s deadline—and it was not monitoring employee attitudes. Furthermore, although monitoring provides good information, it is also important for agency management not only to listen to employees’ concerns but also to take action. If management does not take appropriate follow-up actions, negative attitudes may translate into actions such as employee departures—or, as was the case with the Service, complaints to the media. Identifying cultural features of merging organizations is another important step in setting implementation goals. Cultural similarities between the Service and the other organizations within ICE could facilitate the Service’s merger into ICE. As the Director of the Service and others have noted, air marshals, immigration agents, and customs agents are all law enforcement officers and share a common culture. Moreover, as a spokesperson for ICE pointed out, many air marshals came to the Service from Customs and the Immigration and Naturalization Service; and some other agents served as air marshals temporarily after September 11. Establishing a communication strategy to create shared expectations and report related progress is another key practice in implementing a merger or transformation. According to our work on transformations and mergers, communication is most effective when it occurs early, clearly, and often and when it is downward, upward, and lateral. 
Organizations have found that a key implementation step is to communicate information early and often to build trust among employees as well as an understanding of the purpose of planned changes. As the Service found when modifying its workweek policy to implement LEAP premium pay, the absence of ongoing communication can confuse employees. Two-way communication is also part of this strategy, facilitating a two-way honest exchange with, and allowing for feedback from, employees, customers, and stakeholders. Once this solicited employee feedback is received, it is important to consider and use it to make appropriate changes when implementing a merger or transformation. Involving employees to obtain their ideas and gain their ownership is a third key practice for a successful transformation or merger. Employee involvement strengthens the transformation process by including frontline perspectives and experiences. A key implementation step in this practice is incorporating employee feedback into new policies and procedures. After obtaining sufficient input from key players, the organization needs to develop clear, documented, and transparent policies and procedures. Not having such policies and procedures was an impediment to the Service as it expanded, creating confusion about issues such as transfers and dress codes. DHS said that it fully recognizes the value and importance of communicating with employees and of obtaining and using their feedback to promote continuous improvement. It further noted that as the Service merges into ICE, it is committed to involving employees to obtain their opinions and gain their ownership. The rapid expansion of the Service’s mission and workforce posed significant challenges, many of which the Service has begun to address. In the 2 years that have elapsed since the terrorist attacks of September 11, the Service has deployed thousands of new air marshals on thousands of domestic and international flights. 
During this time, the Service has also established a decentralized organization and begun to integrate its operations with those of a new department. While these accomplishments initially came at some cost, as evidenced by air marshals’ concerns with the Service’s management, the Service has taken steps to provide advanced training, improve scheduling, obtain and use more detailed management information, develop and communicate policies and procedures, and obtain and respond to employee feedback. Continuing these efforts will be important for the Service as it moves forward. Developing and analyzing information on the hours air marshals actually work is key to ensuring that the Service’s scheduling controls are operating as intended. Flying for too many hours can cause fatigue, potentially diminishing air marshals’ alertness and reducing their effectiveness. Capturing detailed, firsthand information on air marshals’ reasons for separation is critical to developing cost-effective strategies for promoting retention and would also allow the Service to identify and analyze the root causes of issues and to address vulnerabilities through changes to its policies, procedures, and training. While retention has not been an issue to date, the cost of recruiting, training, and deploying air marshals is too high to risk separations that could be avoided through better understanding of and attention to air marshals’ concerns. 
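The comparison described here, actual hours worked against scheduled hours, is straightforward to automate once both sets of records exist in machine-readable form. The sketch below is purely illustrative: the record layout, marshal identifiers, and the one-hour tolerance are assumptions made for the example, not the Service's actual data formats or fatigue rules.

```python
# Illustrative sketch: flag days on which actual hours worked exceed
# scheduled hours by more than a tolerance. All field names, identifiers,
# and the tolerance value are assumptions, not Service policy.
def find_overages(scheduled, actual, tolerance=1.0):
    """scheduled and actual are lists of (marshal_id, date, hours) tuples;
    returns cases where actual hours exceed scheduled hours by more than
    the tolerance (in hours)."""
    planned = {(m, d): h for m, d, h in scheduled}
    return [(m, d, planned.get((m, d), 0.0), h)
            for m, d, h in actual
            if h - planned.get((m, d), 0.0) > tolerance]

scheduled = [("FAM-001", "2003-07-01", 8.0), ("FAM-002", "2003-07-01", 8.0)]
actual    = [("FAM-001", "2003-07-01", 10.5), ("FAM-002", "2003-07-01", 8.5)]
print(find_overages(scheduled, actual))
# [('FAM-001', '2003-07-01', 8.0, 10.5)]
```

Run against each pay period, a report of this kind would let managers see whether scheduling controls designed to prevent fatigue are holding in practice, not just on paper.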
We recommend that the Secretary of the Department of Homeland Security direct the Under Secretary for Border and Transportation Security to support the Service’s continued commitment to developing into a high-performing organization by taking the following actions to improve management information and to implement key practices that contribute to successful mergers and organizational transformations:
- Develop an automated method to compare actual hours worked with scheduled hours so that the Service can monitor the effectiveness of its scheduling controls and support its planned long-term study of the effects of flying on air marshals and their aviation security mission.
- Seek and monitor employee attitudes by obtaining detailed, firsthand information on air marshals’ reasons for separation, using such means as confidential, structured exit surveys that will allow management to analyze and address issues that could affect retention, and take appropriate follow-up actions, such as improving training, career development opportunities, and communication.
We provided a draft of this report to DHS for its review and comment. DHS agreed with our report’s information and recommendations and said it welcomes our proposals for practices that it believes will ultimately maximize its ability to protect the American public, contribute to the protection of the nation’s critical infrastructure, and preserve the viability of the aviation industry. DHS also expressed a commitment to continuous improvement as it moves forward, including actions designed to build on the accomplishments the Service has already achieved in expanding its mission and workforce since the terrorist attacks of September 11, 2001. According to DHS, the Service has ongoing activities in several areas, such as continuing to address policy issues and develop procedures and establishing field office mechanisms and groups to discuss employee issues and concerns. We included this information in the final report. 
Additionally, DHS identified references in the draft report to “overscheduling” of air marshals, with an explicit suggestion that such “overscheduling” was among air marshals’ reasons for separating from the Service. We revised the report to avoid this implication, since we had not intended to suggest that air marshals were being overscheduled. Our intent was to point out that without an automated method to compare actual hours worked with scheduled hours, the Service would not readily be able to monitor the effectiveness of its scheduling controls. We also agreed with DHS that there were no data in the Service’s separation information to suggest that “overscheduling” was among air marshals’ reasons for leaving the Service, and we modified the report accordingly. DHS agreed with our recommendation to automate air marshals’ time and attendance data to facilitate comparisons of actual hours worked with scheduled hours and said that the Service was taking steps to implement the recommendation. DHS also agreed that there was a need to improve the quality of the Service’s separation information. In its comments, DHS also emphasized its belief that the Service’s merger with ICE would have a number of significant benefits, particularly from cross-training personnel. DHS noted that after cross-training, the air marshals, as well as personnel in the other ICE components, would have far more law enforcement capability and could supplement each other’s functions during times of heightened threat. Additionally, DHS said that the aviation system would benefit from the concentration and coordination of DHS law enforcement personnel under the direction of a single Assistant Secretary. We discuss these changes in our report by examining them in the context of issues that may arise as the Service merges with other agencies. In addition, we discuss key practices and implementation steps that could be useful in dealing with the changes. 
We note, however, that it is too early to assess any possible benefits or repercussions of the changes. Finally, DHS provided technical clarifications to the report, which we incorporated into the report as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 5 days after the date of this letter. At that time, we will send copies of this report to the Ranking Member, Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform, other interested congressional committees, the Secretary of Homeland Security, the Under Secretary for Border and Transportation Security, the Administrator of the Transportation Security Administration, and the Acting Assistant Secretary of the Bureau of Immigration and Customs Enforcement. This report is also available on GAO's home page at http://www.gao.gov. Please contact Carol Anderson-Guthrie or me at (202) 512-2834 if you have any questions about the report. Key contributors to this report are listed in appendix VII. To address each of our study objectives and research questions, we reviewed and analyzed data and documentation provided by the Federal Air Marshal Service (the Service) on background checks and training; scheduling, mission incidents, employee misconduct, and separation; and several workforce policies and procedures. We also interviewed officials responsible for implementing and operating the Service. Additionally, we used our Standards for Internal Control in the Federal Government, Internal Control Management and Evaluation Tool, Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government, and Model of Strategic Human Capital Management to help assess the Service’s training, management information, and policies and procedures. 
We also reviewed an audit report by the Department of Transportation’s (DOT) Inspector General (IG) on the Federal Air Marshal program. To guide our examination of the Service’s future challenges, we used our Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. To compare the background check procedures for the newly hired air marshals with those used before September 2001, we obtained and reviewed Service documents that described the process and procedures used to apply for a top-secret clearance, as well as for an interim secret clearance waiver. We interviewed officials at the Service’s Human Resource Center in New Jersey who were knowledgeable about the process and were coordinating the Service’s requirements with the responsible Security Management Offices at both the Federal Aviation Administration (FAA) and the Transportation Security Administration (TSA). We also analyzed data provided by the Office of Personnel Management’s (OPM) Investigative Service and had discussions with OPM personnel on the number of clearances processed and the procedures that are used. To determine what changes were made in the training curriculum for the newly hired air marshals, we analyzed documents related to the air marshal training curriculum. In order to identify the curriculum in place before the changes were made, we interviewed air marshals who had been with the Service before September 2001. To understand the Service’s curriculum from September 2001 through July 2003, we evaluated class schedules, training materials, and training data that tracked the completion of coursework and firearms qualification training. We visited the Federal Law Enforcement Training Center in Artesia, New Mexico and the Service’s training center in New Jersey, where we interviewed officials responsible for overseeing the air marshal training program. 
In addition, we interviewed representatives of the Air Line Pilots Association, the Air Transport Association, and current and former air marshals. To determine what management information and policies and procedures the Service had developed to support its expanded mission and workforce, we examined the Service’s automated scheduling system and management information on mission incidents, reported misconduct, and reasons for separation. We analyzed the automated scheduling system data to determine if the current system controls were operating as expected. Additionally, to determine the extent of sick leave use and to address allegations of excessive use, we analyzed the “sick calls” generated from the scheduling system between July and October 2002. We also reviewed and discussed with Service management its policies and procedures for air marshals’ transfers between offices, dress code requirements, and work schedules. To determine how many newly hired air marshals have left the Service and why, we used agency data on the number of air marshals on board, hired, and separated each month; supervisory memorandums summarizing exit interviews; resignation letters; personnel action forms; and the Service’s summary database on separations. Using the summary database, we determined the number of air marshals who separated, by reason, and calculated the percentage of total employees that separated for a specific reason. We discussed the process for collecting these data with agency officials responsible for maintaining the Service’s personnel data from the Service’s Human Resource Center in New Jersey. The Service provided information on the processing and maintenance of its data and on the relationships among its data systems. When we had concerns about the consistency and validity of the data, we asked agency officials to address each concern. 
On the basis of the information provided by the agency and our review, we determined that the required data elements were adequate for the purpose of this work. To gain a basic understanding of the issues surrounding staff decisions to leave the Service, we reviewed the agency’s separation data. For each departed staff member, these data capture only one predominant reason for leaving. To supplement this analysis, we selected 95 cases (36 percent of 264 separation cases) that had some form of documentation, had occurred at various times between January 2002 and March 2003, and had originated at various field offices. For each selected case, we reviewed any available resignation letters, exit interviews, and forms documenting personnel actions. This approach allowed us to conduct a limited quality check of the Service’s data and determine whether reasons outside of those reported by the Service provided a broader view of air marshals’ reasons for leaving the Service. To get a better understanding of the types of misconduct that air marshals have been charged with, we reviewed the electronic spreadsheets that the Service uses to track the status of each case of reported misconduct. The spreadsheets included cases reported between October 2001 and July 2003. We sorted the cases of misconduct by category to determine if a particular category was prevalent. We also spoke with Service management about the adjudication of alleged misconduct and the issues related to the completeness and definition of misconduct measures. To determine the types and frequency of the mission reports submitted by air marshals, we analyzed the database maintained by the Federal Air Marshals’ Mission Operations Control Center. This database contained approximately 1,600 incidents that were reported by air marshals between September 11, 2001, and September 16, 2003. 
We then sorted the incidents into broad categories, including mission-related incidents and incidents that occurred between air marshals and airport or airline personnel. We also received information on the Service’s use and dissemination of the incident data from the Special Agent in Charge of Field Operations. We reviewed the DOT IG’s report on the Federal Air Marshal program as an additional source of information about the Service. This report evaluated various aspects of the Service, including its selection and hiring process and its procedures for properly training and fully qualifying air marshals to respond to incidents aboard aircraft. For one aspect of the report, the IG interviewed 112 air marshals in a one-on-one format at their field office duty stations. The air marshals were not selected for interview using structured or random selection methods. Information obtained through these interviews highlights employee concerns with the Service but is anecdotal and therefore cannot be projected to the universe of the Service’s air marshal workforce.

Appendix IV: Events Affecting the Federal Air Marshal Service, September 2001 through October 2002

The exact number of federal air marshals is classified.

Appendix VI: Key Practices and Implementation Steps for Mergers and Organizational Transformations

Ensure top leadership drives the transformation.
- Define and articulate a succinct and compelling reason for change.
- Balance continued delivery of services with merger and transformation activities.
Establish a coherent mission and integrated strategic goals to guide the transformation.
- Adopt leading practices for results-oriented strategic planning and reporting.
Focus on a key set of principles and priorities at the outset of the transformation.
- Embed core values in every aspect of the organization to reinforce the new culture.
Set implementation goals and a timeline to build momentum and show progress from day one.
- Make public implementation goals and timeline.
- Seek and monitor employee attitudes and take appropriate follow-up actions.
- Identify cultural features of merging organizations to increase understanding of former work environments.
- Attract and retain key talent.
- Establish an organizationwide knowledge and skills inventory to allow knowledge exchange among merging organizations.
Dedicate an implementation team to manage the transformation process.
- Establish networks to support the implementation team.
- Select high-performing team members.
Use the performance management system to define responsibility and assure accountability for change.
- Adopt leading practices to implement effective performance management systems with adequate safeguards.
Establish a communication strategy to create shared expectations and report related progress.
- Communicate early and often to build trust.
- Ensure consistency of message.
- Encourage two-way communication.
- Provide information to meet specific needs of employees.
Involve employees to obtain their ideas and gain ownership for the transformation.
- Use employee teams.
- Involve employees in planning and sharing performance information.
- Incorporate employee feedback into new policies and procedures.
- Delegate authority to appropriate organizational levels.
Build a world-class organization.
- Adopt leading practices to build a world-class organization.

In addition to those named above, Bess Eisenstadt, David Hooper, Kevin Jackson, Maren McAvoy, Minette Richardson, Laura Shumway, Rick Smith, Gladys Toro, and Alwynne Wilber made key contributions to this report.

To help strengthen aviation security after the September 11, 2001, terrorist attacks, the Congress expanded the size and mission of the Federal Air Marshal Service (the Service) and located the Service within the newly created Transportation Security Administration (TSA). Between November 2001 and July 1, 2002, the Service grew from fewer than 50 air marshals to thousands, and its mission expanded to include the protection of domestic as well as international flights. 
In March 2003, the Service, with TSA, merged into the new Department of Homeland Security (DHS); and in November 2003, it was transferred from TSA and merged into DHS's Bureau of Immigration and Customs Enforcement (ICE). GAO looked at operational and management control issues that emerged during the rapid expansion of the Service, specifically addressing its (1) background check procedures and training; (2) management information, policies, and procedures; and (3) challenges likely to result from its mergers into DHS and ICE. To deploy its expanded workforce by July 1, 2002, a deadline set by the Deputy Secretary of Transportation, the Service used expedited procedures to obtain interim secret security clearances for air marshal candidates and provided abbreviated training for them. These procedures allowed candidates with interim clearances to work until they received their final top-secret clearances. Because of a governmentwide demand for clearances, nearly a quarter of the active air marshals had not received their top-secret clearances as of July 2003; but by October 2003, only about 3 percent were awaiting their top-secret clearances. To train its expanded workforce before the Deputy Secretary's deployment deadline, the Service incrementally revised and abbreviated its curriculum. The Service has begun to develop management information, policies, and procedures to support its expanded workforce and mission, but it has not yet completed this major effort. For example, it replaced a manual system for scheduling flight duty with an automated system, but it has not yet developed an automated means to monitor the effectiveness of its scheduling controls designed to prevent air marshals' fatigue. In addition, it has gathered and used information on potential security incidents and on air marshals' reasons for separation from the Service to improve its operations and workforce management. 
However, some of this information is not clear or detailed enough to facilitate follow-up. Finally, the Service has implemented policies needed to support its expansion. The Service is likely to face challenges in implementing changes resulting from its mergers into DHS and ICE, including changes to its roles, responsibilities, and training and to its procedures for coordinating with TSA's security organizations, as well as administrative changes. GAO's recent work on mergers and organizational transformations proposes several key practices—set implementation goals, establish a communication strategy, and involve employees to obtain their ideas—and associated implementation steps that could help the Service implement such changes. |
DOD is subject to various laws, dating back to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) as amended by the Superfund Amendments and Reauthorization Act (SARA) of 1986, that govern remediation (cleanup) of contamination on military installations. DOD must also follow federal accounting standards that establish requirements for DOD to recognize and report the estimated costs for the cleanup of training ranges in the United States and its territories. Increasing public concern about potential health threats has affected not only the present operations of these training ranges but also the management, cleanup, and control of training range land that has been, or is in the process of being, transferred to other agencies and public hands. DOD defines a range as any land mass or water body that is used or was used for conducting training, research, development, testing, or evaluation of military munitions or explosives. DOD classifies its ranges into the following five types:
- Active ranges are currently in operation, construction, maintenance, renovation, or reconfiguration to meet current DOD component training requirements and are being regularly used for range activities. Examples include ranges used for testing and practice with bombs, missiles, mortars, hand grenades, and artillery.
- Inactive ranges are not currently being used as active ranges. However, they are under DOD control, are considered by the military to be potential active range areas in the future, and have not been put to a new use incompatible with range activities.
- Closed ranges have been taken out of service and are still under DOD control, but DOD has decided that they will not be used for training range activities again. 
- Transferred ranges have been transferred to non-DOD entities, such as other federal agencies, state and local governments, and private parties, and are usually associated with the formerly used defense sites program.
- Transferring ranges are in the process of being transferred or leased to other non-DOD entities and are usually associated with the base realignment and closure program.

Congress addressed environmental contamination at federal facilities under SARA in 1986. This legislation established, among other provisions, the Defense Environmental Restoration Program and the Defense Environmental Restoration Account as DOD’s funding source under the Act. The goals of the Defense Environmental Restoration Program include (1) identification, investigation, research and development, and cleanup of contamination from hazardous substances, pollutants, and contaminants and (2) correction of other environmental damage, such as detection and disposal of unexploded ordnance, that creates an imminent and substantial danger to the public health or welfare or to the environment. The Office of the Deputy Under Secretary of Defense for Environmental Security (DUSD(ES)) was created in 1993. That office has overall responsibility for environmental cleanup within DOD and includes the Office of Environmental Cleanup, which manages the Defense Environmental Restoration Program. Carrying out any remediation or removal actions under applicable environmental laws, including SARA, would likely require the immediate or future expenditure of funds. Federal accounting standards determine how those expenditures are accounted for and reported. The Chief Financial Officers Act of 1990, as expanded by the Government Management and Reform Act of 1994, requires that major federal agencies, including DOD, prepare and submit annual audited financial statements to account for, among other things, their liabilities. 
Two federal accounting standards, Statement of Federal Financial Accounting Standards (SFFAS) Nos. 5 and 6, establish the criteria for recognizing and reporting liabilities in the annual financial statements, including environmental liabilities. SFFAS No. 5, Accounting for Liabilities of the Federal Government, defines liability as a probable future outflow of resources due to a past government transaction or event. SFFAS No. 5 further states that recognition of a liability in the financial statements is required if it is both probable and measurable. Effective in 1997, SFFAS No. 5 defines probable as that which is more likely than not to occur (for example, greater than a 50 percent chance) based on current facts and circumstances. It also states that a future outflow is measurable if it can be reasonably estimated. The statement recognizes that this estimate may not be precise and, in such cases, it provides for recognizing the lowest estimate of a range of estimates if no amount within the range is better than any other amount. SFFAS No. 6, Accounting for Property, Plant, and Equipment, further defines cleanup costs as costs for removal and disposal of hazardous wastes or materials that because of quantity, concentration, or physical or chemical makeup may pose a serious present or potential hazard to human health or the environment. The Office of the Under Secretary of Defense (Comptroller) issues the DOD Financial Management Regulation containing DOD’s policies and procedures in the area of financial management, which require the reporting of environmental liabilities associated with the cleanup of closed, transferred, and transferring ranges in the financial statements. DOD has taken the position that the cleanup of these ranges is probable and measurable and as such should be reported as a liability in its financial statements. 
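The SFFAS No. 5 recognition test described above reduces to a short decision rule. The sketch below is only an illustration of the standard's stated logic, not accounting guidance; the probability and dollar figures are hypothetical.

```python
# Sketch of the SFFAS No. 5 liability recognition test; figures are
# hypothetical and this is an illustration, not accounting guidance.
def recognized_liability(probability, estimates, best_estimate=None):
    """Recognize a liability only if the future outflow is probable
    (more likely than not, i.e., greater than a 50 percent chance) and
    measurable (at least one estimate exists). If only a range of
    estimates is available and no amount within the range is better than
    any other, recognize the lowest amount in the range."""
    if probability <= 0.5 or not estimates:
        return None                  # not probable, or not measurable
    if best_estimate is not None:
        return best_estimate         # a best estimate within the range
    return min(estimates)            # otherwise the low end of the range

print(recognized_liability(0.7, [14e9, 100e9]))  # low end of the range
print(recognized_liability(0.4, [14e9, 100e9]))  # not probable: None
```

Under this rule, a wide range of cleanup estimates with no best point estimate leads to recognizing the low end, which is one reason a reported liability can sit far below the high end of the plausible range.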
Under the presumption that active and inactive ranges will operate or be available to operate indefinitely, the DOD Financial Management Regulation does not specify when or if liabilities should be recognized in the financial statements for these ranges. The Senate Report accompanying the National Defense Authorization Act for Fiscal Year 2000 directed DOD to provide a report to the congressional defense committees, no later than March 1, 2001, that gives a complete estimate of the current and projected costs for all unexploded ordnance remediation. As of March 30, 2001, DOD had not issued its report. For the purposes of the March 2001 report, DOD officials had stated that they would estimate cleanup costs for active and inactive training ranges just as they would for closed, transferred, and transferring ranges. Thus, the cleanup costs shown in this report would have been significantly higher than the training range liabilities reported in the financial statements, which only include estimates for closed, transferred, and transferring ranges. However, in commenting on a draft of our report, DOD officials informed us that they would not be reporting the cleanup costs of active and inactive training ranges in their March report. As DOD downsizing and base closures have increased in recent years, large numbers of military properties have been, and are continuing to be, turned over to non-DOD ownership and control, resulting in the public being put at greater risk. DOD uses a risk-based approach when transferring ranges from its control to reduce threats to human health and the environment. DOD attempts to mitigate risk to human health on transferred and transferring ranges. In instances where DOD has not removed, contained, and/or disposed of unexploded ordnance and constituent contamination from training ranges prior to transfer, it implements institutional controls to restrict access to transferring ranges and to transferred ranges where risks are found. 
Institutional controls include implementing community education and awareness programs, erecting fences or barriers to control access, and posting signs warning of the dangers associated with the range. Figure 1 shows signs posted at Fort McClellan, Alabama, warning of unexploded ordnance. Fort McClellan has been designated for closure under the base realignment and closure program and, as such, is in the process of transferring base properties out of DOD control. DOD officials have estimated that approximately 16 million acres of potentially contaminated training ranges have been transferred to the public or other agencies. The risk to the public was further discussed by an Environmental Protection Agency (EPA) official in a letter dated April 22, 1999, to DUSD(ES). The EPA official cautioned that many training ranges known or suspected to contain unexploded ordnance and other hazardous constituents have already been transferred from DOD control, and many more are in the process of being transferred, and the risks from many of these have not been adequately assessed. The letter went on to state that risks correspondingly increase as ranges that were once remote are encroached by development or as the public increases its use of these properties. An example of the development of sites adjacent to training ranges is the planned construction of two schools and a stadium by the Cherry Creek School District adjacent to the Lowry Bombing Range, a transferred range, near Denver. Construction is expected to begin in May 2001. Most training range contamination is a result of weapons systems testing and troop training activities conducted by the military services. Unexploded ordnance consists of many types of munitions, including hand grenades, rockets, guided missiles, projectiles, mortars, rifle grenades, and bombs. Figure 2 shows examples of some of the typical unexploded ordnance that has been removed from training ranges. 
Risks from this unexploded ordnance can encompass a wide range of possible outcomes, including bodily injury or death, health risks associated with exposure to chemical agents, and environmental degradation caused by the actual explosion and dispersal of chemicals or other hazardous materials to the air, soil, surface water, and groundwater. For example, according to an EPA report, EPA surveyed 61 current or former DOD facilities containing 203 inactive, closed, transferred, and transferring ranges and identified unexploded ordnance "incidents" at 24 facilities. These incidents included five accidental explosions, which resulted in two injuries and three fatalities. According to an EPA official, the three fatalities identified in its limited survey were two civilian DOD contractors and one military service member. Although DOD reported its unexploded ordnance cleanup liability on training ranges at about $14 billion in its fiscal year 2000 agencywide financial statements, it is likely that the financial statements are substantially understated. Further, significant cleanup costs will not be included in the planned March 2001 report. DOD officials and Members of Congress have expressed concern over the potential liability the government may be faced with but are still uncertain how large the liability may be. Various estimates have shown that cleanup of closed, transferred, and transferring training ranges could exceed $100 billion. For example: In preparation for DOD's planned issuance of the Range Rule, DOD began an analysis of the potential costs that may be incurred if the Rule was implemented. The Rule was intended to provide guidance to perform inventories and provide cleanup procedures at closed, transferred, and transferring ranges. The Rule was withdrawn in November 2000 and the cost analysis was never formally completed. 
However, a senior DOD official said that initial estimates in the cost analysis that was developed in 2000 put the cleanup costs of training ranges at about $40 billion to $140 billion for closed, transferred, and transferring training ranges. DOD estimated that its potential liability for cleanup of unexploded ordnance might exceed $100 billion as noted in a conference report to the National Defense Authorization Act for Fiscal Year 2001 (Report 106-945, October 6, 2000). DOD will not respond fully to the Senate Report request for reporting the costs of cleaning up unexploded ordnance on its training ranges. DOD officials informed us that due to time constraints, the training range liability to be reported in the March 2001 report would not be complete or comprehensive because the required information could not be collected in time for analysis and reporting. A DUSD(ES) official said that the March 2001 report will include a discussion of the limitations and omissions. DOD officials stated that they have deferred the collection and analysis of key data elements. Some of the items that were excluded are the costs to clean up the soil and groundwater resulting from unexploded ordnance and constituent contamination. These omitted costs could be significant. Further, the March 2001 report will not include information on water ranges. DOD’s 1996 Regulatory Impact Analysis reported that DOD had approximately 161 million acres of water training ranges, almost 10 times the size of the estimated closed, transferred, and transferring land ranges. In commenting on a draft of this report, DOD stated that the 161 million acres of water ranges are active training ranges, the majority of which are open-ocean, deep water, restricted access areas and most are outside the territorial waters of the United States. DOD also stated that the majority of water ranges are not likely to cause an imminent and substantial danger to public health and safety or the environment. 
However, until a complete and accurate inventory is performed, DOD will be unable to determine whether some water ranges meet the reporting requirement of SFFAS No. 5 and, thus, must be reported in the financial statements. The DOD Comptroller has revised the DOD Financial Management Regulation to clarify DOD's fiscal year 2000 financial statement reporting requirements for training range cleanup costs. The revision includes guidance that requires the reporting of the cleanup costs of closed, transferred, and transferring ranges as liabilities in the financial statements. DOD has indicated that the costs to clean up these training ranges are probable and measurable and as such should be reported as a liability in the financial statements. We concur with DOD that these costs should be reported in the financial statements as liabilities because they are probable and measurable. Specifically, they are probable because DOD is legally responsible for cleaning up closed, transferred, and transferring ranges, which were contaminated as a result of past DOD action. For example, under SARA, DOD is responsible for the cleanup of sites that create an imminent and substantial danger to public health and safety or the environment. In addition, these training range cleanup efforts are measurable. DOD has prior experience in training range cleanup under the formerly used defense sites program and has used this experience to develop a methodology to estimate future cleanup costs. However, as explained later in this report, DOD has not based its reported financial statement liability for cleanup of these ranges on a complete inventory or consistent cost methodology, resulting in estimates that range from $14 billion to over $100 billion. In addition, we believe that certain active and inactive sites may have contamination that should also be recorded as a liability in the financial statements because these sites meet criteria in federal accounting standards for recording a liability. 
The DOD Financial Management Regulation does not include instructions for recognizing a liability for training range cleanup costs on active and inactive ranges in the financial statements. Although cleanup of active and inactive ranges would not generally be recognized as a liability in the financial statements, there are circumstances when an environmental liability should be recognized and reported for these ranges. A liability should be recognized on active and inactive ranges if the contamination is government related, the government is legally liable, and the cost associated with the cleanup efforts is measurable. For example, contaminants from an active training range at the Massachusetts Military Reservation threaten the aquifer that produces drinking water for nearby communities. The problem was so severe that in January 2000, EPA issued an administrative order under the Safe Drinking Water Act requiring DOD to clean up several areas of the training range. According to a DOD official, the cleanup effort could cost almost $300 million. As a result, the cleanup of this contamination is probable (since it is legally required) and measurable. Thus, this liability should be recognized in the financial statements under SFFAS No. 5. Although DOD and the services have collected information on other environmental contamination under the Defense Environmental Restoration Program for years, they have not performed complete inventories of training ranges to identify the types and extent of contamination present. To accurately compute the training range liabilities, the military services must first perform in-depth inventories of all of their training ranges. Past data collection efforts were delayed because the services were waiting for the promulgation of the Range Rule, which has been withdrawn. DOD recently began collecting training range data to meet the reporting requirements for the Senate Report. 
However, as stated previously, DOD has limited its data collection efforts and will not be reporting on the cleanup of water ranges or the unexploded ordnance constituent contamination of the soil and water. The Army, under direction from DUSD(ES), proposed guidance for the identification of closed, transferred, and transferring ranges with the preparation and attempted promulgation of the Range Rule. In anticipation of the Range Rule, DOD prepared a Regulatory Impact Analysis report in 1996, recognizing that the cleanup of its closed, transferred, and transferring training ranges was needed and that the cleanup costs could run into the tens of billions of dollars. To address inventories of its active and inactive ranges, DOD issued Directive 4715.11 for ranges within the United States and Directive 4715.12 for ranges outside the United States in August 1999. These directives required that the services establish and maintain inventories of their ranges and establish and implement procedures to assess the environmental impact of munitions use on DOD ranges. However, the directives neither established the guidance necessary to inventory the ranges nor set any completion dates. Although the directives assigned responsibility for developing guidance to perform the inventories, DOD has not developed the necessary guidance specifying how to gather the inventory information or how to maintain inventories of the active and inactive training ranges. Since fiscal year 1997, federal accounting standards have required the recognition and reporting of cleanup costs, as mentioned earlier. However, DOD did not report costs for cleaning up closed, transferred, and transferring training ranges until the services estimated and reported the training range cleanup costs in DOD's agencywide financial statements for fiscal year 1999. 
Agencywide financial statements are prepared in accordance with the DOD Financial Management Regulation, which is issued by the DOD Comptroller and incorporates Office of Management and Budget guidance on form and content of financial statements. In an attempt to comply with the mandates in the Senate Report, DOD embarked on a special effort to collect training range data necessary to estimate potential cleanup costs. The Senate Report directed DOD to report all known projected unexploded ordnance remediation costs, including training ranges, by March 1, 2001, and to report subsequent updates in the Defense Environmental Restoration Program annual report to Congress. While the Senate Report did not expressly direct DOD to identify an inventory of training ranges at active facilities, installations subject to base realignment and closure, and formerly used defense sites, the data necessary to fully estimate costs of unexploded ordnance— normally located on training ranges—could only be attained in conjunction with the performance of a complete and accurate inventory that includes training ranges. Although the Senate Report’s directives were dated May 1999, DOD did not provide formal guidance to the services for collecting training range data until October 2000—17 months later. As a first step in February 2000, the Under Secretary of Defense for Acquisition, Technology, and Logistics assigned the responsibility to the Office of the Director of Defense Research and Engineering, in coordination with DUSD(ES), for obtaining the range data and preparing the report. On October 23, 2000, DUSD(ES) issued specific guidance to the military services instructing them to gather range information and detailing some of the specific information needed. 
Although DOD instituted an Unexploded Ordnance Inventory Working Group in March 2000 to work with the services to develop specific guidance, service officials told us that DOD had not clearly told them what was required or when it was required until shortly before the official tasking was issued on October 23, 2000. Once officially tasked to gather range information, the services were given until January 5, 2001, to gather and provide it to DOD for analysis by a DOD contractor. Lacking specific guidance from DOD to inventory their ranges, but recognizing that they would eventually be tasked to gather range information in anticipation of the Range Rule or for the Senate Report, each of the services developed its own survey questionnaires to begin gathering range information before the formal guidance was issued. The Navy took a proactive approach and began developing a questionnaire in late 1999. The questionnaire was issued to the Navy commands in December 1999. The Army and the Air Force also developed their own questionnaires and issued them in September 2000. Because the formal guidance was issued after the services had begun their initial data collection, the services had to collect additional data from their respective units or other sources. According to DOD officials, the training range inventory information gathered from these questionnaires for the March 2001 report will also be used in the future as a basis for financial statement reporting. Although the scope of ranges in the United States and its territories is not fully known—because DOD does not have a complete inventory of training ranges—DOD estimates that over 16 million acres of land on closed, transferred, and transferring ranges are potentially contaminated with unexploded ordnance. DOD also estimates that it has about 1,500 contaminated sites. Many former military range sites were transferred to other federal agencies and private parties. 
Training ranges must be identified and investigated to determine the type and extent of contamination present, risk assessments performed, cleanup plans developed, and permits obtained before the actual cleanup is begun. These precleanup activities can be very expensive. For example, the Navy estimates that these investigative costs alone are as much as $3.96 million per site. Identifying the complete universe of current and former training ranges is a difficult task. Ranges on existing military bases are more easily identifiable and accessible. More problematic, however, are ranges that were in existence decades ago and have since been transferred to other agencies or the public, for which records of the ranges' existence or the ordnance used cannot always be found. Special investigative efforts may be necessary to identify those locations and the ordnance used. In preparing for World War I and World War II, many areas of the country were used as training ranges. In some instances, documentation on the location of and/or the types of ordnance used on these ranges is incomplete or cannot be found. For example, unexploded ordnance was unexpectedly found by a hiker in 1999 at Camp Hale in Colorado, a site used for mountain training during World War II and since transferred to the U.S. Forest Service. Because additional live rifle grenades were found in 2000, the Forest Service has closed thousands of acres of this forest to public use pending further action. This site also serves as an example of the difficulty in identifying and cleaning up unexploded ordnance in rough mountain terrain and dense ground cover. In addition to not having an accurate and complete inventory of its training ranges, DOD has only recently focused on developing a consistent methodology for estimating training range cleanup costs. However, DOD is using different methodologies for estimating cleanup costs for the annual financial statements and the March 2001 report. 
While DOD is using a standard methodology for estimating and reporting its cleanup costs for the March 2001 report, that methodology was not used to estimate the training range cleanup costs for the fiscal year 2000 financial statements. In addition, each of the services is using a different methodology for calculating cleanup cost estimates for reporting its liabilities in the financial statements. Without a consistent methodology, cleanup costs reported in the financial statements and other reports will not be comparable and will have limited value to management when evaluating cleanup costs of each of the services' training ranges and budgeting for the future. Because the military services do not apply a consistent cost methodology to compute the liabilities for their financial statements, any comparison among the training range liabilities across the services will not be meaningful. DOD is reporting a liability of about $14 billion for fiscal year 2000 for cleaning up closed, transferred, and transferring training ranges. Of the $14 billion, the Navy is reporting a liability of $53.6 million. The Navy, based on limited surveys completed in 1995 through 1997, estimated the number and size of its training ranges and applied a $10,000-an-acre cleanup cost factor to compute its liability. The Navy based its estimates on the assumption of cleaning up its closed, transferred, and transferring ranges to a "low" cleanup/remediation level. The low cleanup/remediation level means that the training ranges would be classified as "limited public access" and be used for things such as livestock grazing or wildlife preservation, but not for human habitation. The Army recognized the largest training range cleanup liability for fiscal year 2000. It reported a $13.1 billion liability for cleaning up closed, transferred, and transferring ranges. 
The $13.1 billion consisted of $8 billion to clean up transferred ranges, $4.9 billion for the cleanup of closed ranges, and $231 million for the cleanup of transferring ranges. The Army used an unvalidated cost model to compute the $8 billion cost of cleaning up transferred ranges and used a different cost methodology for estimating the $4.9 billion for closed ranges. The Air Force reported a liability of $829 million for both fiscal years 1999 and 2000 based on a 1997 estimate of 42 closed ranges, using a historical cost basis for estimating its liability. According to DOD officials, DOD has standardized its methodology for estimating and reporting the unexploded ordnance cleanup costs that will be reported in the March 2001 report. DOD's cost model used to compute the unexploded ordnance cleanup costs from its training ranges has not been validated. The original cost model was initially developed by the Air Force in 1991 and has been used by government agencies and the private sector to estimate other environmental cleanup costs not associated with training range cleanup. A new module was recently added to the cost model to estimate costs for removing unexploded ordnance and its constituents from former training ranges. The new module uses cost data developed by the U.S. Army Corps of Engineers from past experiences in cleaning up training ranges on formerly used defense sites. DOD officials told us that they believe that this model is the best one available to compute the cleanup costs. However, the assumptions and cost factors used in the model were not independently validated to ensure accurate and reliable estimates. DOD Instruction 5000.61 requires that cost models such as this be validated to ensure that the results produced can be relied upon. We did not evaluate this model, but we were informed that DOD is in the process of developing and issuing a contract to have this model validated. 
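As a rough cross-check of the figures discussed above (a back-of-the-envelope illustration, not an official DOD computation), the service-reported liabilities do sum to approximately the $14 billion reported in DOD's agencywide financial statements:

```python
# Fiscal year 2000 training range cleanup liabilities cited in this report.
army = 8.0e9 + 4.9e9 + 231e6   # transferred + closed + transferring ranges
navy = 53.6e6
air_force = 829e6

total = army + navy + air_force
print(f"${total / 1e9:.2f} billion")  # $14.01 billion, i.e., about $14 billion

# The Navy's $53.6 million liability at its $10,000-per-acre cost factor
# implies an assumed cleanup footprint of roughly 5,360 acres.
print(navy / 10_000)  # 5360.0
```

The small implied Navy acreage, compared with DOD's estimate of over 16 million potentially contaminated acres on closed, transferred, and transferring ranges, illustrates why the reported liability is likely to be substantially understated.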
A DOD official also informed us that DOD is currently considering requiring that the cost model be used as a standard for the military services' valuation of their cleanup cost estimates used to report liabilities in the financial statements. Until DOD standardizes and validates the cost methodology used to estimate and report training range cleanup costs and requires its use DOD-wide, it has no assurance that the military services will compute their cleanup costs using the same methodology. As a result, the services will in all probability continue to produce unreliable and differing estimates for their various reporting requirements. DOD lacks leadership in reporting on the cleanup costs of training ranges. DUSD(ES) was created in 1993 as the office responsible for environmental cleanup within DOD. However, this office has focused its principal efforts on the cleanup of other types of environmental contamination, not unexploded ordnance. Although requirements for reporting a training range environmental liability have existed for years, DOD has not established adequate or consistent policies to reliably develop the cost of the cleanup of training ranges and to oversee these costing efforts. Similar to the problems noted previously in this report concerning the inventory delays and lack of guidance, the Defense Science Board, in 1998, reported that DOD had not met its management responsibility for unexploded ordnance cleanup. It reported that there were no specific DOD-wide unexploded ordnance cleanup goals, objectives, or management plans. The report went on to say that unexploded ordnance cleanup decisions are made within the individual services, where remediation requirements are forced to compete against traditional warfighting and toxic waste cleanup requirements. 
This competition has resulted in unexploded ordnance cleanup efforts being relegated to “house-keeping duties” at the activity or installation level, according to the Board’s report. To address DOD’s unmet management responsibilities for unexploded ordnance cleanup, the Defense Science Board recommended the establishment of an Office of Secretary of Defense focal point for oversight of unexploded ordnance cleanup activities within DOD. This recommendation was made even though DUSD(ES) had overall responsibility for environmental cleanup under the Defense Environmental Restoration Program. According to the Director of DOD’s Environmental Cleanup Program, a single focal point for managing the cleanup of unexploded ordnance has still not been formally designated. A focal point with the appropriate authority could be a single point of contact who could manage and oversee the development of a complete and accurate training range inventory, the development of a consistent cost methodology across all services, and the reporting of the training range liability for the financial statements and other required reports. The Department of Energy (DOE) has been successful in its identification and reporting of thousands of environmentally contaminated sites, with cleanup liabilities reported at $234 billion in fiscal year 2000. Initially, in the early 1990s, DOE was unable to report the estimated cleanup costs. However, through substantial effort and support of DOE leadership, DOE was able to receive a clean, or unqualified, audit opinion, for its fiscal year 1999 and 2000 financial statements. DOE’s efforts provide a useful example to DOD in its efforts to identify and report cost estimates on its contaminated sites. After 50 years of U.S. production of nuclear weapons, DOE was tasked with managing the largest environmental cleanup program in the world. DOE has identified approximately 10,500 release sites from which contaminants could migrate into the environment. 
DOE has made substantial progress in defining the technical scope, schedules, and costs of meeting this challenge, and in creating a plan to undertake it. DOE officials told us that in order to build a reliable database and management program for contaminated sites, the process requires a significant investment of time and manpower. DOE officials stated that they began their data collection and management program process in the early 1990s and are continuing to build and update their database. However, they emphasized that their efforts, similar to DOD's current efforts, started with an initial data call to collect preliminary information to identify the sites. They said the next step involved sending teams to each of the sites to actually visit and observe the site, sometimes taking initial samples, to further identify and confirm the contaminants, and to help assess the risk associated with the site contamination. The information gathered was entered into a central database in 1997 to be used for management and reporting purposes. In 1999, DOE completed entering baseline data for all known cleanup sites. In addition to the above steps, once a site was selected for cleanup, a much more involved process was undertaken to further test for and remove the contaminants. However, until a site is fully cleaned up, the site and its cost estimates are reviewed annually, and any changes in conditions are recorded in the central database. DOE officials told us that in addition to providing the necessary leadership and guidance to inventory and manage their sites, another key to this success was establishing a very close working relationship between the program office and the financial reporting office to ensure consistent and accurate reporting of their cleanup liabilities. 
As military land, including training ranges, is transferred to the public domain, the public must have confidence that DOD has the necessary leadership and information to address human health and environmental risks associated with training range cleanup. Also, the Congress needs related cost information to make funding decisions. DOD's recent efforts to develop the information needed to report training range cleanup costs for the required March 2001 report represent an important first step in gathering the needed data. However, accurate and complete reporting can only be achieved if DOD compiles detailed inventory information on all of its training ranges and uses a consistent and valid cost methodology. Because of the complexity of the data gathering process and the many issues involved in the cleanup of training ranges, top management leadership and focus are essential. A senior-level official with appropriate management authority and resources is key to effectively leading these efforts to produce meaningful and accurate reports on training range cleanup costs. We recommend that the Secretary of Defense designate a focal point with the appropriate authority to oversee and manage the reporting of training range liabilities. We also recommend that the Secretary of Defense require the designated focal point to work with the appropriate DOD organizations to develop and implement guidance for inventorying all types of training ranges, including active, inactive, closed, transferred, and transferring training ranges. 
We recommend that this guidance, at a minimum, include the following requirements: that key site characterization information for training ranges be collected for unexploded ordnance removal; that other constituent contamination in the soil and/or water be identified; that performance time frames be established, including the requirement to perform the necessary site visits to confirm the type and extent of contamination; and that the necessary policies and procedures for the management and maintenance of the inventory information be developed. We further recommend that the Secretary of Defense require the designated focal point to work with the appropriate DOD organizations to develop and implement a consistent and standardized methodology for estimating training range cleanup costs to be used in reporting its training range cleanup liabilities in DOD's agency-wide annual financial statements and other reports as required. In addition, we recommend that the Secretary of Defense require that the designated focal point validate the cost model in accordance with DOD Instruction 5000.61. Further, we recommend that the Secretary of Defense require the DOD Comptroller to revise the DOD Financial Management Regulation to include guidance for recognizing and reporting a liability in the financial statements for the cleanup costs on active and inactive ranges when such costs meet the criteria for a liability found in the federal accounting standards. In commenting on a draft of this report, DOD stated that it has made significant progress in estimating and reporting environmental liabilities on its financial statements; however, much work remains to be done. DOD's response also indicated that as the department increases its knowledge related to this area, the appropriate financial and functional policies will be updated to incorporate more specific guidance for recognizing and reporting environmental liabilities. 
DOD concurred with our recommendations, but provided several comments in response to our recommendation that the Secretary of Defense require the DOD Comptroller to revise the DOD Financial Management Regulation to include guidance for recognizing and reporting a liability in the financial statements for the cleanup costs on active and inactive ranges when such costs meet the criteria for a liability. DOD stated that it revised Volume 6B, Chapter 10, of the DOD Financial Management Regulation to clarify instances when a liability should be recognized for an active or inactive range on an active installation. However, this revision of the DOD Financial Management Regulation does not address the recognition of an environmental liability at active and inactive ranges in accordance with the criteria of SFFAS No. 5. For example, as stated in our report, the total $300 million cleanup cost estimate on the active range at the Massachusetts Military Reservation should be recognized as a liability in accordance with the criteria in SFFAS No. 5. DOD further stated that since it intends to continue to use its active and inactive ranges in the foreseeable future, the removal of ordnance to maintain safety and usability is considered an ongoing maintenance expense. DOD stated that this expense is not accrued as a liability except in those few specific instances in which an environmental response action—beyond what is necessary to keep the range in operation—is probable and the costs of such a response are measurable. Although this position is consistent with SFFAS No. 5, it is not specifically indicated in the DOD Financial Management Regulation. Finally, DOD stated that as the Department gains additional experience in this area, it will review appropriate chapters in the DOD Financial Management Regulation to determine what, if any, additional specific guidance may need to be included regarding recognizing and reporting liabilities. 
While we agree that such a review is appropriate, we continue to recommend that the DOD Financial Management Regulation be revised to include guidance in those instances when active and inactive ranges meet the criteria in SFFAS No. 5. DOD also provided several technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to the Honorable John Spratt, Ranking Minority Member, House Committee on the Budget, and to other interested congressional committees. We are also sending copies to the Honorable Donald H. Rumsfeld, Secretary of Defense; the Honorable David R. Oliver, Acting Under Secretary of Defense for Acquisition, Technology, and Logistics; and the Honorable Mitchell E. Daniels, Jr., Director of the Office of Management and Budget. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you or your staff have any questions about this report. Other GAO contacts and key contributors to this report are listed in appendix III. Our objectives were to review DOD’s ongoing efforts to (1) gather and collect information on its training ranges and issues affecting the successful completion of the inventory and (2) recognize environmental liabilities associated with the cleanup of unexploded ordnance from its training ranges, including DOD’s efforts to develop and implement a methodology to develop cost estimates. The focus of our review was on DOD efforts to gather and collect information on its training ranges and the environmental costs associated with the cleanup of the training ranges. As a result, other sites containing unexploded ordnance were not included in the scope of our review. These sites include munitions manufacturing facilities, munitions burial pits, and open burn and open detonation sites used to destroy excess, obsolete, or unserviceable munitions. 
To accomplish these objectives, we: reviewed relevant standards and guidance applicable to environmental liabilities including Statement of Federal Financial Accounting Standards (SFFAS) No. 5, Accounting for Liabilities of the Federal Government; SFFAS No. 6, Accounting for Property, Plant, and Equipment; and DOD Financial Management Regulation, Volume 6B, Chapter 10, and Volume 4, Chapters 13 and 14; reviewed DOD guidance to the military services for performing the training range inventory survey; reviewed the military services’ survey documents used to collect information on training ranges; interviewed officials from the Deputy Under Secretary of Defense for Environmental Security (DUSD(ES)); Director Defense Research and Engineering; U.S. Army Corps of Engineers; and the Army, Navy, and Air Force involved in planning and conducting the data collection efforts and analyzing the data; interviewed an official from the Office of the Under Secretary of Defense (Comptroller); interviewed officials from the U.S. Environmental Protection Agency; interviewed environmental officials from the states of Colorado and Alabama; interviewed officials from the Department of Energy; interviewed the contractor selected by DOD, which assisted in planning and analyzing the data and preparing the cost analysis for the March 2001 report; and visited two locations—Lowry Bombing Range, Denver, and Ft. McClellan, Anniston, Alabama—to gain insight into the complexities involved in estimating liabilities for training range cleanup. We did not audit DOD’s financial statements and therefore we do not express an opinion on any of DOD’s environmental liability estimates for fiscal year 1999 or 2000. We conducted our work in accordance with generally accepted government auditing standards from May 2000 through March 2001. 
On March 29, 2001, DOD provided us with written comments on our recommendations, which are discussed in the “Agency Comments and Our Evaluation” section and are reprinted in appendix II. DOD also provided comments on several other matters, which we have incorporated in the report as appropriate but have not reprinted. Staff making key contributions to this report were Paul Begnaud, Roger Corrado, Francine DelVecchio, and Stephen Donahue. | Because of concerns about the long-term budgetary implications associated with the environmental cleanup of the Department of Defense (DOD) training ranges, GAO examined (1) the potential magnitude of the cost to clean up these ranges in compliance with applicable laws and regulations, (2) the scope and reliability of DOD's training range inventory, and (3) the methodologies used to develop cost estimates. GAO found that DOD lacks complete and accurate data with which to estimate training range cleanup costs. 
DOD has not done a complete inventory of its ranges to fully identify the types and extent of unexploded ordnance present and the associated contamination. Recently, DOD began to compile training range data, but these initial efforts have been delayed because DOD did not issue formal guidance to the services for collecting the information until October 2000. Because DOD has not completed an inventory of its ranges, the services have used varying methods to estimate the size and condition of the ranges necessary to estimate the cost of cleanup for financial statement purposes. As a result, environmental liability costs are not consistently calculated and reported across the services. |
Congress established FHA in 1934 under the National Housing Act (P.L. 73-479) to broaden homeownership, shore up and protect lending institutions, and stimulate employment in the building industry. FHA’s single-family program insures private lenders against losses (up to almost 100 percent of the loan amount) from borrower defaults on mortgages that meet FHA criteria. In 2004, more than three-quarters of the loans that FHA insured went to first-time homebuyers, and more than one-third of these loans went to minorities. From 2001 through 2005, FHA insured about 5 million mortgages with a total value of about $590 billion. However, FHA’s loan volume fell sharply over that period, and in 2005 FHA-insured loans accounted for less than 4 percent of the single-family mortgage market, compared with about 13 percent a decade ago. Additionally, default rates for FHA-insured mortgages have risen steeply over the past several years, a period during which home prices have appreciated rapidly and default rates for conventional and VA-guaranteed mortgages have been relatively stable. FHA determines the expected cost of its insurance program, known as the credit subsidy cost, by estimating the program’s future performance. Similar to other agencies, FHA is required to reestimate credit subsidy costs annually to reflect actual loan performance and expected changes in estimates of future loan performance. FHA’s mortgage insurance program is currently a negative subsidy program, meaning that the present value of estimated cash inflows to the Fund exceeds the present value of estimated cash outflows. FHA has estimated that the loans it expects to insure in 2007 will have a subsidy rate of -0.37, a rate closer to zero (the point at which estimated cash inflows equal estimated cash outflows) than any previous estimate. The economic value, or net worth, of the Fund that supports FHA’s insurance depends on the relative size of cash outflows and inflows over time. 
Cash flows out of the Fund for payments associated with claims on defaulted loans and refunds of up-front premiums on prepaid mortgages. To cover these outflows, FHA receives cash inflows from borrowers’ insurance premiums and net proceeds from recoveries on defaulted loans. If the Fund were to be exhausted, the U.S. Treasury would have to cover lenders’ claims directly. Two major trends in the conventional mortgage market have significantly affected FHA. First, in recent years, members of the conventional mortgage market (such as private mortgage insurers, Fannie Mae, and Freddie Mac) increasingly have been active in supporting low- and even no-down-payment mortgages, increasing consumer choices for borrowers who may have previously chosen an FHA-insured loan. Second, to help assess the default risk of borrowers, particularly those with high loan-to-value ratios, the mortgage industry has increasingly used mortgage scoring and automated underwriting systems. Mortgage scoring is a technology-based tool that relies on the statistical analysis of millions of previously originated mortgage loans to determine how key attributes such as the borrower’s credit history, property characteristics, and terms of the mortgage affect future loan performance. As a result of such tools, the mortgage industry is able to process loan applications more quickly and consistently than in the past. In 2004, FHA implemented a mortgage scoring tool, called the FHA Technology Open to Approved Lenders (TOTAL) Scorecard, to be used in conjunction with existing automated underwriting systems. HUD’s legislative proposal is intended to modernize FHA, in part, to respond to the changes in the mortgage market. The proposal, among other things, would authorize FHA to change the way it sets insurance premiums, insure larger loans, and reduce down-payment requirements. 
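The subsidy and Fund mechanics described above reduce to a discounting exercise: the subsidy rate is the net present value cost of a cohort of insured loans per dollar of loan volume, and a negative rate means discounted inflows exceed discounted outflows. The sketch below illustrates only that arithmetic; the cash flows, discount rate, and loan volume are invented for demonstration, and FHA's actual reestimation model under federal credit reform discounts many more cash flow categories.

```python
def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (years 1..n) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def subsidy_rate(inflows, outflows, loan_volume, rate=0.05):
    """Net present value cost of the program per dollar of loans insured.

    A negative rate means discounted inflows exceed discounted outflows,
    i.e., a 'negative subsidy' program.
    """
    net_cost = present_value(outflows, rate) - present_value(inflows, rate)
    return net_cost / loan_volume

# Invented (hypothetical) annual cash flows, in $ millions: premium and
# recovery inflows modestly exceed claim and refund outflows.
inflows = [40, 35, 30, 25, 20]
outflows = [10, 25, 35, 30, 15]
rate_estimate = subsidy_rate(inflows, outflows, loan_volume=10_000)
```

With these invented figures the rate comes out slightly negative, mirroring the negative-subsidy status of the Fund described above.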
The proposed legislation would enable FHA to depart from its current, essentially flat, premium structure and charge a wider range of premiums based on individual borrowers’ risk of default. Currently, FHA also requires homebuyers to make a 3 percent contribution toward the purchase of a property. HUD’s proposal would eliminate this contribution requirement and enable FHA to offer some borrowers a no-down-payment product. FHA is subject to limits on the size of the loans it can insure. For example, for a one-family property in a high-cost area, the FHA limit is 87 percent of the limit established by Freddie Mac. The legislative proposal would raise this limit to 100 percent of the Freddie Mac limit. If Congress authorizes the reforms HUD has proposed, FHA’s ability to assess the default risk of borrowers will take on increased importance because FHA would be adjusting insurance premiums based on its assessments of the credit risk of borrowers and insuring potentially larger and riskier mortgages with low or no down payments. A primary tool that FHA uses to assess the default risk of borrowers who apply for FHA-insured mortgages is its TOTAL scorecard. In reports we issued in November 2005 and April 2006, we noted that while FHA’s process for developing TOTAL generally was reasonable, some of the choices FHA made in the development process could limit the scorecard’s effectiveness. FHA and its contractor used variables that reflected borrower and loan characteristics to create TOTAL, as well as an accepted modeling process to test the variables’ accuracy in predicting default. However, we also found that: The data used to develop TOTAL were 12 years old by the time FHA implemented the scorecard. Specifically, when FHA began developing TOTAL in 1998, the agency chose to use 1992 loan data, which would be old enough to provide a sufficient number of defaults that could be attributed to a borrower’s poor creditworthiness. 
However, FHA did not implement TOTAL until 2004 and has not subsequently updated the data used in the scorecard. Best practices of private-sector organizations call for scorecards to be based on data that are representative of the current mortgage market—specifically, relevant data that are no more than several years old. In the past 12 years, significant changes—growth in the use of down-payment assistance, for example—have occurred in the mortgage market that have affected the characteristics of those applying for FHA-insured loans. As a result, the relationships between borrower and loan characteristics and the likelihood of default also may have changed. TOTAL does not include certain key variables that could help explain expected loan performance. For example, TOTAL does not include a variable for the source of the down payment. However, FHA contractors, HUD’s Inspector General, and our work have all identified the source of a down payment as an important indicator of risk, and the use of down-payment assistance in the FHA program has grown rapidly over the last 5 years. Further, TOTAL does not include other important variables—such as a variable for generally riskier adjustable rate loans—included in other scorecards used by private-sector entities. Although FHA has a contract to update TOTAL by 2007, the agency did not develop a formal plan for updating TOTAL on a regular basis. Best practices in the private sector, also reflected in bank regulator guidance, call for having formal policies to ensure that scorecards are routinely updated. Without policies and procedures for routinely updating TOTAL, the scorecard may become less reliable and, therefore, less effective at predicting the likelihood of default. 
To improve TOTAL’s effectiveness, we recommended, among other things, that HUD develop policies and procedures for regularly updating TOTAL and more fully consider the risks posed by down-payment assistance when underwriting loans by including the presence and source of down-payment assistance as a loan variable in the scorecard. In response, FHA agreed to consider incorporating a variable for down-payment assistance in TOTAL. Despite potential limitations in the use of TOTAL, HUD still could realize additional benefits from the scorecard, if, like private-sector lenders and mortgage insurers, it put TOTAL to other uses. Based on its current use of TOTAL, FHA lenders and borrowers have seen two added benefits—less paperwork and more consistent underwriting decisions. However, private lenders and mortgage insurers put their scorecards to other uses, including to help price products based on risk and launch new products. For example, to set risk-based prices, private-sector organizations use scorecards to rank the relative risk of borrowers and price products according to that ranking. By increasing their use of scorecards, these organizations are able to broaden their customer base and improve their financial performance. Adopting these best practices from the private sector could generate similar kinds of benefits for FHA, particularly if FHA were to implement risk-based pricing. To the extent that conventional mortgage lenders and insurers are better able than FHA to use mortgage scoring to identify and approve relatively low-risk borrowers and charge fees based on default risk, FHA may face adverse selection—that is, conventional providers may approve lower-risk borrowers in FHA’s traditional market segment, leaving relatively high-risk borrowers for FHA. Accordingly, the greater the effectiveness of TOTAL, the greater the likelihood that FHA will be able to effectively manage the risks posed by borrowers seeking FHA-insured loans. 
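Scorecards of the kind discussed above are, at bottom, statistical models fit to historical loan outcomes that rank borrowers by estimated default risk. The following is a minimal logistic-style sketch of that idea; the attributes and weights are invented for illustration and are not the coefficients of the actual TOTAL scorecard.

```python
import math

# Invented weights for an illustrative scorecard; these are NOT the
# coefficients of FHA's TOTAL scorecard, which are not published here.
WEIGHTS = {
    "intercept": -2.0,
    "credit_score": -0.004,  # a higher credit score lowers default risk
    "ltv": 0.02,             # a higher loan-to-value ratio raises risk
    "dti": 0.03,             # a higher debt-to-income ratio raises risk
}

def default_probability(credit_score, ltv, dti):
    """Logistic-style estimate of the probability of default."""
    z = (WEIGHTS["intercept"]
         + WEIGHTS["credit_score"] * credit_score
         + WEIGHTS["ltv"] * ltv
         + WEIGHTS["dti"] * dti)
    return 1.0 / (1.0 + math.exp(-z))

# A stronger borrower profile should score as less likely to default.
stronger = default_probability(credit_score=740, ltv=80, dti=30)
weaker = default_probability(credit_score=580, ltv=97, dti=45)
```

A ranking of this kind is what enables the risk-based pricing and adverse-selection dynamics described above: whoever scores borrowers more accurately can price and approve the lower-risk applicants first.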
To improve how FHA benefits from TOTAL, we recommended that the agency explore additional uses for the scorecard, including using it to implement risk-based pricing of mortgage insurance and to develop new products. These actions could enhance FHA’s ability to effectively compete in the mortgage market. In response to our recommendations, FHA indicated that it planned to explore these uses for TOTAL. If implemented, HUD’s legislative proposal could affect the Fund’s cash inflows and outflows and, as a result, significantly affect the credit subsidy costs of the insurance program. For example, changes in FHA’s insurance premiums could affect the revenues FHA receives, and changes in the composition and riskiness of the loan portfolio (as a result of larger loans or more loans with no down payments) could affect the size and number of insurance claims FHA pays. As previously noted, FHA, like other federal agencies, is required to reestimate credit subsidy costs annually to reflect actual loan performance and expected changes in estimates of future loan performance. FHA has estimated negative credit subsidies for the Fund since 1992, when federal credit reform became effective. However, as we reported in September 2005, with the exception of the 1992 reestimate, FHA’s subsidy reestimates have been less favorable than the original estimates. In particular, FHA’s $7 billion reestimate for fiscal year 2003 was more than twice the size of any other reestimate from fiscal years 2000 through 2004. The $7 billion reestimate for fiscal year 2003 had three main components. The first component was the $3.9 billion difference between FHA’s fiscal year 2003 estimates of the net present value of future cash flows and the estimates it made one year earlier. Most of this difference stemmed from changes in FHA’s estimates of claims and, to a lesser extent, prepayments (the payment of a loan before its maturity date). 
That is, FHA changed its estimate of future loan performance based on its observation of actual loan performance during fiscal year 2003 and revised economic assumptions. The second component was the $2.1 billion difference between estimated and actual cash flows occurring during fiscal year 2003. An underestimation of claims (net of recoveries on claims) and an overestimation of net fees (insurance premium receipts less premium refunds) for loans made prior to fiscal year 2003 largely account for the difference. The third component was an interest adjustment on the reestimate required by Office of Management and Budget guidance that increased the total reestimate by $1.1 billion. Several recent policy changes and trends may have contributed to changes in the expected claims underlying the $7 billion reestimate. For example: Revised underwriting guidelines made it easier for borrowers who are more susceptible to changes in economic conditions—and therefore more likely to default on their mortgages—to obtain an FHA-insured loan. Competition from conventional mortgage providers could have resulted in FHA insuring riskier borrowers. FHA insured an increasing number of loans with down-payment assistance, which generally have a greater risk of default. FHA’s loan performance models did not include key variables that help estimate loan performance, such as credit scores and, as of September 2005, the source of the down payment. The major factors underlying the surge in prepayment activity that also contributed to the reestimate were declining interest rates and rapid appreciation of housing prices. These trends created incentives and opportunities for borrowers to refinance using conventional loans. To more reliably estimate program costs, we recommended that FHA study and report on how variables found to influence credit risk, such as payment-to-income ratios, credit scores, and down-payment assistance, would affect the forecasting ability of its loan performance models. 
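The three components of the fiscal year 2003 reestimate described above sum, within rounding, to the roughly $7 billion total:

```python
# Components of the fiscal year 2003 reestimate, in $ billions (rounded
# figures as reported above).
cash_flow_revision = 3.9    # revised NPV of future cash flows vs. the prior year
estimate_vs_actual = 2.1    # estimated vs. actual fiscal year 2003 cash flows
interest_adjustment = 1.1   # OMB-required interest adjustment on the reestimate

total = cash_flow_revision + estimate_vs_actual + interest_adjustment
# The rounded components sum to 7.1, consistent with the ~$7 billion figure.
```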
We also recommended that when changing the definitions of key variables, FHA report the impact of such changes on the models’ forecasting ability. In response, FHA indicated, among other things, that its contractor was considering the specific variables that we had recommended FHA include in its annual actuarial review and had incorporated the source of down-payment assistance in the 2005 actuarial review of the Fund. If Congress authorized FHA to insure mortgages with smaller or no down payments, practices used by other mortgage institutions could help FHA to design and implement these new products. In a February 2005 report, we identified steps that mortgage institutions take when introducing new products. Specifically, mortgage institutions often utilize special requirements when introducing new products, such as requiring additional credit enhancements (mechanisms for transferring risk from one party to another) or implementing stricter underwriting requirements, and limiting how widely they make available a new product. Some mortgage institutions require additional credit enhancements on low- and no-down-payment products, which generally are riskier because they have higher loan-to-value ratios than loans with larger down payments. For example, Fannie Mae and Freddie Mac mitigate the risk of low- and no-down-payment products by requiring additional credit enhancements such as higher mortgage insurance coverage. Although FHA is required to provide up to 100 percent coverage of the loans it insures, FHA may engage in co-insurance of its single-family loans. Under co-insurance, FHA could require lenders to share in the risks of insuring mortgages by assuming some percentage of the losses on the loans that they originated (lenders would generally use private mortgage insurance for risk sharing). Mortgage institutions also can mitigate the risk of low- and no-down-payment products through stricter underwriting. 
Institutions can do this in a number of ways, including requiring a higher credit score threshold for certain products, requiring greater borrower reserves, or requiring more documentation of income or assets from the borrower. Although the changes FHA could make are limited by statutory standards, it could benefit from similar approaches. The HUD Secretary has latitude within statutory limitations to change underwriting requirements for new and existing products and has done so many times. For example, FHA expanded its definition of what could be included as borrower’s effective income when calculating payment-to-income ratios. However, FHA officials told us that they were unlikely to mandate a credit score threshold or borrower reserve requirements for a no-down-payment product because the product was intended to serve borrowers who are underserved by the conventional market, including those who lack credit scores and have little wealth or personal savings. Finally, mortgage institutions can increase fees or charge higher premiums to help offset the potential costs of products that are believed to have greater risk. For example, Fannie Mae officials stated that they would charge higher guarantee fees on low- and no-down-payment loans if they were not able to require higher insurance coverage. FHA, if authorized to implement risk-based pricing, could set higher premiums on FHA-insured loans understood to have greater risk. We recommended that if FHA implemented a no-down-payment mortgage product or other new products about which the risks were not well understood, the agency should (1) consider incorporating stricter underwriting criteria such as appropriate credit score thresholds or borrower reserve requirements and (2) utilize other techniques for mitigating risks, including the use of credit enhancements. In response, FHA said it agreed that these techniques should be evaluated when considering or proposing a new FHA product. 
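Risk-based pricing of the sort discussed above generally sizes the premium to the expected loss on the loan (claim probability times loss severity) plus an operating margin. The sketch below is a minimal illustration with invented parameters; it does not model FHA's statutory premium structure.

```python
def risk_based_premium(default_prob, loss_severity, margin=0.002):
    """Annual premium rate sized to expected loss plus an operating margin.

    default_prob: estimated probability of an insurance claim
    loss_severity: expected loss, net of recoveries, as a share of loan amount
    """
    return default_prob * loss_severity + margin

# Invented risk tiers: the riskier tier is charged a higher premium rate.
lower_tier = risk_based_premium(default_prob=0.02, loss_severity=0.30)
higher_tier = risk_based_premium(default_prob=0.08, loss_severity=0.35)
```

Under a flat premium structure both tiers would pay the same rate, which is why the proposal to charge a wider range of premiums depends on FHA being able to rank borrower risk accurately.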
Some mortgage institutions initially may offer new products on a limited basis. For example, Fannie Mae and Freddie Mac sometimes use pilots, or limited offerings of new products, to build experience with a new product type. Fannie Mae and Freddie Mac also sometimes set volume limits for the percentage of their business that could be low- and no-down-payment lending. FHA has utilized pilots or demonstrations when making changes to its single-family mortgage insurance but generally has done so in response to a legislative requirement rather than on its own initiative. For example, FHA’s Home Equity Conversion Mortgage insurance program started as a pilot that authorized FHA to insure 2,500 reverse mortgages. Additionally, some mortgage institutions may limit the origination and servicing of new products to their better lenders and servicers. Fannie Mae and Freddie Mac both reported that these were important steps in introducing a new product. We recommended that when FHA releases new products or makes significant changes to existing products, it consider similar steps to limit the initial availability of these products. FHA officials agreed that they could, under certain circumstances, envision piloting or limiting the ways in which a new product would be available, but pointed to the practical limitations of doing so. For example, FHA officials told us that administering the Home Equity Conversion Mortgage pilot program was difficult because of the challenges of equitably selecting a limited number of lenders and borrowers. FHA generally offers products on a national basis and, if it did not, lenders or borrowers in specific regions of the country might question why they were not able to receive the same benefit. FHA officials told us they have conducted pilot programs when Congress has authorized them, but they questioned the circumstances under which pilot programs were needed, and also said that they lacked sufficient resources to appropriately manage a pilot. 
However, if FHA does not limit the availability of new or changed products, the agency runs the risk of facing higher claims from products whose risks may not be well understood. HUD’s legislative proposal would represent a significant change to the agency’s single-family mortgage insurance program and presents new risk management challenges. In our November 2005 report examining FHA’s actions to manage the new risks associated with the growing proportion of loans with down-payment assistance, we found that the agency did not implement sufficient standards and controls to manage the risks posed by these loans. Homebuyers who receive FHA-insured mortgages often have limited funds and, to meet the 3 percent borrower investment FHA currently requires, may obtain down-payment assistance from a third party, such as a relative or a charitable organization (nonprofit) that is funded by property sellers. The proportion of FHA-insured loans that are financed in part by down-payment assistance from various sources has increased substantially in the last few years, while the overall number of loans that FHA insures has fallen dramatically. Money from nonprofits funded by seller contributions has accounted for a growing percentage of that assistance. From 2000 to 2004, the total proportion of FHA-insured purchase loans that had a loan-to-value ratio greater than 95 percent and that also involved down-payment assistance, from any source, grew from 35 to nearly 50 percent. Approximately 6 percent of FHA-insured purchase loans in 2000 received down-payment assistance from nonprofits (the large majority of which were funded by property sellers), but by 2004 nonprofit assistance grew to about 30 percent. We and others have found that loans with down-payment assistance do not perform as well as loans without down-payment assistance. 
We analyzed loan performance by source of down-payment assistance, using two samples of FHA-insured purchase loans from 2000, 2001, and 2002—a national sample and a sample from three Metropolitan Statistical Areas (MSA) with high rates of down-payment assistance. Holding other variables constant, our analysis indicated that FHA-insured loans with down-payment assistance had higher delinquency and claim rates than similar loans without such assistance. For example, we found that the probability that loans with nonseller-funded sources of down-payment assistance would result in insurance claims was 49 percent higher in the national sample and 45 percent higher in the MSA sample than it was for comparable loans without assistance. Similarly, the probability that loans with nonprofit seller-funded, down-payment assistance would result in insurance claims was 76 percent higher in the national sample and 166 percent higher in the MSA sample than it was for comparable loans without assistance. The poorer performance of loans with nonprofit seller-funded, down-payment assistance may be explained, in part, by the sales prices of the homes bought with such assistance. More specifically, our analysis indicated that FHA-insured homes bought with seller-funded nonprofit assistance were appraised and sold for about 2 to 3 percent more than comparable homes bought without such assistance. The difference in performance also may be partially explained by the homebuyer having less equity in the transaction. FHA has implemented some standards and internal controls to manage the risks associated with loans with down-payment assistance, but stricter standards and additional controls could help FHA better manage risks posed by these loans while meeting its mission of expanding homeownership opportunities. 
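The "X percent higher" figures above are relative increases in claim probability, not percentage-point differences. A small illustration with invented baseline probabilities makes the distinction concrete:

```python
def percent_higher(p_with, p_without):
    """Relative increase in claim probability, expressed in percent."""
    return (p_with / p_without - 1.0) * 100.0

# Hypothetical probabilities chosen to mirror the reported relationship:
# a 49 percent *relative* increase over a 5 percent baseline is a claim
# probability of about 7.5 percent, not 54 percent.
baseline = 0.050
with_assistance = 0.0745
relative_increase = percent_higher(with_assistance, baseline)
```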
Like other mortgage industry participants, FHA generally applies the same underwriting standards to loans with down-payment assistance that it applies to loans without such assistance. One important exception is that FHA, unlike others, does not limit the use of down-payment assistance from seller-funded nonprofits. Some mortgage industry participants view assistance from seller-funded nonprofits as a seller inducement to the sale and, therefore, either restrict or prohibit its use. FHA has not viewed such assistance as a seller inducement and, therefore, does not subject this assistance to the limits it otherwise places on contributions from sellers. However, due in part to concerns about loans with nonprofit seller-funded, down-payment assistance, FHA has proposed legislation that could help eliminate the need for such assistance by allowing some FHA borrowers to make no down payments for an FHA-insured loan. FHA has taken some steps to assess and manage the risks associated with loans with down-payment assistance, but additional controls may be warranted. For example, FHA has contracted for two studies to assess the use of such assistance with FHA-insured loans and conducted ad hoc performance analyses of loans with down-payment assistance but has not routinely assessed the impact that the widespread use of down-payment assistance has had on loan performance. Also, FHA has targeted its monitoring of appraisers to those that do a high volume of loans with down-payment assistance, but FHA has not targeted its monitoring of lenders to those that do a high volume of loans with down-payment assistance, even though FHA holds lenders, as well as appraisers, accountable for ensuring a fair valuation of the property it insures. Our report made several recommendations designed to better manage the risks of loans with down-payment assistance generally, and more specifically from seller-funded nonprofits. 
Overall, we recommended that in considering the costs and benefits of its policy permitting down-payment assistance, FHA also consider risk-mitigation techniques such as including down-payment assistance as a factor when underwriting loans or more closely monitoring loans with such assistance. For down-payment assistance providers that receive funding from property sellers, we recommended that FHA take additional steps to mitigate the risks of these loans, such as treating such assistance as a seller contribution and, therefore, subject to existing limits on seller contributions. In response, FHA agreed to improve its oversight of down-payment assistance lending by (1) modifying its information systems to document assistance from seller-funded nonprofits and (2) requiring lenders to inform appraisers when assistance is provided by seller-funded nonprofits. In addition, HUD has proposed a zero down-payment program as an alternative to seller-funded, down-payment assistance. In May 2006, the Internal Revenue Service issued a ruling stating that organizations that provide seller-funded, down-payment assistance to home buyers do not qualify as tax-exempt charities. FHA permitted these organizations to provide down-payment assistance because they qualified as charities. Accordingly, the ruling could significantly reduce the number of FHA-insured loans with seller-funded down payments. The risks FHA faces in today’s mortgage market are growing. For example, the agency has seen increased competition from conventional mortgage and insurance providers, many of which offer low- and no-down-payment products and that may be better able than FHA to identify and approve relatively low-risk borrowers. Additionally, FHA is insuring a greater proportion of loans with down-payment assistance. These loans are more likely to result in insurance claims than loans without such assistance. 
To effectively manage the risks posed by FHA's existing products, we have concluded from our prior work that the agency must significantly improve its risk management and cost estimation practices. We are encouraged by a variety of steps FHA has taken to enhance its capabilities in these areas, such as developing and implementing a mortgage scorecard and improving its loan performance models. However, FHA needs to take additional steps, such as establishing policies and procedures for updating the TOTAL scorecard on a regular basis, more fully considering the risks posed by down-payment assistance when underwriting loans, developing a framework for introducing new products in a way that mitigates risk, and studying and reporting on the impact of variables found to influence credit risk that are not currently in the agency's loan performance models. HUD's legislative proposal could help FHA serve more low-income and first-time homebuyers, but it also would introduce additional risks to the Fund. Consideration of this proposal should include serious deliberation of the associated risks and the capacity of FHA to mitigate them. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact William B. Shear at (202) 512-8678. Individuals making key contributions to this testimony included Triana Bash, Anne Cangi, Marcia Carlsen, John Fisher, Austin Kelly, John McGrail, Andrew Pauline, Barbara Roesmann, Mathew Scirè, Katherine Trimble, and Steve Westley. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA) has faced several challenges in recent years, including rising default rates, higher-than-expected program costs, and a sharp decline in program participation. To help FHA adapt to market changes, HUD has proposed a number of changes to the National Housing Act that would raise FHA's mortgage limits, allow greater flexibility in setting insurance premiums, and reduce down-payment requirements. Implementing the proposed reforms would require FHA to manage new risks and estimate the costs of program changes. To assist Congress in considering issues faced by FHA, this testimony provides information from recent reports GAO has issued that address FHA's risk management and cost estimates. Specifically, this testimony looks at (1) FHA's development and use of its mortgage scorecard, (2) FHA's consistent underestimation of program costs, (3) instructive practices for managing risks of new mortgage products, and (4) weaknesses in FHA's management of risks related to loans with down-payment assistance. Recent trends in mortgage lending have significantly affected FHA, including increased use of automated tools (e.g., mortgage scoring) to underwrite loans, increased competition from lenders offering low-and no-down-payment products, and a growing proportion of FHA-insured loans with down-payment assistance. Although FHA has taken steps to improve its risk management, in a series of recent reports, GAO identified a number of weaknesses in FHA's ability to manage risk and estimate program costs during this period of change. The way that FHA developed and uses its mortgage scorecard, while generally reasonable, limits how effectively it assesses the default risk of borrowers. With one exception, FHA's reestimates of program costs have been less favorable than originally estimated, including a $7 billion reestimate for fiscal year 2003.
FHA has not consistently implemented practices used by other mortgage institutions to help manage the risks associated with new mortgage products. FHA has not developed sufficient standards and controls to manage risks associated with insuring a growing proportion of loans with down-payment assistance. GAO made several recommendations in its recent reports, including that FHA (1) incorporate the risks posed by down-payment assistance into its mortgage scorecard, (2) study and report on the impact of variables not in its loan performance models that have been found to influence credit risk, and (3) consider piloting new mortgage products. FHA has taken actions in response to GAO's recommendations, but additional improvements in managing risk and estimating program costs will be important if FHA is to successfully implement its proposed program changes.
Reverse auctions are similar to traditional auctions, except that the sellers compete against each other to sell their products or services to the buyer. Unlike a traditional auction, in which multiple buyers bid against one another to push the price up, reverse auctions enable a buyer to evaluate proposals submitted by multiple sellers, who compete against one another to provide the lowest price or highest-value offer. After considering all offers, the buyer selects the winning proposal, often at a reduced price. Figure 1 compares these two types of auctions. Prior to 1997, auctioning techniques were prohibited in the federal government under the Federal Acquisition Regulation (FAR) procedures for negotiated procurements. Agencies were prohibited from advising an offeror of where its bid stood compared with those of other offerors and from furnishing information about other offerors' prices, both of which are key tenets of a reverse auction. In 1997, the FAR Council rewrote Part 15 of the FAR to eliminate these prohibitions as part of an overall effort to make the source selection process more innovative, simplify the process, and facilitate a best value acquisition approach. Currently, while the FAR does not specifically address reverse auctions, several provisions facilitate agencies' use of them, such as allowing the use of innovative strategies and electronic commerce. OFPP has considered the need for government-wide guidance on reverse auctions numerous times since 1997, when the auctions prohibition was removed from the FAR. In October 2000, the FAR Council issued a notice in the Federal Register requesting information from the acquisition community to help inform its thinking regarding the use of reverse auction techniques.
In 2007, when responding to the Conference Report accompanying the National Defense Authorization Act for Fiscal Year 2006, OFPP conducted a survey of both sellers and buyers to assess how federal government buying activities could most effectively use reverse auctions to increase savings. According to OFPP officials, neither of these efforts resulted in guidance being provided to federal agencies. In 2001, the Department of Defense (DOD) issued guidance on acquiring commercial items stating that, where competition exists, reverse auctions drive down sellers' offered prices. Since then, several reviews have addressed agencies' use of reverse auctions. In 2003, the Army Corps of Engineers concluded that the method was useful for commodities purchases but was not suitable for construction services acquisitions. In 2004, we reported that the U.S. Postal Service claimed over $5.9 million in savings by using reverse auctions in fiscal year 2003; however, we found that $2.1 million of the claimed savings was questionable because of incorrect baseline data. We also noted that the Postal Service may not have obtained the lowest prices possible when, in about a quarter of the auctions, it received only one bid. In 2011, we found that half of the 24 agencies involved in an Office of Management and Budget (OMB) cost savings initiative reported using reverse auctions to improve competition and reduce prices on commonly purchased products and services. The agencies we reviewed—Army, DHS, DOI, and VA—have steadily increased their use of reverse auctions in volume and dollars in recent years. From fiscal years 2008 to 2012, the number of reverse auctions almost tripled—from 7,193 to 19,688—and resulted in about $828 million in fiscal year 2012 contract awards.
While there is no requirement to limit reverse auctions to commercial items, agencies generally used them to acquire commercial products and services—primarily for information technology (IT) products and the lease or rental of equipment. Combined, the agencies used reverse auctions to award only a small portion of all commercial acquisitions—not quite 7 percent of actions. Across the agencies we reviewed, reverse auctions shared several common characteristics, such as relatively small dollar value awards and a high rate of awards to small businesses. Across the agencies we selected, use of reverse auctions increased almost 175 percent between fiscal years 2008 and 2012. Figure 2 summarizes the growth in use of reverse auctions in dollars and number of auctions. Agencies in our review used reverse auctions to purchase a variety of commercial products, although the top five categories varied among agencies. For example, Army, DHS, and DOI purchased mostly IT products, while the VA purchased mostly medical equipment and supplies. In addition, DHS and DOI purchased comparatively less in medical supplies than the Army and the VA. Across the selected agencies, 41 percent of all reverse auctions were for IT-related items and 23 percent were for medical supplies in fiscal year 2012, as shown in figure 3. Of the $828 million in fiscal year 2012 contracting actions that resulted from reverse auctions, $746 million—or 90 percent—were for products. Services, in contrast, constituted about 10 percent. Across the agencies, four categories of services—Lease or Rental of Equipment; IT and Telecom; Medical Services; and Maintenance, Repair, and Rebuilding of Equipment—made up nearly 60 percent of the $83 million used to buy services through reverse auctions.
In fiscal year 2012, the four agencies in our review collectively reported more than 234,000 contract actions, excluding modifications, for commercial items that were valued at $21.5 billion. The agencies used reverse auctions to award only a small portion of all commercial acquisitions—not quite 7 percent of actions—which represented a little less than 4 percent of the dollar value of those commercial actions. Figure 4 compares the relationship between the number and value of reverse auctions as a percent of all commercial actions in fiscal year 2012. While to date most reverse auctions have been used for commercial products, some agency officials told us that the use of reverse auctions to acquire services is increasing and that reverse auctions are also being used for more complex contract actions. Our analysis of the data from FedBid identified four common characteristics among contract awards. Awards of $150,000 or less: There is no requirement to limit reverse auctions to a certain award value, and the reverse auctions at the selected agencies in fiscal year 2012 resulted in awards that ranged in value from about $100 to almost $6.3 million. However, we found that about 95 percent of the acquisitions using reverse auctions resulted in awards of $150,000 or less. See figure 5. While we could not assess DLA's activity at this level of detail, its guidance states that the reverse auction pricing tool should be used for all competitive purchases over $150,000. Awards to small businesses: About 86 percent of fiscal year 2012 acquisitions using reverse auctions—16,906 of 19,688—went to small businesses, in keeping with the FAR requirement that acquisitions of supplies or services with expected values of more than $3,000 but not over $150,000 are reserved for small businesses, with some exceptions. These small business awards accounted for $661 million (80 percent) of the dollar value of all reverse auction awards. See figure 6.
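The FAR set-aside threshold cited above can be expressed as a simple check. This is an illustrative sketch only; the function name is ours, and the FAR's actual rule involves exceptions that this sketch ignores.

```python
def reserved_for_small_business(expected_value):
    """Simplified sketch of the FAR rule described in the report:
    acquisitions expected to exceed $3,000 but not $150,000 are
    generally reserved for small businesses. Real determinations
    involve exceptions this illustration omits."""
    return 3_000 < expected_value <= 150_000
```

For example, a $50,000 purchase falls inside the reserved band, while a $2,500 micro-purchase or a $200,000 acquisition falls outside it.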
Almost half of the reverse auctions in fiscal year 2012 across the four agencies in our review—9,257 of 19,688—were conducted to place orders for products and services using existing contracts. Federal agencies can use a number of existing contract vehicles to leverage buying power and obtain lower prices, including the General Services Administration's (GSA) multiple award schedule (Schedule) program, multi-agency contracts, and government-wide acquisition contracts (GWAC). In some cases, the use of these contract vehicles includes a fee that the ordering agency must pay. The remaining reverse auctions did not result in placing orders under existing contracts but were considered open market transactions. See table 1. See appendix II for a description of each of the existing contracts used and the associated fees. Almost 60 percent of the contract actions resulting from reverse auctions conducted in fiscal year 2012 across the selected agencies were awarded at the end of the fiscal year, as shown in figure 7. Agency officials told us that the surge of fourth quarter reverse auctions parallels what happens with acquisitions in general at that time of the year and can be due to late release of funds. According to agency officials, reverse auctions, which can take as little as an hour for uncomplicated purchases, can facilitate the timely award of contracts at the end of the year. In prior work, we reported that a contracting officer turned to procedures that facilitated the rapid award of contracts in the fourth quarter. Four of the five agencies included in our review used the same service provider, FedBid, to conduct their reverse auctions. Agencies pay a variable fee to conduct reverse auctions through FedBid, which is no more than 3 percent of the winning bid. FedBid has a limited role in the acquisition process, since contracting officers are still responsible for making key decisions that affect the auction, such as selecting the winning vendor.
For example, contracting officers select the basis for award, which may include award to other than the lowest-priced bidder, and we found that about a fourth of the fiscal year 2012 contract actions resulting from reverse auctions were not awarded to the lowest bidder. The Army, DHS, DOI, and VA contracted with a company called FedBid to conduct their reverse auctions during fiscal year 2012. The agencies used an existing GSA Schedule contract to procure FedBid's services, which include an online user interface, data management, and regular reports on reverse auction activity. According to FedBid, its product is a commercially available online procurement service that allows sellers of commercial items to the government to compete against each other in real time and in an interactive environment. FedBid also states that it safeguards each seller's identity and pricing. Other companies also conduct reverse auctions, and some officials noted their agencies had undertaken a cost/benefit analysis to determine which company to contract with for this service. Agency acquisition officials told us that using a contractor for their reverse auctions reduced some of their administrative duties and allowed senior contracting officers to spend more time on complex acquisitions. For example, FedBid offers remote and on-site assistance to train, set up accounts, and provide technical support for federal reverse auction users. According to a FedBid representative, the company provides staff for an on-site helpdesk at the Army and DHS full time, and at DOI and VA on demand as needed, typically in the fourth quarter of the fiscal year. According to agency officials, FedBid employees also provide training on the use of its system at government contracting facilities. In addition, FedBid provides technical support to vendors on how to use its system. However, vendor questions about contract requirements are directed to the contracting officer.
OFPP procurement policy notes that agencies should provide a greater degree of scrutiny when contracting for professional and management support services, which include acquisition support, program evaluation, and other services that can affect the government’s decision-making authority. We did not conduct a detailed review of FedBid’s role in providing technical support to contracting officials at the agencies. However, regarding its use of FedBid, DHS’s Office of Procurement Operations recognized this concern and issued an operating procedure to emphasize that documentation in the contract file must clearly state that the contracting officer made all acquisition decisions throughout the procurement process, and that the role of any acquisition support contractor personnel was solely administrative and not decision-making. Other agencies in our review noted that this is a good practice and one that could be easily implemented. DLA, the fifth agency we reviewed, did not obtain reverse auction services from FedBid, but rather purchased a license that allows it to conduct its own real-time, web-based auctions. The site allows DLA to manage its reverse auctions without the need for contractor services, with the exception of occasional technical support. The design of online reverse auctions can vary based on the acquisition strategy selected by the contracting officer. While the FAR is silent on reverse auctions specifically, agency officials told us contracting officers are required to follow other applicable acquisition procedures as outlined in the FAR and agencies’ specific acquisition regulations when deciding to use a reverse auction and then throughout the auction and award process. 
For example, the program office or contracting officers are required to conduct market research for acquisitions above the simplified acquisition threshold (currently $150,000) or, for acquisitions below that threshold, when adequate information is not available and the circumstances justify its cost. The contracting officer is also generally expected to determine whether an acquisition with an estimated value exceeding $3,000 but not over $150,000 will be set aside for small businesses, and to follow simplified acquisition procedures to the greatest extent possible for all purchases not exceeding the simplified acquisition threshold. When using a reverse auction, contracting officers can utilize an existing contract vehicle, such as the GSA Schedule, or set aside the procurement for certain small business communities, where appropriate under applicable regulations. Contracting officers determine other features as well, including the length of the auction and the amount of information available to bidders about each other's bids. These strategies or features can affect the competitive environment of the auction and the magnitude of cost savings. When setting up an auction on FedBid's system, a contracting officer can choose to set a target price, which may be based on a government cost estimate or market research. If a target price is in effect, or "active," a vendor must bid below that price—and below any other subsequent bids—in order to be the leading vendor. A vendor is informed when it is the leading vendor during the auction, though other vendors' names and bid prices are not disclosed. A contracting officer can award a contract even if no submitted bids meet the target price. Vendors must register with FedBid and agree to the requirements established by the contracting officer before submitting a bid in an auction.
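The leading-vendor rule described above—a bid leads only if it falls below an active target price and below any current low bid—can be sketched as follows. The function and parameter names are illustrative assumptions, not FedBid's actual implementation.

```python
def is_leading_bid(bid, target_price=None, current_low=None):
    """Illustrative sketch of the bidding rule the report describes:
    with an active target price, a new bid becomes the leading bid
    only if it is below the target and below any current low bid."""
    if target_price is not None and bid >= target_price:
        return False  # active target price not beaten
    if current_low is not None and bid >= current_low:
        return False  # does not undercut the current leading bid
    return True
```

Note that, as the report states, the contracting officer may still award a contract even when no submitted bid meets the target price.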
These requirements may include delivery terms, whether the acquisition must be a brand name item, or other terms specific to the acquisition. Vendors can use FedBid’s system to submit questions about requirements during the auction, and the system notifies the contracting officer via e-mail. It is up to the contracting officer to decide whether to answer them. Questions about using FedBid’s system are usually sent to FedBid employees. Figure 8 outlines the acquisition process when using a reverse auction, along with the roles of the agency officials procuring the item, FedBid, and the bidding vendors. When a vendor submits a bid, FedBid automatically adds its fee and ranks the adjusted bid (i.e., the vendor’s bid plus the fee) against adjusted bids submitted by other vendors. When the reverse auction ends and the contracting officer receives the results, the bids, which already include FedBid’s fee, are ranked from lowest to highest. According to agency officials, contracting officers are then responsible for determining that the results of the reverse auction have met the competition, savings, and other criteria for the procurement, selecting a winning vendor from those results, and awarding the contract. When the agency receives the goods or services, it pays the entire bid amount to the selected vendor, including the reverse auction fee. FedBid then sends an invoice to the selected vendor for the reverse auction fee. FedBid caps its fee at 3 percent of the winning vendor’s bid, but the fee may be less depending on the specifics of FedBid’s contract with the agency. For example, in June 2009, DHS’s Office of Procurement Operations negotiated a reduced fee for its reverse auctions. In addition, FedBid may reduce its fee or charge no fee in specific circumstances, such as if the adjusted bid exceeds the contracting officer’s target price or if the fee would exceed $10,000. 
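The fee arithmetic described above can be sketched as follows. The 3 percent cap and the $10,000 figure come from the report; the function names are ours, and the report describes the over-$10,000 waiver as discretionary, whereas this sketch treats it as a hard ceiling for illustration.

```python
FEE_RATE = 0.03        # FedBid caps its fee at 3 percent of the winning bid
FEE_CEILING = 10_000   # fees above this may be reduced or waived (discretionary)

def adjusted_bid(vendor_bid, fee_rate=FEE_RATE):
    """A vendor's bid with the auction fee added, rounded to cents;
    FedBid is described as ranking these adjusted bids."""
    return round(vendor_bid * (1 + fee_rate), 2)

def rank_bids(vendor_bids, fee_rate=FEE_RATE):
    """Return bids ordered lowest to highest by adjusted amount,
    the order in which the contracting officer receives results."""
    return sorted(vendor_bids, key=lambda b: adjusted_bid(b, fee_rate))

def fee_due(winning_bid, fee_rate=FEE_RATE, ceiling=FEE_CEILING):
    """Fee on the winning bid: at most fee_rate of the bid, here
    capped at the illustrative ceiling."""
    return round(min(winning_bid * fee_rate, ceiling), 2)
```

For example, a $100,000 winning bid carries a $3,000 fee, while a $500,000 bid would nominally incur $15,000 and thus exceed the ceiling.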
In fact, our analysis found that FedBid received no fees in 20 percent of reverse auctions conducted in fiscal year 2012 at the selected agencies. In those cases, the agencies paid the price of the winning vendor's bid with no FedBid fee added. However, in 6 percent of the auctions, agencies paid the maximum fee even when the final award price exceeded the agency's target price. See figure 9. In July 2013, GSA launched its own reverse auction tool to allow agencies to use reverse auctions with the GSA Schedule without using a separate contractor to conduct the auctions. GSA officials told us that they do not intend to charge a reverse auction fee for awards made to GSA Schedule holders; however, the usual 0.75 percent fee for using the GSA Schedule will apply to these awards. After the auction ends, a contracting officer can accept the auction results or start over with a new auction. For example, a contracting officer might choose to repost the solicitation to a wider pool of potential vendors in an attempt to garner additional participants. Alternately, a contracting officer can choose to make an award even if the bids exceeded the target price. The contracting officer must also establish the basis for award. For example, the contracting officer can make the award to the lowest bidder or make the award based on a cost/technical tradeoff process where it is in the best interest of the government to consider other than the lowest price. If a contracting officer is concerned about a vendor's ability to meet the requirements, he or she may select another vendor, even if that vendor submitted a higher bid. On the basis of our analysis of a random sample of fiscal year 2012 auctions, we estimate that 24 percent of all reverse auction contracts were not awarded to the lowest bidding vendor. Competition and savings—two of the key benefits of reverse auctions cited by the agencies we reviewed—are not always being maximized.
Both have been limited because not all reverse auctions involve interactive bidding. Agencies have awarded a significant number of contracts when there was only one offeror or a lack of interactive bidding among vendors. As a result, agencies paid a fee without realizing the key benefits they initially sought. It is also unclear whether savings due to reverse auctions are accurate because target prices may be set too low or too high. In some cases, agencies are paying two fees, but they generally lack the data to provide transparency into the fees they are paying. To some degree, these shortcomings result from confusion caused by a lack of comprehensive government-wide guidance. All five agencies we reviewed characterize a benefit of reverse auctions as driving prices lower by having vendors compete against each other. For example, DHS Customs and Border Protection's guidance notes that a benefit of reverse auctions is the increased competition on commonly purchased commodities. DLA guidance states that reverse auctions allow the government to procure products and services in a competitive and dynamic environment where vendors bid prices down until the end of the auction. In addition, according to information FedBid provides to the agencies, its service allows sellers to compete against each other in real time and in an interactive environment. The benefits of competition in acquiring products and services from the private sector are well established. However, contracts that are awarded using competitive procedures but for which only one offer is received (one-offer awards) have recently become an area of concern. We reported on this issue in 2010, and OFPP has noted that competitions yielding a response of only one offer deprive agencies of the ability to consider alternative solutions in a reasoned and structured manner.
DOD, in its September 2010 Better Buying Power initiative memorandum, referred to competitive procurements for which only one offer was received as "ineffective competition." Over a third of the fiscal year 2012 reverse auctions conducted by FedBid for the agencies in our review had no interactive bidding—where vendors engage in multiple rounds of bids against each other to drive prices lower. We found that 27 percent of the auctions involved only one vendor who may have submitted one or multiple bids, and another 8 percent had multiple vendors who only submitted one bid each. Agencies paid $3.9 million in fees for these auctions. The remaining 65 percent of auctions involved multiple vendors where at least one vendor submitted more than one bid. Figure 10 shows the percentage of FedBid's fiscal year 2012 auctions for the agencies in our review that had interactive bidding among multiple vendors, versus those that did not, and the fees the agencies paid to FedBid. In fiscal year 2012, the selected agencies conducted 3,617 auctions where only one vendor participated and submitted only one bid. Agencies in our review paid $1.7 million in fees for these types of auctions. In this situation, agencies may not be getting the best price. In prior work, we found that a successful bidder did not initially offer its best price, and that this bid would not have been its final offer had there been competing bids. A few vendors that we spoke to also noted that it is in the vendor's best interest to submit a high initial bid and wait for another vendor to offer a lower price before lowering their own price. In our review of 119 contract files for awards resulting from reverse auctions for the agencies in our review, 24 auctions (20 percent) only had one offeror and the contracting officers did not negotiate a lower price but accepted that vendor's bid. The agencies in our review also conducted 1,707 auctions in fiscal year 2012, where a single vendor submitted multiple bids.
This can occur when a vendor makes more than one attempt to submit a bid below an active target price to become the leading vendor. Given that FedBid does not disclose vendor identities, the target price, or bids, vendors do not know when they are bidding against themselves. While this could lead to lower prices, it does not meet agencies’ goals of increasing competition, and these prices could possibly be obtained through traditional acquisition procedures. However, agency officials stated that using reverse auctions reduced some of the time that would otherwise be spent on the acquisition. The agencies paid $1.1 million in fees when only one vendor participated in the auction but made more than one bid. We also found that the agencies in our review paid $1.1 million in fees in fiscal year 2012 for 1,663 auctions where multiple vendors submitted a single bid. In theory, a contracting officer could have obtained the same results by soliciting bids or offers from multiple vendors and avoided the reverse auction fee. In fiscal year 2012, the agencies in our review conducted 12,701 auctions where more than one vendor participated, and had multiple bids, with an average of six vendors and 15 bids total. In our review of selected contract files, we found evidence that contracting officers took proactive steps to increase the number of bidders in an effort to realize lower prices, such as by asking FedBid to hold another auction to include additional vendors. In one case, a DHS contracting officer used a reverse auction to place an order for office supplies under an existing contract vehicle but did not receive any bids. Consequently, the contracting officer conducted another auction to place the order under GSA’s Schedule program, which resulted in four vendors submitting a total of 40 bids. 
The contracting officer did not believe the auction resulted in the best price and invited all vendors to submit bids; this resulted in vendors submitting a total of 74 bids and yielded an even lower price. Agencies cite savings as one of the benefits of reverse auctions. Although the agencies in our review stated that they do not publicly report the savings, they use the information—provided by FedBid—to assess the potential costs and benefits of reverse auctions. Savings information could also be used to determine whether to increase use of reverse auctions and to make decisions about what types of products or services are appropriate for that technique. FedBid calculates the savings by determining the difference between the government's independent cost estimate (which becomes the auction target price) and the final award price. Agency officials stated that savings can be calculated using multiple methods. According to a DLA official, for its reverse auctions, DLA calculates savings in some situations by calculating the difference between the award price and the auction's first bid. Using FedBid's approach, savings from fiscal year 2012 reverse auctions for the Army, DHS, DOI, and VA totaled more than $98 million. See figure 11 for calculated savings based on the level of interactive bidding. However, it is also unclear whether savings due to reverse auctions are accurate. For example, the estimated savings may be too high since they include $24 million in savings from auctions where vendors did not bid against each other to drive prices lower. Additionally, the accuracy of the calculated savings depends on the validity of the agency's target price. Based on our contract file reviews, we found that most contracting officers relied on their market research or government independent cost estimate to establish the target price.
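The two savings methods described above—FedBid's (target price minus award price) and DLA's occasional alternative (first bid minus award price)—can be sketched as follows. The function names are illustrative.

```python
def fedbid_savings(target_price, award_price):
    """FedBid's reported savings: the government's independent cost
    estimate (the auction target price) minus the final award price.
    A negative value means the award came in above the target."""
    return target_price - award_price

def dla_savings(first_bid, award_price):
    """DLA's alternative in some situations: the auction's first bid
    minus the final award price."""
    return first_bid - award_price
```

The two methods can report different savings for the same auction, which is one reason agency-reported savings figures are difficult to compare.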
However, our analysis shows that the target price may have been set too low in some cases, because in 1,111 auctions that had interactive bidding among vendors in fiscal year 2012, the final award price was higher than the target price. For example, VA conducted a reverse auction to obtain medical equipment and 4,418 vendors were notified. For this reverse auction, two vendors submitted a total of eight bids, all of which were higher than the target price. A contracting official stated that since it was a best value procurement, the lowest bid was accepted even though the bid was above the target price. Four of the agencies we reviewed do not collect their own data, but rather rely on FedBid to identify their reverse auction activity. We found that these agencies do not track how much they pay in reverse auction fees. Without independently collecting or verifying this information, agencies are not able to independently assess the cost effectiveness of reverse auctions. In addition, we found that agencies sometimes pay two sets of fees when using an existing contract vehicle in conjunction with a reverse auction. When an agency limits a reverse auction to a group of vendors under a certain contract vehicle, the agency can pay one fee for the auction and a separate fee for the use of the contract vehicle. For the agencies in our review, 47 percent of acquisitions using reverse auctions in fiscal year 2012 were ordered under pre-existing contracts resulting in $6.5 million in reverse auction fees paid to FedBid. For example, for reverse auctions resulting in orders under Schedule contracts, the selected agencies in fiscal year 2012 paid $1.3 million to GSA and VA for the use of pre-existing contracts and another $2.8 million to FedBid in reverse auction fees. The degree to which agencies are able to maximize the benefits of reverse auctions is hindered by a lack of comprehensive government-wide regulations and guidance.
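As a rough illustration of the double-fee arithmetic described above: the rates below are the maximums cited in this report (FedBid's 3 percent cap and GSA's usual 0.75 percent Schedule fee); actual fees vary by contract, and FedBid's fee may be reduced or waived.

```python
def order_fees(award_price, auction_fee_rate=0.03, vehicle_fee_rate=0.0075):
    """Illustrative combined fees when a reverse auction places an
    order under an existing contract vehicle: the auction fee plus
    the vehicle's fee, both applied to the award price."""
    return round(award_price * (auction_fee_rate + vehicle_fee_rate), 2)
```

Under these assumed rates, a $100,000 order placed through a reverse auction under a Schedule contract could carry $3,750 in combined fees, compared with $3,000 for the auction fee alone.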
Standards for internal control in the federal government identify the need for documenting policies and procedures to ensure appropriate measures are taken to address risk. Accordingly, the federal government publishes uniform policies and procedures for federal acquisitions in the FAR, which provides guidance to federal agencies and which the public, including vendors, may read to better understand the federal acquisition process. However, as noted above, the FAR does not specifically address reverse auctions. Agencies have developed their own guidance, which generally encourages the use of reverse auctions for certain types of procurements and highlights the benefits of competition and savings from reverse auctions. But most agency guidance does not provide contracting officers with information on what to do in certain situations, for example, when only one vendor submits a bid. Only one agency out of five—VA—addresses what action should be taken when auctions fail to generate interactive bidding, but even then only requires that contracting officers document that occurrence in their contract files. VA has taken action to gain greater insight into its use of reverse auctions. In an effort to determine the effect of reverse auctions on the VA supply chain, in March 2012 the VA Senior Procurement Executive halted the use of reverse auctions until an assessment of their effect could be completed. Subsequently, in April 2012, VA issued guidance requiring contracting activities to develop internal controls and standard operating procedures to establish independent oversight of reverse auction procurements, such as determining savings and fees paid. In conducting our file reviews, we found examples of cases where the contract files now include this information. DLA, which manages its own reverse auction activity, independently collects summary information from its buying commands on use and savings obtained.
We found agency officials and vendors were uncertain about how reverse auction fees are paid. For example, an Army contracting official incorrectly believed the vendors are charged the fee by FedBid, and another Army official stated that vendors may be confused about fees charged by FedBid, with some vendors believing they are paying the reverse auction fee, even though it is the procuring agency that does so. In another case, an Army procurement official told us he believed that if an auction does not generate any savings, FedBid would not charge a fee. However, we found that in fiscal year 2012, the selected agencies paid fees in 33 percent of auctions that did not generate any savings. Industry representatives also told us that their members were uncertain who pays the auction fee (it is paid by the ordering agency to the vendor, who is later invoiced by FedBid). Confusion also exists about how reverse auctions are managed. Several vendors stated that FedBid's interface creates an additional layer between the vendor and the end user that can inhibit their efforts to clarify details in the solicitation—such as the type of material an agency requires—that are important in setting a bid price. While questions can be submitted through FedBid, some vendors told us their questions concerning the requirements are sometimes ignored. In fact, although vendor questions are posted on the FedBid website, it is the sole responsibility of the contracting officer—not FedBid—to respond to them. Some vendors also believed that FedBid's system only invited GSA Schedule holders currently registered with FedBid to submit bids; according to FedBid officials, however, all GSA Schedule holders are automatically invited to bid, but must register with FedBid to submit bids. An agency official stated that vendors had concerns over security and privacy.
Another agency official added that some vendors did not want to register with another contractor, which can reduce the level of competition and may not result in lower prices. Additionally, several vendors expressed concern that the reverse auction system simply identifies the lowest bidder, who is then awarded the contract. In fact, it is the contracting officer's responsibility to award the contract and, as discussed above, we estimated that about a quarter of the auctions did not result in awards to the lowest bidder. Further, some contracting officers also held incorrect beliefs about the process. Government acquisition officials and vendors told us that government-wide guidance on the use of reverse auctions would be useful in clearing up some of the confusion about the role that reverse auctions play in the acquisition process. OFPP is responsible for providing overall direction for government-wide procurement policies, regulations, and procedures, and for promoting economy, efficiency, and effectiveness in acquisition processes. OFPP officials told us that in 2012 they requested that agencies submit any existing guidance they had, but they have yet to determine their next step. The lack of government-wide guidance addressing the use of reverse auctions and the confusion within the vendor community about the process may limit the potential benefits from the use of reverse auctions. Recent trends clearly indicate that agencies' use of reverse auctions is on the rise. Because the FAR is silent on reverse auctions, agencies are left to decide when and how to use them. While most reverse auctions have been used to acquire commercial items of relatively small dollar value, this is by no means the case across the board. And some agencies are now considering expanding their use of reverse auctions to buy more services and for complex auctions.
Further, some agencies are directly encouraging use of reverse auctions for certain procurements without full information about whether they are, in fact, gaining the intended benefits from the auctions in terms of competition and savings. Confusion about the functions and responsibilities of the reverse auction contractor and the government, and a lack of transparency regarding the fees the contractor is charging, suggest that guidance is needed. We found confusion about who is making final award determinations and the basis for those determinations. Further, some vendors expressed concern that they were not able to contact the contracting officer directly with questions pertinent to a given requirement. And while we found that the contractor did not charge a fee in about 20 percent of fiscal year 2012 auctions, it is troubling that agencies are not aware of the fees they are paying—including paying more than one fee under certain circumstances. More transparency about the reverse auction fees could help contracting officers determine whether use of an auction is the best tool for a given procurement. Issues not addressed by agency guidance that could be included in government-wide guidance are whether reverse auctions should be limited to commercial items; used only for simple services acquisitions; and used only for items of a relatively low dollar value ($150,000 or less).
In addition, factors that could be considered in government-wide guidance to help ensure that the intended benefits of reverse auctions are maximized include: steps, if any, contracting officers should take when only one bid is received; factors contracting officers should consider when deciding whether to use a reverse auction to place orders under certain contract vehicles, such as the GSA Schedule; and whether contracting officers should be urged to examine whether the lowest price, plus any applicable fee(s), actually results in a savings below the target price when deciding to follow through with an award. To help mitigate confusion about the use of reverse auctions in federal acquisitions, we recommend that the Director of the Office of Management and Budget take the following two actions: (1) Take steps to amend the FAR to address agencies' use of reverse auctions. (2) Issue guidance advising agencies to collect and analyze data on the level of interactive bidding and, where applicable, fees paid, to determine the cost effectiveness of using reverse auctions, and disseminate best practices from agencies on their use of reverse auctions related to maximizing competition and savings. We provided a draft of this report to OMB, DHS, DOD, Interior, VA, and GSA. We received e-mail and oral comments from OMB. We received minor technical comments from DOD, DHS, and Interior, which we incorporated as appropriate. In technical comments, DOI officials stated that the department has benefited from reverse auctions and generally supports a FAR revision and the need for government guidance. We received e-mails from VA and GSA noting that they had no comments. Senior OMB staff stated that they generally agreed with our recommendations. While they stated that many of the issues we identified may be more suited to management guidance than regulatory coverage, they agreed that FAR coverage should be considered.
They indicated that, before taking concrete steps to amend the FAR, they would discuss our findings and conclusions with the FAR and Chief Acquisition Officers Councils. We are sending copies of this report to interested congressional committees; the Director of the Office of Management and Budget; the Secretaries of Defense, Homeland Security, the Interior, and Veterans Affairs; and the Administrator of the General Services Administration. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-4841 or MackinM@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to determine (1) what agencies are buying through reverse auctions and trends in their use; (2) how agencies are conducting reverse auctions; and (3) the extent to which the potential benefits of reverse auctions are being maximized. To determine what agencies are buying through reverse auctions and trends in their use, we identified the agencies that conducted the greatest number of reverse auctions in fiscal year 2012 by reviewing contract award information in Federal Business Opportunities (FedBizOpps.gov). The award information identifies when reverse auctions were used; however, not all contract award actions are required to be announced in FedBizOpps. Over 99 percent of the reverse auctions for fiscal year 2012 listed in FedBizOpps showed that the agencies used the same contractor, FedBid, Inc. (FedBid), to conduct their reverse auctions. Because the federal agencies did not maintain the level of detailed information needed for our review, we obtained reverse auction data from FedBid. 
We used this data to: (1) confirm that the Departments of the Army (Army), Homeland Security (DHS), the Interior (DOI), and Veterans Affairs (VA) were primary users of reverse auctions; (2) determine the types of products and services acquired by these agencies; (3) compute the fees charged by FedBid for its services; and (4) analyze the savings. We used the data for Army, DHS, DOI, and VA to determine how agencies used reverse auctions in their acquisitions; these agencies accounted for approximately 69 percent of government-wide reverse auction activity based on FedBid data. Relying on the FedBizOpps data, we also selected the Defense Logistics Agency (DLA) for examination because it had the greatest number of reverse auctions that did not use FedBid. Together, these five agencies represented approximately 70 percent of government-wide reverse auction activity during fiscal year 2012 based on FedBid and FedBizOpps data. We were not able to perform detailed analysis of DLA data for fiscal year 2012, because the agency collected only summary level information. Agency officials told us that providing the data for each auction would require reviewing the contract file to determine whether a reverse auction had been used. We determined that it would not be a good use of DLA's resources to conduct that review. However, we interviewed DLA officials and obtained their summary level information to assess the agency's use of reverse auctions. To determine the reliability of the data obtained, we selected a random sample from all contract files for acquisitions where reverse auctions were used in fiscal year 2012 at the four agencies, compared the data obtained by the service provider, FedBid, with the information contained in the contract files, and determined that the data were sufficiently reliable for our purposes. Table 1 shows the number of contract files reviewed at each of the four selected agencies.
We relied on FedBid to determine if a vendor used automatic-rebid to lower its original bid. According to FedBid officials, automatic-rebids are not counted as new bids but are considered part of the original bids. Information on the use of automatic-rebids was not included in the contract files; therefore, we were not able to independently verify it. We randomly selected 119 reverse auctions from the set of all auctions conducted in fiscal year 2012 by the Army, DHS, DOI, and VA. The results of our sample are generalizable to the entire population of 19,688 reverse auctions conducted at the four agencies during fiscal year 2012. All percentage estimates from the file review have margins of error at the 95 percent confidence level of plus or minus 7 percentage points or less, unless otherwise noted. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. The sample included a mix of products and services and of single- and multiple-vendor auctions, as shown in tables 2 and 3. To determine how agencies are conducting reverse auctions and to understand how contracting officers conducted market research, determined government estimates, and made source selections, we used the same random sample of contract files for acquisitions that used reverse auctions as discussed above. We obtained documentation from the contract files concerning the independent government cost estimate and market research information to determine how the auction's target price was established.
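The sampling margin of error described above follows from the standard formula for a proportion estimated from a simple random sample, with a finite population correction. The following is a general sketch of that computation, not GAO's exact methodology:

```python
import math

def margin_of_error(p, n, N, z=1.96):
    """95 percent margin of error for a proportion p estimated from a sample
    of n drawn without replacement from a population of N."""
    se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
    fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
    return z * se * fpc

# Sample of 119 auctions from a population of 19,688; p = 0.5 is the worst case
print(round(margin_of_error(0.5, 119, 19_688), 3))
```

The margin is largest when the estimated proportion is near 0.5 and shrinks for estimates nearer 0 or 1.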
We also requested information on whether small business participants were considered, why decisions were made to use reverse auctions, the technical and price evaluation, and the source selection decisions. We also determined if the contract was awarded to the lowest bidding vendor and if the file contained documentation showing the estimated savings and fees for using reverse auctions. We did not systematically assess the relationship between auction outcomes and the selected acquisition strategy or specific design features. We also met with officials from the reverse auction service provider, FedBid, to discuss their roles and responsibilities during the reverse auction process. To determine to what extent agencies are maximizing the potential benefits of reverse auctions, we analyzed the data obtained from FedBid, identifying the government's target price for the product or service, the number of vendors and bids for each auction, the lowest bid submitted, the savings estimated from the use of reverse auctions, and the type of contract used to acquire the products and services. We also computed the fees charged by FedBid. We also reviewed and analyzed, if available, government-wide and the five agencies' regulations, policies, and guidance assessing the use of reverse auctions. We met with government acquisition officials, including officials from the Office of Management and Budget's Office of Federal Procurement Policy, as well as contracting officers and small business and competition officials at the selected agencies. We also spoke with members of the American Small Business Chamber of Commerce, which represents small government contractors, and with officials from the Coalition for Government Procurement, which represents both small and large federal contractors (both organizations are located in Washington, D.C.), to obtain their members' positions on the federal government's use of reverse auctions.
We conducted this performance audit from November 2012 to December 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Reverse Auctions Used to Place Orders under Existing Contracts for Selected Agencies, Fiscal Year 2012

Characteristics of the contract vehicles:
- Provides a flexible procurement strategy through which Army users may procure commercial off-the-shelf information technology hardware, software, and services via an e-commerce based process. Does not charge a fee.
- Provides DHS users with the ability to order commercially available information technology commodities, solutions, and value-added reseller services through multiple award indefinite delivery/indefinite quantity contracts with vendors in certain small business socioeconomic categories. Does not charge a fee.
- Provides DHS users the ability to purchase tactical communications commodity products, infrastructure, and services via a multivendor indefinite delivery/indefinite quantity contract vehicle. Does not charge a fee.
- Provide commercial products and services at varying prices and a streamlined process to obtain products and services at prices associated with volume buying. Charge users a 0.75 and a 0.5 percent fee, respectively.
- Provides all federal agencies information technology products via a government-wide acquisition contract that includes 38 competed prime contract holders. Charges users a 0.45 percent fee.
- Provides users the ability to purchase hardware, software, networking and telecommunications equipment, scientific research stations, warranties, and maintenance service. Charges a 0.50 percent fee with a $10,000 cap per delivery order.
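The percentage fees with per-order caps described above can be computed as in this sketch (the order values are hypothetical illustrations):

```python
def vehicle_fee(order_value, rate, cap=None):
    """Contract-vehicle fee: a percentage of the order value, optionally
    capped at a fixed dollar amount per delivery order."""
    fee = order_value * rate
    return min(fee, cap) if cap is not None else fee

# 0.50 percent fee with a $10,000 cap per delivery order
print(round(vehicle_fee(1_000_000, 0.005, cap=10_000), 2))  # 5000.0
print(round(vehicle_fee(5_000_000, 0.005, cap=10_000), 2))  # 10000 (cap applies)
```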
The VA operates its portion of the Schedule program under a delegation of authority from GSA. In addition to the contact named above, Katherine Trimble, Assistant Director; Russ Reiter; Carl Barden; Virginia (Jenny) Chanley; Dayna Foster; Kristine Hassinger; Georgeann Higgins; Julia Kennon; Kenneth Patton; Roxanna Sun; Bob Swierczek; and Jocelyn Yin made key contributions to this report.

Reverse auctions are one tool used by federal agencies to increase competition and reduce the cost of certain items. Reverse auctions differ from traditional auctions in that sellers compete against one another to provide the lowest price or highest-value offer to a buyer. GAO was asked to review issues related to agencies' use of reverse auctions. This report examines (1) what agencies are buying through reverse auctions and trends in their use; (2) how agencies are conducting reverse auctions; and (3) the extent to which the potential benefits of reverse auctions are being maximized. GAO identified five agencies conducting about 70 percent of government reverse auctions. GAO analyzed available data and guidance and interviewed agency officials and contractors. GAO also reviewed a random sample of contract files to understand agency procedures; the results of this analysis are generalizable to all reverse auctions for four of the five agencies in our review. The Departments of the Army, Homeland Security, the Interior, and Veterans Affairs used reverse auctions to acquire predominantly commercial items and services--primarily for information technology products and medical equipment and supplies--although the mix of products and services varied among agencies. Most--but not all--of the auctions resulted in contracts with relatively small dollar value awards--typically $150,000 or less--and a high rate of awards to small businesses.
The four agencies steadily increased their use of reverse auctions from fiscal years 2008 through 2012, with about $828 million in contract awards in 2012 alone. GAO was not able to analyze data from a fifth agency, the Defense Logistics Agency (DLA), because it collected only summary level information during fiscal year 2012. DLA guidance states that the reverse auction pricing tool should be used for all competitive purchases over $150,000. Four agencies used the same commercial service provider to conduct their reverse auctions and paid a variable fee for this service, which was no more than 3 percent of the winning bid amount. DLA conducts its own auctions through a purchased license. Regardless of the method used, according to agency officials, contracting officers are still responsible for following established contracting procedures when using reverse auctions. GAO found that the potential benefits of reverse auctions--competition and savings--had not been maximized by the agencies. GAO found that over one-third of fiscal year 2012 reverse auctions had no interactive bidding, where vendors bid against each other to drive prices lower. In addition, almost half of the reverse auctions were used to obtain items from pre-existing contracts, which in some cases resulted in agencies paying two fees--one to use the contract and one to use the reverse auction contractor's services. There is a lack of comprehensive governmentwide guidance, and the Federal Acquisition Regulation (FAR), which is the primary document for publishing uniform policies and procedures related to federal acquisitions, does not specifically address reverse auctions. As a result, confusion exists about their use and agencies may be limited in their ability to maximize the potential benefits of reverse auctions.
GAO recommends that the Director of the Office of Management and Budget (OMB) take steps to amend the FAR to address agencies' use of reverse auctions and issue government-wide guidance to maximize competition and savings when using reverse auctions. OMB generally agreed with GAO's recommendations, noting that FAR coverage should be considered and that, before taking concrete steps to amend the FAR, they would discuss GAO's findings and conclusions with the FAR and Chief Acquisition Officers Councils. |
The general purpose of facility security is to protect people, property, and the facility itself by deterring, detecting, and responding to potentially criminal and dangerous acts and people. Threats to facility security may include theft, unauthorized access, natural disasters, and terrorism, among others. An organization’s need to balance security with open and public access can make facility security more challenging, including at facilities such as medical centers, commercial office buildings, and gaming facilities. Organizations’ efforts to provide facility security are more extensive than simply assigning an individual to “stand guard.” Key functions of facility security generally include facility access, patrol and law enforcement, and security management (see table 1). As part of facility security management, organizations conduct risk assessments—or facility security assessments—that include identifying threats, vulnerabilities, and consequences to determine overall risk and what means, or countermeasures, are best suited to secure the facility. Organizations use a variety of countermeasures to provide facility security, including the use of security equipment, building-design specifications, and security personnel. Nonmilitary federal facilities are categorized into five facility security risk levels that are based on five factors: mission criticality, symbolism, facility population, facility size, and threat to tenant agencies. Private companies make individual determinations on how they want to mitigate facility security risks and must ensure their security workforces meet the specific needs of their industry. For example, security guards in the hospital industry protect employees, patients, visitors, and hospital equipment, and also may provide specialized assistance to ensure the safety of people with particular medical needs. 
To carry out facility security functions, organizations may rely on in-house security personnel; for federal agencies, those personnel are classified into several specific general schedule (GS) job series. Federal guidance provides broad parameters for the duties associated with each job position within its assigned OPM job series, but each agency is able to further refine its specific position descriptions within those parameters. The following are the five job series used for the security personnel at the agencies we reviewed and a summary of the key security duties associated with each job series according to OPM guidance:

- GS-0085 Security Guard—generally performs protective services work involving guarding, protecting, and controlling access to federal facilities;
- GS-0083 Police—generally performs law enforcement work involving protecting the peace, investigating crimes, and arresting violators;
- GS-0080 Security Administration—generally performs or manages facility security work involving developing risk assessments, implementing security procedures, and overseeing security staff;
- GS-1811 Criminal Investigation—generally performs or supervises work involving planning and conducting investigations related to violations of federal laws; and
- GS-1802 Compliance Inspection—generally performs work involving conducting inspections to ensure compliance with federal laws (e.g., inspection of airline passengers and baggage).

In addition to in-house facility security personnel, organizations may also use contract security personnel to secure their facilities. Organizations generally contract for a certain number of hours of security service to be fulfilled by contracting companies, rather than specifying the number of contract security personnel. Contracting companies recruit, hire, train, and pay their own security staff and typically charge an organization an hourly rate for their services. Titles for these contract security personnel may vary by organization.
For example, FPS calls them protective security officers, while the Army more simply calls them contract security guards. In the federal government, DHS is designated under the Homeland Security Act of 2002 as the primary agency authorized to enforce federal laws and regulations aimed at protecting federal facilities and persons on the property. Within DHS, FPS is the security provider for GSA-owned or -controlled facilities. FPS's federal workforce consists of about 675 law enforcement security officers (LESO), also known as inspectors, who are responsible for law enforcement and security duties, including patrolling building perimeters, responding to incidents, completing risk assessments for buildings, recommending security countermeasures, and overseeing the contract security workforce. FPS also relies on about 14,000 contract security guards to control access, operate security equipment, observe the environment for suspicious activity, and respond to emergency situations involving the safety and security of the facility. We previously identified several vulnerabilities and weaknesses in the oversight of both FPS's federal and contract workforces, and FPS is currently undertaking efforts to address these weaknesses and improve management of its security workforce. In addition to FPS, other federal agencies are responsible for securing and protecting their own facilities. Table 2 shows the facilities protected by the other agencies included in our review. Eight of the nine federal agencies selected for our review currently use a combination of both in-house and contract security personnel to secure their facilities, and the distribution of in-house and contract staff varies significantly (see fig. 1). VHA almost exclusively uses federal employees to secure its hospitals. Three of the selected agencies have statutory requirements that determine their use of federal and contract staff: the Army, Air Force, and TSA.
DOD is generally prohibited from entering into a contract for the performance of firefighting or security guard functions at any military installation or facility. However, Congress authorized DOD to temporarily use contract security staff in fiscal year 2003 to address increased security needs at its facilities when numerous DOD employees were deployed overseas, but DOD is now required to discontinue the temporary use of contract security guards at the end of fiscal year 2012. TSA's composition of mostly federal security employees, or airport passenger screeners, was dictated by the Aviation and Transportation Security Act of 2001, which created the agency. Others among our selected agencies generally have the discretion to determine the extent to which they use in-house staff or contract the facility security functions out to private contractors. For instance, PFPA primarily uses federal police officers to secure the Pentagon—a facility with a high risk for terrorist attack—and contract security guards to secure its lower-risk facilities. Federal agencies reported using a variety of in-house security positions (see table 3); however, one or two key positions may account for the majority of an agency's in-house security staff. For example, while the Smithsonian reported that it uses four different types of federal security positions, almost 90 percent of its security employees are federal security guards. Agency officials reported that their in-house security staffs collectively perform a broader range of facility security functions than their contract staff. In-house security administration staff, police officers, and security guards, among others, perform a wide range of security functions. The most common security functions that in-house staff performed are law enforcement, post inspections, and risk assessments (see fig. 2).
In contrast, seven of the eight agencies currently using contract security personnel reported their contract staff generally perform routine facility access control functions, including visitor screening and control center operations. FPS reported that its contract security guards performed a wider range of tasks, including some patrol and response duties. Officials from other agencies reported using contract security guards for what they consider to be lower-risk security posts, such as those providing visitor assistance. According to Air Force officials, their decisions about where to use contract staff are not predicated on facility or post risk levels, but on where staff are needed to replace deployed military personnel. Depending on the functions that are performed, each security position, whether in-house or contracted, generally has different training requirements that each agency specifies based on its needs. Training for federal and military police officers is generally more extensive than that required for federal and military security guards—two commonly used in-house security positions. While federal police officers receive training at a police academy, a federal law enforcement training facility, or a DOD-agency training facility, training for federal security guards is currently dictated by each agency's individual needs. For example, Air Force officials told us that Air Force police officers receive 5 weeks of training and can perform all the job functions of security guards, in addition to broader law enforcement functions, while Air Force security guards receive 2 weeks of training to perform a more limited set of functions focused on facility access. Currently, no federal governmentwide training standards exist for contract security guards to work in federal facilities. Consequently, training requirements for contract security staff vary depending on the agency, as well as on any applicable state requirements.
Agencies specify in their contract statements of work the functions that contract staff are expected to perform, as well as the qualifications required for the staff. For instance, in addition to basic security training provided by the contractor, FPS contract security guards are required to have 16 hours of FPS-provided training, including certification on X-ray and magnetometer equipment, while the Air Force's contract security guards receive 40 hours of government-provided training specific to the installation to which they are assigned. Selected agency officials told us that their decisions about staffing facility security functions—whether choosing between in-house and contract staff or selecting the most appropriate type of in-house staff—are driven by multiple factors, such as their individual facility security requirements and costs. Federal facilities nationwide differ in their facility type, size, location, occupant mission, and risk level, among other factors. As we have previously reported, and security officials corroborated, there is no widely accepted formula for determining the size and makeup of a security workforce, and no standard staffing model can be applied because risk levels and specific building needs differ. While some federal agencies may use in-house staff to secure their high-risk facilities, other agencies, such as JPS or USMS, may use contract security guards to protect their high-risk facilities. Over the years, we have advocated the use of a comprehensive risk management approach that links threats and vulnerabilities to resource requirements and allocations to address potential security threats. According to security officials from selected agencies, staffing for specific security positions is based on factors such as the risk level and specific needs of the facilities being protected. Staffing needs, in turn, dictate the qualifications that agencies set for either their in-house or contract staff.
For instance, FPS requires a high-school diploma, among other things, for its contract security guards; however, it does not require a law enforcement background or previous law enforcement experience. In contrast, PFPA requires some of its contract security guards to have, among other things, a secret-level security clearance, because of their potential access to sensitive materials. Examples of factors considered by agency security officials in reaching their security staffing decisions include the following: Smithsonian reported primarily using federal security guards to control access, operate security equipment, and patrol the perimeter of its facilities where the security risks are higher. Contract security guards are used to assist and advise visitors within the interior of museums, where security risks are lower because visitors are screened when granted access to the building. JPS security officials stated that the high-profile nature of the law enforcement and justice mission of DOJ draws increased attention to its facilities and poses increased or additional security threats, such as protests and other potential harm. It uses armed contract security guards, all of whom have prior law enforcement experience and are highly trained and deputized as Special Deputy U.S. Marshals. VHA facilities face security risks due to their open campuses at diverse locations. VA officials explained they rely on locally conducted risk assessments to determine their facilities’ security response. At some of its medical facilities located in rural locations, ready access to local law enforcement services may be limited; at several of its large urban VHA facilities, local law enforcement agencies generally do not provide basic police services on federal facilities. As a result, VA primarily uses uniformed federal police officers to provide facility security and law enforcement functions. 
Security officials also cited cost as another factor considered in staffing their workforces. We previously found that security officials from federal agencies cited budget considerations in making law enforcement and facility security staffing decisions. The base salary costs of government security positions vary depending on the experience and qualifications of the individual employee. Among our selected federal agencies, in-house security positions vary in base pay from an average of about $37,000 for security guards to nearly $90,000 for criminal investigators (see table 4). We found that an agency may hire entry-level employees into a GS-3 or GS-4 position, while experienced employees ranged up to the GS-15 grade level, particularly for security positions requiring higher levels of responsibility or qualifications. With respect to contract security staff, the specific functions to be performed and the hourly rate associated with each position are established within a contract statement of work. One federal official told us that using a combined federal and contract workforce, distributed based on functional areas and risks, could make sense from a cost perspective. For example, a cost-effective model may be to have a high-level federal security or law enforcement officer present at facilities to oversee contract security guards assigned to perform certain limited facility access control functions. Representatives of the nine federal agencies and 10 private sector organizations with whom we spoke identified several issues that present either benefits or challenges for using contract and in-house security staff, as identified in table 5. In our analysis of the benefits and challenges identified for both in-house and contract security staff, we found that both workforce staffing approaches offer advantages and disadvantages. As indicated previously, eight of the nine federal agencies in our review use both in-house and contract security staff.
If staffing is well managed, agencies may achieve the benefits of either staffing approach. Cost. Private sector and federal agency representatives identified the potential for cost savings as a benefit of using contract staff over in-house security staff. Such potential cost savings were based on several factors identified by representatives: (1) an in-house staffing model requires organizations to have more employees on board to staff posts than may be required under a contract model in which security is procured hourly; (2) a contract workforce may offer savings in employee compensation costs, including health and retirement benefits; and (3) contract security costs are fixed within the contract, which may reduce the risk of budget fluctuations. First, contract security staff are typically procured based on the hours of service provided, not on the number of staff the contractor uses to provide those services. Several federal officials reported that agencies using in-house security workforces must have more security staff available than the equivalent hours required to fill the same security posts through a contract workforce, to cover time when staff are away from their posts, such as for training or leave. For example, and as discussed later, Smithsonian officials reported that the Smithsonian uses contract security guards in lower-risk areas of its facilities, which has enabled it to staff five posts with contract security guards for the same cost as three posts staffed with federal security guards. In addition, the use of an in-house security workforce increases the number of FTEs an agency must recruit, train, schedule, and manage, and adds to the in-house administrative responsibilities and associated costs that could otherwise be handled by a contractor.
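The post-hours arithmetic behind the first factor can be illustrated with a small sketch. All figures below (a post staffed 24/7, 2,080 paid hours per FTE, 280 hours of annual leave and training) are hypothetical assumptions for illustration, not numbers reported by the agencies we reviewed.

```python
# Illustrative sketch: in-house FTEs needed to cover 24/7 security posts
# versus the post hours an agency would simply procure under a contract.
# All figures are hypothetical assumptions, not agency data.

HOURS_PER_WEEK = 168               # one post staffed 24 hours a day, 7 days a week
ANNUAL_POST_HOURS = HOURS_PER_WEEK * 52

PAID_HOURS_PER_FTE = 2080          # standard work year
ABSENCE_HOURS_PER_FTE = 280        # assumed annual leave, sick leave, and training
AVAILABLE_HOURS_PER_FTE = PAID_HOURS_PER_FTE - ABSENCE_HOURS_PER_FTE

def fte_needed(posts):
    """FTEs required to keep `posts` round-the-clock posts filled in-house."""
    return posts * ANNUAL_POST_HOURS / AVAILABLE_HOURS_PER_FTE

def contract_hours_needed(posts):
    """Hours procured to fill the same posts under an hourly contract."""
    return posts * ANNUAL_POST_HOURS

print(round(fte_needed(3), 1))     # prints 14.6 -- more than the 12.6 FTEs
                                   # of raw paid hours the posts represent
print(contract_hours_needed(3))    # prints 26208
```

Under these assumptions, three 24/7 posts require about 14.6 in-house FTEs, while a contractor is simply paid for the 26,208 post hours delivered, however it chooses to staff them.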
However, Army officials reported that an Army analysis for fiscal year 2009 showed that while contract security guards would have offered savings over in-house security guards in the first 2 years of an in-sourcing decision, in-house security guards would be more cost effective over time as start-up costs for training, equipment, and uniforms are reduced. The officials noted that the Army had sufficient administrative capacity to absorb the increased workload without additional administrative staff. Second, federal agency and private sector representatives told us that a contract security workforce offers savings in employee compensation costs, including health and retirement benefits. With a contract security workforce, the contractor is responsible for providing health or retirement benefits to its workforce, rather than the organization procuring the service. Several federal and private sector representatives reported that the benefits offered by contractors may be of lesser value than those offered in the federal sector, where employee benefits represent a significant portion of an employee's compensation. OPM reported that for fiscal year 2010, the cost factor for federal employee health benefits was about $5,900 per enrolled employee. Retirement benefits for employees covered under the Federal Employees Retirement System (FERS) are about 14 percent of a regular civilian employee's salary and as much as 30 percent of a federal law enforcement officer's salary. An executive from one private sector hospital that had recently transitioned to a contract security workforce estimated that the hospital saved about 36 percent annually by using a contract security workforce rather than an in-house one, with much of this savings coming from no longer having to pay for health, retirement, and other benefits. In addition, several representatives also reported that contract security staff are often paid less than in-house security staff.
According to May 2009 data from the Department of Labor, Bureau of Labor Statistics (BLS), the national average annual wage for a contract security guard was $24,450—about 30 percent less than the national average annual wage of $36,410 paid to security guards employed by the federal executive branch in that year. However, federal and private sector representatives also noted that offering lower wages and benefits to security personnel could present challenges in assembling a qualified security workforce, which could pose security risks. As such, several representatives noted that, in using a contract security workforce, it is important to establish minimum wage and training requirements within the contract. A third benefit of using a contract security workforce is the ability to predict and manage security costs, since the costs of the services provided are fixed by the contract. For example, in using an in-house security workforce, increasing security coverage or covering for workforce absences could require the use of overtime hours, which may be costly. Five of the federal agencies in our review reported that they budgeted overtime costs for facility security staff for fiscal year 2010, with one agency reporting that it budgeted about $1,600 per facility security staff member in that year. Overtime costs for staff absences may not be applicable with a contract security workforce because contractors are responsible for staffing each post under the terms of the contract. An executive from a private sector hospital that uses a contract security workforce reported that the hospital knows its security costs for the life of the contract, including costs defined in the contract for procuring additional security guard hours, if needed. Given the significant fiscal challenges currently facing the federal government, the reported cost savings offered by a contract security staff may be of particular interest to federal agencies.
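The gap between the two BLS wage figures cited above can be checked directly; this is a verification of the report's own numbers, not additional data.

```python
# Verify the wage gap implied by the May 2009 BLS figures cited above.
contract_guard_wage = 24_450   # national average annual wage, contract security guard
federal_guard_wage = 36_410    # national average annual wage, federal executive branch

gap = federal_guard_wage - contract_guard_wage
pct_less = 100 * gap / federal_guard_wage

print(f"${gap:,} lower, {pct_less:.0f}% below the federal average")
# prints: $11,960 lower, 33% below the federal average
```

The figures imply a gap of just under a third, consistent with the report's "about 30 percent" characterization.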
However, as we have previously reported, in the federal procurement system today, there is common recognition that a cost-only focus does not necessarily deliver the best quality or performance for the government or the taxpayers. Thus, while cost is always a factor, and often an important one, it is not the only factor that needs to be considered. Personnel flexibility. Representatives also reported personnel flexibility as a benefit of using contract security staff, including the flexibility to adjust and deploy security staff levels to meet immediate needs. According to FPS officials, its security contracts include a requirement that the contractor maintain a reserve force with a recommended capacity of at least 10 percent to provide additional security guard hours as needed. For example, FPS provides contract security guards to the Federal Emergency Management Agency to support its emergency-response efforts. FPS also provided additional security guard service hours to the Internal Revenue Service in response to an attack on an agency facility in Austin, Texas, in 2010. FPS contractors may employ part-time personnel so they have sufficient numbers to draw upon in the event of a temporary surge in security guard needs, according to FPS officials. In the private sector, executives representing gaming and theme-park industries reported that, while their organizations primarily rely upon an in-house security staff for day-to-day security, both industries call upon contractors to surge their workforce size to address security risks for New Year’s Eve celebrations or other events that attract large crowds, such as concerts. Using a contract security workforce may also reduce some in-house human capital administrative duties, such as recruiting security staff and addressing performance issues. 
Several federal agency officials reported that the use of an in-house security workforce presents personnel responsibility challenges, such as increased administrative functions for recruiting and hiring new staff, managing annual or sick leave, planning work shifts, and other duties. We have previously reported that the federal hiring process can be lengthy and complex and is often an impediment to the agencies, managers, and applicants it is designed to serve. This governmentwide hiring challenge also applies to the hiring of in-house security staff. For example, officials with one federal agency reported that its personnel center was taking from 99 to 120 days to recruit and hire new security staff. With a contract workforce, recruitment, hiring, and other administrative functions are the responsibility of the contractor, and the contractor is obligated to provide the hours of service contracted for, regardless of the challenges it might face in doing so. Several federal agency and private sector representatives also reported that contract security staff offer greater flexibility to quickly address poor security guard performance than in-house staff. Although representatives we interviewed did not cite specific poor performance issues among in-house staff, several reported that poor-performing contract staff can be quickly removed from a client's site, which is not generally the case for in-house staff. It is generally more complex and time-consuming to address poor-performing in-house staff, and the process for federal employees may include performance reviews and appeals. While using contract staff can reduce personnel responsibilities in some areas, we have previously reported that it is important for federal agencies to have systems in place to oversee and manage the performance of contract and in-house security staff.
In prior work, we have noted that it is critical that agencies implement performance management systems that help their security staff maximize their full potential, while also providing agencies with the necessary information to reward top performers and deal with poor performers, among other things. We have also noted that it is important to monitor contractor performance to ensure that the terms of the contract are met. Contractor performance evaluations may include daily oversight activities, such as post inspections, or annual reviews to ensure that a contractor is meeting all training, certification, and suitability requirements. Private sector executives who we interviewed told us that the performance of contract and in-house security guards can be monitored through various means, including customer service surveys, officer performance scenario tests and observations, security guard attendance, and other data. We previously reported that federal agencies can develop effective performance management systems by implementing a set of key practices that apply to agencies’ management of in-house as well as contract security workforces. Implementing performance management practices requires effort across an organization and is a critical ingredient to ensure the performance of either an in-house or contract workforce model. Staff selection. Representatives from both federal agencies and private sector organizations reported that in-house security staff offer increased control over security staff selection—an important benefit to ensure a qualified security workforce. Representatives from several organizations favored selecting their own staff when they considered the facility or post high risk or when the impact from a security breach could pose a high risk of loss to the organization. In using a contract security workforce, individual staff selection decisions are generally made by the contractor and not by the organization in which the staff are placed. 
Although security staff qualifications may be defined in the contract, several officials reported that reduced control over security staff selection can result in a less-qualified workforce. For example, PFPA officials reported that by using an in-house security workforce, it can control the selection process to ensure the highest caliber officers are hired to protect the Pentagon, a high-risk facility for terrorist attack. In the private sector, executives representing two large gaming corporations reported that their industry primarily uses in-house security staff rather than contract staff to help ensure that large amounts of cash circulating on the gaming floor are secure from theft. Casinos conduct background investigations on all employees, and executives reported that having control of the checks, rather than relying on a contractor to vet officers, ensures their thoroughness before officers are placed in sensitive security positions. Similarly, private sector executives reported concerns with ensuring that thorough security guard background investigations were conducted and state certifications were kept up-to-date by contractors. Staff development. Several private sector and federal agency representatives reported that having in-house security staff allows for greater control over the training and development that security guards receive to tailor staff skills to meet organizational needs. Although specialized training can be costly and time consuming, executives from two private sector firms and a federal agency told us they make training investments for their in-house staff, in part, because they tend to be longer tenured than contract officers. For example, private sector hospital executives reported that most hospitals use in-house security staff who receive training in crisis intervention, infection control, emergency preparedness, and other issues. 
VHA officials reported that having in-house security staff is preferable to contract staff because it can ensure the workforce receives specific training to meet professional standards. VHA facilities are accredited by the Joint Commission, an organization that accredits health care facilities by maintaining specific standards, such as managing security risks. According to VHA officials, it is easier to maintain these standards with in-house employees than by relying on contractors whose training requirements are different. According to officials, VHA police officers are considered to be part of the patient-care team, trained to provide security in the VHA psychological and behavioral health centers. VHA officers receive basic training at VHA's own law enforcement training center, which costs the agency approximately $7,800 per officer; VHA also provides facility-specific training and management-level supervisory courses. Staff retention. Representatives we interviewed commonly cited staff retention as a benefit of having in-house security staff. In general, federal agency and private sector representatives reported that retaining security staff was an important element in building an experienced workforce that is familiar with the facility and loyal to the organization it is charged to protect. Representatives from several private sector organizations reported that turnover rates—or the percentage of individuals leaving an organization per year—were considered to be higher for contract security guards than for in-house security staff. Several private sector and federal agency representatives reported that their organizations' in-house security staff turnover rates ranged from 10 to 35 percent; contractor turnover rates were generally considered to be much higher among the officials we interviewed.
Two private sector executives further noted that higher security guard turnover can result in an inconsistent security workforce that may not be as familiar with the organization and the facilities it is assigned to protect. Although private sector representatives generally considered staff retention to be a benefit of in-house staff over contract staff, officials from five of the nine federal agencies we interviewed reported that their agencies had experienced some staff retention challenges. Some federal officials noted that staff retention can be more difficult in certain geographic locations where the federal government and contractors may be competing for qualified staff. Reported challenges included retaining newly hired and trained federal officers, who tended to move to higher-paying positions within the federal system. VHA and Smithsonian officials indicated that their respective agencies had experienced turnover rates for their in-house security workforces of approximately 10 and 13 percent per year, respectively. Although such turnover rates were lower than the reported turnover rates for contract staff, attrition can be costly because agencies incur upfront costs to recruit, conduct background investigations, and train new staff. Furthermore, federal officials also noted that delays in the federal hiring process can exacerbate staff retention challenges, as departing staff may not be quickly replaced by new hires. The Smithsonian, for example, determined that, in many cases, federal security guards hired at the GS-5 level were leaving for other agencies that hired their security guards at the GS-6 level. To address its staff retention issues, Smithsonian conducted a thorough staffing analysis that evaluated security risks and needs at each post within 19 museum properties in the Washington, D.C., and New York, New York, areas.
It developed a staffing plan that promoted some GS-5 level security guards to GS-6, with those in-house security guards posted at higher-risk facility entrance posts. Smithsonian also procured a contractor to fill 70 lower-risk posts in building interiors that were previously staffed by federal security guards. In doing so, Smithsonian officials reported the agency has addressed its staff retention challenges and restructured its security workforce. Officials from the four selected federal agencies (Air Force, Army, Smithsonian, and TSA) that had undergone a workforce transition cited upfront planning in assessing facility security and staffing needs, including administrative support and training requirements, as a key lesson learned in facilitating a security workforce transition. These officials reported that changing their staffing approach was a challenging undertaking and upfront planning to assess and identify facility security and staffing requirements was critical to a successful transition. Officials further noted that this planning should also include an assessment of the organization's administrative and training capabilities that are necessary to support the security workforce. We have previously reported that assessing and determining facility security and staffing needs is a key practice and element in a risk management approach for allocating resources in facility protection. Officials from the Smithsonian, which voluntarily changed its staffing approach, told us that conducting detailed security and staffing needs assessments based on risk management helped the transition to its current approach of using both federal and contract security guards. Until recently, the Smithsonian had primarily used federal security guards to protect its 19 museum facilities and assets.
Faced with an increasing turnover rate of its federal security workforce, budget constraints, and the need to increase security presence at its facilities, Smithsonian officials told us they developed the current staffing strategy after drawing on several staffing analyses undertaken over the years. Components of the multiple facility security and staffing needs assessments included an examination of job functions of the security guards, security needs and risk level of each facility, and actual staffing needs for each post by shift. The agency also looked at post needs in terms of post hours required by shift, rather than the number of people (i.e., FTEs) required to staff the post. From these analyses, the agency determined that it could change its staffing approach and reduce costs for some low-risk posts by using a contract workforce and eliminating some posts. Since 2009, the Smithsonian has used contract staff, who are generally posted at lower-risk interior areas of some buildings to monitor collections, while continuing to use federal security guards at higher-risk areas, such as the museum entrance lobbies to screen visitors. By contrast, the Army and Air Force were temporarily allowed to change their staffing approaches, and TSA was required to use an in-house security force when the agency was created. Officials from these agencies stated that, in hindsight, they believe their workforce transitions would have benefited from more upfront planning, including assessing their security and staffing needs. For instance, in 2006, the Army assessed its staffing and post needs and requirements, including determining the baseline service hours needed at each security post, after transitioning from a federal workforce to a contract one in 2002. The Army had originally replaced its in-house staff with contract staff on a one-to-one staff exchange without assessing its security and staffing needs at its military installations and posts.
This resulted in what we and its officials later determined were higher-than-necessary contract costs. Army officials told us that a facility security and staffing needs analysis was not conducted in 2002, when it was originally allowed to change its workforce, because of the relatively short time frame it had for its workforce transition. Some officials also underscored the importance of assessing the agency’s administrative infrastructure—including its information technology, financial systems, and human capital management—to identify administrative and training requirements and capacities, and to ensure the agency is capable of supporting a change in its staffing approach. TSA officials told us that the agency spent about $60 to $70 million to change and transfer data into a new financial system to manage its federal workforce. Because TSA had to transition airport screeners from a contract workforce hired by the airlines to a federal employee workforce within 1 year, it initially adopted the Department of Transportation’s (DOT) financial and human resources system. However, DOT’s system was not originally equipped or intended to take on a large influx of federal employees, and it proved difficult to use, according to TSA officials. TSA officials told us that, given their initial time constraints, the agency did not have the time and opportunity to plan and assess whether the system had the capacity to handle the increased federal workforce. These agencies’ experiences indicate that taking the time and conducting an assessment of facility security and staffing needs prior to any security workforce transitions, should such a transition be mandated or desired by FPS, would likely prove beneficial. FPS has recently taken some actions to assess its staffing needs based on risks, but the outcomes of these efforts are yet to be determined. 
For instance, FPS has developed federal workforce requirements and has incorporated workload data and facility risk as part of its workforce analysis. However, a final workforce analysis plan is under executive review at OMB, and, as the details of the plan are not yet known, it is unclear whether, or to what extent, it will include an assessment of the types and numbers of security positions needed, as well as associated job functions, roles, and responsibilities. Additionally, FPS is in the process of developing a Risk Assessment Management Program (RAMP) system, which, among other things, is designed to improve its ability to manage security at federal facilities and allocate resources based on risks. While these efforts may help provide a foundation for assessing its security and staffing needs, it is uncertain how much FPS could use them to assess and identify other staffing approaches and options that would be beneficial and financially feasible for protecting federal facilities. When changing their staffing approaches, other agencies found it helpful to assess the security needs and risk level of each facility, identify the specific job functions of the workforce, and link actual security and staffing needs to each post and facility. Additionally, an administrative and support capability assessment may be particularly important if FPS were to transition to primarily using federal employees to staff the current contract security guard positions because, as noted earlier, the agency's hiring, personnel, and administrative responsibilities would increase. As we previously reported, it is important for agencies to be well equipped to recruit and retain security professionals; our literature review also indicated that whether security staff are in-house or contract, the employee selection and training process is critical.
When transitioning to an all-inspector staff, FPS experienced delays in its hiring and training process when Congress mandated that it increase the number of federal law enforcement employees, which affected the agency's ability to bring staff on board and train them in a timely manner. If a change in workforce approach involved hiring a large number of new federal employees, it could particularly stretch FPS's existing administrative and support functions. Determining whether its training needs could be met through the Federal Law Enforcement Training Center (FLETC), which currently provides training for new FPS hires and continues to experience backlogs, or through another entity would appear to be the type of assessment that could lay the groundwork for a smoother transition. Finally, TSA officials further commented that a pilot program to phase in staffing changes could help in planning and assessing security and staffing needs. Legislation has recently been introduced in Congress calling for the implementation of a pilot program to examine the effectiveness of using federal employees to staff the current contract security guard positions at selected higher-risk federal facilities. Pilot programs allow an alternative staffing approach to be rigorously evaluated, shared systematically with others, and adjusted, as appropriate, before it receives wider application. We previously reported that when conducting pilot programs, agencies should develop sound evaluation plans before program implementation—as part of the design of the pilot program itself—to increase confidence in the results and facilitate decision making about broader applications of the pilot program. The lack of a documented evaluation plan for a pilot program increases the likelihood that an agency will not collect appropriate or sufficient data, which limits understanding of the pilot program's results.
Selected federal officials also cited the need to determine the appropriate level of oversight and management of an agency’s workforce as another lesson in adopting a new workforce approach. In the case of the Army, officials cited the importance of determining at the outset the appropriate level of government oversight needed over its contract staff. In its contracts awarded in 2006, the Army established additional oversight requirements and mechanisms, including developing specific quality assurance plans and requiring full-time contracting officer technical representatives to perform two detailed inspections every 6 months. This was based on the recognition that government oversight requirements in its earlier contract were insufficient. As we previously reported, if the process is well managed, either an in-house or contract approach to staffing a security workforce can result in a uniform security workforce that provides effective security. As noted earlier, managing and overseeing more than 14,000 contract security guards has proven challenging for FPS, and efforts to implement our recommendations to monitor contractors’ and contract guards’ performance are still under way. For instance, FPS has begun requiring its inspectors to complete two contract security guard inspections a week at level IV federal facilities, and it is in the process of providing additional training to its contract security guards. We believe it is important for FPS to continue taking steps to improve its oversight and management of its contract security guards. Changing the makeup of its contract security guard force to an in-house security workforce would still require management and oversight. Some federal officials indicated that oversight and management of a federal workforce is just as important in staffing a security workforce. 
For instance, Army officials indicated that the job functions of a federal security guard would be no different from those performed by contract staff; the agency would still have to manage its workforce and hold it to the same expectations and security responsibilities. We previously reported that FPS lacks a human capital plan to oversee and manage its federal workforce and recommended that it develop a strategic human capital plan. In 2011, we reported that human capital management of the federal workforce continues to be a high-risk area in the federal government and that it is essential for agencies to ensure they have the talent and skill mix needed to address current and emerging human capital challenges. Going forward, in the event FPS looks to change its staffing approach, it will be important to have a strategic human capital plan in place to help manage and guide its current and future workforce planning efforts. We provided a draft of this report to GSA, Smithsonian, VA, and the Departments of Defense, Homeland Security, and Justice in order to obtain comments from the nine agencies we studied. GSA and DOJ had no comments. Smithsonian, VA, DOD, and DHS provided technical comments that we incorporated where appropriate. DHS also provided written comments that are reprinted in appendix II. As agreed upon with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff members have any questions concerning this report, please contact me at (202) 512-2834 or goldsteinm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix III. This report examines approaches used by selected federal agencies in staffing federal facility security workforces. Specifically, the objectives of this report were to identify (1) approaches used by selected federal agencies in staffing their facility security workforces; (2) federal agency and private sector representatives’ views on the benefits and challenges of using contract or in-house security staffing approaches; and (3) lessons that the Federal Protective Service (FPS) can learn from other federal agencies that have changed their security staffing approaches. To provide information on each of these objectives, we reviewed previous GAO reports and industry literature on staffing security workforces and selected a nonprobability sample of federal agencies and private sector companies for our review. Because the selected organizations are a nonprobability sample, the information we obtained is not generalizable. Our selection criteria included dispersed location of physical facilities and security guard presence, need to balance public access and security at facilities, use of a federally or in-house employed and/or contract security workforce, experience in changing the approach used to staff security positions, and recommendations by security industry experts. Based on these criteria, we selected nine federal agencies and three private sector industries for our review. The selected federal agencies were: (1) FPS, (2) Transportation Security Administration (TSA), (3) U.S. Army (Army), (4) Pentagon Force Protection Agency (PFPA), (5) U.S. Air Force (Air Force), (6) U.S. Marshals Service (USMS), (7) Department of Justice, Justice Protective Service (JPS), (8) Smithsonian Institution (Smithsonian), and (9) Veterans Health Administration (VHA). 
To gather a range of perspectives from the private sector, we selected three industries: (1) commercial real estate; (2) entertainment, including gaming operations and theme parks; and (3) hospitals. We selected a total of ten companies and associations within these industries from which we interviewed representatives to gather information to research the objectives described below. To identify approaches used by selected federal agencies in staffing their facility security workforces, we reviewed federal agency documents and data on facility workforce staffing approaches used and conducted interviews with agency officials. We developed, pretested, and had a security expert review a data collection instrument that asked the nine selected federal agencies five questions to gather information about their facility security workforces:

1. the number of full-time equivalent (FTE) facility security staff employed by the agency in fiscal year 2010 within several Office of Personnel Management (OPM) job series, including police (GS-0083), security guards (GS-0085), and security administration (GS-0080), among others;
2. the primary responsibilities, or job functions, performed by each of the different types of facility security positions employed by each agency in fiscal year 2010;
3. the estimated costs per person for training, recruitment, and equipment for facility security personnel in fiscal year 2010;
4. the estimated fiscal year 2010 budget for overtime salary costs for facility security personnel; and
5. the total number of contract facility security staff hours provided in fiscal year 2010.

To ensure the accuracy of the staffing data collected from the federal agencies, we provided each federal agency with data on the number of FTE employees for security-related positions in OPM’s Central Personnel Data File (CPDF) as of September 2010—the most current available data at the time of our review. 
We asked each agency to review and verify its CPDF data and provide updated figures for the information requested. We e-mailed this data collection instrument to the audit liaisons at each of the agencies, who then forwarded the instrument to the appropriate officials to provide responses. We contacted agencies, as necessary, to clarify any questions we had on the information provided. We received completed data collection instruments from eight of the nine agencies. PFPA did not provide the requested information, but agency officials provided estimated numbers of facility security position types and contract staff. We previously reported that governmentwide data from CPDF for the key variables reported in this report—agency and pay plan or grade—were 96 percent or more accurate. We determined that the information from OPM’s CPDF reported here is sufficiently reliable for our needs. To determine the distribution of in-house and contract security workforces, we used the number of FTE federal employees and the total number of contract hours procured in fiscal year 2010 that were provided by eight of the nine agencies in the data collection instruments. For PFPA, we used estimated data provided by agency officials for the number of FTE federal employees and the estimated number of contract staff employed in 2010. We used 1,760 work hours in a year to convert the total number of contract hours in fiscal year 2010 into FTEs. While agencies may use different work-hour figures for this conversion, 1,760 hours is the figure FPS used for a typical federal employee, and it accounts for estimated annual and sick leave that may be used in a year. 
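The hour-to-FTE conversion described above is a simple calculation. The sketch below uses the 1,760-hour divisor reported by FPS; the agency names and hour totals are illustrative only, not the data the agencies actually provided:

```python
# Convert contract security hours to full-time equivalents (FTE),
# using the 1,760 work-hour year FPS used for a typical federal
# employee (net of estimated annual and sick leave).
WORK_HOURS_PER_FTE = 1760

def contract_hours_to_fte(total_contract_hours):
    """Return contract staffing expressed in FTEs, rounded to one decimal."""
    return round(total_contract_hours / WORK_HOURS_PER_FTE, 1)

# Illustrative figures only -- not the agencies' reported data.
sample_contract_hours = {"Agency A": 880_000, "Agency B": 35_200}
fte_by_agency = {name: contract_hours_to_fte(hours)
                 for name, hours in sample_contract_hours.items()}
print(fte_by_agency)  # {'Agency A': 500.0, 'Agency B': 20.0}
```

A different divisor (for example, a 2,080-hour year with no leave adjustment) would yield proportionally smaller FTE counts, which is why the report notes that agencies may convert hours differently.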
To describe federal agency and private sector representatives’ views on the benefits and challenges of using contract or in-house facility security staffing approaches, we conducted semistructured interviews with officials from each selected federal agency and with executives from ten companies and associations within three private sector industries: (1) commercial real estate, (2) entertainment (including gaming and theme parks), and (3) hospitals. In those interviews, we asked federal agency officials and private sector executives open-ended questions to identify the specific benefits and challenges presented in the use of in-house and contract security workforces. To determine the prevalence of the specific benefits and challenges cited, we completed a content analysis of the interviews. We reviewed the responses to the open-ended questions and identified a total of six categories that represented the benefits or challenges of using in-house or contract security workforces. We developed a codebook that defined each of the six categories: cost, personnel issues (which included separate codes for personnel flexibility and personnel responsibilities), staff selection, staff development, staff retention, and contract management. One analyst reviewed each response and assigned a code; a second analyst then reviewed each assigned code. If the two analysts disagreed on any of the assigned codes, they discussed the differences until reaching consensus. We then removed any duplicate responses—instances in which a respondent identified the same benefit or challenge more than once for either in-house or contract security workforces—so that each benefit or challenge reported by a federal agency official or private sector executive was counted only once in our analysis. 
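The deduplication and tally steps of a content analysis like the one described above can be sketched as follows. The category names come from the codebook in the text; the respondents and coded responses are hypothetical examples, not the actual interview data:

```python
# Remove duplicate codes so each respondent contributes a given
# benefit/challenge at most once per staffing approach, then tally
# how many respondents cited each (approach, category) pair.
from collections import Counter

# Each coded response: (respondent id, staffing approach, category).
# Respondents and codes below are hypothetical.
coded_responses = [
    ("R1", "contract", "cost"),
    ("R1", "contract", "cost"),            # duplicate: same respondent, same code
    ("R1", "in-house", "staff selection"),
    ("R2", "contract", "personnel flexibility"),
    ("R2", "contract", "cost"),
]

unique_responses = set(coded_responses)    # drops exact duplicates
tally = Counter((approach, category)
                for _, approach, category in unique_responses)
print(tally[("contract", "cost")])  # 2 -- two respondents cited cost for contract staff
```

In the actual analysis, the counts for each pair would then be reported as the number of officials or executives citing each benefit or challenge.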
Finally, we analyzed the coded responses to determine how many federal officials and private sector executives reported each benefit and challenge for using in-house and contract security workforces. To determine lessons that FPS can learn from other federal agencies that have changed their security staffing approaches, we selected four agencies that had undergone workforce transitions. The selected agencies were the Army, Air Force, TSA, and Smithsonian. We reviewed agency documents and conducted semistructured interviews with agency officials on the lessons learned in changing and staffing their security workforces. To determine how these lessons may apply to FPS, we reviewed relevant literature from academic and professional organizations and information from prior GAO and agency Inspector General reports, and compared the information collected from each agency with various efforts undertaken by FPS to address its workforce staffing needs. We also interviewed FPS officials regarding an internal preliminary staffing analysis on potential changes to its staffing approach. We conducted this performance audit from July 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Maria Edelstein, Assistant Director; Matt Barranca; Brian Chung; David Hooper; Delwen Jones; Jennifer Kim; and Kelly Rubin made key contributions to this report.

The Federal Protective Service (FPS) within the Department of Homeland Security (DHS) provides security and law enforcement services to over 9,000 federal facilities through its federal and contract security workforce. 
Over the years, GAO has made numerous recommendations to address significant weaknesses in FPS's oversight and management of its security workforce. Legislation has been introduced that would, among other things, have FPS examine the effectiveness of relying more on federal employees for security. As requested, this report examines: (1) nine federal agencies' approaches for staffing their security workforces; (2) federal and private sector representatives' views on the benefits and challenges of using contract and in-house security staff; and (3) lessons that FPS can learn from federal agencies that have changed their security staffing approaches. GAO reviewed agency documents and conducted interviews with representatives from federal agencies and private sector firms selected based on the use of security guards and experience in changing a security workforce, among other criteria. The selected agencies and private sector firms are a nonprobability sample, and the information we obtained is not generalizable. Eight of the nine selected federal agencies reported using a combination of contract and in-house facility security positions, and the distribution of their security staff varies significantly. Contract security staff are primarily used for routine access control functions, while in-house staff, such as federal security guards and inspectors, tend to perform a variety of security functions, such as patrol and risk assessment. Selected agency officials cited facility risk level and cost, among other factors, as considerations when staffing a security workforce. Federal agencies used various types of security staff, even at high-risk facilities, for protection. As a high-profile law enforcement agency, the Department of Justice uses armed contract security guards with prior law enforcement experience to protect its high-risk facilities. 
Federal and private sector representatives reported that contract and in-house security staff offer benefits and challenges for agencies to weigh when making staffing decisions. The two primary reported benefits of contract security staff were (1) potential cost savings and (2) flexibility to increase or reduce staff size. Conversely, these two issues were commonly cited as challenges in using in-house security staff. The reported benefits of in-house security staff were greater control to select qualified security staff and develop them to meet organizational needs. Early planning to determine security staffing needs and sufficient oversight were cited as key lessons learned when changing staffing approaches. For example, the Smithsonian Institution had time to conduct risk-based assessments, which helped it decide to use contract staff only at lower-risk posts. Other agencies' experiences, as well as FPS's experience in transitioning to an inspector-based workforce, suggest that changing FPS's staffing approach could prove challenging. Early planning could help FPS address some of those challenges in the event a transition is desired or mandated, and sufficient oversight and management of its workforce will be critical to providing effective security. GAO provided the nine agencies with a draft of this report for comment. In response, agencies provided technical comments that were incorporated where appropriate.
The department is facing near- and long-term internal fiscal pressures as it attempts to balance competing demands to support ongoing operations, rebuild readiness following extended military operations, and manage increasing personnel and health care costs as well as significant cost growth in its weapon systems programs. For more than a decade, DOD has dominated GAO’s list of federal programs and operations at high risk of being vulnerable to fraud, waste, and abuse. In fact, all of the DOD programs on GAO’s High-Risk List relate to business operations, including systems and processes related to management of contracts, finances, the supply chain, and support infrastructure, as well as weapon systems acquisition. Long-standing and pervasive weaknesses in DOD’s financial management and related business processes and systems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the financial status and cost of DOD activities to Congress and DOD decision makers; (2) adversely impacted its operational efficiency and mission performance in areas of major weapons system support and logistics; and (3) left the department vulnerable to fraud, waste, and abuse. Because of the complexity and long-term nature of DOD’s transformation efforts, GAO has reported the need for a chief management officer (CMO) position and a comprehensive, enterprisewide business transformation plan. In May 2007, DOD designated the Deputy Secretary of Defense as the CMO. In addition, the National Defense Authorization Acts for fiscal years 2008 and 2009 contained provisions that codified the CMO and Deputy Chief Management Officer (DCMO) positions, required DOD to develop a strategic management plan, and required the Secretaries of the military departments to designate their Undersecretaries as CMOs and to develop business transformation plans. 
DOD financial managers are responsible for the functions of budgeting, financing, accounting for transactions and events, and reporting of financial and budgetary information. To maintain accountability over the use of public funds, DOD must carry out financial management functions such as recording, tracking, and reporting its budgeted spending, actual spending, and the value of its assets and liabilities. DOD relies on a complex network of organizations and personnel to execute these functions. Also, its financial managers must work closely with other departmental personnel to ensure that transactions and events with financial consequences, such as awarding and administering contracts, managing military and civilian personnel, and authorizing employee travel, are properly monitored, controlled, and reported, in part, to ensure that DOD does not violate spending limitations established by statute or other legal provisions regarding the use of funds. Before fiscal year 1991, the military services and defense agencies independently managed their finance and accounting operations. According to DOD, these decentralized operations were highly inefficient and failed to produce reliable information. On November 26, 1990, DOD created the Defense Finance and Accounting Service (DFAS) as its accounting agency to consolidate, standardize, and integrate finance and accounting requirements, functions, procedures, operations, and systems. The military services and defense agencies pay for finance and accounting services provided by DFAS using their operations and maintenance appropriations. The military services continue to perform certain finance and accounting activities at each military installation. These activities vary by military service depending on what the services wanted to maintain in-house and the number of personnel they were willing to transfer to DFAS. 
As DOD’s accounting agency, DFAS records these transactions in the accounting records, prepares thousands of reports used by managers throughout DOD and by the Congress, and prepares DOD-wide and service-specific financial statements. The military services play a vital role in that they authorize the expenditure of funds and are the source of most of the financial information that allows DFAS to make payroll and contractor payments. The military services also have responsibility over most of DOD’s assets and the related information needed by DFAS to prepare annual financial statements required under the Chief Financial Officers Act. DOD accounting personnel are responsible for accounting for funds received through congressional appropriations, the sale of goods and services by working capital fund businesses, revenue generated through nonappropriated fund activities, and the sales of military systems and equipment to foreign governments or international organizations. DOD’s finance activities generally involve paying the salaries of its employees, paying retirees and annuitants, reimbursing its employees for travel-related expenses, paying contractors and vendors for goods and services, and collecting debts owed to DOD. DOD defines its accounting activities to include accumulating and recording operating and capital expenses as well as appropriations, revenues, and other receipts. According to DOD’s fiscal year 2012 budget request, in fiscal year 2010 DFAS processed approximately 198 million payment-related transactions and disbursed over $578 billion; accounted for 1,129 active DOD appropriation accounts; and processed more than 11 million commercial invoices. DOD financial management was designated as a high-risk area by GAO in 1995. 
Pervasive deficiencies in financial management processes, systems, and controls, and the resulting lack of data reliability, continue to impair management’s ability to assess the resources needed for DOD operations; track and control costs; ensure basic accountability; anticipate future costs; measure performance; maintain funds control; and reduce the risk of loss from fraud, waste, and abuse. Other business operations, including the high-risk areas of contract management, supply chain management, support infrastructure management, and weapon systems acquisition, are directly impacted by the problems in financial management. We have reported that continuing weaknesses in these business operations result in billions of dollars of wasted resources, reduced efficiency, ineffective performance, and inadequate accountability. Examples of the pervasive weaknesses in the department’s business operations are highlighted below. DOD invests billions of dollars to acquire weapon systems, but it lacks the financial management processes and capabilities it needs to track and report on the cost of weapon systems in a reliable manner. We reported on this issue over 20 years ago, but the problems persist. In July 2010, we reported that although DOD and the military departments have efforts underway to begin addressing these financial management weaknesses, problems continue to exist, and remediation and improvement efforts would require the support of other business areas beyond the financial community before they could be fully addressed. DOD also requests billions of dollars each year to maintain its weapon systems, but it has limited ability to identify, aggregate, and use financial management information for managing and controlling operating and support costs. Operating and support costs can account for a significant portion of a weapon system’s total life-cycle costs, including costs for repair parts, maintenance, and contract services. 
In July 2010, we reported that the department lacked key information needed to manage and reduce operating and support costs for most of the weapon systems we reviewed—including cost estimates and historical data on actual operating and support costs. For acquiring and maintaining weapon systems, the lack of complete and reliable financial information hampers DOD officials in analyzing the rate of cost growth, identifying cost drivers, and developing plans for managing and controlling these costs. Without timely, reliable, and useful financial information on cost, DOD management lacks the information needed to accurately report on acquisition costs, allocate resources to programs, or evaluate program performance. In June 2010, we reported that the Army Budget Office lacked an adequate funds control process to provide it with ongoing assurance that obligations and expenditures do not exceed funds available in the Military Personnel–Army (MPA) appropriation. We found that an obligation of $200 million in excess of available funds in the Army’s military personnel account violated the Antideficiency Act. The overobligation likely stemmed, in part, from a lack of communication between Army Budget and program managers: Army Budget’s accounting records reflected estimates instead of actual amounts until it was too late to prevent the excessive obligations that violated the act. Thus, at any given time in the fiscal year, Army Budget did not know the actual obligation and expenditure levels of the account. Army Budget explained that it relies on estimated obligations—despite the availability of actual data from program managers—because of inadequate financial management systems. The lack of adequate process and system controls to maintain effective funds control impacted the Army’s ability to prevent, identify, correct, and report potential violations of the Antideficiency Act. 
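The funds control the paragraph above describes amounts to a simple invariant: an obligation is recorded only if it fits within the appropriation's remaining balance, which requires tracking actual rather than estimated obligations. A minimal sketch, with a hypothetical account and amounts (not the actual MPA figures):

```python
# Sketch of basic administrative funds control: refuse to record an
# obligation that would exceed the funds available in the account.
class AppropriationAccount:
    def __init__(self, name, amount_available):
        self.name = name
        self.available = amount_available
        self.obligated = 0

    def obligate(self, amount):
        """Record an obligation, or refuse one that exceeds funds available."""
        if self.obligated + amount > self.available:
            raise ValueError(
                f"{self.name}: obligation of {amount} would exceed "
                f"remaining balance of {self.available - self.obligated}")
        self.obligated += amount
        return self.available - self.obligated  # remaining balance

# Hypothetical account and amounts, for illustration only.
mpa = AppropriationAccount("Military Personnel", 1_000)
mpa.obligate(800)         # remaining balance: 200
try:
    mpa.obligate(300)     # would overobligate -- rejected before recording
except ValueError:
    pass
print(mpa.obligated)  # 800
```

The failure mode the report describes corresponds to recording estimates rather than actuals in `obligated`: the check above then compares against a stale figure and cannot prevent the account from being overobligated in fact.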
In our February 2011 report on the Defense Centers of Excellence (DCOE), we found that DOD’s TRICARE Management Activity (TMA) had misclassified $102.7 million of the nearly $112 million in DCOE advisory and assistance contract obligations. The proper classification and recording of costs are basic financial management functions that are also key in analyzing areas for potential future savings. Without adequate financial management processes, systems, and controls, DOD components are at risk of reporting inaccurate, inconsistent, and unreliable data for financial reporting and management decision making and of potentially exceeding authorized spending limits. The lack of effective internal controls hinders management’s ability to have reasonable assurance that its allocated resources are used effectively, properly, and in compliance with budget and appropriations law. Over the years, DOD has initiated several broad-based reform efforts to address its long-standing financial management weaknesses. However, as we have reported, those efforts did not achieve their intended purpose of improving the department’s financial management operations. In 2005, the DOD Comptroller established the DOD FIAR Directorate to develop, manage, and implement a strategic approach for addressing the department’s financial management weaknesses and for achieving auditability, and to integrate those efforts with other improvement activities, such as the department’s business system modernization efforts. In May 2009, we identified several concerns with the adequacy of the FIAR Plan as a strategic and management tool to resolve DOD’s financial management difficulties and thereby position the department to be able to produce auditable financial statements. 
Overall, since the issuance of the first FIAR Plan in December 2005, improvement efforts have not resulted in the fundamental transformation of operations necessary to resolve the department’s long-standing financial management deficiencies. However, DOD has made significant improvements to the FIAR Plan that, if implemented effectively, could markedly improve DOD’s financial management and its progress toward auditability, but progress in taking corrective actions and resolving deficiencies remains slow. While none of the military services has obtained an unqualified (clean) audit opinion, some DOD organizations, such as the Army Corps of Engineers, DFAS, the Defense Contract Audit Agency, and the DOD Office of Inspector General, have achieved this goal. Moreover, some DOD components that have not yet received clean audit opinions are beginning to reap the benefits of strengthened controls and processes gained through ongoing efforts to improve their financial management operations and reporting capabilities. Lessons learned from the Marine Corps’ Statement of Budgetary Resources audit effort can provide a roadmap to help other components better stage their audit readiness efforts by strengthening their financial management processes to increase data reliability as they develop action plans to become audit ready. In August 2009, DOD’s Comptroller sought to further focus the efforts of the department and its components, in order to achieve certain short- and long-term results, by giving priority to improving processes and controls that support the financial information most often used to manage the department. Accordingly, DOD revised its FIAR strategy and methodology to focus on the DOD Comptroller’s two priorities—budgetary information and asset accountability. The first priority is to strengthen processes, controls, and systems that produce DOD’s budgetary information and the department’s Statements of Budgetary Resources. 
The second priority is to improve the accuracy and reliability of management information pertaining to the department’s mission-critical assets, including military equipment, real property, and general equipment, and validating improvement through existence and completeness testing. The DOD Comptroller directed the DOD components participating in the FIAR Plan—the departments of the Army, the Navy, and the Air Force and the Defense Logistics Agency—to use a standard process and aggressively modify their activities to support and emphasize achievement of the priorities. GAO supports DOD’s current approach of focusing and prioritizing efforts in order to achieve incremental progress in addressing weaknesses and making progress toward audit readiness. Budgetary and asset information is widely used by DOD managers at all levels, so its reliability is vital to daily operations and management. DOD needs to provide accountability over the existence and completeness of its assets. Problems with asset accountability can further complicate critical functions, such as planning for the current troop withdrawals. In May 2010, DOD introduced a new phased approach that divides progress toward achieving financial statement auditability into five waves (or phases) of concerted improvement activities (see appendix I). According to DOD, the components’ implementation of the methodology described in the 2010 FIAR Plan is essential to the success of the department’s efforts to ultimately achieve full financial statement auditability. To assist the components in their efforts, the FIAR Guidance, issued along with the revised plan, details the implementation of the methodology with an emphasis on internal controls and supporting documentation that recognizes both the challenge of resolving the many internal control weaknesses and the fundamental importance of establishing effective and efficient financial management. 
The FIAR Guidance provides the process for the components to follow, through their individual Financial Improvement Plans (FIP), in assessing processes, controls, and systems; identifying and correcting weaknesses; assessing, validating, and sustaining corrective actions; and achieving full auditability. The guidance directs the components to identify responsible organizations and personnel and resource requirements for improvement work. In developing their plans, components use a standard template that comprises data fields aligned to the methodology. The consistent application of a standard methodology for assessing the components’ current financial management capabilities can help establish valid baselines against which to measure, sustain, and report progress. Improving the department’s financial management operations, and thereby providing DOD management and the Congress more accurate and reliable information on the results of its business operations, will not be an easy task. It is critical that the current initiatives being led by the DOD DCMO and the DOD Comptroller be continued and provided with sufficient resources and ongoing monitoring in the future. Absent continued momentum and necessary future investments, the current initiatives may falter, similar to previous efforts. Below are some of the key challenges the department must address for its financial management operations to improve. Committed and sustained leadership. The FIAR Plan is in its sixth year and continues to evolve based on lessons learned, corrective actions, and policy changes that refine and build on the plan. The DOD Comptroller has expressed commitment to the FIAR goals and established a focused approach that is intended to help DOD achieve successes in the near term. But the financial transformation needed at DOD, and its removal from GAO’s high-risk list, is a long-term effort. 
Improving financial management will need to be a cross-functional endeavor, requiring improvements in some of DOD’s other business operations, such as those in the high-risk areas of contract management, supply chain management, support infrastructure management, and weapon systems acquisition. As acknowledged by DOD officials, sustained and active involvement of the department’s CMO, the DCMO, the military departments’ CMOs, the DOD Comptroller, and other senior leaders is critical. Every administration brings changes in senior leadership; therefore, it is paramount that the current initiative be institutionalized throughout the department—at all working levels—for success to be achieved. Effective plan to correct internal control weaknesses. In May 2009, we reported that the FIAR Plan did not establish a baseline of the department’s state of internal control and financial management weaknesses as its starting point. Such a baseline could be used to assess and plan for the necessary improvements and remediation and to measure incremental progress toward achieving estimated milestones for each DOD component and the department. DOD currently has efforts underway to address known internal control weaknesses through three integrated programs: (1) the Internal Controls over Financial Reporting (ICOFR) program, (2) ERP implementation, and (3) the FIAR Plan. However, the effectiveness of these three integrated efforts at establishing a baseline remains to be seen. As discussed in our recent report, the lack of effective internal controls, in part, contributed to the DOD Inspector General issuing a disclaimer of opinion on the Marine Corps’ fiscal year 2010 Statement of Budgetary Resources (SBR). The auditors reported that ineffective internal control and ineffective controls in key financial systems should be addressed to ensure the reliability of reported financial information. 
Examples of the problems identified include the following: The Marine Corps did not have effective controls in place to support the estimated obligations, referred to as “bulk obligations,” that it records as payment liabilities and, as a result, was not able to reconcile the related payment transactions to the estimates. The Marine Corps estimates obligations in a bulk amount to record payment liabilities where it does not have a mechanism to identify authorizing documentation as a basis for recording the obligations. The auditors found ineffective controls over three major information technology systems used by the Marine Corps and reported numerous problems that required resolution. For example, the auditors identified a lack of controls over interfaces between systems to ensure completeness of the data being transferred. System interface controls are critical for ensuring the completeness and accuracy of data transferred between systems. The report also noted that the Marine Corps did not develop an overall corrective action or remediation plan that includes key elements of a risk-based plan. Instead, its approach focuses on short-term corrective actions based on manually intensive efforts to produce reliable financial reporting at year-end. Such efforts may not result in sustained improvements over the long term that would help ensure that the Marine Corps could routinely produce sound data on a timely basis for decision making. We previously reported that using principles of risk management helps policymakers make informed decisions about the best ways to prioritize investments, so that the investments target the areas of greatest need. However, we found that the Marine Corps’ SBR Remediation Plan focused on individual initiatives to address 70 auditor Notices of Findings and Recommendations that included 139 recommendations, without assessing risks, prioritizing actions, or ensuring that actions adequately responded to recommendations. 
Further, the plan did not identify resources or roles and responsibilities, nor did it include performance indicators to measure performance against action plan objectives. Given the current efforts, goals, and timeframes for achieving auditability of the Marine Corps’ Fiscal Year 2011 SBR, the current approach is understandably focused on short-term actions. However, achieving financial accountability that is sustainable in the long term will require reliable financial systems and sound internal controls. An effective remediation plan would help ensure that audit recommendations are fully addressed to meet both the short-term and long-term goals. The Marine Corps reported that actions on 88 of the 139 recommendations, including weaknesses related to accounting and financial reporting and information technology systems, were fully implemented; however, the completeness and effectiveness of most of the Marine Corps’ actions have not yet been tested. DOD Inspector General auditors told us that tests performed during the Marine Corps’ fiscal year 2011 SBR audit effort will determine whether and to what extent the problems identified during the fiscal year 2010 SBR audit effort have been resolved. They also confirmed that as of August 25, 2011, the Marine Corps had remediated the problems underlying 11 of the information technology audit recommendations. Because of the department’s complexity and magnitude, developing and implementing a comprehensive plan that identifies DOD’s internal control weaknesses will not be an easy task. But it is a task that is critical to resolving the long-standing weaknesses, and it will require consistent management oversight and monitoring to be successful. Competent financial management workforce. Effective financial management in DOD will require a knowledgeable and skilled workforce that includes individuals who are trained and certified in accounting, well versed in government accounting practices and standards, and experienced in information technology. 
Hiring and retaining such a skilled workforce is a challenge DOD must meet to succeed in its transformation to efficient, effective, and accountable business operations. The National Defense Authorization Act for Fiscal Year 2006 directed DOD to develop a strategic plan to shape and improve the department’s civilian workforce. The plan was to, among other things, include assessments of (1) existing critical skills and competencies in DOD’s civilian workforce, (2) future critical skills and competencies needed over the next decade, and (3) any gaps in the existing or future critical skills and competencies identified. In addition, DOD was to submit a plan of action for developing and reshaping the civilian employee workforce to address any identified gaps, as well as specific recruiting and retention goals and strategies on how to train, compensate, and motivate civilian employees. In developing the plan, the department identified financial management as one of its enterprisewide mission-critical occupations. In July 2011, we reported that DOD’s 2009 overall civilian workforce plan had addressed some legislative requirements, including assessing the critical skills of its existing civilian workforce. Although some aspects of the legislative requirements were addressed, DOD still has significant work to do. For example, while the plan included gap analyses related to the number of personnel needed for some of the mission-critical occupations, the department had only discussed competency gap analyses for 3 mission-critical occupations—language, logistics management, and information technology management. A competency gap analysis for financial management was not included in the department’s analysis. Until DOD analyzes personnel needs and gaps in the financial management area, it will not be in a position to develop an effective financial management recruitment, retention, and investment strategy to successfully address its financial management challenges. 
Accountability and effective oversight. The department established a governance structure for the FIAR Plan, which includes review bodies for governance and oversight. The governance structure is intended to provide the vision and oversight necessary to align financial improvement and audit readiness efforts across the department. As noted in our recent report, both DOD and the components have established senior executive committees as well as designated officials at the appropriate levels to monitor and oversee their financial improvement efforts. These committees and individuals have also generally been assigned appropriate roles and responsibilities. To monitor progress and hold individuals accountable for progress, DOD managers and oversight bodies need reliable, valid, meaningful metrics to measure performance and the results of corrective actions. In May 2009, we reported that the FIAR Plan did not have clear results-oriented metrics. To its credit, DOD has taken action to begin defining results-oriented FIAR metrics it intends to use to provide visibility of component-level progress in assessment, testing, and remediation activities, including progress in identifying and addressing supporting documentation issues. We have not yet had an opportunity to assess implementation of these metrics—including the components’ control over the accuracy of supporting data—or their usefulness in monitoring and redirecting actions. Ensuring effective monitoring and oversight of progress—especially by the leadership in the components—will be key to bringing about effective implementation through the components’ FIPs. However, as noted in our recent report, we found that weaknesses in the Navy and Air Force FIAR Plan implementation efforts indicate that the monitoring and oversight of such efforts have not been effective. 
More specifically, we found that component officials as well as the oversight committees at both the component and DOD levels did not effectively carry out their monitoring responsibilities for the Navy Civilian Pay and Air Force Military Equipment FIPs. For the two FIPs that we reviewed, neither individual officials nor the executive committees took sufficient action to ensure that the FIPs were accurate or complied with the FIAR Guidance. As a result, the Navy concluded that its Civilian Pay was ready for audit, as did the Air Force with respect to its Military Equipment, even though they did not have sufficient support to assert audit readiness. On the other hand, once the Navy and Air Force submitted the FIPs to DOD in support of their audit readiness assertions, both the DOD Inspector General and the DOD Comptroller carried out their responsibilities for reviewing the FIPs. In their reviews, both organizations identified issues with the FIPs that were similar to those we had identified. The DOD Comptroller, who makes the final determination as to whether an assessable unit is ready for audit, concluded that neither of these FIPs supported audit readiness. Effective oversight and monitoring would also help ensure that lessons learned from recent efforts would be sufficiently disseminated throughout the department and applied to other financial improvement efforts. In commenting on our report about the FIPs, the DOD Comptroller stated that it is critical that the department continues to look at how effectively it applies lessons learned. Furthermore, effective oversight holds individuals accountable for carrying out their responsibilities. 
DOD has introduced incentives such as including FIAR goals in Senior Executive Service Performance Plans, increased reprogramming thresholds granted to components that receive a positive audit opinion on their Statement of Budgetary Resources, audit costs funded by the Office of the Secretary of Defense after a successful audit, and publicizing and rewarding components for successful audits. The challenge now is to evaluate and validate these and other incentives to determine their effectiveness and whether the right mix of incentives has been established. Well-defined enterprise architecture. For decades, DOD has been challenged in modernizing its timeworn business systems. Since 1995, we have designated DOD’s business systems modernization program as high risk. Between 2001 and 2005, we reported that the modernization program had spent hundreds of millions of dollars on an enterprise architecture and investment management structures that had limited value. Accordingly, we made explicit architecture and investment management-related recommendations. Congress included provisions in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 that were consistent with our recommendations. In response, DOD continues to take steps to comply with the act’s provisions and to satisfy relevant system modernization management guidance. Collectively, these steps address best practices in implementing the statutory provisions concerning the business enterprise architecture and review of systems costing in excess of $1 million. However, long-standing challenges that we previously identified remain to be addressed. Specifically, while DOD continues to release updates to its corporate enterprise architecture, the architecture has yet to be federated through development of aligned subordinate architectures for each of the military departments. 
In this regard, each of the military departments has made progress in managing its respective architecture program, but there are still limitations in the scope and completeness, as well as the maturity, of the military departments’ architecture programs. For example, while each department has established or is in the process of establishing an executive committee with responsibility and accountability for the enterprise architecture, none has fully developed an enterprise architecture methodology or a well-defined business enterprise architecture and transition plan to guide and constrain business transformation initiatives. In addition, while DOD continues to establish investment management processes, the DOD enterprise and the military departments’ approaches to business systems investment management still lack the defined policies and procedures to be considered effective investment selection, control, and evaluation mechanisms. Until DOD fully implements these long-standing institutional modernization management controls, its business systems modernization will likely remain a high-risk program. Successful implementation of the ERPs. The department has invested billions of dollars and will invest billions more to implement the ERPs. The implementation of an integrated, audit-ready systems environment through the deployment of ERP systems underlies all of DOD’s financial improvement efforts and is crucial to achieving departmentwide audit readiness. According to DOD, the successful implementation of the ERPs is not only critical for addressing long-standing weaknesses in financial management, but equally important for helping to resolve weaknesses in other high-risk areas such as business transformation, business system modernization, and supply chain management. 
Successful implementation will support DOD by standardizing and streamlining its financial management and accounting systems, integrating multiple logistics systems and finance processes, providing asset visibility for accountable items, and integrating personnel and pay systems. Previously, we reported that delays in the successful implementation of ERPs have extended the use of existing duplicative, stovepiped systems, and have continued the funding of these systems longer than anticipated. To the degree that these business systems do not provide the intended capabilities, DOD’s goal of departmentwide audit readiness by the end of fiscal year 2017 could be jeopardized. Over the years we have reported that the department has not effectively employed acquisition management controls to help ensure the ERPs deliver the promised capabilities on time and within budget. As we reported in October 2010, DOD has identified 10 ERPs—1 of which had been fully implemented—as essential to its efforts to transform its business operations. We are currently reviewing the status of two of these ERPs—the Army’s General Fund Enterprise Business System (GFEBS) and the Air Force’s Defense Enterprise Accounting and Management System (DEAMS). GFEBS is intended to support the Army’s standardized financial management and accounting practices for the Army’s general fund, except for funds related to the Army Corps of Engineers. The Army estimates that GFEBS will be used to control and account for approximately $140 billion in annual spending. DEAMS is intended to provide the Air Force with the entire spectrum of financial management capabilities and is expected to maintain control and accountability for approximately $160 billion. GFEBS is expected to be fully deployed during fiscal year 2012, is currently operational at 154 locations, including DFAS, and is being used by approximately 35,000 users. 
DEAMS is expected to be fully deployed during fiscal year 2016, is currently operational at Scott Air Force Base and DFAS, and is being used by about 1,100 individuals. Our preliminary results identified issues related to GFEBS and DEAMS providing DFAS users with the expected capabilities in accounting, management information, and decision support. To compensate, DFAS users have devised manual workarounds and several applications to obtain the information they need to perform their day-to-day tasks. Examples of the issues in these systems that DFAS users have identified include the following: The backlog of unresolved GFEBS trouble tickets has continued to increase, from about 250 in September 2010 to approximately 400 in May 2011. Trouble tickets represent user questions and issues with transactions or system performance that have not been resolved. According to Army officials, this increase in tickets was not unexpected since the number of users and the number of transactions being processed by the system have increased, and the Army and DFAS are taking steps to address issues raised by DFAS. Approximately two-thirds of invoice and receipt data must be manually entered into GFEBS from the invoicing and receiving system (i.e., Wide Area Work Flow). DFAS personnel stated that manual data entry will eventually become infeasible due to the increased quantities of data that will have to be manually entered as GFEBS is deployed to additional locations. Army officials acknowledged that there is a problem with the Wide Area Work Flow and GFEBS interface, that this problem reduced the effectiveness of GFEBS, and that they are working with DOD to resolve it. GFEBS lacks the ability to run ad hoc queries or search for data in the system to resolve problems or answer questions. The Army has recognized this limitation and is currently developing a system enhancement that it expects will better support users’ needs. 
Manual workarounds are needed to process certain accounts receivable transactions such as travel debts. DFAS personnel stated that the problem is the result of the data not being properly converted from the legacy systems to DEAMS. DFAS officials indicated that they were experiencing difficulty with some of the DEAMS system interfaces. For example, the interface problem with the Standard Procurement System has become so severe that the interface has been turned off, and the data must be manually entered into DEAMS. DFAS officials stated that DEAMS does not provide the capability—which existed in the legacy systems—to produce ad hoc reports that can be used to perform the data analysis needed for daily operations. They also noted that when some reports are produced, the accuracy of those reports is questionable. The Army and Air Force have stated that they have plans to address these issues, and the Army has plans to validate the audit readiness of GFEBS in a series of independent auditor examinations over the next several fiscal years. For DEAMS, the DOD Milestone Decision Authority has directed that the system not be deployed beyond Scott Air Force Base until the known system weaknesses have been corrected and the system has been independently tested to ensure that it is operating as intended. In closing, I am encouraged by the recent efforts and commitment DOD’s leaders have shown toward improving the department’s financial management. Progress we have seen includes recently issued guidance to aid DOD components in their efforts to address their financial management weaknesses and achieve audit readiness, and standardized component financial improvement plans to facilitate oversight and monitoring, as well as sharing lessons learned. In addition, the DOD Comptroller and DCMO have shown commitment and leadership in moving DOD’s financial management improvement efforts forward. 
The revised FIAR strategy is still in the early stages of implementation, and DOD has a long way to go and many long-standing challenges to overcome, particularly with regard to sustained commitment, leadership, and oversight, before the department and its military components are fully auditable and DOD financial management is no longer considered high risk. However, the department is heading in the right direction and making progress. Some of the most difficult challenges ahead lie in the effective implementation of the department’s strategy by the Army, Navy, Air Force, and DLA, including successful implementation of ERP systems and integration of financial management improvement efforts with other DOD initiatives. GAO will continue to monitor the progress of and provide feedback on the status of DOD’s financial management improvement efforts. We currently have work in progress to assess implementation of the department’s FIAR strategy and efforts toward auditability. As a final point, I want to emphasize the value of sustained congressional interest in the department’s financial management improvement efforts, as demonstrated by this Subcommittee’s leadership. Mr. Chairman and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. For further information regarding this testimony, please contact Asif A. Khan, (202) 512-9869 or khana@gao.gov. Key contributors to this testimony include J. Christopher Martin, Senior-Level Technologist; F. Abe Dymond, Assistant Director; Gayle Fischer, Assistant Director; Greg Pugnetti, Assistant Director; Darby Smith, Assistant Director; Beatrice Alff; Steve Donahue; Keith McDaniel; Maxine Hattery; Hal Santarelli; and Sandy Silzer. 
The first three waves focus on achieving the DOD Comptroller’s interim budgetary and asset accountability priorities, while the remaining two waves are intended to complete actions needed to achieve full financial statement auditability. However, the department has not yet fully defined its strategy for completing waves 4 and 5. Each wave focuses on assessing and strengthening internal controls and business systems related to the stage of auditability addressed in the wave. Wave 1—Appropriations Received Audit focuses on the appropriations receipt and distribution process, including funding appropriated by Congress for the current fiscal year and related apportionment/reapportionment activity by OMB, as well as allotment and sub-allotment activity within the department. Wave 2—Statement of Budgetary Resources Audit focuses on supporting the budget-related data (e.g., status of funds received, obligated, and expended) used for management decision making and reporting, including the Statement of Budgetary Resources. In addition to fund balance with Treasury reporting and reconciliation, other significant end-to-end business processes in this wave include procure-to-pay, hire-to-retire, order-to-cash, and budget-to-report. Wave 3—Mission Critical Assets Existence and Completeness Audit focuses on ensuring that all assets (including military equipment, general equipment, real property, inventory, and operating materials and supplies) that are recorded in the department’s accountable property systems of record exist; all of the reporting entities’ assets are recorded in those systems of record; reporting entities have the right (ownership) to report these assets; and the assets are consistently categorized, summarized, and reported. 
Wave 4—Full Audit Except for Legacy Asset Valuation includes the valuation assertion over new asset acquisitions and validation of management’s assertion regarding new asset acquisitions, and it depends on remediation of the existence and completeness assertions in Wave 3. Also, proper contract structure for cost accumulation and cost accounting data must be in place prior to completion of the valuation assertion for new acquisitions. It involves the budgetary transactions covered by the Statement of Budgetary Resources effort in Wave 2, including accounts receivable, revenue, accounts payable, expenses, environmental liabilities, and other liabilities. Wave 5—Full Financial Statement Audit focuses efforts on assessing and strengthening, as necessary, internal controls, processes, and business systems involved in supporting the valuations reported for legacy assets once efforts to ensure control over the valuation of new assets acquired and the existence and completeness of all mission assets are deemed effective on a go-forward basis. Given the lack of documentation to support the values of the department’s legacy assets, federal accounting standards allow for the use of alternative methods to provide reasonable estimates for the cost of these assets. In the context of this phased approach, DOD’s dual focus on budgetary and asset information offers the potential to obtain preliminary assessments regarding the effectiveness of current processes and controls and identify potential issues that may adversely impact subsequent waves. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
As one of the largest and most complex organizations in the world, the Department of Defense (DOD) faces many challenges in resolving serious problems in its financial management and related business operations and systems. DOD is required by various statutes to (1) improve its financial management processes, controls, and systems to ensure that complete, reliable, consistent, and timely information is prepared and responsive to the financial information needs of agency management and oversight bodies, and (2) produce audited financial statements. Over the years, DOD has initiated numerous efforts to improve the department's financial management operations and to try to achieve an unqualified (clean) opinion on the reliability of its reported financial information. These efforts have fallen short of sustained improvement in financial management or financial statement auditability. The Subcommittee has asked GAO to provide its perspective on the status of DOD's financial management weaknesses and its efforts to resolve them. DOD financial management has been on GAO's high-risk list since 1995 and, despite several reform initiatives, remains on the list today. Pervasive deficiencies in financial management processes, systems, and controls, and the resulting lack of data reliability, continue to impair management's ability to assess the resources needed for DOD operations; track and control costs; ensure basic accountability; anticipate future costs; measure performance; maintain funds control; and reduce the risk of loss from fraud, waste, and abuse. DOD spends billions of dollars each year to maintain key business operations intended to support the warfighter, including systems and processes related to the management of contracts, finances, the supply chain, support infrastructure, and weapon systems acquisition. These operations are directly impacted by the problems in financial management. 
In addition, the long-standing financial management weaknesses have precluded DOD from being able to undergo the scrutiny of a financial statement audit. DOD's past strategies for improving its financial management were ineffective, but recent initiatives are encouraging. In 2005, DOD issued its Financial Improvement and Audit Readiness (FIAR) Plan for improving financial management and reporting. In 2009, the DOD Comptroller directed that FIAR efforts focus on financial information in two priority areas: budget and mission-critical assets. The FIAR Plan also has a new phased approach that comprises five waves of concerted improvement activities. The first three waves focus on the two priority areas, and the last two on working toward full auditability. The plan is being implemented largely through the Army, Navy, and Air Force military departments and the Defense Logistics Agency, lending increased importance to the commitment of component leadership. Improving the department's financial management operations and thereby providing DOD management and Congress more accurate and reliable information on the results of its business operations will not be an easy task. It is critical that the current initiatives, led by DOD's Deputy Chief Management Officer and Comptroller, have the support of DOD leaders and continue with sustained leadership and monitoring. Absent continued momentum and necessary future investments, current initiatives may falter. 
Below are some of the key challenges that DOD must address for its financial management to improve to the point where DOD is able to produce auditable financial statements: (1) committed and sustained leadership, (2) effective plan to correct internal control weaknesses, (3) competent financial management workforce, (4) accountability and effective oversight, (5) well-defined enterprise architecture, and (6) successful implementation of the enterprise resource planning systems. |
In the 1980s, FAA began considering how a satellite-based navigation system might eventually replace the ground-based system that has long provided navigation guidance to aircraft. In August 1995, after several years of research, FAA contracted with Wilcox Electric to develop WAAS to enhance GPS. However, because of concerns about the contractor’s work, FAA terminated the contract in April 1996. In May 1996, the agency entered into an interim contract with Hughes Aircraft Company (now Raytheon Systems), with the contract becoming final in October 1996. Accuracy, integrity, availability, continuity, and service volume are the major performance goals for the system to meet. Accuracy is defined as the degree to which an aircraft’s position as calculated using the system conforms to its true position. For precision approaches to runways, WAAS is expected to provide aircraft operators with position accuracy within 7.6 meters 95 percent of the time. Integrity is the system’s ability to provide timely warnings when its signals are providing erroneous information and, thus, should not be used for navigation. WAAS is expected to provide a warning to aircraft operators within 5.2 seconds. Availability is the probability that, at any given time, the system will meet FAA’s accuracy and integrity requirements for a specific phase of flight. For precision approaches, WAAS is expected to be available all but 9 hours per year. Continuity is the probability that the system’s signal will meet accuracy and integrity requirements continuously for a specified period. Service volume is the area of coverage for which the system’s signal will meet availability requirements. As shown in figure 1, WAAS is a network of ground stations and geostationary (GEO) communications satellites: Reference stations on the ground (up to 53 units) will serve as the primary data collection sites for WAAS. These stations will receive data from the GPS and GEO satellites. 
Master stations on the ground (up to seven units) will process the data collected by the reference stations and generate accuracy corrections and integrity messages for each of the GPS and GEO satellites. These stations will also validate the transmitted corrections. Ground earth stations (up to 14 units) will, among other things, transmit accuracy corrections and integrity messages generated by the master stations to FAA’s GEO satellites. GEO satellites (up to six satellites) will transmit wide-area accuracy corrections and integrity messages to aircraft and also broadcast signals that will be similar to the signals broadcast by the GPS satellites. A ground communications system will transmit information among the reference stations, master stations, and ground earth stations. For pilots to use WAAS for navigation, their aircraft will have to be equipped with receivers that process the information carried by the GPS and GEO signals. The receivers will enable the pilots to determine the precise time and the speed and three-dimensional position (latitude, longitude, and altitude) of their aircraft. By July 30, 1999, FAA expects that WAAS’ initial operational capability will be available for pilots’ use. At that time, WAAS is expected to support aircraft navigation for all phases of flight. However, the initial system will not contain all the required hardware and software components needed for (1) redundancy in the event of equipment failures and (2) availability for the nation’s entire airspace. By December 2001, FAA plans to develop a fully operational WAAS by adding reference stations and upgrading software under the Raytheon contract and adding GEO satellites under a separate contract. The full system is expected to be capable of eventually serving as a “sole means” navigation system. That is, the system must, for a given operation or phase of flight, allow the aircraft to meet all navigation system performance requirements. 
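The availability goal described above ("all but 9 hours per year") can be restated as the probability figure the requirement implies. The conversion below is an illustration of ours, not a formula from the report:

```python
# Restating the WAAS availability goal ("all but 9 hours per year")
# as a probability. The 9-hour outage budget comes from the report;
# the conversion itself is an illustrative calculation.

HOURS_PER_YEAR = 365.25 * 24  # about 8,766 hours

def availability_from_outage(outage_hours_per_year: float) -> float:
    """Probability that the service is usable at a random instant."""
    return 1.0 - outage_hours_per_year / HOURS_PER_YEAR

avail = availability_from_outage(9)
print(f"Implied availability: {avail:.5f}")  # 0.99897
```

By the same token, the 7.6-meter accuracy figure is a 95th-percentile bound on position error rather than a worst case.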
The Secretary’s report provided a complete assessment of the major risks FAA faces in achieving the technical performance goals of the WAAS project. It also disclosed the cost uncertainties and the range and probabilities of potential costs. However, the report could have done more to disclose the uncertainties associated with FAA’s schedule for making WAAS fully operational. In discussing the risks FAA faces in developing WAAS, the Secretary’s report highlighted the vulnerability of the system’s signals to intentional or unintentional interference from electronic equipment. It also discussed mitigation strategies, including the possibilities of an independent backup system and full access to a second GPS frequency. Concerns about the system’s vulnerability to electronic interference have been highlighted in recent months. In an October 1997 report, the President’s Commission on Critical Infrastructure Protection warned against relying on satellite navigation as the sole source of aircraft landing guidance in light of potential interference. That same month, a group of independent experts from outside FAA, called together by the agency’s management to study the technical issues facing WAAS, raised concern about the possible intentional jamming of the signals. In February 1998, the FAA Administrator’s task force on the National Airspace System’s modernization recommended that FAA address the risks posed by electronic interference and gain consensus among users of the system about the agency’s plan to switch from ground- to satellite-based navigation. The Secretary’s report recognized that WAAS’ vulnerability to interference must be assessed and appropriate countermeasures must be in place before FAA can complete the transition to a satellite-based navigation system. The report cited several elements of a risk mitigation plan. 
For example, FAA has developed procedures for reporting and responding to interference that include outfitting flight inspection aircraft with the capability to locate sources of interference. FAA may employ other risk mitigation strategies as well. One is the retention of an independent backup system. The Secretary’s report noted that FAA is studying the need for such a system, and if the need for a backup is established, the agency would evaluate various alternatives. While the backup system would not have to provide aircraft operators with the same operational capability as WAAS or the current ground-based system, it would have to provide, at a minimum, safe navigation in the event of a loss of service from WAAS. Rather than designating WAAS as a sole means navigation system, FAA may designate it initially as a “primary means” system until concerns about electronic interference are resolved. This means that WAAS would not be expected to fully meet all availability and continuity requirements for navigation. As a result, aircraft operators would either have to be equipped with a backup navigation system or have restrictions on when and where they could fly. FAA and Mitre Corporation officials told us that if an independent backup system is retained, FAA may decide to deploy fewer WAAS reference stations and satellites. In making this decision, the agency would consider whether civil air navigation requirements could be met more cost-effectively with a combination of an independent backup system and WAAS with fewer reference stations and satellites. Even if WAAS remains unmodified, the system’s benefits to FAA and aircraft operators could be expected to decrease if some portion of the current ground-based network is retained as an independent backup system. FAA has intended to decommission its entire network of ground-based navigation aids between 2005 and 2010—with the phaseout concentrated toward the end of that period. 
In January 1998, FAA found that full decommissioning would result in the agency’s saving about $500 million (in net present value) over WAAS’ life cycle. The agency also expected aircraft operators to be able to reduce the proliferation of on-board navigation equipment. The benefit-cost analysis estimated that the operators would save about $350 million by removing such equipment. Another risk mitigation strategy to counteract WAAS’ vulnerability to electronic interference (particularly unintentional interference) is the use of a second frequency. If one GPS frequency were lost because of interference, a second frequency could be used to provide service. However, the current WAAS design assumes the use of single-frequency receivers on board aircraft. The Department of Transportation (DOT) and DOD, as joint chairs of the Interagency GPS Executive Board, are working toward providing aviation and other civil users with full access to a second GPS frequency on the next generation of GPS satellites. Although the second civil frequency would not be fully operational on GPS satellites until about 2010, FAA would prefer to build WAAS ground- and space-based equipment so that users could operate with “forward compatible” receivers—that is, receivers that can be built to operate with a single frequency now and also operate with dual frequencies in the future. Once a final decision on the second frequency is made, FAA and industry will need up to 2 years to develop the minimum operational performance standards so that manufacturers can begin producing receivers capable of single- and dual-frequency operations. In 1997, we expressed our concern that FAA’s cost estimates for WAAS were firm, discrete-point estimates, implying a level of precision that could not be supported, particularly early in the project’s development. The Secretary’s report addressed this concern by identifying a range of possible costs and associated probabilities.
The Secretary’s report stated at a high confidence level (an 80-percent probability) that WAAS’ 15-year life-cycle cost will not exceed about $3 billion. Overall, this estimate is $600 million higher than the agency’s September 1997 estimate. FAA attributes this increase to the costs of leasing additional GEO satellites being higher than expected. (See table 1.) We agree with the Secretary’s report that the greatest degree of uncertainty about the WAAS cost estimates surrounds the costs of the satellites. The uncertainty exists because FAA does not yet know exactly how many additional satellites will be needed and how much the per unit costs will be. The Secretary’s report also states at a high confidence level that the operations and maintenance cost of satellites will be no more than about $1.2 billion, or about 40 percent, of the project’s total cost of $3 billion. This estimate includes about $200 million for the cost of maintaining the leases on the two existing satellites for which FAA currently contracts with Comsat and $1 billion for leasing additional satellites. FAA’s cost estimate assumes that the two satellites leased from Comsat will be retained and two to four additional satellites (with three being the most probable number) will be obtained. The annual unit costs for the added satellites range from about $12 million to $25 million (with $17 million being the most probable cost). The uncertainty surrounding GEO satellite costs is likely to be reduced as more data become available. FAA intends to make a decision on the number of satellites needed for the full WAAS after determining the placement of satellites in space and how well the GPS satellites are performing. The agency will know what the per unit costs will be after it comes to an agreement with a vendor for satellite services. 
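The satellite cost uncertainty described above lends itself to a probability-weighted sketch. The satellite counts (two to four additional, with three most probable, weighted 20/65/15 in FAA's January 1998 base case) and the $12 million to $25 million annual unit cost range (most probable $17 million) are from the report; the use of a triangular distribution for unit cost is our assumption, not necessarily FAA's method:

```python
import random

random.seed(0)  # reproducible illustration

def simulate_annual_cost() -> float:
    """One draw of the annual cost of the added GEO satellites, in $ millions."""
    n_satellites = random.choices([2, 3, 4], weights=[0.20, 0.65, 0.15])[0]
    unit_cost = random.triangular(12.0, 25.0, 17.0)  # $12M-$25M, mode $17M
    return n_satellites * unit_cost

draws = [simulate_annual_cost() for _ in range(100_000)]
mean_cost = sum(draws) / len(draws)
print(f"Expected annual cost of added satellites: ~${mean_cost:.0f} million")
```

Analytically, the expected count is 0.20(2) + 0.65(3) + 0.15(4) = 2.95 satellites and the mean of the triangular distribution is (12 + 25 + 17)/3 = $18 million, so the expected annual cost is about $53 million.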
On January 8, 1998, FAA issued a request for information seeking input from vendors that would be willing to finance the costs of designing, building, and launching the GEO satellites. FAA would commit to a multiyear lease for satellite services and reimburse the vendor for its costs. According to the Secretary’s report, the agency has targeted April 1998 for issuing a request for proposals to solicit vendors’ bids and July 1998 for awarding a contract. Two types of leases are potentially applicable to FAA’s satellite leasing strategy: the operating lease and the capital lease. An issue to be resolved is how budget authority for the satellite leasing costs will be scored. According to the scorekeeping guidelines contained in the Conference Report for the Budget Enforcement Act of 1997, operating leases for physical assets are primarily intended to meet short-term capital needs and are to be used to obtain general purpose equipment (that is, equipment not built to meet a unique government specification or need) and equipment that has a private sector market. Capital leases for physical assets, on the other hand, are intended to be generally longer term and used to obtain equipment built to meet unique government-specified needs or uses and leased to the government for most of its useful economic life. FAA’s satellite lease would likely be scored as an operating lease if FAA signs a long-term lease through which the agency leases space on “hosted” GEO satellites. In other words, FAA’s WAAS payload would share space on satellites with other users. Scorekeeping guidelines require that an agency have sufficient budget authority to cover at least the cost of the first year of the contract plus any cancellation fees. According to WAAS’ funding profile, FAA expects no leasing costs in fiscal years 1999, 2000, and 2001 if the vendor agrees to cover the costs of building and launching the satellites and to wait until 2002 for FAA’s first payment on the contract. 
FAA’s Assistant Chief Counsel, Procurement Law Division, told us that the agency may enter into contracts without budget authority for its cancellation fees because FAA has multiyear contracting authority that exempts it, under certain conditions, from the Anti-Deficiency Act. FAA’s satellite lease would likely be scored as a capital lease if FAA signs a long-term lease for “dedicated” satellites that would be built to meet WAAS’ specifications and used primarily, if not exclusively, for WAAS’ operations. In that case, scorekeeping guidelines require enough up-front budget authority to reflect the estimated net present value of the entire lease, about $290 million, in fiscal year 1999, the first year of the contract. Congressional approval of this amount would result in less budget authority being available for other programs funded through the appropriations process in that fiscal year. Although the Secretary’s report discussed risk factors that could affect the achievement of FAA’s schedule goals for developing WAAS, it fell short of providing a complete assessment. For example, while it assigned a 99-percent degree of confidence in meeting various milestones during fiscal year 1998, the report did not assign probabilities for milestones for fiscal year 1999 and beyond. The agency has set schedule goals for the development of the initial and full system but has provided no range or confidence levels for achieving those goals. The conferees for the DOT Appropriations Act for fiscal year 1998 required the Secretary of Transportation to provide by February 15, 1998, a detailed report on FAA’s plans to provide satellite communications for WAAS. According to the Secretary’s transmittal letter to the Congress, his Department’s report of February 11 included these plans. In our view, however, the report could have done more to discuss the uncertainties FAA faces in obtaining the required GEO satellites.
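The budgetary difference between the operating and capital lease treatments discussed earlier comes down to how much of the payment stream's net present value must be covered up front. The sketch below uses a hypothetical payment stream and a 7-percent discount rate chosen only to illustrate the mechanics; they are not FAA's figures (the report's capital-lease figure is about $290 million):

```python
# Net present value of a lease payment stream. The payment amount,
# lease length, and discount rate here are hypothetical illustrations;
# only the scoring rules described in the text are from the report.

def npv(payments: list[float], rate: float) -> float:
    """NPV of annual payments made at the end of years 1, 2, ..."""
    return sum(p / (1.0 + rate) ** t for t, p in enumerate(payments, start=1))

lease_payments = [50.0] * 10  # $50M per year for 10 years (hypothetical)
print(f"Capital-lease scoring would require NPV up front: "
      f"${npv(lease_payments, 0.07):.0f} million")
# Operating-lease scoring would instead require budget authority only for
# the first year's payment plus any cancellation fees.
```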
As already noted, FAA released a request for information from satellite vendors on January 8 and has evaluated this information. According to the Secretary’s report, the agency expects to issue a request for proposal by April 1998, award a contract to a satellite provider by July 1998, and complete the launching and testing of the satellites by October 2001. By December 2001, only 2 months later, WAAS is scheduled to become fully operational. One major uncertainty is whether FAA will find a vendor willing and able to complete the launching and testing of the satellites by October 2001. In responding to the January 8 information request, a number of potential vendors pointed to 2002 or 2003 as a more realistic schedule for putting the satellites in orbit. If the GEO satellites are launched after 2001, the resulting delay would be likely to have implications for the project’s benefits and costs. Benefits would decrease, for example, because users would not have a fully operational system available for navigation as early as expected. Aircraft operators would not realize some portion of the $350 million (in net present value) that FAA estimates operators would save by removing ground-based navigation equipment from their aircraft. At the same time, the project’s costs would be likely to increase. In April 1998, FAA’s WAAS program office estimated that a 12-month delay would cost an additional $6 million. This amount would be needed to pay Raytheon to retain a core staff of system engineers to complete the integration and testing of the GEO satellites. Another major uncertainty centers on the time needed to award a contract for the satellites. If the satellite contract is not awarded by July 1998 as planned, the remainder of the schedule is likely to slip. Contract award by that date, however, is doubtful for two reasons. First, negotiations over the terms of the contract might become protracted as FAA and the vendor seek to minimize their financial risks. 
For example, FAA expects the vendor to invest hundreds of millions of dollars to cover the costs of building and launching the satellites. However, while FAA expects to pay a premium for the vendor to finance the satellite costs, the vendor may not wish to carry the costs until FAA begins paying, as planned, in fiscal year 2002—more than 3 years after the contract is awarded. FAA and the vendor will be negotiating on the extent of the government’s financial guarantees. These guarantees are likely to take the form of cancellation fees that FAA would pay in the event the contract is terminated. Second, FAA may defer contract award until it receives congressional approval to enter into a 10-year lease for the GEO satellites. Under 49 U.S.C. 40111 and 40112, the agency is currently limited to contracts with a 5-year base period and 3 option years. To reduce its costs for the satellite lease, FAA would like to be able to extend the satellite leasing period from 5 years to 10 years and intends to seek the authority to enter into multiyear contracts for an unlimited number of years. In making investment decisions, FAA conducts benefit-cost analyses to determine if the benefits to be derived from acquiring new equipment outweigh the costs. FAA’s analyses dating back to 1994 have always found WAAS to be a cost-beneficial investment—that is, the benefits clearly exceeded the costs. (See app. I for details on FAA’s benefit-cost analyses for the WAAS project in 1994, 1996, 1997, and 1998.) In FAA’s benefit-cost analyses, the costs for WAAS included the future life-cycle costs for facilities and equipment as well as operations and maintenance costs and the costs for decommissioning the current ground-based navigation aids, such as very high frequency omnidirectional ranging (VOR) units.
The system’s benefits to FAA included the savings from reduced maintenance of the navigation aids that are to be decommissioned and the avoidance of capital expenditures for replacing those aids with new ground-based equipment. Aircraft operators—the users of WAAS—also benefit. The users’ benefits included the reduction of accident-related costs (from death, injury, and property damage) because the system’s landing signals would be available at airports or runways that currently lack precision landing capability. Also, aircraft operators could benefit by reducing the proliferation of on-board navigation equipment and receiving savings that result from the shorter flight times on restructured, more direct routes that aircraft could fly using WAAS. Shorter flight times from these more direct routes also benefit passengers. Nonaviation benefits were excluded from FAA’s analyses. FAA’s investment analysis group prepared the agency’s most recent benefit-cost analysis, in January 1998, to assist FAA in evaluating whether WAAS was a sound investment. Unlike previous analyses, FAA’s January 1998 analysis used a risk assessment methodology that recognized uncertainties and placed confidence levels on each outcome. The base case analysis assumed that the two existing satellites will continue to be leased throughout the WAAS life cycle and that additional dedicated satellites will be necessary according to the following probabilities: two more satellites, 20-percent probability; three more satellites, 65-percent probability; and four more satellites, 15-percent probability. The base case analysis also assumed that there is a 100-percent probability that all ground-based navigation systems will be decommissioned by 2010. In its analysis, FAA also included the value of the time passengers would save, assuming a range of savings that generally varied from about 20 to 60 seconds, with the most probable amount being 30 seconds, in calculating the benefit-cost ratios. 
This analysis found (1) a 20-percent chance (the low confidence level) that the WAAS benefit-cost ratio could be 4.0 or greater and (2) an 80-percent chance (the high confidence level) that the ratio could be 3.0 or greater. Expressed another way, the net benefits (dollar value of benefits minus costs) of WAAS were $3.4 billion or greater at the low confidence level and $2.4 billion or greater at the high confidence level. As discussed previously, it is possible that satellite costs could increase and that FAA would decide to retain some of its ground-based navigation systems. To understand the impact of these possibilities, we asked FAA’s investment analysis group to perform alternative runs of their benefit-cost analysis using the methodology that they followed. The scenarios we requested made the following assumptions: a 20-percent probability that the two existing leased satellites will continue to be leased throughout WAAS’ life cycle and an 80-percent probability that they will be replaced with one, more expensive, dedicated leased satellite; a 50-percent probability that three additional satellites will be needed and a 50-percent probability that four additional satellites will be needed; and a 50-percent probability that 125 VOR units will never be decommissioned and a 50-percent probability that 650 VOR units will never be decommissioned. DOT’s guidance, dated April 9, 1997, directs departmental staff to include passenger time savings in benefit-cost analyses. The guidance notes that a controversy exists over whether small increments of time savings, such as a few minutes or less, should be valued at the same hourly rate as larger increments.
However, it concludes that assuming “a constant value per hour for large and small time savings is probably appropriate.” The Director, FAA’s Office of Aviation Policy and Plans, told us that while only small increments of passenger time savings may result from any one FAA project, more significant—and clearly valuable—time savings may result from aggregating the small increments. Because FAA develops and implements many aviation projects over a number of years, the agency would not know the total impact of these projects on passenger time savings unless all increments were captured in its benefit-cost analyses. An official of the Office of Management and Budget (OMB) told us that her office does not provide specific guidance to federal agencies about the valuation of small increments of passenger time savings. She said that while OMB has not formally endorsed DOT’s April 1997 guidance, OMB’s staff do not have any major concerns with it. Concerned that passengers might not perceive and value time savings of as little as 30 seconds, we reviewed the economic literature about the validity of using small increments of time and found that no consensus exists. (See app. II for a discussion of the literature.) In the absence of a consensus among experts, we requested that FAA’s investment analysis group perform an alternative run of its January 1998 benefit-cost analysis base case excluding the value of small increments of passenger time savings. The results shown in table 2 reflect the use of the alternative assumptions compared with those in FAA’s 1998 base case analysis. We found that our alternative cost and decommissioning assumptions alone did not cause much of a decrease in the benefit-cost ratios and net benefits. Excluding small increments of passenger time savings caused a more pronounced decrease.
For example, we found that at the high confidence level, net benefits declined by only $0.2 billion—from $2.4 billion or greater using FAA’s base case assumptions to $2.2 billion or greater using our alternative cost and decommissioning assumptions. However, the exclusion of small increments of passenger time savings alone led to a $1 billion decline in net benefits. Nevertheless, when the alternative assumptions are taken together, the system’s benefits still exceed the costs by nearly a 2-to-1 ratio. The Secretary’s report adequately discussed the risks FAA faces in achieving the performance and cost goals for the WAAS project. It could have done more, however, to recognize the schedule uncertainties, particularly those related to obtaining the GEO satellites. More information would help the Congress and the administration in deciding on future investments in the WAAS project. Information on the range of milestones for making the system operational and the probabilities attached to those milestones would aid decisionmakers in determining the timing of the investments. Also, a detailed explanation of FAA’s strategy for leasing GEO satellites would help them in understanding the cost and budgetary implications. Particularly useful would be information on (1) the cost-effectiveness of the hosted and dedicated satellite options and (2) the estimated premium to be paid for a vendor’s financing of the building and launching of the satellites. Even under our alternative assumptions, WAAS’ benefits clearly outweigh its costs. However, the continued investment in WAAS must compete with other demands on FAA’s capital and operating budgets. When more is known about the likely costs for obtaining GEO satellites and the extent to which the agency may retain existing ground-based navigation aids, an updated benefit-cost analysis would help the Congress and administration in making future investment decisions. 
The analysis would be more useful if the agency compared an investment in WAAS with alternative uses of FAA’s resources and explained the effects of including small increments of passenger time savings on the benefit-cost ratio and net benefits of the system. To assist the Congress in making future funding decisions for the Wide Area Augmentation System project, we recommend that the Secretary of Transportation direct the FAA Administrator to report to the Congress (1) information on the range of milestones for making the initial and full Wide Area Augmentation System operational and the probabilities associated with those milestones; (2) a detailed explanation of the agency’s strategy for leasing geostationary satellites; and (3) updated benefit-cost analyses, including a comparison with alternative investments of FAA’s resources and an explanation of the effects of including small increments of passenger time savings. We provided a draft of this report to the Departments of Transportation and Defense for review and comment. We met with officials from the Office of the Secretary of Transportation and FAA, including the Director, Communications, Navigation, and Surveillance (CNS) Systems; the Chairman, Satellite Operational Implementation Team; the WAAS Program Manager; and the Manager, CNS/Facility Investment Analysis. We also spoke with the Assistant for GPS, Positioning, and Navigation Policy, Office of the Deputy Under Secretary of Defense (Space). DOT and DOD generally agreed with our draft report’s findings, conclusions, and recommendations. They gave us information and suggestions to help make the report clearer and more accurate. We incorporated their suggestions where appropriate. DOT expressed concern that the wording in our draft report could leave the impression that we believe FAA improperly calculated the WAAS benefit-cost ratio because of the inclusion of small increments of passenger time savings.
The agency noted that DOT’s guidance directs departmental staff to include all increments—large and small. We did not intend to suggest that FAA should not follow DOT’s guidance, and we have added language to the report to clarify this. However, our review of the economic literature found a lack of consensus among experts on the validity of using small increments of passenger time savings and our sensitivity analysis found that the inclusion of small increments was significant for WAAS’ benefit-cost ratio and net benefits. Taken together, these findings argue for informing decisionmakers about the effects of including small increments of passenger time savings when future benefit-cost analyses are conducted for the WAAS project. To obtain information for this report, we interviewed (1) officials at FAA headquarters and DOD, including DOD’s National Reconnaissance Office; (2) representatives from Raytheon (previously Hughes Aircraft), the prime contractor on WAAS; and (3) officials from the Mitre Corporation who provide technical advice to FAA. We reviewed agency documentation on the current schedule, life-cycle costs, and performance goals for WAAS. We also reviewed technical reports from the WAAS contractor and outside experts that discussed the risks and challenges facing the project. To identify the potential impact of differing assumptions on the benefit-cost ratio for WAAS, we asked FAA to run alternative analyses. We performed our review from October 1997 through April 1998 in accordance with generally accepted government auditing standards. We did not assess the reliability of all cost information. However, with regard to satellite costs—the major cost item contributing to increased life-cycle costs—we did satisfy ourselves that the information being used by FAA is in general agreement with those estimates provided by outside sources, such as DOD. 
Also, while we did not perform an extensive review of the model used to calculate benefit-cost ratios, the model used by FAA is widely recognized as an appropriate economic analysis tool for providing risk-adjusted benefit-cost ratios. We are sending copies of this report to interested congressional committees, the Secretaries of Transportation and Defense, and the Administrator of FAA. We will also make copies available to others on request. If you or your staff have any questions or need additional information, please call me at (202) 512-3650 or send email to dillinghamg.rced@gao.gov. Major contributors to this report are listed in appendix III. The results of the Federal Aviation Administration’s (FAA) benefit-cost analyses of the Wide Area Augmentation System (WAAS) project in 1994, 1996, 1997, and 1998 are summarized in table I.1. On the benefit side, benefits to the government accrue from the reduced maintenance of the existing, ground-based network of navigation aids and the avoidance of capital expenditures for replacing these aids. Benefits to users—the aircraft operators—fall into five categories: Efficiency benefits derive from having precision landing capability at airports where it does not now exist. Avionics cost savings reflect how WAAS will enable users to reduce the proliferation of avionics equipment in their cockpits. Fuel savings reflect the use of less fuel to fly aircraft that carry less avionics equipment. Safety benefits stem from the reduction in accident-related costs (death, injury, and property damage) because of the availability of WAAS landing signals at airports that presently lack a precision landing capability. Direct route savings result from the shorter flight times associated with restructured, more direct routes that aircraft can fly because of WAAS. Table I.1: FAA’s Analysis of the Net Present Value of Benefits and Costs for the WAAS Project, 1994, 1996, 1997, and 1998
FAA’s September 1997 benefit-cost analysis took a more conservative approach than previous analyses in estimating the benefit-cost ratio. That is, compared with the previous analyses, the assumptions underlying the September study increased the expected costs of WAAS and simultaneously reduced the expected benefits, which resulted in a lower benefit-cost ratio than found in the previous versions of the study. The higher costs in the 1997 analysis were largely due to the inclusion of the costs of decommissioning ground-based navigation systems that were not included in any earlier versions of the study. On the benefit side, several changes in key assumptions led to reduced expected benefits, including (1) a shorter life cycle for the project, (2) a reduction in the assumed “saved” costs from phasing out ground-based navigation systems, (3) a reduction in estimated safety benefits based on the use of more recent accident data, and (4) a reduction in the expected flight time savings resulting from more direct routes. In addition, the high benefit-cost ratio in the 1997 analysis included passenger time savings. The low benefit-cost ratio in the 1997 analysis excluded passenger time savings. The January 1998 analysis used a different methodology in calculating benefit-cost ratios than did the 1997 analysis. The 1998 analysis assessed how multiple events, such as a combination of numbers of satellites and ranges of satellite costs, could affect the benefit-cost ratio. This analysis then produced benefit-cost ratios at high and low confidence intervals. That is, at the high confidence level, there is an 80-percent chance that the benefit-cost ratio could be 3.0 or greater. Conversely, at the low confidence level, there is a 20-percent chance that the benefit-cost ratio could be 4.0 or greater. This analysis did not, however, exclude passenger time savings.
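Because the body of this report gives both a benefit-cost ratio r and net benefits N = B - C at each confidence level ($2.4 billion at r = 3.0 and $3.4 billion at r = 4.0), the implied discounted benefits and costs can be backed out from C = N/(r - 1) and B = rC. This derivation is ours, offered only as a consistency check; FAA did not publish figures in this form:

```python
# Backing out implied benefits and costs (in $ billions) from a
# benefit-cost ratio and the corresponding net benefits. The ratios and
# net-benefit figures are from the report; the algebra is ours.

def implied_costs_benefits(ratio: float, net_benefits: float):
    """Given B = ratio * C and net = B - C, return (C, B)."""
    costs = net_benefits / (ratio - 1.0)
    return costs, ratio * costs

for ratio, net in [(3.0, 2.4), (4.0, 3.4)]:
    c, b = implied_costs_benefits(ratio, net)
    print(f"ratio {ratio:.1f}: implied costs ~${c:.2f}B, benefits ~${b:.2f}B")
```

The implied costs (roughly $1.1 billion to $1.2 billion) are smaller than the $3 billion life-cycle figure cited earlier, as would be expected if the benefit-cost analysis discounts future costs.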
The 1997 analysis assessed how individual events, such as increased satellite costs, could affect the benefit-cost ratio. While this analysis did not assign confidence levels to its benefit-cost ratios in arriving at the 5.2 high estimate and the 2.2 low estimate, it did exclude passenger time savings in the low estimate. A sizable portion of the calculated benefits of WAAS is from the time aviation passengers are expected to save once the system is in place. However, most of these savings come in small increments of time—a minute or less per passenger trip. Concerned that passengers might not perceive and value time savings of a minute or less, we requested, as a sensitivity analysis, alternative runs of the WAAS benefit-cost analysis that excluded these passenger time savings. We made this request because we found that there is considerable controversy and no consensus in the economic literature about whether travelers perceive and value very small time savings. We do not suggest that these benefits should be excluded from the benefit-cost analysis of WAAS or that FAA should undertake an analysis that is not in accordance with the Department of Transportation’s guidance that directs its staff to include small increments of passenger time savings in benefit-cost analyses. However, we believe it is useful to understand how sensitive the benefit-cost results were to the inclusion of small increments of passenger time savings. This appendix provides information on (1) the value of passenger time in the WAAS benefit-cost analysis and (2) the issues discussed in the economics literature on the value of very small increments of time to travelers. FAA’s most recent WAAS benefit-cost analysis found, as is the case for many transportation improvement projects, considerable benefits attributable to reduced travel time for passengers.
In the case of WAAS, about 40 percent of the calculated benefits, or approximately $1 billion in the base case benefit-cost analysis, are due to time savings that would accrue to travelers because of slightly reduced flight times. FAA officials told us that, on average, these time savings would probably be about 30 seconds per flight. FAA’s guidance regarding the valuation of small increments of passenger time savings suggests that there is no reason, based on either empirical findings or theoretical concepts, that these small increments should not be valued at the same per hour rate as larger increments of time savings. We reviewed several studies, including an overview study prepared for FAA, on the issue of the value of small increments of passenger time. As FAA’s analysis points out, there is limited empirical work on this issue. Several studies from the 1970s suggest that travelers may place little value on very small time savings, such as 1 minute. However, the findings of these studies may have limited applicability to WAAS. First, these studies, and most other analyses of this issue, are focused on intracity commuter travel. The nature and characteristics of such travel are very different from intercity air travel and, accordingly, results from the studies may have little applicability to how intercity air travelers value time. Second, the study prepared for FAA discusses the considerable methodological problems and limitations in these empirical studies. Because of these problems, and the lack of studies focused on air travel, the empirical literature does not provide definitive evidence about how small increments of time savings are valued by travelers who would benefit from WAAS. Our review of the conceptual arguments regarding the value of small increments of passenger time revealed a mixed message.
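The scale of these benefits, and their sensitivity to how very small savings are valued, can be illustrated with a short calculation. The traffic volumes, the per-hour value of time, and the 60-second perception threshold below are all assumptions for illustration, not figures from FAA's analysis.

```python
# Contrast two ways of valuing small per-flight time savings: pro rata
# valuation versus a perception threshold below which savings count as
# zero. All parameter values are assumed for illustration.

VALUE_OF_TIME = 28.60  # $ per passenger-hour (assumed)

def annual_time_benefit(flights, passengers_per_flight, seconds_saved,
                        threshold_seconds=0.0):
    """Dollar value of passenger time saved per year.

    With threshold_seconds > 0, savings below the threshold are valued
    at zero, reflecting the view that travelers do not perceive or
    value very small time savings.
    """
    if seconds_saved < threshold_seconds:
        return 0.0
    hours_saved = flights * passengers_per_flight * seconds_saved / 3600
    return hours_saved * VALUE_OF_TIME

# 30 seconds saved per flight, valued pro rata
pro_rata = annual_time_benefit(8_000_000, 100, 30)

# The same savings under a 60-second perception threshold
thresholded = annual_time_benefit(8_000_000, 100, 30, threshold_seconds=60)

print(f"Pro rata valuation:  ${pro_rata / 1e6:,.0f}M per year")
print(f"With 60 s threshold: ${thresholded / 1e6:,.0f}M per year")
```

The point of the contrast is that tiny per-trip savings, multiplied across millions of flights and passengers, aggregate to large totals under pro rata valuation but vanish entirely under a threshold assumption.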
There appear to be sound conceptual points on both sides of this debate: Some suggest that small increments of passenger time savings should be valued on a pro rata basis just as larger increments of time are; others suggest that less value should be placed on very small time savings. Those who argue that small time savings have little value cite several key reasons. First, people cannot perceive very small time savings, and if they cannot perceive them, they do not value them. Second, even if a savings of, for example, 1 minute is perceived, it will not be of value to a person unless that time can be put to some alternative use. Because it is likely to take some threshold amount of time to have value in an alternative use, very small increments of time cannot be used and are therefore not valued. Moreover, as the amount of time savings increases, more potential uses of that time become available. Conversely, several conceptual arguments suggest that there is no basis for valuing small time savings at less than their pro rata share of the value of larger time savings. First, some analysts have suggested that even if people do not perceive a time savings, they do place value on it if they put that time to an alternative use. Additionally, even if a threshold block of time is needed for an alternative use, people may always have some “spare” time that falls below that threshold; if so, a very small increment of time savings may, in some cases, push them over the threshold level and give them a usable block of time. This not only suggests that small increments of time may, in some cases, have considerable value, but it also points out that even spare, or unusable, time will be valued because there is the possibility of time savings from some other source that will meet the threshold for a usable time block.
The final argument for valuing even small time increments is that transportation improvement initiatives are somewhat arbitrarily divided into recognized “projects.” That is, across both time and geography, a variety of projects may be providing incremental time savings that may each be only a small amount, but when added together become significant. Hence, it is not appropriate to view the savings of a given project in isolation from other projects that might occur a year later or at a different location.

Major contributors to this report: Amy D. Abramowitz, Leslie Albin, John H. Anderson, Jr., Robert E. Levin, John T. Noto, and E. Jerry Seigler.

Pursuant to a legislative requirement, GAO reviewed the status of the Federal Aviation Administration's (FAA) Wide Area Augmentation System (WAAS) project, focusing on: (1) whether the Secretary of Transportation's report provides a complete assessment of FAA's risks in developing the WAAS project; and (2) how alternative assumptions would affect WAAS' benefit-cost analysis of January 1998.
GAO noted that: (1) the Secretary's report provided a complete assessment of FAA's risks in achieving the WAAS project's performance and cost goals but not its schedule goals; (2) in terms of system performance, the Secretary's report recognized that WAAS' vulnerability to intentional or unintentional interference from electronic equipment must be addressed; (3) in January 1998, FAA estimated that it would save about $500 million (in net present value) over the WAAS project's life cycle by fully phasing out its network of ground-based navigation aids; (4) if FAA retains some portion of this network, these benefits would decrease; (5) FAA also estimated that aircraft operators could save $350 million by removing ground-based navigation equipment from their aircraft; (6) these benefits would be reduced to the extent that operators must continue to keep such equipment on board; (7) by identifying a range of cost estimates and associated probabilities, the Secretary's report addressed GAO's past concern that FAA's firm, discrete-point cost estimates implied a level of precision that could not be supported; (8) GAO agreed with the Secretary's report that the greatest degree of uncertainty about the WAAS cost estimates relates to the costs of the geostationary communications satellites; (9) the uncertainty exists because FAA does not know exactly how many satellites will be needed and how much the per-unit costs will be; (10) the Secretary's report fell short of providing a complete assessment of the uncertainties FAA faces in achieving WAAS' schedule goals; (11) the report also did not discuss the risks to the overall schedule if FAA does not award the contract to lease the satellites by July 1998 as planned; (12) in January 1998, FAA's analysis found that the benefits to aviation from WAAS would be three times as great as its costs; (13) GAO requested that FAA recalculate its benefit-cost analysis to determine the impact of three alternative assumptions; (14) using
these cost and decommissioning assumptions did not cause much of a decrease in the benefit-cost ratio or the net benefits; (15) however, the exclusion of small increments of passenger time savings had a much more significant impact; and (16) when these alternative assumptions were taken together, GAO found that the net present value of the project's net benefits decreased by more than $1 billion but was still about twice as great as the costs.
One of the main purposes of guidance is to explain and help regulated parties comply with agency regulations. As shown in figure 1, guidance may explain how agencies plan to interpret regulations. Agencies sometimes include disclaimers in their guidance to note that the documents have no legally binding effect on regulated parties or the agencies. Even though not legally binding, guidance documents can have a significant effect on regulated entities and the public, both because of agencies’ reliance on large volumes of guidance documents and because the guidance can prompt changes in the behavior of regulated parties and the general public. Nevertheless, defining guidance can be difficult. To illustrate that difficulty, several of the components told us that they do not consider many of the communication documents they issue to the public to be guidance. Regulations and guidance documents serve different purposes. The Administrative Procedure Act (APA) established broadly applicable requirements for informal rulemaking, also known as notice and comment rulemaking. Among other things, the APA generally requires that agencies publish a notice of proposed rulemaking in the Federal Register. After giving the public an opportunity to comment on the proposed regulation by providing “written data, views, or arguments,” and after considering the public comments received, the agency may then publish the final regulation. To balance the need for public input with competing societal interests favoring the efficient and expeditious conduct of certain government affairs, the APA exempts certain types of rules from the notice and comment process, including “interpretative rules” (we will refer to these as interpretive rules in this statement) and “general statements of policy.” Regulations affect regulated entities by creating binding legal obligations. 
Regulations are generally subject to judicial review by the courts if, for example, a party believes that an agency did not follow required rulemaking procedures or went beyond its statutory authority. Despite the general distinctions between regulations and guidance documents, legal scholars and federal courts have at times noted that it is not always easy to determine whether an agency action should be issued as a regulation subject to the APA’s notice and comment requirements, or is guidance or a policy statement, and therefore exempt from these requirements. Among the reasons agency guidance may be legally challenged are procedural concerns that the agency inappropriately used guidance rather than the rulemaking process or concerns that the agency has issued guidance that goes beyond its authority. On March 9, 2015, the Supreme Court held that an agency could make substantive changes to an interpretive rule without going through notice and comment under the APA. This decision overturned prior federal court rulings that had held that an agency is precluded from substantively changing its interpretation of a regulation through issuance of a new interpretive rule without notice and comment. Other concerns raised about agency use of guidance include consistency of the information being provided, currency of guidance, and whether the documents are effectively communicated to affected parties. An OMB Bulletin establishes policies and procedures for the development, issuance, and use of “significant” guidance documents. OMB defines “significant guidance documents” as guidance with a broad and substantial impact on regulated entities. Pursuant to a memorandum issued by the Director of OMB in March 2009, OMB’s Office of Information and Regulatory Affairs (OIRA) reviews some significant guidance documents prior to issuance. All significant guidance documents, whether reviewed by OIRA or not, are subject to the OMB Bulletin. 
“Economically significant guidance documents” are also published in the Federal Register to invite public comment. Non-significant guidance is not subject to the OMB Bulletin, and any procedures for developing and disseminating it are left to agency discretion. Selected departments considered few of their guidance documents to be significant as defined by OMB. For example, as of February 2015, agencies listed the following numbers of significant guidance documents on their websites: Education, 139; DOL, 36; and USDA, 34. We were unable to determine the number of significant guidance documents issued by HHS. In contrast, some of the agencies issued hundreds of non-significant guidance documents. All selected components told us that they did not issue any economically significant guidance. OIRA staff told us they accepted departments’ determinations of which types of guidance meet the definition of significant guidance. The selected components we reviewed differed in both the terminology they used for their external non-significant guidance documents and in the amounts of non-significant guidance they issued. We found the components used many names for these guidance documents—for example, Education components’ guidance documents included FAQs and “Dear Colleague” letters, while DOL components used varied terms including bulletins, “Administrator Interpretations,” directives, fact sheets, and policy letters. The components issued varying amounts of guidance ranging from 10 to more than 100 documents issued by a component in a single year. Component officials said a component’s mission or the types of programs it administers can affect the number of guidance documents issued. Officials from DOL’s Bureau of Labor Statistics (BLS) told us their agency, as a non-regulatory component, rarely issues guidance. They said BLS has issued about 10 routine administrative memorandums each year related to the operation of two cooperative agreement statistical programs.
In contrast, DOL Occupational Safety and Health Administration (OSHA) officials told us they have regularly issued guidance to assist with regulatory compliance, and could easily produce 100 new or updated products each year to provide guidance to stakeholders. Although the DOL Office of Workers’ Compensation Programs has regulatory authority, officials told us that they have not frequently issued guidance because their authorizing statutes have not changed recently and their programs focus on administering benefits. Agencies have used guidance for multiple purposes, including explaining or interpreting regulations, clarifying policies in response to questions or compliance findings, disseminating suggested practices or leadership priorities, and providing grant administration information. Component officials told us they used guidance to summarize regulations or explain ways for regulated entities to meet regulatory requirements. For example, Education officials told us that they often follow their regulations with guidance to restate the regulation in plainer language, to summarize requirements, to suggest ways to comply with the new regulation, or to offer best practices. In a few cases, components used guidance to alert affected entities about immediate statutory requirements or to anticipate upcoming requirements to be promulgated through the rulemaking process. Education officials told us they often used guidance to help their field office staff understand and apply new statutory requirements. While this may provide timely information about new or upcoming requirements, it may also cause confusion as details are revised during the rulemaking process. 
Officials from USDA’s Food and Nutrition Service (FNS) told us that when a new statute becomes effective immediately and there is little ambiguity in how the statute can be interpreted, they use a “staging process.” In this process, they issue informational guidance so their stakeholders are aware of and consistently understand new requirements before the more time-consuming rulemaking process can be completed. Other officials told us that in rare instances, they have issued guidance while a proposed rule is out for comment. They noted that statutory deadlines for implementation may require them to issue guidance before issuing a final rule. Component officials cited instances in which they used guidance to provide information on upcoming requirements to be promulgated through regulation to those affected. In one example, HHS’s Office of Child Care within the Administration for Children and Families issued recommendations to its grantees to foreshadow future binding requirements. In that case, the office issued an Information Memorandum in September 2011 recommending criminal background checks. It later published a proposed rule in May 2013 to mandate the background checks. Multiple component officials told us that they used guidance to clarify policies in response to questions received from the field, or regional office input about questions received from grantees or regulated entities. Officials at Education’s Office for Civil Rights and OSHA told us that they often initiated guidance in response to findings resulting from their investigatory or monitoring efforts, among other things. Component officials also told us that they used guidance to distribute information on program suggestions (sometimes called best practices). In particular, we heard this from component officials who administered formula grants in which wide discretion is given to grantees, such as states. 
Officials at Education’s Office of Postsecondary Education told us that component leadership initiates guidance related to priorities the administration wants to accomplish. One example they cited was a Dear Colleague letter explaining that students confined or incarcerated in locations such as juvenile justice facilities were eligible for federal Pell grants. Components that administered grants also issued procedural guidance related to grant administration. For example, BLS issued routine administrative memorandums to remind state partners of federal grant reporting requirements and closeout procedures. In other examples, DOL provided guidance on how to apply and comply with Office of Disability Employment Policy grants. Officials considered a number of factors before deciding whether to issue guidance or undertake rulemaking. Among these factors, a key criterion was whether officials intended for the document to be binding (in which case they issued a regulation). Officials from all components that issue regulations told us that they understood when guidance was inappropriate and when regulation was necessary and that they consulted with legal counsel when deciding whether to initiate rulemaking or issue guidance. According to DOL officials, new regulations may need to be issued if components determined that current regulations could not reasonably be interpreted to encompass the best course of action, a solution was not case specific, or a problem was widespread. An Education official told us that Education considered multiple factors, including the objective to be achieved, when choosing between guidance and regulations. Similarly, HHS’s Administration for Community Living officials told us that they considered a number of factors, including whether the instructions to be disseminated were enforceable or merely good practice. 
For example, when Administration for Community Living officials noticed that states were applying issued guidance related to technical assistance and compliance for the state long-term care ombudsman program differently, they decided it would be best to clarify program actions through a regulation, as they could not compel the states to comply through guidance. Officials believed that a regulation would ensure consistent application of program requirements and allow them to enforce those actions. They issued the proposed rule in June 2013 and the final rule in February 2015. FNS officials told us that the decision to issue guidance or undertake rulemaking depended on (1) the extent to which the proposed document was anticipated to affect stakeholders and the public, and (2) what the component was trying to accomplish with the issued document. OIRA staff concurred that agencies understood what types of direction to regulated entities must go through the regulatory process. We found that agencies did not always adhere to OMB requirements for significant guidance. The OMB Bulletin establishes standard elements that must be included in significant guidance documents and directs agencies to (1) develop written procedures for the approval of significant guidance, (2) maintain a website to assist the public in locating significant guidance documents, and (3) provide a means for the public to submit comments on significant guidance through their websites. Education and USDA had written procedures for the approval of significant guidance as directed by OMB. While DOL had written approval procedures, they were not available to the appropriate officials and DOL officials noted that they required updating. HHS did not have any written procedures. We found that Education, USDA, and DOL consistently applied OMB’s public access and feedback requirements for significant guidance, while HHS did not. 
We made recommendations to HHS and DOL to better adhere to OMB’s requirements for significant guidance. Both agencies concurred with those recommendations. Without written procedures or wide knowledge of procedures for the development of significant guidance, HHS and DOL may be unable to ensure that their components consistently follow other requirements of the OMB Bulletin and cannot ensure consistency in their processes over time. Further, because agencies rely on their websites to disseminate guidance, it is important that they generally follow requirements and guidelines for online dissemination for significant guidance. In the absence of government-wide standards for the production of non-significant guidance, officials must rely upon internal controls—which are synonymous with management controls—to ensure that guidance policies, processes, and practices achieve desired results and prevent and detect errors. We selected four components of internal control and applied them to agencies’ guidance processes (see appendix I). Departments and components identified diverse and specific practices that addressed these four components of internal control. However, the departments and components typically had not documented their processes for internal review of guidance documents. Further, agencies did not consistently apply other components of internal control. Some of the selected components identified practices to address these internal controls that we believe could be more broadly applied by other agencies. Wider adoption of these practices could better ensure that components have internal controls in place to promote quality and consistency of their guidance development processes.
To improve agencies’ guidance processes, we recommended that the Secretaries of USDA, HHS, DOL, and Education strengthen their components’ application of internal controls by adopting, as appropriate, practices developed by other departments and components, such as assessment of risk; written procedures and tools to promote the consistent implementation and communication of management directives; and ongoing monitoring efforts to ensure that guidance is being issued appropriately and has the intended effect. USDA, Education, HHS, and DOL generally agreed with the recommendations. Although no component can insulate itself completely from risks, it can manage risk by involving management in decisions to initiate guidance, prioritize among proposed guidance, and determine the appropriate level of review prior to issuance. In addition, if leadership is not included in discussions related to initiation of guidance, agencies risk expending resources developing guidance that is unnecessary or inadvisable. At a few components, officials told us that leadership (such as component heads and department-level management) decided whether to initiate certain guidance, and guidance did not originate from program staff for these components. For example, guidance at DOL’s Employee Benefits Security Administration related to legal, policy, and programmatic factors were proposed by office directors and approved by Assistant Secretaries and Deputy Assistant Secretaries. In most other cases, ideas for additional guidance documents originated from program staff and field offices or from leadership, depending on the nature of the guidance. Education officials told us that component program staff and leadership work together to identify issues to address in guidance. 
At most components, officials told us that they determine the appropriate level of review and final clearance of proposed guidance, and in many cases guidance was reviewed at a higher level if the document was anticipated to affect other offices or had a particular subject or scope. Risk was one factor agency officials considered when determining the appropriate level of review and final clearance of proposed guidance. For example, officials at the Employee Benefits Security Administration told us that the need for department-level clearance depended on various factors, including likely congressional interest, potential effects on areas regulated by other DOL components, expected media coverage, and whether the guidance was likely to be seen as controversial by constituent groups. A few agencies reported they considered two other factors in making this decision: whether guidance was related to a major priority or would be “impactful.” Control activities (such as written procedures) help ensure that actions are taken to address risks and enforce management’s directives. Only 6 of the 25 components we reviewed had written procedures for the entire guidance production process, and several of these components highlighted benefits of these procedures for their guidance processes. These components included HHS’s Administration for Children and Families Office of Head Start and five DOL components. The DOL Mine Safety and Health Administration’s written procedures contained information officials described as essential to the effective and consistent administration of the component’s programs and activities. OSHA officials reported that their written procedures were designed to ensure that the program director manages the process for a specific policy document by considering feedback and obtaining appropriate concurrence to ensure that guidance incorporates all comments and has been cleared by appropriate officials.
The Deputy Assistant Secretary resolves any disagreements about substance, potential policy implications, or assigned priority of the document. In contrast, Education’s Office of Innovation and Improvement, Education’s Office of Elementary and Secondary Education, and DOL’s Veterans’ Employment and Training Service had written procedures only for the review and clearance phase. Components without written procedures said they relied on officials’ understanding of the guidance process. In these cases, officials told us that the guidance process was well understood by program staff or followed typical management hierarchies. Officials from all components could describe standard review practices to provide management the opportunity to comment and ensure that its comments were addressed by program staff. Nonetheless, documented procedures are an important internal control activity to help ensure that officials understand how to adequately review guidance before issuance. Most selected components had guidance practices to ensure intra-agency review, interagency review, or both, of guidance documents before issuance. Obtaining feedback from management, internal offices, the public, and other interested parties is essential to ensuring guidance is effective. Intra-agency communications. To ensure that management concurrence was recorded, most components we reviewed used communication tools, such as electronic or hard-copy routing slips, to document approval for guidance clearance or to communicate with management and other offices about proposed or upcoming guidance. In particular, officials at 20 components used a routing slip to document management concurrence. Interagency communications. Most component officials told us that they conferred with other affected components or federal departments to ensure consistency during the development of guidance. External stakeholders.
Officials told us that feedback from external nonfederal stakeholders often served as the impetus for the initiation of guidance, and more than half of the selected components cited examples in which they conferred with external nonfederal stakeholders during the guidance development process. At OSHA, for example, external stakeholders were not involved in developing directives or issuing policy, but assisted with developing educational, non-policy guidance, such as hazard alerts. Nearly half of the components we reviewed did not regularly evaluate whether issued guidance was effective and up to date. Without a regular review of issued guidance, components can miss the opportunity to revisit whether current guidance could be improved and thereby provide better assistance to grantees and regulated entities. DOL’s Office of Labor-Management Standards officials told us they had not evaluated the relative success of existing guidance and therefore did not often revise guidance. A few selected components had initiated or established a process for tracking and evaluating guidance to identify necessary revisions. For example, in November 2011, officials at DOL’s Office of Federal Contract Compliance Programs initiated a 2-year project to review their directives system to ensure that they only posted up-to-date guidance. As a result of the project, in 2012 and 2013 officials identified necessary updates to guidance, clarified superseded guidance, and rescinded guidance where appropriate. Officials told us that these actions reduced the original number of directives by 85 percent. Officials also told us that they did this to ensure that their guidance was more accurate and correct, and the actions resulted in officials posting only relevant and current guidance information on the component’s website.
Officials told us they now routinely monitor their directives about once a year and review other guidance documents each time they issue new regulations or change a policy to decide if they need to revise them. DOL’s Employment and Training Administration used a checklist to review a list of active guidance documents and identified whether to continue, cancel, or rescind the guidance. In addition, officials indicated which documents were no longer active on their website. Lastly, DOL’s Mine Safety and Health Administration also ensured that program officials periodically reviewed and updated guidance documents and canceled certain guidance.

Chairman Lankford, Ranking Member Heitkamp, and members of the Subcommittee, this concludes my prepared remarks. I look forward to answering any questions you may have. For questions about this statement, please contact me at (202) 512-6806 or sagerm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony were Tim Bober, Assistant Director; Robert Gebhart; Shirley Hwang; Andrea Levine; and Wesley Sholtes.

Component of Internal Control: Risk Assessment. Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Once risks have been identified, they should be analyzed for their possible effects. Application to guidance processes: Agencies should assess the level of risk associated with potential guidance at the outset to determine (1) the legal implications of the use of guidance based on available criteria and (2) the appropriate level of review. Some agencies have found it helpful to categorize proposed guidance at initiation to determine different types and levels of review.

Component of Internal Control: Control Activities. Internal control activities help ensure that management’s directives are executed.
Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives. They help ensure that actions are taken to address risks. The control activities should be effective and efficient in accomplishing the agency’s control objectives. Application to guidance processes: The agency should maintain written policies, procedures, and processes to ensure that once the appropriate level of review has been determined, agency officials understand the process to adequately review guidance prior to issuance. Written policies and procedures should designate (1) the appropriate level of review to maintain appropriate segregation of duties, and (2) the means by which management can comment on the draft guidance and program staff can address those comments.

Component of Internal Control: Information and Communication. Information should be recorded and communicated to management and others within the entity who need it. In addition to internal communications, management should ensure there are adequate means of communicating with, and obtaining information from, external stakeholders who have a significant impact on the agency achieving its goals. Application to guidance processes: Internal communications: Agencies should have procedures in place to get feedback from management and other internal offices on guidance to be issued. For example, they should have a written mechanism (such as a routing slip) to document management review and associated comments and suggestions. External communications: Agencies should provide a means, via an e-mail box or contact person, for the public and interested parties to comment on the guidance, ask questions about the guidance, and facilitate two-way feedback and communication.

Component of Internal Control: Monitoring. Internal control should generally be designed to ensure that ongoing monitoring occurs in the course of normal operations.
Processes should be established to collect feedback on both the substance and clarity of guidance, to communicate this feedback to the appropriate officials, and to maintain applicable feedback to inform future guidance and revisions of guidance.

Departments and selected components reviewed: United States Department of Agriculture (USDA); Department of Health and Human Services (HHS), including the Administration for Children and Families’ Office of Child Care and the Administration for Children and Families’ Office of Head Start; and Department of Labor (DOL), including the Bureau of International Labor Affairs, Mine Safety and Health Administration, Occupational Safety and Health Administration, Office of Disability Employment Policy, Office of Federal Contract Compliance Programs, Office of Workers’ Compensation Programs, Veterans’ Employment and Training Service, and Women’s Bureau.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Regulatory guidance is an important tool agencies use to communicate timely information about regulatory and grant programs to regulated parties, grantees, and the public. Guidance provides agencies flexibility to articulate their interpretations of regulations, clarify policies, and address new issues more quickly than may be possible using rulemaking. The potential effects of guidance and risks of legal challenges underscore the need for consistent processes for the development, review, dissemination, and evaluation of guidance.
This statement discusses four key questions addressed in GAO's April 2015 report on regulatory guidance: (1) what it is; (2) how agencies use it; (3) how agencies decide whether to use guidance or undertake rulemaking; and (4) steps agencies can take to ensure more effective guidance processes. To conduct that work, GAO reviewed relevant requirements, written procedures, guidance, and websites, and interviewed agency officials.

What is regulatory guidance? One of the main purposes of guidance is to explain and help regulated parties comply with agencies' regulations. Even though not legally binding, guidance documents can have a significant effect on regulated entities and the public, both because of agencies' reliance on large volumes of guidance documents and because the guidance can prompt changes in the behavior of regulated parties and the general public.

How do agencies use regulatory guidance? The four departments GAO reviewed—Agriculture (USDA), Education (Education), Health and Human Services (HHS), and Labor (DOL)—and the 25 components engaged in regulatory or grant-making activities in these departments used guidance for multiple purposes, such as clarifying or interpreting regulations and providing grant administration information. Agencies used many terms for guidance and agency components issued varying amounts of guidance, ranging from about 10 to more than 100 guidance documents each year. Departments typically identified few of their guidance documents as “significant,” generally defined by the Office of Management and Budget (OMB) as guidance with a broad and substantial impact on regulated entities.

How do agencies determine whether to issue guidance or undertake rulemaking? According to officials, agencies considered a number of factors when deciding whether to issue a regulation or guidance. However, the key criterion in making the choice was whether they intended the document to be binding; in such cases agencies proceeded with regulation.
How can agencies ensure more effective guidance processes that adhere to applicable criteria? All four departments GAO studied identified standard practices to follow when developing guidance but could also strengthen their internal controls for issuing guidance. Agencies addressed OMB's requirements for significant guidance to varying degrees. Education and USDA had written departmental procedures for approval as required by OMB. DOL's procedures were not available to staff and required updating. HHS had no written procedures. In addition, USDA, DOL, and Education consistently applied OMB's public access and feedback requirements for significant guidance, while HHS did not. In the absence of specific government standards for non-significant guidance—the majority of issued guidance—the application of internal controls is particularly important. The 25 components GAO reviewed addressed some control standards more regularly than others. For example, few components had written procedures to ensure consistent application of guidance processes. However, all components could describe standard review practices and most used tools to document management approval of guidance. Not all components conferred with external nonfederal stakeholders when developing guidance. Finally, nearly half of the components GAO reviewed did not regularly evaluate whether issued guidance was effective and up to date.

GAO is making no new recommendations in this statement. In the April 2015 report, GAO recommended steps to ensure consistent application of OMB requirements for significant guidance and to strengthen internal controls in guidance production processes. The agencies generally agreed with the recommendations. |
In the past several years, we have made a number of recommendations for CMS to address missed opportunities for savings in the Medicare program, which the agency has not fully implemented. These include recommendations related to the Medicare fee-for-service (FFS) and Medicare Advantage (MA) programs.

Minimizing improper payments and fraud. We have a body of issued and ongoing work about improper payments in Medicare. In 2007, we reported on program integrity activities conducted by CMS contractors to minimize improper payments for medical equipment and supplies. We recommended that CMS require its contractors to develop automated prepayment controls to identify potentially improper claims when billing reaches atypical levels. CMS agreed with the recommendation, but has not implemented it. The agency has added other prepayment controls to flag claims for services that were unlikely to be provided in the normal course of medical care. However, implementing our recommendation and adding additional prepayment controls could enhance identification of improper claims before they are paid to reduce reliance on “pay and chase” strategies. In 2009, we reported that fraudulent and abusive practices in home health agencies, such as overstating the severity of a beneficiary’s condition, contributed to Medicare home health spending and utilization. To strengthen controls on improper payments in home health agencies, we recommended that CMS amend current regulations to expand the types of improper billing practices that are grounds for revocation of billing privileges. CMS told us that it has begun to explore its authority to expand the types of practices that are grounds for revocation of billing rights. We believe that CMS should do so expeditiously. In 2010, we recommended that CMS designate responsible personnel with authority to evaluate and promptly address vulnerabilities identified to reduce improper payments.
CMS concurred with this recommendation and has begun to implement this process, but does not yet have written policies and procedures for a fully developed corrective action process that includes monitoring of actions taken. Likewise, we recently testified before the Senate Committee on Finance regarding CMS efforts to combat Medicare fraud. We reiterated our prior recommendation and noted that CMS could do more to strengthen provider enrollment screening to avoid enrolling those intent on committing fraud, improve pre- and postpayment claims review to identify and respond to patterns of suspicious billing activity more effectively, and identify and address vulnerabilities to reduce the ease with which fraudulent entities can obtain improper payments. Enhancing payment safeguard mechanisms. In 2008, we reported on rapid spending growth for advanced imaging services. We recommended that CMS examine the feasibility of adding front-end approaches, such as prior authorization, to improve payment safeguard mechanisms. CMS has not implemented our recommendation, but is currently engaged in a demonstration project to assess the appropriateness of physicians’ use of advanced diagnostic imaging services furnished to Medicare beneficiaries. Aligning coverage for services with clinical recommendations. We reported in early 2012 that Medicare beneficiaries’ use of preventive services did not always align with the U.S. Preventive Services Task Force’s recommendations. We concluded that opportunities exist to improve the appropriate use of preventive services through means such as revising coverage and cost-sharing policies and educating beneficiaries and physicians. In the case of osteoporosis screening, for instance, Medicare coverage rules may preclude utilization of the recommended screening by all those for whom the service is recommended. 
Conversely, given that the Task Force recommended against prostate cancer screening for men aged 75 or older, the absence of cost sharing for that population may encourage inappropriate use of this service. To better align preventive service use with clinical recommendations, we recommended that CMS provide coverage for Task Force recommended services, as appropriate, given cost-effectiveness and other criteria. In response to our recommendation, the agency stated that it had recently used its authority to expand benefits to cover several new preventive services. This additional coverage, however, does not address the misalignment that remains between Medicare coverage for certain services and the corresponding Task Force recommendations. We also offered a matter for congressional consideration. We suggested that Congress consider requiring beneficiaries to share the cost of the services if they receive services the Task Force recommends against.

Better reflecting beneficiary health status in payments to MA plans. In 2010, the federal government spent about $115 billion on the MA program, a private plan alternative to the Medicare FFS program. In January 2012, we reported that CMS could achieve billions of dollars in additional savings by more accurately adjusting for differences between MA plans and Medicare FFS providers in the reporting of beneficiary diagnoses. CMS uses this diagnosis data and other information to construct a risk score for each beneficiary. Higher risk scores result in increased Medicare payments to plans, while lower risk scores result in reduced Medicare payments to plans. Risk scores should be the same among all beneficiaries with the same medical conditions and demographic characteristics, regardless of whether they are in MA or Medicare FFS. MA plans have an incentive to code diagnoses more comprehensively because doing so affects plan payments, which is not the case in Medicare FFS.
CMS is required by law to make an adjustment to MA risk scores to bring them in line with those of Medicare FFS. In this report, we found that CMS’s adjustment for diagnostic coding differences was too small. We estimated that MA beneficiary risk scores in 2010 were from 4.8 to 7.1 percent higher than they likely would have been if they had been enrolled in FFS, while CMS’s adjustment for diagnostic coding differences was only 3.4 percent. Compared to CMS’s analysis, our analysis incorporated more recent beneficiary data and accounted for additional beneficiary characteristics that affect risk scores, such as health status and sex. A revised methodology that incorporated this information could have saved Medicare between $1.2 billion and $3.1 billion in 2010 in addition to the $2.7 billion in savings from the adjustment CMS made. We expect that savings in 2011 and future years would be even greater. CMS has continued to use its 2010 adjustment method for 2011 and 2012, even though both we and CMS noted an upward trend in the impact of coding differences over time. To improve the accuracy of the adjustment made for differences in coding practices over time, we recommended that the Secretary of HHS direct the Administrator of CMS to incorporate the most recent data available in its estimates; identify and account for all years of diagnostic coding differences that could affect the payment year for which any adjustment is made; account for the upward trend of the annual impact of coding differences in its estimates; and to the extent possible, account for all relevant differences in beneficiary characteristics between the MA and Medicare FFS populations. CMS stated that it found our findings informative, but did not comment on our recommendations. Canceling the MA Quality Bonus Payment Demonstration. 
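The magnitude of the additional savings can be approximated with back-of-envelope arithmetic. The sketch below is an illustrative reconstruction, not GAO's actual estimation method (which used beneficiary-level data): it simply infers the payment base implied by CMS's reported $2.7 billion in savings at a 3.4 percent adjustment, then scales the portion of the coding difference that the adjustment did not address.

```python
# Illustrative sketch only; NOT GAO's estimation methodology.
# The payment base is inferred from CMS's reported $2.7 billion
# savings at its 3.4 percent coding-difference adjustment.
CMS_ADJUSTMENT = 0.034            # CMS's 2010 adjustment
CMS_SAVINGS = 2.7e9               # savings attributed to that adjustment
PAYMENT_BASE = CMS_SAVINGS / CMS_ADJUSTMENT  # implied MA payments subject to adjustment

def additional_savings(estimated_difference):
    """Savings forgone when the true coding difference exceeds CMS's adjustment."""
    return PAYMENT_BASE * (estimated_difference - CMS_ADJUSTMENT)

low = additional_savings(0.048)   # GAO's low-end estimate: 4.8 percent
high = additional_savings(0.071)  # GAO's high-end estimate: 7.1 percent
print(f"additional savings: ${low / 1e9:.1f}B to ${high / 1e9:.1f}B")
```

Under these simplified assumptions the forgone savings come to roughly $1.1 billion to $2.9 billion, broadly consistent with the $1.2 billion to $3.1 billion range GAO estimated using its more detailed beneficiary-level methodology.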
We recently reported that CMS could achieve billions of dollars in savings by canceling the MA Quality Bonus Payment Demonstration—which CMS’s Office of the Actuary has estimated will cost more than $8 billion over 10 years. Rather than implementing the quality bonus payment system established in the 2010 Patient Protection and Affordable Care Act (PPACA), as amended, CMS is conducting a nationwide demonstration to test whether a scaled bonus structure would lead to larger and faster annual quality improvement for MA plans at various performance levels. Compared with PPACA’s quality bonus payment system, the demonstration extends the bonuses to average-performing plans, accelerates the phase-in of the bonuses for plans with above-average performance, and increases the size of the bonuses in 2012 and 2013. We found that the demonstration’s estimated $8.35 billion cost offsets more than one-third of PPACA’s MA payment reductions during its 3-year time frame and that most of the additional spending will go to average-performing plans rather than to high-performing plans. The MA Quality Bonus Payment Demonstration dwarfs all other Medicare demonstrations—both mandatory and discretionary—conducted since 1995: its estimated budgetary impact is at least seven times larger than that of any other Medicare demonstration conducted over that period and is greater than the combined budgetary impact of all those demonstrations. For a variety of reasons, the design of the demonstration precludes a credible evaluation of its effectiveness in achieving CMS’s stated research goal. We therefore believe that it is unlikely that the demonstration will produce meaningful results. Accordingly, we recommended that the Secretary of HHS cancel the demonstration and allow the MA quality bonus payment system established by PPACA to take effect. HHS did not concur with our recommendation, stating that it believed the demonstration supports a strategy to improve the delivery of health care services, patient health outcomes, and population health.
We have conducted a substantial body of work on Medicaid program management. Our recommendations have involved a variety of topics and have included different aspects of payment arrangements with states.

Improving oversight of supplemental payments. We have reported on varied financing arrangements involving supplemental payments—disproportionate share hospital (DSH) payments that states are required to make to certain hospitals and other non-DSH supplemental payments—that increase federal funding without a commensurate increase in state funding. Our work has found that while a variety of federal legislative and CMS actions have helped curb inappropriate financing arrangements, gaps in oversight remain. For example, while there are federal requirements designed to improve transparency and accountability for state DSH payments, similar requirements are not in place for non-DSH supplemental payments, which may be increasing. From 2006 to 2010, state-reported non-DSH supplemental payments increased from $6.3 billion to $14 billion; however, according to CMS officials, reporting was likely incomplete. We made numerous recommendations aimed at improving oversight of supplemental payments. We have recommended that CMS adopt transparency requirements for non-DSH supplemental payments and develop a strategy to ensure that all state supplemental payment arrangements have been reviewed by CMS. CMS has taken action to address some of these recommendations, but we continue to believe additional action is warranted. CMS has raised concern that congressional action may be necessary to fully address our recommendations.

Ensuring demonstrations do not increase federal financial liability. We have also recommended that HHS improve its process for reviewing Medicaid demonstration waivers through steps such as clarifying the criteria for reviewing and approving states’ proposed spending limits and ensuring that valid methods were used to demonstrate budget neutrality (GAO, Medicaid Demonstration Waivers: Recent HHS Approvals Continue to Raise Cost and Oversight Concerns, GAO-08-87 (Washington, D.C.: Jan. 31, 2008)).
Consequently, we referred this to Congress for consideration. HHS subsequently reported taking steps, such as monitoring the budget neutrality of ongoing demonstrations, to improve its oversight. However, no changes are planned in the methods used to determine budget neutrality of demonstrations to ensure that demonstrations do not increase the federal financial liability.

Improving rate-setting methodologies. In August 2010, we reported that CMS had not ensured that all states were complying with federal Medicaid requirements that managed care rates be developed in accordance with actuarial principles, appropriate for the population and services, and certified by actuaries. For example, we found significant gaps in CMS’s oversight of 2 of the 26 states reviewed—CMS had not reviewed one state’s rate setting in multiple years and had not completed a full review of another state’s rate setting since the actuarial soundness requirements became effective in August 2002. Variation in practices across CMS regional offices contributed to these gaps and other inconsistencies in the agency’s oversight of states’ rate setting. This work also found that CMS’s efforts to ensure the quality of the data used to set rates were generally limited to requiring assurances from states and health plans—efforts that did not provide the agency with enough information to ensure the quality of the data used. With limited information on data quality, CMS cannot ensure that states’ managed care rates are appropriate, which places billions of federal and state dollars at risk for misspending. We made recommendations to improve CMS’s oversight of states by implementing a mechanism to track state compliance with Medicaid managed care actuarial soundness requirements, clarifying guidance on rate-setting reviews, and making use of information on data quality in overseeing states’ rate setting.
HHS agreed with these recommendations, and as of May 2012, CMS officials indicated that they were reviewing and updating the agency’s guidance and exploring the incorporation of information about data quality into its review and approval of Medicaid managed care rates. Improved financial stewardship of federal programs is becoming increasingly important as the pressure to reduce spending mounts. In an agency as large as HHS, the need for vigilance in continuously seeking out cost savings cannot be overstated. In our work, we have examined many aspects of HHS operations and made recommendations to help HHS prevent unnecessary spending, save money, recover funds that should rightfully be returned, improve the efficiency of agency operations, and improve service for beneficiaries. HHS has implemented many of our recommendations that have proven to be financially beneficial while also enhancing program management. However, there are still recommendations we have made that remain open. While we recognize that some of the recommendations we have highlighted today are relatively new, others are several years old. HHS has made clear that it is committed to improving the nation’s health and well-being while simultaneously contributing to deficit reduction. We therefore urge HHS to expedite action on our open recommendations to further advance its performance and accountability. Chairman Stearns, Ranking Member DeGette, and Members of the Subcommittee, this completes our prepared statement. We would be pleased to respond to any questions that you may have at this time. If you or your staff have any questions about this testimony, please contact us at (202) 512-7114 or cosgrovej@gao.gov and yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are listed in appendix I. 
In addition to the contacts named above, Geri Redican-Bigott, Assistant Director; Kelly DeMots; Helen Desaulniers; David Grossman; Elizabeth T. Morrison; and Kate Nast made key contributions to this statement. | HHS manages hundreds of complex programs benefiting the health and well-being of Americans, accounting for a quarter of all federal outlays. For fiscal year 2012, HHS is responsible for approximately $76 billion in discretionary spending and for an estimated $788 billion in mandatory spending. The size and critical mission of the two largest HHS programs, Medicare and Medicaid, make it imperative that HHS is fiscally prudent yet vigilant in protecting the populations that depend on these programs. In recent years, GAO has identified shortcomings and recommended actions to enhance operations and correct inefficiencies in Medicare and Medicaid, and HHS has implemented many recommendations, resulting in billions of dollars in savings. Because agencies now must do more with less, recommendations not yet implemented are opportunities for further conserving HHS funds and strengthening oversight of programs serving the nation’s most vulnerable populations. GAO was asked to testify on issues related to HHS’s budget. This statement draws from GAO’s prior work, including work on these two high-risk programs, in which GAO made recommendations related to (1) the management of Medicare and (2) the need for additional oversight of Medicaid. To the extent information was available, GAO updated the status of these recommendations.
Over the past several years, GAO has made a number of recommendations to the Centers for Medicare & Medicaid Services (CMS), an agency within the Department of Health and Human Services (HHS), to increase savings in Medicare fee-for-service and Medicare Advantage (MA), which is a private plan alternative to the traditional Medicare fee-for-service program. Open recommendations that could yield billions of dollars in savings remain in many areas, such as the following:

Minimizing improper payments and fraud in Medicare. GAO recommended that CMS require contractors to automate prepayment controls to identify potentially improper claims for medical equipment and supplies, expand current regulations to revoke billing privileges for home health agencies with improper billing practices, designate authorized personnel to evaluate and address vulnerabilities in payment systems, and enhance payment safeguards for physicians who use advanced imaging services.

Aligning coverage with clinical recommendations. GAO recommended that CMS provide coverage for services recommended by clinical experts, as appropriate, given cost-effectiveness and other criteria.

Better aligning payments to MA plans. To ensure that payments to MA plans reflect the health status of beneficiaries, GAO recommended that CMS more accurately adjust for differences between MA plans and traditional Medicare providers in reporting beneficiary diagnoses. GAO also recommended that CMS cancel the MA Quality Bonus Payment Demonstration because its design precludes it from yielding meaningful results.

GAO has made recommendations to CMS regarding Medicaid program oversight. Open recommendations remain in many areas, such as the following:

Improving oversight of Medicaid payments. GAO recommended that CMS adopt transparency requirements and a strategy to ensure that supplemental payments to providers have been reviewed by CMS.
These supplemental payments are separate from and in addition to those made at states’ regular Medicaid rates.

Ensuring Medicaid demonstrations do not increase federal liability. GAO recommended that CMS revise its approval process for demonstrations to ensure they are budget neutral, which GAO subsequently referred to Congress as a matter for consideration.

The size of Medicare and Medicaid requires CMS to focus continually on the appropriateness of the methodology for payments that these programs make and the pre- and postpayment checks that can help ensure that program spending is appropriate, overpayment recovery is expedient, and agency practices with regard to operations for these programs are efficient. Therefore, GAO urges HHS to ensure action is taken on open recommendations to advance its performance and accountability. |
Federal departments and agencies receive funding through regular annual appropriations acts. However, in the years covered in this study, not all appropriations acts were enacted before the beginning of the new fiscal year. If one or more of the regular appropriations acts are not enacted, a funding gap may result and agencies may lack sufficient funding to continue operations. The last such occurrence was in fiscal year 1996, when unusually difficult budget negotiations led to two funding gaps with a widespread shutdown of government operations and the furlough of an estimated 800,000 federal government employees. To prevent similar results, Congress enacts CRs to maintain a level of service in government operations and programs until Congress and the President reach agreement on regular appropriations. CRs are temporary appropriations acts. Once the regular appropriations act is enacted, it supersedes the CR. CRs generally do not specify an amount for programs and activities but permit agencies to continue operations at a certain “rate for operations.” They typically incorporate by reference the conditions and restrictions contained in prior years’ appropriations acts or the appropriations bills currently under consideration. The Office of Management and Budget (OMB) is responsible for apportioning executive branch appropriations, including amounts made available under CRs. An apportionment divides appropriations by specific time periods (usually quarters), projects, activities, objects, or combinations thereof, in part to ensure agencies have resources throughout the fiscal year. OMB automatically apportions amounts made available under a CR. The duration of CRs varied during the period covered in this study, fiscal years 1999-2009. Figure 1 shows that the duration of individual CRs enacted from 1999 to 2009 ranged from 1 to 157 days and the number of CRs enacted in each year ranged from 2 to 21.
The average length of the CR period was about 3 months and in several years—fiscal years 2002-2004, 2007, and 2009—agencies’ regular appropriations were not enacted until the second quarter of the fiscal year. This figure also shows that the duration of initial CRs was less than 1 month from 1999-2003, but since then the duration has been about 1 month or more. Between fiscal year 1999 and 2009, most agencies operated under a CR at the beginning of the fiscal year, uncertain if there would be subsequent CRs and, if so, how many and how long before receiving regular appropriations. In fiscal year 2001, for example, there were 20 extensions of the initial CR, each ranging from 1 to 21 days, and the total period when one or more agencies operated under a CR was 83 days. There is no discernible pattern for the duration or number of extensions, and not all federal agencies are under CRs for the entire duration. As shown in figure 2, agencies covered by the Defense, Military Construction, and Homeland Security Appropriations Subcommittees operated under CRs for about 1 month on average during fiscal years 1999-2009, whereas other agencies operated under CRs for at least 2 months on average. During the period studied, fiscal years 1999-2009, every agency operated under a CR for some period of time. For most, this meant temporarily operating under a conservative rate of spending and limitations on certain activities, as required by the standard provisions. However, in some circumstances, Congress increased amounts available to some programs and activities, extended authorities, or provided greater direction than what was provided by the standard provisions, especially in longer CRs. These specific provisions—called legislative anomalies—may alleviate some challenges during the CR period. We identified 11 standard provisions applicable to the funding of most agencies and programs under a CR.
These provisions give direction regarding the availability of funding and demonstrate the temporary nature of the legislation. For example, one standard provision provides for an amount to be available to continue operations at a designated rate for operations. Since fiscal year 1999, different formulas have been enacted for determining the rate for operations during the CR period. The amount often is based on the prior fiscal year’s funding level or the “current rate” but may also be based on a bill that has passed either the House or Senate. Depending on the language of the CR, different agencies operate under different rates. The amount is available until a specified date or until the agency’s regular appropriations act is enacted, whichever is sooner. In general, CRs prohibit new activities and projects for which appropriations, funds, or other authority were not available in the prior fiscal year. Also, so that agency actions do not impinge upon Congress’s final funding prerogatives, agencies are directed to take only the most limited funding actions, and CRs limit the ability of an agency to obligate all, or a large share, of its available appropriation. Congress has added two new standard provisions since 1999. At the beginning of fiscal year 2004, Congress standardized a provision that makes funding available under CRs to allow for entitlements and mandatory payments funded through the regular appropriations acts to be paid at the current fiscal year level. In 2007, Congress enacted the furlough provision in the CR for the first time. This provision permits OMB and other authorized government officials to apportion up to the full amount of the rate for operations to avoid a furlough of civilian employees. This authority may not be used until after an agency has taken all necessary action to defer or reduce nonpersonnel-related administrative expenses. 
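The funding mechanics described above can be sketched as a short calculation. The following is a minimal illustration with hypothetical figures (the function names, dollar amounts, dates, and the 20 percent seasonal rate are assumptions for illustration only); actual CR rates and OMB apportionments vary by year, agency, and the language of each CR. OMB's automatic apportionment uses the lower of the pro-rata share of the year covered by the CR or the program's seasonal rate of obligations for that same period.

```python
from datetime import date

def cr_fraction(cr_start: date, cr_end: date, fy_days: int = 365) -> float:
    """Fraction of the fiscal year covered by a CR (inclusive of both dates)."""
    return ((cr_end - cr_start).days + 1) / fy_days

def automatic_apportionment(annual_rate: float, fraction_of_year: float,
                            seasonal_rate: float) -> float:
    """Lower of the pro-rata share of the year covered by the CR or the
    seasonal rate of obligations for that period, applied to the annual
    rate for operations (a simplified sketch of the automatic rule)."""
    return annual_rate * min(fraction_of_year, seasonal_rate)

# Hypothetical agency: $100 million annual rate for operations, an initial
# CR running October 1 through November 16 (47 days), and a seasonal
# obligation rate of 20 percent for that same period.
frac = cr_fraction(date(2007, 10, 1), date(2007, 11, 16))
amount = automatic_apportionment(100_000_000, frac, 0.20)
print(f"{frac:.3f} of the year, ${amount:,.0f} apportioned")
# 0.129 of the year, $12,876,712 apportioned
```

Because the 47-day pro-rata share (about 12.9 percent) is lower than the assumed 20 percent seasonal rate, the pro-rata share governs; a program that obligates most of its funds early in the year (such as LIHEAP, discussed later) would be constrained by this rule absent an exception apportionment.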
The problem of covering salary and personnel expenses with limited funding may be exacerbated when a CR crosses the calendar year and a mandatory salary increase becomes effective. For example, in fiscal year 2009, the CR provided for a 3.9 percent pay increase for certain civilian employees to begin on the first full pay period of the calendar year. However, the CR did not provide additional funding beyond the enacted rates for operations. Accordingly, most agencies were expected to cover the salary increase and related personnel costs at fiscal year 2008 funding levels. In addition to the standard provisions, CRs contained legislative anomalies that provided funding and authorities that were different from the standard provisions. We identified approximately 280 anomalies enacted in CRs since fiscal year 1999. The number of anomalies generally increased as the duration of initial CRs increased in recent years (see fig. 3). Despite the growing number, legislative anomalies covered a small share of the agencies, programs, and activities covered by the CR in each year. Most agencies operated under the more conservative funding levels and limitations provided by the standard provisions for the duration of the CR. Over two-thirds of the anomalies enacted since 1999 fell into two categories: (1) a different amount than that provided by the standard rate for operations and (2) extensions of expiring program authority. Over one-third of the legislative anomalies enacted since 1999 provided an agency, program, or activity an amount different from that provided in the standard provisions. Programs that received a specific or additional amount or a different rate for operations under a CR include the decennial census, wildfire management, disaster relief, veterans healthcare and benefits, and presidential transition activities. 
An anomaly in the 2009 CR provided BOP with funding equal to the amount requested to cover costs for the current services level in the President’s fiscal year 2009 budget request. The previous year, BOP had received more than $296 million in supplemental appropriations and amounts made available from other DOJ appropriation accounts that were not included in the standard rate for operations in the 2009 CR. According to BOP officials, the anomaly in the 2009 CR helped ensure BOP could continue to pay salaries and expenses of the existing staff and costs of the growing inmate population. In any one of the years we studied prior to 2009, CRs included no more than 18 provisions that provided a different amount than what was provided in the standard provisions. However, in 2009 over 30 such provisions were included in the 157-day CR. In some cases, CRs provided full-year appropriations for a program or activity. Under these circumstances, agencies have funding certainty during the CR period. For example, in fiscal year 2009, the CR appropriated an amount to cover the entire year for Low Income Home Energy Assistance Program (LIHEAP) payments. LIHEAP provides assistance for low-income families in meeting their home energy needs, and typically 90 percent of LIHEAP funding is obligated in the first quarter to cover winter heating costs. For several years prior to 2009, OMB provided LIHEAP a seasonal apportionment allowing the program to operate at a higher rate than would have been allowed under OMB’s automatic apportionment. However, by receiving a full-year appropriation in the CR, the LIHEAP program could operate with certainty about its final funding level, making an exception apportionment unnecessary. These circumstances are rare, however; most federal programs and activities faced uncertainty during the CR period about when and how much funding would be provided in their regular appropriations. 
Another large share of legislative anomalies enacted since fiscal year 1999 extended expiring authorities through the specified termination date of the CR. The types of programs extended during the years of our review are diverse, including the National Flood Insurance Program, affordable housing, free lunch, and food service programs. CRs also have extended the authority to collect and obligate fees, such as for mining, or to collect certain copayments from veterans for medications. The fiscal year 2008 CR, for example, included an extension of VA’s authority to collect certain amounts from veterans and third parties, including insurance providers. If the authorization had not been extended, VA would have had to operate with less funding. In some cases, Congress lifted or added restrictions on the authorized purpose for which funds could be used during the CR period or amended other laws. Also, there have been a few legislative anomalies for activities not funded in the prior year, such as a presidential transition. In sum, the number and range of anomalies demonstrate that while CRs are temporary measures, Congress has chosen to include provisions to address specific issues. All six case study agencies reported that the most common inefficiencies were delays to certain activities, such as hiring, and repetitive work, including having to enter into several short-term contracts or issuing multiple grants to the same recipient. The effects of the delays and the amount of additional work varied by agency and by activity and depended in large part on the number and duration of CRs. All case study agencies reported not filling some new or existing positions during the CR period, either because they were uncertain how many positions their regular appropriation would support or because they needed to meet more immediate funding needs during the CR period. 
For example, according to FBI officials, rates for operations provided in CRs based on the previous year’s appropriations acts do not include annual pay raises, the annualization of pay for the previous year’s hiring increases, or the increased costs of retirement, health insurance, and other employee benefits. To cover these costs, FBI delayed filling existing positions during CRs. In addition, officials from ACF and FDA said they were reluctant to begin the hiring process during the CR period for fear that the time invested would be wasted if the certificate of eligibles listing qualified applicants expired or the agency received insufficient funding to support the additional staff. Agency officials said that if hiring was delayed during the CR period, it was particularly difficult to fill positions by the end of the year after a longer CR period. Overall, case study agency officials said that, absent a CR, they would have hired additional staff sooner for activities such as grant processing and oversight, food and drug inspections, intelligence analysis, prison security, claims processing for veterans’ benefits, or general administrative tasks, such as financial management and budget execution. Agency officials said that given the number of variables involved, it is difficult to quantify the effect that hiring delays related to CRs had on specific agency activities. Agencies were also largely unable to identify any specific foregone opportunities that may have resulted from a delay in hiring related to CRs. However, they did describe some general effects. An FDA official from the Office of Regulatory Affairs said that deferring the hiring and training of staff during a CR affected the agency’s ability to conduct the targeted number of inspections negotiated with FDA’s product centers in areas such as food and medical devices. Another FDA official said that routine surveillance activities (e.g., inspections, sample collections, and field examinations) 
are some of the first to be affected. BOP officials said that deferring hiring during CRs has made it difficult for BOP to maintain or improve the ratio of corrections officers to inmates as the prison population increases. VBA officials cited missed opportunities in processing additional benefits claims and completing other tasks. Because newly hired claims processors require as much as 24 months of training to reach full performance, a VBA official said that the effects of hiring delays related to CRs are not immediate, but reduce service delivery in subsequent years. However, VBA was able to achieve its hiring goals by the end of the fiscal year in each of the past 4 years. The effects of CRs on hiring at other departments as described by departmental CFOs and others who participated in our panel discussion were similar to those identified by officials at case study agencies. To avoid these types of hiring delays, FBI proceeded with its hiring activities based on a staffing plan supported by the President’s Budget during the CR period in 2009. This helped FBI avoid a backlog in hiring later in the year and cumulatively over time. However, FBI assumed some risk that the regular appropriation for the year would not support the hiring plan. According to FBI officials, if the agency had not received a regular appropriation equal to or greater than the President’s fiscal year 2009 budget request, it likely would have had to suspend hiring for the remainder of the fiscal year and make difficult cuts to other nonpersonnel expenses. In addition to delays in hiring, case study agencies also reported delaying contracts during the CR period. For example, VHA medical facilities did not start nonrecurring maintenance projects designed to improve and maintain the quality of VA Medical Centers (e.g., repairs to electrical or sewage systems) but instead waited until the agency received its regular appropriation to fund these projects. 
BOP reported that it frequently postponed awarding some contracts during a CR. For example, BOP reported delaying the activation of its Butner and Tucson Prison facilities and two other federal prisons in 2007 during the CR period to make $65.6 million in additional resources available for more immediate needs. According to BOP, delays resulting from CRs contributed to delays in the availability of additional prison capacity at a time when prison facilities were already overcrowded. A recent BOP study found that overcrowding is an important factor affecting the rate of serious inmate assault. As of July 9, 2009, BOP facilities were 37 percent over capacity systemwide. As a result of delaying contracts during CRs, officials from BOP, VHA, and VBA said that they sometimes had to solicit bids a second time or have environmental, architectural, or engineering analyses redone, resulting in additional costs in time and resources for the agency. According to BOP, delaying contract awards for new BOP prisons and renovations to existing facilities prevented the agency from locking in prices and resulted in higher construction costs. Based on numbers provided by BOP, a delay in awarding a contract for the McDowell Prison Facility resulted in about $5.4 million in additional costs. However, in general, case study agencies were unable to provide documents confirming cost increases resulting from a CR. Some agency officials said that contracting delays resulting from longer CRs have also affected their ability to fully compete and award contracts in the limited time remaining in the fiscal year after the agency has received its regular appropriation. Federal law and regulations require federal contracts to be competed unless they fall under specific exceptions to full and open competition. 
To fully compete a contract, depending on the contract type, an agency must solicit proposals from contractors, evaluate the proposals received, and negotiate and award the contract to the firm with the best proposal. BOP’s Field Acquisition Office, which is responsible for acquisitions over $100,000, said that trying to complete all of its contracts by the end of the fiscal year when a CR lasts longer than 3 to 4 months negatively affects the quality of competition. Longer CRs also have contributed to distortions in agencies’ spending, adding to the rush to obligate funds late in the fiscal year before they expire. For example, VHA reported that it has often delayed contracts for nonrecurring maintenance projects, as described above, until the agency receives its regular appropriation. Although other factors contributed to delays, in 2006 VHA obligated 60 percent (about $248 million) of its $424 million nonrecurring maintenance budget in September, the last month of the fiscal year. Officials from ACF and VHA said that, in general, most of the discretionary grants that they award are not delayed by shorter-term CRs because these grants are typically awarded later in the fiscal year after the agencies have received their regular appropriation. However, an ACF official said that lengthy CR periods—particularly those that extend beyond mid-February, like the ones that ACF operated under in 2003 and 2009—delay discretionary grant announcements. The official said the delay causes a shift in the grant cycles, pushing back the application review period, which in turn pushes back the final award date. A longer CR period also may compress the application time available for discretionary grants. For example, VHA reported that CR periods that extend several months into the fiscal year have delayed notification to nonprofit, state, or local governments of possible grant opportunities for constructing, acquiring, or renovating housing and nursing home care for veterans. 
These delays reduce the time available for potential grant recipients to meet the program’s application deadlines, which can affect the quality of applications submitted. The application time available for ACF’s discretionary grants may also be compressed by a longer CR. We reviewed the application times for 277 grants awarded by four ACF discretionary grant programs between 2005 and 2008. We found that while application times varied considerably—from 13 to 89 days—they were on average 11 days more in fiscal years when the agency’s regular appropriation was enacted before the end of the first quarter than when the agency’s appropriation was enacted in the second quarter. However, we could not isolate the effect on application times that resulted from a longer CR period from other factors. The effect of CRs on grants described by case study agencies was consistent with what we heard from departmental CFOs and others who participated in our panel discussion. Specifically, panel participants said that discretionary grant awards are generally put on hold at their departments during a CR to avoid having to solicit proposals multiple times. If the amount of funding provided by a formula grant is based on a certain percentage of the total amount appropriated, the grant may be delayed until the department has received its final funding. According to some representatives of nonprofit organizations and state and local governments, in the past, federal grant recipients have been able to temporarily support programs with funds from other sources until agencies’ regular appropriations are passed; however, it is more difficult to do so during periods of economic downturn such as the one they are currently experiencing. An ACF official told us that nonprofit organizations providing shelter to unaccompanied alien children have used lines of credit to bridge gaps in federal funding during a CR. 
However, in March 2009, a shelter in Texas informed ACF’s Office of Refugee Resettlement that its credit was at its limit and it was in immediate need of additional funds to sustain operations for the next 45 to 60 days. The Office of Refugee Resettlement made an emergency grant to this organization, using the funding remaining under the CR, so that it could maintain operations. In addition to the delays described above, some agency officials told us that they delayed making program enhancements because of funding constraints related to the CR. For example, FBI officials said that over $440 million in enhancements to existing programs and activities was delayed in 2009 because the CR instructs agencies to implement only the most limited funding actions to continue operating at the enacted rate. These delayed enhancements include improvements to the Data Loading and Analysis System, which FBI said was designed to improve its ability to analyze and share data for counterterrorism, counterintelligence, and cyber intrusion investigations. In addition to delays, all case study agencies reported having to perform additional work to manage within the constraints of the CR. The most common type of additional work that agencies reported was having to enter into new contracts or exercise contract options to reflect the duration of the CR. Agencies often made contract awards monthly or in direct proportion to the amount and timing of funds provided by the CR. In other words, if a CR lasted 30 days, an agency would award a 30-day contract for goods or services. Then, each time legislation extended the CR, the agency would enter into another short-term contract to make use of the newly available funding. For example, a BOP-administered federal prison contracted for an optometrist to provide care for the period between October 1, 2007, and November 16, 2007, the dates of the initial CR in 2008. 
When the CR was extended, the prison awarded a second contract to the optometrist covering November 19, 2007, to December 14, 2007, and a third contract covering December 17, 2007, to December 21, 2007, roughly corresponding to the duration of the CRs in that fiscal year. The prison also entered into contracts for medical services, fuel and utility purchases, and program services such as parenting instructions in a similar manner during CR periods. According to BOP officials, these contracts would have been awarded for the entire fiscal year had there not been a CR. BOP said that personnel perform this type of additional work at each of BOP’s 115 institutions to manage funds during a CR. Other case study agencies reported similar experiences. FBI reported that it undertakes contract actions, including renewals and options, at a specific percentage based on the rate for operations for the period covered by the CR. For example, during the CR in 2009 that covered 43 percent of the fiscal year, FBI said it executed no more than 40 percent of the value of contract renewals. The FBI adjusts over 7,550 purchase orders each time a CR is extended. VHA reported that to conserve funding, the agency enters into contracts that run month to month or the length of the CR rather than annual contracts covering the agency’s needs for the entire fiscal year. Also, VHA’s 153 medical facilities and roughly 800 clinics order supplies to maintain only the minimum levels needed. Agency officials said that if the agencies had received their regular appropriations at the start of the fiscal year, they would have entered into fewer contracts for longer periods of performance or placed purchase orders less frequently, making this additional work unnecessary. In general, shorter and more numerous CRs led to more repetitive work for agencies managing contracts than longer CRs did. 
Numerous shorter CRs were particularly challenging for agencies, such as VHA and BOP, that have to maintain an inventory of food, medicine, and other essential supplies. For example, under longer CRs—or with their regular appropriation—BOP officials said that prison facilities routinely contract for a 60- to 90-day supply of food. In addition to reducing work, this allows the prison facilities to negotiate better terms through a delivery order contract by taking advantage of economies of scale. However, under shorter CRs, these facilities generally limit their purchases to correspond with the length and funding provided by the CR. Thus, the prison makes smaller, more frequent purchases, which BOP officials said can result in increased costs. To reduce the additional work required to manage contracts in years when there are multiple CRs, FBI changed its requisition process so that its Finance Division spends less time creating requisitions for contracts when a CR is extended (see fig. 4). CRs had a similar effect on grant awards. Officials from ACF said that they issue multiple grants to the same grant recipient during the CR period instead of making annual or quarterly awards, resulting in additional work for program managers and/or personnel in the Office of Grants Management. For example, a Head Start official said that if the program received its regular appropriation at the start of the fiscal year, it would likely be able to fund more grant recipients with a single award covering a 12-month period. However, during a CR, Head Start receives funding based on the duration of the CR, and the amount is usually not sufficient to fund all grant recipients for a full year. Rather than delay any individual grants, a Head Start official said that the program has provided some of its grant recipients with a smaller, initial award during the CR period. 
Then, once the regular appropriation was enacted, Head Start awarded an additional grant to the same recipient, providing the remainder of its annual funding. A Head Start official estimated that issuing an additional grant to the same recipient could take as much as 1 hour per award. The longer the CR period lasts, a Head Start official said, the greater the number of grants they have to award and thus the greater the workload increase. We examined data from ACF’s Grants Administration, Tracking and Evaluation System. Though we could not establish a clear causal link between CRs and specific instances where a grant recipient received multiple awards, we found that in 2008, 185 (about 35 percent) of the Head Start Project grant recipients administered through Head Start’s 10 regional offices received a grant during the CR period and a second grant award shortly after ACF’s regular appropriation for the year was enacted. For example, one childhood development center received a grant for roughly $1.1 million on December 10, 2007, while ACF was operating under a CR, and a second grant for roughly $1.7 million 51 days later, after ACF’s regular appropriation was enacted. To reduce the amount of additional work required to modify contracts and award grants in multiple installments, two case study agencies reported shifting contract and grant cycles to later in the fiscal year (see fig. 5). An agency’s ability to shift its contract cycle depends on a number of factors, including the type of services being acquired. The Federal Acquisition Streamlining Act of 1994 allows agencies to enter into 1-year contracts for severable services that cross fiscal years, so long as the contract period does not exceed 1 year and agencies have sufficient funds to enter into the annual contract. Severable service contracts are for services, such as janitorial services, that are recurring in nature. 
Using this contract flexibility, an agency can shift its contract cycle so that annual contracts for severable services are executed in the third and fourth quarters of the fiscal year when agencies are less likely to be under a CR. However, some agencies’ ability to shift their contract cycle to mitigate the effects of CRs was limited. A VHA official, for example, said that the agency’s contract workload is so large that it is difficult for the agency to delay work on certain contracts for even short periods of time. VHA officials also said that the agency makes acquisitions based on immediate needs identified by officials in the field rather than centrally managing the timing of contracts. All agencies also reported having to perform a variety of administrative tasks multiple times that they would otherwise not have done or would have needed to do only once had they received their regular appropriation on October 1st. For example, FDA reported that CRs increased the amount of administrative work required to allot funds. Agencies generally subdivide the funds that they are apportioned by OMB into allotments, which are distributed to different offices and/or programs within the agency. FDA typically makes allotments from its total apportioned funds to each of the agency’s six centers. When FDA receives its regular appropriation, it generally makes these allotments on a quarterly basis. But when it is operating under a CR, FDA officials reported that the agency has made allotments for each CR. Conversely, VBA and VHA reported that they did not allot specific dollar amounts during a CR but rather provided guidance that all offices operate at a certain percentage of the previous year’s appropriations (see fig. 6). 
The types of administrative tasks affected by CRs varied by agency but included the following: issuing guidance to various programs and offices; providing information to Congress and OMB; creating, disseminating, and revising spending plans; and responding to questions and requests for additional funding above what the agency allotted to different programs or offices within the agency. Departmental CFOs and others who participated in our panel discussion said that CRs led to similar repetitive work activities at their agencies. While case study agencies all agreed that performing repetitive activities involved additional time and resources—potentially resulting in hundreds of hours of lost productivity—none of the agencies reported tracking these costs. The time needed to enter into a contract or issue a grant award may be minimal and varies depending on the complexity of the contract or grant, but the time spent is meaningful when multiplied across VHA’s 153 medical facilities and roughly 800 clinics, FBI’s 56 field offices, BOP’s 115 institutions, and the thousands of grants and contracts awarded by our case study agencies. VHA, for example, estimated that it awards 20,000 to 30,000 contracts a year; ACF’s Head Start program awards grants to over 1,600 different recipients each year; and FBI places over 7,500 different purchase orders a year. At our request, some agencies provided illustrative estimates of the additional work or lost productivity costs for selected work activities. These estimates are based on agency officials’ rough approximations of the hours spent on specific activities related to CRs. In the case of VHA, the estimate is based on the number of employees performing the tasks multiplied by the average monthly salary. VHA estimated that a 1-month CR results in over $1 million in lost productivity at VA medical facilities and over $140,000 in additional work for the agency’s central contracting office. 
The agency operated under a CR for more than 2 months per year on average between 1999 and 2009. FBI estimated that the Accounting, Budget, and Procurement Sections spent over 600 hours in 2009 on activities related to managing during the CR, such as weekly planning meetings and monitoring agency resources and requisitions. ACF estimated that approximately 80 hours of additional staff time is spent for each CR by ACF’s Division of Budget and program offices issuing guidance, allotting funds, creating and revising spending tables, and performing other administrative tasks. In addition, ACF officials estimated that issuing block grant awards multiple times in a single quarter led to approximately 10 additional staff days of work preparing and verifying allocations for grant recipients and preparing the award notices for mailing. We did not independently verify these estimates or assess their reliability beyond a reasonableness check, which involved reviewing the related documentation for each estimate and corroborating with related interviews and other documents where possible. Moreover, agencies were not able to identify specific activities that were foregone because of the CR. While some agency officials said that a single, long-term CR allowed for better planning in the near term, reducing delays and the amount of repetitive work, others said that operating under the specified rate for operations for a prolonged period limited their decision-making options, making trade-offs more difficult. For example, FBI officials reported that the number of contract requests that it receives to address emergency situations increases the longer the CR period lasts. As a result, FBI often has to reprioritize funds from other operations to fund these contracts, placing a strain on agency operations. 
Also, agency officials said that if the agency is unable to spend its funding on high-priority needs, such as hiring new staff, because of the limited time available after a lengthy CR, it ultimately will spend funds on a lower priority item that can be procured quickly. Some agency officials said that it was difficult to implement unexpected changes in their regular appropriations, including both funding increases and decreases, in the limited time available after longer CRs. For example, officials from ACF’s Office of Community Services said they made cuts to planned expenditures for training and technical assistance in 2009 to adjust to an unexpected funding directive for a national initiative on community economic development training and capacity development. Officials from FBI’s Criminal Investigative Division said that while funding increases were beneficial, receiving them in their regular appropriation after a longer CR period limited the division’s ability to review new contract requests and make the most effective decisions. The Criminal Investigative Division received additional funding in 2009 for mortgage fraud investigations in its regular appropriation enacted on March 11. According to FBI officials, the usual budget and planning cycle, which can take several months, had to be completed in just 6 weeks to meet the deadline that FBI had established for completing all of its large dollar contracts by the end of the fiscal year. In addition, some agency officials reported that absorbing the increased personnel costs in years when the CR period extends into January creates additional challenges, particularly if personnel costs represent a large share of their total budget. This is because most federal civilian employees receive an annual pay adjustment effective in January of each year. Since 1999, the CR period has extended into January four times, and the cost of the salary increase has ranged from 1.7 percent to 4.1 percent (see table 1). 
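The budget strain a January pay raise creates under a flat CR rate can be illustrated with a rough calculation. The figures below are hypothetical (the $200 million monthly payroll is an assumption for illustration; only the 3.9 percent rate comes from the fiscal year 2009 example discussed in this report):

```python
def added_monthly_payroll(monthly_payroll: float, raise_pct: float) -> float:
    """Additional monthly payroll cost created by a statutory pay raise
    that takes effect while an agency is still funded at the prior
    year's rate under a CR (simple illustration, no benefits load)."""
    return monthly_payroll * raise_pct / 100

# Hypothetical agency with $200 million in monthly payroll and no extra
# funding under the CR: a 3.9 percent raise must be absorbed elsewhere.
extra = added_monthly_payroll(200_000_000, 3.9)
print(f"${extra:,.0f} per month")  # $7,800,000 per month
```

Because the CR provides no funding beyond the enacted rate for operations, every dollar of this increase comes out of hiring, training, or other nonpersonnel spending until the regular appropriation is enacted.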
To the extent an agency's regular appropriations were constant or declined from the previous year, these costs would need to be absorbed by the agency or program regardless of CRs. However, for those agencies that ultimately receive a funding increase, absorbing the annual salary increase may strain already tight budgets during the CR period. For example, BOP reported that approximately 70 percent of operating budgets at BOP institutions are devoted to personnel costs. The 3.9 percent statutory salary increase for 2009 contributed to a $7.8 million increase in payroll requirements between December 2008 and January 2009. Departmental CFOs and others who participated in our panel discussion said that agencies across the federal government have to reduce funding for other needs, such as hiring and training, to pay for statutory salary increases. In addition to the anomalies previously described, multiyear appropriations or exception apportionments, when granted, helped to mitigate the effects of CRs at case study agencies. Officials from three agencies that we reviewed said that having multiyear budget authority—funds that are available for more than one fiscal year—was helpful for managing funds in the compressed time period after regular appropriations were enacted. For example, officials from both VBA and VHA said that having the authority to carry over funds into the next fiscal year has been helpful in years with lengthy CRs because there is less pressure to obligate all of their funds before the end of the fiscal year, thus reducing the incentive to spend funds on lower priority items that can be procured more quickly. FBI also has authority to carry over a limited amount of funds into the subsequent fiscal year, and officials from FBI's central budget office said this was helpful during a CR. OMB has also helped agencies manage during a CR by providing more than the automatic apportionment when justified. 
While OMB automatically apportions funds to agencies based upon the lower of the percentage of the year covered by the CR or the seasonal rate of obligations for that same time period, OMB recognizes that some programs may need more of their appropriation available at the beginning of the fiscal year during a CR period. OMB will adjust the apportionment upward in some cases, but these are rare exceptions according to OMB staff. Two of our case study agencies—VHA and ACF—received exception apportionments during the study period—fiscal years 1999 to 2009. OMB apportioned funding for VHA's medical administration account to reflect its seasonal rate of obligations during the CR period in 2008. According to ACF officials, between 2003 and 2008, OMB also apportioned ACF's LIHEAP funding based upon its seasonal rate of obligations during the CR period. The exception apportionment allowed ACF to obligate the bulk of the funds in the first quarter when heating assistance is most needed. Officials from the remaining four case study agencies—BOP, FBI, FDA, and VBA—said these agencies operated with the automatically apportioned amount during CR periods since fiscal year 1999. The federal budget is an inherently political process in which Congress annually faces difficult decisions on what to fund among competing priorities and interests. CRs enable federal agencies to continue carrying out their missions and delivering services until agreement is reached on their regular appropriations. While not ideal, CRs continue to be a common feature of the annual appropriations process. They provide the parties additional time for deliberation and avoid gaps in funding. Agencies have experience managing programs within the funding constraints and uncertainty of CRs and use methods within their available authorities. However, there is no easy way to avoid or completely mitigate the effects of CRs on agency operations. 
(In the first CR of 2009, LIHEAP received a legislative anomaly providing a full-year appropriation for fiscal year 2009.) The degree of difficulty that case study agencies encountered in managing under a CR varied, but all of the agencies that we reviewed expressed similar concerns about CRs and their effects on their ability to carry out their work efficiently and effectively. These concerns included the need for repetitive activities and incremental planning. Agencies reported that CRs inhibited them from hiring staff and providing a higher level of service than if they were operating under a regular appropriation. When the CR period is long, the time for planning and program execution is compressed, which can be especially challenging when trying to implement new programs or program enhancements. Although we cannot say that the case studies represent the experiences of all federal agencies, there is nothing that suggests they are atypical. Case study examples cross program types and activities and are consistent with the views of our panel of CFOs and other budget officials. Therefore, we believe that the experiences of these six agencies provide useful insights for Congress about agency operations under CRs. We requested comments on a draft of this report from the Departments of Health and Human Services, Justice, and Veterans Affairs. The departments provided comments that were clarifying or technical in nature, and we incorporated them as appropriate. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies to the Secretary of Health and Human Services, the Attorney General, the Secretary of Veterans Affairs, and interested congressional committees. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact Denise M. Fantone at (202) 512–6806 or fantoned@gao.gov or Susan A. Poling at (202) 512–2667 or polings@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. The objectives of this report are to describe (1) the history and characteristics of continuing resolutions (CR) and (2) for selected case study agencies, how CRs have affected agency operations and what actions have been taken to mitigate the effects of CRs. To achieve our first objective, we analyzed how provisions in CRs enacted from fiscal years 1999-2009 direct agencies to operate during the CR period and how the provisions changed over time. Our analysis also covered the number and type of provisions in CRs that provided specific directives or funding levels to particular departments, agencies, and programs for the CR period. We refer to these provisions in the report as "legislative anomalies." To achieve our second objective, we conducted a case study review analyzing the effects of CRs on select agency operations. In selecting case study agencies, we focused on agencies with (1) extensive experience managing under CRs to facilitate identification of key practices and (2) a broad range of program types, service delivery mechanisms, and operational capabilities to make the findings more useful to agencies across government. 
Based on the process described below, we selected the following departments and agencies:
- Department of Health and Human Services (HHS): Administration for Children and Families (ACF) and Food and Drug Administration (FDA)
- Department of Veterans Affairs (VA): Veterans Health Administration (VHA) and Veterans Benefits Administration (VBA)
- Department of Justice (DOJ): Bureau of Prisons (BOP) and Federal Bureau of Investigation (FBI)
We used a multistep process to select these departments and agencies. We convened a panel of Chief Financial Officers (CFO) or their representatives from major cabinet-level departments in part to help us identify criteria for case study selection. Eleven of the 15 cabinet-level agencies were represented in our panel, including CFOs, deputy CFOs, and budget directors from the departments of Education, Energy, Homeland Security, Housing and Urban Development (HUD), Interior, Justice, Labor, State, Transportation, Treasury, and Veterans Affairs. The panel was specifically asked to identify (1) factors that may make it more or less difficult to manage under a CR, (2) the activities most affected, and (3) strategies agencies use for managing under CRs. The programs, activities, and other factors identified by our panel that may make it more or less difficult to make trade-offs in a CR environment were considered in our case study selection. To begin the selection process, we first analyzed the amount of time departments, covered by different appropriations acts, operated under CRs during fiscal years 1999-2008. We calculated the time between the beginning of the fiscal year—October 1—and the date when the regular appropriations were enacted for each appropriations subcommittee. We then selected departments (based upon the jurisdiction of each subcommittee) that were under a CR for more than the average of 847 days over the past 10 years (see table 2). 
Next, we eliminated from further consideration the District of Columbia because it receives significant amounts of funding outside of the regular appropriations process that may have mitigated the effect of CRs on its operations. We also eliminated the Department of State because it received 10 percent or more of its funding from fiscal years 1999-2006 from supplemental appropriations. Third, to better understand the range of issues raised by CRs across government, we examined departments within the remaining appropriations subcommittees with the intent of selecting departments that provide services in different ways (e.g., directly by federal personnel, through contracts or grants to third parties, and through the use of federal facilities). We analyzed obligations of the remaining departments based on the following four budget object class categories that were used as proxies for different types of service delivery: Personnel, Compensation, and Benefits (employee salaries and benefits); Contractual Services and Supplies (rent, services, supplies and materials); Grants and Fixed Charges (grants, insurance, and interest); and Acquisition of Assets (equipment, land and structures, investments, and loans). To maximize the usefulness of each department selected for review and to minimize any limitations of object class data, we selected departments that appeared in the top three for more than one object class. Based on this analysis, the following departments were selected: VA (personnel, contractual services and acquisition); DOJ (personnel, acquisition); and HHS (contractual services, grants). Fourth, we selected two agencies for review within each of these departments (see table 3) based on a set of criteria that were developed in part from previous GAO work and what we heard from CFOs and others who participated in our panel discussion. 
These criteria included the number of accounts, the amount of multiyear funding, whether the appropriation provided a lump sum, and whether the agency had transfer authority. We also reviewed budget data to see if any of the selected agencies received a significant amount of their resources (defined for our purposes as 10 percent or more) from offsetting collections, which are treated differently in the regular appropriations process. We reviewed the 2008 appropriation acts for selected agencies with the goal of having representation from one or more case study agency for each of the criteria. We analyzed data at the account level, and if more than one-half of an agency’s accounts met the criteria, then the agency was considered for review. We focused our analysis primarily on discretionary funding because funding for mandatory accounts occurs outside of the annual appropriations process and therefore is not directly affected by CRs. However, we included VBA because we sought to include at least one agency responsible for administering mandatory benefits with discretionary funds. To analyze the service mechanisms that agencies use to achieve their missions, we examined object class data, program activities, and agencies’ descriptions of their programs. If we found that one of the service mechanisms or factors affecting an agency’s flexibility in obligating funds was not included, we examined other agencies with large discretionary accounts in each department to see if they could make up for the deficiency. We continued this process until we selected agencies that covered a variety of budget flexibilities and ways to deliver services. Table 3 shows the agencies selected for review and how long they operated under CRs from 1999 to 2008. Overall, our six case study agencies received more than $46 billion in discretionary budget authority in 2007, accounting for approximately 10 percent of all nondefense discretionary spending. 
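The account-level screening rule described above can be sketched as follows. This is a hypothetical illustration of the rule only; the function name and the example counts are ours, not the report's:

```python
def considered_for_review(accounts_meeting_criteria, total_accounts):
    """An agency was considered for review if more than one-half of its
    accounts met the selection criteria (integer comparison avoids any
    floating-point rounding)."""
    return accounts_meeting_criteria * 2 > total_accounts

# Hypothetical agency with 5 of 8 accounts meeting a criterion
assert considered_for_review(5, 8)       # more than half -> considered
assert not considered_for_review(4, 8)   # exactly half -> not considered
```

The strict inequality reflects the report's "more than one-half" threshold: an agency with exactly half of its accounts meeting the criteria would not qualify.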
All three of our case study departments were in the top 10 in federal contract dollars by executive department and independent agencies in 2006, and accounted for approximately 22 percent of all nondefense federal contract dollars. The HHS grant portfolio is the largest in the federal government, with approximately 60 percent of the federal government’s grant dollars. To obtain a range of perspectives, we conducted semistructured interviews with officials at each department-level budget office, each agency’s budget office, and at least one program office in each agency. We discussed the effects of CRs on different types of programs and activities within these agencies. We asked agency officials to demonstrate the effects of regular appropriations being enacted after the start of the fiscal year and to distinguish the effects of the CR versus other possible causes (e.g., level of funding, changes in workload). We provided each agency a standard request for information on budget resources, activities associated with CRs, and planning documents among other things. To provide illustrative examples of the types of costs associated with CRs, we also asked for estimates of the resources needed to perform certain activities or to provide services (e.g., time, average cost of staff days) associated with CRs. In one instance, BOP provided the approximate cost of delays in awarding a contract for a new prison facility, but overall, agencies reported that they do not track these costs. However, they did provide their best estimates at our request. We have not independently verified these estimates or assessed the estimates for reliability beyond a reasonableness check but include them for illustrative purposes. Our check involved reviewing the related documentation for each estimate and corroborating the estimate with related interviews and other documents where possible. 
One of the limitations of our case study analysis is that we had to rely to a large degree on testimonial evidence because case study agencies could not provide documentation showing the foregone opportunities resulting from a CR. In general, agencies do not produce planning documents—such as spending plans or monthly hiring targets—until they have received their regular appropriations. Aside from VA, all case study agencies have operated under a CR for each of the past 11 years and therefore could only speculate on how they would have operated differently or more efficiently, except anecdotally. In general, there were too many variables for agencies to isolate the effects of CRs from other factors. Selected case studies cannot be generalized, but similarities in agency officials' accounts of operating under CRs suggest that there are broad-based commonalities in the experiences of federal agencies. When possible, we incorporated statements made by CFOs and others who participated in our panel discussion into our case study review questions and discussions with officials at case study agencies to better understand whether the effects of CRs described by case study agencies were similar to those made by our panel. To better understand the potential effects of CRs on entities receiving federal funding, we interviewed officials representing states and contractors, including the National Association of State Budget Officers, National Conference of State Legislatures, Federal Funds Information for States, one state budget officer, the Professional Services Council, and Logistics Management Institute Government Consulting. In addition, ACF also provided us with information that it received from some grant recipients regarding difficulties managing programs during CRs. We conducted this performance audit from September 2008 to September 2009 in accordance with generally accepted government auditing standards. 
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Since 1999, continuing resolutions (CR) have contained the same nine standard provisions that govern most agencies, programs, and activities covered by the CR. Two new standard provisions were added during this time period: the appropriated entitlement provision and the furlough restriction. These standard provisions are listed and described in table 4:
- Appropriates amounts necessary to continue projects and activities that were conducted in the prior fiscal year at a specific rate for operations.
- Incorporates restrictions from prior year's appropriations acts or the acts currently under consideration.
- Amounts appropriated under a CR are not available to initiate or resume projects or activities for which appropriations, funds, or authority were not available during the prior fiscal year.
- Appropriations made available under the CR shall remain available to cover all properly incurred obligations and expenditures during the CR period.
- Expenditures made during the CR period are to be charged against applicable appropriations acts once they are finally enacted.
- Apportionment time requirements under 31 U.S.C. § 1513 are suspended during the CR period, but appropriations provided under a CR must still be apportioned to comply with the Antideficiency Act and other federal laws.
- Programs or activities with a high rate of obligation or complete distribution of appropriations at the beginning of the prior fiscal year shall not follow the same pattern of obligation, nor should any obligations be made that would impinge upon final funding prerogatives.
- Agencies are directed to implement only the most limited funding action to continue operations at the enacted rate.
- Authorizes entitlements and other mandatory payments whose budget authority was provided in the prior year appropriations acts to continue at a rate to maintain program levels under current law (or to operate at present year levels).
- Amounts available for payments due on or about the first of each month after October are to continue to be made 30 days after the termination date of the CR.
- Authorizes the Office of Management and Budget (OMB) and other authorized government officials to apportion up to the full amount of the rate for operations to avoid a furlough of civilian employees. This authority may not be used until after an agency has taken all necessary action to defer or reduce nonpersonnel-related administrative expenses.
- Date on which the CR expires, usually based on the earlier of a specific date or the enactment of the annual appropriations acts.
CRs are often described as continuing projects and activities at the previous year's level, but this is not always the case. The amount provided by a CR often is based on the prior fiscal year's funding level or the "current rate" but may also be based on other documents that reflect Congress' or the Administration's more current positions on funding and operations of federal agencies and programs. The amount provided is sometimes based on an appropriations bill that has passed both the House and the Senate but has not been signed by the President or other legislative or executive documents such as a conference report or the President's budget request. Often the CR will enact a rate not to exceed the lower of the "current rate" or an amount provided for in a bill, the budget request, or some other legislative document. 
A CR will appropriate "such amounts as may be necessary" for continuing "projects or activities" that were conducted in the previous fiscal year at a specified rate for operations. For purposes of determining which government programs are covered by the resolution, the term "project or activity" refers to the total appropriation rather than the specific project or activities as provided by the President's budget request or a committee report. If an agency is operating at a rate based upon the prior year's funding level, or the current rate, during a CR period, the agency is operating within the limits of the resolution so long as the total of obligations under the appropriation does not exceed the level enacted in the prior year. Below we describe the differences in the various rates for operations and other considerations when determining the enacted rate. "Current rate" as used in a CR refers to the total amount of budget authority that was available for obligation for a project or activity during the fiscal year immediately prior to the one for which the CR is enacted. In general, the current rate refers to a sum of money rather than a program level. Thus, the amount of money available under the CR will be limited by that rate, even though an agency's workload and program needs may increase. To determine the amount available under the current rate, it is necessary to determine whether the appropriation is a 1-year, multiple-year, or no-year appropriation. For programs and activities funded through a 1-year appropriation in prior years, the current rate is equal to the total funds appropriated for the program for the previous year. In those instances in which the program has been funded by multiple-year or no-year appropriations in prior years, the current rate is equal to the total funds appropriated for the previous fiscal year plus any unobligated budget authority carried over into that year from prior years. 
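The current-rate arithmetic described above can be sketched as follows. This is a hypothetical illustration; the function name and dollar figures are ours rather than the report's:

```python
def current_rate(prior_year_appropriation, unobligated_carryover=0):
    """Current rate for a project or activity during a CR period.

    For a 1-year appropriation, the current rate equals the total funds
    appropriated for the previous year. For multiple-year or no-year
    appropriations, it also includes unobligated budget authority
    carried over into that year from prior years.
    """
    return prior_year_appropriation + unobligated_carryover

# 1-year appropriation of $50 million in the prior year
assert current_rate(50_000_000) == 50_000_000

# No-year appropriation: $50 million appropriated for the prior year
# plus $5 million of unobligated balances carried into that year
assert current_rate(50_000_000, 5_000_000) == 55_000_000
```

Note that under a "not exceeding the current rate" provision, any unobligated balance carried into the CR period itself would then be deducted from this figure, as the report describes next.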
When the CR appropriates funds to continue an activity at a rate for operations "not exceeding the current rate" or "not in excess of the current rate," the project or activity has no more funds than it had available for obligation in the prior fiscal year. Thus, if the appropriation is multiple-year or no-year funding, any unobligated balance carried over into the CR period must be deducted from the current rate in determining the amount of funds appropriated by the CR. If this were not done, the project or activity would be funded at a higher level in the present year than it was in the prior year.

Other Rates for Operations
The CR may also appropriate funds to continue a project or activity at a rate for operations in reference to legislative documents, such as House, Senate, or Conference Reports, or executive documents, such as the President's budget request. Often, the CR will provide for the possibility of several rates for operations depending upon where the appropriations bill is in the legislative process at the beginning of the fiscal year. In such cases, for each appropriation account, the agency must compare the amounts referenced in the CR to determine the enacted rate for that particular account. The rate for operations specified in the CR, regardless of whether it is the current rate or based on another amount in a legislative document, is an annual amount. The continuing resolution, whether lasting 1 day or 1 month, appropriates this full amount. As such, an agency may legally follow any pattern of obligating funds, so long as it is operating under a plan which enables continuation of activities through the fiscal year within the limits of that annual amount and is consistent with other provisions of the CR. 
Under this principle, when operating under a CR which appropriates funds at the current rate, an agency is not necessarily limited to incurring obligations at the same rate it incurred them in the corresponding time period of the preceding year. Instead, the pattern must reflect an operation that could continue activities for the fiscal year within the limits of the amounts appropriated in the previous year. OMB's apportionment of the appropriation will also affect the availability of the appropriation for obligation. Because the rate for operations changes from year to year, it is necessary to examine the language of the CR very carefully to identify the formula that has been provided for determining amounts available during the CR period. It may be necessary to examine documents other than the CR itself. Often, different appropriations accounts will be operating at different rates depending upon the status of the appropriations bill. The following two examples illustrate different rates for operations enacted in the standard provisions during the last 10 years. In figure 7, all appropriations listed in section 101 would operate at the rate for operations not exceeding the current rate. In figure 8, a project or activity may operate at the lower of either the rate for operations not exceeding the current rate or the rate based upon the amounts provided by the House and Senate bills passed before October 1, 2005. So, for example, if bills passed by the House and Senate included the same amount for an activity in fiscal year 2006, the agency would have to compare the amounts passed by the House and Senate with the current rate. If the House and Senate amount is lower, the agency will continue the project or activity at a rate based upon that amount. If the current rate is lower, the project and activities will continue at a rate for operations not exceeding the current rate. 
Also, in 2006, if the bills passed by the House and Senate provided no amount for the project or activity, the project or activity would not continue (rate for operations is zero). OMB issues apportionment guidance directing agencies on how to calculate the amount of funds available to obligate and spend during the CR period. OMB automatically apportions these amounts. The formula used to determine the apportionment has generally remained the same since fiscal year 1999. To better preserve Congress' and the President's final funding prerogatives, the apportionment is equal to the annualized amount (or rate) for each appropriation account funded by the CR multiplied by the lower of: the percentage of the year covered by the CR, or the historical seasonal rate of obligations for the period of the year covered by the CR. For example, assume an agency's annualized amount for an appropriation account was $100 million. If the initial CR period was 36 days or 10 percent of the fiscal year and the agency's rate of obligations during the first 10 percent of the fiscal year was 25 percent of its annual appropriations, then the automatic apportionment for that appropriation account would be $10 million during the CR period because the amount based on the percentage of the year is lower than the seasonal rate. If the rate of obligations was 5 percent, the automatic apportionment would be $5 million for the CR period. While the automatic apportionment formula has remained the same over the last 10 years, the calculation of the annualized amount will change depending on the rate for operations provided by the CR and other provisions.

In addition to the contacts named above, Carol Henn, Assistant Director; Julie Matta, Assistant General Counsel; Melissa Wolf, Analyst-in-Charge; Sheila Rajabiun, Senior Attorney; Aglae Cantave; Juan Cristiani; Felicia Lopez; and Tom McCabe made key contributions to this report. Leah Querimit Nash, Albert Sim, and Jessica Thomsen also contributed. 
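The automatic apportionment formula described earlier can be sketched as follows, using the report's own $100 million example. The function name is ours; percentages are expressed as whole numbers so the arithmetic stays exact:

```python
def automatic_apportionment(annualized_amount, pct_of_year_covered, seasonal_pct):
    """OMB's automatic apportionment during a CR period: the annualized
    amount for the account multiplied by the lower of (a) the percentage
    of the year covered by the CR and (b) the historical seasonal rate
    of obligations for that same period."""
    return annualized_amount * min(pct_of_year_covered, seasonal_pct) // 100

# $100 million annualized amount; a 36-day CR covering about 10 percent
# of the year; seasonal obligation rate of 25 percent. The percentage of
# the year is the lower figure, so $10 million is apportioned.
assert automatic_apportionment(100_000_000, 10, 25) == 10_000_000

# With a 5 percent seasonal rate, the seasonal rate is the lower figure.
assert automatic_apportionment(100_000_000, 10, 5) == 5_000_000
```

Taking the lower of the two percentages is what preserves Congress' and the President's final funding prerogatives: the agency never receives more early in the year than either its historical spending pattern or the elapsed portion of the year would justify.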
In all but 3 of the last 30 years, Congress enacted a continuing resolution (CR) allowing federal agencies to continue operating when their regular appropriations had not been passed. CRs appropriate funds generally through rates for operations--funding formulas frequently referenced to the previous years' appropriations acts or a bill that has passed either the House or Senate--instead of a specific amount. GAO was asked to examine how CRs have changed over time, the effect CRs have had on selected agency operations, and actions that have been taken to mitigate the effects. Accordingly, GAO analyzed CR provisions enacted over the past 10 years and did a case study review of selected agencies that have considerable experience with CRs, represent different ways of providing services, and have different operational capabilities. Case study agencies were the Administration for Children and Families, Bureau of Prisons, Federal Bureau of Investigation, Food and Drug Administration, Veterans Benefits Administration, and Veterans Health Administration. Since 1999, all agencies operated under a CR for some period of time. The CRs included 11 standard provisions that provided direction on the availability of funding and demonstrated the temporary nature of CRs. During CR periods, these standard provisions required most agencies to operate under a conservative rate of spending and imposed limitations on certain activities. However, CRs provided some agencies or programs funding or direction different from what was provided by the standard provisions, especially under longer-term CRs. These specific provisions--called legislative anomalies--may alleviate some challenges of operating during the CR period. Over the last decade, the duration of individual CRs ranged from 1 to 157 days and the CR period lasted 3 months on average. All six case study agencies reported that operating within the limitations of the CR resulted in inefficiencies. 
The most common inefficiencies reported were delays to certain activities, such as hiring, and repetitive work, including issuing multiple grants or contracts. Case study agencies also reported that CRs limited management options, making trade-offs more difficult. Both the limitations on planning and the amount of additional work varied by agency and activity and depended in large part on the number and duration of CRs. After operating under CRs for a prolonged period, agencies faced additional challenges executing their budget in a compressed time frame. Officials from three agencies said that multiyear budget authority was helpful for managing funds in these circumstances. CRs enabled agencies to continue to carry out their missions until their regular appropriations were enacted.
The Internal Revenue Code (IRC) defines pension plans as either defined benefit or defined contribution and includes separate requirements for each type of plan. Under a defined benefit plan, the employer, as plan sponsor, is responsible for funding the promised benefit, investing and managing the plan assets, and bearing the investment risk. If a defined benefit plan terminates with insufficient assets to pay promised benefits, the Pension Benefit Guaranty Corporation (PBGC) provides plan termination insurance to pay participants' pension benefits up to certain limits. Under defined contribution plans, employees have individual accounts to which employers, employees, or both make periodic contributions. Plans that allow employees to choose to contribute a portion of their pre-tax compensation to the plan under section 401(k) of IRC are generally referred to as 401(k) plans. In many 401(k) plans, employees can control the investments in their account, while in other plans the employer controls the investments. Employee stock ownership plans (ESOP) may also be combined with other pension plans, such as a profit-sharing plan or a 401(k) plan. Investment income earned on a 401(k) plan accumulates tax-free until an individual withdraws the funds. In a defined contribution plan, the employee bears the investment risk, and plan participants have no termination insurance. The Internal Revenue Service (IRS) and the Pension and Welfare Benefits Administration (PWBA) of the Department of Labor (DOL) are primarily responsible for enforcing laws related to private pension plans. Under the Employee Retirement Income Security Act of 1974 (ERISA), as amended, IRS enforces coverage and participation, vesting, and funding standards, which concern how plan participants become eligible to participate in benefit plans and earn rights to benefits, and which provide reasonable assurance that plans have sufficient assets to pay promised benefits. IRS also enforces provisions of the IRC that apply to pension plans, including provisions under section 401(k) of the IRC. 
PWBA enforces ERISA’s reporting and disclosure provisions and fiduciary standards, which concern how plans should operate in the best interest of participants. Since the 1980s, there has been a significant shift from defined benefit plans to defined contribution pension plans. Many employers sponsor both types of plans, with the defined contribution plan supplementing the defined benefit plan. However, most of the new pension plans adopted by employers are defined contribution plans. According to the Department of Labor, employers sponsored over 660,000 defined contribution plans as of 1997 compared with about 59,000 defined benefit plans. As shown in figure 1, defined contribution plans covered about 55 million participants, while defined benefit plans covered over 40 million participants in 1997. The number of employer-sponsored 401(k) plans has also grown substantially in recent years, increasing from over 17,000 in 1984 to over 265,000 in 1997. In 1997, 401(k) plans accounted for 40 percent of all employer-sponsored defined contribution plans and approximately 37 percent of all private pension plans. Approximately 33.8 million employees actively participated in a 401(k) plan, and these plans held about $1.3 trillion in assets as of 1997. The continued growth in the number of defined contribution plans and plan assets is encouraging, but concerns remain that many workers who traditionally lack pensions may not be benefiting from these plans, and the overall percentage of workers covered by pensions has remained relatively stable for many years. Furthermore, the trend toward defined contribution plans and the increased availability of lump-sum payments from pension plans when workers change jobs raise issues of whether workers will preserve their pension benefits until retirement or outlive their retirement assets. 
Similar to other large companies, Enron sponsored both a defined benefit plan and a defined contribution plan, covering over 20,000 employees. Enron’s tax-qualified pension plans consisted of a 401(k) defined contribution plan, an employee stock ownership plan, and a defined benefit cash balance plan. Under Enron’s 401(k) plan, participants were allowed to contribute from 1 to 15 percent of their eligible base pay in any combination of pre-tax salary deferrals or after-tax contributions, subject to certain limitations. Enron generally matched 50 percent of all participants’ pre-tax contributions up to a maximum of 6 percent of an employee’s base pay, with the matching contributions invested solely in the Enron Corporation Stock Fund. Participants were allowed to reallocate their company matching contributions among other investment options when they reached the age of 50. Enron’s employee stock ownership plan, like other ESOPs, was designed to encourage employee ownership in the company. The plan provided employee retirement benefits for workers’ service with the company between January 1, 1987, and December 31, 1994. No new participants were allowed into the ESOP after January 1, 1995. Finally, Enron sponsored a cash balance plan, which accrued retirement benefits to employees during their employment at Enron. An employee was eligible to be a member of the cash balance plan immediately upon being employed. According to DOL officials, the cash balance plan did not have any investments in Enron stock as of the end of 2000. If the plan is unable to pay promised benefits and is taken over by PBGC, vested participants and retirees will receive their promised benefits up to the limit guaranteed under ERISA. The Enron collapse points to the importance of prudent investment principles such as diversification, including diversification of employer matching contributions. 
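The matching formula described above can be made concrete with a small, hypothetical calculation. Only the 50-percent match rate and the 6-percent-of-base-pay limit come from the plan description; the salary and deferral amounts below are invented, and reading the limit as applying to the contributions eligible for matching is an assumption (one common interpretation of such formulas):

```python
def employer_match(base_pay: float, pretax_contribution: float) -> float:
    """Sketch of a match formula like the one described: 50% of an
    employee's pre-tax contributions, counting contributions only up
    to 6% of base pay. (The reading of the cap is an assumption.)"""
    matched_portion = min(pretax_contribution, 0.06 * base_pay)
    return 0.50 * matched_portion

# Hypothetical employee earning $50,000 who defers $5,000 (10% of pay):
# only $3,000 (6% of pay) is eligible for matching, so the match is $1,500.
print(employer_match(50_000, 5_000))  # 1500.0
```

Under the plan as described, such a match would be invested solely in the Enron Corporation Stock Fund and could not be reallocated until the participant reached age 50.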
Diversification helps individuals mitigate the risk of holding stocks by spreading their holdings over many investments and reducing excessive exposure to any one source of risk. Many workers are covered by participant-directed 401(k) plans that allow participants to allocate the investment of their account balances among a menu of investment options, including employer stock. Additionally, many plan sponsors match participants’ elective contributions with shares of employer stock. When the employer’s stock constitutes the majority of employees’ account balances and is the only type of matching contribution the employer provides, employees risk losing more than their jobs if the company goes out of business or into serious financial decline; they may also lose a major portion of their retirement savings. For example, DOL reports that 63 percent of Enron’s 401(k) assets were invested in company stock as of the end of 2000. These concentrations are the result both of employee investment choice and employer matching with company stock. The types of losses experienced by Enron employees could have been limited if employees had diversified their account balances and if they had been able to diversify their company matching contributions more quickly. Companies prefer to match employees’ contributions with company stock for a number of reasons. First, when a company makes its matching contribution in the form of company stock, issuing the stock has little impact on the company’s financial statement in the short term. Second, stock contributions are fully deductible as a business expense for tax purposes at the share price in effect when the company contributes them. Third, matching contributions in company stock put more company shares in the hands of employees, who some officials feel are less likely to sell their shares if the company’s profits are less than expected or in the event of a takeover. 
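The risk-reduction effect of diversification can be sketched with the textbook case of an equally weighted portfolio of uncorrelated assets, whose volatility falls with the square root of the number of holdings. The volatility figure below is hypothetical, and real stocks are positively correlated, so the actual reduction would be smaller than this idealized case suggests:

```python
import math

def portfolio_volatility(asset_volatility: float, n_assets: int) -> float:
    """Volatility of an equally weighted portfolio of n uncorrelated
    assets that each have the same individual volatility (a standard
    textbook simplification): sigma / sqrt(n)."""
    return asset_volatility / math.sqrt(n_assets)

# All savings in one stock with a hypothetical 40% annual volatility,
# versus the same money spread equally across 20 such uncorrelated stocks:
print(portfolio_volatility(0.40, 1))             # 0.4
print(round(portfolio_volatility(0.40, 20), 3))  # 0.089
```

The single concentrated holding carries more than four times the volatility of the diversified portfolio, which illustrates why heavy concentration in one employer's stock exposes retirement savings to so much firm-specific risk.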
Finally, companies point out that matching with company stock promotes a sense of employee ownership, linking the interests of employees with the company and other shareholders. Some pension experts have said that easing employer restrictions on when employees are allowed to sell their company matching contributions would increase their ability to diversify. In 1997, a majority of the Pension and Welfare Benefits Administration Advisory Council working group on employer assets in ERISA plans recommended that participants in 401(k) plans be able to sell employer stock when they become vested in the plan. Additionally, legislation has recently been introduced that would limit the amount of employer stock that can be held in participants’ 401(k) accounts and provide participants greater freedom to diversify their employer matching contributions. Proponents of allowing employees to diversify employer stock matching contributions more quickly say that this would benefit both employers and employees by maintaining the tax and financial benefits for the company while providing employees with more investment freedom and increased retirement benefit security. However, others have expressed concern that further restrictions on employer plan designs may reduce incentives for employers to sponsor plans or provide matching contributions. Even with opportunities to diversify, studies indicate that employees will need education to improve their ability to manage their retirement savings. Numerous studies have looked at how well individuals who are currently investing understand investments and the markets. On the basis of those studies, it is clear that among those who save through their company’s retirement programs or on their own, large percentages of the investing population are unsophisticated and do not fully understand the risks associated with their investment choices. 
For example, one study found that 47 percent of 401(k) plan participants believe that stocks are components of a money market fund, and 55 percent of those surveyed thought that they could not lose money in government bond funds. Another study on the financial literacy of mutual fund investors found that less than half of all investors correctly understood the purpose of diversification. These studies and others indicate the need for enhanced investment education about such topics as investing, the relationship between risk and return, and the potential benefits of diversification. In addition to investor education, employees may need more individualized investment advice. Such investment advice becomes even more important as participation in 401(k) plans continues to increase. ERISA does not require plan sponsors to make investment advice available to plan participants. Under ERISA, providing investment advice results in fiduciary responsibility for those providing the advice, while providing investment education does not. ERISA does, however, establish conditions employers must meet in order to be shielded from fiduciary liability related to investment choices made by employees in their participant-directed accounts. In 1996, DOL issued guidance to employers and investment advisors on how to provide educational investment information and analysis to participants without triggering fiduciary liability. DOL recently issued guidance about investment advice, making it easier for plans to use independent investment advisors to provide advice to employees in retirement plans. Industry representatives we spoke with said more companies are providing informational sessions with investment advisors to help employees better understand their investments and the risk of not diversifying. They also said that changes are needed under ERISA to better shield employers from fiduciary liability for investment advisors’ recommendations to individual participants. 
ERISA currently prohibits fiduciary investment advisors from engaging in transactions with clients’ plans where they have a conflict of interest, for example, when the advisors are providing other services such as plan administration. As a result, investment advisors cannot provide specific investment advice to 401(k) plan participants about their firm’s investment products without approval from DOL. Various legislative proposals have been introduced that would address employers’ concern about fiduciary liability when they make investment advice available to plan participants and make it easier for fiduciary investment advisors to provide investment advice to participants when they also provide other services to the participants’ plan. However, concerns remain that such proposals may not adequately protect plan participants from conflicted advice. Enron’s failure highlights the importance of plan participants receiving clear information about their pension plan and any changes to it that could affect plan benefits. Current ERISA disclosure requirements provide only minimum guidelines that firms must follow on the type of information they provide plan participants. Improving the amount of disclosure provided to plan participants and also ensuring that such disclosure is in plain English could help participants better manage the risks they face. Enron’s pension plans illustrate the complex nature of some plan designs that may be difficult for participants to understand. For example, Enron’s pension plans included a floor-offset arrangement. Such arrangements consist of separate, but associated defined benefit and defined contribution plans. The benefits accrued under one plan offset the benefit payable from the other. In 1987, Congress limited the use of such plans. However, plans in existence when the provision was enacted, including Enron’s plan, were grandfathered. 
In addition, Enron’s conversion of its defined benefit plan from one type of benefit formula to another illustrates the types of changes, and their consequent effects on benefits, that plan participants need to understand. Enron’s defined benefit plan was converted from a final average pay formula—where the pension benefit is a percentage of the participant’s final years of pay multiplied by his or her length of service—to a cash balance formula, which expresses the defined benefit as a hypothetical account balance. As we have previously reported, conversions to cash balance plans can be advantageous to certain groups of workers—for example, those who switch jobs frequently—but can lower the pension benefits of others. The extent to which Enron employees were informed about or understood the effect of the floor-offset or the conversion of their defined benefit plan to a cash balance formula is unclear. As stated in a prior GAO report on cash balance plans, we found wide variation in the type and amount of information workers receive about plan changes that can potentially reduce pension benefits. Based in part on our recommendations, the Congress, under the Economic Growth and Tax Relief Reconciliation Act of 2001, required that employers provide participants more timely and clear information concerning changes to plans that could reduce their future benefits. The Treasury Department is responsible for issuing the applicable regulations implementing this requirement. Other types of information may also be beneficial to plan participants. Currently, ERISA requires that plan administrators provide each plan participant with a summary of certain financial data reported to DOL. As we previously reported, the Secretary of Labor could require that plan administrators provide plan participants with information about the employers’ financial condition and other information. 
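The difference between the two benefit formula types can be sketched numerically. All plan parameters below (the accrual rate, pay credit, interest credit, and salary) are hypothetical; the report does not disclose Enron's actual formulas:

```python
def final_average_pay_benefit(final_avg_pay: float, years: int,
                              accrual_rate: float = 0.015) -> float:
    """Annual pension under a final-average-pay formula: a percentage
    of final average pay multiplied by length of service. The 1.5%
    accrual rate is a hypothetical illustration."""
    return accrual_rate * years * final_avg_pay

def cash_balance_account(annual_pay: float, years: int,
                         pay_credit: float = 0.05,
                         interest_credit: float = 0.05) -> float:
    """Hypothetical account balance under a cash balance formula: each
    year the account earns an interest credit and receives a pay credit."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + interest_credit) + pay_credit * annual_pay
    return balance

# A 30-year employee with $80,000 pay under each (hypothetical) formula:
print(final_average_pay_benefit(80_000, 30))       # 36000.0 per year
print(round(cash_balance_account(80_000, 30), 2))  # hypothetical account balance
```

As the report notes, which design produces the larger benefit depends heavily on tenure: long-service employees tend to fare better under a final-average-pay formula, while frequent job changers may fare better under a cash balance design.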
Such information could enable employees to be more fully informed about their holdings and any potential risks associated with them. Under ERISA, fiduciaries are held to high but broad standards. Persons who perform certain tasks, generally involving the use of a plan’s assets, become fiduciaries because of those duties. Others, such as the plan sponsor, the plan administrator, or a trustee are fiduciaries because of their position. Fiduciaries are required to act solely in the interest of plan participants and beneficiaries. They are to adhere to a standard referred to as the prudent expert rule, which requires them to act as a prudent person experienced in such matters would in similar circumstances. Fiduciaries are required to follow their plan’s documents and act in accordance with the terms of the plan as it is set out. If fiduciaries do not perform their duties in accordance with ERISA standards, they may be held personally liable for any breach of their duty. Yet, even with the high standards and broad guidance provided by ERISA, in some cases the actions of fiduciaries can seem to conflict with the best interests of plan participants. During the period when revelations about Enron’s finances were contributing to the steady devaluation of Enron’s stock price, Enron’s plan fiduciaries imposed a lockdown on the 401(k) plan, preventing employees from making withdrawals or investment transfers. Enron imposed the lockdown to change recordkeepers, an acceptable practice. Some observers, however, have questioned whether Enron employees were sufficiently notified about the lockdown. Observers have also questioned the equity of treatment between Enron senior executives and Enron workers during the lockdown. Enron’s employees were unable to make changes to their 401(k) accounts during the plan’s lockdown period. However, Enron executives did not face similar restrictions on company stock not held in the plan. 
Fairness would suggest that company executives should face similar restrictions in their ability to sell company stock during lockdown periods when workers are unable to make 401(k) investment changes. This is especially true for those executives who serve as pension plan fiduciaries, including plan trustees. The Enron collapse, although not by itself evidence that private pension law should be changed, serves to illustrate what can happen to employees’ retirement savings under certain conditions. Specifically, it illustrates the importance of diversification for retirement savings as well as employees’ need for enhanced education, appropriate investment advice, and greater disclosure. All of these may help them better navigate the risks they face in saving for retirement. In addition to the broad issues of diversification and education, Enron’s collapse raises questions about the relationship between various plan designs and participant benefit security. In particular, Congress may wish to consider whether further restrictions on floor-offset arrangements are warranted, whether to provide additional employee flexibility in connection with matches in the form of employer stocks, and whether to limit the amount of employer stock that can be held in certain retirement saving plans. Resolving these issues will require considering the tradeoffs between providing greater participant protections and employers’ need for flexibility in plan design. Finally, Congress will have to weigh whether to rely on the broad fiduciary standards established in ERISA that currently govern fiduciary actions or to impose specific requirements that would govern certain plan administrative operations such as plan investment freezes or lockdowns. | The collapse of the Enron Corporation and the resulting loss of employee retirement savings highlighted several key vulnerabilities in the nation's private pension system. 
Asset diversification was a crucial lesson, especially for defined contribution plans, in which employees bear the investment risk. The Enron case underscores the importance of encouraging employees to diversify. Workers need clear and understandable information about their pension plans to make sound decisions on retirement savings. Although disclosure rules require plan sponsors to provide participants with a summary of their plan benefits and rights and to notify them when benefits are changed, this information is not always clear, particularly in the case of complex plans like floor-offset arrangements. Employees, like other investors, also need reliable and understandable information on a company's financial condition and prospects. Fiduciary standards form the cornerstone of private pension protections. These standards require plan sponsors to act solely in the interest of plan participants and beneficiaries. The Enron investigations should determine whether plan fiduciaries acted in accordance with their responsibilities. |
In the United States, the practice of pharmacy is regulated by state boards of pharmacy, which establish and enforce standards intended to protect the public. State boards of pharmacy also license pharmacists and pharmacies. To legally dispense a prescription drug, a licensed pharmacist working in a licensed pharmacy must be presented a valid prescription from a licensed health care professional. The requirement that drugs be prescribed and dispensed by licensed professionals helps ensure patients receive the proper dose, take the medication correctly, and are informed about warnings, side effects, and other important information about the drug. Under the Federal Food, Drug, and Cosmetic Act (FDCA), as amended, FDA is responsible for ensuring the safety, effectiveness, and quality of domestic and imported drugs. To gain approval for the U.S. market, a drug manufacturer must demonstrate that a drug is safe and effective, and that the manufacturing methods and controls that will be used in the specific facility where it will be manufactured meet FDA standards. The same drug manufactured in another facility not approved by FDA—such as a foreign-made version of an approved drug—may not be sold legally in the United States. Drugs are subject to other statutory and regulatory standards relating to purity, labeling, manufacturing, and packaging. Failure to meet these standards could result in a drug being considered illegal for sale in the United States. The FDCA requires that drugs be dispensed with labels that include the name of the prescriber, directions for use, and cautionary statements, among other things. A drug is considered misbranded if its labeling or container is misleading, or if the label fails to include required information. Prescription drugs dispensed without a prescription are also considered misbranded. 
In addition, if a drug is susceptible to deterioration and must, for example, be maintained in a temperature-controlled environment, it must be packaged and labeled in accordance with regulations and manufacturer standards. Drugs must also be handled to prevent adulteration, which may occur, for example, if held under unsanitary conditions leading to possible contamination. FDA-approved drugs manufactured in foreign countries, including those sold over the Internet, are subject to the same requirements as domestic drugs. Further, imported drugs may be denied entry into the United States if they “appear” to be unapproved, adulterated, or misbranded, among other things. While the importation of such drugs may be illegal, FDA has allowed individuals to bring small quantities of certain drugs into the United States for personal use under certain circumstances. We obtained 1 or more samples of 11 of the 13 drugs we targeted, both with and without a patient-provided prescription. Drug samples we received from other foreign pharmacies came from Argentina, Costa Rica, Fiji, India, Mexico, Pakistan, Philippines, Spain, Thailand, and Turkey. Most of the drugs—45 of 68—were obtained without a patient-provided prescription. These included drugs for which physician supervision is of particular importance due to the possibility of severe side effects, such as Accutane, or the high potential for abuse and addiction, such as the narcotic painkiller hydrocodone. (See table 2.) Although most of the samples we received were obtained without a patient-provided prescription, prescription requirements varied. Five U.S. and all 18 Canadian pharmacies from which we obtained drug samples required the patient to provide a prescription. The remaining 24 U.S. pharmacies generally provided a prescription based on a general medical questionnaire filled out online by the patient. 
Questionnaires requested information on the patient’s physical characteristics, medical history, and condition for which drugs were being purchased. Several pharmacy Web sites indicated that a U.S.-licensed physician reviews the completed questionnaire and issues a prescription. The other foreign Internet pharmacies we ordered from generally had no prescription requirements, and many did not seek information regarding the patient’s medical history or condition. The process for obtaining a drug from many of these pharmacies involved only selecting the desired medication and submitting the necessary billing and shipping information. (See table 3.) None of the 21 prescription drug samples we received from other foreign Internet pharmacies included a dispensing pharmacy label that provided patient instructions for use, and only 6 of these samples came with warning information. Lack of instructions and warnings on these drugs leaves consumers who take them at risk for potentially dangerous drug interactions or side effects from incorrect or inappropriate use. For example, we received 2 samples purporting to be Viagra, a drug used to treat male sexual dysfunction, without any warnings or instructions for use. (See fig. 1.) According to its manufacturer, this drug should not be prescribed for individuals who are currently taking certain heart medications, as it can lower blood pressure to dangerous levels. Additionally, two samples of Roaccutan, a foreign version of Accutane, arrived without any instructions in English. (See fig. 2.) Possible side effects of this drug include birth defects and severe mental disturbances. Compounding the concerns regarding the lack of warnings and patient instructions for use, none of the other foreign pharmacies ensured patients were under the care of a physician by requiring that a prescription be submitted before the order is filled. 
We observed other evidence of improper handling among 13 of the 21 drug samples we received from other foreign Internet pharmacies. For example, 3 samples of Humulin N were not shipped in accordance with manufacturer handling specifications. Despite the requirement that this drug be stored under temperature-controlled and insulated conditions, the samples we received were shipped in envelopes without insulation. (See fig. 3.) Similarly, 6 samples of other drugs were shipped in unconventional packaging, in some instances with the apparent intention of concealing the actual contents of the package. For example, the sample purporting to be OxyContin was shipped in a plastic compact disc case wrapped in brown packing tape—no other labels or instructions were included, and a sample of Crixivan was shipped inside a sealed aluminum can enclosed in a box labeled “Gold Dye and Stain Remover Wax.” (See fig. 4.) Additionally, 5 samples we received were damaged and included tablets that arrived in punctured blister packs, potentially exposing pills to damaging light or moisture. (See fig. 5.) One drug manufacturer noted that damaged packaging may also compromise the validity of drug expiration dates. Among the 21 drug samples from other foreign pharmacies, manufacturers determined that 19 were not approved for the U.S. market for various reasons, including that the labeling or the facilities in which they were manufactured had not been approved by FDA. For example, the manufacturer of one drug noted that 2 samples we received of that drug were packaged under an alternate name used for the Mexican market. The manufacturer of another drug found that 3 samples we received of that drug were manufactured at a facility unapproved to produce drugs for the U.S. market. 
In all but 4 instances, however, manufacturers determined that the chemical composition of the samples we received from other foreign Internet pharmacies was comparable to the chemical composition of the drugs we had ordered. Two samples of one drug were found by the manufacturer to be counterfeit and contained a different chemical composition than the drug we had ordered. In both instances the manufacturer reported that the samples had a lower quantity of the active ingredient, and the safety and efficacy of the samples could not be determined. Manufacturers also found 2 additional samples to have a significantly different chemical composition than that of the product we ordered. In contrast to the drug samples received from other foreign Internet pharmacies, all 47 of the prescription drug samples we received from Canadian and U.S. Internet pharmacies included labels from the dispensing pharmacy that generally provided patient instructions for use, and 87 percent of these samples (41 of 47) included warning information. Furthermore, all samples were shipped in accordance with special handling requirements, where applicable, and arrived undamaged. Manufacturers reported that 16 of the 18 samples from Canadian Internet pharmacies were unapproved for sale in the United States, citing, for example, unapproved labeling and packaging. However, the samples were all found to be comparable in chemical composition to the products we ordered. Finally, the manufacturer found that 1 sample of a moisture-sensitive medication from a U.S. Internet pharmacy was inappropriately removed from the sealed manufacturer container and dispensed in a pharmacy bottle. Table 4 summarizes the problems we identified among the 68 samples we received. We observed questionable characteristics and business practices of some of the Internet pharmacies from which we received drugs. 
We ultimately did not receive six of the orders we placed and paid for, suggesting the potentially fraudulent nature of some Internet pharmacies or entities representing themselves as such. The six orders were for Clozaril, Humulin N, and Vicodin, and cost over $700 in total. Five of these orders were placed with non-Canadian foreign pharmacies and one was placed with a pharmacy whose location we could not determine. We followed up with each pharmacy in late April and early May of 2004 to determine the status of our orders. Three indicated they would reship the product, but as of June 10, 2004, we had not received the shipments. Three others did not respond to our inquiry. We determined that at least eight of the samples we received from other foreign Internet pharmacies were shipped from return addresses that raise questions about the entities that provided them. For example, we found a shopping mall in Buenos Aires, Argentina, at the return address provided on a sample of Lipitor. Authorities assisting us in locating this address found it impossible to identify which, if any, of the many retail stores mailed the package. The return address for a sample of Celebrex was found to be a business in Cozumel, Mexico, but representatives of that business informed authorities that it had no connection to an Internet pharmacy operation. Finally, the return addresses on samples of Humulin N and Zoloft were found to be private residences in Lahore, Pakistan. Certain practices of Internet pharmacies may render it difficult for consumers to know exactly what they are buying. Some non-Canadian foreign Internet pharmacies appeared to offer U.S. versions of brand name drugs on their Web sites, but attempted to substitute an alternative drug during the order process. In some cases, other foreign pharmacies substituted alternative drugs after the order was placed. For example, one Internet pharmacy advertised brand name Accutane, which we ordered. 
The sample we received was actually a generic version of the drug made by an overseas manufacturer. About 21 percent of the Internet pharmacies from which we received drugs (14 of 68) were under investigation by regulatory agencies. The reasons for the investigations by the Drug Enforcement Administration (DEA) and FDA include allegations of selling controlled substances without a prescription; selling adulterated, misbranded, or counterfeit drugs; selling prescription drugs where no doctor-patient relationship exists; smuggling; and mail fraud. The pharmacies under investigation were concentrated among the U.S. pharmacies that did not require a patient-provided prescription (nine) and other foreign (four) pharmacies. One Canadian pharmacy was also included among those under investigation. Consumers can readily obtain many prescription drugs over the Internet without providing a prescription—particularly from certain U.S. and foreign Internet pharmacies outside of Canada. Drugs available include those with special safety restrictions, for which patients should be monitored for side effects, and narcotics, where the potential for abuse is high. For these types of drugs in particular, a prescription and physician supervision can help ensure patient safety. In addition to the lack of prescription requirements, some Internet pharmacies can pose other safety risks for consumers. Many foreign Internet pharmacies outside of Canada dispensed drugs without instructions for patient use, rarely provided warning information, and in four instances provided drugs that were not the authentic products we ordered. Consumers who purchase drugs from foreign Internet pharmacies that are outside of the U.S. regulatory framework may also receive drugs that are unapproved by FDA and manufactured in facilities that the agency has not inspected. 
Other risks consumers may face were highlighted by the other foreign Internet pharmacies that fraudulently billed us, provided drugs we did not order, and provided false or questionable return addresses. It is notable that we identified these numerous problems despite the relatively small number of drugs we purchased, consistent with problems recently identified by state and federal regulatory agencies. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have at this time. For future contacts regarding this testimony, please call Marcia Crosse at (202) 512-7119. Other individuals who made key contributions include Randy DiRosa, Margaret Smith, and Corey Houchins-Witt. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | As the demand for and the cost of prescription drugs rise, many consumers have turned to the Internet to purchase them. However, the global nature of the Internet can hinder state and federal efforts to identify and regulate Internet pharmacies to help assure the safety and efficacy of products sold. Recent reports of unapproved and counterfeit drugs sold over the Internet have raised further concerns. This testimony summarizes a GAO report: Internet Pharmacies: Some Pose Safety Risks for Consumers, GAO-04-820 (June 17, 2004). 
GAO was asked to examine (1) the extent to which certain drugs can be purchased over the Internet without a prescription; (2) whether the drugs are handled properly, approved by the Food and Drug Administration (FDA), and authentic; and (3) the extent to which Internet pharmacies are reliable in their business practices. GAO attempted to purchase up to 10 samples of 13 different drugs, each from a different pharmacy Web site, including sites in the United States, Canada, and other foreign countries. GAO assessed the condition of the samples it received and forwarded the samples to their manufacturers to determine whether they were approved by FDA, safe, and authentic. GAO also confirmed the locations of several Internet pharmacies and undertook measures to examine the reliability of their business practices. GAO obtained most of the prescription drugs it sought from a variety of Internet pharmacy Web sites without providing a prescription. GAO obtained 68 samples of 11 different drugs--each from a different pharmacy Web site in the United States, Canada, or other foreign countries, including Argentina, Costa Rica, Fiji, India, Mexico, Pakistan, Philippines, Spain, Thailand, and Turkey. Five U.S. and all 18 Canadian pharmacy sites from which GAO received samples required a patient-provided prescription, whereas the remaining 24 U.S. and all 21 foreign pharmacy sites outside of Canada provided a prescription based on their own medical questionnaire or had no prescription requirement. Among the drugs GAO obtained without a prescription were those with special safety restrictions and highly addictive narcotic painkillers. GAO identified several problems associated with the handling, FDA-approval status, and authenticity of the 21 samples received from Internet pharmacies located in foreign countries outside of Canada. Fewer problems were identified among pharmacies in Canada and the United States. 
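The sample counts reported above can be tallied with a quick sketch. The grouping labels below are illustrative restatements of GAO's reported figures, not a data structure from the report.

```python
# Tally of the 68 drug samples by pharmacy category, restating the
# counts reported above (grouping labels are illustrative).
samples = {
    "U.S., patient prescription required": 5,
    "Canada, patient prescription required": 18,
    "U.S., no patient prescription required": 24,
    "Other foreign, no patient prescription required": 21,
}

total = sum(samples.values())
no_prescription = sum(
    count for label, count in samples.items()
    if "no patient prescription" in label
)
print(f"{total} samples total; {no_prescription} obtained without a patient-provided prescription")
```

As the tally shows, roughly two-thirds of the samples came from sites that never required a patient-provided prescription, which is why the handling and authenticity findings below focus on those categories.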
None of the foreign pharmacies outside of Canada included dispensing pharmacy labels that provide instructions for use, few included warning information, and 13 displayed other problems associated with the handling of the drugs. For example, 3 samples of a drug that should be shipped in a temperature-controlled environment arrived in envelopes without insulation. Manufacturer testing revealed that most of these drug samples were unapproved for the U.S. market because, for example, the labeling or the facilities in which they were manufactured had not been approved by FDA; however, manufacturers found the chemical composition of all but 4 was comparable to the product GAO ordered. Four samples were determined to be counterfeit products or otherwise not comparable to the product GAO ordered. Similar to the samples received from other foreign pharmacies, manufacturers found most of those from Canada to be unapproved for the U.S. market; however, manufacturers determined that the chemical composition of all drug samples obtained from Canada was comparable to the product GAO ordered. Some Internet pharmacies were not reliable in their business practices. Most instances identified involved pharmacies outside of the United States and Canada. GAO did not receive six orders for which it had paid. In addition, GAO found questionable entities, such as private residences, located at the return addresses on the packaging of several samples. Finally, 14 of the 68 pharmacy Web sites from which GAO obtained samples were found to be under investigation by regulatory agencies for reasons including selling counterfeit drugs and providing prescription drugs where no valid doctor-patient relationship exists. Nine of these were U.S. sites, one was a Canadian site, and four were other foreign Internet pharmacy sites.
The 1986 Compact of Free Association between the United States, the FSM, and the RMI provided a framework for the United States to work toward achieving its three main goals: (1) to secure self-government for the FSM and the RMI, (2) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency, and (3) to ensure certain national security rights for all of the parties. The first goal has been met. The FSM and the RMI are independent nations and are members of international organizations such as the United Nations. The second goal of the Compact–advancing economic development and self-sufficiency for both countries–was to be accomplished primarily through U.S. direct financial payments (to be disbursed and monitored by the U.S. Department of the Interior) to the FSM and the RMI. For 1987 through 2003, U.S. assistance to the FSM and the RMI to support economic development is estimated, on the basis of Interior data, to be about $2.1 billion. Economic self-sufficiency has not been achieved. Although total U.S. assistance (Compact direct funding as well as U.S. programs and services) as a percentage of total government revenue has fallen in both countries (particularly in the FSM), the two nations remain highly dependent on U.S. funds. U.S. direct assistance has maintained standards of living that are higher than could be achieved in the absence of U.S. support. Further, the U.S., FSM, and RMI governments provided little accountability over Compact expenditures. The third goal of the Compact–securing national security rights for all parties–has been achieved. The Compact obligates the United States to defend the FSM and the RMI against an attack or the threat of attack in the same way it would defend its own citizens. 
The Compact also provides the United States with the right of “strategic denial,” the ability to prevent access to the islands and their territorial waters by the military personnel of other countries or the use of the islands for military purposes. In addition, the Compact grants the United States a “defense veto.” Finally, through a Compact-related agreement, the United States secured continued access to military facilities on Kwajalein Atoll in the RMI through 2016. In a previous report, we identified Kwajalein Atoll as the key U.S. defense interest in the two countries. Of these rights, only the defense veto is due to expire in 2003 if not renewed. Another aspect of the special relationship between the FSM and the RMI and the United States involves the unique immigration rights that the Compact grants. Through the original Compact, citizens of both nations are allowed to live and work in the United States as “nonimmigrants” and can stay for long periods of time, with few restrictions. Further, the Compact exempted FSM and RMI citizens from meeting U.S. passport, visa, and labor certification requirements when entering the United States. In recognition of the potential adverse impacts that Hawaii and nearby U.S. commonwealths and territories could face as a result of an influx of FSM and RMI citizens, the Congress authorized Compact impact payments to address the financial impact of these nonimmigrants on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (CNMI). By 1998, more than 13,000 FSM and RMI citizens had made use of the Compact immigration provisions and were living in the three areas. The governments of the three locations have provided the U.S. government with annual Compact nonimmigrant impact estimates; for example, in 2000 the total estimated impact for the three areas was $58.2 million. In that year, Guam received $7.58 million in impact funding, while the other two areas received no funding. 
In the fall of 1999, the United States and the two Pacific Island nations began negotiating economic assistance and defense provisions of the Compact that were due to expire. Immigration issues were also addressed. According to the Department of State, the aims of the amended Compacts are to (1) continue economic assistance to advance self-reliance, while improving accountability and effectiveness; (2) continue the defense relationship, including a 50-year lease extension (beyond 2016) of U.S. military access to Kwajalein Atoll in the RMI; (3) strengthen immigration provisions; and (4) provide assistance to lessen the impact of Micronesian migration on Hawaii, Guam, and the CNMI. Under the amended Compacts with the FSM and the RMI, new congressional authorizations of approximately $3.5 billion in funding would be required over the next 20 years, with a total possible authorization through 2086 of $6.6 billion. Economic assistance would be provided to the two countries for 20 years–from 2004 through 2023–with all subsequent funding directed to the RMI for continued U.S. access to military facilities in that country. Under the U.S. proposals, annual grant amounts to each country would be reduced each year in order to encourage budgetary self-reliance and transition the countries from receiving annual U.S. grant funding to receiving annual trust fund earnings. This decrease in grant funding, combined with FSM and RMI population growth, would also result in falling per capita grant assistance over the funding period–particularly for the RMI. If the trust funds established in the amended Compacts earn a 6 percent rate of return, the FSM trust fund would be insufficient to replace expiring annual grants. The RMI trust fund would replace grants in fiscal year 2024 but would become insufficient for this purpose by fiscal year 2040. Under the amended Compacts with the FSM and the RMI, new congressional authorizations of approximately $6.6 billion could be required for U.S. 
payments from fiscal years 2004 to 2086, of which $3.5 billion would be required for the first 20 years of the Compacts (see table 1). The share of new authorizations to the FSM would be about $2.3 billion and would end after fiscal year 2023. The share of new authorizations to the RMI would be about $1.2 billion for the first 20 years, with about $300 million related to extending U.S. military access to Kwajalein Atoll through 2023. Further funding of $3.1 billion for the remainder of the period corresponds to extended grants to Kwajalein and payments related to U.S. military use of land at Kwajalein Atoll. The cost of this $6.6 billion new authorization, expressed in fiscal year 2004 U.S. dollars, would be $3.8 billion. This new authorized funding would be provided to each country in the form of (1) annual grant funds targeted to priority areas (such as health, education, and infrastructure); (2) contributions to a trust fund for each country such that trust fund earnings would become available to the FSM and the RMI in fiscal year 2024 to replace expiring annual grants; (3) payments the U.S. government makes to the RMI government that the RMI transfers to Kwajalein landowners to compensate them for the U.S. use of their lands for defense sites; and (4) an extension of federal services that have been provided under the original Compact but are due to expire in fiscal year 2003. Under the U.S. proposals, annual grant amounts to each country would be reduced each year in order to encourage budgetary self-reliance and transition the countries from receiving annual U.S. grant funding to receiving annual trust fund earnings. Thus, the amended Compacts increase annual U.S. contributions to the trust funds each year by the grant reduction amount. This decrease in grant funding, combined with FSM and RMI population growth, would also result in falling per capita grant assistance over the funding period–particularly for the RMI (see fig. 1). Using published U.S. 
Census population growth rate projections for the two countries, the real value of grants per capita to the FSM would begin at an estimated $687 in fiscal year 2004 and would further decrease over the course of the Compact to $476 in fiscal year 2023. The real value of grants per capita to the RMI would begin at an estimated $627 in fiscal year 2004 and would further decrease to an estimated $303 in fiscal year 2023. The reduction in real per capita funding over the next 20 years is a continuation of the decreasing amount of available grant funds (in real terms) that the FSM and the RMI had during the 17 years of prior Compact assistance. The decline in annual grant assistance could impact FSM and RMI government budget and service provision, employment prospects, migration, and the overall gross domestic product (GDP) outlook, though the immediate effect is likely to differ between the two countries. For example, the FSM is likely to experience fiscal pressures in 2004, when the value of Compact grant assistance drops in real terms by 8 percent relative to the 2001 level (a reduction equal to 3 percent of GDP). For the RMI, however, the proposed level of Compact grant assistance in 2004 would actually be 8 percent higher in real terms than the 2001 level (an increase equal to 3 percent of GDP). According to the RMI, this increase would likely be allocated largely to the infrastructure investment budget and would provide a substantial stimulus to the economy in the first years of the new Compact. The amended Compacts were designed to build trust funds that, beginning in fiscal year 2024, yield annual earnings to replace grant assistance that ends in 2023. Both the FSM and the RMI are required to provide an initial contribution to their respective trust funds of $30 million. In designing the trust funds, the Department of State assumed that the trust fund would earn a 6 percent rate of return. 
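The sufficiency question the trust funds raise reduces to compounding arithmetic: contributions accumulate at the assumed rate of return, and the fund "replaces" grants only if one year's earnings at least equal the expiring grant amount. The sketch below illustrates that test. Only the 6 percent return assumption comes from the text; the initial deposit, contribution schedule, and grant figure are hypothetical placeholders, not the actual Compact schedules.

```python
# Sketch of the trust fund sufficiency test discussed in the text.
# Only the 6 percent return assumption comes from the source; the
# figures below are hypothetical placeholders.

def fund_balance(initial, contributions, rate):
    """Compound an initial deposit plus a stream of annual contributions."""
    balance = initial
    for contribution in contributions:
        balance = balance * (1 + rate) + contribution
    return balance

def replaces_grants(balance, rate, expiring_grant):
    """A fund 'replaces' grants if one year's earnings cover the expiring grant."""
    return balance * rate >= expiring_grant

# Hypothetical 20-year schedule in $ millions: contributions rise each
# year by the amount that annual grants are reduced.
contributions = [16 + 2 * year for year in range(20)]
balance = fund_balance(initial=30, contributions=contributions, rate=0.06)
print(f"Balance after 20 years: ${balance:,.0f}M; annual earnings at 6%: ${balance * 0.06:,.0f}M")
```

The same mechanics explain the report's findings: at a 6 percent return the FSM fund's fiscal year 2023 earnings fall short of the expiring grants, while a higher blended return (around 7.9 percent) would clear the threshold for both countries, at least initially.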
The amended Compacts do not address whether trust fund earnings should be sufficient to cover expiring federal services, but they do create a structure that sets aside earnings above 6 percent, should they occur, that could act as a buffer against years with low or negative trust fund returns. Importantly, whether the estimated value of the proposed trust funds would be sufficient to replace grants or create a buffer account would depend on the rate of return that is realized. If the trust funds earn a 6 percent rate of return, then the FSM trust fund would yield a return of $57 million in fiscal year 2023, an amount insufficient to replace expiring grants by an estimated value of $27 million. The RMI trust fund would yield a return of $33 million in fiscal year 2023, an estimated $5 million above the amount required to replace grants in fiscal year 2024. Nevertheless, the RMI trust fund would become insufficient for replacing grant funding by fiscal year 2040. If the trust funds are composed of both stocks (60 percent of the portfolio) and long-term government bonds (40 percent of the portfolio) such that the forecasted average return is around 7.9 percent, then both trust funds would yield returns sufficient to replace expiring grants and to create a buffer account. However, while the RMI trust fund should continue to grow in perpetuity, the FSM trust fund would eventually deplete the buffer account and fail to replace grant funding by fiscal year 2048. I will now discuss provisions in the amended Compacts designed to provide improved accountability over, and effectiveness of, U.S. assistance. This is an area where we have offered several recommendations in past years, as we have found accountability over past assistance to be lacking. In sum, most of our recommendations regarding future Compact assistance have been addressed with the introduction of strengthened accountability measures in the signed amended Compacts and related agreements.
I must emphasize, however, that the extent to which these provisions will ultimately provide increased accountability over, and effectiveness of, future U.S. assistance will depend upon how diligently the provisions are implemented and monitored by all governments. The following summary describes key accountability measures included in the amended Compacts and related agreements: The amended Compacts would require that grants be targeted to priority areas such as health, education, the environment, and public infrastructure. In both countries, 5 percent of the amount dedicated to infrastructure, combined with a matching amount from the island governments, would be placed in an infrastructure maintenance fund. Compact-related agreements with both countries (the so-called “fiscal procedures agreements”) would establish a joint economic management committee for the FSM and the RMI that would meet at least once annually. The duties of the committees would include (1) reviewing planning documents and evaluating island government progress to foster economic advancement and budgetary self-reliance; (2) consulting with program and service providers and other bilateral and multilateral partners to coordinate or monitor the use of development assistance; (3) reviewing audits; (4) reviewing performance outcomes in relation to the previous year’s grant funding level, terms, and conditions; and (5) reviewing and approving grant allocations (which would be binding) and performance objectives for the upcoming year. Further, the fiscal procedures agreements would give the United States control over the annual review process: The United States would appoint three government members to each committee, including the chairman, while the FSM or the RMI would appoint two government members. Grant conditions normally applicable to U.S. state and local governments would apply to each grant. 
General terms and conditions for the grants would include conformance to plans, strategies, budgets, project specifications, architectural and engineering specifications, and performance standards. Other special conditions or restrictions could be attached to grants as necessary. The United States could withhold payments if either country fails to comply with grant terms and conditions. In addition, funds could be withheld if the FSM or RMI governments do not cooperate in U.S. investigations regarding whether Compact funds have been used for purposes other than those set forth in the amended Compacts. The fiscal procedures agreements would require numerous reporting requirements for the two countries. For example, each country must prepare strategic planning documents that are updated regularly, annual budgets that propose sector expenditures and performance measures, annual reports to the U.S. President regarding the use of assistance, quarterly and annual financial reports, and quarterly grant performance reports. The amended Compacts’ trust fund management agreements would grant the U.S. government control over trust fund management: The United States would appoint three members, including the chairman, to a committee to administer the trust funds, while the FSM or the RMI would appoint two members. After the initial 20 years, the trust fund committee would remain the same, unless otherwise agreed by the original parties. The fiscal procedures agreements would require the joint economic management committees to consult with program providers in order to coordinate future U.S. assistance. However, we have seen no evidence demonstrating that an overall assessment of the appropriateness, effectiveness, and oversight of U.S. programs has been conducted, as we recommended. The successful implementation of the many new accountability provisions will require a sustained commitment by the three governments to fulfill their new roles and responsibilities. 
Appropriate resources from the United States, the FSM, and the RMI represent one form of this commitment. While the amended Compacts do not address staffing issues, officials from Interior’s Office of Insular Affairs have informed us that their office intends to post six staff in a new Honolulu office. Further, an Interior official noted that his office has brought one new staff member on board in Washington, D.C., and intends to post one person to work in the RMI (one staff member is already resident in the FSM). We have not conducted an assessment of Interior’s staffing plan and rationale and cannot comment on the adequacy of the plan or whether it represents sufficient resources in the right location. The most significant defense-related change in the amended Compacts is the extension of U.S. military access to Kwajalein Atoll in the RMI. While the U.S. government had already secured access to Kwajalein until 2016 through the 1986 Military Use and Operating Rights Agreement (MUORA), the newly revised MUORA would grant the United States access until 2066, with an option to extend for an additional 20 years to 2086. According to a Department of Defense (DOD) official, recent DOD assessments have envisioned that access to Kwajalein would be needed well beyond 2016. He stated that DOD has not undertaken any further review of the topic, and none is currently planned. This official also stated that, given the high priority accorded to missile defense programs and to enhancing space operations and capabilities by the current administration, and the inability to project the likely improvement in key technologies beyond 2023, the need to extend the MUORA beyond 2016 is persuasive. He also emphasized that the U.S. government has flexibility in that it can end its use of Kwajalein Atoll any time after 2023 by giving advance notice of 7 years and making a termination payment. We have estimated that the total cost of this extension would be $3.4 billion (to cover years 2017 through 2086).
The majority of this funding ($2.3 billion) would be provided by the RMI government to Kwajalein Atoll landowners, while the remainder ($1.1 billion) would be used for development and impact on Kwajalein Atoll. According to a State Department official, there are approximately 80 landowners. Four landowners receive one-third of the annual payment, which is based on acreage owned. This landowner funding (along with all other Kwajalein-related funds) through 2023 would not be provided by DOD but would instead continue as an Interior appropriation. Departmental responsibility for authorization and appropriation for Kwajalein-related funding beyond 2023 has not been determined, according to the Department of State. Of note, the Kwajalein Atoll landowners have not yet agreed to sign an amended land-use agreement with the RMI government to extend U.S. access to Kwajalein beyond 2016 at the funding levels established in the amended Compact. While the original Compact’s immigration provisions are not expiring, the Department of State targeted them as requiring changes. The amended Compacts would strengthen the immigration provisions of the Compact by adding new restrictions and expressly applying the provisions of the Immigration and Nationality Act of 1952, as amended (P.L. 82-414), to Compact nonimmigrants. There are several new immigration provisions in the amended Compacts that differ from those contained in the original Compact. For example, Compact nonimmigrants would now be required to carry a valid passport in order to be admitted into the United States. Further, children coming to the United States for the purpose of adoption would not be admissible under the amended Compacts. Instead, these children would have to apply for admission to the United States under the general immigration requirements for adopted children.
In addition, the Attorney General would have the authority to issue regulations that specify the time and conditions of a Compact nonimmigrant’s admission into the United States (under the original Compact, regulations could be promulgated to establish limitations on Compact nonimmigrants in U.S. territories or possessions). In addition, the implementing legislation for the amended Compacts would provide $15 million annually for U.S. locations that experience costs associated with Compact nonimmigrants. This amount would not be adjusted for inflation, would be in effect for fiscal years 2004 through 2023, and would total $300 million. Allocation of these funds between locations such as Hawaii, Guam, and the CNMI would be based on the number of qualified nonimmigrants in each location. Mr. Chairman and Members of the Committee, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For future contacts regarding this testimony, please call Susan S. Westin or Emil Friberg, Jr., at (202) 512-4128. Individuals making key contributions to this testimony included Leslie Holen, Kendall Schaefer, Mary Moutsos, and Rona Mendelsohn.

In 1986, the United States entered into a Compact of Free Association with the Pacific Island nations of the Federated States of Micronesia, or FSM, and the Republic of the Marshall Islands, or RMI. The Compact provided about $2.1 billion in U.S. funds, supplied by the Department of the Interior, over 17 years (1987-2003) to the FSM and the RMI.
These funds were intended to advance economic development. In a past report, GAO found that this assistance did little to advance economic development in either country, and accountability over funding was limited. The Compact also established U.S. defense rights and obligations in the region and allowed for migration from both countries to the United States. The three parties recently renegotiated expiring economic assistance provisions of the Compact in order to provide an additional 20 years of assistance (2004-2023). In addition, the negotiations addressed defense and immigration issues. The House International Relations and Resources Committees requested that GAO report on Compact negotiations. The amended Compacts of Free Association between the United States and the FSM and the RMI to renew expiring U.S. assistance could potentially cost the U.S. government about $6.6 billion in new authorizations from the Congress. Of this amount, $3.5 billion would cover payments over a 20-year period (2004-2023), while $3.1 billion represents payments for U.S. military access to Kwajalein Atoll in the RMI for the years 2024 through 2086. While the level of annual grant assistance to both countries would decrease each year, contributions to trust funds--meant to eventually replace grant funding--would increase annually by a comparable amount. Nevertheless, at an assumed annual 6 percent rate of return, earnings from the FSM trust fund would be unable to replace expiring grant assistance in 2024, while earnings from the RMI trust fund would encounter the same problem by 2040. The amended Compacts strengthen reporting and monitoring measures that could improve accountability over assistance, if diligently implemented. 
These measures include the following: assistance grants would be targeted to priority areas such as health and education; annual reporting and consultation requirements would be expanded; and funds could be withheld for noncompliance with grant terms and conditions. The successful implementation of the many new accountability provisions will require appropriate resources and sustained commitment from the United States, the FSM, and the RMI. Regarding defense, U.S. military access to Kwajalein Atoll in the RMI would be extended from 2016 through 2066, with an option to extend through 2086. Finally, Compact provisions addressing immigration have been strengthened. For example, FSM and RMI citizens entering the United States would need to carry a passport, and the U.S. Attorney General could, through regulations, specify the time and conditions of admission to the United States for these citizens.
State and local governments that receive grants from the Department of Health and Human Services (HHS) must follow the uniform administrative requirements set forth in federal regulations. When procuring property and services, these regulations require that states follow the same policies and procedures they use for procurements supported with nonfederal funds. Under HHS’s regulations, states must also ensure that contracts include any clauses required by federal statutes and executive orders. Grantees other than states and subgrantees, such as local governments, rely on their own procurement procedures, provided that they conform to applicable federal laws and the standards identified in the regulations, including standards of conduct, requirements of full and open competition in contracting, procedures for different types of procurements, and bid protest procedures to handle and resolve disputes relating to their procurements. Grantees and subgrantees must maintain a contract administration system that ensures that contractors perform in accordance with the terms, conditions, and specifications of their contracts. The procurement of contracts typically follows a process that comprises several phases, including bid solicitation and contract award processes. The bid solicitation process will begin with the development of a work plan by the contracting agency that outlines the objectives contractors will be expected to achieve and the manner in which they will be expected to achieve them. The state or locality will then issue a request-for-proposals to inform potential bidders of the government’s interest in obtaining contractors for the work specified. A request-for-proposals is a publicly advertised document that outlines information necessary to enable prospective contractors to prepare proposals properly. After these activities are completed, the contract award process begins. Once proposals have been submitted, they are evaluated to assess their relative merit.
Several key criteria are almost always considered in evaluating proposals, including price/cost, staffing, experience, and technical and/or other resources. The environment for administering social services such as Temporary Assistance for Needy Families (TANF) has been affected by changes to the nation’s workforce system. Through the Workforce Investment Act (WIA) of 1998 (P.L. 105-220), the Congress sought to replace the fragmented training and employment system that existed under the previous workforce system. WIA requires state and local entities that carry out specified federal programs to participate in local one-stop centers—local centers offering job placement assistance for workers and opportunities for employers to find workers—by making employment and training-related services available. While TANF is not a mandatory partner at one-stop centers, some states are using one-stop centers to serve TANF recipients. WIA called for the development of workforce investment boards to oversee WIA implementation at the state and local levels. WIA listed the types of members that should participate on the workforce boards, such as representatives of business, education, labor, and other segments of the workforce investment community, but did not specify a minimum or maximum number of members. Local workforce boards can contract for services delivered through one-stop centers. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) broadened both the types of TANF services that could be contracted out and the types of organizations that could serve as TANF contractors. The act authorized states to contract out the administration and provision of TANF services, including determining program eligibility. Under the prior Aid to Families with Dependent Children (AFDC) program, the determination of program eligibility could not be contracted out to nongovernmental agencies.
In addition, under the PRWORA provision commonly referred to as charitable choice, states are authorized to contract with faith-based organizations to provide TANF services on the same basis as any other nongovernmental provider without impairing the religious character of such organizations. Such changes in the welfare environment have affected the involvement of for-profit organizations in TANF contracting. Prior to PRWORA, contracting in the welfare arena was mainly for direct service delivery such as job training, job search instruction, and child care provision. While some for-profit companies provided services, service providers were mostly nonprofit. Large for-profit companies were mainly involved as contractors that designed automated data systems. In the broader area of social services, large for-profits were also involved in providing various services for child support enforcement. Now that government agencies can contract out their entire welfare systems under PRWORA, there has been an increase in the extent to which large for-profit companies have sought out welfare contracts, in some cases on a large scale that includes determining eligibility and providing employment and social services. Federal and state funds are used to serve TANF recipients. For federal fiscal years 1997 to 2002, states received federal TANF block grants totaling $16.5 billion annually. With respect to state funding, PRWORA includes a maintenance-of-effort provision, which requires states to provide 75 to 80 percent of their historic level of funding. States that meet federally mandated minimum participation rates must provide at least 75 percent of their historic level of funding, and states that do not meet these rates must provide at least 80 percent. The federally mandated participation rates specify the percentages of states’ TANF caseloads that must be participating in work or work-related activities each year. HHS oversees states’ TANF programs. 
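The maintenance-of-effort thresholds described above reduce to a simple conditional rule. The sketch below encodes that rule; the dollar figure in the example is hypothetical, used only for illustration.

```python
# PRWORA maintenance-of-effort rule as described above: states meeting
# the federally mandated work participation rates must provide at least
# 75 percent of their historic funding level; states that do not must
# provide at least 80 percent.

def required_moe(historic_funding, meets_participation_rates):
    """Return the minimum state spending required under the MOE provision."""
    floor = 0.75 if meets_participation_rates else 0.80
    return historic_funding * floor

# Hypothetical state with a $200 million historic funding level:
print(required_moe(200_000_000, meets_participation_rates=True))   # 75 percent floor
print(required_moe(200_000_000, meets_participation_rates=False))  # 80 percent floor
```

The 5-percentage-point spread gives states a modest financial incentive to meet the mandated work participation rates.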
In accordance with PRWORA and federal regulations, HHS has broad responsibility to oversee the proper state expenditure of TANF funds and the achievement of related program goals. While TANF legislation prohibits HHS from regulating states in areas where it lacks express statutory authority, HHS must still oversee state compliance with program requirements, such as mandated work participation rates. Nearly all states and the District of Columbia contract with nongovernmental entities for the provision of TANF-funded services at the state level, local level, or both levels of government. In 2001, state and local governments spent more than $1.5 billion on contracts with nongovernmental entities, or at least 13 percent of all federal TANF and state maintenance-of-effort expenditures (excluding those for cash assistance). The majority of these contracts are with nonprofit organizations. Although TANF contractors provide a wide array of services, the most commonly contracted services reported by our survey respondents include employment and training services, job placement services, and support services to promote job entry or retention. In addition, eligibility determination for cash assistance under TANF or other TANF-funded services has been contracted out in one or more locations in some states. Most state TANF contracting agencies pay contractors a fixed overall price or reimburse them for their costs rather than base contract payments on achieving program objectives for TANF recipients. Contracting for TANF-funded services occurs in the District of Columbia and every state except South Dakota. However, the level of government at which contracting occurs varies, which complicates efforts to provide comprehensive information on TANF-funded contracts. Contracting occurs only at the state level in 24 states, only at the local level in 5 states, at both levels in the remaining 20 states, and in the District of Columbia. 
Moreover, contracting at the local level encompasses contracting by agencies such as county departments of social or human services as well as workforce development boards whose jurisdiction may include several counties. Our national survey of TANF contracting provides comprehensive information on contracting at the state level but incomplete and nonrepresentative information on local contracting. In 2001, state and local governments expended at least $1.5 billion in TANF funds for contracted services. With respect to state-level contracting, contracts with nonprofit organizations accounted for 87 percent of TANF funds while contracts with for-profit organizations accounted for 13 percent of funds (see fig. 1). Seventy-three percent of state-level contracts are with nonprofit organizations and 27 percent are with for-profit organizations. Under PRWORA’s charitable choice provision, some states have established initiatives to promote the use of faith-based organizations. Contracts with faith-based organizations constitute a smaller proportion of all contracted TANF funds than contracts with secular nonprofit organizations and for-profit organizations. As shown in figure 1, contracts with faith-based organizations account for 8 percent of TANF funds spent by state governments on contracts with nongovernmental entities nationally. In several states, large percentages of the funds contracted by states and localities that were identified by our national survey are in contracts with for-profit organizations. As shown in table 1, at least half of the contracted funds in 8 states are with for-profit organizations. Moreover, in 11 states, more than 15 percent of all TANF-contracted funds identified by our survey went to faith-based organizations. The proportion of TANF funds expended for contracted services with nongovernmental entities varies considerably by state. 
Nationally, at least 13 percent of TANF funds expended for services other than cash assistance have been contracted out. As shown in table 1, the proportion of funds contracted out in 10 states in 2001 exceeded 20 percent of their fiscal year 2000 TANF fund expenditures (excluding the portion of expenditures for cash assistance). Idaho, Mississippi, New Jersey, Wisconsin, and the District of Columbia expended more than 40 percent of their TANF funds on contracted services. On the other hand, Iowa, Kansas, North Carolina, New Mexico, and Oregon spent the smallest proportion (2 percent or less of their TANF funds) on contracts with nongovernmental entities. Several large for-profit organizations and nonprofit organizations have large TANF contracts in multiple states. Our national survey of TANF contracting asked state and local respondents to identify the names of the contractors with the three largest dollar contracts in their jurisdictions. Four for-profit organizations—Curtis & Associates, Inc.; Maximus; America Works; and Affiliated Computer Services, Inc.—have contracts with the highest dollar values in two or more states. Among this group, Curtis & Associates, Inc., had the TANF contracts with the highest dollar value relative to other contractors in their respective locations. Among nonprofit contractors, Goodwill Industries, YWCA, Catholic Charities, Lutheran Social Services, Salvation Army, Urban League, United Way, Catholic Community Services, American Red Cross, and Boys & Girls Clubs all have TANF-funded contracts in two or more states. Among this group, Goodwill Industries had the TANF contracts with the highest dollar value relative to other contractors in their respective locations. States and localities contract with nongovernmental entities to provide services to facilitate employment, administer program functions, and strengthen families. 
Overall, states and localities rarely contract different types of services to nonprofit and for-profit organizations. Government entities contract out most often for services to facilitate employment. As shown in figure 2, over 40 percent of state respondents reported that half or more of their TANF-funded contracts call for the provision of education and training activities, job placement services, and support services that address barriers to work and help clients retain employment. These support services include substance abuse treatment, assistance with transportation, and other services that facilitate job entry and retention. Childcare services are less common. While the responses we obtained from local respondents about types of services contracted out may not be representative of local TANF contracting, they revealed a similar overall pattern to the responses by state respondents presented in figure 2. In some cases, states and localities have contracted with nongovernmental entities to provide program administrative functions that were required to be performed by government workers in the past, such as determining eligibility. The determination of eligibility for TANF-funded services provided to low-income families who are ineligible for cash assistance has been contracted out in one or more locations in at least 18 states. For example, one Ohio county, which offers a variety of services with varying eligibility criteria to the working poor, contracts with nongovernmental organizations to both provide and determine eligibility for the services. Government agencies in at least 4 states have contracted out eligibility determination for cash assistance under TANF, an option authorized under PRWORA. Finally, some states and localities are using TANF funds to contract for services related to the TANF objectives of preventing and reducing the incidence of nonmarital pregnancies and encouraging the formation and maintenance of two-parent families. 
For example, 20 percent of state respondents reported that half or more of their TANF contracts call for the provision of services pertaining to stabilizing families. We asked state and local governments about the use of four common types of contracts for TANF services: cost-reimbursement, fixed-price, incentive, and cost-reimbursement plus incentive. Under cost-reimbursement contracts, contracting agencies pay contractors for the allowable costs they incur, whereas under fixed-price contracts, contracting agencies pay contractors based on a pre-established overall contract price. As figure 3 shows, almost 60 percent of state respondents said that half or more of their TANF contracts are cost-reimbursement. Far fewer respondents report that half or more of their TANF contracts were incentive or cost-reimbursement plus incentive. Under incentive contracts, the amount paid to contractors is determined based on the extent to which contractors successfully achieve specified program objectives for TANF recipients, such as job placements and the retention of jobs. Cost-reimbursement plus incentive contracts pay contractors for costs they incur and provide payments above costs for the achievement of specific objectives. While the responses we obtained from local respondents may not be representative of local TANF contracting, they revealed a similar pattern to the responses by state respondents. Our survey disclosed that many state and local governments have chosen to use a contract type—cost-reimbursement—under which the government assumes a relatively high level of financial risk. Contracting agencies assume greater financial risk when they are required to pay contractors for allowable costs under cost-reimbursement contracts than when overall contract payments are limited to a pre-established price. HHS relies primarily on state single audit reports to oversee state and local procurement of TANF services and monitoring of TANF contractors. 
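As a rough illustration of how financial risk differs across the four contract types discussed above, the following sketch computes what a contracting agency would owe under each arrangement. All names and dollar figures are hypothetical and are not drawn from the survey data.

```python
def contract_payment(contract_type, allowable_costs=0.0, fixed_price=0.0,
                     placements=0, placement_bonus=0.0):
    """Illustrative payment rules for four common contract types.

    The rules mirror the definitions in the text: cost-reimbursement pays
    allowable costs incurred, fixed-price pays a pre-established price,
    incentive pays per achieved program objective (here, job placements),
    and cost-reimbursement plus incentive combines the first and third.
    """
    if contract_type == "cost-reimbursement":
        return allowable_costs          # agency bears cost overruns
    if contract_type == "fixed-price":
        return fixed_price              # contractor bears cost overruns
    if contract_type == "incentive":
        return placements * placement_bonus
    if contract_type == "cost-reimbursement plus incentive":
        return allowable_costs + placements * placement_bonus
    raise ValueError(f"unknown contract type: {contract_type}")

# A hypothetical contractor incurs $120,000 in allowable costs and
# places 40 clients in jobs, with a $2,000 bonus per placement:
print(contract_payment("cost-reimbursement", allowable_costs=120_000))
print(contract_payment("fixed-price", fixed_price=100_000))
print(contract_payment("incentive", placements=40, placement_bonus=2_000))
print(contract_payment("cost-reimbursement plus incentive",
                       allowable_costs=120_000, placements=40,
                       placement_bonus=2_000))
```

In this hypothetical, the agency's payment under the cost-reimbursement contract rises with whatever costs the contractor incurs, which is the source of the greater financial risk the survey finding describes.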
State single audit reports identified TANF procurement or subrecipient monitoring problems for about one-third of the states for the period 1999 to 2000, and subrecipient monitoring problems were identified more frequently. However, HHS officials told us that they do not know the overall extent to which state single audits have identified problems with the monitoring of nongovernmental TANF contractors or the nature of these problems because they do not analyze the reports in such a comprehensive manner. Our review of state single audit reports for 1999 and 2000 found internal control weaknesses for over a quarter of states nationwide that potentially affected the states’ ability to effectively oversee TANF contractors. HHS relies primarily on state single audits to oversee TANF contracting by states and localities. The Single Audit Act of 1984 (P.L. 98-502), as amended, requires federal agencies to use single audit reports in their oversight of state-managed programs supported by federal funds. The objectives of the act, among others, are to (1) promote sound financial management, including effective internal controls, with respect to federal funds administered by states and other nonfederal entities; (2) establish uniform requirements for audits of federal awards administered by nonfederal entities; and (3) ensure that federal agencies, to the maximum extent practicable, rely on and use single audit reports. In addition, the act requires federal agencies to monitor the use of federal funds by nonfederal entities and provide technical assistance to help them implement required single audit provisions. The results of single audits provide a tool for federal agencies to monitor whether nonfederal entities are complying with federal program requirements. 
To help meet the act’s objectives, Office of Management and Budget (OMB) Circular A-133 requires federal agencies to evaluate single audit findings and proposed corrective actions, instruct states and other nonfederal entities on any additional actions needed to correct reported problems, and follow up with these entities to ensure that they take appropriate and timely corrective action. States, in turn, are responsible for working with local governments to address deficiencies identified in single audits of local governments. Single audits assess whether audited entities have complied with requirements in up to 14 managerial or financial areas, including allowable activities, allowable costs, cash management, eligibility, and reporting. Procurement and subrecipient monitoring are the 2 of the 14 compliance areas most relevant to TANF contracting. Audits of procurement requirements assess the implementation of required procedures, including whether government contracting agencies awarded TANF contracts in a full and open manner. Audits of subrecipient monitoring requirements examine whether an entity has adequately monitored the entities to whom it has distributed TANF funds. Subrecipients of TANF funds from states can include both local governments and nongovernmental entities with whom the state has contracted. Subrecipients of TANF funds from localities can include nongovernmental TANF contractors. State single audit reports identified TANF subrecipient monitoring or procurement problems for one-third of the states. Single audits identified subrecipient monitoring deficiencies for 9 states in 1999 and 12 states in 2000. Of the 15 states that had subrecipient monitoring deficiencies in either 1999 or 2000, 6 states were cited for deficiencies in both years. State single audits identified procurement problems less frequently: for 3 states in 1999 and 4 states in 2000. 
The extent to which state single audits have identified problems with subrecipient monitoring involving TANF funds is generally equal to or greater than that for several other social service programs in which contracting occurs with nongovernmental organizations. As shown in table 2, the number of state single audits that identified deficiencies in subrecipient monitoring for the 1999 to 2000 time period is similar for TANF, child care, and the Social Services Block Grant. Fewer state audits identified such problems for child support enforcement, Medicaid, and Food Stamps. With regard to procurement, the frequency of identified deficiencies in state audits for TANF was lower than that for Medicaid but about the same as for several other programs. HHS officials told us that state single audits during this time period had identified TANF subrecipient monitoring problems in only two states—Florida and Louisiana—that involved unallowable or questionable costs and that also pertained to the oversight of nongovernmental TANF contractors. However, HHS officials also said that they do not know the overall extent to which state single audits have identified problems with the monitoring of nongovernmental TANF contractors or the nature of these problems because they do not analyze the reports in such a comprehensive manner. Our analysis of the state single audit reports that cited TANF subrecipient monitoring problems in 1999 or 2000 indicates that the reports for 14 of the 15 states identified internal control weaknesses that potentially affected the states’ ability to adequately oversee nongovernmental TANF contractors. Thus, internal control weaknesses pertaining to contractor oversight have been reported for more than a quarter of all states nationwide. (See app. III for a summary of the problems reported in each of the state single audits.) 
The reported deficiencies in states’ monitoring of subrecipients cover a wide range of problems, including inadequate reviews of the single audits of subrecipients, failure to inform subrecipients of the sources of federal funds they received, and inadequate fiscal and program monitoring of local workforce boards. The audit reports for some states, such as Alaska, Kentucky (2000 report), and Louisiana (1999 and 2000 reports), specified that the monitoring deficiencies involved or included subrecipients that were nongovernmental entities. For example, the 2000 single audit for Louisiana reported that for 7 consecutive years the state did not have an adequate monitoring system to ensure that subrecipients and social service contractors were properly audited, which indicates that misspent federal funds or poor contractor performance may not be detected and corrected. The audit reports for other states, including Arizona, Michigan, Minnesota, and Mississippi, do not specify whether the subrecipients that were inadequately monitored were governmental or nongovernmental entities. However, the reported internal control weaknesses potentially impaired the ability of these states to properly oversee either their own TANF contractors or the monitoring of TANF contractors that have contracts with local governments. For example, the 2000 single audit report for Minnesota found that the state agency did not have policies and procedures in place to monitor the activities of TANF subrecipients. The 2000 audit report for Mississippi found that the state did not review single audits of some subrecipients in a timely manner and did not perform timely follow-up in some cases when subrecipients did not submit their single audits on time. 
Even if the subrecipients referred to in both of these audit reports were solely local governmental entities, the deficiencies cited potentially limited the states’ abilities to identify and follow up in a timely manner on any problems with local monitoring of TANF contractors. HHS follows up on a state-by-state basis on the TANF-related problems cited in state single audits and focuses primarily on the problems that involve monetary findings. However, HHS does not use these reports in a systematic manner to develop a national overview of the extent and nature of problems with states’ oversight of TANF contractors. HHS officials said that HHS regional offices review state single audits and perform follow-up actions in cases where deficiencies were identified. These actions include sending a letter to the state acknowledging the reported problems and any plans the state may have submitted to correct the identified deficiency. HHS officials told us that their reviews of single audit reports focus on TANF audit findings that cited unallowable or questionable costs, and that HHS tracks such findings in its audit resolution database. The officials explained that their focus on monetary findings stems from the need to recover any unallowable costs from states and from HHS’s oversight responsibility under PRWORA to determine whether to impose penalties on states for violating statutory TANF requirements. If the deficiency identified by a single audit involves monetary findings, HHS takes actions to recover the costs within the same year, according to HHS officials. HHS officials told us that if the identified deficiency does not involve monetary findings but pertains to a programmatic issue such as subrecipient monitoring, HHS generally relies on the state to correct the reported problem and would initiate corrective action if the same problem were cited in the state’s single audit the following year. 
However, HHS does not use state single audit reports in a systematic manner to oversee TANF contracting, such as by analyzing patterns in the subrecipient monitoring deficiencies cited by these reports. HHS auditors and program officials also told us that inconsistent auditing of nongovernmental entities and state monitoring of these entities affects HHS’s use of single audits as a management tool. For example, HHS officials said that the same nongovernmental entity might be treated as a subrecipient by one state and as a vendor by another state, which could limit HHS’s ability to determine whether the entity has consistently complied with all applicable federal and state requirements. HHS officials told us that they plan to work, in conjunction with OMB, to explore the reasons for the inconsistencies and, where appropriate, to identify ways to better assure compliance with audit requirements applicable to nongovernmental entities. State and local governments rely on third-party mechanisms, including bid protests, judicial processes, and external audits, to help ensure compliance with procurement requirements. Procurement problems that resulted in the modification of contract award decisions surfaced in 2 of the 10 TANF procurements we reviewed. These problems affected 5 of the 58 TANF contracts awarded in the 10 procurements. Procurement issues were raised in 2 other procurements but did not result in the modification of contract award decisions. State and local governments have primary responsibility for overseeing procurement procedures, and they use several approaches to identify problems with procurement processes. In some cases, contracting agencies rely on aggrieved third parties to identify procurement problems through bid protests or lawsuits. In other cases, organizations outside the procurement process may review bid solicitation and contract award procedures. 
A bid protest occurs when an aggrieved party—a bidder who did not win a contract award—protests the decision of the local or state agency to award another bidder a contract. The process usually has several tiers, starting with a secondary review by the agency that denied the contract award. If the protest cannot be resolved internally, it can be brought to a higher level of authority. Contracting agency officials said that bidders frequently protest contract award decisions. However, state and local officials also said that many bid protests are based more on bidder disgruntlement with award decisions than on corroborated instances of noncompliance with procurement processes. Nevertheless, these protests do occasionally result in a resolution that favors the bid protester. We reviewed 10 separate procurements—specific instances in which government agencies had solicited bids and awarded one or more TANF contracts—in the local sites that we visited. Procurement problems identified in San Diego and Los Angeles resulted in contract award decisions being modified. In San Diego, the county employees union filed a lawsuit against the county maintaining that the county had failed to conduct a required cost analysis to determine whether contracting out services would be more or less efficient than providing them with county employees. The union won the case, and the county was required to perform a cost analysis and, upon determination that contracted services would be more cost-efficient than publicly provided services, resolicit bids from potential contractors. In Los Angeles County’s procurement of TANF services, one bidder filed a bid protest, claiming that the contracting agency had failed to properly evaluate its bid. As the final contract award authority, the County Board of Supervisors ordered the Director of Public Social Services to negotiate separate contracts for TANF services with the original awardee and the protesting bidder. 
While procurement issues were raised in the District of Columbia and New York City, their resolution did not result in contract award decisions being modified. In the District of Columbia, the city Corporation Counsel raised concerns regarding the lack of price competition and the lack of an evaluation factor for price. For example, the District’s contracting agency set fixed prices it would pay for TANF services and did not select contractors based on prices they offered. District officials said that they set fixed prices so that contractors would not submit proposals that would unrealistically underbid other contractors. In addition, the agency did not include price as a factor in its evaluation of proposals. As a result of these and other factors, the Corporation Counsel concluded that the District’s procurement of TANF services was defective and legally insufficient. However, the city, operating under the authority of the mayor’s office to make final contract award decisions, approved the contract awards and subsequently implemented regulations changing the way price is used in making contract award decisions. In New York City, the TANF contracting process was alleged to have violated certain requirements, but these charges were not confirmed upon subsequent legal review and a resulting appellate court decision. The New York City Comptroller reported that the contracting agency had not disclosed the weights assigned to evaluation criteria for assessing bids, had not provided contract information to all bidders, and had not assessed each bid equitably. With regard to the assessment of bids, the comptroller maintained that the city’s Human Resources Administration (HRA) had deemed as unqualified some proposals that clearly ranked among the most technically qualified and recommended contract awards for other proposals that were much less qualified. 
The comptroller also maintained that HRA had preliminary contact with one of the potential contractors, reporting that HRA had held discussions and shared financial and other information with the contractor before other organizations had been made aware of the same information. The comptroller concluded that these actions constituted violations of city procurement policies. Using its authority to make final contract award decisions, the mayor’s office subsequently overruled the comptroller’s objections and authorized the contracting agency to award contracts to the organizations it had selected. A later court appeal found in favor of the mayor’s office. State and local governments use a variety of approaches to help ensure that TANF-funded contractors expend federal funds properly and comply with TANF program requirements, such as on-site reviews and independent audits. Four of the six states that we visited identified deficiencies in their oversight of TANF contractors. Various factors have contributed to these deficiencies, such as the need in some states to create and support local workforce boards that contract for TANF services and oversee contractors. With regard to contractor performance, several contractors at two local sites were found to have had certain disallowed costs and were required to pay back the amounts of these costs. Moreover, in five of the eight locations that established performance levels for TANF contractors, most contractors, including both nonprofit and for-profit contractors, did not meet one or more of their performance levels. The state and local oversight approaches that we found originate from organizations external to the contracting agencies; they include independent audits and program evaluations. State and local government auditors, comptrollers, treasurers, or contracted certified public accounting firms audit contractors. 
Independent auditors conduct financial and programmatic audits of compliance with contract specifications. Similarly, evaluators from outside the contracting agency generally evaluate various aspects of program implementation, including financial, programmatic, and operational performance by contractors and other entities responsible for achieving program goals. State and local government auditors in several states have identified shortcomings in how contracting agencies oversee TANF contractors. As shown in table 3, auditors reported oversight deficiencies in four of the six states that we visited—Florida, New York, Texas, and Wisconsin. Audit reports cited uneven oversight coverage of TANF contractors over time or across local contracting agencies. We did not identify any audit reports that assessed the oversight of TANF contractors in California or the District of Columbia. Evolving TANF program structures, resource constraints, and data quality issues contributed to the deficiencies in contractor oversight. In Florida and Texas, for example, new TANF program structures entailed establishing local workforce boards throughout the state as the principal entity for TANF contracting and the subsequent oversight of TANF contractors. In both states, local workforce boards varied significantly in their capability to oversee TANF contractors and ensure compliance with contract requirements. According to New York State program officials, contracting agencies in the state continue to experience ongoing shortfalls in staff resources necessary to provide sufficient oversight of contractor performance. In addition, Wisconsin’s Legislative Audit Bureau reported in 2001 that the Private Industry Council had not provided the requisite oversight of five TANF-funded contractors in Milwaukee County. Moreover, state and local officials in other states frequently told us that data quality issues complicated efforts to monitor contractors effectively. 
For example, officials told us that case file information on job placements or job retention frequently differed from data in automated systems maintained by state or local contracting agencies. In New York City, such discrepancies required the Human Resources Administration to conduct time-consuming reviews and reconciliations of the data. Such inaccuracies forced delays in New York City’s payments to contractors, estimated by city officials to total several million dollars. States and localities have taken actions in response to some of the reported contract oversight deficiencies. For example, Florida state officials worked with local workforce boards to integrate the operations of welfare and employment offices to improve oversight of service providers, including nongovernmental contractors. In Texas, the Texas Workforce Commission issued new oversight policies and provided technical assistance and guidance to help local workforce boards oversee the performance of TANF contractors. For example, the commission’s prior monitoring had identified inappropriate cost allocations across programs and other financial management problems by local boards. The commission subsequently issued guidance on how boards and their contractors can meet cost allocation requirements. Commission officials told us that they use a team approach to monitor workforce boards and provide technical assistance. Auditors disallowed significant costs claimed by TANF contractors at two of the locations that we visited: Milwaukee County, Wisconsin, and Miami-Dade County, Florida. In the first location, Wisconsin’s State Legislative Audit Bureau reported that one for-profit contractor had disallowable and questionable costs totaling $415,247 (of which 33 percent were disallowable) and one nonprofit contractor had disallowable and questionable costs totaling $367,401 (of which 83 percent were disallowable). 
State auditors reported that a large proportion of the disallowable costs resulted from the contractors claiming reimbursement from Wisconsin for expenses incurred while attempting to obtain TANF contracts in other states. Auditors said that generally accepted contract restrictions prohibit the use of contract funds obtained in one state from being used to obtain new contracts in other states. State auditors also said they examined whether there had been any preconceived intent underlying these prohibited contract practices, which could have led to charges of fraud. However, the auditors found no evidence of preconceived intent and made no allegations of fraud. The for-profit contractor also had costs disallowed for expenditures that supported TANF-funded activities involving a popular entertainer who had formerly received welfare benefits. The contractor believed the activity would provide an innovative, motivational opportunity for TANF recipients. While the contractor claimed that Wisconsin officials had not provided sufficient guidance about allowable activities, state officials subsequently found the costs associated with the entertainment activities to be unallowable. Costs incurred by the for-profit contractor that state auditors cited as questionable included charges for a range of promotional advertising activities, restaurant and food purchases for which there was no documented business purpose, and flowers for which documentation was inadequate to justify a business purpose. Costs incurred by the nonprofit contractor that were cited as questionable included funds spent on advertising, restaurant meals and other food purchases that did not serve a program need, and local hotel charges for which there was inadequate documentation. At the time of our review, the contractors had repaid all unallowable and questionable costs. 
In 2001, Wisconsin enacted a state law that requires TANF contracts beginning on January 1, 2002, and ending on December 31, 2003, to contain a provision stating that contractors that submit unallowable expenses must pay the state a sanction equal to 50 percent of the total amount of unallowable expenses. Auditors also disallowed some program costs claimed by several contractors under contract with the Miami-Dade Workforce Development Board in Florida. The auditors found instances in which several contractors had billed the contracting agency for duplicate costs. On the basis of these findings, the auditors recommended that the contractors repay the board about $33,000 for the costs that exceeded their valid claims. At the time of our review, arrangements had been made for the contractors to repay the disallowed costs to the contracting agency. Many TANF contractors at the sites that we reviewed are not meeting their established performance levels in the areas of work participation, job placement, or job retention rates. Contracting agencies in eight of the nine localities we reviewed (all except the District of Columbia) have established expected levels of performance for their TANF contractors, and these performance levels vary by locality. At two of the eight sites—Milwaukee and Palm Beach—all contractors met all specified performance levels. However, at five of the other sites, most contractors did not meet one or more of their performance levels, indicating that state and local governments did not achieve all anticipated performance levels by contracting for TANF services. Tables 4, 5, and 6 indicate the overall extent to which contractors met performance levels and the actual performance achieved by individual contractors with respect to measures for work participation, job placement, and job retention rates in each location that had established these performance levels.
In contrast, at the two local sites that established performance measures for the percentage of job placements that pay wages of at least a specified level (Milwaukee and Palm Beach) or that offer health benefits (Milwaukee), all contractors met these measures. Payments to contractors at the eight localities that established performance levels are based either entirely or in part on whether contractors meet their specified performance levels. The measures most often used in the locations we visited mirror PRWORA's emphasis on helping TANF recipients obtain employment. The most common performance measures are work participation, job placement, and job retention rates. Work participation rates require that contractors engage a specified percentage of TANF recipients in work-related activities such as job search or community work experience. Job placement rates require that contractors place a specified percentage of recipients in jobs, and job retention rates require that contractors ensure that recipients retain employment (but not necessarily at the same job) for a specified period, typically ranging from 30 to 180 days. In addition, some localities have established performance levels that require contractors to place TANF recipients in certain types of jobs, such as jobs that pay wages of at least a specified level or offer health benefits. The localities varied in the types of measures and levels of performance they established. For example, the specified levels for job placements ranged from 22 percent of program participants in Palm Beach to 50 percent in Austin and Houston. Performance levels established for job retention also varied by jurisdiction. For example, the specified performance levels for contractors in Milwaukee County are that 75 percent of TANF recipients who entered employment retain employment for 30 days and 50 percent retain employment for 180 days.
In comparison, contractors in San Diego County face a 90-percent level for 30-day employment retention and a 60-percent level for 180-day retention. In most cases, nonprofit and for-profit contractors had similar performance with respect to meeting the performance levels established for them. Across the locations we reviewed, there are 14 instances in which a local site had data on the comparable performance of nonprofit and for-profit contractors. In 11 of these instances, the percentages of nonprofit and for-profit contractors that met the measures were similar. In each of the remaining three instances, for-profit contractors performed substantially better overall. In two locations we reviewed—Los Angeles County and San Diego County—county governments also provided TANF services. Overall, the relative performance levels of county-provided services and contracted services were mixed. For example, in San Diego County, the county performed better than one for-profit contractor and worse than another for-profit contractor in meeting performance levels for certain job retention rates. In Los Angeles County, one of two for-profit contractors performed better than the county in placing TANF recipients in jobs while one of the two county providers achieved higher placement rates than the other for-profit contractor. At the remaining site, the District of Columbia, contracting officials were unable to provide information on how well TANF contractors met expected levels of performance. While the District has not established contractually specified performance levels for TANF contractors, these contractors do have performance-based contracts. For example, contractors receive a specified payment for each TANF recipient who is enrolled in work-related activities, placed in a job, or retained in employment for a certain period of time. However, District officials were unable to provide us with an assessment of TANF contractors' performance in serving TANF recipients.
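The retention thresholds described above can be read as simple pass/fail rules. The following sketch is purely illustrative: the threshold values come from the report (Milwaukee County, 75 percent at 30 days and 50 percent at 180 days; San Diego County, 90 percent and 60 percent), but the contractor results and function names are invented for this example.

```python
# Hypothetical check of contractor job-retention results against the
# locally specified performance levels described in the report.
RETENTION_LEVELS = {
    "Milwaukee County": {30: 0.75, 180: 0.50},
    "San Diego County": {30: 0.90, 180: 0.60},
}

def meets_retention_levels(locality: str, results: dict) -> dict:
    """For each retention period (in days), report whether the contractor's
    actual retention rate meets the locality's specified level."""
    levels = RETENTION_LEVELS[locality]
    return {days: results[days] >= level for days, level in levels.items()}

# A hypothetical contractor with 80% 30-day and 55% 180-day retention
# meets both Milwaukee levels but neither San Diego level.
actual = {30: 0.80, 180: 0.55}
print(meets_retention_levels("Milwaukee County", actual))  # {30: True, 180: True}
print(meets_retention_levels("San Diego County", actual))  # {30: False, 180: False}
```

The same rates can thus pass in one jurisdiction and fail in another, which is one reason cross-site performance comparisons in this area require care.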
The contracting out of TANF-funded services is an important area for several reasons. First, the magnitude of TANF contracting is substantial, involving at least $1.5 billion in federal and state funds in 2001, which represents at a minimum 13 percent of the total amount states expended for TANF programs (excluding expenditures for cash assistance). In 2001, about a quarter of the states contracted out 20 percent or more of the amounts they had expended for TANF programs in fiscal year 2000, ranging up to 74 percent. Second, PRWORA expanded the scope of services that could be contracted out to nongovernmental entities, such as determining eligibility for TANF. Third, some states are using new entities—local workforce boards—that procure TANF services and are responsible for overseeing TANF contractors. Problems with the performance of TANF contractors have been identified in some cases, but there is no clear pattern of a greater incidence of these problems with nonprofit versus for-profit contractors. At two of the nine localities we reviewed, auditors had disallowed certain costs by several contractors, and arrangements had been made for the contractors to repay unallowable costs. We found more widespread instances of contractors at the local sites not meeting their contractually established performance levels in areas such as work participation, job placement, and job retention rates. Contracting agencies at the local sites had established financial incentives for contractors by basing payments to contractors in whole or part on their performance in such areas. While meeting the service needs of TANF recipients can present many challenges for contractors, this has become even more important now that these recipients face time limits on the receipt of TANF. Effective oversight is critical to help ensure contractor accountability for the use of public funds, and our review identified problems in some cases with state and local oversight of TANF contractors. 
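As a back-of-the-envelope reading of the contracting-magnitude figures above (our arithmetic, assuming the 13 percent floor is computed against total state TANF expenditures excluding cash assistance), the implied expenditure base is roughly:

```latex
\frac{\$1.5\ \text{billion (contracted)}}{0.13} \approx \$11.5\ \text{billion total non-cash-assistance TANF expenditures}
```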
At the national level, our review of state single audit reports found internal control weaknesses for over a quarter of the states that potentially affected the states' ability to effectively monitor TANF contractors. The extent to which state single audits have identified problems with subrecipient monitoring involving TANF funds is generally equal to or greater than that for several other social service programs in which contracting occurs with nongovernmental organizations. Moreover, in four of the six states we visited, independent audits have identified deficiencies in state or local oversight of TANF contractors. However, HHS officials told us that they do not know the extent and nature of problems pertaining to the oversight of TANF contractors that state single audit reports have cited because HHS does not analyze these reports in such a comprehensive manner. This is due, in part, to HHS's focus on those problems identified by single audit reports that involve unallowable or questionable costs. While such problems certainly warrant high priority, the result is that there is not adequate assurance that identified deficiencies pertaining to the monitoring of TANF contractors are being corrected in a strategic manner. Greater use of single audits as a program management tool by HHS would provide greater assurance that TANF contractors are being held accountable for the use of public funds. For example, HHS could use state audit reports more systematically by obtaining additional information about the extent to which nongovernmental TANF contractors are involved in the subrecipient monitoring deficiencies cited in these reports, identifying the most commonly reported types of deficiencies, and tracking how often the same deficiencies are repeatedly cited for individual states.
To facilitate improved oversight of TANF contractors by all levels of government, we recommend that the Secretary of HHS direct the Assistant Secretary for Children and Families to use state single audit reports in a more systematic manner to identify the extent and nature of problems related to state oversight of nongovernmental TANF contractors and determine what additional actions may be appropriate to help prevent and correct such problems. HHS provided written comments on a draft of this report, and these are reprinted in appendix IV. HHS said that the report addresses an important topic and provides useful information in describing the reasons that have prompted the rise in contracting, as well as the associated issues and challenges. However, HHS did not agree with our recommendation to the Assistant Secretary for Children and Families to use state single audit reports in a more systematic manner with regard to problems related to state oversight of nongovernmental TANF contractors. After evaluating HHS's comments, we continue to believe that our recommendation is warranted. HHS questioned whether our recommendation is consistent with the provisions of the Single Audit Act and whether the recommendation is necessary, in light of the current responsibilities that federal agencies and other units of government have for using single audit reports. HHS explained that OMB Circular A-133 requires federal agencies to take actions such as ensuring that audits of recipients of federal funds are completed and reports are received in a timely manner, issuing management decisions on audit findings within 6 months after receipt of the audit report, and ensuring that recipients take appropriate and timely corrective actions. HHS said that it performs such actions. HHS also said that Circular A-133 assigns these same responsibilities to other entities (e.g., state and local governments) for oversight of their subrecipients of federal funds.
In addition, HHS said that there is some question about whether it is appropriate under the TANF statute, with its clear emphasis on state flexibility, for HHS to assume substantial new responsibilities that could interfere with states' methods of monitoring their subrecipients or contractors. We believe that our recommendation is consistent with the Single Audit Act, Circular A-133, and the TANF statute. Moreover, we view our recommendation as contributing to the stated objective of the Single Audit Act of ensuring that federal agencies use single audit reports to the maximum extent practicable in overseeing federal programs. In the TANF block grant environment, the rise in contracting brings with it new challenges at all levels of government regarding accountability for use of federal funds by nongovernmental entities. While states have a great deal of flexibility in using TANF funds, HHS continues to have a fiduciary responsibility to ensure that states properly account for their use of federal funds and maintain adequate internal controls over the use of funds by their subrecipients. HHS follow-up on individual state single audit reports does not preclude the agency from analyzing these reports in a more systematic manner to meet its oversight responsibilities, as we recommend. Furthermore, our recommendation does not call for HHS to usurp any oversight responsibilities from the states for overseeing their subrecipients. Finally, HHS said that it did not see what value our recommendation would add to the TANF program. HHS said that because its staffing level for administering TANF has been greatly reduced, the value and cost-benefit of our recommendation must be considered before adding or redirecting staff to gain a comprehensive perspective on the extent and nature of problems with the monitoring of subrecipients and contractors.
In response, we believe that implementing our recommendation could strengthen HHS’s oversight of this important area and facilitate improved oversight of TANF contractors by states. For example, more systematic analysis of state single audit reports by HHS could help identify national patterns in the problems with states’ monitoring of their TANF subrecipients cited by these reports. This information would be valuable to states working to improve their oversight of these subrecipients. Moreover, users of single audit reports can now analyze information more quickly than ever before by using the Internet to access a single audit database established by the Bureau of the Census. In addition, more systematic analysis by HHS of the subrecipient monitoring problems reported by state single audits could also provide useful information on the extent to which these problems involve nongovernmental contractors and are recurring in the same states. Such information could help HHS ascertain whether or not this is a growing problem area that may warrant closer scrutiny. By disseminating the results of its analysis of single audit reports to states through existing venues such as audit forums and conferences with state TANF officials, HHS could share information with its TANF partners to facilitate better oversight of contractor-provided services. In addition, we believe that our recommendation represents a cost- effective approach to improving oversight of TANF contractors because the recommendation involves making fuller use of information that is already collected. The national analysis of state single audit reports that we performed for this report took less than a month and involved using the single audit database to identify reports that cited problems with TANF subrecipient monitoring, reviewing these reports to extract the specific problems, and identifying some of the most commonly cited problems. 
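The analysis described above (identifying reports citing TANF subrecipient monitoring problems, extracting the specific problems, and tallying the most common ones) can be sketched in a few lines. This is a purely illustrative example: the finding records are invented, and the field names do not represent the actual structure of the Census Bureau's single audit database.

```python
# Hypothetical sketch of tallying subrecipient-monitoring problems cited
# in state single audit reports, to surface common problem types and
# states where the same deficiency recurs across audit years.
from collections import Counter

# Invented records standing in for findings extracted from audit reports.
findings = [
    {"state": "A", "year": 1999, "problem": "subrecipients not notified of federal award source"},
    {"state": "A", "year": 2000, "problem": "subrecipients not notified of federal award source"},
    {"state": "B", "year": 2000, "problem": "subrecipient audit reports not collected"},
    {"state": "C", "year": 1999, "problem": "no procedures to monitor subrecipient activities"},
]

# Most commonly cited problem types across all reports.
by_problem = Counter(f["problem"] for f in findings)

# State/problem pairs cited in more than one audit year (recurring deficiencies).
recurring = Counter((f["state"], f["problem"]) for f in findings)
repeat_states = [key for key, count in recurring.items() if count > 1]

print(by_problem.most_common(1))
print(repeat_states)
```

Even this crude tabulation distinguishes a one-time finding from a deficiency cited year after year in the same state, which is the kind of pattern the recommendation is intended to surface.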
It may be possible for HHS regional office staff to perform some of this type of analysis, as well as to obtain any needed additional information about specific problems, in the course of their current reviews of state single audit reports for their regions. Such an approach could reduce the amount of analysis by HHS headquarters staff needed to obtain a comprehensive perspective on the extent and nature of problems related to state oversight of TANF contractors. We are sending copies of this report to the Secretary of HHS and the department's Assistant Secretary for Children and Families, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-7215 if you have any questions about this report. Other GAO contacts and staff acknowledgments are listed in appendix V.

To identify the extent and nature of Temporary Assistance for Needy Families (TANF) contracting, we conducted a national survey of all 50 states, the District of Columbia, and the 10 counties with the largest federal TANF-funding allocations in each of the 13 states that administer their TANF programs locally. Contracting for TANF-funded services occurs at different levels of government—the state, the local, or both—and data on TANF-funded contracts are maintained at various levels of government. We developed three survey instruments to accommodate these differences. The first survey instrument, which requested state data only, was sent to the 13 states that contract at both levels of government or locally only, but maintain data separately. For these 13 states, a second survey instrument, which requested data on contracts entered into at the local level, was sent to 10 counties that receive the largest TANF allocations in each of these 13 states to determine how much contracting takes place in their larger counties.
The third survey instrument, which requested data on state-level and local-level contracts, was sent to the remaining 37 states and the District of Columbia (see app. II for this survey instrument). All three survey instruments were pretested with appropriate respondents in six states. In addition to obtaining data through our national survey, we also obtained data from HHS on federal TANF and state maintenance-of-effort funds for fiscal year 2000. We did not independently verify these data. The response rate for the survey instrument sent to the counties in the 13 states was 78 percent. The response rate for the remaining survey instruments sent to state governments was 100 percent. Since our survey did not cover all counties in the 13 states that contract for TANF services locally, the total number of TANF-funded contracts and their dollar value may be understated. In addition, eight states that maintain data on local-level contracting did not provide us with these data. We subsequently contacted survey respondents who had indicated that the determination of eligibility had been contracted out to confirm that this was for the TANF program and determine whether contractors determined eligibility for cash assistance or other TANF-funded services. To obtain information on approaches used by the federal government to oversee TANF contracting, we met with officials in HHS's Administration for Children and Families in Washington, D.C., and conducted telephone interviews with staff in HHS regional offices in Atlanta, Chicago, Dallas, New York, Philadelphia, and San Francisco. We also interviewed the director of HHS's National External Audit Review Center to learn how the agency uses single audit reports to oversee procurement processes and contractor monitoring. In addition, we analyzed the single audit database and reviewed state single audit reports.
To obtain information on approaches used by state and local governments to ensure compliance with bid solicitation and contract award requirements and to oversee contractor performance, we conducted site visits to California, the District of Columbia, Florida, New York, Texas, and Wisconsin. We met with state TANF officials in these states. In addition, we met with procurement officers, contract managers, auditors, and private contractors in the following nine locations: Austin and Houston, Texas; the District of Columbia; Los Angeles County and San Diego County, California; Miami-Dade and Palm Beach, Florida; Milwaukee, Wisconsin; and New York City, New York. We elected to visit these localities because they all serve a large portion of the TANF population and have at least one large contractor providing TANF-funded services. To obtain additional perspectives on TANF contracting, we interviewed representatives from government associations (American Public Human Services Association, Council of State Governments, National Conference of State Legislatures, and the National Association of Counties) and unions (American Federation of State, County, and Municipal Employees at the national office and in Milwaukee County, Wisconsin). We also reviewed various audit reports for the state governments, local governments, and nonprofit contractors that we interviewed in the nine locations to determine whether auditors found instances of noncompliance with bid solicitation and contract award requirements or contract monitoring. In addition, we selected 7 TANF-funded contracts with nonprofit organizations and 10 TANF-funded contracts with for-profit organizations to obtain information on their contract structure, services provided, and other relevant information.
Appendix III: Problems Cited with TANF Subrecipient Monitoring by State Single Audits, 1999 and 2000

2000: The state lacked procedures to ensure that subrecipient nonprofit organizations used TANF funds only for allowable purposes as required by TANF regulations. The state failed to inform nonprofit subrecipients of the source and amount of TANF funds they received. As a result, the state cannot provide assurance that nonprofit organizations are complying with federal requirements, including TANF requirements for allowable activities, allowable costs, and suspension and debarment of contractors. In some cases, the state did not provide subrecipients with information about the sources of federal funds they received. The lack of proper notification to subrecipients of federal award information increases the risk of the improper use and administration of federal funds. In some cases, the state did not notify subrecipients that the funding they received originated from TANF. The lack of proper notification to subrecipients of federal award information increases the risk of the improper use and administration of federal funds, including limited assurance that proper audits are conducted of those funds. The single audit report references a state inspector general report that identified inadequate state oversight of local workforce coalitions that administer TANF funds and inadequate procurement and cash management practices by the local coalitions. The state has not ensured that significant deficiencies related to electronic benefit transfer cards are corrected on a timely basis. The state did not issue monitoring reports to counties within a consistent timeframe. The 1999 finding on not notifying subrecipients of the federal funding sources from which they received funds was subsequently reported in 2000, including the associated risks reported in the prior year.
The state did not provide information to some subrecipients on the sources of federal funds it distributed to them. The state did not provide this information because it initially considered the service providers to be vendors rather than subrecipients, and as such, the state did not believe it was necessary to notify the service providers of the federal award information. Failure to inform subrecipients of the federal award information could result in subrecipients improperly reporting expenditures of federal awards, expending federal funds for unallowable purposes, or not receiving a single audit in accordance with federal requirements. The state did not ensure that all nongovernmental contractors submitted their required audit reports or requested an extension. As a result, the state cannot be assured that subrecipients expended federal awards for their intended purpose and complied with federal requirements.

1999: As a result, the state cannot be assured that subrecipients spent grant monies for their intended purpose and complied with federal requirements. The state continues to lack an adequate monitoring system to ensure that federal subrecipients and social services contractors are audited in accordance with federal, state, and department regulations. For the seventh consecutive year, the state does not have an adequate monitoring system to ensure that federal subrecipients and social services contractors are audited in accordance with federal, state, and department regulations. In addition, the audit identified $267,749 in questionable costs for TANF. For 35 percent of the contracts audited, the contract did not include required federal award information and information on applicable compliance requirements. The state cannot determine if all required audit reports are received and lacks review procedures to ensure that the information entered into the audit tracking system is accurate and complete.
State policy and procedures relating to audit follow-up for subrecipient audits need to be revised to include current official policies. The state is not able to ensure the completeness or accuracy of its system for tracking the total amount of funds provided to subrecipients. The state's internal control mechanisms did not provide for the proper identification, monitoring, and reporting of payments to all subrecipients. The state's contract management database excludes several entities that received payments of federal funds. As a result, the state could not be assured that all entities receiving funds were identified as subrecipients, when appropriate, and monitored. In addition, self-certification of entities as subrecipients or vendors increases the risk that the state is not properly identifying and monitoring subrecipients. While OMB Circular A-133 requires states to monitor subrecipients to ensure compliance with laws, regulations, and provisions of contracts, the state agency did not have policies and procedures in place to monitor the activities of subrecipients. The state did not verify the amount of federal financial assistance expended by subrecipients, which should be done to determine which subrecipients require an audit. The state had not implemented an effective procedure for documenting the fiscal year-end for each new subrecipient. Two of 15 subrecipients tested did not submit their 1998 audit reports in a timely manner, and the state did not perform follow-up procedures in a timely manner. For 5 of 15 subrecipients tested, the state's review of the audit reports was performed 6 months or more after the state received the reports. Without adequate control over the submission of audit reports and prompt follow-up of audit findings, noncompliance with federal regulations by subrecipients could occur and not be detected.
1999: Local offices of the state agency reported that they could not locate over 6 percent of the case files requested for detailed review. Without case files, adequate documentation is not available to verify the eligibility of clients and the appropriateness of benefits paid. The state did not properly monitor the federal funds expended by the Essex County Welfare Board for the Public Assistance Program. While an independent auditor issued a single audit report for Essex County, the audit excluded the Public Assistance Program because of the lack of internal controls related to some components of the program. Payments to public assistance recipients are made through an electronic benefit transfer (EBT) system administered by a contractor, but EBT account activity has not been reconciled to the state's automated system for the public assistance program. Eleven of the 58 local districts did not submit their single audit reports within the required 13-month period. The state did not maintain sufficient documentation to adequately monitor advance payments to, and expenditures of, contractors providing child care services. The state's procedures for reviewing subrecipient audit reports were inadequate. Errors and omissions in reports on subrecipient expenditures went undetected. The state did not conduct expenditure reviews to ensure that amounts disclosed in subrecipient audit reports agreed with expenditure records maintained by the state. As reported in the prior audit, the state did not perform sufficient monitoring procedures to provide reasonable assurance that subrecipients administered federal awards in compliance with federal requirements. The reported problem remains unresolved, as the state did not provide reasonable assurance that services and assistance were provided to eligible families. Eleven of the 58 local districts did not submit their single audit reports within the required 13-month period.
The state does not perform an adequate desk review of local districts' single audit reports to ensure that submitted reports were performed in accordance with federal requirements. The state did not always perform or document a review of the counties' eligibility determination process to provide reasonable assurance that services and assistance were provided to eligible families. The state did not always monitor to ensure that sanctions were imposed on TANF recipients who did not cooperate with the child support enforcement office. The state did not perform monitoring procedures to provide reasonable assurance that the counties used Social Services Block Grant funds for only eligible individuals and allowable service activities.

1999: The state's fiscal and program monitoring of local workforce boards does not provide reasonable assurance that TANF funds are being spent appropriately. Current fiscal monitoring procedures are inconsistent and lack program-specific attributes. For example, state fiscal monitors generally do not compare a local workforce board's funding allocation for specific programs to its subcontractor's budget to ensure that the board is passing on the funds as required. Federal and state compliance is not ensured by the limited scope of reviews. The state conducted limited program monitoring of only 4 of 18 boards that had TANF contracts in place. No problems were cited. While the 2000 state single audit did not report monitoring problems, another state audit issued in March 2001 reported that local workforce boards still needed to make significant improvements in their contract monitoring. The audit reported that improvements are needed to ensure proper accounting for program funds, management of contracts with service providers, and achievement of data integrity.

The following individuals made important contributions to this report: Barbara Alsip, Elizabeth Caplick, Mary Ellen Chervenic, Joel Grossman, Adam M.
Roye, Susan Pachikara, Daniel Schwimer, and Suzanne Sterling.

Welfare Reform: Interim Report on Potential Ways to Strengthen Federal Oversight of State and Local Contracting. GAO-02-245. Washington, D.C.: 2002.
Welfare Reform: More Coordinated Federal Effort Could Help States and Localities Move TANF Recipients With Impairments Toward Employment. GAO-02-37. Washington, D.C.: 2001.
Workforce Investment Act: Better Guidance Needed to Address Concerns Over New Requirements. GAO-02-72. Washington, D.C.: 2001.
Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce. GAO/HEHS-01-368. Washington, D.C.: 2001.
Welfare Reform: Progress in Meeting Work-Focused TANF Goals. GAO-01-522T. Washington, D.C.: 2001.
Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort. GAO/HEHS-00-48. Washington, D.C.: 2000.
Social Service Privatization: Ethics and Accountability Challenges in State Contracting. GAO/HEHS-99-41. Washington, D.C.: 1999.
Social Service Privatization: Expansion Poses Challenges in Ensuring Accountability for Program Results. GAO/HEHS-98-6. Washington, D.C.: 1997.
Managing for Results: Analytic Challenges in Measuring Performance. GAO/HEHS/GGD-97-138. Washington, D.C.: 1997.
Privatization: Lessons Learned by State and Local Governments. GAO/GGD-97-48. Washington, D.C.: 1997.
Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices. GAO/HEHS-97-4. Washington, D.C.: 1996.

The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996 changed the nation's cash assistance program for needy families with children. The former program, Aid to Families with Dependent Children (AFDC), was replaced with the Temporary Assistance for Needy Families (TANF) block grant, which provides states with $16.5 billion each year through 2002 to serve this population.
TANF's goals include ending the dependence of needy families on government benefits by promoting job preparation, work, and marriage; preventing and reducing the incidence of nonmarital pregnancies; and encouraging two-parent families. PRWORA expanded the scope of services that could potentially be contracted out, such as determining eligibility for TANF, which had traditionally been done by government employees. Moreover, with the large drop in TANF caseloads nationally, a greater share of federal TANF block grant funds and state funds is now devoted to various support services that are typically contracted out. Although PRWORA expanded the flexibility of states to design and administer TANF programs, it also limited the ability of the Department of Health and Human Services (HHS) to regulate states' TANF programs. Contracting with nongovernmental entities to provide TANF-funded services occurs in almost every state and exceeds $1.5 billion in federal TANF and state maintenance-of-effort funds for 2001. HHS relies primarily on state single audit reports to oversee TANF contracting by states and localities. HHS's regional offices follow up on the TANF deficiencies identified by these reports, and HHS focuses on reported deficiencies that involve unallowable or questionable costs. However, HHS does not know the extent and nature of problems pertaining to the oversight of nongovernmental TANF contractors that have been cited by state single audits because it does not analyze the reports in a comprehensive manner. State and local governments rely on third parties to help ensure compliance with bid solicitation and contract award procedures, including bid protests, judicial processes, and external audits. State and local government agencies use various approaches to oversee TANF contractors, and problems have been identified with both contract oversight and contractor performance.
State and local governments have primary responsibility for overseeing TANF contractors, and they rely on various approaches, including reviewing contractor-provided information and performing on-site reviews. However, auditors in four of the six states identified deficiencies in state or local oversight of TANF contractors, such as uneven oversight by local contracting agencies.
Our review found that VHA’s internal controls were not designed to provide reasonable assurance that improper purchase card and convenience check purchases would not occur or would be detected in the normal course of business. We found that (1) VHA lacked adequate segregation of duties between those purchasing and receiving goods; (2) payments for purchase card and convenience check transactions often did not have key supporting documents; (3) timeliness standards for recording, reconciling, and reviewing transactions were not met; and (4) cardholders did not consistently take advantage of vendor-offered purchase discounts. Generally, we found that internal controls were not operating as intended because cardholders and approving officials were not following operating guidance governing the program, and in the case of documentation and vendor-offered discounts, they lacked guidance. We also noted that monitoring activities could be strengthened, for example, as in instances where (1) accounts remained active long after the cardholder had left service at VA, (2) credit limits on accounts were significantly higher than actual usage, and (3) human capital resources were insufficient to enable adequate monitoring of the purchase card program. Our Standards for Internal Control in the Federal Government requires that (1) key duties and responsibilities be divided or segregated among different people to reduce the risk of error or fraud; (2) all transactions and other significant events be clearly documented, readily available for examination, and authorized and executed only by persons acting within the scope of their authority; (3) transactions be promptly recorded to maintain their relevance and value to management in controlling operations and decisions; and (4) internal control monitoring be performed to assess the quality of performance over time and ensure that audit findings are promptly resolved.
Similarly, internal control activities help ensure that management’s directives are carried out. They should be effective and efficient in accomplishing the agency’s objectives and should occur at all levels and functions of the entity. We found that VHA lacked adequate segregation of duties regarding independent receiving of goods and separation of responsibilities within the purchasing process. Independent receiving, which means someone other than the cardholder receives the goods or services, provides additional assurance that items are not acquired for personal use and that they come into the possession of the government. This reduces the risk of error or fraud. From our purchase card internal control testing, we estimate that $75 million in transactions did not have evidence that independent receiving of goods had occurred. In addition, our data mining of the purchase card and convenience check activity identified 15 agency or organization program coordinators (A/OPC) who were also cardholders and collectively made 9,411 purchases totaling $5.5 million during fiscal year 2002. Because A/OPCs are responsible for monitoring cardholders’ and approving officials’ activities for indications of fraud, waste, and abuse, these A/OPCs were essentially monitoring their own activities. We also found instances where purchase card and convenience check transactions lacked key supporting documentation. This would include internal written authorization for convenience check disbursements and vendor invoices that support the description, quantity, and price of what was purchased. VHA’s purchase card guidance does not address the types of documentation that cardholders should maintain to support their purchases. It only addresses documentation requirements in its audit guide, which is an appendix to the purchase card guidance that provides instructions to internal reviewers for performing their monitoring functions. 
Furthermore, we noted that VA’s operating guidance for convenience checks has no requirement that vendor documentation be provided before checks are issued. The guidance only provides that sufficient documentation, such as a VA-created purchase order, must be evident before checks are issued. The invoice is a key document in purchase card internal control activities. Without an invoice, independent evidence of the description and quantity of what was purchased and the price charged is not available. In addition, the invoice is the basic document that should be forwarded to the approving official or supervisor so that he or she can perform an adequate review of the cardholder’s purchases. Of the 283 purchase card sample transactions we tested, 74 transactions totaling $2.1 million lacked an invoice, credit card slip, or other adequate vendor documentation to support the purchase. Based on these results, we estimate that $312.8 million of the fiscal year 2002 purchase card transactions lacked key supporting documentation. For the convenience check sample, we found 35 of 255 transactions totaling $43,669 lacked the same key documentation. Based on these results, we estimate that $3.8 million of the fiscal year 2002 convenience check transactions lacked key supporting documentation. We also noted that VA’s operating guidance over convenience checks does not provide detailed procedures regarding appropriate written documentation or authorization that must be forwarded to the authorizing employee before funds are disbursed to a third party. VA’s operating guidance only provides that the required documentation be the same as that for paying with cash, such as a purchase order. The guidance makes no mention of independent vendor documentation and that this type of documentation be required prior to issuing checks to vendors. In addition, VA’s guidance only requires that the authorizing employees issuing convenience checks retain copies for 1 year. 
This documentation requirement is inconsistent with the Federal Acquisition Regulation (FAR) and VHA’s Records Control Schedule 10-1, dated February 14, 2002, which requires that such records be retained for 6 years and 3 months after final payment for procurements exceeding the simplified acquisition threshold and for 3 years after final payment for procurements below the simplified acquisition threshold. We found that of 255 convenience check transactions, 17, totaling $8,890, lacked written authorization needed for issuance. Based on these results, we estimate that $1.7 million of the fiscal year 2002 convenience check transactions lacked written authorization. In addition, we noted that 19 of the 255 convenience check transactions lacked a copy of the check or carbon copy. Based on these results, we estimate that $2.3 million of the fiscal year 2002 convenience check transactions lacked this supporting documentation. Although VA only requires copies of convenience checks to be retained for 1 year, retaining the copies and the supporting documentation for the longer retention period mandated by the FAR and incorporated in VHA’s Records Control Schedule 10-1 would facilitate subsequent internal and external reviews in assessing whether a transaction was proper and in compliance with acquisition policies and procedures. At the time of our work, VHA had also established several timeliness standards for cardholders and approving officials to ensure prompt recording, reconciliation, and review of purchases. Specifically, within 1 workday of making a purchase, cardholders are required to input or record the purchase information in VA’s purchase card order system. Within 10 calendar days of electronically receiving the transaction charge information from Citibank, the cardholder must reconcile 75 percent of these Citibank charges to the purchase information in the system. Within 17 calendar days, 95 percent of the Citibank charges must be reconciled.
As evidence of reconciliation, the purchase card order system assigns the date the cardholder reconciled the purchase in the system. For testing the timeliness of cardholder reconciliations, we used the 17-calendar-day criterion. In addition, VHA requires that within 14 calendar days of electronically receiving the cardholder’s reconciled purchases, the approving official, through an electronic signature, certify in the purchase card order system that all procurements are legal and proper and have been received. Our review found untimely recording, reconciliation, and approving official review. Table 1 summarizes the statistical results of VHA’s timeliness standards that cardholders and approving officials must meet to ensure prompt recording, reconciliation, and review of purchases. Our work shows that the internal controls were not operating as intended to ensure prompt recording of transactions and events. The following examples illustrate the extent of untimely recording, reconciliation, and review of the purchase card transactions. For instance, one cardholder made a purchase on July 9, 2002, of $994, but did not record the information in VA’s purchase card order system until August 29, 2002, or 51 days later and 50 days after VHA policy required that the information be entered. Another cardholder made a purchase of $100 on August 24, 2002. Citibank sent charge information for this purchase to VHA on October 8, 2002. According to VHA policy, the cardholder should have reconciled this charge within 17 days. Instead, we found that the account was not reconciled until September 8, 2003, or 335 days after receiving the charge information. In another instance, a cardholder reconciled a purchase card transaction totaling more than $3,000, which should have been reviewed and certified by an approving official within 14 calendar days. We found no evidence that the approving official reviewed this cardholder’s reconciliation until 227 days later.
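The timeliness standards described above reduce to simple date arithmetic. The following Python sketch applies them to the two examples just described; the function name and date fields are illustrative assumptions, not part of VA's actual systems:

```python
from datetime import date

def days_late(actual: date, start: date, allowed_days: int) -> int:
    """Days beyond the allowed window, or 0 if the standard was met."""
    return max((actual - start).days - allowed_days, 0)

# A $994 purchase made July 9, 2002, was not recorded until August 29, 2002.
# VHA standard: record within 1 workday (approximated here as 1 calendar day).
recording_delay = days_late(date(2002, 8, 29), date(2002, 7, 9), 1)

# A charge received from Citibank on October 8, 2002, was not reconciled
# until September 8, 2003, against the 17-calendar-day standard.
reconcile_delay = days_late(date(2003, 9, 8), date(2002, 10, 8), 17)

print(recording_delay)  # 50 days past the recording standard
print(reconcile_delay)  # 318 days past the reconciliation standard
```

The same comparison, run against each transaction in the purchase card order system, is what a routine timeliness review would amount to.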
It is critical that cardholders and approving officials promptly record, reconcile, and review purchase card transactions so that erroneous charges can be quickly disputed with the vendor and any fraudulent, improper, or wasteful purchases can be quickly detected and acted upon. We also found instances where cardholders did not consistently take advantage of vendor-offered purchase discounts. Our review identified 69 invoices containing vendor-offered discounts totaling $15,785 that were not taken at the time of purchase or subsequently credited for the discount amount. When purchases are made, vendors may offer purchase discounts if buyers make early payments of their invoices. Typically, the vendor specifies a period during which the discount is offered, but expects the full invoice amount for payments made after that period. When cardholders use the purchase card, payment to vendors, via Citibank, generally occurs at the time of purchase. In turn, Citibank bills VA for the purchases through a daily electronic file. Therefore, it is critical that cardholders ask about any vendor-offered discounts at the time of purchase and make efforts to obtain a credit upon receipt and review of the invoice. Our detailed testing indicated that VHA did not always take advantage of vendor-offered discounts and that it lacked purchase card guidance to ensure cardholders ask about vendor payment terms to determine whether discounts were being offered. For example, one vendor offered VHA a discount of 2.9 percent, or $896, for an invoice amount of $30,888 if it was paid within 15 days. Citibank, on behalf of VA, made payment to the vendor within the 15-day time frame, yet the vendor charged the cardholder’s account for the full invoice amount. We found no evidence that the cardholder attempted to obtain a credit for the available discount offered. By contrast, in another example we found that a cardholder had taken advantage of a vendor-offered discount.
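The discount arithmetic in the example above is straightforward; a brief sketch, using the figures reported (variable names are illustrative):

```python
# Vendor-offered early-payment discount from the example above:
# 2.9 percent off a $30,888 invoice if paid within 15 days.
invoice_amount = 30_888
discount_rate = 0.029

discount = round(invoice_amount * discount_rate)  # about $896
amount_due = invoice_amount - discount            # $29,992 if paid early

print(discount, amount_due)
```

Because Citibank paid within the discount window but the vendor charged the full amount, the $896 shown here is the credit the cardholder should have pursued.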
A factor that may contribute to cardholder inconsistencies in taking advantage of vendor discounts is the lack of established policies and procedures that address this issue. We found that VHA’s purchase card guidance did not include procedures to ensure that cardholders take advantage of available vendor discounts before making payments or require that approving officials identify instances when cardholders did not take advantage of vendor discounts in order to determine the frequency of these occurrences. Without such guidance, VHA will not be able to determine the frequency of these occurrences and actual dollars lost by the government. While VHA’s purchase card guidance includes prescribed monitoring procedures to help ensure purchases are legal and proper, we found no monitoring procedures to identify active accounts of cardholders who had separated from VA nor any provisions to assess cardholder credit limits. We also noted insufficient human capital resources at the A/OPC level for executing the prescribed monitoring activities. For instance, we identified 18 instances in which purchase card accounts remained active after the cardholders left VA and all related outstanding purchase orders had been reconciled. Of the 18 purchase card accounts that remained active after the cardholders had left VA, we determined that 14 accounts remained active 6 or more days after the cardholders’ outstanding purchase orders had been reconciled, which we deemed too long. The remaining 4 purchase cards had been promptly canceled after all outstanding purchase orders were reconciled. Of the 14 accounts that were not canceled promptly, 11 accounts remained open between 6 and 150 days and 3 accounts remained open between 151 and 339 days. For example, one cardholder separated from VA on April 3, 2002, with five outstanding purchase card orders made prior to separation.
The last purchase transaction was reconciled on May 21, 2002, but the account was not canceled until April 25, 2003, or 339 days after reconciliation. Requiring monitoring procedures to identify active accounts of departed cardholders and to ensure prompt closure once outstanding purchase orders have been reconciled would assist in reducing the risk of fraud, waste, and abuse that could occur when accounts remain open beyond the necessary time frame. In addition to accounts left open, our analysis of purchases VHA cardholders made in 2002 showed that cumulatively they bought $112 million of goods and services per month on average, but they had credit limits of $1.2 billion, or about 11 times their actual spending. According to VHA’s purchase card guidance, the approving official, in conjunction with the A/OPC, billing officer, and head of contracting activity, recommends cardholder single purchase and monthly credit limits. However, we found no guidance on what factors to consider when recommending the dollar amounts to be assigned to each cardholder. Further, we found no monitoring procedures that require the A/OPC or approving official to determine periodically whether cardholder limits should be changed based on existing and expected future use. Periodic monitoring and analysis of cardholders’ actual monthly and average charges, in conjunction with existing credit limits, would help VHA management make reasonable determinations of cardholder spending limits. Without adequate monitoring, the financial exposure in VHA’s purchase card program can become excessive when its management does not exercise judgment in determining single purchase and monthly credit limits. For instance, during our review the difference between the cumulative monthly credit limits of $1.2 billion and actual monthly spending of $112 million represented a $1.1 billion financial exposure.
Limiting credit available to cardholders is a key factor in managing the VHA purchase card program, minimizing the government’s financial exposure, and enhancing operational efficiency. Furthermore, VHA has not provided sufficient human capital resources to enable monitoring of the purchase card program. One key position for monitoring purchases and overseeing the program is the A/OPC. While the A/OPC position is a specifically designated responsibility, we found in many instances that the A/OPC also functioned in another capacity or performed other assigned duties, for example, as a systems analyst, budget analyst, and contract specialist. Of the 90 A/OPCs who responded to a GAO question regarding other duties assigned, 55 A/OPCs, or 61 percent, reported that they spend 50 percent or less of their time performing A/OPC duties. For example, at the extreme low end of the scale, one A/OPC responded that he was also the budget analyst and that he spends 100 percent of his time on budget analyst duties, leaving no time for A/OPC duties on an ongoing basis. Given that VHA makes millions of purchase card and convenience check transactions annually, which in fiscal year 2002 exceeded $1.4 billion, it is essential that VHA management devote adequate attention to monitoring its purchase card program to ensure that it is properly managed to reduce the risk of fraud, waste, and abuse. The lack of adequate internal controls resulted in numerous violations of applicable laws and regulations and VA/VHA purchase card policies. We classified purchases made in violation of applicable laws and regulations or VA/VHA purchase card policies as improper purchases. 
We found violations that included purchases for personal use such as food or clothing, purchases that were split into two or more transactions to circumvent single purchase limits, purchases over the $2,500 micro-purchase threshold that were either beyond the scope of the cardholder’s authority or lacked evidence of competition, and purchases made from an improper source. We also found violations of VA/VHA policy that included using convenience checks to pay for purchases even though the vendor accepted the government purchase card, convenience check payments that exceeded established limits, and purchases for which procurement procedures were not followed. While the total amount of improper purchases we identified, based on limited scale audit work, is relatively small compared to the more than $1.4 billion in annual purchase card and convenience check transactions, we believe our results demonstrate vulnerabilities from weak controls that could have been exploited to a much greater extent. For instance, from the nonstatistical sample, we identified 17 purchases, totaling $14,054, for clothing, food, and other items that cardholders purchased for personal use. Items that are classified as personal expenses may not be purchased with appropriated funds without specific statutory authority. The FAR emphasizes that the governmentwide commercial purchase card may be used only for purchases that are otherwise authorized by law or regulation. We identified eight purchases totaling $7,510, in the nonstatistical sample that were subject to procurement from a mandatory source of supply but were obtained from other sources. Various federal laws and regulations, such as the Javits-Wagner-O’Day Act (JWOD), require government cardholders to acquire certain products from designated sources.
The JWOD program generates jobs and training for Americans who are blind or have severe disabilities by requiring that federal agencies purchase supplies and services furnished by nonprofit agencies, such as the National Industries for the Blind and the National Industries for the Severely Handicapped. We noted that cardholders did not consistently purchase items from JWOD suppliers when they should have. For example, a cardholder purchased day planner starter kits and refills for employees, totaling $1,591, from Franklin Covey, a high-end office supply store. These items provide essentially the same features as the JWOD items, which would have cost $1,126, or $465 less. During our data mining, we noted that VHA made 652 purchases totaling $76,350 from Franklin Covey during 2002. While we did not review all of the individual purchases, based on our detailed testing of similar transactions, it is likely that many of them should have been procured from a mandatory source at a much lower cost. Using data mining techniques, we identified purchases that appeared to have been split into two or more transactions by cardholders to circumvent their single purchase limit. We requested documentation for a statistically determined sample of 280 potential split transactions totaling $4 million. Of these 280 transactions, we determined that 49 were actual splits. Based on these results, we estimate that $17.1 million of the total fiscal year 2002 purchase card transactions were split transactions. For example, a cardholder with a single purchase limit of $2,500 purchased accommodations in 110 hotel rooms totaling $4,950. When performing follow-up, the cardholder stated that VA provides lodging accommodations for veterans receiving medical services such as radiation therapy, chemotherapy, and day surgery who live at least 150 miles from the medical facility.
The cardholder created two separate purchase orders and had the vendor create two separate charges, one for $2,500 and the other for $2,450, so that the purchase could be made. On the documentation provided, the cardholder stated the “purchase was split per the direction of the previous purchase card program administrator.” The cardholder also stated that currently, her purchase card at that facility is no longer used to pay hotel lodging for veterans. Hotel payments are now disbursed electronically via VA’s Financial Service Center. The purpose of the single purchase limit is to require that purchases above established limits be subject to additional controls to ensure that they are properly reviewed and approved before the agency obligates funds. By allowing these limits to be circumvented, VA had less control over the obligation and expenditure of its resources. The FAR provides that the purchase card may be used by contracting officers or individuals who have been delegated micro-purchase authority in accordance with agency procedures. Only warranted contracting officers, who must promote competition to the maximum extent practical, may make purchases above the micro-purchase threshold using the purchase card. Contracting officers must consider solicitation of quotations from at least three sources, and they must minimally document the use of competition or provide a written justification for the use of other than competitive procedures. When cardholders circumvent these laws and regulations, VHA has no assurance that purchases comply with certain simplified acquisition procedures and that cardholders are making contractual commitments on behalf of VHA within the limits of their delegated purchasing authority. From the statistical sample of purchases over $2,500, we found that for 19 of the 76 transactions, cardholders lacked warrant authority needed to make these types of purchases. 
Based on these results, we estimate that cardholders with only micro-purchase authority made $111.9 million of the total fiscal year 2002 purchases that exceeded $2,500. In addition, we found that 12 of the 76 transactions lacked evidence of competition. Based on these results, we estimate that $60 million of the total fiscal year 2002 purchases totaling more than $2,500 lacked evidence of competition. We identified 23 purchase card transactions totaling $112,924 in the nonstatistical sample related to the rental of conference room facilities used for internal VA meetings, conferences, and training. For these purchases, the cardholders could not provide documentation to show that efforts had been made to secure free conference space. VA’s acquisition regulations state that rental conference space may be paid for only in the event that free space is not available, and require that complete documentation of efforts to secure free conference space be maintained in the purchase order file. For one purchase, VHA paid $31,610 for conference room facilities and related services for 3 days at the Flamingo Hilton Hotel in Las Vegas. The cardholder provided no evidence that attempts to secure free facilities had been made. In addition, of the 23 purchase card transactions cited, 12 purchases totaling $103,662 occurred at one VHA facility. This included one transaction totaling $12,000 for a 3-day training course on Prevention and Management of Disruptive Behavior at the MGM Grand Hotel in Las Vegas. Again, we were not provided evidence that efforts had been made to secure free conference space. We identified improper use of convenience checks related to purchases that exceeded VA’s established limits of $2,500 and $10,000 and payments made by convenience check to vendors who accept purchase cards.
VA’s convenience check guidance requires that a single draft transaction be limited to $2,500, or in some cases $10,000, unless a waiver has been obtained from the Department of the Treasury, and restricts convenience check use to instances in which vendors do not accept purchase cards. From the statistical testing of convenience check limits, we found that 91 of 105 convenience check purchases were paid using multiple checks because the total purchase amount exceeded the established convenience check limit. Based on these results, we estimate that $13.8 million of the total fiscal year 2002 convenience check transactions were improperly used to pay for purchases exceeding the established limits. In April 2003, VA issued new purchase card guidance providing that for micro-purchases, convenience checks may be used in lieu of purchase cards only when it is advantageous to the government and it has been documented as the most cost-effective and practical procurement and disbursement method. However, we found no established criteria for determining the most cost-effective and practical procurement and disbursement method. The ineffectiveness of internal controls was also evident in the number of transactions that we classified as (1) wasteful, that is, excessive in cost compared to other available alternatives or for questionable government need, or (2) questionable because there was insufficient documentation to determine what was purchased. Of the 982 nonstatistical sample transactions we reviewed, 250 transactions, totaling $209,496, lacked key purchase documentation. As a result, we could not determine what was actually purchased, how many items were purchased, the cost of each of the items purchased, and whether there was a legitimate government need for such items.
Because we tested only a small portion of the transactions that appeared to have a higher risk of fraud, waste, or abuse, there may be other improper, wasteful, and questionable purchases in the remaining untested transactions. We identified 20 purchases totaling $56,655 that we determined to be wasteful because they were excessive in cost relative to available alternatives or were of questionable government need. We considered items wasteful if they were excessive in cost when compared to available alternatives, and questionable if they appeared to be items that were a matter of personal preference or convenience, were not reasonably required as part of the usual and necessary equipment for the work the employees were engaged in, or did not appear to be for the principal benefit of the government. We identified 18 purchases, totaling $55,156, for which we questioned the government need and 2 purchases, totaling $1,499, that we considered excessive in cost. A majority of the purchases were related to officewide and organizational awards. Many award purchases were for gift certificates and gift cards. Although VA policy gives managers great latitude in determining the nature and extent of awards, we identified 10 purchases, totaling $51,117, for award gifts for which VHA was unable to provide information on either the recipients of the awards or the purposes for which the recipients were being recognized. Therefore, we categorized these purchases as of questionable government need. For example, we identified two transactions for 3,348 movie gift certificates, totaling over $30,000. For these purchases, the cardholders and A/OPCs could provide neither the award letters nor justification for the awards.
Consequently, VHA could provide no evidence that these purchases were actually used for awards. We also identified two purchases that we considered wasteful because of excessive cost. We identified a cardholder who purchased a $999 digital camera when there were other less costly digital cameras widely available. For example, during the same 6-month period from February 2002 through July 2002, two other cardholders purchased digital cameras for $526 and $550. No documentation was available to show why the more expensive model was necessary. In the second example, we identified a $500 purchase of a 20-minute magic show performed during a VA volunteer luncheon. Although VA policies allow funds to be used for volunteer events, this expenditure, at roughly $25 per minute, seemed excessive. We also found questionable purchases. As I discussed earlier, we identified numerous transactions from the statistical samples that were missing adequate supporting documentation on what was actually purchased, how many items were purchased, and the cost of the items purchased. We requested supporting documentation for a nonstatistical sample of 982 transactions, totaling $1.2 million. Of these, we identified 315 transactions, totaling $246,596, that appeared to be improper or wasteful, for which VHA either provided insufficient or no documentation to support the propriety of the transactions. We classified 250 of these 315 transactions, totaling $209,496, as missing invoices because the cardholders either provided VHA internal documentation but no vendor documentation to support the purchase or provided no documentation at all to support the purchase. VHA internal documentation includes purchase orders, reconciliation documents, and receiving reports. Vendor documentation includes invoices, sales receipts, and packing slips. For 184 of these transactions, totaling $155,429, internal documentation was available but no vendor documentation was available.
No documentation at all was available for the remaining 66 transactions, totaling $54,068. These purchases were from vendors more likely to be selling unauthorized or personal use items. Examples of these types of purchases included a purchase from Radio Shack totaling $3,305, a purchase from Daddy’s Junky Music totaling $1,041, a purchase from Gap Kids totaling $788, and a purchase from Harbor Cruises totaling $357. An example of a transaction with internal documentation but no vendor documentation included a purchase from Circuit City where the cardholder stated that the purchase was for three $650 television sets and three $100 television stands, totaling $2,300 (including $50 shipping), that were needed to replace the existing ones in the VA facility’s waiting area. In another transaction, no vendor documentation was available for a transaction from Black & Gold Beer where the cardholder stated that the purchase of beer was for a patient. The purchase order shows that three cases were purchased at $12.50 each, totaling $37.50. The cardholder stated that the purchase was at the request of the pharmacy for a specific patient; however, no documentation was provided to support this claim. We believe that at least some of the items we identified may have been determined to be potentially fraudulent, improper, or wasteful had the documentation been provided or available. In addition, we noted that of the 66 transactions for which VHA cardholders provided no documentation to support the purchase, 32 transactions (49 percent) represented 2 or more transactions by the same cardholder. For example, one cardholder did not provide documentation for 5 transactions, totaling $5,799, from various types of merchants, including two restaurants, a movie theater, a country club, and an airport café.
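The documentation categories described above reduce to a simple decision rule: a transaction supported by vendor documentation (invoices, sales receipts, packing slips) is documented; one with only internal documentation (purchase orders, reconciliation documents, receiving reports) is missing its invoice; one with neither is undocumented. A minimal sketch of that rule in Python follows; the function and field names are our own illustrative choices, not part of VHA's actual systems or data layout.

```python
# Sketch of the documentation-based classification described above.
# Function and field names are hypothetical illustrations, not VHA systems.

def classify_documentation(has_internal_doc, has_vendor_doc):
    """Categorize a purchase by the supporting documentation on file.

    Internal documentation: purchase orders, reconciliation documents,
    receiving reports.  Vendor documentation: invoices, sales receipts,
    packing slips.
    """
    if has_vendor_doc:
        return "documented"
    if has_internal_doc:
        return "missing vendor documentation"  # internal records only
    return "no documentation"                  # nothing on file at all

# Hypothetical transactions illustrating the three outcomes.
transactions = [
    {"id": 1, "internal": True, "vendor": True},
    {"id": 2, "internal": True, "vendor": False},
    {"id": 3, "internal": False, "vendor": False},
]
for t in transactions:
    print(t["id"], classify_documentation(t["internal"], t["vendor"]))
```

Applied to the testimony's figures, the second branch corresponds to the 184 transactions with internal but no vendor documentation, and the third to the 66 transactions with no documentation at all.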
For 65 transactions, totaling $37,100, which we characterized as questionable because they appeared to be either improper or wasteful, the documentation we received was either incorrect or inadequate, and we were unable to determine the propriety of the transactions. For example, one transaction was for $1,350 to Hollywood Entertainment; however, the purchase order and invoice listed Hear, Inc., as the vendor for closed captioning services. The cardholder stated that she believed Hollywood Entertainment was an associated company name for Hear, Inc.; however, the company could not provide any documentation to support this statement. Additionally, from our Internet searches of both Hollywood Entertainment and Hear, Inc., we found no information to indicate that these two companies were associated in any way. We also identified 68 transactions, totaling $31,772, involving the purchase of tickets for sporting events, plays, movies, amusement or theme parks, and other recreation activities for veterans and VA volunteers. The documentation provided for these transactions was inadequate or missing vendor invoices; therefore, we could not determine whether these tickets were used in support of the volunteers or veterans. As a result, we categorized these purchases as questionable. Various programs under VHA, such as Recreation Therapy, Voluntary Services, and Blind Rehabilitation Service, sponsor assorted activities for veterans and VA volunteers. From our review of these types of purchases, we found that VHA does not have procedures in place to ensure that the purchased items were used by the intended recipients and accounted for properly. In most cases, there was inadequate or no documentation to account for how the tickets were distributed and who participated in the events. For example, we found a purchase of 46 tickets, totaling $812, for veterans to attend a Pittsburgh Pirates baseball game.
However, we were provided no documentation that identified who received the tickets or who attended the baseball game. Proper accountability over the distribution and receipt of tickets for such events is needed to help ensure that tickets are not improperly used for personal use. In closing, Mr. Chairman, I want to emphasize that without improvements in its internal controls to strengthen segregation of duties; documentation of purchase transactions; timely recording, review, and reconciliation of transactions; and program monitoring, VHA will continue to be at risk for noncompliance with applicable laws and regulations and its own policies and remain vulnerable to improper, wasteful, and questionable purchases. Our report, which is being released at this hearing, makes 36 recommendations to strengthen internal controls and compliance in VHA’s purchase card program to reduce its vulnerability to improper, wasteful, and questionable purchases. This concludes my statement. I would be happy to answer any questions you or other members of the committee may have. For information about this statement, please contact McCoy Williams, Director, Financial Management and Assurance, at (202) 512-6906, or Alana Stanfield, Assistant Director, at (202) 512-3197. You may also reach them by e-mail at williamsm1@gao.gov or stanfielda@gao.gov. Individuals who made key contributions to this testimony include Lisa Crye, Carla Lewis, and Gloria Medina. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. 
The Department of Veterans Affairs (VA) Office of Inspector General (OIG) has continued to identify significant vulnerabilities in the department's use of government purchase cards. Over the years, the OIG has identified internal control weaknesses that resulted in instances of fraud and numerous improper and questionable uses of purchase cards. The OIG has made a number of recommendations for corrective action. Given that VA is the second largest user of the governmentwide purchase card program, with reported purchases totaling $1.5 billion for fiscal year 2002, and because of the program weaknesses reported by the OIG, GAO was asked to determine whether existing controls at the Veterans Health Administration (VHA) were designed to provide reasonable assurance that improper purchases would be prevented or detected in the normal course of business, purchase card and convenience check expenditures were made in compliance with applicable laws and regulations, and purchases were made for a reasonable cost and a valid government need. GAO's report on this issue, released concurrently with this testimony, makes 36 recommendations to strengthen internal controls and compliance in VHA's purchase card program to reduce its vulnerability to improper, wasteful, and questionable purchases. Weaknesses in VHA's controls over the use of purchase cards and convenience checks resulted in instances of improper, wasteful, and questionable purchases. These weaknesses included inadequate segregation of duties; lack of key supporting documents; lack of timeliness in recording, reconciling, and reviewing transactions; and insufficient program monitoring activities. Generally, GAO found that internal controls were not operating as intended because cardholders and approving officials were not following VA/VHA operating guidance governing the program and, in the case of documentation and vendor-offered discounts, lacked adequate guidance.
The lack of adequate internal controls resulted in numerous violations of applicable laws and regulations and VA/VHA purchase card policies that GAO identified as improper purchases. GAO found violations of applicable laws and regulations that included purchases for personal use such as food or clothing, purchases that were split into two or more transactions to circumvent single purchase limits, purchases over the $2,500 micro-purchase threshold that either were beyond the scope of the cardholder's authority or lacked evidence of competition, and purchases made from an improper source. While the total amount of improper purchases GAO identified is relatively small compared to the more than $1.4 billion in annual purchase card and convenience check transactions, these purchases demonstrate vulnerabilities from weak controls that may have been exploited to a much greater extent. The ineffectiveness of internal controls was also evident in the number of transactions classified as wasteful or questionable. GAO identified over $300,000 in wasteful or questionable purchases, including two purchases for 3,348 movie gift certificates totaling over $30,000 for employee awards for which award letters or justification for the awards could not be provided and a purchase for a digital camera totaling $999 when there were other less costly digital cameras widely available. Also, 250 questionable purchases totaling $209,496 from vendors more likely to be selling unauthorized or personal use items lacked key purchase documentation. Examples of these types of purchases included a purchase from Radio Shack totaling $3,305, a purchase from Daddy's Junky Music totaling $1,041, a purchase from Gap Kids totaling $788, and a purchase from Harbor Cruises totaling $357. Missing documentation prevented determining the reasonableness and validity of these purchases.
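Split purchases of the kind described above (two or more transactions structured to stay under a single purchase limit) are commonly flagged in audit work by grouping charges by cardholder, vendor, and date and testing whether the combined amount exceeds the limit. A minimal sketch of that screen, using the $2,500 micro-purchase threshold cited in the testimony and entirely hypothetical transaction records:

```python
from collections import defaultdict

# Sketch of a split-purchase screen; transaction records are hypothetical.
MICRO_PURCHASE_LIMIT = 2500.00  # single purchase limit cited in the testimony

def flag_potential_splits(transactions, limit=MICRO_PURCHASE_LIMIT):
    """Flag groups of same-day, same-cardholder, same-vendor charges whose
    individual amounts stay under the limit but whose total exceeds it --
    the classic pattern of a purchase split to circumvent the limit."""
    groups = defaultdict(list)
    for t in transactions:
        groups[(t["cardholder"], t["vendor"], t["date"])].append(t["amount"])
    return [
        (key, round(sum(amounts), 2))
        for key, amounts in groups.items()
        if len(amounts) >= 2
        and all(a <= limit for a in amounts)
        and sum(amounts) > limit
    ]

# Two $1,500 charges by the same cardholder at the same vendor on the same
# day together exceed the $2,500 limit and are flagged; the lone $800
# charge is not.
sample = [
    {"cardholder": "A", "vendor": "V1", "date": "2002-03-01", "amount": 1500.00},
    {"cardholder": "A", "vendor": "V1", "date": "2002-03-01", "amount": 1500.00},
    {"cardholder": "B", "vendor": "V2", "date": "2002-03-01", "amount": 800.00},
]
print(flag_potential_splits(sample))
```

A screen like this produces candidates for review rather than findings; legitimate same-day purchases from one vendor can also match the pattern.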
Because only a small portion of the transactions that appeared to have a higher risk of fraud, waste, or abuse were tested, there may be other improper, wasteful, and questionable purchases in the remaining untested transactions.
As part of our undercover investigation, we produced counterfeit documents before sending our two teams of investigators out to the field. We found a few examples of NRC documents by searching the Internet. We subsequently used commercial, off-the-shelf computer software to produce two counterfeit NRC documents authorizing the individual to receive, acquire, possess, and transfer radioactive sources. To support our investigators’ purported reason for having radioactive sources in their possession when making their simultaneous border crossings, a GAO graphic artist designed a logo for our fictitious company and produced a bill of lading using computer software. Our two teams of investigators each transported an amount of radioactive sources sufficient to manufacture a dirty bomb when making their recent, simultaneous border crossings. In support of our earlier work, we had obtained an NRC document and had purchased radioactive sources as well as two containers to store and transport the material. For the purposes of this undercover investigation, we purchased a small amount of radioactive sources and one container for storing and transporting the material from a commercial source over the telephone. One of our investigators, posing as an employee of a fictitious company, stated that the purpose of his purchase was to use the radioactive sources to calibrate personal radiation detectors. Suppliers are not required to exercise any due diligence in determining whether the buyer has a legitimate use for the radioactive sources, nor are suppliers required to ask the buyer to produce an NRC document when making purchases in small quantities. The amount of radioactive sources our investigator sought to purchase did not require an NRC document. The company mailed the radioactive sources to an address in Washington, D.C. On December 14, 2005, our investigators placed two containers of radioactive sources into the trunk of their rental vehicle.
Our investigators – acting in an undercover capacity – drove to an official port of entry between Canada and the United States. They also had in their possession a counterfeit bill of lading in the name of a fictitious company and a counterfeit NRC document. At the primary checkpoint, our investigators were signaled to drive through the radiation portal monitors and to meet the CBP inspector at the booth for their primary inspection. As our investigators drove past the radiation portal monitors and approached the primary checkpoint booth, they observed the CBP inspector look down and reach to the right side of his booth. Our investigators assumed that the radiation portal monitors had activated and signaled the presence of radioactive sources. The CBP inspector asked our investigators for identification and asked them where they lived. One of our investigators on the two-man undercover team handed the CBP inspector both of their passports and told him that he lived in Maryland, while the second investigator told the CBP inspector that he lived in Virginia. The CBP inspector also asked our investigators to identify what they were transporting in their vehicle. One of our investigators told the CBP inspector that they were transporting specialized equipment back to the United States. A second CBP inspector, who had come over to assist the first inspector, asked what else our investigators were transporting. One of our investigators told the CBP inspectors that they were transporting radioactive sources for the specialized equipment. The CBP inspector in the primary checkpoint booth appeared to be writing down the information. Our investigators were then directed to park in a secondary inspection zone, while the CBP inspector conducted further inspections of the vehicle. During the secondary inspection, our investigators told the CBP inspector that they had an NRC document and a bill of lading for the radioactive sources.
The CBP inspector asked if he could make copies of our investigators’ counterfeit bill of lading on letterhead stationery as well as their counterfeit NRC document. The CBP inspector took the documents to the copier, but our investigators did not observe him retrieve any copies from the copier. Our investigators watched the CBP inspector use a handheld Radiation Isotope Identifier Device (RIID), which he said is used to identify the type of radioactive material present, to examine the investigators’ vehicle. He told our investigators that he had to perform additional inspections. After determining that the investigators were not transporting additional sources of radiation, the CBP inspector made copies of our investigators’ drivers’ licenses, returned their drivers’ licenses to them, and our investigators were then allowed to enter the United States. At no time did the CBP inspector question the validity of the counterfeit bill of lading or the counterfeit NRC document. On December 14, 2005, our investigators placed two containers of radioactive sources into the trunk of their vehicle. Our investigators drove to an official port of entry at the southern border. They also had in their possession a counterfeit bill of lading in the name of a fictitious company and a counterfeit NRC document. At the primary checkpoint, our two-person undercover team was signaled by means of a traffic light signal to drive through the radiation portal monitors and stopped at the primary checkpoint for their primary inspection. As our investigators drove past the portal monitors and approached the primary checkpoint, they observed that the CBP inspector remained in the primary checkpoint for several moments prior to approaching our investigators’ vehicle. Our investigators assumed that the radiation portal monitors had activated and signaled the presence of radioactive sources. The CBP inspector asked our investigators for identification and asked them if they were American citizens.
Our investigators told the CBP inspector that they were both American citizens and handed him their state-issued drivers’ licenses. The CBP inspector also asked our investigators about the purpose of their trip to Mexico and asked whether they were bringing anything into the United States from Mexico. Our investigators told the CBP inspector that they were returning from a business trip in Mexico and were not bringing anything into the United States from Mexico. While our investigators remained inside their vehicle, the CBP inspector used what appeared to be a RIID to scan the outside of the vehicle. One of our investigators told him that they were transporting specialized equipment. The CBP inspector asked one of our investigators to open the trunk of the rental vehicle and to show him the specialized equipment. Our investigator told the CBP inspector that they were transporting radioactive sources in addition to the specialized equipment. The primary CBP inspector then directed our investigators to park in a secondary inspection zone for further inspection. During the secondary inspection, the CBP inspector said he needed to verify the type of material our investigators were transporting, and another CBP inspector approached with what appeared to be a RIID to scan the cardboard boxes where the radioactive sources were placed. The instrumentation confirmed the presence of radioactive sources. When asked again about the purpose of their visit to Mexico, one of our investigators told the CBP inspector that they had used the radioactive sources in a demonstration designed to secure additional business for their company. The CBP inspector asked for paperwork authorizing them to transport the equipment to Mexico. One of our investigators provided the counterfeit bill of lading on letterhead stationery, as well as their counterfeit NRC document. The CBP inspector took the paperwork provided by our investigators and walked into the CBP station.
He came back several minutes later and returned the paperwork. At no time did the CBP inspector question the validity of the counterfeit bill of lading or the counterfeit NRC document. We conducted corrective action briefings with CBP and NRC officials shortly after completing our undercover operations. On December 21, 2005, we briefed CBP officials about the results of our border crossing tests. CBP officials agreed to work with the NRC and CBP’s Laboratories and Scientific Services to come up with a way to verify the authenticity of NRC materials documents. We conducted two corrective action briefings with NRC officials on January 12 and January 24, 2006, about the results of our border crossing tests. NRC officials disagreed with the amount of radioactive material we determined was needed to produce a dirty bomb, noting that NRC’s “concern threshold” is significantly higher. We continue to believe that our purchase of radioactive sources and our ability to counterfeit an NRC document are matters that NRC should address. We could have purchased all of the radioactive sources used in our two undercover border crossings by making multiple purchases from different suppliers, using similarly convincing cover stories, using false identities, and had all of the radioactive sources conveniently shipped to our nation’s capital. Further, we believe that the amount of radioactive sources that we were able to transport into the United States during our operation would be sufficient to produce two dirty bombs, which could be used as weapons of mass disruption. Finally, NRC officials told us that they are aware of the potential problems of counterfeiting documents and that they are working to resolve these issues. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be pleased to answer any questions that you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D.
Kutz at (202) 512-7455 or kutzg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony.

Given today's unprecedented terrorism threat environment and the resulting widespread congressional and public interest in the security of our nation's borders, GAO conducted an investigation testing whether radioactive sources could be smuggled across U.S. borders. Most travelers enter the United States through the nation's 154 land border ports of entry. Department of Homeland Security U.S. Customs and Border Protection (CBP) inspectors at ports of entry are responsible for the primary inspection of travelers to determine their admissibility into the United States and to enforce laws related to preventing the entry of contraband, such as drugs and weapons of mass destruction. GAO's testimony provides the results of undercover tests made by its investigators to determine whether monitors at U.S. ports of entry detect radioactive sources in vehicles attempting to enter the United States. GAO also provides observations regarding the procedures that CBP inspectors followed during its investigation. GAO has also issued a report on the results of this investigation (GAO-06-545R). For the purposes of this undercover investigation, GAO purchased a small amount of radioactive sources and one secure container used to safely store and transport the material from a commercial source over the telephone.
One of GAO's investigators, posing as an employee of a fictitious company located in Washington, D.C., stated that the purpose of his purchase was to use the radioactive sources to calibrate personal radiation detection pagers. The purchase was not challenged because suppliers are not required to determine whether prospective buyers have legitimate uses for radioactive sources, nor are suppliers required to ask a buyer to produce an NRC document when purchasing in small quantities. The amount of radioactive sources GAO's investigator sought to purchase did not require an NRC document. Subsequently, the company mailed the radioactive sources to an address in Washington, D.C. The radiation portal monitors properly signaled the presence of radioactive material when GAO's two teams of investigators conducted simultaneous border crossings. The investigators' vehicles were inspected in accordance with most elements of CBP policy at both the northern and southern borders. However, GAO's investigators, using counterfeit documents, were able to enter the United States with enough radioactive sources in the trunks of their vehicles to make two dirty bombs. According to the Centers for Disease Control and Prevention, a dirty bomb is a mix of explosives, such as dynamite, with radioactive powder or pellets. When the dynamite or other explosives are set off, the blast carries radioactive material into the surrounding area. The direct costs of cleanup and the indirect losses in trade and business in the contaminated areas could be large. Hence, dirty bombs are generally considered to be weapons of mass disruption instead of weapons of mass destruction. GAO investigators were able to successfully represent themselves as employees of a fictitious company and present a counterfeit bill of lading and a counterfeit NRC document during the secondary inspections at both locations.
The CBP inspectors never questioned the authenticity of the investigators' counterfeit bill of lading or the counterfeit NRC document authorizing them to receive, acquire, possess, and transfer radioactive sources.
Every satellite has a bus and payload. The bus is the body of the satellite and is managed by satellite control operations to maintain a desired location. It carries the payload and is composed of a number of subsystems, such as the power supply, antennas, and mechanical and thermal control equipment. The bus also provides electrical power, stability, and propulsion for the entire satellite. The payload includes the devices the satellite needs to perform its mission. This configuration differs for every type of satellite. For example, the payload for a weather satellite could include cameras to take pictures of cloud formations, while the payload for a GPS satellite would include equipment to pass navigation information from the satellites to receivers on Earth. The satellite payload is monitored and operated to collect data or to provide a capability to the warfighter or civilian user. GPS is a global PNT network consisting of space, ground control, and user equipment segments that support the broadcasts of military and civilian GPS signals. Each of these signals includes positioning and timing information, which enables users with GPS receivers to determine their position, velocity, and time 24 hours a day, in all weather, worldwide. GPS has changed the way the world operates, and underpins military operations as well as major sectors of the economy, including telecommunications; electrical power distribution; banking and finance; transportation; environmental and natural resource management; agriculture; search and rescue; and other emergency services. GPS is used by all branches of the military to guide troop movements, integrate logistics support, and synchronize communications networks. In addition, many U.S. and allied precision-guided munitions are directed to their targets by GPS signals. The space, ground control, and user equipment segments are needed to take full advantage of GPS capabilities.
The GPS space segment, which accounts for more than half of the total GPS costs in the Air Force’s current budget, is a constellation of satellites that orbit approximately 12,500 miles above the earth. According to Air Force officials, current GPS requirements do not specify a number of satellites that are to be in orbit, but rather an accuracy threshold for the system. Based upon the threshold requirement for positioning and timing accuracy, the Air Force derived a constellation size of 24 satellites available 95 percent of the time. Due to the unanticipated longevity of some of the previously launched satellites, the constellation has at times exceeded the derived number of satellites. See figure 1 below for a depiction of the GPS segments. GPS satellites broadcast encrypted military signals and unencrypted civilian signals that can be processed by GPS receivers to identify their location worldwide. Since the constellation became fully operational in 1995, it has consisted of satellites from various generations of development and production, each introducing improved capabilities and additional signals. The latest generation of satellites in orbit—the GPS IIF—broadcasts two military and three civilian signals. Additionally, the GPS constellation currently hosts a nuclear detonation detection system payload to monitor nuclear events on Earth. The GPS ground control segment comprises the Master Control Station at Schriever Air Force Base, Colorado; the Alternate Master Control Station at Vandenberg Air Force Base, California; and various monitoring stations and ground antennas. Information from the monitoring stations is processed at the Master Control Station to determine the accuracy of the satellites’ clocks (for signal timing) and the precision of their orbits. The Master Control Station operates the satellites and regularly updates their navigation messages, transmitting information to the satellites via the ground antennas. The U.S.
Naval Observatory Master Clock monitors the GPS constellation and provides timing data for the individual satellites. The GPS user equipment segment includes military and commercial GPS receivers. A receiver determines a user’s position by calculating the distance from four or more satellites based on the time it takes each of the signals to reach the receiver. Military GPS receivers are designed to utilize the encrypted military GPS signals that are only accessible to authorized users; commercial receivers use the civilian GPS signal, which is publicly available worldwide. The Air Force is in the process of modernizing the space segment under the GPS III program, which will incorporate advances over the GPS IIF satellites it is replacing, including a higher power military navigation signal to improve jamming resistance and a new civilian signal to allow users to receive GPS signals in combination with foreign satellite navigation systems. The acquisition strategy is to purchase up to 34 satellites following an incremental approach to replenish the current satellites in the constellation as they reach the end of their operational life. The original strategy was to purchase up to eight satellites in each of the first two increments and up to 18 satellites in the last increment. However, the GPS program is drafting a modified acquisition strategy to streamline the last two increments into one to allow larger buying quantities to take advantage of economies of scale, making this an appropriate time to examine future GPS options. The Air Force has purchased four GPS III satellites to date, the first of which is expected to launch in 2015. The Air Force plans to incrementally increase the capabilities of the GPS III satellites as technology maturation occurs and funding allows. The ground control segment is being modernized under the Air Force’s Global Positioning System Next Generation Operational Ground Control System (GPS OCX) program. 
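The receiver calculation described above (position from the travel times of signals from four or more satellites) amounts to solving four pseudorange equations for three position coordinates plus a receiver clock-bias term. The sketch below illustrates that solve with a simple Gauss-Newton iteration; the satellite coordinates, receiver position, and clock bias are made-up numbers in meters for demonstration only, not real GPS ephemeris data or the program's actual algorithms.

```python
import math

# Sketch of the receiver solve described above: estimate position (x, y, z)
# and receiver clock-bias distance b from four pseudoranges, where
# pseudorange_i = distance(receiver, satellite_i) + b.  All numbers below
# are hypothetical, in meters.

def gauss_solve(A, b):
    """Solve the small square system A x = b by Gaussian elimination."""
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        piv = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[piv] = M[piv], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def solve_position(sats, pseudoranges, iters=10):
    """Gauss-Newton solve of the four nonlinear pseudorange equations."""
    x = [0.0, 0.0, 0.0, 0.0]  # initial guess: origin, zero clock bias
    for _ in range(iters):
        J, r = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            dx, dy, dz = x[0] - sx, x[1] - sy, x[2] - sz
            d = math.sqrt(dx * dx + dy * dy + dz * dz)
            J.append([dx / d, dy / d, dz / d, 1.0])  # linearized row
            r.append(rho - (d + x[3]))               # measurement residual
        x = [xi + di for xi, di in zip(x, gauss_solve(J, r))]
    return x

# Hypothetical satellites roughly 20,000 km out; receiver at a known spot
# with a 100 m clock-bias distance, so the recovered answer can be checked.
sats = [(20.2e6, 0.0, 0.0), (0.0, 20.2e6, 0.0),
        (0.0, 0.0, 20.2e6), (1.3e7, 1.3e7, 1.3e7)]
truth, bias = (1000.0, 2000.0, 3000.0), 100.0
rho = [math.dist(truth, s) + bias for s in sats]
est = solve_position(sats, rho)
print([round(v, 2) for v in est])  # recovers the position and the 100 m bias
```

Real receivers additionally convert the clock-bias distance to a time offset (dividing by the speed of light) and work in Earth-centered coordinates with ephemeris and atmospheric corrections, but the core least-squares structure is the same.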
GPS OCX is also being developed in three increments and will eventually allow DOD to take full advantage of the capabilities offered by the various satellites. The first increment is to deliver a capability to launch and initiate on-orbit testing of GPS III satellites; the next increment is to deliver a capability to command and control GPS II and GPS III satellites; and the final increment is to deliver a capability to make military and international signals operational. Software challenges have delayed availability of the command and control capability, which was previously planned for 2015 and is currently expected to be operational in late 2016 (final block is planned for 2017). Each of the military services is managing modernization of its user equipment through the joint Military GPS User Equipment program, which is to develop modernized military GPS receivers that deliver improved capabilities for accurate, reliable, and available PNT service where current receiver performance might be compromised (e.g., jammed) or unavailable. The 2004 U.S. Space-based PNT policy established a coordinating structure to integrate input from, and delineate the respective roles of, the military and civilian departments and agencies for program planning (including identification of system requirements), resource allocation, system development, and operations. As part of the coordinating structure, an executive committee advises and coordinates among U.S. government agencies on maintaining and improving U.S. space-based PNT infrastructures, including GPS and related systems. The executive committee is co-chaired by the deputy secretaries of the DOD and the DOT, and includes members at the equivalent level from the Departments of State, Commerce, Homeland Security, the Interior, and Agriculture; the Joint Chiefs of Staff; and the National Aeronautics and Space Administration (NASA).
The National Coordination Office for Space-based PNT provides day-to-day support for the executive committee. The Air Force broadly addressed all committee reporting requirements in its assessment of future GPS options for the space segment. These requirements include evaluation of system capabilities, implementation approaches, technical and programmatic risks, and estimated costs. However, each of the options presented and evaluated is based on a GPS constellation of 30 satellites, whereas the Air Force's requirement is for a 24-satellite constellation. The 30-satellite constellation assumption has a significant effect on the cost of the options studied. It also raises questions about what constellation size the Air Force is committed to fielding and maintaining in the future. The Air Force GPS report identified and assessed nine options for the space segment of a future GPS, which were developed as part of a six-step process the Air Force used to conduct the study:

1. Define the purpose, scope, and decision criteria.
2. Assess the user requirement or capability in operational context.
3. Develop a broad trade space of all available options.
4. Review the trade space with the GPS Senior Advisory Group to select a small set of promising options and to develop criteria for assessing the down-selected options.
5. Analyze the options and assess them against the criteria using modeling and analysis tools, and conduct a risk assessment for each option.
6. Integrate all the findings and develop recommendations.

According to program officials, they focused the report on options for the space segment because it is the most costly part of the GPS. The space segment accounts for more than half of all GPS program costs in the current budget. For the space segment, we found that the Air Force's report addressed all four Committee requirements for each of these nine options.
Specifically, the report identified system capability, implementation approaches, technical and programmatic risks, and estimated costs, as provided for by the Committee. Each of the options for the space segment is assessed to determine how quickly it could be fielded, implementation approaches, technical and programmatic risks, and space segment costs (to include the cost of launch). GPS program officials stated that the cost analyses that support the nine space segment options were not high-fidelity estimates but were instead developed at a high level. Although this may be expected given the limited time provided to complete the study and prepare the report, the high-level cost estimates are not at a level that would support programmatic decisions. Table 1 below identifies our evaluation relating to the Committee requirements as well as our observations. Of the nine options the Air Force identified for the space segment, several relied on a core constellation of 18 to 24 GPS III satellites that would be augmented with other satellites to reach a total of 30. The last two options do not rely on a core GPS III constellation but would also consist of 30 satellites. See table 2 for a description of each option as well as information on the results of the Air Force's risk and cost assessments. Technical risk, for some of the options, includes the incorporation of technologies that are new to GPS. Programmatic risk includes areas such as establishing requirements, budgeting for funds, and obtaining approval of the acquisition strategy. The options are not presented in any particular order of significance. GPS program officials noted that none of the options presented in the report represent a radical departure from the GPS III program, which they characterized as an essential investment for the health of the constellation.
The GPS III program plans to eventually replace the existing constellation with upgraded satellites that carry the nuclear detonation detection system payload. The first seven options presented in the Air Force report involve GPS III or some modification of it. Specifically, five options (numbers 3, 4, 5, 6, and 7) involve launching two GPS III satellites on a single launch vehicle (referred to as dual launch) and, according to the Air Force report, are expected to substantially reduce costs by eliminating the need to buy launch vehicles for each satellite, as is currently practiced. Further, four options (numbers 4, 5, 8, and 9) entail fielding smaller, lower-cost PNT satellites—referred to as navigation satellites (NavSats)—yet-to-be-developed satellites that would either complement the core constellation of GPS III satellites in various configurations or replace GPS III satellites entirely. The NavSat concept is comparable to GPS III except that it consists of dedicated PNT satellites (without the secondary nuclear detonation detection system) and, according to the Air Force report, is expected to be significantly less costly than GPS III satellites due to the development and use of mass-reducing technologies. According to Air Force officials, the baseline GPS requirement for accuracy drives a requirement for a 24-satellite constellation at 95 percent availability. However, each of the GPS space segment options the Air Force assessed is based on a constellation of 30 satellites. Therefore, it is unclear whether investment costs for these options will in fact be lower than the baseline cost of the current GPS III program. Moreover, based on the estimated costs presented in the report, basing all options on a 30-satellite constellation may actually increase the overall GPS investment because the limited differences between the options assessed narrow the range of costs across the seven options that rely on a core of GPS III satellites.
Program officials noted that although the requirement is for 24 satellites, for the last seven years the Air Force has operated at least 27 satellites in the GPS constellation and, more recently, as many as 31 satellites due to the existing satellites outlasting the useful life originally estimated. We reported in 2010 that DOD predicted many of the older satellites in the constellation will reach the end of their operational life faster than they will be replenished over the next several years, decreasing the size of the constellation from its current level. Since the magnitude of error calculated by GPS receivers dramatically decreases as the constellation reaches 30 satellites, and program officials stated that users have come to rely on this increased accuracy over the last several years, the Air Force determined it was appropriate to assess options for providing this same level of service into the future. As a result, the report establishes a constellation of 30 satellites—an increase over the Air Force’s requirement for accuracy—as a baseline for each of the GPS options assessed. Program officials, however, acknowledged that there is reluctance to formally require the GPS constellation to be larger than the 24-satellite requirement supported by the Air Force without the certainty of a corresponding growth in program funding. While a 30-satellite constellation may be justifiable, using that as a baseline in the study may have eliminated some lower cost options that may exist with 24 satellites. Additionally, given the uncertainty regarding the magnitude of future GPS investments, the DOD and civilian agency community would benefit from knowing which constellation size the Air Force is committed to supporting. The Air Force GPS report identifies key drivers of cost and capability across the assessed options, but more information on each of these drivers is needed to fully assess their potential effect on the future GPS program. 
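The relationship between the number of satellites on orbit and the 24-satellite, 95-percent-availability requirement can be illustrated with a simple binomial availability model. The sketch below is purely illustrative: the per-satellite availability figure is an assumption, not a number from the Air Force report, and real constellation management does not satisfy the independence assumption. It nonetheless shows why operating 27 to 31 satellites makes having at least 24 healthy satellites far more likely than flying exactly 24.

```python
from math import comb

def prob_at_least(k: int, n: int, p: float) -> float:
    """Probability that at least k of n satellites are healthy, assuming each
    satellite is independently available with probability p (a simplifying
    assumption made here for illustration only)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Hypothetical per-satellite availability; actual values vary by block and age.
p = 0.95
for n in (24, 27, 30):
    print(f"{n} on orbit: P(at least 24 healthy) = {prob_at_least(24, n, p):.3f}")
```

With these notional numbers, a 30-satellite constellation keeps the chance of at least 24 healthy satellites well above 99 percent, whereas a bare 24-satellite constellation meets the threshold only when every satellite is up, which is one way to view the tension between the derived requirement and the 30-satellite baseline used in the study.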
Similarly, additional information is needed on inputs for the cost estimates presented. This information includes analysis of costs and other potential options associated with the ground and user equipment segments, which were excluded from the study, as well as costs associated with the various technical and programmatic risks identified in the report. Assurance that best practices are followed for subsequent cost estimates would also benefit future investment decisions. Additionally, the Air Force was not required by the Committee to consult representatives from the broader PNT advisory community, but several of these individuals provided us with useful information relating to the options assessed. Future investment decisions would benefit from a broader outreach to gain the input and perspectives of other GPS stakeholders. Development of dual launch capability for GPS III satellites, development of the NavSat concept, and the inclusion or exclusion of the nuclear detonation detection system payload, are key drivers of cost and capability between the GPS options assessed by the Air Force. For example, both the dual launch capability and the NavSat concept require use of technologies new to GPS with associated developmental risks, and as key drivers of cost, they are factors that differ significantly enough to have a material impact on the analysis and the resulting findings, including estimated costs. Additionally, two of the NavSat options are based on the possibility that the nuclear detonation detection mission could eventually be transferred to another space-based system, but the effect of such a change on overall system capability was not fully considered in the report. To assess the options as a basis for making future GPS investment decisions, more information on each of these drivers is important to understand the full impact they could have on the future integrity, capability, and cost of the system. 
For options 3 through 7, development of a dual launch capability for GPS III satellites is necessary, but a lack of detail in the Air Force’s report on the maturity of key technologies and cost considerations make it difficult to conduct a fully informed assessment of the viability of these options. Program officials indicated that this ongoing effort will require reductions in the size, weight, and power of the GPS III satellites, as well as development of equipment (adaptors) that will allow two satellites to fly on a single launch vehicle. According to program officials, these adaptors are nearing a high level of technology maturity. However, the report names various other components and technologies under development for this effort, such as lithium ion batteries and more efficient signal amplifiers, but does not indicate where they stand in terms of technology maturity. Instead, the report assumes that dual launch capability is fully developed and that demonstration of the capability will be ready in time to launch the seventh and eighth GPS III satellites (of potentially 32 production satellites). Consequently, it is difficult to assess the near- to medium-term feasibility and timeframes of all options that are based on dual launching GPS III satellites. Additionally, the Air Force did not fully consider the cost impact of its dual launch approach. While cost savings may eventually be accrued from dual launches, the report does not address the acquisition and operations strategy—what is necessary to buy and conduct dual launches up front. For example, one of the subject matter experts we consulted identified a number of factors which would result in cost increases that offset some of the expected savings from not procuring a second launch vehicle. 
These include (1) taking the steps to ensure the constellation's integrity would remain intact in the event of a launch failure, as two satellites would be lost in such an instance rather than one (e.g., Air Force officials would have to decide whether to orbit more spares to overcome the effects of a possible launch failure); (2) making changes at the manufacturing level to accommodate two satellites, such as the additional storage costs to hold the first satellite until the second satellite is ready to launch; (3) ensuring launch site facilities are capable of processing two satellites essentially simultaneously, noting that discovery of an anomaly during launch site testing of one satellite could affect the other satellite; and (4) providing the additional support equipment and personnel necessary to prepare two satellites for launch. This expert further acknowledged that none of these factors were "showstoppers" but suggested that they should be taken into consideration. Program officials noted that in developing the notional launch schedule for the options, they factored in a two percent probability of launch failure, which is an industry standard for mature launch vehicle designs. However, they acknowledged that modifications needed on the launch vehicle to dual launch GPS satellites have not been demonstrated. Some of the identified technical risks for the notional NavSat options may be understated given that they involve undertaking new acquisition programs with technologies not previously used on GPS satellites. For example, many of the initiatives to reduce satellite size, weight, and power underway for developing dual launch capability are also being explored for application on NavSats. We have found in our prior work relating to best practices for weapon system acquisition that programs employing technologies that are not fully mature tend to face challenges staying on budget and on schedule, which increases risk to the overall program.
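The launch-failure trade the expert describes can be made concrete with hedged back-of-the-envelope arithmetic. In the sketch below, the 32-satellite production run is an illustrative assumption, and the two percent per-launch failure figure simply reuses the industry standard cited by program officials. Dual launch halves the number of launches, so the probability that any failure occurs goes down, but each failure now costs two satellites.

```python
P_FAIL = 0.02   # per-launch failure probability (industry standard cited in the report)
SATS = 32       # notional GPS III production run, assumed here for illustration

def launch_risk(sats_per_launch: int):
    """Return (launch count, expected satellites lost, probability of any failure),
    treating launch outcomes as independent for simplicity."""
    launches = SATS // sats_per_launch
    expected_loss = launches * P_FAIL * sats_per_launch
    p_any_failure = 1 - (1 - P_FAIL) ** launches
    return launches, expected_loss, p_any_failure

for per in (1, 2):
    n, loss, p_any = launch_risk(per)
    print(f"{per} per launch: {n} launches, expected loss {loss:.2f}, "
          f"P(any failure) {p_any:.2f}")
```

Under these assumptions the expected number of satellites lost is identical either way; what changes is the shape of the risk, with dual launch concentrating losses into rarer but larger events. That shift is precisely why the expert raises spares and constellation-integrity planning as offsetting costs.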
The Air Force report indicates that maturation of the NavSat concept is a prerequisite for many options assessed, including development of new satellite buses and payloads. These development efforts are assigned medium and medium-high technical and programmatic risk. While the report acknowledges that there are technical risks involved with developing higher efficiency signal amplifiers and other size-, weight-, and power-reducing technologies, it concludes that none of these technologies is high risk. However, officials from the Air Force Research Laboratory, which is helping to develop some of these technologies for potential application on NavSats, noted that some of the new technologies are currently at a relatively low level of maturity (i.e., only tested in a laboratory environment) and will not likely be feasible for the GPS program until at least 2018. Moreover, one of the subject matter experts we consulted noted that there are other potential risks associated with the NavSat options, such as not having a proven track record. This expert noted that problems in the NavSat programs could result in the loss of user confidence in GPS that has been built up over decades of delivering a reliable satellite navigation service. Details regarding technology maturity are also needed to better understand the cost impact of these options on the overall GPS program. For example, program officials acknowledged that savings from dual launching GPS III satellites could help offset some of the additional costs associated with development of NavSats. However, they noted that there would likely be significant initial development costs for the NavSats that would result in the need for a net budget increase in the near term. Additionally, a key aspect of NavSat capability relevant to civilian GPS users is not addressed in the Air Force report, and without additional clarification, it is difficult to determine the impacts to these users.
The report notes that one approach for reducing the size, weight, power, and cost of proposed NavSats would be to reduce the total number of navigation signals they transmit. Specifically, under option 5 in the report, NavSats would broadcast two of four civilian signals and two military signals. Three of the subject matter experts we consulted noted concerns, in part because determining which civilian signals to exclude would likely result in the prioritization of one user group over another, as the civilian signals have different applications. Air Force officials indicated that option 5 is based on a larger core of GPS III satellites—which are required to carry all civilian signals—and augmented by a smaller number of NavSats. However, it is unclear how a core of GPS III satellites would compare with a combined constellation of GPS III satellites and NavSats, both required to broadcast all civilian signals. Two of the NavSat options included in the Air Force report would require the eventual elimination of the nuclear detonation detection system payload from GPS satellites, and the report notes that as long as this mission is a priority and is planned to be hosted on GPS satellites, these options—and their low relative costs—would not be viable. Elimination of the nuclear detonation detection system could have other effects on the overall capability of the GPS system not recognized or addressed in the Air Force report. For example, one of the subject matter experts we consulted mentioned that the GPS search and rescue function, a requirement for GPS III satellites, shares an antenna with the nuclear detonation detection system payload. Thus, the two options (numbers 8 and 9) that drop the nuclear detonation detection system completely would potentially forgo the search and rescue function as well.
However, program officials said the search and rescue payload is relatively small and could be adapted to the NavSats without the nuclear detonation detection system, and also noted that in a mixed constellation of GPS III satellites and NavSats, there would likely be sufficient coverage for the search and rescue mission even if the search and rescue sensors were only on the GPS III satellites. Given the differing views of these officials and subject matter experts, this issue will need to be resolved as the Air Force pursues future assessments of GPS options. The Air Force’s high-level cost estimates appear consistent across the options for the space segment, but the estimates do not include the other two key GPS segments. For its study, the Air Force defined affordability as the reduction in total ownership cost of sustaining GPS throughout its life cycle. However, without inclusion of the other two key segments of the system, the methodology used does not reflect true life cycle costs. Additionally, Air Force officials noted that the cost estimates developed for the report do not include cost risk. This leaves some question as to the ultimate usefulness of the cost estimates, risk rankings, or both. In addition, the Air Force did not apply all aspects of best practices in developing its cost estimates, which further reduces the usefulness of these estimates in decisionmaking. Finally, the Air Force did not obtain input from the PNT advisory community which may have helped inform its assessment. For a defense acquisition program, DOD defines life cycle costs as the total cost to the government of acquisition and ownership for a program over its full life. This includes the cost of development, acquisition, operations, and support (to include personnel), and where applicable, disposal. 
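The DOD life cycle cost definition above can be made concrete with a small roll-up across segments and phases. Every dollar figure in the sketch below is notional rather than taken from the Air Force report (the ground-segment development figure loosely mirrors the roughly $3.7 billion OCX cost discussed elsewhere in this report), but the sketch shows how estimating only the space segment, even when that segment is more than half of the total, understates true life cycle cost.

```python
# Illustrative life cycle cost roll-up ($ billions, notional figures only).
PHASES = ("development", "acquisition", "operations_support", "disposal")
SEGMENTS = {
    "space":  {"development": 4.0, "acquisition": 8.0, "operations_support": 3.0, "disposal": 0.1},
    "ground": {"development": 3.7, "acquisition": 0.5, "operations_support": 1.0, "disposal": 0.0},
    "user":   {"development": 1.5, "acquisition": 2.0, "operations_support": 0.5, "disposal": 0.0},
}

def life_cycle_cost(segments):
    """Sum every phase of every included segment, per the DOD definition."""
    return sum(SEGMENTS[s][p] for s in segments for p in PHASES)

space_only = life_cycle_cost(["space"])
full = life_cycle_cost(SEGMENTS)
print(f"space only: ${space_only:.1f}B, all segments: ${full:.1f}B "
      f"({space_only / full:.0%} of the total)")
```

Even with the space segment dominating this notional total, the omitted ground and user segments still account for billions, which is the basis of the concern that a space-segment-only methodology does not reflect true life cycle costs.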
While the Air Force report indicates that the estimates are based upon life cycle costs, it does not include key elements necessary to meet the definition of life cycle cost, thereby underestimating the total costs of the options and limiting their usefulness. The Air Force’s cost estimates for each option only included the space segment—satellite and launch vehicles—which it determined to be the only segment to have significant impact on overall costs. As mentioned previously, the space segment accounts for the largest share—more than half—of total GPS costs in the Air Force’s current budget. Under this methodology, program officials said options dealing with the ground control system were not included in the cost estimates because costs associated with adding capability to control new satellites were estimated to be a small percentage of the total GPS system cost, and would not provide a significant distinction between the options. However, this estimation, without the ground control segment, may not be valid given that there could be unknown cost and schedule risks for the ground control segment associated with modifications to accommodate new and different satellite vehicles. More specifically, while changes to the ground segment may not be a major cost differentiator between the options, they have presented significant challenges. For example, program officials said the costs of adding ground control capability for a new generation of satellites are known, but this is based on updates to the current ground control system. However, in our prior work, we also reported that modernization of the current ground control segment has been challenging, and has only enabled limited capability to control the new satellites, rather than facilitating full access to their new capabilities. 
Additionally, the ground control segment is being completely replaced as part of the GPS OCX program, which, according to Air Force officials, will eventually enable users to take full advantage of the capability offered by the GPS constellation, and is expected to cost nearly $3.7 billion through fiscal year 2017. This cost is significantly higher than initial planning estimates due to software and other challenges experienced thus far, and the extent and cost of this effort were not recognized in the Air Force report. Additionally, two of the subject matter experts we consulted indicated that inclusion of user equipment could have benefited the study. The Air Force report did not assess any options that would require hardware changes to user equipment beyond what is already planned for user equipment modernization because of the large number of civilian receivers currently in use. One subject matter expert noted that although the report cites this as the primary reason user equipment was not examined, civilian user equipment will continue to evolve by integrating GPS signals with signals from other global navigation systems, as well as with non-space-based PNT capabilities such as inertial navigation sensors, chip scale atomic clocks, and terrestrial radiolocation technologies. The expert noted that this could reduce some of the capability that the space segment needs to provide in the future. Based on our discussions with officials from the Air Force Research Laboratory, this could in turn potentially reduce the size and power requirements for the space vehicle and thereby the overall cost. According to Air Force officials involved in the study, the overall cost estimates do not include cost risk, which is defined as the risk associated with the ability of the program to achieve its acquisition strategy cost objectives. Similarly, identified technical and programmatic risks and their relative rankings do not generally appear to have a basis in cost.
The options involving a constellation of all NavSats or at least a core of NavSats (numbers 8 and 9 respectively) both reflect high technical and programmatic risks. At the same time, the cost estimates for these options reflect the lowest development and procurement costs (excluding operations and sustainment costs) over the fiscal year 2013 to fiscal year 2030 timeframe. We have found in our prior work relating to key practices for weapon system acquisition that programs employing technologies that are not fully mature tend to require more time in development or production and more funding than is initially anticipated. Without additional details as to how cost factors into the risk determination, it is difficult to know the relative utility of the Air Force’s cost estimates for making further GPS decisions. Best practices in cost estimating result in accurate and credible cost estimates that management can use for making informed decisions. The methodology consists of 12 steps that we present in figure 2. These best practices represent an overall process of established, repeatable methods that result in high-quality cost estimates that are comprehensive and accurate, and that can be easily and clearly traced, replicated, and updated. We found that the Air Force followed some aspects of the 12-step process in developing its estimates for GPS architecture options, but did not follow others. For example, the Air Force defined the estimate’s purpose and identified some ground rules and assumptions, but did not perform sensitivity analysis and conducted minimal analysis of cost-related risks. This would be expected given the time constraints and other limitations of the study; however, to further assess its GPS options, more fully-developed cost estimates would be important in facilitating more rigorous comparisons across the options. GPS has grown into a global utility whose multi-use services are integral to U.S. 
national security and transportation safety, and are an essential element of the worldwide economic infrastructure. As a result, any decision regarding GPS has far reaching consequences for many beyond DOD. This fact was clearly addressed in the national PNT policy which formalized the membership of the PNT executive committee to include other government departments and agencies that have a stake in the stewardship of GPS. This committee ensures that the national security, homeland security, and civil requirements receive full and appropriate consideration in the PNT decision process and facilitates the integration and reduction of conflicts of these requirements for PNT, as required. However, the Air Force did not seek input from those departments and agencies that are represented on the committee, including the Department of Transportation, which co-chairs the committee with DOD. These key stakeholders could have provided valuable insight on potential impacts of the options presented, as well as future needs arising from any changes to the GPS. Several of the subject matter experts we consulted directly represent the stakeholders identified in the PNT policy. While these individuals largely believe the Air Force report is a good starting point for future GPS decisions, they provided additional, meaningful insights relating to potential risks and costs associated with the individual options as well as with ground control and satellite launch; user equipment capabilities; and the concerns and needs of various GPS users. According to Air Force officials, they did receive input from individuals who advocated for the needs of non-DOD users, but acknowledged that these individuals did not directly represent those users or other stakeholders identified in PNT policy. 
While the House Armed Services Committee did not specifically require it to seek input from others, by obtaining input from key stakeholders, such as was provided by the subject matter experts we consulted, the Air Force may have been able to gain additional insights as it developed and weighed the various GPS options. This was a sentiment shared by some of our subject matter experts, who expressed concern over the lack of input from the civilian community. The Air Force's GPS report met the committee's reporting requirements and serves as a good starting point from which to assess potential lower cost GPS options. The study focused on the space segment and as such, the options consist of various configurations of satellites and launch options—whether to launch satellites individually or in multiples. The less risky options appear to be relatively minor deviations from the department's current approach in fielding GPS III, with the key decision being whether to launch one or two satellites at a time. Pursuing any of these options would likely require a near-term increase in the overall GPS budget to achieve future savings—a challenge in the current fiscally constrained environment. Yet the study used a larger constellation size than the Air Force's current derived requirements and lacked the comprehensiveness that would allow it to be used for future decision making. Based on acquisition best practices, there are actions the Air Force could take to improve the study results, including: identifying the constellation size to be supported; expanding the areas of consideration to include ground control and user equipment; more thoroughly defining and analyzing key capabilities and implementation approaches; and using a higher fidelity approach in assessing risk and estimating cost going forward.
The Air Force could also benefit from greater consultation with the broader PNT stakeholder and advisory community, which could offer valuable perspectives relating to the overall approach for investigating future GPS options, as well as the relative merits and unknowns associated with each option analyzed. While there are a number of limitations in the study, it is valuable as a basis for substantive discussion of long-term GPS investments. Going forward, it is important for the Air Force to take a more comprehensive approach in assessing options for making sound future GPS investments. To better position the DOD as it continues pursuing more affordable GPS options, and to have the information necessary to make decisions on how best to improve the GPS constellation, we recommend that the Secretary of Defense direct the Secretary of the Air Force to take the following three actions:
1. Affirm the future GPS constellation size that the Air Force plans to support, given the differences between the derived requirement of a 24-satellite constellation and the 30-satellite constellations called for in each of the space segment options in the Air Force's report.
2. Ensure that future assessments of options include full consideration of the space, ground control, and user equipment segments, and are comprehensive with regard to their assessment of costs, technical and programmatic risks, and schedule.
3. Engage stakeholders from the broader civilian community identified in PNT policy in future assessments of options. This input should address civilian GPS signal quality and integrity, which signals should be included in or excluded from options, and other technical and programmatic matters.
In written comments on a draft of this report, DOD concurred with all three of our recommendations to better position the DOD as it continues pursuing more affordable GPS options, and to have the information necessary to make decisions on how best to improve the GPS constellation. DOD's written comments are reprinted in appendix II. DOD concurred with our first recommendation that the Secretary of the Air Force affirm the future GPS constellation size the Air Force plans to support. In its response, the department stated the numbers of satellites are affirmed annually in the President's Budget request. However, the budget shows satellite procurements over time and does not specify the target constellation size to meet current or future accuracy requirements, which has a direct impact on annual procurement costs. Additionally, as our report indicates, the Air Force report based all options on a 30-satellite constellation while reporting a GPS requirement for accuracy of a 24-satellite constellation at 95 percent availability. Therefore, the need remains for the department to more clearly and transparently identify the target GPS constellation size the department intends to pursue. DOD also concurred with our second recommendation to ensure future assessments of options include full consideration of the space, ground control, and user equipment segments, and are comprehensive with regard to their assessment of costs, technical and programmatic risks, and schedule. In its response, the department stated that while consideration of the space and ground control segments should be comprehensive in these areas, the user equipment segment should be included in future assessments when those assessments include the fielding of new user equipment capability. As one of the subject matter experts we consulted noted, user equipment continues to evolve and could potentially embrace PNT technologies and capabilities that would reduce required capabilities for GPS satellites.
As our report indicates, a comprehensive look at user equipment is warranted, especially given that the user equipment modernization program is in a pre-development phase. Finally, DOD concurred with our third recommendation to engage stakeholders from the broader civilian community identified in PNT policy in future assessments of options. In its response, the department said stakeholders from the broader civilian community identified in PNT policy should be engaged in future assessments of options that include changes to the Standard Positioning System performance standard or to agreements or commitments the DOD has already made with civil stakeholders. However, as we noted in our report, considering the unknown effect on the civilian user community of some of the options presented, as well as potential future application of new technology for user equipment, comprehensive assessment of future GPS options should involve the broader PNT community, particularly those stakeholders identified in PNT policy. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Secretary of the Air Force. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2527 or makm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. House Report No. 112-479 accompanying H.R. 4310, the National Defense Authorization Act for Fiscal Year 2013, directed the Commander of the Space and Missile Systems Center, U.S. Air Force, to provide a report to the congressional defense committees on lower-cost solutions for providing GPS capability following the procurement of the GPS III satellites.
The committee provided that the report should identify the system capability, possible implementation approaches, technical and programmatic risks, and the estimated costs of any solutions it recommends. The committee also mandated GAO to review the report provided by the Commander of the Space and Missile Systems Center, and to provide its recommendations to the congressional defense committees within 90 days after the Air Force report is received. To determine the extent to which the Air Force’s report, Lower Cost Solutions for Providing Global Positioning System (GPS) Capability, met the requirements, and to identify additional information that could guide future investment decisions, we assessed the report and the approach, assumptions, and criteria the Air Force used to conduct the study. To accomplish this, we obtained and reviewed documents that supported the Air Force’s report. We then discussed the report and our preliminary analysis with GPS program and Aerospace Corporation officials responsible for the report to obtain their perspectives and clarify aspects of the report. We also reviewed other relevant high-level space strategic documents, including the National Space Policy and the 2008 Biennial GPS Report to Congress. To identify the technological advances available to the Air Force and the readiness of those technologies, we interviewed officials from the Defense Advanced Research Projects Agency and the Air Force Research Laboratory. To identify additional or clarifying information that could help guide future investment decisions, we sent data collection instruments to ten subject matter experts from the Positioning, Navigation and Timing stakeholder and advisory community to obtain their insights on the GPS options the Air Force identified, as well as on other information that could inform future decisions; seven of them responded.
These subject matter experts were advisors to the National Space-Based Positioning, Navigation, and Timing Executive Committee or were identified by the National Coordination Office for Space-Based Positioning, Navigation, and Timing as experts in the field; all were also government officials. To assess the completeness of cost estimates cited in the report, we assessed the process used to develop the cost estimates in the study to determine the extent to which they followed GAO’s 12-Step Reliable Process for Developing Cost Estimates assessment tool. We conducted this performance audit from April 2013 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Marie A. Mak, (202) 512-2527 or makm@gao.gov. In addition to the contact named above, Art Gallegos, Assistant Director; Andrew Redd; Emile Ettedgui; Jean Lee; Marie Ahearn; Karen Richey; Roxanna Sun; and Robert Swierczek made key contributions to this report.

The GPS--a space-based satellite system that provides positioning, navigation, and timing data to users worldwide--has become an essential U.S. national security asset and component in daily life. The GPS program is being modernized to enhance its performance, accuracy, and integrity. In 2013, the House Armed Services Committee directed the Air Force to report on lower-cost GPS solutions. The committee also mandated that GAO review the Air Force report. GAO (1) assessed the extent to which the Air Force GPS report met committee requirements; and (2) identified additional information that is important in guiding future GPS investments.
GAO reviewed the Air Force report, interviewed officials responsible for preparing it, and consulted subject matter experts from the positioning, navigation, and timing advisory community. GAO found that the Air Force, the military branch responsible for Global Positioning System (GPS) acquisition, in its report on Lower Cost Solutions for Providing Global Positioning System Capability, broadly addressed all four congressional requirements--system capability, implementation approaches, technical and programmatic risks, and estimated costs--for each option presented for the space segment. GPS consists of three segments--space, ground control, and user equipment--but the study only addressed the space segment, which accounts for the largest share of total GPS costs--more than half--in the Air Force's current budget. The Air Force identified and assessed nine options for future GPS space segments, ranging in cost from $13 billion to $25 billion from fiscal year 2013 through 2030. The report assessed each option based on a constellation or collection of 30 total satellites instead of 24, which is the Air Force's baseline GPS requirement for accuracy. This increase in total satellites raises an issue with the constellation size the Air Force intends to support in the future. Air Force officials stated that the cost analyses supporting the nine options were high-level cost estimates. Although this may be expected given the time constraints and other limitations of the study, these estimates are not at a level that would support future GPS investment decisions. Although the Air Force report is a good starting point, more information on key cost drivers and cost estimates, and broader input from stakeholders would help guide future investment decisions.
Specifically, the key cost drivers include dual launch capability (launching two satellites on a single launch vehicle), navigation satellites (smaller GPS-type satellites yet to be developed), and a nuclear detection capability. The cost estimates also excluded the ground control and user equipment segments and cost risk. Further, the Air Force did not obtain inputs from some key stakeholders such as those from the GPS positioning, navigation, and timing advisory community. Consequently, without conducting a more comprehensive assessment that addresses each of these concerns, the Air Force is not yet in a position to make sound future GPS investments. GAO recommends the Air Force: (1) affirm the future size of the GPS constellation it plans to support; (2) ensure future assessments are comprehensive and include cost risk and the impact of options on all three GPS segments; and (3) engage the broader stakeholder community in future assessments of options. DOD concurred with these recommendations.
The basic process by which all federal agencies typically develop and issue regulations is set forth in the Administrative Procedure Act (APA) and is generally known as the rulemaking process. Rulemaking at most regulatory agencies follows the APA’s informal rulemaking process, also known as “notice and comment” rulemaking, which generally requires agencies to publish a notice of proposed rulemaking in the Federal Register, provide interested persons an opportunity to comment on the proposed regulation, and publish the final regulation, among other things. Agencies may also take other actions to gather information during the rulemaking process; for example, agencies may hold a public meeting to allow stakeholders to discuss specific aspects of the proposed regulation. Under the APA, a person adversely affected by an agency’s rulemaking is generally entitled to judicial review of that new rule. For regulations developed and issued using the APA’s notice and comment rulemaking process, the court may invalidate a regulation if it finds it to be “arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law,” sometimes referred to as the arbitrary and capricious test. In addition to the APA requirements, federal agencies typically must comply with requirements imposed by certain other statutes and executive orders. Some of the relevant laws include the Paperwork Reduction Act and the Regulatory Flexibility Act, which were both enacted in 1980; the Congressional Review Act, enacted in 1996; and the Information Quality Act, enacted in 2000. (See app. II for an overview of requirements that commonly apply to OSHA standard setting.) In accordance with various presidential executive orders, agencies work closely with staff from OMB’s Office of Information and Regulatory Affairs, who review draft regulations and other significant regulatory actions prior to publication. Most of the additional requirements that affect OSHA standard setting were established in 1980 or later.
Agencies can supplement the notice and comment procedure for developing regulations through a process called “negotiated rulemaking.” Through this process, the agency convenes a negotiated rulemaking committee, generally composed of representatives of the agency and the various interest groups to be affected by a potential regulation, before developing and issuing the proposed rule. If the committee comes to an agreement on the content of a potential regulation, the agency may use it as the proposed rule. However, any agreement by the negotiated rulemaking committee is not binding on the agency or interest groups represented on the committee. Negotiated rulemaking does not replace any procedures required by the APA; rather, it can be used to help reach agreement among the members of the committee on the content of a proposed regulation, and according to proponents, it may help decrease the likelihood of subsequent litigation over the regulation. OSHA administers the OSH Act, which was enacted to help assure, so far as possible, safe and healthful working conditions for the nation’s workers. Section 6(b) of the act authorizes the Secretary of Labor to “promulgate, modify, or revoke any occupational safety or health standard” when he or she determines that doing so would serve the objectives of the OSH Act. Occupational safety and health standards are a type of regulation and are defined as standards that require “conditions, or the adoption or use of one or more practices, means, methods, operations, or processes, reasonably necessary or appropriate to provide safe or healthful employment and places of employment.” Section 6(b) of the act also specifies the procedures by which OSHA must promulgate, modify, or revoke its standards. These procedures include publishing the proposed rule in the Federal Register, providing interested persons an opportunity to comment, and holding a public hearing upon request.
Section 6(a) of the OSH Act directed the Secretary of Labor (through OSHA) to adopt any national consensus standards or established federal standards as safety and health standards within 2 years of the date the OSH Act went into effect. In general, national consensus standards are safety and health standards that a nationally recognized standards-producing organization, such as the National Fire Protection Association, adopts after reaching substantial agreement among those who will be affected, including businesses, industries, and workers. Unlike OSHA’s standards, which are mandatory, employers may choose whether to voluntarily follow national consensus standards. The OSH Act specified that OSHA set standards under section 6(a) without following OSHA’s typical standard-setting procedures or the APA, including provisions for public comment. Indeed, according to an OSHA publication, hundreds of requirements in current OSHA standards make reference to or are based on about 200 consensus standards, but the OSH Act does not explicitly require OSHA to ensure that these standards are kept up to date. The vast majority of these standards have not changed since originally adopted, despite significant advances in technology, equipment, and machinery over the past several decades. When a federal agency decides to develop a rule, it is generally required by the National Technology Transfer and Advancement Act of 1995 to use technical standards developed or adopted by voluntary consensus standards bodies, where appropriate, except when doing so is inconsistent with applicable law or otherwise impractical. Under the OSH Act, if OSHA issues a rule that differs substantially from an existing national consensus standard, the agency must publish in the Federal Register an explanation of why its rule will better effectuate the purposes of the OSH Act than the national consensus standard.
OSHA’s Directorate of Standards and Guidance, working with staff from other Labor offices, leads the agency’s standard-setting process. These staff explore the appropriateness and feasibility of developing standards to address workplace hazards that are not covered by existing standards. Once OSHA initiates such an effort, an interdisciplinary team typically composed of at least five staff focuses on that issue. We analyzed the 58 significant health and safety standards that OSHA issued between 1981 and 2010 and found that the time frames for developing and issuing them ranged from 15 months to about 19 years (see table 1). At any given point during this period, OSHA staff worked to develop standards that eventually became final, as represented in table 1. On average, OSHA took a total of about 93 months (7 years, 9 months) to develop and issue these standards. After the agency published the proposed standard, it took an average of about 39 months (3 years, 3 months) to finalize the standard. The majority of these standards—47 of the 58—were finalized between 1981 and 1999. In addition to these final standards, OSHA staff have also worked to develop standards that have not yet been finalized. For example, according to agency officials, OSHA staff have been working on developing a silica standard since 1997, a beryllium standard since 2000, and a standard on walking and working surfaces since 2003. We found that the time it takes OSHA to develop and issue standards varied over the 30-year period and by the type of standard. First, as shown in table 1, it took OSHA about 70 percent longer, on average, to finalize standards in the 1990s than it took during the 1980s, and about 30 percent longer than during the 2000s. While we were not able to determine the reason for this through our analysis, it demonstrates that there is no clear trend of OSHA developing and issuing standards more or less quickly over time.
Second, we found that it took OSHA longer to develop and issue safety standards than health standards—an average of about 8 years, 6 months for safety standards compared with about 6 years, 4 months for health standards—even though several experts to whom we spoke stated that health standards are more difficult for OSHA to issue than safety standards (see figs. 1 and 2 for a depiction of the timelines for safety and health standards issued between 1981 and 2010). Part of this difference may be explained by the fact that a larger portion of the health standards (6 of 23, compared with only 3 of 35 safety standards) were standards for which Congress or the courts articulated time frames for their issuance or development. Experts and agency officials frequently cited the increased number of procedural requirements established since 1980, shifting priorities, and the relatively high standard of judicial review required for OSHA standards as factors that lengthen OSHA’s time frames for developing and issuing standards. In addition to these primary factors, several of the experts and agency officials also noted two secondary factors affecting the standard-setting process: significant data challenges and an institutional apprehension about setting standards in the wake of adverse court decisions. We have characterized these as secondary factors because they are both related to the three primary factors. Experts and agency officials indicated that the increased number of procedural requirements affects standard-setting time frames because of the complex requirements for OSHA to demonstrate the need for standards. Experts and agency officials named a variety of statutes and executive orders that have imposed an increasing number of procedural requirements on OSHA since 1980. The process for developing and issuing standards is complex and directed by multiple procedural requirements. 
According to Labor staff, agency consideration of a new standard can be the result of information OSHA receives from stakeholder petitions; occupational safety and health entities, such as the National Institute for Occupational Safety and Health (NIOSH) and the U.S. Chemical Safety and Hazard Investigation Board; OSHA’s enforcement efforts; or staff research (see fig. 3). To publicly signal OSHA’s intent to pursue development of a new safety or health standard, OSHA typically publishes a Request for Information or an Advance Notice of Proposed Rulemaking on the topic in the Federal Register. In this report, we refer to these events as “initiation.” OSHA also signals the beginning of standard-setting efforts by placing the issue on its regulatory agenda. However, OSHA can stop the standard-setting process either informally—by ceasing to actively work on the standard—or through a public announcement. The process for developing OSHA standards varies, but the typical process involves multiple steps. After OSHA initiates a standard-setting effort, staff typically schedule meetings with stakeholders—employer groups, worker groups, and other interested parties—to solicit feedback and discuss issues related to the potential standard, including its potential cost to employers. When OSHA performs the economic feasibility analysis, it concludes that a standard is economically feasible if the affected industry or industries will maintain long-term profitability and competitiveness. To do this, staff and contractors, by analyzing information they collect when visiting worksites, must assess the extent to which employers in the affected industries can afford to implement the required controls.
In addition to the site visits, OSHA staff sometimes conduct industry-wide surveys to determine baseline practices and collect other relevant information needed for the technological and economic feasibility analyses. According to OSHA officials, the process of developing a survey and having it approved by OMB takes a minimum of 1 year. In addition to the feasibility analyses, OSHA staff generally must also conduct economic analyses. First, OSHA must assess the costs and benefits of significant standards as required by Executive Order 12866. Second, under the Small Business Regulatory Enforcement Fairness Act of 1996, if OSHA determines that a potential standard would have a significant economic impact on a substantial number of small entities, such as businesses, it is one of three federal agencies that must initiate a panel process that seeks and considers input from representatives of the affected small businesses. The small business panel process takes several months of work that many other federal regulatory agencies do not have to complete in order to issue regulations. Agency officials told us they want to consult with small businesses, but that the provisions laid out in the requirement make it too formal a process and are duplicative of the public hearings they hold after publishing the proposed rule. Finally, according to OMB guidelines, if a potential standard is projected to have an economic impact of more than $500 million, OSHA must initiate a peer review of the underlying scientific analyses. After completing the above steps, OSHA submits the preamble and text of the potential standard to OMB for review. OSHA then publishes a Notice of Proposed Rulemaking in the Federal Register to alert the public that OSHA intends to issue a new final standard and to invite interested parties to comment on the proposed standard.
Although OSHA is only required under the OSH Act to hold public hearings upon request, as a general practice, officials told us that OSHA holds such hearings and has issued regulations governing its hearing procedures. An administrative law judge presides over the hearings, and stakeholders have the opportunity to submit evidence to support their views on specific provisions of the proposed standards. The administrative law judge may also permit cross-examination by stakeholders or OSHA attorneys to bolster or challenge testimony presented during the hearing. Finally, stakeholders can submit data and other written documents subsequent to the hearing that OSHA must consider when crafting the final standard. Executive Order 12866 requires that OMB review all significant regulatory actions prior to their publication in the Federal Register. The executive order generally limits this review period to a maximum of 90 days; however, this period may be extended on a one-time basis for up to 30 days upon written approval of the OMB Director, or indefinitely at the request of the head of the rulemaking agency. One example of these procedural requirements is the small business panels, which agency officials estimated add about 8 months to the standard-setting process. According to agency officials and experts, OSHA’s priorities may change as a result of changes within OSHA, Labor, Congress, or the presidential administration. During the 30-year period covered by our review, administrations have alternately favored and resisted the development of new federal regulations or revisions of existing regulations. For example, officials told us that Assistant Secretaries typically serve for about 3 years, and that new appointees tend to change the agency’s priorities. Some agency officials and experts told us that, regardless of the agency leadership’s motivation for changes in priority, these changes often cause delays in the process of setting standards.
Further, officials told us that, ultimately, political appointees make decisions about what standards, if any, to pursue based on their goals and the agency’s resources. Other experts described instances in which changes in the agency’s standard-setting priorities affected the process. One example some cited was OSHA’s efforts to develop the ergonomics standard. OSHA worked for several years in the 1990s to develop a proposed rule on ergonomics to address workers’ exposure to risk factors leading to musculoskeletal disorders. After being in the preproposal stage through much of the 1990s, there was interest in the late 1990s for OSHA to publish a proposed rule, and OSHA issued a final standard just 1 year after publishing the proposed rule. Several experts and agency officials noted that, in order to develop the rule so quickly, the vast majority of OSHA’s standard-setting resources were focused on this rulemaking effort, taking attention away from several standards that previously had been a priority. Agency officials told us, for example, that work on this standard used nearly 50 full-time staff in OSHA’s standards office, half the staff economists, and 7 or 8 attorneys, compared with the more typical 5 total staff assigned to develop a new standard. The standard of judicial review that applies to OSHA standards if they are challenged in court also affects OSHA’s time frames because it requires more robust research and analysis, according to some experts and agency officials. OSHA standards are subject to a different standard of judicial review than most other federal regulatory agencies’ regulations. 
Instead of the arbitrary and capricious test provided for under the APA, the OSH Act directs courts to review OSHA’s standards using a more stringent legal standard: it provides that a standard shall be upheld if supported by “substantial evidence in the record considered as a whole.” This more stringent standard requires a higher level of scrutiny by the courts and, therefore, requires OSHA staff to perform more extensive research and analysis to support a new standard. For example, OSHA officials explained that the substantial evidence standard requires that OSHA staff conduct a large volume of detailed research in order to understand all industrial processes that involve the hazard being regulated and to ensure that a given hazard control would be feasible for each process. OSHA officials and experts discussed two additional factors that cause OSHA officials to perform an extensive amount of work in developing standards, which are related to the factors described above. First, the requirements to demonstrate the need for or feasibility of a standard contribute to substantial challenges to attaining information required for setting standards. They cited court decisions interpreting the OSH Act’s requirements as one of the reasons they must rigorously support the need for and feasibility of standards. For example, in 1980, the Supreme Court held that before it can issue a standard, OSHA must determine that the standard is necessary to remedy a “significant risk” of material health impairment among workers. As a result of this decision, OSHA generally conducts quantitative risk assessments for each health standard, which it must ensure are supported by substantial evidence. This decision essentially established a standard of medical and scientific certainty and has resulted in OSHA staff having to spend an inordinate amount of effort gathering data to support the need for a standard. OSHA’s standard-setting process has been significantly influenced by court decisions interpreting statutory requirements.
A key example is the 1980 “benzene decision,” in which the Supreme Court invalidated an OSHA standard that set a new exposure limit for benzene because OSHA failed to make a determination that benzene posed a “significant risk” of material health impairment under workplace conditions permitted by the current standard. Another example is a 1992 decision in which the U.S. Court of Appeals for the Eleventh Circuit struck down an OSHA health standard that would have set or updated the permissible exposure limit (PEL) for over 400 air contaminants. In that case, the court found that OSHA had not adequately demonstrated that current exposure to each hazard posed significant risk, or that each standard reduced that risk to the extent feasible. Labor officials told us that the court’s decision discouraged them from trying to expedite the standard-setting process by combining many standards into one rulemaking effort. Several experts with whom we spoke observed that such adverse court decisions have contributed to an institutional culture of trying to make OSHA standards impervious to future adverse decisions. These experts cited the threat of litigation as a disincentive to issuing standards. In contrast, agency officials commented that while OSHA tries to avoid lawsuits that might ultimately invalidate a standard, in general OSHA does not try to make a standard “bulletproof.” Agency officials noted the agency is frequently sued. OSHA has not issued any emergency temporary standards in nearly 30 years, citing, among other reasons, legal and logistical challenges. Section 6(c) of the OSH Act authorizes OSHA to issue these standards without following the typical standard-setting process if two legal requirements are met.
The Secretary of Labor must determine that: (1) workers are exposed to grave danger from exposure to substances or agents determined to be toxic or physically harmful, or from new hazards, and (2) an emergency temporary standard is necessary to protect workers from that danger. An emergency temporary standard becomes effective immediately upon publication in the Federal Register and must be replaced within 6 months by a permanent standard issued using the process specified in section 6(b). OSHA officials told us that meeting the statutory requirements and issuing a permanent standard within the 6-month time frame has proven difficult. Furthermore, OSHA’s emergency temporary standards have received close scrutiny by federal courts, whose decisions have characterized OSHA’s emergency temporary standard authority as an extraordinary power to be used only in limited situations. OSHA officials noted that the emergency temporary standard authority remains available, but the legal requirements to issue such a standard are difficult to meet. OSHA issued nine emergency temporary standards between 1971, when the agency was established, and 1983, and none since that year. Five of those nine emergency temporary standards were either stayed or invalidated, at least in part, by federal courts. For OSHA to satisfy the first of the OSH Act’s two requirements for issuing an emergency temporary standard, the agency must determine that workers will be exposed to grave danger during the time an emergency temporary standard is in effect. Establishing sufficient evidence of grave danger to withstand a court challenge can be difficult, even for substances whose hazards are well-known, such as asbestos. In 1983, OSHA issued an emergency temporary standard lowering the PEL for asbestos, which was subsequently challenged in federal court by representatives of the asbestos industry.
The court held that OSHA failed to show sufficient evidence that workers faced grave danger from exposure under current limits for the 6 months the emergency temporary standard would be in effect. OSHA had estimated, based on mathematical projections from long-term epidemiological studies, that during the 6 months the emergency temporary standard would be in effect, it could prevent at least 80 eventual asbestos-related deaths. However, the court found these projections too uncertain to establish a grave risk over a 6-month period and noted that the type of analysis OSHA used merited the public scrutiny of the notice and comment standard-setting process. OSHA has also found it challenging to meet the second OSH Act requirement: establishing that an emergency temporary standard is necessary to protect workers from the grave danger. In the asbestos case, the court found that OSHA was on its way to issuing a permanent standard within a year, already had the authority to conduct the education activities the emergency temporary standard contained, and could achieve many of the same benefits by increasing enforcement of the existing standard. The court, therefore, invalidated the emergency temporary asbestos standard because OSHA failed to meet both of the OSH Act’s requirements. OSHA officials cited diacetyl, a food flavoring ingredient, as a recent example of a hazardous substance for which the OSH Act’s second requirement might have been difficult to meet if the agency had chosen to pursue an emergency temporary standard. In 2006, the agency was urged to issue an emergency temporary standard for diacetyl after investigations showed its association with severe, irreversible lung disease among workers in microwave popcorn factories. OSHA officials told us they could likely have established that diacetyl exposure under then-current workplace conditions presented grave danger to workers in the near term. 
These officials noted, however, that because manufacturers responded quickly after diacetyl’s danger became clear, OSHA had less evidence that an emergency temporary standard was necessary. For example, they noted that manufacturers responded with a combination of measures including improved ventilation and housekeeping, reducing the concentration of diacetyl used, and substituting other ingredients. In addition to the legal requirements, OSHA has found that issuing an emergency temporary standard presents a logistical challenge. OSHA’s emergency temporary standards are effective on the date of publication in the Federal Register, but they must be replaced within 6 months by a permanent standard. This means OSHA must compile the same evidence required for the typical standard-setting process—which, as noted above, involves engaging with stakeholders and can take many years—in this abbreviated time frame. OSHA officials noted that the Congress intended this emergency temporary standard-setting authority to be used under very limited circumstances. OSHA has not issued an emergency temporary standard since 1983, despite many requests that it do so. Labor unions and public health and other advocacy organizations continue to petition OSHA to issue emergency temporary standards to address a variety of workplace hazards. According to OSHA records, it has received 23 petitions to issue emergency temporary standards on hazardous chemicals, such as formaldehyde, and also for safety hazards such as shock or injury from unsecured equipment. One petition, submitted in September 2011, urges OSHA to issue an emergency temporary standard to protect workers from potentially fatal exposure to heat. Although OSHA has generally denied these petitions, officials told us the agency considers whether to issue an emergency temporary standard and takes the information into account when setting its priorities for permanent standards.
OSHA uses enforcement and education as alternatives to issuing emergency temporary standards to respond relatively quickly to urgent workplace hazards. OSHA officials consider their enforcement and education activities complementary: a high-profile citation or enforcement initiative on an urgent hazard generates attention that can improve worker safety industry-wide. OSHA may cite employers for failing to adequately protect workers from a specific workplace hazard even if it has not set a standard on that hazard. Under section 5(a)(1) of the OSH Act, known as the general duty clause, OSHA has the authority to issue citations to employers even in the absence of a specific standard under certain circumstances. The general duty clause requires employers to provide a workplace free from recognized hazards that are causing, or are likely to cause, death or serious physical harm to their employees. OSHA relied on the general duty clause when it cited Walmart for inadequate crowd management in the 2008 trampling death of a worker. OSHA’s investigation found that the company failed to protect its employees from the known risks of being crushed or suffocated by a large unmanaged crowd—in this case, about 2,000 shoppers surging into the store for a holiday sale. To cite an employer under the general duty clause, OSHA officials told us they must, among other things, have evidence that the hazard is “recognized” in the industry and that the employer failed to take reasonable protective measures. According to OSHA officials, using the general duty clause requires significant agency resources and is therefore not always a viable option, for example, when OSHA cannot prove an employer knows the hazard exists or when a hazard is just emerging. Some of OSHA’s standards require general protective measures that are sufficiently broad to cover a variety of hazardous substances or practices.
Such standards may be the basis for enforcement actions regarding urgent hazards that are not the subject of a specific standard. OSHA officials explained that not every conceivable workplace hazard can be the subject of its own standard. The agency has issued specific exposure limits for some hazardous substances, such as formaldehyde, but indicated it would be impossible to test and establish specific exposure limits for all chemicals present in the modern workplace. OSHA’s general standards include, among others, requirements for employers to follow protective housekeeping practices, provide respiratory protection under certain conditions, and inform workers about hazardous chemicals they are exposed to on the job. OSHA uses education to promote voluntary protective measures against urgent hazards along with its enforcement and standard-setting activities. Standards and enforcement are critical parts of OSHA’s education activities: standards inform employers about their responsibilities, and enforcement initiatives raise awareness of urgent hazards. OSHA officials believe high-profile citations serve to focus attention throughout the relevant industry and can create a ripple effect of improved worker protection. In addition to setting standards, OSHA offers on-site consultations and publishes health and safety information to inform employers and workers about urgent hazards. If its inspectors discover a particular hazard, OSHA may send letters to all employers where the hazard is likely to be present to inform them about the hazard and their responsibility to protect their employees. OSHA officials also use education to improve safety in the near term while the agency compiles the information necessary to develop a standard. For example, OSHA decided not to issue an emergency temporary standard on diacetyl in part because, as it gathered evidence to support the standard, employers implemented changes to improve worker safety. 
As evidence mounts that other ingredients in food flavorings may be hazardous, OSHA is gathering information but has not yet published a proposed standard on diacetyl. OSHA has, in the meantime, published educational documents such as alerts and information bulletins for employers on diacetyl and flavorings in general, describing protective measures, compliance assistance programs, and employer responsibilities under the OSH Act and existing OSHA standards. The agency has also developed material for workers, giving them the information they need to determine when they may be exposed to diacetyl or similar substances and the types of protection they need. OSHA’s education efforts also address other hazards for which it has received petitions to issue emergency temporary standards. For example, OSHA officials told us they are addressing the risks of exposure to heat primarily through education, along with targeted enforcement in cases where workers are known to be most at risk. OSHA’s education efforts on this hazard include an initiative intended to reach and educate agricultural workers through training materials designed to be culturally appropriate and accessible, including a train-the-trainer approach for wide distribution. These training materials were supplemented by public service radio announcements intended to reach workers at risk of heat-related illness. Although the rulemaking experiences of two other federal agencies shed some light on OSHA’s challenges, their statutory framework and resources differ too markedly for them to be models for OSHA’s standard-setting process. Other regulatory agencies may also face challenges similar to OSHA’s. For example, as GAO has previously reported, EPA has faced difficulties regulating under the Toxic Substances Control Act of 1976. Some of these differences in statutory frameworks and resources may facilitate rulemaking efforts at other agencies.
For example, EPA is directed to regulate specified air pollutants and review its existing regulations within specific time frames under section 112 of the Clean Air Act, and MSHA benefits from a narrower scope of authority than OSHA and has more specialized expertise as a result of its more limited jurisdiction. Similar to OSHA, EPA’s Office of Air and Radiation regulates a wide range of hazards across diverse industries to protect the public health. This office implements the Clean Air Act, including section 112, which requires EPA to regulate certain sources of air pollution and specifies the substances to be controlled. For example, under section 112, EPA must establish standards for sources of 187 specific hazardous air pollutants. EPA officials told us that this provision gave the agency clear requirements and statutory deadlines for regulating hazardous air pollutants, which it previously lacked. In contrast, some experts and agency officials we spoke with identified OSHA’s relatively broad discretion to set and change its regulatory agenda as a contributing factor to the length of time it takes OSHA to issue standards. Even with this relatively specific statutory mandate, EPA has faced challenges implementing its section 112 mandate, such as insufficient funding and court-imposed deadlines that make it difficult for the agency itself to implement its own agenda. EPA also has a statutory mandate to periodically review the standards issued under section 112. For example, section 112 requires that EPA set technology-based standards for stationary sources of hazardous air pollutants, and further requires that EPA review these standards at least every 8 years and revise them, as necessary, taking into account developments in practices, processes, and control technologies. In contrast, the OSH Act does not specify when OSHA is to revise its standards.
OSHA’s attempt to update its standards efficiently—by lowering the PELs for 212 air contaminants in one rulemaking—was struck down by a federal court. The court held that OSHA failed to show adequate evidence that each individual substance presented a significant risk at the existing exposure limit, or that the lower limit would reduce the risk to workers to the extent feasible. OSHA and Labor officials noted that, because the agency lacks an efficient update process, many of its standards lag behind advances in technology. Under the periodic review provision of section 112 (42 U.S.C. § 7412(d)(6)), EPA is not required to make a new determination of risks to human health or the environment before revising its standards. In contrast, OSHA must determine that significant risks to workers are present under current conditions before it can establish or change existing standards. OSHA has had to perform a specific risk assessment for every new toxic agent for which it intends to set a PEL. MSHA’s mission is more focused than OSHA’s because its authority is limited to one industry and it can target its regulatory resources more easily. In addition, the Federal Mine Safety and Health Act of 1977 requires that MSHA inspect each mine in the United States at least two times a year, which facilitates its regulatory work. Officials at MSHA noted that both this frequent on-site presence and the relatively homogenous industry help agency staff maintain a current knowledge base. These officials contrasted this with the vast array of workplaces and types of industries OSHA oversees. Officials with OSHA and Labor noted that OSHA’s scope of authority is so large that it cannot inspect more than a fraction of workplaces in any given year. As a result, OSHA and Labor officials told us they can call upon inspectors when researching a standard but must often supplement the agency’s inside knowledge by conducting site visits using OSHA staff or contractors. MSHA’s legal framework may also present fewer challenges to standard setting than OSHA’s.
First, MSHA standards are subject to the arbitrary and capricious standard of review, unlike OSHA standards, which are reviewed under the generally more stringent substantial evidence standard. Second, according to MSHA officials, the agency has met the statutory requirements for the five emergency temporary standards it has issued since 1987, and no legal challenges to these standards were filed. Similar to OSHA’s authority to issue emergency temporary standards, MSHA has statutory authority to issue “an emergency temporary mandatory health or safety standard” without following the APA’s notice and comment rulemaking procedures if the Secretary of Labor determines that (1) miners are exposed to grave danger from exposure to substances or agents determined to be toxic or physically harmful, or to other hazards, and (2) such a standard is necessary to protect miners from such danger. MSHA’s most recent emergency temporary standard required underground bituminous coal mine operators to increase the incombustible content of rock, coal, and other dust, in order to address the risk of explosion posed by such dust. Both OSHA and MSHA supplement their employees’ knowledge by calling upon the expertise at NIOSH, with MSHA benefiting from a specialized research group within NIOSH focused on the mining industry. According to officials with both NIOSH and OSHA, coordination between the two has varied over time and has improved significantly in recent years. For example, in 2011, NIOSH and OSHA adopted a Memorandum of Understanding that provides OSHA with access to specified NIOSH data on the health hazards of diacetyl and allows OSHA to coordinate with NIOSH in preparing a risk assessment to support the development of a new diacetyl standard. To fully leverage expertise at other federal agencies, experts and agency officials suggest improving interagency coordination. 
Specifically, they indicated that OSHA has not fully leveraged available expertise at other federal agencies, especially NIOSH, when developing and issuing its standards. As mentioned previously, NIOSH conducts research and makes recommendations on occupational safety and health, and it was created at the same time as OSHA by the OSH Act. OSHA has a number of staff with subject matter expertise relevant to standard setting, including industrial hygienists and scientists, but the agency does not always take advantage of the expertise and data at NIOSH on occupational hazards. One expert noted that NIOSH is uniquely positioned as a primary research institution to help OSHA develop standards using EPA-produced data and analysis on chemical hazards. OSHA officials said their agency’s staff consider NIOSH’s input on an ad hoc basis, but do not routinely work closely with NIOSH staff to analyze risks of occupational hazards. An OSHA official cited one case in which OSHA staff worked closely with NIOSH staff to prepare the technological feasibility analysis for a proposed silica standard, drawing on an extensive body of work on dust control technology by NIOSH engineers. In addition, officials described other cases of collaboration between the two agencies during OSHA’s process of visiting worksites. However, NIOSH officials told us that this type of coordination has been more common recently than it was in the past, when the two agencies performed separate risk assessments for hazards, such as hexavalent chromium. OSHA officials stated that collaborating with NIOSH on risk assessments could reduce the time it takes to develop a standard by several months. OSHA and NIOSH have coordinated on a number of OSHA standards projects; currently, the two agencies have a Memorandum of Understanding stipulating that NIOSH will perform the risk assessment for the OSHA standard on diacetyl. 
However, some experts and officials at both agencies noted that collaborating in a more systematic way could facilitate OSHA’s standard-setting process. To ensure that OSHA’s standards keep pace with changes in technology and best practices, experts suggested the agency be allowed to more easily adopt industry voluntary consensus standards. According to OSHA officials, many OSHA standards incorporate or reference outdated consensus standards, which results in challenges for employers in complying with the standards and OSHA in enforcing them. Officials also said that the majority of OSHA’s health standards were adopted from existing federal standards—originally adopted under the Walsh-Healy Act—during the agency’s first 2 years using section 6(a) of the OSH Act, which directed OSHA to set standards without following the typical section 6(b) standard-setting procedures or the APA. Although current at the time, many industry consensus standards have since been updated to reflect advancements in technology and science. However, according to OSHA, most of OSHA’s standards have not been similarly updated, so employers following current industry consensus standards may be out of compliance with OSHA’s standards. As a result, some employers may be discouraged from updating processes or technology at their worksites in order to avoid OSHA citations. One expert said, and OSHA reported, that this could leave workers at these worksites exposed to hazards that are insufficiently addressed by OSHA standards that are based on out-of-date technology or processes. OSHA has reported that these types of standards are challenging because their inspectors must spend time addressing them during worksite inspections. Additionally, officials told us that issuing citations to employers that are following the most up-to-date industry consensus standards reflects poorly on the agency. 
OSHA has attempted to update some of its standards to incorporate advances in technology and science, but the lengthy standard-setting process presents significant challenges for updating them. In accordance with the requirements in the OSH Act and the National Technology Transfer and Advancement Act, when updating its standards, OSHA considers using voluntary consensus standards. However, OSHA officials told us that, since standards developing organizations typically do not have to meet scientific requirements in developing voluntary standards, OSHA’s ability to base its standards on voluntary consensus standards is limited because staff must still perform a full quantitative risk assessment for new standards. Since 2004, OSHA has been engaged in an effort to update several of its standards using industry consensus standards, which officials told us started by first identifying standards that would be well-suited to more streamlined rulemaking approaches, such as issuing a direct final rule. For example, they said they chose to update the standard on personal protective equipment first because they expected employers would be amenable to the update, as changes would be consistent with the current industry consensus standard. To address the problem of standards based on outdated consensus standards, experts suggested that Congress pass new legislation that would allow OSHA, through a single rulemaking effort, to revise standards for a group of health hazards based on current industry voluntary consensus standards or the Threshold Limit Values developed by the American Conference of Governmental Industrial Hygienists. In 1989, OSHA attempted to revise the PELs for over 200 air contaminants by combining them into a single rulemaking effort, but the rule was invalidated by the court for failing to follow the OSH Act requirements for each hazard.
To save OSHA time, experts specified that any new law to this effect should contain a provision similar to the one in the OSH Act that excused the agency during its first 2 years from following the standard-setting provisions of section 6(b) of the OSH Act or the APA. One potential disadvantage of this proposal is that OSHA may need to do a substantial amount of independent scientific research to ensure that consensus standards are based on sufficient scientific evidence. While such a law, if enacted, could exempt OSHA from conducting this research, an abbreviated regulatory process could also result in standards that fail to reflect relevant stakeholder concerns, such as an imposition of unnecessarily burdensome requirements on employers. For example, one expert stated that, while following the APA process takes time for regulatory agencies, it leads to higher quality standards and ensures that the basis for agency action is clear and defensible. Also, while this change could help ensure that existing OSHA standards are kept up to date, it could divert resources away from efforts to set new standards. To minimize the time it takes OSHA to develop and issue safety or health standards, experts and agency officials suggested that statutory deadlines for issuing occupational safety and health standards be imposed by Congress and enforced by the courts. OSHA officials indicated that it can be difficult to prioritize standards due to the agency’s numerous and sometimes competing goals. In the past, having a statutory deadline, combined with relief from procedural requirements, resulted in OSHA issuing standards more quickly. For example, the Needlestick Safety and Prevention Act directed OSHA to make specified revisions to its bloodborne pathogens standard within 6 months and exempted the agency from the typical procedural requirements under section 6(b) of the OSH Act or the APA. 
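The kind of grouped comparison such a rulemaking would rest on can be sketched in a few lines. The substance names and limit values below are hypothetical placeholders, not actual OSHA PELs or ACGIH values; the sketch only illustrates flagging standards whose consensus limit has moved below the adopted regulatory limit.

```python
# Hypothetical exposure limits in ppm: adopted regulatory PEL vs. the
# current industry consensus value for the same (made-up) substance.
pels = {"substance_a": 100.0, "substance_b": 5.0, "substance_c": 0.75}
consensus = {"substance_a": 20.0, "substance_b": 5.0, "substance_c": 0.1}

# Flag substances whose consensus limit is now stricter than the PEL --
# the candidates a single grouped update rulemaking would address.
outdated = {
    name: (pel, consensus[name])
    for name, pel in pels.items()
    if consensus[name] < pel
}

for name in sorted(outdated):
    pel, tlv = outdated[name]
    print(f"{name}: PEL {pel} ppm vs. consensus {tlv} ppm")
```

In this illustration, two of the three substances would be flagged for updating, while the substance whose consensus value still matches the PEL would be left alone.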
OSHA had already spent some time developing the standard before the law was passed, so it was able to complete the revised standard within the required time frame. Including the time spent on developing the standard before passage of the Act, OSHA completed the revised standard in less than 3 years. Another alternative to the full rulemaking process is for an agency to issue an interim final rule, which is immediately effective as a final rule but still allows for subsequent public comment. However, similar to one of the disadvantages described above, some legal scholars have noted that curtailing the current rulemaking process required by the APA may result in fewer opportunities for public input and possibly decrease the quality of the standard. Also, officials from MSHA told us that statutory deadlines make its priorities clear, but this is sometimes to the detriment of other issues that must be set aside in the meantime. Although a more streamlined approach could reduce opportunities for stakeholder comments and minimize agency flexibility, OSHA has used alternative rulemaking procedures in the past to issue standards for which officials perceive broad industry support. Experts and agency officials suggested that OSHA’s substantial evidence standard of judicial review be replaced with the arbitrary and capricious standard, which would be more consistent with the standard of review applied to other federal regulatory agencies.
As the court stated in the case involving PELs for 428 air contaminants, under the substantial evidence test, “[we] must take a ‘harder look’ at OSHA’s action than we would if we were reviewing the action under the more deferential arbitrary and capricious standard applicable to agencies governed by the Administrative Procedure Act.” As a result, OSHA officials said they spend a significant amount of time collecting evidence to ensure that the agency’s standards can withstand challenge under the substantial evidence standard of judicial review and to satisfy procedural requirements for setting standards. One expert said he understood that OSHA’s more stringent standard of judicial review was paired with informal rulemaking procedures as a congressional compromise. According to the author of a 1999 law review article (Mark Seidenfeld, “Bending the Rules: Flexible Regulation and Constraints on Agency Discretion,” Administrative Law Review, spring 1999), one justification for judicial review of agency rulemaking is when there is a genuine concern about the power many agencies have in the regulatory process. If Congress has similar concerns about OSHA, it may be preferable to keep the current standard of review. However, the Administrative Conference of the United States has recommended that Congress amend laws that mandate use of the substantial evidence standard because it can be unnecessarily burdensome for the agency or confusing because it has been inconsistently applied by the courts. As a result, changing the designation for the standard of judicial review to “arbitrary and capricious” could reduce the agency’s evidentiary burden. Experts suggested that OSHA minimize on-site visits by using surveys or basing its analyses on industry best practices, which could reduce the time, expense, and need for industry cooperation in conducting economic and technological feasibility studies.
Primarily because OSHA has broad authority to regulate occupational hazards in nearly all private industries, the technological and economic feasibility analyses required by the OSH Act entail an extensive amount of time and resources. OSHA must conduct its feasibility analyses on an industry-by-industry basis, which requires numerous site visits—an activity that is time-consuming and largely dependent on industry cooperation. According to agency officials, in many cases, OSHA hires contractors to gather information from worksites that will support standards’ feasibility analyses. Two experts suggested OSHA could streamline its economic and technological feasibility analyses by surveying worksites rather than visiting them. However, one limitation to this method is that, according to OSHA officials, in-person site visits are imperative for gathering sufficient data in support of most health standards. Specifically, officials told us that to fully understand the industrial processes and application of a chemical to be regulated, OSHA staff or contractors must be able to observe the work being performed and ask questions of workers at the site. In addition, the only way for OSHA to know about ambient chemical levels is to collect on-site air samples all day long. In light of this limitation, this method may be more appropriate for safety hazards. The other method experts suggested is allowing OSHA to base economic and technological feasibility assessments on industry best practices, which one expert noted would require a statutory change. For example, OSHA could base these analyses on the fact that a minimum percentage of workplaces in a particular industry use technology or methods that decrease exposure to hazards. However, the broad scope of OSHA’s authority would still result in this being a substantial amount of work at the outset, as OSHA would still be required to determine feasibility on an industry-by-industry basis. 
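The exposure monitoring described above typically feeds a standard industrial-hygiene calculation: full-shift air samples are combined into an 8-hour time-weighted average (TWA) and compared against the exposure limit. A minimal sketch of that arithmetic follows; the sample concentrations, durations, and the limit value are invented for illustration, not figures from the report.

```python
def eight_hour_twa(samples):
    """Compute the 8-hour time-weighted average from (concentration, hours) pairs.

    Uses the standard formula TWA = sum(C_i * T_i) / 8, where any unsampled
    time in the 8-hour shift is assumed to contribute zero exposure.
    """
    exposure = sum(conc * hours for conc, hours in samples)
    return exposure / 8.0

# Illustrative full-shift samples: (concentration in ppm, duration in hours).
samples = [(0.9, 3.0), (0.4, 2.5), (1.2, 2.5)]

twa = eight_hour_twa(samples)
pel = 1.0  # hypothetical exposure limit in ppm, for illustration only

print(f"8-hour TWA: {twa:.3f} ppm; exceeds limit: {twa > pel}")
```

The same per-sample structure is why, as officials noted, the underlying measurements have to come from sampling across the whole shift rather than a single spot reading.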
Experts suggested that OSHA develop a priority-setting process for addressing hazards. GAO has reported that, by developing strategies such as aligning agencywide objectives, federal agencies can demonstrate a commitment to a course of action. Similarly, having a priority-setting process could lead to improved program results. Currently, however, OSHA has no process or guidance to use in setting priorities, as officials told us they do not have a document that explains how priorities are or should be set. OSHA officials also said that ideas for which hazards to regulate come from a number of sources, including petitions from stakeholders, information from NIOSH, OSHA’s enforcement efforts, recommendations from the Chemical Safety Board, and staff research. While staff in OSHA’s standards office use this information to make recommendations to Labor’s Assistant Secretary for OSHA and the Deputy Secretary on which hazards to regulate, not all of their recommendations make it to the agency’s regulatory agenda, which is developed according to agency goals and resources. In addition, according to OSHA officials, decisions about which hazards to regulate guide OSHA standards activity for 6 months, the duration of the semiannual regulatory agenda. As a result, the ability of the managers of OSHA’s standards office to plan with certainty work beyond this 6-month time frame may be limited. One expert suggested that OSHA develop a priority-setting process that more directly involves stakeholders with expertise in occupational safety and health in recommending new standards. OSHA attempted such a process in 1994 when it initiated a formal priority planning process. However, the expert said that, after an established committee of experts identified a list of priority hazards, the political climate changed with a new Congress that was generally more critical of the role of executive agencies in developing new standards, and OSHA shifted its focus away from this initiative.
Nevertheless, this process allowed OSHA to articulate its highest priorities for addressing occupational hazards. Reestablishing a similar priority-setting process could have several benefits for OSHA, such as improving a sense of transparency among stakeholders and facilitating OSHA management’s ability to plan its staffing and budgetary needs. However, adopting such a process may not immediately address OSHA’s challenges in expeditiously setting standards because a process like this could take time and would require commitment from agency management. Setting occupational safety and health standards is one of OSHA’s primary methods for ensuring that workers are protected from occupational hazards, but OSHA faces a number of challenges in setting these standards promptly and efficiently. The additional procedural requirements established since 1980 by Congress and various executive orders have increased opportunities for stakeholder input in the regulatory process and required agencies to evaluate and explain the need for regulations, but they have also resulted in a more protracted rulemaking process for OSHA and other regulatory agencies. The process for developing new standards for previously unregulated occupational hazards and new hazards that emerge is a lengthy one and can result in periods when there are insufficient protections for workers. Nevertheless, any streamlining of the current process must guarantee sufficient stakeholder input to ensure that the quality of standards does not suffer. In addition, ideas for changes to the regulatory process must weigh the benefits of addressing hazards more quickly against a potential increase in the regulatory burden to be imposed on the regulated community. Most methods for streamlining that have been suggested by experts and agency officials are largely outside of OSHA’s authority because many procedural requirements are established by federal statute or executive order. 
However, OSHA can coordinate more routinely with NIOSH on risk assessments and other analyses required to support the need for standards, saving OSHA time and expense. NIOSH’s and OSHA’s current efforts to coordinate on the development of a new standard, which officials and staff from both agencies support, provide a useful template for increased and regular coordination on similar efforts. To enhance collaboration and streamline the development of OSHA’s occupational safety and health standards, we recommend that the Secretary of Labor and the Secretary of the Department of Health and Human Services instruct the Assistant Secretary of Labor for Occupational Safety and Health and the Director of the National Institute for Occupational Safety and Health to develop a more formal means of collaboration between the two agencies. Specifically, the two agencies should establish a more consistent and sustained relationship through a formal agreement, such as a Memorandum of Understanding, allowing OSHA to better leverage NIOSH’s capacity as a primary research institution when building the scientific record required for standard setting. We provided a draft of this report to the six agencies that assisted us in gathering information: Labor (OSHA and MSHA), Department of Health and Human Services (NIOSH), EPA, U.S. Chemical Safety and Hazard Investigation Board, OMB, and the Department of Commerce (National Institute of Standards and Technology). We received written comments from Labor and the Department of Health and Human Services; both sets of comments are reproduced in appendices III and IV, respectively. Both Labor’s Assistant Secretary for OSHA and the Department of Health and Human Services’ Assistant Secretary for Legislation agreed with GAO’s recommendation. They also both described the ways in which OSHA and NIOSH currently collaborate, each noting the expected benefits of maintaining collaboration through a formalized agreement.
Labor’s OSHA and MSHA, EPA, and the Department of Commerce also provided technical comments, which we incorporated in the report as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or moranr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. To determine how long it takes the Occupational Safety and Health Administration (OSHA) to develop and issue safety and health standards, we reviewed occupational safety and health standards and substantive updates to those standards. We selected standards that met two criteria: (1) they were published as a final rule between calendar years 1981 and 2010 and (2) OSHA identified each standard as significant. To identify our universe of standards for this analysis, we first conducted an electronic legal database search for final rules published by OSHA in the Federal Register between 1981 and 2010. We chose this time frame because it spans multiple executive administrations and changes in congressional leadership. Also, several statutes, executive orders, and key court decisions affecting OSHA’s standard-setting process became effective in or after 1980. We excluded from our review any rules that were not occupational safety or health standards, such as recordkeeping requirements or general administrative regulations, and any rules that were minor or technical amendments to existing standards. 
For this list, we included only standards for which OSHA's semiannual regulatory agenda or other evidence indicated that OSHA considered the standard to be important or a priority, including but not limited to standards that met the definition of "significant" under Executive Order 12866. For each standard, we identified the dates of three regulatory benchmarks—initiation, proposed rule, and final rule—and calculated the time elapsed between each benchmark to analyze trends. We confirmed with OSHA staff the accuracy of our selected benchmark dates and that the list of standards met our criteria. There are some limitations to this approach because the development of a standard may not have a clear beginning or end point. For example, OSHA may have begun work on a standard prior to its appearance on the regulatory agenda or the publication of a Request for Information or Advance Notice of Proposed Rulemaking in the Federal Register. Conversely, it is possible that although a standard appeared on the regulatory agenda, work did not begin on the standard until sometime later. According to OSHA officials, once development of a particular standard has begun, work may stop and start again due to various factors such as changing priorities. Furthermore, the date a final rule was published does not necessarily coincide with the date the rule took effect, which may be some time later. While our analysis will not reflect these distinctions, we selected these benchmarks to ensure consistency and maximize comparability across different standards. To identify the key factors affecting OSHA's time frames for issuing standards and ideas for improving OSHA's standard-setting process, we conducted semistructured interviews with current and former Labor staff, as well as occupational safety and health experts, and analyzed their responses. We identified these experts, who represented both workers and employers, through our own research and through recommendations from other experts.
The experts had direct experience with setting standards at OSHA, testified at past congressional hearings on occupational safety and health issues, or published written material on federal rulemaking. Finally, we reviewed relevant federal laws, regulations, executive orders, and other guidance and interviewed officials from the Office of Management and Budget to determine the required steps in the standard-setting process and how those requirements affect the time it takes OSHA to develop and issue standards. To identify alternatives to the typical standard-setting process available for OSHA to address urgent hazards, we reviewed relevant federal laws and interviewed current OSHA staff and attorneys from the Department of Labor’s Office of the Solicitor. We also interviewed experts identified as described above. We assessed the extent to which OSHA has used its authority to issue emergency temporary standards by analyzing a history of petitions for these standards provided to us by Labor staff. To determine whether rulemaking at other regulatory agencies offers insight into OSHA’s challenges with setting standards, we explored the regulatory process at three other federal regulatory agencies and offices. For these comparisons, we selected agencies with authority to issue regulations relating to public health or safety. We also included some agencies whose statutory frameworks were similar to OSHA’s and some whose statutory frameworks were different than OSHA’s. We based our selection of comparison agencies and offices on our interviews with experts, as well as a review of the literature, previous GAO work, and relevant federal laws. Using these criteria, we initially selected Labor’s Mine Safety and Health Administration (MSHA) and two offices of the Environmental Protection Agency (EPA): the Office of Pollution Prevention and Toxics and the Office of Air and Radiation. 
For the EPA offices, we specifically focused on their rulemaking experiences under section 6 of the Toxic Substances Control Act and section 112 of the Clean Air Act. However, after further review, we concluded that the Office of Pollution Prevention and Toxics did not offer insights for OSHA because of the office’s limited recent standard-setting experience and, as a result, we excluded the Toxic Substances Control Act from our review. Through a review of relevant federal laws and semistructured interviews with staff in EPA’s Office of Air and Radiation and at MSHA, we learned about challenges each agency faces when developing and issuing regulations and the factors that affect their time frames. Although states may also issue standards in the absence of an applicable federal standard or under an OSHA-approved plan, we did not look to these states to gain insight into OSHA’s challenges with setting standards. Based on our interviews with experts, and because rulemaking at the state level is governed by state law and is not subject to federal rulemaking procedural requirements, we determined that any comparisons between OSHA and states with respect to time frames for issuing standards would be inapt. We compiled the ideas for improving OSHA’s standard-setting process by analyzing statements from interviews with current and former agency officials and experts representing both workers and employers. The six ideas discussed in the report represent those most frequently mentioned that are not otherwise addressed by other parts of our report. We conducted this performance audit from February 2011 to April 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 2 presents a summary of federal rulemaking requirements that apply to OSHA standard setting. This table is not intended to be a complete list of all procedural requirements that govern rulemaking at OSHA or at other federal regulatory agencies. In addition, this table presents only a selected summary of the requirements; for the complete requirements contained in each source, refer directly to the cited source. In addition to the individual named above, Gretta L. Goodwin, Assistant Director; Sara Pelton, Analyst-in-Charge; and Anna Bonelli, Analyst-in-Charge, managed all aspects of this assignment; Suzanne Rubins and Sarah Newman made significant contributions to all phases of the work; Sarah Cornetto made substantial contributions by providing legal advice and assistance; Jean McSween provided assistance in designing the study; Ashley McCall provided assistance with occupational safety and health literature; Kate van Gelder and Susan Aschoff assisted in message and report development; James Bennett created the report's graphics; and Ashanta Williams, Lise Levie, and Daniel S. Meyer reviewed the report to check the facts presented.

Occupational safety and health standards are designed to help protect about 130 million public and private sector workers from hazards at more than 8 million U.S. worksites. Questions exist concerning how long it takes OSHA to issue its standards.
GAO was asked to examine: (1) the time OSHA takes to develop and issue safety and health standards and the key factors that affect these time frames, (2) alternatives to the typical standard-setting process available for OSHA to address urgent hazards, (3) whether other regulatory agencies' rulemaking offers insight into OSHA's challenges with setting standards, and (4) ideas from occupational safety and health experts and agency officials for improving OSHA's process. GAO analyzed standards issued by OSHA between 1981 and 2010, interviewed subject matter experts and agency officials at OSHA and two similar federal regulatory agencies and offices, and reviewed the standard-setting process at OSHA and the comparison agencies and offices. Between 1981 and 2010, the time it took the Department of Labor's Occupational Safety and Health Administration (OSHA) to develop and issue safety and health standards ranged widely, from 15 months to 19 years, and averaged more than 7 years. Experts and agency officials cited increased procedural requirements, shifting priorities, and a rigorous standard of judicial review as contributing to lengthy time frames for developing and issuing standards. For example, they said that a shift in OSHA's priorities toward one standard took attention away from several other standards that previously had been a priority. In addition to using the typical standard-setting process, OSHA can address urgent hazards by issuing emergency temporary standards, directing additional attention to enforcing relevant existing standards, and educating employers and workers about hazards. However, OSHA has not issued an emergency temporary standard since 1983 because it has found it difficult to compile the evidence necessary to meet the statutory requirements. Instead, OSHA focuses on enforcement and education when workers face urgent hazards.
For example, OSHA can enforce the general requirement of the Occupational Safety and Health Act of 1970 (OSH Act) that employers provide a workplace free from recognized hazards, as it did in 2009 when it cited a major retail employer after one of its workers was crushed to death by uncontrolled holiday crowds. To educate employers and workers, OSHA coordinates and funds on-site consultations and publishes information on matters as diverse as safe lifting techniques for nursing home workers and exposure to diacetyl, a flavoring ingredient used in microwave popcorn linked to lung disease among factory workers. Experiences of other federal agencies that regulate public or worker health hazards offer limited insight into the challenges OSHA faces in setting standards. For example, officials with the Environmental Protection Agency noted that certain Clean Air Act requirements to set and regularly review standards for specified air pollutants have facilitated that agency's standard-setting efforts. In contrast, the OSH Act does not require OSHA to periodically review and update its standards. Officials with the Mine Safety and Health Administration noted that its standard-setting process benefits from both the in-house knowledge of its inspectors, who inspect every mine at least twice yearly, and a dedicated mine safety research group within the National Institute for Occupational Safety and Health (NIOSH), a federal research agency that makes recommendations on occupational safety and health. OSHA must rely on time-consuming site visits for hazards information and has not consistently coordinated with NIOSH to engage that agency's expertise on occupational hazards. Experts and agency officials identified several ideas that could improve OSHA's standard-setting process.
While some of the changes, such as improving coordination with other agencies to leverage expertise, are within OSHA's authority, others call for significant procedural changes that would require amending existing laws. For example, some experts recommended a statutory change that would allow OSHA to revise a group of outdated health standards at the same time, using industry consensus standards as support rather than having to analyze each hazard individually. To streamline OSHA standards development, GAO recommends that OSHA and NIOSH more consistently collaborate on researching occupational hazards, so that OSHA can more effectively leverage NIOSH expertise in determining the needs for new standards and developing them. Both agencies agreed with the recommendation.
In 2011, Congress enacted the Budget Control Act of 2011 (BCA), which amended the Balanced Budget and Emergency Deficit Control Act of 1985 (BBEDCA), to impose spending limits (or caps) on discretionary spending for fiscal years 2012 through 2021. These initial caps were intended to reduce projected spending by about $1 trillion. Because additional legislation to reduce the deficit by at least another $1.2 trillion through fiscal year 2021 was not enacted, additional deficit reduction procedures were triggered under BCA. This included further annual downward adjustments to discretionary spending caps through fiscal year 2021. Congress and the President have since amended BBEDCA to allow for increased discretionary appropriations in certain years. Looking forward, nominal discretionary spending under the statutory caps grows in future years. However, the Congressional Budget Office projects that, when measured as a share of the economy, discretionary spending will be lower in 2021 than at any point in the last 50 years (see figure 1). As we have previously reported, state and local governments continue to face their own fiscal challenges. Our simulations of long-term fiscal trends for state and local governments suggest that they could continue to face gaps between revenue and spending during the next several decades that would require substantial policy changes to address. This suggests that the federal government's fiscal challenges cannot be adequately met over the long term by shifting spending to state and local governments. Our analysis of federal budget data shows that the overall amount of new discretionary resources (or budget authority) declined by roughly 12 percent from fiscal year 2010 to 2015. The amount of actual reductions or increases in discretionary resources for individual agencies and programs is determined through the annual appropriations process, in which policymakers make choices between competing national priorities.
The agencies and programs we selected for review all experienced a decline of at least 9 percent in newly appropriated discretionary resources from fiscal year 2010 to 2015. Figure 2 compares declines at select agencies with the changes in newly appropriated discretionary resources for large cabinet-level departments and for the federal government overall during this period. A number of other factors beyond changes in the amount of budgetary resources can also affect agencies’ ability to manage declining resources. This includes the enactment of laws that impose new requirements on the agency and changes in demand for services resulting from the expansion or contraction of the broader economy. For example, the number of new unemployment insurance claims increases during recessionary periods. This increases state workload, which is funded through appropriations from ETA’s State Unemployment Insurance and Employment Service Operations account. The appropriation allows for additional funds to be made available when workloads increase above those estimated in the budget. However, according to ETA, during the 2007-2009 recession and the slow recovery that followed, states faced challenges quickly expanding capacity to process a record number of claims. In contrast, ETA officials said that demand for foreign labor tends to increase during periods when the economy is expanding, which affects the volume of work at the Office of Foreign Labor Certification (OFLC). ETA reported that OFLC’s overall application volumes were 84 percent higher in fiscal year 2015 than in fiscal year 2010. Agency officials attributed this increase in part to the recovering economy. Table 1 provides an overview of the missions of selected programs at EPA and ETA. As described above, given its relatively smaller size, we included all FLETC activities in the scope of our review, and thus it is not shown in the following table. 
Appendixes I through III provide more detailed information on each selected agency's budget authority and certain performance measures. The framework that we developed in 2012 for examining agencies' efforts to effectively manage in an environment of declining resources outlines the following three key themes to guide agency officials in managing declining resources:

1. Top management should lead efforts to manage declining resources.
2. Data analytics should guide decision making.
3. Agencies should develop both short- and long-term, cost-cutting and cost-avoidance strategies.

Figure 3 below provides illustrative examples of activities that address each theme and subtheme. These themes are not mutually exclusive and in many ways help reinforce one another. For example, robust and reliable data analytics can inform top management's efforts to lead transformations within agencies that result in cost avoidance. Further, the examples in this framework, while not exhaustive, can help guide agencies through budget challenges by providing strategies for leading from the top, using data analytics to guide decisions, and reducing costs now and in the future. Agencies must continue to achieve their missions with declining resources. As outlined in our framework, to accomplish this, top management should lead agencies' efforts to manage declining resources. For example, top management should take actions to ensure the agency maintains the capacity—including the agency's workforce and physical capital such as infrastructure and IT—to achieve its mission. As part of this, they could consider clearly defining and communicating the key principles and priorities for guiding budget reductions. Top management should also reexamine the agency's core mission. For example, they may want to reexamine programs and organizational structures to determine if the current environment still warrants continuing certain programs and activities.
Top management should also consult with Congress and involved stakeholders and take into consideration how budget decisions align with congressional goals, constituent needs, and industry concerns. As shown in figure 4, top management at selected agencies led efforts to achieve their respective missions with fewer resources in various ways at both the agency and program level. For example, top management at OFLC and EPA initiated comprehensive reviews of the workforce to identify skills gaps and reshape the workforce to align with evolving mission needs and budgetary limitations. FLETC’s Director and executive team clearly defined key principles and priorities for guiding budget reductions and communicated them to agency employees and stakeholders. As described in our framework, when deciding how to implement reduced appropriations, data analytics should guide agency officials’ decision making. Data analytics involves turning data into meaningful information accessible to budget and program staff and agency leaders to help them make informed decisions. This should include activities such as reviewing operations to achieve additional efficiencies and assessing the availability and quality of data. As a part of this, agencies may want to consider using data analytics to set specific cost-savings goals and monitor progress toward achieving those goals. Agencies should also connect performance information to the budget. Linking strategic goals with related long-term and annual performance goals and with the costs of specific activities that contribute to those goals, for example, can help provide a basis for informed tradeoff decisions. As shown in figure 5, selected agencies used data analytics to help increase efficiency, manage workflow, or monitor the effects of cost-avoidance efforts on their respective missions. 
As described in our framework, when facing declining resources, agencies should employ both short- and long-term cost-cutting and cost-avoidance strategies. Cost savings are a reduction in actual expenditures below the projected level of costs to achieve a specific objective. Cost avoidance is an action taken in the immediate time frame that will decrease costs in the future. Cost-cutting and cost-avoidance strategies should encompass both short- and long-term solutions. Agencies may want to consider (1) instituting an employee-input cost-saving incentive program, (2) using capital funds and other mechanisms to support upfront investments, (3) expanding the use of shared services for functions that can be shared within or among agencies, or (4) reducing the size and cost of real property through consolidation. OMB has initiated a number of efforts to help agencies cut and avoid costs. Selected agencies reported implementing a broad range of short- and long-term strategies to cut or avoid costs. Balancing these strategies can provide agencies with the flexibility to weigh short-term needs and make adjustments toward achieving their long-term goals. Selected agencies reported taking short-term actions aimed at immediate cost savings. For example, given the limited resources available to hire additional permanent staff to process labor certification cases, ETA reported using temporary seasonal contract staff during peak filing periods to help address increases in filing volume in various temporary programs. In doing so, ETA reported that it saved $1 million compared to the cost of maintaining an equivalent staffing level year-round. Selected agencies also reported implementing a broad range of long-term strategies to avoid costs, such as reducing their real estate footprint and strategically sourcing goods and services. Figure 6 provides illustrative examples of selected agencies' cost-cutting and cost-avoidance initiatives.
While top management at the selected agencies have taken some steps to manage declining resources consistent with our framework, additional actions could address anticipated future challenges and help ensure that agencies continue to have the capacity to achieve their missions.

FLETC. Top management at FLETC identified long-term strategies for better managing resources but has not developed or finalized plans with specific goals and the resources needed to implement some of these strategies. Some actions could help ensure that the agency avoids longer-term costs and maintains capacity to achieve its mission. FLETC anticipates increased training requests for fiscal year 2017 and beyond but has not strategically planned for this surge. According to FLETC officials, in the near term, the anticipated increase is in part due to pent-up demand following sequestration in fiscal year 2013 and budget cuts from previous fiscal years. Partner organizations are expected to increase law enforcement hiring and therefore demand for training. Looking out further, according to FLETC officials, demand for training is expected to increase to address the anticipated retirement of the large cohort of law enforcement officers hired in the wake of the September 11 terrorist attacks. However, FLETC has not planned strategically for how it will address the anticipated surge. A multi-year strategic plan that articulates the fundamental mission of the agency and lays out its long-term goals for implementing that mission, including the resources needed to reach these goals, for example, could help FLETC continue to have the capacity to fulfill its mission of providing law enforcement training and address increased demand for training. FLETC's most recent strategic plan covered fiscal years 2008 through 2013. FLETC officials told us that its expired strategic plan, while dated, was relevant and useful to the agency.
Nonetheless, FLETC reported that, in March 2016, the agency began revising this plan and in August formed a Strategic Planning Working Group to complete a strategic plan covering fiscal years 2016-2018 by December 31, 2016. However, fiscal year 2016—which represents one-third of the strategic plan—has already elapsed, and fiscal year 2017 is already underway. Further, a strategic plan that extends through fiscal year 2018 can assist the agency in formulating its 2018 budget but will be less useful for assessing longer-term resource needs. OMB’s guidance for implementing the GPRA Modernization Act of 2010, contained in Part 6 of its Circular No. A-11, states that strategic goals and objectives should be established for a period of not less than 4 years forward from the fiscal year in which it is published. Although the act’s requirements apply at the departmental levels, we have previously reported that they can serve as leading practices at other organizational levels, such as component agencies, offices, and programs. Finally, it is unclear based on the excerpts that FLETC provided to what extent the revised strategic plan addresses the anticipated surge in training needs. In addition, FLETC is in the process of developing an Online Campus that can provide distance learning opportunities for federal law enforcement officers and supplement the in-person training they receive at FLETC campuses. Consistent with our framework, the Online Campus represents a potential long-term cost avoidance strategy that could help the agency maintain capacity to provide the necessary law enforcement training, potentially at a reduced cost to both FLETC and its partner organizations. FLETC officials noted that when complete, the Online Campus could reduce the travel and lodging costs associated with in-person training and help FLETC meet growing demand for training in the future. 
While the Online Campus initiative is included in the six overall priority areas established by the FLETC Director and executive team, FLETC has not finalized a plan for the Online Campus initiative that clearly identifies the steps needed to achieve its goals. In October 2016, FLETC provided us with a draft plan that outlines some activities and milestones for the Online Campus initiative as well as some broad cost savings goals. According to FLETC officials, the agency plans to complete the plan by December 31, 2016, as part of its larger strategic planning effort. The first year of this strategic plan has passed, but FLETC did not provide evidence that the agency achieved milestones, including the cost savings and enrollment goals, established for that year. Furthermore, it is unclear where FLETC will identify the necessary resources to fund the Online Campus initiative within its existing budget. FLETC has funded the Online Campus initiative with annual appropriations and has in the past used funding from its other cost avoidance efforts to support this initiative. FLETC officials reported that they are currently working on a funding model for the Online Campus initiative. A final, up-to-date plan could allow FLETC management to monitor progress toward these activities and milestones and ensure that the Online Campus initiative helps FLETC avoid longer-term costs and maintain capacity to achieve its mission.

ETA. Top management at ETA has begun to take actions to help ensure that the UI program maintains capacity to achieve its mission with reduced resources, such as reengineering the UI program's accountability and performance measurement process (as previously described in figure 4).
While ETA officials and stakeholder organizations raised concerns about the Unemployment Insurance program's capacity to adequately respond to the next recession, ETA reported that it routinely communicates with states through meetings hosted by ETA's Regional Offices and participates in national conferences and that these and other efforts helped inform a number of cost-neutral proposed reforms to the UI program included in the President's Budget for fiscal year 2017. These reforms are intended in part to make the UI program more responsive to economic downturns. For example, the President's Budget proposed a new Extended Benefits program to provide up to 52 weeks of additional federally funded benefits for states seeing increased and high unemployment, with the number of weeks tied to the state's unemployment rate. Although ETA has taken steps to communicate with states, ETA's data collection efforts have not focused on systematically identifying lessons learned from the most recent recession to help ensure the UI program maintains capacity in response to a future economic downturn and related issues, such as the states' ability to manage changes in workload. In addition to regular communications with the states, ETA officials said that they regularly monitor program activities through a wide array of data collection, and that these efforts have helped the agency identify lessons from the effects of the most recent recession on the federal-state UI system. Officials, however, also noted that lack of capacity and competing priorities have prevented the agency and state UI programs from systematically gathering lessons learned by states from this recession. ETA officials said that they intend to support states gathering and sharing lessons learned in the future when their workload stabilizes.
Systematically evaluating the challenges that states faced in administering the UI program during the recent recession—such as rapidly ramping up staffing at the start of the recession and ramping down as the economy recovered—and identifying and incorporating any lessons learned from this experience into a longer-term strategy could help further prepare the program for the next economic downturn. ETA officials reported that as the economy recovered and federal funding for state administration of the program declined following the recession that began in 2007, states reduced the number of employees, many with extensive knowledge and experience that state offices will have difficulty replacing in the future. ETA reports that many state employees with extensive knowledge have already left state unemployment insurance offices. If actions are not taken promptly, opportunities to gather key lessons from the most recent recession may be missed to help ensure capacity is maintained in the next economic downturn. In addition to OFLC workforce efforts underway, ETA identified two broad strategies for maintaining OFLC's capacity over the longer term. Both would require congressional action (see text box). While these legislative proposals, if enacted, would provide OFLC with additional funding, in an era of declining resources, agencies cannot rely solely on increased funding to continue to achieve their mission. ETA officials noted that efforts to reorganize and cross-train the OFLC workforce (described in figure 4) should help the agency maintain capacity with existing resources.

Employment and Training Administration (ETA) Legislative Proposals for Maintaining the Office of Foreign Labor Certification's (OFLC) Capacity

Proposed legislative changes for new fee.
As part of the President's Fiscal Year 2017 budget request, ETA requested authority to collect user fees from employers in an amount that would cover the costs of operating the H-2A, H-2B, permanent labor certification, and prevailing wage programs. According to ETA, transitioning to a fee-based funding structure would improve case-processing services by creating a market structure that links the supply of resources with the demand for case-processing services. ETA noted that the H-1B program administered by OFLC is already supported by user fee collections and has experienced no backlogs despite a reported 76 percent increase in applications between fiscal year 2010 and fiscal year 2015. ETA further noted that the Citizenship and Immigration Services also already has the authority to collect user fees for its operational role in each of the programs for which OFLC is requesting similar authority. OFLC has developed a pricing structure conceptually similar to the existing Citizenship and Immigration Services user fee system that, according to ETA, would replace the need for annual appropriations into the future within several years of receiving enabling authority. Congressional action would be needed to authorize OFLC to collect and use (obligate) this new user fee. ETA officials said that they had received technical inquiries from Congress about the proposal, but that they had not received indication of congressional support from the current Congress to know whether the proposed user fee constituted a viable long-term plan.

Proposed one-time funding. ETA reported that another strategy for managing resources over the longer term is a proposal for an additional $20 million in one-time funding to be made available for fiscal years 2017 and 2018 to process foreign labor certifications.
According to ETA, these funds would be used to support processing in the H-2B and H-2A programs by investing in infrastructure and human capital to help meet expanding service requirements in future filing cycles. ETA noted that in fiscal year 2016, Congress responded to a backlog in another program—permanent foreign labor certifications—by providing $13 million in additional budget authority specifically to process these certifications. According to ETA, these additional resources helped OFLC reduce the backlog by 51 percent as of September 26, 2016.

EPA. Our work found that EPA top management also had opportunities to better manage declining resources by further addressing our prior recommendations. For example, in 2012 we reported that the way EPA measures the effectiveness of states' NPS pollution programs has not consistently ensured that EPA selects projects likely to yield measurable water quality outcomes. We recommended at the time that EPA emphasize measures that (1) more accurately reflect the overall health of targeted water bodies (e.g., the number, kind, and condition of living organisms) and (2) demonstrate states' focus on protecting high-quality water bodies, where appropriate. In a July 2016 report, we noted that EPA officials said that the agency is planning to add one of the measures we recommended but has not yet had the time and resources to do so. EPA officials said the agency plans to begin developing a new measure for the protection of healthy water bodies this year and to establish a workgroup focused on this measure in fiscal year 2017. We found that funding for Section 319 NPS pollution grants declined by more than 20 percent from 2010 to 2015. EPA officials said that it would be difficult to isolate the effects of declining resources on environmental outcomes using existing performance measures for the Section 319 NPS pollution program.
According to EPA officials, there are many activities that contribute to the restoration of polluted waterways and improve water quality. Some of these are EPA programs, others are state programs, and yet others may be other federal programs. In addition, EPA officials said that there is a time lag between changes to funding for Section 319 NPS grants and when effects from the changes take place. Implementing our prior recommendation could provide top management with an opportunity to better connect performance information to the budget, which could help the agency understand how resource constraints affect program outputs to guide decision making in the future. We testified earlier this year that, as of May 23, 2016, we had identified 51 open recommendations related to management and operations that we made to EPA since 2006 that had not been fully addressed and another 36 open recommendations related to water issues such as nonpoint source pollution. Implementing these recommendations could help EPA improve efficiency and avoid costs to better manage its limited resources. For example, in 2008, we identified an error in EPA's calculation of reimbursable indirect costs for hazardous waste cleanup. EPA acknowledged the error and published revised indirect cost rates. As a result, we estimated, in 2010, that EPA had recovered or would recover $42.2 million.

Selected agencies reported using data analytics to some extent and have identified additional opportunities to improve or expand their use of data analytics to better manage resources. As outlined in our framework, to use data analytics effectively, it is important for agencies to know what data are available within the agency. Agencies then can determine if those data are sufficiently granular, reliable, timely, accessible, and transparent. For example, FLETC has made initial investments in data analytics and is planning to further expand its use of data analytics.
As discussed above, one of the FLETC Director and executive team's top priorities is to enable FLETC to make data-driven decisions. FLETC officials reported that as the amount of data available about FLETC operations and training continues to grow, the skills required to process and analyze those data become more critical. FLETC's overall goals for data analytics include unifying data sources to ensure data integrity, standardizing registration processes to ensure consistent data, establishing governance structures and processes to identify and share business data, and expanding use of business analytics in organizational decision making. FLETC's data analytics team is in its early developmental stage. Thus far, the agency has dedicated two positions to begin both the strategic and operational work of understanding what questions can realistically be answered with the agency's data and how those answers should be communicated. According to FLETC officials, the team is focusing initially on training-related activities. FLETC plans to use data analytics to assist in scheduling instructor leave and professional development during periods of lower instructional workload. These efforts could help the agency identify opportunities to better manage its existing resources through further efficiencies and cost avoidance.

Selected agencies also have ongoing efforts to further identify and protect upfront investments in areas such as IT to increase efficiency and avoid longer-term costs. We have previously reported that IT investments across the federal government are becoming obsolete and the legacy systems that many agencies still use may become increasingly more expensive to maintain. Protecting investments in new IT systems can help agencies avoid longer-term costs. For example:

ETA. ETA officials and stakeholders reported that OFLC could achieve increased efficiencies and functionality through modernizing IT investments.
For example, stakeholders noted that OFLC does not have a reliable system for tracking the status of visa and immigrant applications as its other federal partner agencies do. According to ETA, OFLC has used its existing funds to make incremental improvements in existing systems while planning to replace its two aging electronic case processing systems. For example, the requirement to scan documents into OFLC's integrated case processing system (iCERT) significantly adds to workload; until recently, scanning was the only way, in the current telework environment, to ensure that analysts could work from home without taking paper files to their residences. According to ETA, earlier this year, OFLC enabled employer applicants to upload some documents into the iCERT system directly, saving the time and effort required for OFLC staff to scan those documents into the system. OFLC is also in the early stages of an electronic case processing transformation plan that it recently initiated with the long-term goal of replacing the current antiquated and unreliable electronic case processing applications with a single, integrated solution for all labor certification programs. As an early step, ETA evaluated the current state of OFLC's case processing systems and developed recommendations for how to reduce processing times, improve processing efficiency, improve decision quality and consistency, and increase ease of use. ETA acknowledged that creating and installing a single replacement platform will require significant funding. OFLC said that it has contingency plans that will enable the organization to continue this IT development at a slower pace without substantial additional funding but did not provide specific information on how this would be accomplished.

EPA.
In 2013, we reported that EPA's Office of Pesticide Programs (OPP) faced a challenge in managing and tracking key information related to conditional registrations of pesticides, in part because reviewing each pesticide registration file is time-consuming and, depending on the pesticide, may take from a few hours to a few days to complete. We recommended that EPA complete plans to automate data related to conditional registrations to more readily track the status of these registrations and related registrant and agency actions and identify potential problems requiring management attention. During our review, agency officials reported that EPA had begun developing new IT infrastructure but that more work was needed before a functional tracking system would be in place. Currently, most of the work to review conditional registrations is done by employees who review large volumes of printed materials that are tracked through Excel spreadsheets. EPA expects that completing automation will create efficiencies by decreasing the number of hours needed to review applications.

Despite agencies' ongoing efforts to manage declining resources through cost avoidance initiatives, improved efficiency, and other strategies, some agency officials and stakeholder organizations told us that declining resources and the actions that agencies took to manage them, among other factors, affected the timeliness of some services. In some instances, stakeholder organizations commended the selected agencies for their efforts to continue to meet their missions with fewer resources. However, they also noted that it took agencies longer to provide certain services, which negatively affected individuals, businesses, state operations, and other federal agencies.

ETA Office of Foreign Labor Certification.
According to ETA, OFLC's overall application volumes were 84 percent higher in fiscal year 2015 than in fiscal year 2010, while the amounts appropriated to process these applications decreased by roughly 9 percent. ETA officials said that this—along with other factors such as new requirements and temporary court-ordered work stoppages—affected OFLC's ability to meet timeliness measures for processing applications. For example, under an Interim Final Rule released in April 2015, OFLC is required to issue either a Notice of Deficiency or, if the application is complete and meets requirements, a Notice of Acceptance within 7 business days of receiving the H-2B application. According to ETA data, as of September 2016, the average processing time for H-2B applications with no deficiencies in 2016 thus far was roughly 69 days, and even longer for applications with deficiencies. One stakeholder organization representing employers who use the H-2B program to meet seasonal employment needs—including for landscaping, forestry, and housekeeping—conducted an informal survey of its members. Members reported that delays in processing H-2B applications resulted in companies turning away work that they normally would have accepted or postponing their scheduled opening dates, and delaying the start of work, which in some instances resulted in the loss of business contracts or challenges in meeting agreed-upon deadlines on existing contracts. Employers estimated that delays cost them thousands of dollars in missed work or increased overtime and, in some instances, hundreds of thousands of dollars in missed contract opportunities.

EPA Office of Pesticide Programs. Annual appropriations for OPP declined by more than 14 percent from fiscal year 2010 through 2015.
Despite these declines, according to EPA officials, there was no decline in the program's outputs and outcomes for pesticide programs because Congress made collections from user fees under the Pesticide Registration Improvement Act of 2013 (PRIA) available to supplement OPP's annual appropriations. Specifically, in 1988, Congress enacted annual registration maintenance fees to support the review of existing pesticide registrations. In 2004, Congress enacted pesticide registration fees, which are paid by registrants for some registration actions, such as registering new uses of pesticides, to help pay for registration costs. According to EPA, these fees were established both to create a more predictable evaluation process for affected pesticide decisions and to couple the collection of individual fees with specific deadlines for pesticide registration decisions. Further, as noted earlier, OPP reported improving the efficiency of some processes, such as through Lean Six Sigma process improvements. We found that data for pesticide registration decisions from fiscal year 2011 through fiscal year 2015 show both increases and decreases in registration decision times depending on the category. Overall, the average annual percentage of decisions completed on time decreased slightly during the period of our review, from 99.7 percent in fiscal year 2010 to 98.4 percent in fiscal year 2015. In fiscal year 2014, the average annual percentage of decisions completed on time dropped to 85 percent, which EPA attributed to the October 2013 government shutdown. Stakeholders that we spoke with noted that annual appropriations for OPP fell below a minimum appropriation threshold established by statute. Pesticide registration service fees may not be assessed for a fiscal year unless Congress provides at least a set amount of annual appropriations for certain OPP functions for that year.
Nonetheless, Congress has at times authorized EPA to assess pesticide registration service fees for a given fiscal year notwithstanding the minimum appropriations provision. One stakeholder organization told us EPA management has done an outstanding job implementing PRIA despite funding challenges. However, this stakeholder organization also told us that reductions in annual appropriations affected overall staffing levels at OPP and that average processing times increased for certain types of registration decisions. For example, based on EPA data, the average decision time for an application that proposes to use a new food use active ingredient increased 214 days, from an average of 703 days in fiscal year 2011 to an average of 917 days in fiscal year 2015. The average decision time for an application for amendments for conventional agricultural products requiring scientific review increased 67 days, from an average of 276 days in fiscal year 2011 to an average of 343 days in fiscal year 2015. According to one stakeholder organization we spoke with, the pesticide industry relies heavily on timely and predictable registration decision time frames, which allow manufacturers to effectively market their new products to growers before their respective growing seasons. According to this stakeholder organization, slower timelines have resulted in uncertainty, have slowed innovation and growth for industry, and can ultimately result in lost revenue for manufacturers if particular products are not ready for release until after the growing season.

FLETC. According to FLETC officials, FLETC's capacity to accommodate the increased demand for training in recent years is reaching its limit, which could increase waitlists and delay training for partner organizations. FLETC serves as an interagency law enforcement training organization for more than 90 federal partner organizations as well as international, state, local, and tribal law enforcement agencies.
According to FLETC officials, an improved budget outlook for partner organizations increased law enforcement hiring and therefore demand for training in recent years. FLETC expects that demand for training will further increase in future years to address the anticipated retirement of the large cohort of law enforcement officers hired in the wake of the September 11, 2001, terrorist attacks. According to FLETC officials, capacity constraints resulted in longer waiting lists for training or in some partner organizations being given less flexibility to make changes in their training schedules later in the fiscal year. FLETC officials said that the agency has used unobligated balances carried forward from prior fiscal years in which actual training enrollment fell short of partner organizations' projections to help manage increased demand. Partner organizations that we spoke with credited FLETC's recent efforts to accommodate the increased demand for training by, for example, shifting training from one campus to another or honoring short-notice training support requests for additional classes. However, FLETC officials said that if budgetary trends persist, the agency will have fewer unobligated balances and therefore less flexibility to manage future surges in training.

In part due to declining resources, selected agencies provided states with less funding to administer and implement joint federal-state programs, such as unemployment insurance (UI) and the Section 319 nonpoint source (NPS) pollution program. While federal agencies are responsible for monitoring and overseeing these programs, states generally have some discretion in how they implement reductions. As a result, the actions taken by individual states can vary and the effects of those actions are difficult to isolate.
In addition, some states may choose to use other sources of funding, including both state resources and other federal sources, to offset a decline in funding for federal programs such as unemployment insurance and nonpoint source pollution. However, according to agency officials and stakeholder organizations, the use of state funding to supplement federal funding varied considerably: some states provided no resources for these programs, while others established dedicated revenue streams. For example, based on an annual survey of state workforce agencies by the National Association of State Workforce Agencies, states provided between roughly $100 million and $362 million in supplemental funding for state administration of the UI program between 2010 and 2015.

EPA Section 319 NPS pollution grants. Declining resources for Section 319 NPS pollution grants to states coincided with fewer projects to improve water quality in impaired waters. As of August 2016, a total of 671 projects were funded by Section 319 NPS grants awarded in fiscal year 2014, compared to 883 projects in 2010. While EPA regional offices administer the Section 319 NPS grants to states and are responsible for grant and programmatic oversight, states set their overall program priorities and devise processes for selecting projects. One stakeholder organization that represents local water conservation districts told us that the decreased funding for Section 319 NPS grants puts more pressure on local government entities and conservation districts to implement the program. This stakeholder expressed concerns that there has not been an overall commitment to funding the Section 319 program and other programs that reduce hypoxia in areas such as the Mississippi River/Gulf of Mexico watershed. The nutrient pollution that contributes to hypoxia can also cause harmful algal blooms. For example, in August 2014, Lake Erie experienced a harmful algal bloom near the intake to the drinking water treatment plant serving the city of Toledo, Ohio.
Toledo issued a "do not drink or boil advisory" that affected nearly 500,000 customers due to the presence of the harmful substance cyanotoxin at levels that exceeded the safe drinking water threshold recommended by the World Health Organization.

ETA's Unemployment Insurance Program. The UI program operates under a statutory objective requiring states to have methods of administration that ensure the full payment of unemployment compensation when due. Although the number of new initial claims filed has declined steadily since late 2009, according to ETA performance data, the percentage of first payments made within the first 21 days has remained below 87.5 percent—the target for fiscal year 2015. It averaged roughly 82 percent from 2010 to 2015. ETA attributed states' inability to meet the performance goal partially to staff layoffs associated with reduced administrative funding caused by lower workloads. According to ETA, workloads were lower because temporary federal programs expired and the economy improved. Figure 7 shows the percentage of first unemployment benefits paid within 21 days. Two stakeholder groups that we spoke with said that declining federal funding contributed to increased wait times to process UI claims, which can lead to delays in paying claims and lower customer satisfaction. This is consistent with our prior work, which indicates that states are experiencing customer service challenges related to insufficient staff, outdated IT systems, and limited funding. State UI programs rely extensively on IT systems funded through annual appropriations to administer the UI program and to support related administrative needs. As we have previously reported, states face challenges modernizing their IT systems to operate more efficiently. These challenges include limited funding for modernization efforts and limited resources for operating legacy systems while implementing modernized systems.
The majority of the states' existing systems for UI operations were developed in the 1970s and 1980s. However, it can be difficult to isolate the effects of declining resources from other factors, such as limited staff with the technical and project management expertise to manage system modernization efforts. According to ETA, current appropriations to fund state administration do not provide sufficient funding to enable states to modernize UI information technology infrastructure, which can cost many millions of dollars for each state. To address this challenge with limited additional funding, ETA has invested available resources to support consortia of states to jointly modernize their systems with the goal of sharing development and ongoing maintenance costs.

FLETC. While the partner organizations that we spoke with reported no decline in the overall quality of FLETC training, some did note that the quality and availability of housing and other on-campus accommodations declined somewhat during this period. Specifically, two partner organizations that we spoke with noted that lodging capacity issues had resulted in some trainees being housed outside of the FLETC campus in Glynco, Georgia—sometimes over 30 miles away. One partner organization said that this increased its portion of the cost of training. Another partner organization noted that trainees housed off site may miss out on the added benefits—such as getting acclimated to the agency culture—that come with trainees being located together in the dormitories. Agency officials at this partner organization described these added benefits as an important part of the training experience.

Management at selected agencies reported that they prioritized activities with a statutory deadline or activities where the agency had the authority to collect and obligate user fees.
As a result, they said that activities without statutory deadlines or dedicated fees were disproportionately affected by declining resources at these agencies.

ETA's Office of Foreign Labor Certification. Delays and backlogs in processing applications have increased more for ETA's permanent labor certification program than for applications for temporary labor programs. Unlike other programs administered by OFLC, the permanent labor certification program does not have a statutory deadline dictating how long OFLC has to process applications. Based on ETA data, the average processing time for permanent labor certification applications not selected for integrity review in fiscal year 2015 was 195 days. The number of permanent labor certification applications remaining to be processed at the end of fiscal year 2015 increased almost 100 percent, from 29,553 applications in 2010 to 58,926 applications. In comparison, ETA reported that 100 percent of H-1B applications were processed within the target goal of 7 days from the filing date in fiscal year 2015. Meanwhile, the backlogs for processing H-2A and H-2B applications in 2015, at 180 and 341 applications, respectively, were considerably smaller than the backlog for the permanent labor certification program, although overall application volumes for these temporary visa programs were also considerably smaller. According to representatives from an organization composed of immigration attorneys who represent employers and permanent labor applicants, delays and backlogs in the permanent labor certification program delayed businesses from hiring workers and adversely affected business operations.

EPA's Office of Pesticide Programs. EPA officials said that those activities beyond the core pesticide registration work that Congress has not authorized the agency to fund through user fees are most likely to be affected by declining discretionary resources.
Stakeholders that we spoke with also provided examples of activities that are not funded through user fees and that, they said, were significantly delayed as a result. For instance, EPA has not updated 18-year-old testing guidelines addressing public health pests, such as bed bugs, which one of the stakeholders we spoke with identified as a significant problem in the United States. These guidelines specify methods for generating the necessary data to submit to EPA for registering a pesticide or setting a tolerance level (i.e., the maximum pesticide residue allowed) or tolerance exemption for pesticide residues. One stakeholder who represents an industry trade group said that it is vital for these efficacy guidelines to be finalized so that industry can know how and what to test for to ensure that EPA accepts a study the first time it is submitted. The representative also said that this uncertainty has slowed innovation in developing new products and renewing registrations of existing, effective products. This uncertainty also affects the availability of products: the need for some pesticides is seasonal, and if the product development timeline is slowed by the need to repeat a study, manufacturers may miss the target window for a particular season.

In response to declines in discretionary resources since 2010, the selected agencies that we reviewed took a number of actions to continue carrying out their respective missions that aligned with our framework. These actions mitigated service disruptions and offer illustrative examples that may help other agencies address their own budget challenges. It is difficult to isolate the effects of budget reductions on the programs from other factors, including overall program management, and even more difficult to know what the direct consequences are for individuals, businesses, and states and localities.
Reduced timeliness and service levels for some programs indicate that agencies may need to take additional actions to manage their budgetary resources. Building and maintaining a longer-term focus on managing declining resources can help agencies' top management ensure that they continue to maintain the capacity to address emerging challenges and achieve their mission.

FLETC has been relying on a strategic plan that expired in fiscal year 2013. FLETC began efforts this year to draft a strategic plan that covers fiscal years 2016-2018, which it plans to complete by December 2016. However, fiscal year 2016—which represents one-third of the strategic plan—has already elapsed, and fiscal year 2017 is already underway. A strategic plan that extends through fiscal year 2018 can assist the agency in formulating its fiscal year 2018 budget but will be less useful for assessing longer-term resource needs. Longer-term strategic planning could help FLETC maintain the capacity to accommodate the anticipated increase in demand for law enforcement training in the coming years and avoid delays for partner organizations in hiring or training the new law enforcement officers that they need to achieve their missions.

Consistent with our framework, FLETC's Director and executive team have led efforts to manage declining resources in part by clearly communicating key priorities for managing the agency's budget to both internal and external stakeholders. Some of these priorities, such as investing in data analytics and developing an Online Campus, also align with other key themes from our framework. For example, the Online Campus represents a potential long-term cost avoidance strategy that could help the agency maintain the capacity to provide necessary law enforcement training, potentially at a reduced cost to both FLETC and its partner organizations.
However, FLETC has not yet finalized its plan for the Online Campus with the steps and time frames needed to ensure that it successfully implements this priority action. Moreover, it is unclear where FLETC will find the necessary resources within its existing budget. Without a final plan, it may be difficult for management to monitor progress toward these activities and milestones and to ensure that the Online Campus initiative helps FLETC avoid longer-term costs and maintain the capacity to achieve its mission.

During the period of our review, the UI program was still addressing the implications of the most recent recession, which tested the program's capacity to rapidly increase its workload. States' experiences managing through this recession and its aftermath may offer important lessons for making the program more resilient and better managing its workload with existing resources. ETA officials said that they would help share such lessons when workloads stabilize. The recession officially ended in 2009, and temporary emergency unemployment benefits expired in 2013. Given that ETA reports that many state employees with extensive knowledge have left state unemployment insurance offices, ETA should move promptly to systematically identify any lessons learned from this experience that could help the agency maintain the program's capacity to respond to changes in workload during a future economic downturn.

While we are not making any new recommendations to EPA as part of this report, our work found that EPA top management also had opportunities to better manage declining resources by further addressing our prior recommendations. These recommendations could help the agency improve efficiency and avoid costs to better manage its limited resources. For example, completing plans to automate data related to conditional pesticide registrations could make the staff-intensive review process more efficient.
We are making three recommendations to help FLETC and ETA better manage declining resources. To help ensure that FLETC builds and maintains capacity to achieve its mission with existing levels of resources over the longer term, we recommend that the Secretary of Homeland Security direct the Director of FLETC to (1) complete a revised strategic plan that encompasses the agency's long-term goals and objectives to address emerging challenges; and (2) as part of its strategic planning process, finalize the plan, including the steps and time frames needed to further implement its Online Campus initiative. To help ensure that ETA continues to have the capacity to achieve its mission and manage changes in demand for services resulting from changes in the broader economy, we recommend that the Secretary of Labor direct the Administrator of ETA to systematically gather and evaluate information on the challenges that states faced administering the unemployment insurance program during the recession that began in 2007—such as rapidly ramping up staffing at the start of the recession and ramping down as the economy recovered—and identify and build upon any lessons learned from this experience that could be broadly shared to help the program respond to any changes in workload during a future economic downturn.

We provided a draft of this report to DHS, Labor, and EPA for review and comment. In written comments reproduced in appendix IV, DHS concurred with our two recommendations for FLETC to (1) complete a revised strategic plan that encompasses the agency's long-term goals and objectives to address emerging challenges and (2) as part of its strategic planning process, finalize the plan needed to further implement its Online Campus initiative. DHS discussed FLETC's commitment to strategically planning how it will manage its resources to address emerging challenges such as anticipated surges in federal law enforcement training needs.
DHS indicated that FLETC is on target to complete a multi-year strategic plan by December 31, 2016, and then implement a quarterly review process to monitor progress toward achieving the plan’s goals and objectives. DHS also reported that as part of FLETC’s forthcoming strategic plan, FLETC plans to finalize objectives, strategies, and milestones for its Online Campus that align with its anticipated resources. If implemented as described, DHS’s actions would meet the intent of our recommendations.

A draft of this report contained a recommendation to the Secretary of Labor to direct the Administrator of ETA to develop alternative options to ensure OFLC’s workload demands can be met with existing resources in the absence of legislative changes. In an email response to the draft report, ETA provided additional documentation that included specific steps and time frames for its ongoing and planned efforts to improve the efficient use of OFLC’s current resources, such as the reorganization and cross-training of staff and replacement of its current electronic case processing applications. Moreover, in oral comments on the draft report, a Deputy Assistant Secretary for ETA provided additional information about how these efforts could potentially help OFLC meet its workload demands with existing resources in the absence of legislative changes. We agree that these steps and planned actions constitute alternative options. Therefore, we removed the recommendation and revised the report accordingly.

A draft of this report also contained a recommendation to the Secretary of Labor to direct the Administrator of ETA to gather and evaluate information on the challenges that states faced administering the unemployment insurance program during the recession that began in 2007 and identify any lessons learned from this experience that could be applied to help the program respond to any changes in workload during a future economic downturn.
In oral comments on the draft report, the Administrator of the Office of Unemployment Insurance provided additional information regarding how ETA uses its routine communication with states to identify lessons learned from the most recent recession. We revised the report to further acknowledge the routine communication that ETA has with states through conferences and its program monitoring activities, which ETA says has helped the agency identify lessons learned and informed the proposal for UI included in the President’s Fiscal Year 2017 budget request. In addition, to better distinguish the purpose and intent of our recommendation from the more routine communications and data gathering that ETA performs, we revised our recommendation to (1) emphasize the need for information to be gathered systematically, (2) clarify the focus on challenges that states could face in maintaining capacity as a result of an economic downturn, such as rapidly ramping up staffing at the start of the recession and ramping down as the economy recovered, and (3) emphasize the need for lessons to be shared broadly. We provided ETA with the revised wording of our recommendation, which appears in this report.

In written comments, reproduced in appendix V, ETA did not state whether or not it concurred with this recommendation. ETA stated that it believed the recommendation does not fully recognize its work with states to identify lessons learned from the most recent recession. However, as ETA notes, previous data collection efforts have not focused specifically on the states’ ability to manage changes in workload during an economic downturn. We continue to believe that a systematic approach to gathering and sharing lessons learned from the most recent recession would further help the program maintain capacity in response to a future economic downturn.
EPA’s National Coordinator for the Clean Water Section 319 Grants Reporting and Tracking System also provided us with comments in an e-mail regarding open recommendations from our May 2012 report on Section 319 NPS grants, which we cite in this report. The National Coordinator stated that, since 2012, EPA has invested substantially in reforming overall program guidance and oversight. According to the National Coordinator, these reforms will serve to strengthen the strategic use of Section 319 funds in NPS program management. In our July 2016 report, we noted that EPA has taken a number of actions since our 2012 report, in part to respond to our recommendations. EPA has not made changes to the program's measures of effectiveness, but Office of Water officials confirmed in September 2016 that EPA is still planning to do so. We will continue to monitor EPA's efforts on this and other recommendations.

FLETC, ETA, and EPA also provided technical comments that we incorporated, as appropriate.

We are sending copies of this report to the appropriate congressional committees, the Secretaries of Homeland Security and Labor, the Administrator of EPA, and other interested parties. In addition, the report is available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or krauseh@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

New discretionary appropriations for the Employment and Training Administration (ETA) declined by roughly 14 percent from fiscal year 2010 to fiscal year 2015. Meanwhile, new discretionary appropriations for selected programs declined by between 9 percent and 16 percent.
The Unemployment Insurance (UI) State Administration program funds grants to states for the administration of UI programs in accordance with Section 302(a) of the Social Security Act. ETA uses a combination of national claims-related workload projections and other factors to develop the request for UI administrative funding for states. After Congress appropriates funds, the Department of Labor uses a formula based in large part on workload estimates ETA develops, as well as other information provided by states, such as cost accounting information, to allocate “base” funding to states. Base resources attempt to provide 80 to 90 percent of total “need” for workload processing, with the remaining funding for workload-related activities provided after the conclusion of each quarter, when states report workload activity in excess of the “base” workload. This approach is intended to address forecast error in the original estimates. Since available funding is calculated in large part on claims-related workloads, the federal funding available for states is sensitive to changes in total claims, with more funding available when claims increase and less when they decrease.

Foreign Labor Certification Federal Administration funds most of the administrative costs of the immigration programs overseen by the Office of Foreign Labor Certification (OFLC). This includes salaries and expenses, IT systems development, case adjudication support, rent, equipment, and supplies. OFLC is responsible for reviewing employer requests for the certification of a foreign worker to work in the United States to ensure that hiring a foreign worker will not adversely affect the wages and working conditions of American workers, and that no qualified American workers are willing or available to fill a given vacancy. OFLC oversees the labor certifications for a number of visa programs, including the Permanent Labor Certification, H-2A, H-2B, and H-1B programs.
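The base/above-base mechanism described earlier for UI administrative funding can be illustrated with a small arithmetic sketch. The 85 percent base factor and all dollar figures below are invented for illustration; this is not ETA's actual allocation formula, which also draws on state-reported cost data.

```python
# Illustrative sketch of the UI administrative funding mechanism: "base"
# funding covers a large share (80-90 percent) of projected workload
# costs, and above-base funding is paid after each quarter in which
# reported workload exceeds the base. The 0.85 factor and all figures
# here are hypothetical.

def base_funding(projected_cost, base_factor=0.85):
    """Base allocation: a fixed share of the projected workload cost."""
    return projected_cost * base_factor

def above_base_funding(base, actual_cost):
    """Quarterly supplement, paid only when actual workload exceeds the base."""
    return max(0.0, actual_cost - base)

projected = 100.0                # projected workload cost (hypothetical units)
base = base_funding(projected)   # 85.0

# If claims rise during a downturn, above-base funding absorbs the
# forecast error; if claims fall, no supplement is paid.
high = above_base_funding(base, actual_cost=120.0)   # 35.0
low = above_base_funding(base, actual_cost=70.0)     # 0.0
```

The sketch shows only the direction of sensitivity the report describes: total funding rises when claims increase and falls when they decrease.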
Table 3 highlights select agency performance measures, showing the annual targets and results for each measure during the relevant timeframe. Years where results fell below the agency’s target are highlighted in gray. According to ETA budget and performance documents, ETA was unable to reach its target for some performance measures in part because of resource constraints. For example, ETA cited state staff layoffs, high state staff turnover, and technology issues as a few of the reasons for the states’ inability to achieve their target for the first payment timeliness measure.

New discretionary appropriations for the Environmental Protection Agency (EPA) declined by roughly 21 percent from fiscal year 2010 to fiscal year 2015, while new discretionary appropriations for selected programs declined by between approximately 15 and 21 percent. EPA awards Section 319 nonpoint source (NPS) pollution categorical grants for implementing state NPS management programs. From fiscal year 2010 through fiscal year 2014, the most recent years for which complete data on awarded projects were available, states awarded approximately 4,000 projects under the Section 319 NPS program. The two most common categories of nonpoint source water pollution targeted by these projects were agricultural runoff and urban and stormwater runoff.

The Environmental Programs and Management (EPM) account supports a broad range of activities involved in EPA’s development of pollution control regulations and standards and its enforcement of these requirements across multiple environmental media, such as air quality and water quality. For the purposes of this report, we focused on three of the EPM budget activities related to pesticides. These support Office of Pesticide Programs (OPP) activities that protect human health and the environment from pesticide risk, as well as activities to realize the value of pesticide availability.
In addition to receiving annual appropriations, OPP has the authority to collect and obligate Pesticide Registration Improvement Act of 2003 (PRIA) fees. Specifically, in 1988, Congress enacted annual registration maintenance fees to support EPA’s review of existing pesticide registrations. In 2004, Congress enacted pesticide registration fees, which are paid by registrants for some registration actions, such as registrations for new uses of pesticides, to help pay for registration costs. According to EPA, the goals of the PRIA fee system are both to create a more predictable evaluation process for affected pesticide decisions and to couple the collection of individual fees with specific decision review periods. Pesticide registration service fees have a minimum appropriation threshold established by statute: they may not be assessed for a fiscal year unless Congress provides at least a set amount of annual appropriations for certain OPP functions for that year. Nonetheless, Congress has at times authorized EPA to assess pesticide registration service fees for a given fiscal year notwithstanding the minimum appropriations provision. PRIA fees, which include both discretionary budget authority and mandatory budget authority from offsetting collections, can supplement OPP’s annual appropriations. PRIA fees totaled roughly $44 million in fiscal year 2015.

Table 5 highlights select agency performance measures, showing the annual targets and results for each measure during the relevant timeframe. Years where results fell below the agency’s targets are highlighted in gray.

Currently, EPA is unable to effectively track the effects of declining resources on some performance measures. Many activities contribute to restoring polluted waterways and reducing nonpoint source pollution: some are EPA programs, others are state programs, and still others may be other federal programs.
In addition, EPA officials said that there is a time lag between changes to funding for Section 319 NPS pollution grants and when the effects of those changes take place. Further, we previously recommended that EPA revise Section 319 guidelines to states to emphasize measures that more accurately reflect the overall health of targeted water bodies (e.g., the number, kind, and condition of living organisms) and demonstrate states’ focus on protecting high-quality water bodies, where appropriate. EPA agreed that different measures would better represent Section 319 NPS pollution program progress, and, according to EPA, the agency will undertake work on a new measure for protecting unimpaired waters.

Newly enacted appropriations for FLETC declined by roughly 11 percent from fiscal year 2010 to fiscal year 2015. FLETC’s annual appropriation is available for necessary expenses for Salaries and Expenses, including materials and support costs for federal law enforcement basic training, and public awareness and enhancement of community support of law enforcement training. According to FLETC officials, the projected cost of partner organizations’ basic training is included in FLETC’s base appropriations except for the cost of instructors, which is split evenly between FLETC and partner organizations. Partner organizations have three options to cover their portion of the costs: provide 50 percent of the instructors from their own agency; provide the funding necessary for FLETC to hire instructors; or provide a combination of instructors and funding.

Amounts appropriated for Acquisitions, Construction, Improvements and Related Expenses are available for the acquisition of necessary additional real property and facilities; construction; and ongoing maintenance, facility improvements, and related expenses of FLETC.
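The instructor cost-sharing arrangement described above (partners cover half of instructor costs through their own instructors, funding, or a combination) can be sketched as simple arithmetic. All dollar figures are invented for illustration and do not reflect actual FLETC or partner budgets.

```python
# Hypothetical sketch of the instructor cost split described above:
# partner organizations are responsible for 50 percent of instructor
# costs and may satisfy that share with detailed instructors, funding,
# or a combination. All figures are invented.

def partner_obligation(total_instructor_cost):
    """Partner share: 50 percent of total instructor costs."""
    return total_instructor_cost / 2

def remaining_funding_due(total_instructor_cost, value_of_detailed_instructors):
    """Funding still owed after crediting instructors the partner provides."""
    return max(0.0, partner_obligation(total_instructor_cost) - value_of_detailed_instructors)

share = partner_obligation(2_000_000.0)               # partner owes 1,000,000.0
due = remaining_funding_due(2_000_000.0, 600_000.0)   # 400,000.0 still due in funding
```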
Table 7 shows one of FLETC’s primary performance measures for monitoring the quality of its training programs and the annual targets and results from fiscal years 2011 to 2015. Years where results fell below the agency’s target are highlighted in gray.

In addition to the contact named above, Carol M. Henn, Assistant Director; Thomas J. McCabe, Analyst-in-Charge; Ulyana Panchishin; Michelle Sager; and Elise Vaughan Winfrey made major contributions to this report. Also contributing to this report were Ann L. Czapiewski, Hilary R. Kelly, Julie Matta, Angie Nichols-Friedman, Michele Mackin, Diana Maurer, Eleni Orphanides, Sabine Paul, Barbara Patterson, Amanda Postiglione, Andrew Sherrill, Stewart W. Small, Anne Stevens, and Cynthia Saunders.

Federal discretionary appropriations declined by roughly 12 percent between FY 2010 and 2015. To better understand the issues agencies face in an environment of declining resources and how agencies could address them, GAO developed a framework in 2012 for examining agencies' efforts to manage declining resources. GAO was asked to examine the specific actions agencies are taking to manage declining resources and the effects on services to the public. This report examined (1) to what extent selected agencies' actions to manage in an environment of declining resources aligned with GAO's framework and (2) the effects, if any, that declines in discretionary spending after 2009 had on services to the public at selected agencies. GAO applied its framework to three agencies selected based on budget data from FY 2010 through 2014. For the larger agencies (EPA and ETA), GAO selected two programs within each agency for review. GAO reviewed agency documents and interviewed agency officials, program partners, and external stakeholders.
The three selected agencies GAO reviewed for this report—the Employment and Training Administration (ETA), Federal Law Enforcement Training Centers (FLETC), and the Environmental Protection Agency (EPA)—each took a number of different approaches to manage declining resources that aligned with the three key themes outlined in GAO's framework. For example:

Top Management Should Lead Efforts to Manage Declining Resources. ETA's Office of Foreign Labor Certification top management led efforts to ensure the agency maintains capacity to achieve its mission by taking steps to restructure its workforce to better use existing staff to address changes in workload. This includes cross-training its workforce to achieve greater interoperability of employees among its three processing centers.

Data Analytics Should Guide Decision Making. EPA used Lean Six Sigma, a data-driven process-improvement methodology, to evaluate agency processes and identify opportunities to make them more efficient. For example, EPA's Office of Pesticide Programs reported that it reduced the time it takes to post pesticide product labels, which provide critical information about proper use and handling of pesticides.

Agencies Should Develop Cost-Cutting and Cost-Avoidance Strategies. FLETC reported that in FY 2013 and 2014 the agency reviewed its service contracts to identify potential cost avoidance opportunities. As a result, FLETC reported avoiding roughly $8 million out of $81 million in service contracts by reducing or eliminating nonessential services, such as reducing hours for information technology (IT) service desk support and consolidating security guard services.

However, opportunities exist for top management at selected agencies to take additional actions to ensure they maintain capacity to achieve their missions and avoid costs.
For example, FLETC is working to develop an Online Campus initiative, which would provide distance-learning opportunities and represents a potential long-term cost avoidance strategy that could help the agency maintain capacity to provide necessary law enforcement training. However, FLETC has not yet finalized its plan for the Online Campus with the steps and timeframes needed to ensure successful implementation. At ETA, the most recent recession tested the Unemployment Insurance (UI) program's capacity, but ETA has yet to systematically identify lessons learned to help ensure UI maintains capacity should workload increase again. Following through on these actions could help the agencies better manage limited resources and maintain capacity to achieve their missions.

Some agency officials and stakeholders reported that actions taken by the selected agencies affected timeliness and service levels for some programs. While some stakeholder organizations commended the agencies for their efforts to continue to achieve their missions with fewer resources, they also noted that some actions had negative effects on individuals, businesses, states, localities, and others. The effects they cited included delays in receiving unemployment benefits and disruptions to businesses resulting from delays in processing foreign labor applications and pesticide registration applications.

GAO makes three recommendations, including that FLETC finalize its plan for the Online Campus and that ETA systematically identify lessons learned by the UI program that could help it respond to future economic downturns. FLETC concurred. ETA did not state whether or not it concurred but stated that it believed the recommendation does not fully recognize its existing efforts. GAO continues to believe the recommendation is valid, as discussed in the report.
In November 1985, the Congress directed the Army to destroy the Department of Defense’s (DOD) stockpile of unitary chemical weapons. The stockpile is stored at eight Army installations in the continental United States and one installation on Johnston Atoll in the Pacific Ocean. It consists of various lethal weapons, such as rockets, bombs, and projectiles, and bulk containers that hold nerve and mustard agents. Exposure to the agents can result in death.

In 1993, the United States signed the U.N.-sponsored Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, commonly referred to as the Chemical Weapons Convention. The United States agreed to dispose of (1) binary chemical weapons, recovered chemical weapons, and former chemical weapon production facilities within 10 years and (2) miscellaneous chemical warfare materiel within 5 years of the date the convention becomes effective. If ratified by the U.S. Senate, the convention becomes effective 180 days after the 65th nation ratifies the treaty, but not sooner than January 13, 1995.

Under the terms of the convention, chemical weapons buried prior to 1977 are exempt from disposal as long as they remain buried. In the United States, burial was a common disposal method for chemical warfare materiel until the late 1950s. Should the United States choose to excavate the sites and remove the chemical weapons, the provisions of the convention would apply. DOD officials estimate that the convention will enter into force in fiscal year 1996.

In the fiscal year 1993 National Defense Authorization Act (P.L. 102-484), the Congress directed the Army to report on its plans for disposing of all nonstockpile chemical warfare materiel within the United States. In 1993, the Army issued a report describing the nonstockpile chemical materiel, potential disposal methods, transportation alternatives, and disposal cost and schedule estimates.
The report concluded that it would cost the Army $1.1 billion ($930 million in direct project disposal costs and $170 million in programmatic costs) to destroy, primarily by incineration, demolition, and neutralization, the nonstockpile chemical materiel covered by the convention within the required time frames. Programmatic costs are costs associated with more than one disposal project or program category. For example, the portion of management and personnel costs that supports more than one project is considered programmatic. Also, estimated costs to procure and test equipment to be used at more than one site are included in the programmatic cost estimate.

The Army also reported that it would cost $16.6 billion ($12.04 billion in direct disposal costs and $4.56 billion in programmatic costs) to recover and destroy, primarily by incineration and neutralization, buried chemical materiel within 40 years. These estimates are rough order-of-magnitude estimates, typically used when a program is not fully developed. According to program officials, the Army plans to issue a supplement to its 1993 survey and analysis report, which will include revised cost and schedule estimates, in mid-1995. Appendix II describes the Army’s nonstockpile chemical warfare materiel.

The Army Chemical Demilitarization and Remediation Activity, formerly named the Army Chemical Materiel Destruction Agency, is responsible for storing, transporting, and disposing of nonstockpile chemical warfare materiel. The extent to which other federal and state agencies will be involved in the program depends on the location and particulars of the nonstockpile chemical materiel. Appendix III describes federal and state agencies’ roles and responsibilities for the nonstockpile disposal program.

As of November 1994, the Army had not issued a comprehensive implementation plan to dispose of nonstockpile chemical warfare materiel.
Moreover, based on the Army’s experience with the stockpile disposal program, it is likely to be several years before the Army can develop a disposal plan that includes reliable cost and schedule estimates. The Army’s 1993 report provides an initial scoping of the magnitude of the effort required to safely destroy all nonstockpile chemical materiel in the United States if so directed. However, because of uncertainties about the nature and magnitude of the materiel or the disposal methods to be used, the Army recognizes that its $17.7-billion cost estimate for the nonstockpile disposal program cannot be relied on for budget purposes. Appendix IV lists the disposal methods used by the Army to develop its program cost and schedule estimates.

Whenever possible, the Army plans to dispose of nonstockpile chemical materiel on-site. However, there may be occasions when it is not feasible or practical for the Army to do so, and transportation to another disposal location may be required. Factors the Army intends to consider are population proximity and density, chemical weapon type, the condition of the munitions, and public safety and environmental policy. In addition, the opinions and concerns of the affected states, local governments, and the public will affect the Army’s decisions. For example, there is strong public opposition to incineration and to transportation of chemical weapons across state boundaries.

The Army’s level of knowledge and stage of planning by category of nonstockpile materiel are summarized in table 1.

The locations and quantities of binary chemical weapons are well documented and understood by the Army. Binary weapon systems principally involve an artillery projectile and components of the bigeye bomb. The projectile is composed of chemical elements, a metal casing, and explosive components. Although the bigeye bomb was never produced or stockpiled, some associated chemical elements must be destroyed.
Although the method for destroying binary chemical weapons has not been determined, the Army estimates that, subject to the availability of funds, it can destroy the binary weapons within 10 years for $190 million. According to Army officials, the chemical elements in binary weapons are not lethal agents until they are combined during flight to a target; therefore, handling and disposing of the chemical elements and components should not pose any major problems. Some of the disposal options being considered for binary weapon components are incineration, landfill, crushing, and smelting. The actual disposal method will be selected by the Army after a comprehensive environmental review.

The Army has a good understanding of the miscellaneous chemical warfare materiel to be destroyed and has documented it by location, configuration, quantity, and type. However, changes are likely to occur as materiel is added or deleted as a result of the Chemical Weapons Convention verification process. The materiel is predominantly metal containers and munitions components. Some of the components contain explosive charges that may need to be extracted before disposal. Despite uncertainty about the disposal method, the Army estimates that, subject to the availability of funds, it can destroy the miscellaneous chemical warfare materiel within 5 years for $210 million. According to Army officials, disposal options are numerous since most of the materiel is not contaminated with a chemical agent. The options include incineration, smelting, and crushing. However, the decision on disposal methods will be based on (1) the location, configuration, and type of materiel; (2) the results of the required environmental analyses and studies; and (3) input from the affected states, local governments, and the general public.

The Army has some information on the recovered chemical weapons that it must dispose of, but the inventory will change as additional weapons are recovered.
According to Army documents, chemical weapons have been recovered from range-clearing operations, chemical burial sites, and research and development test areas. As of November 1993, there were 7,056 recovered chemical items in the Army’s inventory, consisting of mortar cartridges, projectiles, bombs, German rockets, chemical agent identification sets, and bulk containers. With appropriate funding, the Army estimates that the destruction of recovered chemical items can be completed within 10 years, at a cost of $110 million. The Army believes that handling and disposing of recovered chemical weapons will be difficult because (1) they are more likely to have deteriorated than other nonstockpile materiel and (2) the identity of the agent is unknown in 25 percent of the weapons. The Army is studying several destruction options, including transportable incineration and neutralization systems. However, the actual method for destroying the recovered chemical weapons cannot be selected until after the Army completes the required technical and environmental studies.

The Army has identified former chemical weapon production facilities that need to be cleaned up. They consist of buildings and equipment for producing, loading, storing, and assembling chemical munitions and agents. These facilities are located in four states and are in various degrees of contamination and deterioration. The Army estimates that it will take 10 years and $420 million to dispose of the former chemical weapon production facilities. However, the Army has no experience in destroying former production facilities in compliance with the Chemical Weapons Convention. It is still in the process of determining the levels of contamination, identifying potential problems in the demolition process, and determining how to safely dispose of the buildings and their components. Some of the disposal options being considered are incineration of contaminated materiel and demolition of uncontaminated facilities and equipment.
The final disposal decision will not be made until comprehensive environmental studies are completed with the participation of the affected states, local governments, and the public.

The Army has limited and often imprecise information about the nature and extent of buried chemical materiel. However, it has begun to develop site characterization, excavation, removal, and treatment procedures for the burial sites. Since burial was considered to be the final disposal act, little record-keeping was done for burial activities, and additional sites are likely to be identified. Available records indicate that some burial sites may still contain active chemical agents and explosives; therefore, they pose a threat to human health and the environment. According to Army officials, the lack of knowledge about buried chemical warfare materiel has created considerable difficulty in selecting appropriate disposal methods.

The Army has conducted various analyses, including comprehensive documentation surveys, site visits, and interviews, to identify potential burial sites. Even at well-documented sites, the actual amount, chemical agent, condition, and type of buried materiel will remain relatively unknown prior to excavation and visual identification. Based on preliminary analyses, the Army has identified potential chemical warfare materiel at 215 burial sites in 33 states, the U.S. Virgin Islands, and Washington, D.C. (See fig. 1.) The Army has determined that 30 of the 215 potential burial sites warrant no further remediation activity. This determination is based on the Army’s site assessment, prior completed remedial work, or the restricted accessibility of the site.

The Army is studying (1) several different on-site disposal technologies, (2) the plausibility of leaving the materiel in the ground while controlling access to the site and containing potential contamination, and (3) transportation of the materiel to an Army facility capable of storage and destruction.
Prior to excavation, the Army will collect soil samples, conduct metal detection surveys, and install monitoring wells to estimate the nature and extent of contamination and develop remedial alternatives. The Army could excavate by hand, which has been frequently used in the past. It is also studying the use of robotics in excavating buried materiel, although acceptable technology is not readily available. According to Army officials, mechanical means are more likely to cause a chemical release or detonation. The actual excavation method for recovering buried chemical warfare materiel cannot be selected until the Army completes further technical and environmental studies and the public has been involved in the Army’s selection.

The Army estimates that it will cost $12.04 billion, plus $4.56 billion in programmatic costs, and take 40 years to recover and dispose of the buried chemical materiel. It included the estimated costs (1) of fixed incinerators for three of the four large burial sites, (2) for capping the remaining large site, and (3) of transportable incineration and neutralization systems for small sites. The transportable incineration and neutralization systems, when developed, will comply with safety and environmental requirements and be capable of moving or being moved from one disposal site to another. The Army expects the systems will use a batch-style process to treat relatively small quantities of chemical warfare materiel.

Appendix V contains our case study of the Army’s investigation and disposal activities at the Spring Valley chemical burial site in Washington, D.C. Remediation of the Spring Valley site took 2 years and cost $20.22 million. The recovered chemical warfare materiel has not yet been destroyed.

Because both chemical disposal programs involve similar environmental requirements and potentially similar disposal methods, many of the lessons learned from the stockpile disposal program may apply to the nonstockpile program.
In the 1990s, we reported that the Army did not adequately anticipate and plan for (1) the time needed to obtain the necessary environmental approvals and permits for the stockpile disposal program and (2) the strong public opposition to the chemical weapons incineration process. Further, we reported that the stockpile program had been delayed by design, equipment, and construction problems at the new disposal facility at Johnston Atoll. As a result of these factors, the estimated cost of the stockpile disposal program increased and the Army’s destruction schedule slipped. Army officials said they have applied some lessons learned from the stockpile program, such as experience with environmental compliance procedures and research on alternative disposal methods, to the nonstockpile disposal program. However, lessons learned were not discussed in the Army’s 1993 survey and analysis report on the nonstockpile program. In addition, because the Army based its disposal program and estimates on numerous assumptions as well as generic cost categories and work statements, we could not determine the effects of the lessons on the Army’s nonstockpile planning process and estimates. Prior to recovering, storing, moving, or destroying nonstockpile chemical warfare materiel, the Army must comply with federal and state environmental laws and regulations. These laws and regulations differ from state to state and change frequently. In its 1993 report, the Army reported that changes to environmental regulations could significantly affect its estimated disposal cost and schedule for the nonstockpile disposal program. Even when state regulatory agencies grant the Army permission to recover, move, or dispose of nonstockpile materiel, the Army is not insulated from legal actions by concerned citizens and groups. 
Previously, we reported that because of the Army’s difficulty in anticipating the time needed to comply with environmental requirements and to obtain environmental approvals and permits, the chemical stockpile disposal program cost more and took longer than planned. Army facilities must have environmental permits for the storage and disposal of the nonstockpile chemical materiel, and the methods for transporting and disposing of the materiel must adhere to appropriate environmental regulations and be based on comprehensive studies. In general, state governments are authorized, under federal environmental statutes, to adopt federal concepts and to promulgate and implement additional rules and regulations, which, in some instances, are more stringent than federal standards. For example: The Resource Conservation and Recovery Act, as amended, is likely to apply to most aspects, including transportation and storage, of the nonstockpile disposal program. Under the act, the Environmental Protection Agency may authorize individual states to administer and enforce hazardous waste programs in lieu of the federal program. The act also allows states to establish requirements more stringent than federal standards. For example, the states of Kentucky and Indiana enacted legislation that requires the Army to demonstrate the absence of any acute or chronic health or environmental effects from incineration of chemical weapons before an environmental permit will be granted. Miscellaneous chemical warfare materiel, former chemical weapon production facilities, and five potential burial sites are located in these states. The Comprehensive Environmental Response, Compensation, and Liability Act provides overall cleanup procedures for nonstockpile sites and incorporates the standards of other federal and state statutes if they are applicable or relevant and appropriate to the cleanup process. 
A specific sequence of activities, guaranteeing the participation of federal and state agencies and the public in key decisions, must be followed before cleanup of a nonstockpile site proceeds. The Hazardous Materials Transportation Act governs the transportation of most nonstockpile chemical materiel and limits the movement of the materiel without special permits, licenses, and authorizations. The act delegates regulatory and enforcement responsibilities to the states but limits some state regulations. Nevertheless, states may still implement routing restrictions, transportation curfews, notification deadlines, and public right-to-know requirements. The Army anticipates that each state the materiel originates in, passes through, or terminates in will have some jurisdiction over part of the transportation program. The nonstockpile disposal program has not reached the stage where appropriate laws, regulations, and concerns can be specifically identified for each location with nonstockpile chemical materiel. The applicability of laws and regulations to the recovery, transportation, storage, and disposal of nonstockpile materiel ultimately depends on the circumstances of the materiel. The participation of the states, local governments, and the public also affects the Army’s decisions concerning the transportation and disposal of the nonstockpile materiel. With respect to the nonstockpile program, the Army’s planning process must cover at least 185 potential burial sites with various environmental conditions and considerations, 29 different states with state-oriented environmental laws and regulations, numerous local governments, and the general public. As demonstrated in the stockpile disposal program, there is considerable public opposition to the incineration of chemical munitions or agents. However, the Army based its 1993 preliminary cost and schedule estimates on the use of incinerators to destroy potentially large portions of its nonstockpile chemical materiel. 
The opposition centers on concerns about adverse health effects and environmental hazards. This opposition, which has come from several citizen groups, environmental organizations, and state governments, has extended the environmental review and approval process and resulted in postponing the construction and operation of fixed incinerators. The actual disposal methods for the nonstockpile program will be selected by the Army after comprehensive environmental reviews are completed with the participation of the affected states, local governments, and public. In our 1994 report on the stockpile disposal program, we concluded that alternative technologies were unlikely to reach maturity in time to destroy the chemical weapons stockpile because they are in the initial development stages and over a decade away from full operations. Similarly, it is unlikely that these alternative technologies, if ever operational, will be available within the Chemical Weapons Convention’s established time frames for the nonstockpile disposal program. Army officials believe that the neutralization process will be operational in the 1996-97 time frame. The Environmental Protection Agency has stated that any proposed chemical disposal technology would have to undergo the same type of rigorous analysis and evaluation that the incineration process has gone through—a process that has required at least 9 years. The nonstockpile disposal program is vulnerable to change because it depends on disposal methods and destruction rates that have not been demonstrated. In our 1991 report on the stockpile program’s cost growth and schedule slippages, we concluded that the Army had limited experience with destroying stockpile chemical weapons and was unfamiliar with the types of technical and mechanical problems to expect. As a result of these problems, the Army has not achieved its expected disposal rates for the stockpile program. 
Similarly, no nonstockpile chemical disposal project has been completed. Therefore, little procedural, cost, schedule, or engineering data are available, and the Army’s proposed disposal methods and estimated destruction rates have not yet been demonstrated. In its 1993 report, the Army concluded that the technical risk for the nonstockpile disposal program was high because none of the disposal projects had been completed. The Army also concluded that if effective processes or procedures were not discovered, it would have to fund “a major research and development program.” The Army has reported that unforeseen events, such as an accidental chemical release or explosion, would increase the cost and duration of the nonstockpile disposal program. For example, the Army’s stockpile disposal facility at Johnston Atoll was shut down on March 23, 1994, because of a chemical agent release and restarted on July 12, 1994. According to Army officials, the release was small—approximately 11 milligrams. In addition, because of a hurricane and subsequent damage, the Johnston stockpile disposal facility was shut down on August 25, 1994, for more than 2 months. We recommend that the Secretary of the Army (1) ensure that lessons learned from the stockpile disposal program are systematically incorporated into the nonstockpile planning process and (2) establish milestones for developing accurate and complete cost data to effectively plan for and control future program expenditures. We conducted our review from June 1993 to November 1994 in accordance with generally accepted government auditing standards. Unless you publicly announce this report’s contents earlier, we plan no further distribution until 30 days from its issue date. At that time, we will send copies to the Chairmen, House and Senate Committees on Armed Services and on Appropriations; the Secretaries of Defense and the Army; the Director of the Office of Management and Budget; and other interested parties. 
We will make copies available to others upon request. Please contact me at (202) 512-8412 if you or your staff have any questions. Major contributors to this report are listed in appendix VI. In reviewing the Army’s nonstockpile chemical disposal program, we interviewed and obtained data from officials of the Department of Defense (DOD), the Department of the Army, the Army Chemical Demilitarization and Remediation Activity, the Army Chemical and Biological Defense Agency, and the U.S. Army Corps of Engineers. We also met with U.S. Environmental Protection Agency officials to discuss and collect data on environmental and legal issues related to the nonstockpile disposal program. We did not include overseas abandoned chemical warfare materiel in our review. To identify lessons learned from the Army’s stockpile disposal program, we reviewed our previous reports and testimonies and their supporting documentation. To assess the estimated disposal cost and schedule, we analyzed pertinent documentation and discussed the estimation methodology and problems that could affect the cost and duration of the program with Army and contractor officials. To assess the extent and nature of the nonstockpile disposal program, we visited Aberdeen Proving Ground, Maryland; Rocky Mountain Arsenal, Colorado; the former Raritan Arsenal, New Jersey; and the Spring Valley site, Washington, D.C. As requested, we did not obtain official agency comments, but we discussed our findings with officials from DOD and the Army and incorporated their views where appropriate. Binary chemical weapons: Chemical weapons formed from two nonlethal elements (called precursors) through a chemical reaction after the munitions are fired or launched. Binary weapons were manufactured, stored, and transported with only one of the chemical elements in the weapon. The second element was to be loaded into the weapon at the battlefield. 
As of November 1993, the precursors for the binary chemical weapons are stored at Aberdeen Proving Ground, Maryland; Pine Bluff Arsenal, Arkansas; Tooele Army Depot, Utah; and Umatilla Depot Activity, Oregon. Miscellaneous chemical warfare materiel: Materiel designed for use in the employment of chemical weapons, including unfilled munitions and components and support equipment and devices. According to Army records, miscellaneous materiel are stored at the Aberdeen Proving Ground, Maryland; Anniston Army Depot, Alabama; Blue Grass Army Depot, Kentucky; Dugway Proving Ground, Utah; Johnston Atoll, Pacific Ocean; Newport Army Ammunition Plant, Indiana; Pine Bluff Arsenal, Arkansas; Pueblo Depot Activity, Colorado; Tooele Army Depot, Utah; and Umatilla Army Depot Activity, Oregon. Recovered chemical weapons: Chemical weapons recovered from range-clearing operations, chemical burial sites, and research and development test areas. According to the Army’s 1993 report, recovered items are stored at Aberdeen Proving Ground, Maryland; Anniston Army Depot, Alabama; Dugway Proving Ground, Utah; Johnston Atoll, Pacific Ocean; Pine Bluff Arsenal, Arkansas; and Tooele Army Depot, Utah. Former chemical weapon production facilities: Government-owned or -contracted facilities used to (1) produce chemical agents, precursors for chemical agents, or other components for chemical weapons or (2) load or fill chemical weapons. These facilities are located at Aberdeen Proving Ground, Maryland; Newport Army Ammunition Plant, Indiana; Pine Bluff Arsenal, Arkansas; and Rocky Mountain Arsenal, Colorado. Buried chemical warfare materiel: Chemical warfare materiel, which are buried on both private lands and military installations, consisting of various munitions, bombs, rockets, and containers that may have been contaminated with nerve, blister, blood, or choking agents. 
At some sites, chemical munitions and agents were drained into holes in the ground, covered with lime or burned in an open pit, and finally covered with earth. Based on preliminary analyses, the Army has identified potential chemical warfare materiel at 215 burial sites in 33 states, the U.S. Virgin Islands, and Washington, D.C. The Army has determined that 30 of the 215 potential burial sites warrant no further remediation activity. This determination is based on the Army’s assessment of the potential burial site, prior remedial work, or the restricted accessibility of the site. The U.S. Army Chemical Demilitarization and Remediation Activity is responsible for implementing the destruction of all U.S. chemical warfare-related materiel, including the chemical weapons stockpile and nonstockpile chemical materiel, and for ensuring maximum protection to the environment, general public, and personnel involved in the destruction. The activity’s Office of the Program Manager for Nonstockpile Chemical Materiel is responsible for collecting and analyzing data on nonstockpile chemical materiel; identifying and assessing sites with possible buried chemical warfare materiel; coordinating the transportation of recovered chemical weapons to locations for interim storage; destroying recovered chemical warfare materiel on-site as needed to protect the general public and environment; researching, developing, evaluating, and selecting disposal methods for all nonstockpile chemical materiel; destroying binary chemical weapons, miscellaneous chemical warfare materiel, recovered chemical weapons, and former production facilities in accordance with the Chemical Weapons Convention, in compliance with public safety and environmental requirements and regulations, and in coordination with the potentially affected public; and reclaiming and destroying buried chemical warfare materiel in the interest of safeguarding the general public and environment. 
Although the Army Chemical Demilitarization and Remediation Activity has overall responsibility for disposing of nonstockpile chemical materiel, other organizations within or outside DOD contribute to the disposal program. The involvement of the following organizations depends on the location and particulars of the materiel, storage area, or burial site: The Army Corps of Engineers provides technical support for site investigations, recoveries, and site restorations to Army and DOD organizations and is responsible for cleaning up formerly used defense sites. Restoration activities concerning the handling and disposal of nonstockpile chemical warfare materiel are coordinated with and authorized by the Army Chemical Demilitarization and Remediation Activity. The Technical Escort Unit, the Army Chemical and Biological Defense Agency, is responsible for the escort of nonstockpile chemical materiel, emergency destruction of chemical munitions, and emergency response to chemical agent incidents. The Army Environmental Center develops and oversees environmental policies and programs for the Army. The Army Surgeon General’s office provides advice to Army commands on health and safety issues related to handling, transporting, and processing chemical agents and materiel. The Air Force Civil Engineer provides program management and technical support to Air Force commands and installations on environmental compliance and restoration programs. The Environmental Protection, Safety & Occupational Health Division, Office of Naval Operations, provides environmental policy and management support to Navy activities on environmental or safety-related programs. The Office of Installation Services and Environmental Protection, Defense Logistics Agency, provides environmental policy and management support to the agency’s field commands and installations. The U.S. 
Environmental Protection Agency enforces federal laws protecting the environment and ensures that regulations mandated by federal statutes are followed. The U.S. Department of Health and Human Services reviews and provides recommendations on the Army’s plans to transport or destroy chemical warfare materiel in order to help ensure public health and safety. The Occupational Safety and Health Administration oversees and regulates safety and health conditions at the workplace. The U.S. Department of Transportation enforces regulations governing the transportation of hazardous or nonhazardous materiel. State governments and communities affected by the nonstockpile disposal program provide information for and have input into the Army’s decision-making process. They also review and comment on the Army’s planning and decision documents; grant necessary permits; and monitor and enforce their state, regional, and local statutes. The responsibilities for remedial activities differ between burial sites located on active defense installations and formerly used defense sites. At active installations, the installation commander has overall responsibility for remedial activities at the potential burial sites. The Army Corps of Engineers and the Army Environmental Center support the installation commander in site investigation, excavation, and environmental cleanup. At formerly used defense sites, the Corps of Engineers has overall responsibility for site investigations, planning, excavations, and environmental cleanups of the potential burial sites. In both instances, the Army Chemical Demilitarization and Remediation Activity is responsible for the transportation, interim storage, and destruction of recovered chemical warfare materiel. The activity is also responsible for the development of the equipment and technologies to safely dispose of the materiel. 
In January 1993, a construction crew unearthed World War I-era chemical and high-explosive munitions during routine residential construction activities in an area known as Spring Valley in Washington, D.C., setting in motion emergency recovery and removal operations, called phase I of Operation Safe Removal. Over 140 items, including mortars, projectiles, and debris, were recovered and removed from the area by the Army’s Chemical and Biological Defense Agency during this phase. Some of the recovered items were subsequently analyzed and determined to contain chemical agents. The Army Corps of Engineers is currently proceeding with phase II of Operation Safe Removal, which is the comprehensive investigation and cleanup of the Spring Valley site under the Defense Environmental Restoration Program. In 1917, the Chemical Warfare Service of the U.S. Bureau of Mines leased 92 acres from American University to establish the American University Experiment Station. The station was used by the Chemical Warfare Service, with personnel from the Army and the Navy, to research and conduct testing of chemical warfare items. Subsequently, additional land was leased northwest of American University to field test the chemicals and munitions developed at the station. In 1918, the Chemical Warfare Service was transferred from the Bureau of Mines to the War Department, and the station was renamed Camp American University Experiment Station, encompassing a total of 425 acres. During this period, the War Department also leased 84 acres northeast of American University to establish Camp Leach. This camp had mainly tents and barracks, along with staging and training areas for troops. According to the Army, no chemical testing was conducted at Camp Leach. From mid-1917 through 1918, 100,000 troops were trained in trench warfare and the handling of chemical munitions at Camps American University and Leach. 
In addition, mortars and projectiles were test-fired and chemical munitions were tested in various areas of the camps. The American University Experiment Station was also used to (1) prepare and test chemical warfare agents and munitions for possible use, (2) develop procedures and methods to produce chemical warfare agents, and (3) develop gas masks, protective clothing, canisters, incendiaries, smokes, and signals. In December 1918, the War Department discontinued using Camps American University and Leach and burned all temporary buildings that had become unusable. In 1920, the department vacated the remaining buildings. The trenches and pits were filled in and the land returned to the original owners. Between 1942 and 1946, the Department of the Navy leased 5 acres and 15 buildings from American University to establish the Navy Bomb Disposal School. The Navy used the property and buildings for educational purposes. The Spring Valley site is a residential community located in northwest Washington, D.C., near the American University, schools, churches, a community park, a hospital, a theological seminary, a new housing development project, and approximately 1,200 residences. The community is composed of upper-middle- and upper-income families, and the houses are valued at $600,000 to $1 million. The area immediately surrounding the initial discovery site consists of recently constructed or under-construction homes. Since the initial discovery of the munitions, the area of concern has expanded to approximately 616 acres, based on archival records. Operation Safe Removal is conducted under the Comprehensive Environmental Response, Compensation, and Liability Act procedures and provisions in two operational phases. The Chemical and Biological Defense Agency was responsible for phase I, or the emergency recovery and removal operational phase. Phase I was completed on February 2, 1993. 
The Army Corps of Engineers is proceeding with phase II, the long-term investigation and cleanup operational phase of the site, with the fieldwork scheduled to be completed in January 1995. On January 5, 1993, a construction crew unearthed a World War I-era chemical and high-explosive munitions disposal pit while installing a sewer line in the Spring Valley area. This discovery set in motion phase I of Operation Safe Removal. Shortly after the discovery, the Army’s emergency response force confirmed that several of the unearthed munitions were filled with chemical warfare materiel. Personnel in protective clothing recovered the visible munitions, sifted through the dirt piles, and segregated the liquid- and solid-filled munitions. During this period, residents of the Spring Valley area were evacuated. On the third day after the initial discovery, the Army activated a service response force to complete the removal operation. The service response force consisted of specialists to coordinate the on-site safety, security, and medical support; historical research; public affairs; hazard analysis; legal advice; environmental issues; and transportation of the recovered munitions. Within a few days, specialists from the Army Corps of Engineers, Army Chemical Demilitarization and Remediation Activity, Environmental Protection Agency, Federal Emergency Management Agency, Centers for Disease Control, Occupational Safety and Health Administration, American National Red Cross, local police and fire departments, and others were on-site. Numerous miscellaneous items, tons of scrap, and over 140 munitions were removed from the Spring Valley site during phase I. Most liquid-filled munitions were flown off-site by helicopter to Andrews Air Force Base, Maryland, and then air-shipped to Pine Bluff Arsenal, Arkansas, for storage. The solid-filled munitions were flown to Fort A.P. Hill, Virginia, for explosive destruction. 
The miscellaneous items were moved off-site for testing, and the scrap materiel was sent to a landfill in New York. Both on-site and off-site analyses confirmed that some of the recovered munitions contained, or at one time had contained, chemical or toxic smoke agents. Table V.1 shows the disposition of the recovered materiel. The Army Corps of Engineers is responsible for the overall project management, investigation, design, and construction activities during phase II of Operation Safe Removal. Its mission is to investigate and verify that no additional World War I-era munitions remain in the Spring Valley area and, if necessary, to excavate, remove, and destroy any munitions discovered. The decision to continue the investigation of the Spring Valley site was based on research of archival data, topographic maps, aerial photographs, and anecdotal information, which indicated that more areas of interest existed. The Army also conducted geophysical investigations, including ground conductivity surveys, magnetometer sweeps, and soil and water sampling at the Spring Valley site. A computer system merged these data and maps and allowed the Corps of Engineers to create visual composite maps that summarized the investigations. Based on the results of this process, the Corps located suspected anomalies that required excavation to verify the presence or absence of munitions. The excavation process, which was approved in a safety plan, began with a Corps contractor mechanically digging to within 12 inches of the suspected anomaly; the process was then turned over to the Army Technical Escort Unit for final excavation, exposure, identification, and removal. The excavation recovered several munitions and potential chemical warfare materiel. 
A brief description of some of the recovered materiel follows: A corroded piece of pipe, similar to shipping containers for liquids and gases during World War I, was recovered and moved to Pine Bluff Arsenal, Arkansas, for storage in June 1993. A 75-mm projectile, identified as a suspected chemical weapon, was recovered and flown to Pine Bluff Arsenal, Arkansas, for storage in October 1993. Shrapnel from several expended 75-mm projectiles was recovered and disposed of as scrap. A Livens smoke projectile was recovered and destroyed by incineration as a waste munition in April 1994. Three glass vials, containing a clear liquid, were recovered and moved to Aberdeen Proving Ground, Maryland, for testing in November 1994. Also, various nonmilitary metallic materiel, including ferrous rocks, a bundle of 14-gauge wire, a 28- by 10-foot steel plate, and construction debris, was recovered and moved to other locations. As part of the Spring Valley Safety and Work Plans, an interim holding area and helicopter pad were constructed at a cost of $284,000. They were designed to provide immediate, although temporary, safe storage for recovered munitions before the munitions were moved by Army helicopter out of the Spring Valley area. The holding area and pad contain a fire suppression system, air filtration system, lightning arrester system, and beacon lights. They are located on federal property and are government controlled for security reasons. The interim holding area contains three storage magazines, one for high-explosive munitions and two for chemical munitions. The two chemical magazines are modified to include fire suppression and air filtration systems. The magazines are enclosed by a timber structure and earth embankment that provides a minimum of 3 feet of soil encompassing the magazines. No munition will remain in the interim holding area for longer than 10 consecutive days. 
The Corps of Engineers intends to demolish the holding area and helicopter pad once excavations at Spring Valley are completed. Recovered chemical weapons were moved by helicopter from the interim holding area to Andrews Air Force Base, Maryland, and then flown to Pine Bluff Arsenal, Arkansas, for storage and future destruction. Recovered high-explosive, conventional munitions were moved by helicopter from the area and transported to Letterkenny Army Depot, Pennsylvania. No shipment of other hazardous waste will be moved into or out of the interim holding area. As of November 29, 1994, the Army Corps of Engineers estimated that the investigation and cleanup of the Spring Valley site would cost $20.22 million. (See table V.2.) The estimate includes the costs of completing phase I operations, researching and investigating the site, constructing and operating the interim holding area, removing and sampling the recovered munitions and materiel, fulfilling safety and environmental requirements, and performing management activities. The Army Corps of Engineers costs include support costs for the Army Technical Escort Unit; the Army Chemical and Biological Defense Agency; the Washington, D.C., government; resident office facilities; community evacuation; and others. According to the Army Corps of Engineers, the primary issues and concerns of the residents in Spring Valley are (1) their personal safety, (2) the effects of the presence of chemical munitions on the value of their property, (3) the length of time their lives will be disrupted by the ongoing investigation and cleanup of the site, (4) when the Spring Valley site will be certified safe and clear of dangerous munitions after Operation Safe Removal is completed, and (5) whether the Army is disclosing all that is known about, or going on at, the site. 
To address these issues and concerns, the Corps of Engineers developed a public involvement and response plan to promote efficient and effective communication among the Corps; various federal, city, and local agencies and officials; property owners; the housing development corporation; general public; and news media. The primary objectives of the plan are to (1) provide for clear and open exchange of information regarding current and planned investigation and cleanup activities, (2) address local community issues and concerns, (3) provide government agencies and the public the opportunity to participate in the Corps of Engineers’ planning and decision-making process, and (4) provide government agencies and the public with a centralized point of contact. According to the Corps of Engineers, the plan is flexible and can be modified as events, community issues and concerns, and situations change. We did not evaluate the effectiveness of the Army’s public involvement and response plan.

David R. Warren, Associate Director
Thomas J. Howard, Assistant Director
Glenn D. Furbish, Senior Evaluator
Mark A. Little, Evaluator-in-Charge
Pauline F. Nowak, Evaluator
Pursuant to a congressional request, GAO reviewed the Army's nonstockpile disposal program, focusing on: (1) the Army's planning process for the nonstockpile disposal program; (2) the Army's estimated disposal cost and schedule; and (3) applicable lessons learned from the Army's stockpile disposal program. GAO found that: (1) the Army has not finalized plans for its nonstockpile disposal program because it has not fully identified the amount of materiel to be destroyed or appropriate disposal methods; (2) the Army believes it can dispose of binary chemical weapons within 10 years for $190 million, miscellaneous chemical warfare materiel within 5 years for $210 million, and recovered chemical weapons within 10 years for $110 million; (3) the Army has limited information on buried chemical warfare materiel, which it estimates will take 40 years to find and destroy at a cost of $16.6 billion; (4) the Army's nonstockpile disposal program will likely be affected by the same issues as the stockpile program, including compliance with federal, state, and local laws and regulations, obtaining environmental approvals and permits, and strong public opposition to chemical weapons incineration and transportation; (5) although the Army said it applied lessons learned from the stockpile disposal program to the nonstockpile disposal program, its 1993 survey and analysis report on the nonstockpile program did not discuss those lessons; and (6) the Army's estimated cost and schedule for the nonstockpile disposal program are likely to increase, since the Army has limited experience in destroying nonstockpile materiel and will likely encounter difficulties similar to those experienced in the stockpile disposal program.
On August 29, 2005, Hurricane Katrina devastated the Gulf Coast region, causing human casualties and billions of dollars in damage. During major disasters such as this, the Stafford Act authorizes the federal government to assist in saving lives, reducing human suffering, mitigating the effects of lost income, and helping repair or rebuild certain damaged facilities. As of June 2006, nearly $88 billion was appropriated by the Congress through four emergency supplemental appropriations for relief and recovery efforts related to the recent Gulf Coast hurricanes. FEMA, the DHS component statutorily charged with administering the provisions of the Stafford Act, uses appropriations made to the Stafford Act’s Disaster Relief Fund to assist relief and recovery efforts. Initially, in September 2005, the Congress appropriated $62.3 billion for the response and recovery effort related to Hurricane Katrina in two emergency supplemental appropriations acts. Of that amount, (1) FEMA received $60 billion for the Disaster Relief Fund, (2) DOD received $1.9 billion, and (3) the Army Corps of Engineers (COE), a DOD agency, received $400 million. As of late December 2005, FEMA reported that it had obligated about $25 billion, or 42 percent, of the $60 billion it had received. In December 2005, the Congress provided additional funds for the recovery effort related to the 2005 Gulf Coast hurricanes through a third emergency supplemental appropriation act. This legislation provided approximately $29 billion to 20 federal agencies and also rescinded approximately $23.4 billion from the $60 billion appropriated to FEMA’s Disaster Relief Fund in September 2005. The third emergency supplemental appropriation resulted in a net increase of about $5.5 billion in total direct federal funding for hurricane relief and recovery and the fourth resulted in a net increase of approximately $20.1 billion. 
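The net totals cited above reduce to simple arithmetic. The sketch below (amounts in billions of dollars, taken from the figures in this section) confirms that the four emergency supplemental appropriations net to roughly $88 billion; note that the third act's net increase computes to $5.6 billion from the rounded figures, which the text rounds to approximately $5.5 billion.

```python
# Net direct federal funding ($ billions) from the four emergency
# supplemental appropriations acts, using the rounded figures in the text.
first_two = 62.3            # September 2005: two acts, $62.3 billion total
third_net = 29.0 - 23.4     # third act: ~$29 billion provided, less the
                            # $23.4 billion rescinded from FEMA's fund
fourth_net = 20.1           # fourth act: net increase

total = first_two + third_net + fourth_net
print(f"~${total:.1f} billion")   # prints: ~$88.0 billion
```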
Table 1 shows the agencies that received direct funding through the four emergency supplemental appropriations acts. FEMA has authority under the Stafford Act to issue an order, called a mission assignment, to other federal agencies. A mission assignment is a tasking issued by FEMA that directs other federal agencies and components of DHS, or “performing agencies,” to support overall federal operations pursuant to, or in anticipation of, a Stafford Act declaration. Once the mission assignment is issued and approved, the mission assignment document is FEMA’s basis for obligating the portion of FEMA’s funds allocated to the assigned relief and recovery effort. From a federal agency standpoint, the mission assignment provides the recipient agency reimbursable budgetary authority, not the actual transfer of funds, to perform the agreed upon work. Among other things, mission assignments include a description of work, an estimate of the dollar amount of work to be performed, completion date for the work, and authorizing signatures. Mission assignments may be issued for a variety of tasks, such as search and rescue missions or debris removal, depending on the performing agencies’ areas of expertise. After the agencies perform work under a mission assignment (e.g., perform directly or pay a contractor), the agencies bill FEMA, and FEMA reimburses them for the work performed using the Intra-Governmental Payment and Collection (IPAC) system. In the case of an IPAC payment to a performing agency, the IPAC funds transfer occurs immediately upon request by the agency seeking reimbursement. After the IPAC is made, FEMA requires that performing agencies provide it documentation supporting the costs incurred while performing the work under the mission assignment. FEMA can also reverse or “charge-back” the payment if it believes the agency did not provide sufficient supporting documentation. The funding and reimbursement process related to mission assignments is shown in figure 1. 
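The funding and reimbursement cycle just described can be sketched as a small state model. This is an illustrative sketch only; the class and method names (MissionAssignment, ipac_bill, review) and the dollar amounts are invented for the example and do not correspond to any FEMA system. It captures the three facts that matter later in this report: FEMA obligates the full estimate up front, IPAC transfers funds to the performing agency immediately upon billing, and FEMA recognizes an expenditure only after it approves the supporting documentation, otherwise charging the payment back.

```python
from dataclasses import dataclass

@dataclass
class MissionAssignment:
    """Illustrative model of a FEMA mission assignment (hypothetical names)."""
    agency: str
    estimate: float                  # FEMA obligates this full amount up front
    suspense: float = 0.0            # IPAC payments awaiting documentation review
    approved_expenditures: float = 0.0
    charged_back: float = 0.0

    def ipac_bill(self, amount: float) -> None:
        # Funds transfer to the performing agency immediately; FEMA parks the
        # payment in a suspense account until documentation is reviewed.
        self.suspense += amount

    def review(self, amount: float, documented: bool) -> None:
        # On review, FEMA either recognizes the expenditure or reverses
        # ("charges back") the payment through the IPAC system.
        self.suspense -= amount
        if documented:
            self.approved_expenditures += amount
        else:
            self.charged_back += amount

ma = MissionAssignment("Performing Agency X", estimate=100.0)
ma.ipac_bill(40.0)                  # agency paid immediately via IPAC
ma.review(30.0, documented=True)    # 30 approved as an expenditure
ma.review(10.0, documented=False)   # 10 charged back for missing support
print(ma.approved_expenditures, ma.charged_back, ma.suspense)  # 30.0 10.0 0.0
```

The gap between the up-front obligation (100.0) and the approved expenditure (30.0) in this toy run is the same kind of timing difference the findings below describe at billion-dollar scale.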
The federal government is not adequately tracking and reporting on the use of the $88 billion in hurricane relief and recovery funds provided thus far to 23 federal agencies in the four emergency supplemental appropriations acts. First, FEMA does not have mechanisms in place to collect and report on information from the other agencies that are performing work on its behalf through mission assignments. As a result, FEMA’s required weekly reports to the Congress have limited usefulness from a governmentwide perspective. Second, also from a governmentwide perspective, the federal government does not currently have a framework or mechanisms in place to collect and consolidate information from the 22 federal agencies in addition to FEMA that have directly received funding thus far for hurricane relief efforts and report on this information. Although each federal agency is responsible for tracking the funds it received, obligations incurred, and funds expended through its own internal tracking systems, no mechanisms are in place to consolidate this information. Therefore, it will be difficult for decision makers to determine how much federal funding has been spent and by whom, whether more may be needed, or whether too much was provided. FEMA is required to report weekly to the Appropriations Committees on the use of funds it received; however, these reports do not provide timely information from a governmentwide perspective because FEMA does not have a mechanism in place to collect and report on information from other agencies that perform work on its behalf. Specifically, when FEMA tasks another agency through a mission assignment, which is similar to an interagency agreement for goods and services, FEMA records the entire amount up front as an obligation on its reports to the Congress. 
The agency performing the task for FEMA does not record an obligation until a later date when it has actually obligated funds to carry out its mission, thereby overstating reported governmentwide obligations. The opposite is true for expenditures. The agency expends the funds, but then has to bill FEMA for reimbursement. This may happen months after the actual payment is made. FEMA does not record the expenditure on its reports to the Congress until it has received the bill from the performing agency, reviewed it, and recorded the expenditure in its accounting system, thereby understating reported governmentwide expenditures. FEMA’s weekly report as of March 29, 2006, shows that of the $36.6 billion received as of that date, it had incurred obligations totaling $29.7 billion and had made expenditures of $15.9 billion related to Hurricanes Katrina, Rita, and Wilma. Of the $29.7 billion in obligations, FEMA issued mission assignments to federal agencies totaling $8.5 billion, or 28.6 percent. The other $21.2 billion includes, for example, obligations that FEMA made for areas such as the individual and household program ($7.0 billion) and manufactured housing ($4.7 billion), which are being reviewed in some respects by other auditors. As of March 29, 2006, FEMA reported approximately $8.5 billion of obligations for mission assignments and approximately $661 million of expenditures for Hurricanes Katrina, Rita, and Wilma as shown in table 2. While FEMA reports obligations based on the dollar amount of the mission assignments it has placed with other federal agencies when they are assigned, these obligation amounts do not represent the amount of funds that the agencies have, in turn, actually obligated to perform disaster relief work on behalf of FEMA. In some cases, the agencies have obligated tens or hundreds of millions of dollars less than the amount reported by FEMA. 
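The direction of these timing differences can be shown with a minimal reconciliation sketch. The agency names and dollar amounts below are hypothetical, not drawn from FEMA's reports; the point is only that summing FEMA's full mission-assignment amounts overstates the governmentwide obligation total relative to what performing agencies have actually obligated.

```python
# Hypothetical mission-assignment amounts ($ millions). FEMA records the
# full assignment as an obligation when it is issued, while each performing
# agency records only what it has actually obligated to do the work.
fema_reported = {"Agency A": 2200, "Agency B": 190, "Agency C": 80}
agency_actual = {"Agency A": 480, "Agency B": 85, "Agency C": 45}

overstatement = sum(fema_reported[a] - agency_actual[a] for a in fema_reported)
print(f"Reported obligations exceed actual obligations by ${overstatement} million")
```

A periodic reconciliation of this form, pulling actual obligation data from performing agencies, is essentially what the recommendations at the end of this report call for.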
Our analysis of the $8.5 billion in mission assignments that FEMA reported issuing to other federal agencies identified two types of reporting problems, both of which resulted in FEMA’s obligations being overstated from a governmentwide perspective. First, some federal agencies recorded obligations in their internal tracking systems that were much less than the amount of obligations reported by FEMA. This occurred because FEMA’s recorded obligations are based on the dollar amount of the entire mission assignment. In contrast, the amount of obligations recorded by federal agencies is the amount of funds they actually obligated to perform disaster relief work. The performing agency does not incur obligations until it actually performs or contracts for the work. Four examples of this reporting problem follow: On September 28, 2005, FEMA’s report showed that obligations on mission assignments issued to DOD related to Hurricane Katrina totaled about $2.2 billion. As of March 2006, this amount had been substantially reduced twice. On November 3, 2005, FEMA amended the mission assignment and reduced the amount to about $1.7 billion, and it reduced the amount again on March 15, 2006, to about $1.1 billion. While FEMA was reporting obligations as high as $2.2 billion during this 6-month period, DOD’s reports show that it incurred only $481 million of actual obligations as of April 5, 2006, hundreds of millions of dollars less than what FEMA reported over the same period. According to a DOD official, DOD is currently reviewing the mission assignments and will return unused obligational authority to FEMA. On September 28, 2005, FEMA’s report showed that obligations on mission assignments issued to COE related to Hurricane Katrina were about $3.3 billion. Since then, this amount has increased. 
On October 20, 2005, FEMA amended and increased the mission assignment amounts to about $3.7 billion and on April 5, 2006, to about $4 billion. However, according to COE’s internal records as of April 7, 2006, it had actually obligated about $3 billion for Hurricane Katrina work, a difference of over $1 billion. Based on information provided by the Coast Guard, FEMA had recorded mission assignment obligations related to Hurricanes Katrina and Rita in the amount of nearly $192 million as of April 2006. However, at that time, the Coast Guard had only actually incurred about $85 million in obligations. Thus, the difference between what FEMA reported to the Congress and what Coast Guard information showed it had actually obligated is approximately $107 million. Based on information provided by the Department of Housing and Urban Development (HUD), at the end of March 2006, FEMA had obligated and reported approximately $83 million for HUD mission assignments related to Hurricane Katrina. However, HUD had only incurred about $47 million in obligations for work to be done under mission assignments. While HUD may eventually utilize the full amount obligated by FEMA, at that time, there was an approximately $36 million difference between the amounts FEMA reported as obligated for HUD and what HUD had actually obligated. HUD expects final reconciliation to be completed by December 2006. Second, at least three federal agencies we interviewed did not have mission assignments recorded in their internal tracking systems that were recorded in FEMA’s system. According to the officials from certain federal agencies, this occurred because the agency’s financial management office was not informed of the mission assignments. 
FEMA officials informed us that this problem likely occurred because, while the agencies’ program offices appropriately received mission assignment information from FEMA, those agencies’ program offices did not properly provide the information to their agencies’ financial management offices. Two examples of this reporting problem follow: At the Department of Health and Human Services, we noted $90 million in mission assignment obligations related to Hurricane Katrina or amendments to those obligations that were reported by FEMA as of January 18, 2006, but not recorded by the department’s financial management office as of February 24, 2006. The department told us that these mission assignments or amendments had been issued by FEMA, but had not been received by the department’s program or financial management offices. After we pointed out the discrepancies, the two agencies reconciled the differences. In another case, the Environmental Protection Agency had a similar situation involving $11.5 million in mission assignments and amendments related to Hurricane Katrina for which it did not record obligations as of March 2006 because the financial management office was unaware the mission assignments had been made by FEMA. According to the Environmental Protection Agency, for $10 million of the $11.5 million in mission assignments, not only was the financial management office unaware but the agency had never been informed that the mission assignment had been issued by FEMA. A different set of issues arises with regard to expenditure data. Because of the nature and timing of payments FEMA makes to performing agencies, FEMA’s reported expenditures from the Disaster Relief Fund do not present an accurate status of federal spending for hurricane relief and recovery from a governmentwide perspective. This is explained in part by problems with the timeliness and adequacy of billings to FEMA by other agencies. 
As previously explained, FEMA reimburses performing agencies for work they perform on behalf of FEMA in accordance with the mission assignment agreements. FEMA requires that performing agencies (1) bill it within 90 days after completion or upon termination of a mission assignment, and (2) provide a certain level of documentation for its review in order for the billings to be approved. FEMA does not recognize reimbursements to other agencies as expenditures in its accounting system (and therefore in its reports to the Congress) until this approval has occurred. From a governmentwide perspective, this process results in FEMA’s expenditures being understated. As of March 29, 2006, FEMA reported about $661 million of expenditures to agencies performing mission assignments for Hurricanes Katrina, Rita, and Wilma (see table 2). However, performing agencies’ internal tracking systems showed a significantly higher level of expenditures on their mission assignments. The process FEMA uses for reimbursing performing agencies creates timing differences between FEMA’s and the performing agencies’ records. As a result, FEMA’s reported expenditures are less than actual expenditures performing agencies have made in support of FEMA’s hurricane relief and recovery efforts. In the case of a mission assignment, a performing agency would recognize an expenditure when that agency pays costs (liquidates obligations) to employees, contractors, or other outside entities for work performed. However, FEMA does not recognize the reimbursement of these costs as an expenditure until it has reviewed and approved a bill from the performing agency. With the exception of COE, reimbursements to the performing agencies are made using the IPAC system. 
While the IPAC funds transfer occurs immediately upon request by the agency seeking reimbursement, in FEMA’s accounting records the IPAC transaction would be reflected as a suspense account transaction until FEMA has received and approved the supporting documentation for the IPAC billing. Therefore, by virtue of the timing delays, FEMA’s reported expenditures would be less than expenditures made and reported by performing agencies and a user of FEMA’s report could incorrectly infer that a particular agency has received tasks from FEMA but has not spent any of the funds. Thus, the cost of actual work performed is better reflected by the performing agencies. Two examples follow: FEMA’s report as of March 29, 2006, showed that approved mission assignment expenditures (cash reimbursements) related to Hurricane Katrina were about $210 million for DOD. However, DOD’s report as of April 5, 2006, showed that it had already received $324 million in reimbursement from FEMA for mission assignments related to Hurricane Katrina. The U.S. Forest Service had not billed FEMA for any of its work done under mission assignments even though the agency reported that it had made close to $170 million in expenditures related to its Hurricane Katrina mission assignments as of January 31, 2006. Accordingly, FEMA reported no expenditures for this agency in its weekly report since FEMA had not yet approved any billings. FEMA’s billing instructions state that reimbursement requests can be forwarded to FEMA monthly, regardless of the amount. Also, agencies should submit the final bill no later than 90 days after completion or upon termination of the mission assignment. The Forest Service, however, was not doing this, and as a result, FEMA did not report any expenditures for mission assignment work performed by the Forest Service as of March 29, 2006, even though the Forest Service had spent about $170 million. 
The Forest Service explained that it billed FEMA in March and June 2006 and planned to issue additional bills in August and September 2006. We noted that there had been some billing activity reported by FEMA subsequent to March 29, 2006. Aside from the timing issues discussed above, some performing agencies have not provided billing documentation that meets FEMA’s requirements to support their reimbursements for work performed on mission assignments. Although performing agencies using the IPAC system receive funds immediately upon requesting reimbursement, if upon review of supporting reimbursement documents, FEMA officials determine that some amounts are incorrect or unsupported, FEMA may retrieve or “charge back” the monies from these agencies through the IPAC system. For example, travel charges should be supported by a breakdown by object class with names, period of performance dates, and amounts. Failure to submit this documentation may result in FEMA charging back the agency for the related mission assignment billing. FEMA’s records as of May 15, 2006, showed that FEMA had “charged back” about $267 million from performing agencies for costs billed to FEMA for mission assignments related to Hurricanes Katrina, Rita, and Wilma. About $260 million, or over 97 percent, of these charge-backs involved five agencies: the Department of Transportation ($102 million), DOD ($57 million), the Environmental Protection Agency ($45 million), the Federal Protective Service within DHS ($32 million), and the Department of Health and Human Services ($24 million). Consistent with its practice of only reporting approved expenditures, these amounts were not recognized as expenditures by FEMA, even though the performing agencies claim they have expended those amounts. 
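The concentration of charge-backs among five agencies can be checked directly from the figures just cited; the short sketch below recomputes the five-agency share of the $267 million total.

```python
# Charge-backs (in $ millions) as of May 15, 2006, per FEMA's records,
# for the five agencies cited in the text.
top_five = {
    "Department of Transportation": 102,
    "DOD": 57,
    "Environmental Protection Agency": 45,
    "Federal Protective Service (DHS)": 32,
    "Department of Health and Human Services": 24,
}
total_chargebacks = 267  # all performing agencies combined

top_five_sum = sum(top_five.values())
share = top_five_sum / total_chargebacks
print(f"${top_five_sum} million, {share:.1%} of all charge-backs")
```

This reproduces the report's figures: $260 million, or just over 97 percent, of the charge-backs involved those five agencies.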
In addition, until FEMA requested the charge-backs, the billings would have been in a FEMA suspense account and would have temporarily depleted monies from the Disaster Relief Fund, since the agencies had already received reimbursement through the IPAC system. At least one agency, DOD, has indicated that it is trying to gather additional supporting documentation for the $57 million that FEMA charged back. Therefore, at least part of these charged-back funds may be reported as expenditures by FEMA at some point in the future. If the agency cannot provide FEMA the needed supporting documentation, the agency may not be reimbursed and thus will be required to use its own appropriations. FEMA is also experiencing billing problems with COE, which does not use the IPAC system. According to FEMA personnel, COE had billing and documentation problems in the past and was not permitted to use the IPAC system for transactions with DHS. While COE was working on gaining access to the IPAC system prior to Hurricane Katrina, this process was put on hold, and instead COE must manually submit supporting documentation before FEMA reimburses its mission assignment costs. This allows for a thorough review by FEMA but has also led to payment delays. As of February 6, 2006, COE’s internal accounts receivable report showed that it had not received reimbursement for about $1.2 billion of bills submitted to FEMA for Hurricane Katrina mission assignments, even though COE officials stated that they had sent documentation supporting the majority of the bills. Of that amount, about $610 million, or over half of the total, was over 60 days old. According to FEMA officials, as of April 7, 2006, FEMA had not received documentation supporting about $800 million of the $1.2 billion of outstanding accounts receivable on COE’s records. None of the $1.2 billion has been reported as expenditures by FEMA, although COE reports these amounts as expenditures. 
From a governmentwide perspective, since Hurricane Katrina made landfall, about $88 billion has been appropriated to 23 federal agencies through four emergency supplemental appropriations. We found that no one agency or central collection point exists to compile and report on how these funds are being spent. Without a framework and mechanisms in place to collect and consolidate information from these agencies and report it on a periodic basis, decision makers will not have complete and consistent information on the uses of the funding that has been provided thus far. Information on the obligations and expenditures made on the actual relief and recovery effort would help decision makers determine, for example, whether (1) additional funds should be provided for the relief and recovery work, (2) funds already provided could be deemed excess and used for other disaster relief and recovery work, (3) funds should be rescinded, or (4) duplicate programs are providing similar assistance. As a result, to have governmentwide information on actual obligations incurred and expenditures made on the relief and recovery effort, the agencies would have to use their own internal tracking systems to extract this information and provide it to a central point, where the data could be consolidated and reported. The ability to separately track and report on these funds is important to help ensure better accountability, to clearly identify the status of funding provided in direct response to these hurricanes at both the individual agency and governmentwide levels, and to provide additional transparency so that hurricane victims, affected states, and American taxpayers know how the government is spending these funds. 
At the same time, we recognize the substantial challenge in balancing the need to get money out quickly to those who are actually in need against the need to sustain public confidence in disaster programs by taking all possible steps to minimize fraud and abuse. Although each federal agency is responsible for tracking the funds it received, obligations incurred, and funds expended through its own internal tracking systems, no mechanisms are in place to consolidate and report on this information. Of the approximately $88 billion provided as of June 2006, FEMA received about $42.6 billion ($66 billion appropriated less the $23.4 billion rescinded) for the Disaster Relief Fund and 22 other agencies received the remaining $45.4 billion. Once these funds are appropriated, they are merged into, and commingled with, existing appropriation accounts. OMB Circular No. A-11 requires agencies to report obligations and outlays on a quarterly basis at the appropriation level; however, those reports on budget execution and budgetary resources do not call for separately identifying amounts on a programmatic basis, such as hurricane relief and recovery efforts. Thus, reporting under this Circular will not provide the information needed to monitor the status of hurricane-related funding. Although FEMA was required to provide weekly reports to the Congress on obligation and expenditure information on the $42.6 billion it received (though with limited usefulness, as discussed previously), most of the other 22 agencies that received over $45 billion would only be responsible for tracking this information internally. While there are some reporting requirements included in the emergency supplemental appropriation acts, overall reporting requirements differ greatly. Also, the reporting requirements do not call for consolidating information on obligations and expenditures on a governmentwide basis and, therefore, do not facilitate governmentwide reporting on hurricane-related spending. 
The reporting requirements that were included for the various agencies ranged from very detailed reporting to no reporting at all. For example, while FEMA was required to report obligations and expenditures, 16 other federal agencies did not have any reporting requirements. See appendix II for more information on the reporting requirements included in the first four emergency supplemental appropriations acts. Given that consolidated governmentwide reporting will require that financial information be compiled from 23 different agencies, an entity that regularly collects and compiles information from different agencies, such as OMB or the Department of the Treasury, would likely be in the best position for requesting this information and preparing consolidated governmentwide reporting on hurricane-related funding. Other options would be for either FEMA or the Office of the Federal Coordinator for Gulf Coast Rebuilding to compile this information. Success in the rebuilding efforts of the Gulf Coast area is critical. The federal government has already invested billions of dollars for this effort with more likely to come. Although FEMA is required to report on obligations and expenditures, these reports do not provide timely information from a governmentwide perspective. In addition, there is no framework or mechanisms in place to collect and consolidate information, and to report on the $88 billion in hurricane relief and recovery funds provided thus far to 23 federal agencies in the four emergency supplemental appropriations acts on a governmentwide basis. The government’s progress in the rebuilding efforts will be difficult to measure if decision makers do not know how much has been spent, what for, how much has been obligated but not yet spent, and how much more is still available. 
Without consistent, reliable, and timely governmentwide information on the use of this funding, the agencies and the Congress could lose visibility over these funds and not know the extent to which they are being used to support hurricane relief and recovery efforts. With rebuilding efforts likely to take many years, it is important that the federal government fulfill its role as steward of taxpayer funds and provide transparency to the affected states and victims, and account for and report on all funds received for the hurricane-related efforts. To improve the information on the status of hurricane relief and recovery funds provided in FEMA’s weekly reports to the Appropriations Committees from a governmentwide perspective, we recommend that the Secretary of Homeland Security direct the Director of FEMA to take the following four actions: Explain in the weekly reports how FEMA’s reported obligations and expenditures for mission assignments do not reflect the status from a governmentwide perspective. On an established basis (e.g., monthly or quarterly), request and include actual obligation and expenditure data from agencies performing mission assignments. Include in the weekly report amounts reimbursed to other agencies that are in suspense because FEMA has not yet reviewed and approved the documentation supporting the expenditures. Reiterate to agencies performing mission assignments its policies on (1) the detailed information required in supporting documentation for reimbursements, and (2) the timeliness of agency billings. To help ensure better accountability, provide additional transparency, and clearly identify the status of the hurricane-related funding provided by emergency supplemental appropriations at both the individual federal agency level as well as the governmentwide level, we recommend that the Director, Office of Management and Budget, establish a framework for governmentwide reporting on the status of the hurricane-related funding. 
OMB could either collect and consolidate this information itself or designate another appropriate agency, such as the Department of the Treasury, to do so and report to the Appropriations Committees on a periodic basis. We requested comments on a draft of this report from the Secretary of Homeland Security and the Director of OMB. These comments are reprinted in appendixes III and IV, respectively. While DHS concurred with our recommendations, it also stated that it believes our recommendation to periodically request and include actual obligation and expenditure data from agencies performing mission assignments is subsumed by our recommendation to OMB to establish a framework for governmentwide reporting on the status of hurricane-related funding. We believe our recommendation is still valid for FEMA since, as stated in the agency’s response, its mission assignments are a significant component in the establishment of a framework for governmentwide reporting on the status of hurricane-related funding. However, as the intent of our recommendation is to help ensure the Congress is receiving complete, timely, useful, and reliable reports, we agree that other alternatives could be considered to achieve the same objectives. OMB agreed that there should be clear accountability and transparency on the spending of emergency funds for hurricane relief and indicated it will fully consider our recommendation to establish a new framework for governmentwide reporting on the status of disaster-related funding. We also provided excerpts of the report to those agencies cited in examples for their review. They provided technical comments, and we made revisions as appropriate. We are sending copies of this report to other interested congressional committees and to affected federal agencies. Copies will be made available to others upon request. In addition, this report will also be available at no charge on GAO’s home page at http://www.gao.gov. 
If you or your staff have any questions regarding this report, please contact me at (202) 512-9095 or at williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix V. To determine whether the federal government was tracking and reporting on the use of funding provided in the four emergency supplemental appropriations acts, we obtained and analyzed the four emergency supplemental appropriation documents and conference reports. We also obtained the reports prepared by the Federal Emergency Management Agency (FEMA) and the Army Corps of Engineers (COE) in response to the second emergency supplemental appropriation act. We did not obtain the reports required by the third or fourth emergency supplemental appropriations acts since this was a new requirement for the federal agencies. In addition, we obtained and analyzed guidance on reporting of estimates of hurricane-related funding budget authority, outlays, and receipts, issued by the Office of Management and Budget (OMB) in 2005 and discussed this guidance with officials from OMB. To determine whether FEMA’s reports to the Appropriations Committees required by the second emergency supplemental appropriation act provided timely and useful information, we obtained and analyzed the weekly reports prepared by FEMA, specifically focusing on the obligations and expenditures reported for mission assignments to agencies performing disaster relief work related to Hurricane Katrina on behalf of FEMA because they have governmentwide implications. 
We met with FEMA officials to discuss (1) the definitions of the terms obligations and expenditures used in the report, (2) the process of FEMA issuing mission assignments to agencies and the obligation of FEMA’s funds related to the mission assignments, and (3) the process of agencies seeking reimbursement for goods and services provided in response to the disaster relief work, including FEMA’s billing procedures. We also obtained and analyzed certain federal agencies’ reports that provide information on mission assignments, obligations incurred and expenditures made in performing disaster relief work on behalf of FEMA, amount of bills submitted to FEMA, and amount of bills paid by FEMA. Because the majority of FEMA’s mission assignment obligations related to Hurricane Katrina, we focused our review at the agencies on the Hurricane Katrina mission assignments. We met with officials from certain federal agencies to discuss the information contained in these reports. In performing our work, we obtained information from OMB, Department of the Treasury, FEMA, Department of Defense, COE, Department of Transportation, Environmental Protection Agency, Department of Health and Human Services, U.S. Forest Service, General Services Administration, and Department of Housing and Urban Development. To assess the reliability of the data, we interviewed officials knowledgeable about the data and determined that the data were sufficiently reliable for the purposes of this report. We conducted our work from October 2005 through June 2006 in accordance with generally accepted government auditing standards. We provided a draft of this report to the Department of Homeland Security (DHS) and OMB for comment. DHS and OMB provided written comments, which are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendixes III and IV, respectively. We also provided excerpts of the report to those agencies cited in examples for their review.
They provided technical comments, and we made revisions as appropriate. The four emergency supplemental appropriations acts enacted as of June 2006 provided funds to 23 federal agencies for the hurricane relief and recovery effort and included different reporting requirements. In addition, of the 23 agencies receiving appropriations in the four emergency supplemental appropriations acts, 16 agencies did not have any reporting requirements. The first two emergency supplemental appropriations acts provided funding to the Federal Emergency Management Agency (FEMA), Department of Defense (DOD), and Army Corps of Engineers (COE), and included the following reporting requirements: The first emergency supplemental appropriation act did not contain any requirements for FEMA to report on the $10 billion it received. The second emergency supplemental appropriation act required the Secretary of Homeland Security to provide, at a minimum, a weekly report to the Appropriations Committees detailing the allocation and obligation of the $50 billion in appropriated funds it received for Hurricane Katrina in the second emergency supplemental appropriation act. The fiscal year 2006 Department of Homeland Security Appropriations Act further explained that this weekly report was to include other information such as obligations, allocations, and expenditures, categorized by agency and state. COE was not provided any funding in the first emergency supplemental appropriation. The second emergency supplemental appropriation act required COE to provide a weekly report to the Appropriations Committees detailing the allocation and obligation of $400 million in appropriated funds it received under that act. There was no requirement for DOD to report on the $1.9 billion it received in the first and second emergency supplemental appropriations acts. 
The third emergency supplemental appropriation act provided $29 billion directly to 20 individual federal agencies and rescinded approximately $23.4 billion from the amount initially appropriated to FEMA’s Disaster Relief Fund in September 2005. The third emergency supplemental appropriation act included differing reporting requirements for each of the 20 federal agencies ranging from none to very detailed. Illustrative examples from the third emergency supplemental appropriation act and the conference report accompanying this legislation include the following specific reporting requirements: The third emergency supplemental appropriation act required each state receiving monies through the Community Development Fund from the Department of Housing and Urban Development (HUD) to report quarterly to the Appropriations Committees for all awards and uses of funds. The supplemental appropriation language also required some additional reporting from HUD, such as reporting quarterly to the Appropriations Committees with regard to all steps taken to prevent fraud and abuse of funds made available. The conference report accompanying the third emergency supplemental appropriation act directed the Secretary of Defense to submit quarterly reports to the congressional defense committees including, among other things, the expenditures of funds it received for hurricane relief and recovery operations. This did not include retroactive requirements for the first and second emergency supplemental appropriations. The conference report also directed the Secretary of Agriculture to provide quarterly reports including, among other things, the expenditures of funds received for hurricane relief. It also requested the Department of Education to submit a report by March 1, 2006, on the obligation and allocation of funds it received for hurricane relief and provided to assist college students under the Higher Education Act. 
The reporting requirements for some agencies were more detailed than others. Also, these reporting requirements do not cover funding authority of approximately $8.5 billion that agencies received through FEMA’s mission assignment process for Hurricanes Katrina, Rita, and Wilma as of March 29, 2006. The fourth emergency supplemental appropriation act provided approximately $20.1 billion directly to 22 individual federal agencies. This legislation did not include any new reporting requirements for the agencies receiving funding; however, the act contained reporting requirements for HUD that were consistent with the requirements outlined in the third emergency supplemental appropriation act. In addition to the contact named above, the following individuals also made significant contributions to this report: Christine Bonham, Richard Cambosos, Thomas Dawson, Francine DelVecchio, Heather Dunahoo, Abe Dymond, Gabrielle Fagan, Casey Keplinger, Stephen Lawrence, Greg Pugnetti, Lori Ryza, and Natalie Schneider. Other contributions were made by Felicia Brooks, Eric Essig, Lauren Fassler, Barry Grinnell, John Hong, James Maziasz, Patrick McCray, Shalin Pathak, and Chanetta Reed.

Hurricane Katrina devastated the Gulf Coast region of the United States and caused billions of dollars in damage. Hurricanes Rita and Wilma further exacerbated damage to the region. The Federal Emergency Management Agency (FEMA), within the Department of Homeland Security (DHS), was tasked with the primary role of managing the federal relief and recovery efforts. This review was performed under the Comptroller General’s authority because of widespread congressional interest in the response to this disaster. GAO examined whether the federal government was adequately tracking and reporting on the use of the funding provided in the four emergency supplemental appropriations acts enacted as of June 2006.
GAO analyzed the emergency supplemental appropriations acts and conference reports, reviewed FEMA's required weekly reports, and interviewed federal agency officials. FEMA's required weekly reports to the Appropriations Committees on the use of funds it received do not provide timely information from a governmentwide perspective because FEMA does not have a mechanism to report on the financial activity of the agencies performing work on its behalf. Specifically, when FEMA tasks another federal agency through a mission assignment, FEMA records the entire amount upfront as an obligation, whereas the performing agency does not record an obligation until a later date, thereby overstating reported governmentwide obligations. The opposite is true for expenditures. The performing agency expends the funds, but then bills FEMA for reimbursement. FEMA does not record the expenditure until it has received the bill and reviewed it, thereby understating reported governmentwide expenditures. As a result, while FEMA is reporting as required, from a governmentwide perspective, FEMA's reported obligations are overstated and expenditures are understated. The federal government also does not have a governmentwide framework or mechanisms in place to collect and consolidate information from the individual federal agencies that received emergency supplemental appropriations for hurricane relief and recovery efforts and report on this information. About $88 billion has been appropriated to 23 different federal agencies through four emergency supplemental appropriations acts; however, no one agency or central collection point exists to compile and report on how these funds are being spent. Decision makers need this consolidated information to determine how much federal funding has been spent and by whom, whether more may be needed, or whether too much has been provided. 
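The timing mismatch between FEMA’s records and those of the agencies performing its mission assignments can be illustrated with a small numeric sketch. All dollar figures below are hypothetical and chosen only to show the direction of the distortion; they are not amounts from the report.

```python
# Illustrative sketch (hypothetical figures): why, at a point in time during a
# mission assignment's life cycle, FEMA's reported obligations overstate, and
# its reported expenditures understate, the governmentwide totals.

# FEMA tasks another agency via a $10 million mission assignment and records
# the entire amount as an obligation up front.
fema_obligation = 10_000_000

# The performing agency records obligations only as it incurs costs, spends
# the funds itself, and bills FEMA for reimbursement afterward.
performing_agency_obligation = 4_000_000   # costs incurred so far
performing_agency_expenditure = 4_000_000  # amounts actually spent so far
fema_recorded_expenditure = 1_000_000      # portion billed, reviewed, and paid by FEMA

# A roll-up based on FEMA's figures double-counts the gap between the
# up-front obligation and the performing agency's actual costs to date.
obligation_overstatement = fema_obligation - performing_agency_obligation

# Conversely, FEMA's expenditure figure misses work already performed but
# not yet billed to and paid by FEMA.
expenditure_understatement = performing_agency_expenditure - fema_recorded_expenditure

print(f"Obligations overstated by:   ${obligation_overstatement:,}")
print(f"Expenditures understated by: ${expenditure_understatement:,}")
```

The sketch shows only the accounting timing effect; it does not model the review and billing procedures FEMA applies before recording an expenditure.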
The ability to separately track and report on these funds is important to help ensure better accountability and clearly identify the status of funding provided in direct response to these hurricanes at both the individual federal agency level and the governmentwide level. Also, it is important to provide additional transparency so that hurricane victims, affected states, and American taxpayers know how these funds are being spent.
During fiscal years 2002 through 2008, the United States spent approximately $16.5 billion to train and equip the Afghan army and police forces in order to transfer responsibility for the security of Afghanistan from the international community to the Afghan government. As part of this effort, Defense—through the U.S. Army and Navy—purchased over 242,000 small arms and light weapons, at a cost of about $120 million. As illustrated in figure 1, these weapons include rifles, pistols, shotguns, machine guns, mortars, and launchers for grenades, rockets, and missiles. In addition, the Combined Security Transition Command-Afghanistan (CSTC-A) has reported that 21 other countries provided about 135,000 weapons for ANSF between June 2002 and June 2008, which they have valued at about $103 million. This brings the total number of weapons Defense reported obtaining for ANSF to over 375,000. CSTC-A in Kabul, which is a joint service, coalition organization under the command and control of Defense’s U.S. Central Command, is primarily responsible for training and equipping ANSF. As part of that responsibility, CSTC-A receives and stores weapons provided by the United States and other international donors and distributes them to ANSF units. In addition, CSTC-A is responsible for monitoring the use of U.S.-procured weapons and other sensitive equipment. Lapses in weapons accountability occurred throughout the supply chain, including when weapons were obtained, transported to Afghanistan, and stored at two central depots in Kabul. Defense has accountability procedures for its own weapons, including (1) serial number registration and reporting and (2) 100 percent physical inventories of weapons stored in depots at least annually. However, Defense failed to provide clear guidance to U.S. personnel regarding what accountability procedures applied when handling weapons obtained for the ANSF. We found that the U.S.
Army and CSTC-A did not maintain complete records for an estimated 87,000—or about 36 percent—of the 242,000 weapons Defense procured and shipped to Afghanistan for ANSF. Specifically: For about 46,000 weapons, the Army could not provide us serial numbers to uniquely identify each weapon provided, which made it impossible for us to determine their location or disposition. For about 41,000 weapons with serial numbers recorded, CSTC-A did not have any records of their location or disposition. Furthermore, CSTC-A did not maintain reliable records, including serial numbers, for any of the 135,000 weapons it reported obtaining from international donors from June 2002 through June 2008. Although weapons were in Defense’s control and custody until they were issued to ANSF units, accountability was compromised during transportation and storage. Organizations involved in the transport of U.S.-procured weapons into Kabul by air did not communicate adequately to ensure that accountability was maintained over weapons during transport. In addition, CSTC-A did not maintain complete and accurate inventory records for weapons at the central storage depots and allowed poor security to persist. Until July 2008, CSTC-A did not track all weapons at the depots by serial number and conduct routine physical inventories. Without such regular inventories, it is difficult for CSTC-A to maintain accountability for weapons at the depots and detect weapons losses. Moreover, CSTC-A could not identify and respond to incidents of actual or potential compromise, including suspected pilferage, due to poor security and unreliable data systems. Illustrating the importance of physical inventories, less than 1 month after completing its first full weapons inventory, CSTC-A officials identified the theft of 47 pistols intended for ANSF.
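Serial-number tracking makes losses of this kind detectable through a straightforward reconciliation of shipment records against depot inventory and issue records. A minimal sketch of that comparison follows; the serial numbers are invented for illustration and do not correspond to any actual records.

```python
# Hypothetical sketch of the reconciliation that serial-number registration
# and routine physical inventories make possible: weapons recorded as shipped
# but neither on hand at the depot nor documented as issued to an ANSF unit
# are candidates for loss or theft. All serial numbers are invented.

shipped = {"SN-1001", "SN-1002", "SN-1003", "SN-1004", "SN-1005"}

# Results of a physical depot inventory plus documented issues to units.
on_hand_at_depot = {"SN-1001", "SN-1003"}
issued_to_units = {"SN-1004"}

accounted_for = on_hand_at_depot | issued_to_units
unaccounted = shipped - accounted_for   # flag for investigation

# Inventory or issue records with no matching shipment would indicate a
# bookkeeping error rather than a missing weapon.
phantom = accounted_for - shipped

print(f"Unaccounted-for serial numbers: {sorted(unaccounted)}")
print(f"Records with no matching shipment: {sorted(phantom)}")
```

Without serial numbers on the shipment side — as was the case for about 46,000 of the U.S.-procured weapons — no comparison of this kind is possible.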
During our review, Defense indicated that it would begin recording serial numbers for all weapons it obtains for ANSF, and CSTC-A established procedures to track weapons by serial number in Afghanistan. It also began conducting physical inventories of the weapons stored at the central depots. However, CSTC-A officials stated that their continued implementation of these new accountability procedures was not guaranteed, considering staffing constraints and other factors. Despite CSTC-A training efforts, ANSF units cannot fully safeguard and account for weapons, placing weapons CSTC-A has provided to ANSF at serious risk of theft or loss. In February 2008, CSTC-A acknowledged that it was issuing equipment to Afghan National Police units before providing training on accountability practices and ensuring that effective controls were in place. Recognizing the need for weapons accountability in ANSF units, Defense and State deployed hundreds of U.S. trainers and mentors to, among other things, help the Afghan army and police establish equipment accountability practices. In June 2008, Defense reported to Congress that it was CSTC-A’s policy not to issue equipment to ANSF without verifying that appropriate supply and accountability procedures are in place. While CSTC-A has established a system for assessing the logistics capacity of ANSF units, it has not consistently assessed or verified ANSF’s ability to properly account for weapons and other equipment. Contractors serving as mentors have reported major ANSF accountability weaknesses. Although these reports did not address accountability capacities in a consistent manner that would allow a systematic or comprehensive assessment of all units, they highlighted the following common problems relating to weapons accountability. Lack of functioning property book operations. 
Many Afghan army and police units did not properly maintain property books, which are fundamental tools used to establish equipment accountability and are required by Afghan ministerial decrees. Illiteracy. Widespread illiteracy among Afghan army and police personnel substantially impaired equipment accountability. For example, a mentor noted that illiteracy in one Afghan National Army corps was directly interfering with the ability of supply section personnel to implement property accountability processes and procedures, despite repeated training efforts. Poor security. Some Afghan National Police units did not have facilities adequate to ensure the physical security of weapons and protect them against theft in a high-risk environment. In a northern province, for example, a contractor reported that the arms room of one police district office was behind a wooden door that had only a miniature padlock, and that this represented the same austere conditions as in the other districts. Unclear guidance. Afghan government logistics policies were not always clear to Afghan army and police property managers. Approved Ministry of Interior policies outlining material accountability procedures were not widely disseminated, and many police logistics officers did not recognize any of the logistical policies as rule. Additionally, a mentor to the Afghan National Army told us that despite new Ministry of Defense decrees on accountability, logistics officers often carried out property accountability functions using Soviet-style accounting methods and that the Ministry was still auditing army accounts against those defunct standards. Corruption. Reports of alleged theft and unauthorized resale of weapons are common, including one case in which an Afghan police battalion commander in one province was allegedly selling weapons to enemy forces. Desertion. Desertion in the Afghan National Police has also resulted in the loss of weapons. 
For example, contractors reported that Afghan Border Police officers at one province checkpoint deserted to ally themselves with enemy forces and took all their weapons and two vehicles with them. In July 2007, Defense began issuing night vision devices to the Afghan National Army. These devices are considered dangerous to the public and U.S. forces in the wrong hands, and Defense guidance calls for intensive monitoring of their use, including tracking by serial number. However, we found that CSTC-A did not begin monitoring the use of these sensitive devices until October 2008—about 15 months after issuing them. Defense and CSTC-A attributed the limited monitoring of these devices to a number of factors, including a shortage of security assistance staff and expertise at CSTC-A, exacerbated by frequent CSTC-A staff rotations. After we brought this to CSTC-A’s attention, it conducted an inventory and reported in December 2008 that all but 10 of the 2,410 night vision devices issued had been accounted for. We previously reported that Defense cited significant shortfalls in the number of trainers and mentors as the primary impediment to advancing the capabilities of ANSF. According to CSTC-A officials, as of December 2008, CSTC-A had only 64 percent of the nearly 6,700 personnel it required to perform its overall mission, including only about half of the over 4,000 personnel needed to mentor ANSF units. In summary, we have serious concerns about the accountability for weapons that Defense obtained for ANSF through U.S. procurements and international donations. First, we estimate that Defense did not systematically track over half of the weapons intended for ANSF. This was primarily due to staffing shortages and Defense’s failure to establish clear accountability procedures for these weapons while they were still in U.S. custody and control. 
Second, ANSF units could not fully safeguard and account for weapons Defense has issued to them, despite accountability training provided by both Defense and State. Poor security and corruption in Afghanistan, unclear guidance from Afghan ministries, and a shortage of trainers and mentors to help ensure that appropriate accountability procedures are implemented have reportedly contributed to this situation. In the report we are releasing today we make several recommendations to help improve accountability for weapons and other sensitive equipment that the United States provided to ANSF. In particular, we recommend that the Secretary of Defense (1) establish clear accountability procedures for weapons while they are in the control and custody of the United States, including tracking all weapons by serial number and conducting routine physical inventories; (2) direct CSTC-A to specifically assess and verify each ANSF unit’s capacity to safeguard and account for weapons and other sensitive equipment before providing such equipment, unless a specific waiver or exception is granted; and (3) devote adequate resources to CSTC-A’s effort to train, mentor, and assess ANSF in equipment accountability matters. In commenting on a draft of our report, Defense concurred with our recommendations and has begun to take corrective action. In January 2009, Defense directed the Defense Security Cooperation Agency to lead an effort to establish a weapons registration and monitoring system in Afghanistan, consistent with controls mandated by Congress for weapons provided to Iraq. If Defense follows through on this plan and, in addition, clearly requires routine inventories of weapons in U.S. custody and control, our concern about the lack of clear accountability procedures will be largely addressed. According to Defense, trainers and mentors are assessing the ability of ANSF units to safeguard and account for weapons. 
For the Afghan National Army, mentors are providing oversight at all levels of command of those units receiving weapons. For the Afghan National Police, most weapons are issued to units that have received instruction on equipment accountability as part of newly implemented training programs. We note that at the time of our review, ANSF unit assessments did not systematically address each unit’s capacity to safeguard and account for weapons in its possession. We also note that Defense has cited significant shortfalls in the number of personnel required to train and mentor ANSF units. Unless these matters are addressed, we are not confident the shortcomings we reported will be adequately addressed. Defense also indicated that it is looking into ways of addressing the staffing shortfalls that hamper CSTC-A’s efforts to train, mentor, and assess ANSF in equipment accountability matters. However, Defense did not state how or when additional staffing would be provided. Mr. Chairman and members of the subcommittee, this concludes my prepared statement. I will be happy to answer any questions you may have. To address our objectives, we reviewed documentation and interviewed officials from Defense, U.S. Central Command, CSTC-A, and the U.S. Army and Navy. On the basis of records provided to us, we compiled detailed information on weapons reported as shipped to CSTC-A in Afghanistan by the United States and other countries from June 2002 through June 2008. We traveled to Afghanistan in August 2008 to examine records and meet with officials at CSTC-A headquarters, visit the two central depots where the weapons provided for ANSF are stored, and meet with staff at an Afghan National Army unit that had received weapons. While in Afghanistan, we attempted to determine the location or disposition of a sample of weapons. Our sample was drawn randomly from a population of 195,671 U.S.-procured weapons shipped to Afghanistan for which Defense was able to provide serial numbers. 
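The projection from a random sample of serial numbers to a population-wide estimate works roughly as follows. The sample size and sample result below are invented for illustration and are not GAO’s actual sample data; only the population figure comes from the text above.

```python
# Illustrative sketch (invented sample figures) of projecting a random-sample
# finding to a population estimate with a normal-approximation confidence
# interval for the proportion.
import math

population = 195_671      # weapons with recorded serial numbers (from the report)
sample_size = 400         # hypothetical sample size
sample_unaccounted = 84   # hypothetical sample weapons with no location records

p_hat = sample_unaccounted / sample_size
estimate = round(p_hat * population)

# 95% confidence interval for the proportion (normal approximation).
se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"Estimated unaccounted-for weapons: {estimate:,} "
      f"({p_hat:.0%}; 95% CI {low:.1%} to {high:.1%})")
```

A statistical sample of this kind supports the kind of population-level statement made in the report (e.g., an estimated 87,000, or about 36 percent, lacked complete records) without tracing every weapon individually.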
We used the results of our sampling to reach general conclusions about CSTC-A’s ability to account for weapons purchased by the United States for ANSF. We also discussed equipment accountability with cognizant officials from the Afghan Ministries of Defense and Interior, the U.S. Embassy, and contractors involved in building ANSF’s capacity to account for and manage its weapons inventory. We performed our work from November 2007 through January 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. For questions regarding this testimony, please contact Charles Michael Johnson, Jr. at (202) 512-7331 or johnsoncm@gao.gov. Albert H. Huntington III, Assistant Director; James Michels; Emily Rachman; Mattias Fenton; and Mary Moutsos made key contributions in preparing this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

This testimony discusses the GAO report on accountability for small arms and light weapons that the United States has obtained and provided or intends to provide to the Afghan National Security Forces (ANSF)—the Afghan National Army and the Afghan National Police. Given the unstable security conditions in Afghanistan, the risk of loss and theft of these weapons is significant, which makes this hearing particularly timely.
This testimony today focuses on (1) the types and quantities of weapons the Department of Defense (Defense) has obtained for ANSF, (2) whether Defense can account for the weapons it obtained for ANSF, and (3) the extent to which ANSF can properly safeguard and account for its weapons and other sensitive equipment.
Since his inauguration in December 2006, President Felipe Calderon has mobilized the Mexican military and law enforcement in a series of large-scale counternarcotics operations throughout the country. These efforts have targeted areas, particularly along the U.S.-Mexican border, where DTOs have exerted the most influence. By pursuing and detaining the leaders of these criminal organizations, Mexican authorities have disrupted DTOs’ internal power structures and territorial control. The DTOs have countered government pressure with increased violence against law enforcement entities. The government’s efforts to disrupt drug trafficking operations also appear to have intensified conflicts among DTOs over access to lucrative trafficking routes to the United States. The result has been an escalation of drug-related assassinations, kidnappings, and other violent crimes. While the majority of the casualties have been individuals involved in the drug trade in some way, victims also include law enforcement officers, journalists, and innocent bystanders. Gun violence in Mexico has increased dramatically in the last 2 years, with the number of drug-related murders more than doubling from around 2,700 in 2007 to over 6,200 in 2008. The drug-related murder rates for the first quarter of 2009 remain high, and thus the yearly total for 2009 will likely be close to the 2008 level. See figure 1. Growing criminal activity in Mexico, particularly in communities across the Southwest border, has raised concerns that the violence might spill over to the United States. Since 2006, DOJ’s annual National Drug Threat Assessment has reported Mexican DTOs and criminal groups are the most influential drug traffickers and the greatest organizational threat to the United States. Law enforcement reporting indicates Mexican DTOs maintain drug distribution networks or supply drugs to distributors in at least 230 U.S. cities. See figure 2. Mexican DTOs control most of the U.S.
drug market and are gaining strength in markets they do not yet control. President Obama has expressed concern about the increased level of violence along the border, particularly in Ciudad Juarez and Tijuana, and has called for continued monitoring of the situation to guard against spillover into the United States. Since the 1970s, the United States has collaborated with Mexican authorities and provided assistance to Mexico to combat transnational crimes associated with drug trafficking, including illicit firearms smuggling. However, counterarms trafficking efforts have been a modest component of broader bilateral law enforcement cooperation. U.S. and Mexican officials also told us that, in the past, the Mexican government considered illicit arms trafficking a problem that originated in the United States and thus needed to be dealt with by U.S. authorities. However, the Mexican government has placed greater emphasis on combating arms trafficking in recent years. For example, Mexico’s Secretary of Public Security noted, “the arms issue … is a subject that was not considered when discussing drug trafficking, however today it is part of the dialogue we have with our colleagues from the United States.” Moreover, Mexican officials told us they now regard illicit firearms as the number one crime problem affecting the country’s security, and they are intent on working with their U.S. counterparts to address the threat posed by weapons smuggling. DOJ’s ATF and DHS’s ICE are the two primary agencies combating illicit sales and trafficking of firearms across the Southwest border. For over 40 years, ATF has implemented efforts to combat arms trafficking within the United States and from the United States to other countries as part of its mission under the Gun Control Act, and it is the only entity within the U.S. government able to trace firearms recovered in crime in Mexico.
ATF also conducts inspections of FFL gun dealers to ensure they comply with applicable federal firearms laws and regulations. Through Project Gunrunner—ATF’s key effort to address arms trafficking to Mexico—the agency has conducted investigations to identify and prosecute individuals involved in arms trafficking schemes and has provided training to Mexican law enforcement officials on firearms identification and tracing techniques, among other efforts. According to ICE, for over 30 years, ICE—and previously the U.S. Customs Service—has implemented efforts to enforce U.S. export laws, and ICE agents and other staff address a range of issues, including combating the illicit smuggling of money, people, drugs, and firearms. Examples of ICE’s arms trafficking-related activities include its efforts to raise public awareness through the dissemination of posters and brochures to educate FFLs and firearms purchasers about U.S. laws related to firearms and smuggling, as well as ICE’s more recent effort to expand seizures of firearms destined for Mexico on the U.S. side of the border. ICE enhanced its efforts on arms trafficking to Mexico through Operation Armas Cruzadas, announced in 2008. Table 1 provides more information on ATF and ICE efforts to combat arms trafficking. Several other U.S. agencies also play a role in stemming the flow of illicit firearms across the Southwest border into Mexico, including the following: DHS’s CBP is charged with managing, securing, and controlling the nation’s borders with a priority mission of keeping terrorists and their weapons out of the United States. It also has a responsibility for securing and facilitating trade and travel while enforcing hundreds of U.S. regulations, including immigration and drug laws; as such, CBP is involved in intercepting contraband firearms to Mexico. DOJ’s U.S. Attorneys serve as the nation’s principal litigators under the direction of the U.S. Attorney General. U.S. 
Attorneys handle criminal prosecutions and civil suits in which the United States has an interest, including cases against individuals who violate federal criminal laws related to firearms trafficking. Since each U.S. Attorney exercises wide discretion in the use of resources to further the priorities of local jurisdictions, the caseload distribution related to firearms trafficking varies between districts. DOJ’s DEA is responsible for the enforcement of U.S. controlled substances laws and regulations and bringing to justice key individuals and organizations involved in the production or distribution of controlled substances appearing in or destined for illicit traffic in the United States. In carrying out its mission, the DEA also coordinates and cooperates with U.S. and Mexican law enforcement officials in efforts to combat criminal violence and thus shares intelligence on DTO activities, including weapons violations. State’s INL advises the President, Secretary of State, and other U.S. government agencies on policies and programs to combat international narcotics and crime. INL programs support State’s strategic goals to reduce the entry of illegal drugs into the United States and to minimize the impact of international crime on the United States and its citizens. INL oversees funding provided to assist Mexico in its fight against organized crime under the Merida Initiative. Merida is a U.S. interagency response to transborder crime and security issues affecting the United States, Mexico, and Central America. The Initiative seeks to strengthen partner countries’ capacities to combat organized criminal activities that threaten the security of the region, including arms trafficking. DOJ’s Criminal Division attorneys serve as DOJ’s primary legal experts on firearms related issues and contribute to the nation’s prosecutorial efforts from the headquarters level. 
Criminal Division prosecutors are charged with developing and implementing strategies to attack firearms trafficking networks operating in the United States and abroad. These prosecutors prosecute important firearms related cases, formulate policy, assist and coordinate with local U.S. Attorneys Offices on legal issues and multidistrict cases, and work with numerous domestic and foreign law enforcement agencies to construct effective and coordinated enforcement strategies. ONDCP, whose principal purpose is to establish policies, priorities, and objectives for the nation’s drug control program, produces a number of publications including a National Southwest Border Counternarcotics Strategy, which this year includes a component on combating arms trafficking. Available evidence indicates many of the firearms fueling Mexican drug violence have come from the United States, including a growing number of increasingly lethal weapons. Many of these firearms came from gun shops and gun shows in Southwest border states, such as Texas, California, and Arizona, according to ATF officials and trace data. U.S. and Mexican government officials stated most guns trafficked into Mexico are facilitated by and support operations of Mexican drug trafficking organizations. According to U.S. and Mexican government and law enforcement officials and data from ATF on firearms seized in Mexico and traced from fiscal year 2004 to fiscal year 2008, a large portion of the firearms fueling the Mexican drug trade originated in the United States, including a growing number of increasingly lethal weapons. 
As is inherently the case with various types of illegal trafficking, such as drug trafficking, the extent of firearms trafficking to Mexico is unknown; however, according to ATF, a large number of guns are seized from criminals by the military and law enforcement in Mexico, and information on many of these guns is submitted to ATF for the purposes of tracing their origins and uncovering how the guns arrived in Mexico. ATF maintains data on the firearms that are seized in Mexico and submitted for a trace, and, from these firearms trace requests, ATF officials told us, they are often able to detect suspicious patterns and trends that can help identify and disrupt arms trafficking networks on both sides of the U.S.-Mexico border. Using ATF’s eTrace data, which currently serve as the best available data we found for analyzing the source and nature of firearms trafficked to and seized in Mexico, we determined over 20,000, or 87 percent, of firearms seized by Mexican authorities and traced from fiscal year 2004 to fiscal year 2008 originated in the United States. Figure 3 shows the percentages of firearms seized in Mexico and traced from fiscal year 2004 to fiscal year 2008 that originated in the United States. Over 90 percent of the firearms seized in Mexico and traced over the last 3 years have come from the United States. Around 68 percent of these firearms were manufactured in the United States, while around 19 percent were manufactured in third countries and imported into the United States before being trafficked into Mexico. ATF could not determine whether the remaining 13 percent of foreign-sourced firearms had been trafficked into Mexico through the United States, due to incomplete information. 
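The shares cited above are simple proportions of the traced total. As a minimal illustration, the sketch below recomputes them; the individual component counts are hypothetical round numbers chosen only to be consistent with the report's rounded percentages, not ATF figures.

```python
# Illustrative recomputation of the eTrace shares discussed above.
# The component counts below are hypothetical round numbers chosen to be
# consistent with the report's rounded figures; they are not ATF data.
traced_total = 23_159      # trace requests, fiscal years 2004-2008
us_origin = 20_170         # traced to a U.S. origin (over 20,000, ~87 percent)
us_manufactured = 15_750   # manufactured in the United States (~68 percent)
imported_via_us = 4_420    # made abroad, imported via the United States (~19 percent)
undetermined = traced_total - us_origin  # origin undetermined (~13 percent)

def share(part: int, whole: int = traced_total) -> int:
    """Percentage of the traced total, rounded to the nearest whole point."""
    return round(100 * part / whole)

print(share(us_origin))        # prints 87
print(share(us_manufactured))  # prints 68
print(share(imported_via_us))  # prints 19
print(share(undetermined))     # prints 13
```

Note that the U.S.-origin share is the sum of the U.S.-manufactured and imported-through-the-United-States components (68 plus 19 percent), with the remaining 13 percent undetermined due to incomplete records.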
While the eTrace data represent only trace requests submitted from seizures in Mexico, not all guns seized there, they are currently the only systematic data available, and the conclusion drawn from their use, that the majority of firearms seized and traced originated in the United States, was consistent with conclusions reached by U.S. and Mexican government and law enforcement officials personally involved in combating arms trafficking to Mexico. In 2008, of the almost 30,000 firearms that the Mexican Attorney General’s office said were seized, only around 7,200, or approximately a quarter, were submitted to ATF for tracing. U.S. and Mexican government and law enforcement officials indicated Mexican government officials had not submitted all of the firearms tracing information due to bureaucratic obstacles between the Mexican military and the Mexican Attorney General’s Office and a lack of a sufficient number of trained staff to use eTrace. For instance, at one point, State officials told us, the Government of Mexico had only one staff person collecting gun information and entering it into eTrace. Further, as ATF pointed out, not all guns seized in the United States are submitted by U.S. entities to ATF for tracing either, due to some of the same types of bureaucratic and resource challenges faced in Mexico. Consistent with the results of the eTrace data, U.S. law enforcement officials who had worked on arms trafficking in Mexico and along the U.S.-Mexican border told us their experience and observations corroborated that most of the firearms in Mexico had originated in the United States. Furthermore, U.S. 
and Mexican government and law enforcement officials also stated this scenario seemed most likely, given the ease of acquiring firearms in the United States; specifically, they told us they saw no reason why the drug cartels would go through the difficulty of acquiring a gun somewhere else in the world and transporting it to Mexico when it is so easy for them to do so from the United States. While existing data do not allow for an analysis of trends of all firearms seized in Mexico, according to U.S. and Mexican government officials, the firearms seized in Mexico have become increasingly powerful and lethal in recent years. For example, around 25 percent of the firearms seized in Mexico and traced in fiscal year 2008 were high-caliber, high-powered weapons, such as AK- and AR-15-type semiautomatic rifles, which fire ammunition that can pierce armor often used by Mexican police (see table 2). Moreover, U.S. and Mexican government officials told us they have encountered an increasing number of these higher-caliber, high-powered weapons, particularly in the past 2 years, in seizures resulting from operations against drug cartels. A video clip of the types of firearms recovered near the Southwest border and in Mexico is available at http://www.gao.gov/media/video/gao-09-709. In addition, U.S. government officials told us there had been a decrease in some of the smaller, lower-powered guns, such as the .22 caliber pistol and rifle. Mexican and U.S. government officials told us that the guns used by the drug cartels often overpower those of the Mexican police and rival those of the military. See figure 4. In addition, there have been some examples of military-grade firearms recovered in Mexico. Some of these recovered firearms, ATF officials noted, were guns commercially available in the United States that were altered to make them more lethal. For instance, AK- and AR-15-type semiautomatic rifles have been altered to make them fully automatic, like machine guns used by the U.S. 
and Mexican militaries. Seventy machine guns were submitted to ATF for tracing between fiscal year 2004 and fiscal year 2008, a small share (0.30 percent) of the 23,159 total trace requests. A small number of the firearms seized in Mexico have been traced back to legal sales of weapons from the United States to Mexico or a third country, according to ATF. For instance, from 2004 to 2008, firearms traced back to the Government of Mexico constituted 403 firearms, or 1.74 percent, of the total number of trace requests made during that time. This included 70 .223 caliber AR-15-type semiautomatic rifles and one machine gun. In addition, 39 guns were recovered in 2008 that had been sold legally by the United States to a third-party country, including 6 guns each from Germany, Belize, and Guatemala and 1 from El Salvador. These 39 guns included 21 semiautomatic pistols and nothing larger or more powerful than the Colt .45. According to U.S. law enforcement officials we met with, there have not been any indications of significant trafficking of firearms from U.S. military personnel or U.S. military arsenals. According to ATF data for fiscal years 2004-2008, of the 23,159 guns seized in Mexico and traced, 160 firearms, or 0.70 percent, were found to be U.S. military arms. From fiscal year 2004 to fiscal year 2008, most of the firearms seized in Mexico and traced came from U.S. Southwest border states. In particular, about 70 percent of these firearms came from Texas, California, and Arizona. Figure 5 provides data on the top source states for firearms trafficked to Mexico and traced from fiscal year 2004 to fiscal year 2008. Most of the firearms seized in Mexico and successfully traced come from gun shops and pawn shops, according to ATF gun trace data. According to ATF, there are around 6,700 retail gun dealers—gun shops and pawn shops—along the Southwest border of the United States. 
This represents around 12 percent of the approximately 55,000 retail gun dealers nationwide. These gun dealers, or FFLs, can operate in gun shops, pawn shops, their own homes, or out of gun shows. From fiscal year 2004 to fiscal year 2008, of those firearms ATF was able to trace back to a retail dealer, around 95 percent were traced back to gun shops and pawn shops—around 71 to 79 percent from gun shops and 15 to 19 percent from pawn shops, according to ATF. In addition to the firearms successfully traced back to a retail dealer, some ATF officials told us that, based on information from their operations and investigations, many seized guns also come from private sales at gun shows, though the exact number is impossible to know because no records are kept for such purchases, as discussed further below. The illicit purchase of firearms in the United States happens in various ways depending upon where the purchase takes place. Gun shops and pawn shops. Purchases of firearms at gun shops and pawn shops for trafficking to Mexico are usually made by “straw purchasers,” according to law enforcement officials. These straw purchasers are individuals with clean records who can be expected to pass the required background check and who are paid by drug cartel representatives or middlemen to purchase certain guns from gun shops. Because the straw purchasers are legitimately qualified to purchase the guns, they can be difficult for gun shop owners and clerks to identify, absent obvious clues that would signify that a straw purchase is happening. For instance, ATF officials were tipped off to straw purchases when older women purchased multiple AK-type semiautomatic rifles, or individuals who seemed to know little about guns made purchases from a written shopping list. In far fewer cases, ATF officials stated, corrupt gun shop owners or staff facilitate these illicit purchases. 
ATF officials told us they have not estimated what percentage of firearms trafficked to Mexico result from such illegal actions on the part of gun shop owners or staff, but ATF has identified gun shop personnel who sold guns they knew would be trafficked to Mexico, as well as instances where gun shop personnel have altered their records to mask the disappearance of guns from their inventory after being sold illegally. Gun shows. According to ATF officials, individuals can use straw purchasers, as they would at gun shops, to acquire guns from gun shops with booths at gun shows. In addition, individuals can also purchase guns at gun shows from other individuals making sales from their private collections. These private sales require no background check of the purchaser, and no record of the sale must be made or kept. ATF officials told us this prevents them from knowing what percentage of the arms trafficked to Mexico comes from these private sales at gun shows. U.S. and Mexican government officials stated most guns trafficked into Mexico are facilitated by and support operations of Mexican DTOs. According to ATF officials, once the gun is acquired in the United States, typically a middleman or someone representing the drug cartel will transport or pay another individual to transport the firearm or firearms into Mexico. Firearms are generally trafficked along major U.S. highways and interstates and through border crossings into Mexico. The firearms are normally transported across the border by personal or commercial vehicle because, according to U.S. and Mexican government officials, the drug cartels have found these methods to have a high likelihood of success. (We will discuss the challenges to seizing illicit southbound firearms at the border in the second objective of this report.) Once in Mexico, the firearms are generally deposited in border towns or trafficked along major highways to their destinations. 
The transporter drops off the firearm or firearms at a set location for pickup and use by members of a drug cartel. Figure 6 displays the primary trafficking routes from the United States into Mexico. ATF and Mexican government officials told us they have found in Mexican arms trafficking investigations that a small number of firearms illicitly trafficked into Mexico from the United States are for hunters, off-duty police officers, and citizens seeking personal protection. Officials from ATF, ICE, and the Government of Mexico told us most of the guns seized and traced come from seizures the Mexican military or law enforcement make in their war with the drug cartels. Government of Mexico data showed almost 30,000 firearms were seized in Mexico in 2008. Government of Mexico officials told us almost all of them were seized in operations against the drug cartels. U.S. efforts to combat illicit sales of firearms in the United States and to prevent the trafficking of these arms across the Southwest border into Mexico confront several key challenges. First, relevant law enforcement officials we met with noted certain provisions of some federal firearms laws present challenges to their efforts to address arms trafficking. Second, we found poor coordination and a lack of information sharing have hampered the effectiveness of the two key agencies—ATF and ICE—that implement various efforts to address arms trafficking to Mexico. Third, a variety of factors, such as infrastructure limitations and surveillance by drug traffickers at the border, hinder U.S. efforts to detect and seize southbound firearms at U.S.-Mexico border crossings. Finally, agencies have not systematically gathered or recently analyzed firearms trafficking data and trends that could be used to more fully assess the problem and plan efforts, and they were unable to provide complete information to us on the results of their efforts to seize firearms destined for Mexico and to investigate and prosecute cases. U.S. 
agencies implement efforts to address arms trafficking to Mexico within current applicable federal firearms laws. In enacting federal firearms laws such as the Gun Control Act of 1968, Congress has sought to keep firearms out of the hands of those not legally entitled to possess them and to assist law enforcement in efforts to reduce crime and violence, without placing an unnecessary burden on law-abiding citizens who may acquire, possess, or use firearms for lawful activity. Furthermore, Congress stated that, in enacting the Gun Control Act of 1968, it did not intend to discourage or eliminate the private ownership or use of firearms by law-abiding citizens for lawful purposes. However, ATF officials stated certain provisions of some federal firearms laws present challenges to their efforts to combat arms trafficking to Mexico. For example, they identified key challenges related to (1) restrictions on collecting and reporting information on firearms purchases, (2) a lack of required background checks for private firearms sales, and (3) limitations on reporting requirements for multiple sales. Restrictions on collecting and reporting information on firearms purchases. FFLs are required by federal law to maintain records of firearm transactions and to provide information on the first retail purchaser of a firearm to ATF in response to a trace request within 24 hours. ATF has stated that information obtained through the firearms trace process is critical to its efforts to identify individuals involved in firearms trafficking schemes and to detect trafficking patterns. In addition, ATF documents and officials noted the trace of a firearm recovered in crime in Mexico often leads to the initiation of an arms trafficking investigation or provides agents with information to assist with an investigation. However, the U.S. government is prohibited by law from maintaining a national registry of firearms. 
As a result, ATF must take a number of steps to trace a crime gun, including, as applicable, contacting the importer, manufacturer, and wholesaler of the firearm in order to identify the FFL retailer who sold the firearm to the first retail purchaser. Key law enforcement officials stated restrictions on establishing a federal firearms registry lengthen the time and resources required by ATF to complete a crime gun trace and can limit the success of some traces. ATF officials added that information ATF is able to maintain on certain firearms purchases, such as information on some multiple firearms purchases, enables ATF to more quickly trace those firearms if they turn up in crime because the information is already entered into a searchable database. According to ATF, from fiscal year 2004 to fiscal year 2008, it took the agency an average of about 14 days to complete a trace of a firearm recovered in Mexico to the first retail purchaser. However, officials stated investigative leads obtained from trace results are most useful within the first few days following a firearm seizure, in part because the sorts of conspiracies often associated with firearms trafficking tend to change personnel frequently and, as a result, an individual found to be responsible for the purchase of a particular firearm may no longer have ties to the principal gun trafficker directing the scheme. DOJ documents and ATF officials also noted secondary firearms—firearms resold following the first retail purchase from an FFL, or “used guns”—are commonly trafficked to Mexico. Federal law permits the private transfer of certain firearms from one unlicensed individual to another in places such as gun shows, without requiring that any record of the transaction be maintained by the unlicensed individuals, an FFL, or other law enforcement authority. Secondhand firearms may also be sold to and purchased from FFL pawnshops. 
Although pawnshops maintain records of any secondhand firearm transfers, ATF cannot directly trace the firearm from the first retail sale at an FFL to the pawnshop. Through the firearms trace process, ATF can follow the records of a firearm from the manufacturer or importer to the first retail sale at an FFL; however, if the firearm was resold from one individual to another or through a pawnshop, there is a break in the chain of records, and ATF must then consult with the last recorded purchaser of the firearm to determine the continuing disposition of the firearm. As a result, ATF officials stated that, while ATF may be able to trace a firearm to the first retail purchaser, it generally has no knowledge of any secondhand firearms purchases from gun shows or pawnshops—where many traffickers buy guns—without conducting further investigation, which may require significant additional resources and time. Lack of required background checks for private firearms sales. Federal firearms law prohibits certain persons from possessing or receiving firearms. A 1993 amendment to the Gun Control Act (the Brady Handgun Violence Prevention Act) required background checks be completed for all nonlicensed persons seeking to obtain firearms from FFLs, subject to certain exceptions. These background checks provide an automated search of criminal and noncriminal records to determine a person’s eligibility to purchase a firearm. However, private sales of firearms from one individual to another, including private sales at gun shows, are not subject to the background checks requirement and, therefore, do not require the seller to determine whether the purchaser is a felon or other prohibited person, such as an illegal or unlawful alien. DOJ documents and ATF officials stated that, as a result, many firearms trafficked to Mexico may be purchased through these types of transactions by individuals who may want to avoid background checks and records of their firearms purchases. 
Limitations on reporting requirements for multiple sales. Under the federal multiple sale reporting requirement, an FFL that sells two or more handguns within 5 business days to an individual must report information on the transaction to ATF. The federal reporting requirement was established to cover multiple sales of handguns, following studies showing that handguns sold in multiple sales to the same individual purchaser were frequently used in crime. ATF has identified multiple sales or purchases of firearms by a nonlicensee as a “significant indicator” of firearms trafficking, and officials noted the federal multiple sale reporting requirement helps expedite the time required by ATF to complete a crime gun trace. ATF officials added that information ATF received from FFLs on multiple sales has provided critical leads for some investigations of arms trafficking to Mexico. However, the requirement does not apply to purchases of long guns. As a result, although according to ATF data about 27 percent of firearms recovered in Mexico and traced from fiscal year 2004 to fiscal year 2008 were long guns, ATF does not have information in its multiple sales database on any long guns recovered in crime in Mexico that may have been purchased through a multiple sale. In addition, law enforcement officials noted traffickers are aware of how to avoid the federal reporting requirement by spreading out purchases of handguns at different FFLs. For example, traffickers can effectively purchase two or more guns within 5 business days without having such purchases reported as long as they purchase no more than one gun at any individual FFL. Some officials we met with from ATF and ICE—the two primary agencies combating arms trafficking to Mexico—noted the agencies have worked well together on various efforts to address the issue; however, we found ATF and ICE have not consistently coordinated their efforts to combat arms trafficking. 
ATF has stated it aims to address arms trafficking to Mexico in collaboration with domestic and Mexican law enforcement partners, including Mexican government entities, as well as U.S. agencies such as ICE and DEA. Specifically, a 2007 ATF document outlining its plan for Project Gunrunner stated ATF would incorporate ICE, CBP, and other participating agencies in joint initiatives to expand information sharing and coordinated operations. According to ICE, its Border Enforcement Security Task Force (BEST) initiative was largely developed to facilitate cooperation and bring together resources of ICE, CBP, and other U.S. and Mexican law enforcement entities to take a comprehensive approach to addressing border violence and vulnerabilities. However, an outdated interagency agreement and jurisdictional conflicts have led to instances of poor coordination between the two agencies. Officials from both agencies in Washington and in the field cited examples of inadequate communication on investigations, unwillingness to share information, and dysfunctional operations. As a consequence, it is unclear whether ATF and ICE are optimizing the use of U.S. government resources and minimizing duplication of efforts to address the issue. ATF and ICE officials we interviewed had differing views of their respective roles and responsibilities for addressing arms trafficking to Mexico. ATF officials stated ATF’s relative experience on firearms issues and the broad range of relevant authorities under which it operates—including its role in tracing crime guns and regulating the firearms industry—make it the logical U.S. agency to lead efforts to combat arms trafficking to Mexico. 
For example, ATF enforces provisions of federal firearms laws related to prohibited persons in possession of a firearm, knowingly giving or selling a firearm to a prohibited person, a lawful purchaser acquiring a firearm on behalf of an unlawful purchaser (known as a “straw purchase”), dealing in firearms without a license, and the unlawful interstate transfer of firearms in certain instances. Although ICE officials acknowledged ATF had more years of experience on firearms issues, they told us they viewed ATF’s role as focused on firearms trafficking on the U.S. side of the border, while ICE has the primary role in cases involving firearms smuggled across the U.S. border into Mexico. ICE enforces provisions related to the illegal export or smuggling of goods, including firearms and ammunition, from the United States into Mexico under the Arms Export Control Act of 1976 and its implementing regulations, the International Traffic in Arms Regulations; the USA Patriot Improvement and Reauthorization Act of 2005; and the Export Administration Act and its implementing regulations, the Export Administration Regulations, among other authorities. In the locations we visited during our audit work, officials cited examples of how unclear roles and responsibilities have hindered communication and cooperation during some operations. Examples are as follows: Several officials told us they felt the agencies were not taking sufficient advantage of each other’s expertise to more effectively carry out operations, such as ATF’s expertise in firearms identification and procedures for conducting surveillance at gun shows, and ICE’s experience dealing with export violations and combating money laundering and alien smuggling, which ICE officials noted also may be relevant to cases of arms trafficking. 
Information on intelligence related to arms trafficking to Mexico was not being shared by the agencies at the El Paso Intelligence Center (EPIC), which was established to facilitate coordinated intelligence gathering and dissemination among member agencies related to Southwest border efforts to address drug, alien, and weapons smuggling. The 9/11 Commission Report asserted intelligence sharing is critical to combat threats to the United States and that intelligence analysts should utilize all relevant information sources. According to ATF, its “gun desk” at EPIC was established as a conduit or clearinghouse for weapons-related intelligence from federal, state, local, and international law enforcement entities, including weapons seizure information from ICE, CBP, and Mexican authorities. ICE stated its Border Violence Intelligence Cell (BVIC) was established at EPIC to coordinate weapons smuggling investigations and other related efforts with partner agencies and facilitate timely information sharing and analysis. However, ATF officials we met with at EPIC told us that, although they thought it was important for the two agencies’ efforts to be integrated at EPIC, they had minimal interaction with BVIC staff. Additionally, senior ICE officials at headquarters told us that, in the past, ATF has taken information shared by ICE and used it to lead its own investigations; as a result, ICE has subsequently been reluctant to share information with ATF at EPIC. Although CBP had a representative assigned to the gun desk during our site visit in January 2009, the ATF official in charge of the gun desk stated ICE did not have a representative at the gun desk as of May 1, 2009. 
After reviewing a draft of this report, ATF and ICE officials at headquarters noted ICE had requested permission from ATF to assign a representative to the gun desk in the past 6 months, and ATF permitted the assignment at the end of May; they stated an ICE analyst had been assigned to the gun desk as of June 1, 2009, which may contribute to improved coordination between the two agencies at EPIC in the future. The agencies have not coordinated and collaborated on some covert operations, potentially compromising the effectiveness of these efforts. For example, ATF officials stated that, in some cases, ICE did not follow standard procedures ATF has established for conducting operations at gun shows. ATF officials told us of one case in which ICE did not coordinate with ATF on an operation at a gun show, which led to an ICE agent unknowingly conducting surveillance on an ATF agent who was pursuing a suspect trafficker. ICE officials stated that, in another case, ATF had conducted “controlled delivery” covert operations in an attempt to identify organizations receiving illicit weapons in Mexico, without coordinating with ICE. The ICE officials said ATF did not notify them of their operations, including preclearing the controlled export of the weapons, which could have put ATF’s operation in conflict with ICE, CBP, or Mexican government law enforcement and raised the risk that weapons smuggled to Mexico as part of the operations would end up in the wrong hands and be used in crime. In some cases, ATF and ICE refused to provide required documentation to assist each other in arms trafficking investigations, according to ICE officials. ICE officials stated, in some cases, ATF officials would not provide necessary statements for cases ICE was investigating that involved interstate firearms violations. 
As a result, the officials said they would not provide a required immigration certificate to ATF for arms trafficking cases ATF was investigating that involved an immigration violation. Although ICE established BEST teams to facilitate interagency coordination and an integrated approach to addressing border issues, ATF and ICE officials indicated ATF has had minimal participation on the BEST teams. ICE officials stated that, as of May 2009, ATF was working with 4 of the 10 Southwest border BEST teams. ATF stated in February 2009 it had not permanently assigned any agents to BEST teams, but some ATF agents had been available on an as-needed or part-time basis to assist with BEST efforts to stop the illegal export of weapons from the United States. A senior ATF official subsequently told us resource constraints prevented ATF from fully participating in the Southwest border BEST teams, but the official said ATF recently agreed to assign one ATF agent to each of the Southwest border BEST teams located where ATF also has a field office. ATF, ICE, State, and other relevant officials we met with at the U.S. Embassy in Mexico City agreed on the potential usefulness of creating an interagency, bilateral operational arms trafficking task force to conduct joint operations and investigations including both U.S. and Mexican government officials. However, ATF and ICE could not agree on who would take the lead, or whether they would co-lead, the effort. Senior ICE officials told us ICE preferred not to co-lead an interagency task force with ATF unless ATF could provide an equivalent level of resources, and ATF had relatively fewer resources in Mexico. The officials also said ICE wanted to minimize the coordination meetings that would be required with an interagency task force. 
While a senior ATF official stated ATF and ICE agreed at an April conference in Mexico to create an interagency task force including both agencies that would be led by the Mexican government, senior ICE officials said the interagency task force would be led by ICE, and ICE is also moving forward with its own plans to create several bilateral task forces comprising relevant ICE and Mexican officials at key locations in Mexico, without ATF involvement. ATF and ICE officials acknowledged the need to better coordinate their efforts to leverage their expertise and resources, and to ensure their strategies are mutually reinforcing, particularly given the recent expanded level of effort to address arms trafficking. In our past work, we have found that in an interagency effort where interagency collaboration is essential, it is important that agencies have clear roles and responsibilities and that there be mechanisms to coordinate across agencies. Officials from both agencies stated ATF and ICE are in the process of updating a 1978 MOU that existed between ATF and Customs (before the 2003 creation of ICE), and the agencies are working to improve coordination and cooperation. ATF officials said the new MOU will more clearly define the agencies’ statutory jurisdictions and reflect changes in some laws since the previous MOU was created. A draft copy of the MOU we obtained also included some guidelines for coordinating investigations and resolving interagency conflicts. A senior ATF official suggested that, in the future, it would be helpful if ICE and ATF officials in charge of field offices could develop more detailed standard operations procedures for their respective locations, based on the MOU. However, ATF and ICE have not reached formal agreement on the MOU to date, and officials said they have not established other formal coordination mechanisms to facilitate high-level information sharing and integrate strategies for addressing arms trafficking to Mexico. 
We conducted site visits to three locations along the U.S.-Mexico border (Laredo/Nuevo Laredo, San Diego/Tijuana, and El Paso/Juarez) and to Monterrey and Mexico City, Mexico, between September 2008 and January 2009. In March 2009, the Secretary of Homeland Security announced a new Southwest border security initiative that will expand screening technology and add personnel and canine teams that can detect weapons and currency for southbound inspections at ports of entry, among other efforts. Although we have not reviewed these new plans, our review of operations found various factors limit the potential for southbound inspections to reduce the flow of arms at the Southwest border. While ATF and ICE play important roles in investigating cases of arms trafficking to Mexico, CBP is responsible for the ports of entry at the U.S.- Mexico border, and its efforts include intercepting southbound illicit firearms at the border. Although CBP reported that from fiscal year 2005 to fiscal year 2008 some weapons were seized as a result of southbound inspections along the U.S.-Mexico border, in general, such inspections have yielded relatively few seizures. According to agency officials we met with, in general, southbound inspections of vehicles and persons have not been a high priority for the U.S. government and have resulted in relatively few weapons seizures. For example, in fiscal year 2008, CBP reported 35 southbound weapons seizures occurred at 10 of the 25 land ports of entry along the Southwest border, involving a total of 70 weapons. The other 15 ports of entry did not report any southbound weapons seizures. Efforts to increase southbound weapons seizures at the Southwest border are limited by several factors, including resource and infrastructure limitations, drug traffickers’ surveillance capabilities, and the limitations of Mexican government efforts. Resource and infrastructure limitations. 
Although CBP officials stated CBP does not track the overall number of southbound inspections conducted at Southwest border crossings, officials stated such inspections have generally been periodic and ad hoc, depending on available resources and local intelligence. For example, at one border crossing we visited, CBP officials stated law enforcement agencies typically conducted about one to two southbound operations per month. Officials noted southbound border crossings generally lack the infrastructure available at northbound crossings for screening vehicles and persons, such as overhead canopies, inspection booths, X-ray units, and other technologies. CBP officials we met with at the San Ysidro border crossing from San Diego to Tijuana, Mexico, noted there are 24 northbound lanes at that crossing, with inspection booths and screening technologies that enable them to process about 110,000 vehicles and pedestrians crossing from Mexico into the United States every day with an average 1-1.5 hour wait per vehicle. However, the officials said there are only 6 southbound lanes, none of which have the inspections infrastructure northbound lanes have, and the majority of vehicles cross into Mexico without stopping on either side of the border. Because of the lack of southbound infrastructure at this border crossing, officials said they use orange cones and concrete barriers to designate inspection areas during southbound operations (see fig. 7). To increase southbound inspections, law enforcement officials told us significant additional resources for personnel, equipment, and infrastructure along the Southwest border would be required beyond what is already spent conducting northbound screenings. For example, as of March 2009, Laredo, Texas, was the only CBP location along the Southwest border with a permanent team of individuals available to conduct southbound inspections at local border crossings. 
CBP noted that, under the new security initiative, it plans to conduct more regular southbound operations, in collaboration with other law enforcement entities. However, officials stated some border crossings lack the additional space that would be required to expand southbound infrastructure in order to accommodate primary and secondary screening areas while limiting the impact on traffic. For example, we visited one border crossing in El Paso, Texas, that is adjacent to park land not owned by CBP, which CBP officials said would preclude any efforts to expand southbound infrastructure at that crossing. Officials added that the new Southwest border security initiative will include efforts to survey existing southbound infrastructure to assess needs for functionality and worker safety, but they said any efforts to expand southbound infrastructure under the new security initiative would be long-term, since projects generally take 7-10 years. Drug traffickers’ surveillance capabilities. Law enforcement officials stated they typically only have about 45 minutes to an hour to conduct a southbound inspections operation before drug traffickers conducting surveillance at the border detect the operation and tell potential traffickers to wait for the operation to end before attempting to cross. As a result, officials said inspections are typically conducted during random brief intervals over a certain time period, such as 2 to 3 days. Limited Mexican southbound operations. Although Mexican customs aims to inspect 10 percent of vehicles crossing into Mexico on the Mexican side of the border, it has generally inspected much less than that to date. U.S. and Mexican officials noted that Mexican customs typically has focused more on inspections of commercial vehicles for illicit goods, which result in the payment of a fine, than on inspections for illicit weapons. 
Officials said this variance was due to several factors, including Mexico’s general lack of capacity for detecting illicit weapons, as well as concerns about corruption and the risks faced by Mexican officials involved in a seizure of illicit firearms. However, the Mexican government is taking some steps to improve inspections, such as enhancing background checks and vetting staff involved in inspections, and putting in place new processes, equipment, and infrastructure to improve the security, efficiency, and effectiveness of inspections. DHS’s new Southwest border security initiative has the potential to mitigate some of the limitations we found with existing border operations. We did not review efforts under the new initiative, and it is too early to tell whether and to what extent these recent efforts may effectively stem the flow of illicit weapons at the U.S.-Mexico border. Additionally, even if southbound operations were significantly expanded along the Southwest border, they might still result in a relatively small percentage of the weapons intended for Mexico being seized. For example, in comparison, even with the level of screening that is currently conducted on vehicles and persons coming into the United States from Mexico, the U.S. interagency counternarcotics community has noted only a portion of illicit drugs crossing into the United States from Mexico are seized at the border. With the exception of information maintained by ATF on traces of firearms seized in Mexico, in general, U.S. agencies were not able to provide comprehensive data to us related to their efforts to address arms trafficking to Mexico. We found agencies lack recent systematic analysis and reporting of aggregate data related to arms trafficking, which could be used to better understand the nature of the problem and to help plan and assess ways to address it. 
Additionally, while agencies provided some information on efforts to seize firearms, and initiate and prosecute cases of arms trafficking to Mexico, they were not able to provide complete and accurate information related to results of their efforts to address trafficking to Mexico specifically. As mentioned previously, ATF maintains some data on firearms that are seized in Mexico and submitted for a trace, which can be used to help characterize arms trafficking patterns and trends. For example, ATF has used this information to identify primary trafficking routes from the United States to Mexico and to identify types of firearms frequently recovered in crime in Mexico. ATF also provided data we requested on the number of traces that were linked to a multiple handgun sale and on firearms that had been reported lost or stolen. However, ATF was unable to provide data to us on the number of arms trafficking to Mexico cases involving straw purchasers or unlicensed sellers because the agency does not systematically track this information. ATF was also unable to provide information we requested on the number of traces completed for firearms recovered in Mexico that were linked to FFL gun dealer sales at gun shows from fiscal year 2004 to fiscal year 2008, although the agency began a new effort to track this information in its firearms tracing system in June 2008. Multiple sales, straw purchasers, trafficking by unlicensed sellers, and gun shows have been cited in prior ATF reports and by ATF officials as sources or indicators for firearms trafficking in general and to Mexico in particular. For example, in 1999 and 2000, the Department of the Treasury and ATF released three reports that included analyses of firearms trafficking trends based on ATF investigations. 
The reports included information such as primary reasons for initiating firearms trafficking investigations, sources of illegal firearms, types of traffickers identified in investigations, and trafficking violations commonly associated with investigations. Law enforcement agencies and the National Academy of Sciences have stated the type of information related to arms trafficking included in the reports can be used by Congress and implementing agencies to more accurately assess the problem and to help target and prioritize efforts. One of the three reports, released in February 2000, stated it was to be the first in an annual series. However, it has not been updated, and similar analyses and reporting have not been completed since the three reports were released. Senior ATF officials stated ATF had not recently compiled reports including an analysis of aggregate data on firearms trafficking due to a provision in their appropriation that was in place from fiscal year 2004 to fiscal year 2007 that restricted the sharing of this type of information. The officials stated an update would be useful and, since the appropriations restrictions were relaxed in 2008, ATF was considering such an update in the future, though no funding was requested for this activity in ATF’s fiscal year 2010 budget. ICE officials also acknowledged the importance of compiling this type of information, and they noted that, for the first time in March 2009, ICE, CBP, and DHS intelligence staff had compiled an assessment providing an overview of southbound weapons smuggling trends, such as primary smuggling routes and destination states for firearms in Mexico. 
The assessment included an analysis of 212 southbound weapons seized by CBP and ICE in the Southwest border states in fiscal year 2007 and fiscal year 2008, as well as data from the Mexican government on firearms seizures in Mexico between 2006 and 2008 and data from ATF on a portion of traces ATF completed for firearms recovered in Mexico in 2007 and 2008. However, the assessment notes that it “does not provide an all inclusive picture of…firearms smuggling” from the United States to Mexico. ICE stated it worked closely with ATF intelligence staff in developing the assessment. Nevertheless, the senior ATF intelligence official cited by ICE as its primary ATF contact for the assessment told us while ATF answered specific questions from ICE, such as regarding ATF’s firearms trace process, ATF was not asked to provide comprehensive data and analysis or significant input into the assessment’s overall findings and conclusions. In addition, although ICE officials stated that the assessment had been completed in March 2009, senior ATF officials we met with in April, including the Chief of ATF’s National Tracing Center, had just received a copy for the first time. We found the assessment only includes a subset of trace statistics we were able to obtain from ATF on firearms recovered in crime in Mexico and traced over the last 5 years. The senior ATF intelligence official told us that it would make sense for future assessments to be developed jointly, in order to leverage more comprehensive data and analysis available from both agencies; however, he noted that until both agencies improve their interagency coordination, developing a joint assessment was unlikely. 
Law enforcement agencies also reported tracking some information related to results of their efforts to address arms trafficking to Mexico to date, such as data on firearms seizures and cases initiated, but they lack complete data on results of their efforts to combat arms trafficking to Mexico specifically and have not systematically reported information on results of their efforts. Examples follow: Firearms seizures. Law enforcement agencies could not provide complete data on the number of firearms seizures they made involving arms trafficking to Mexico. ATF reported to us it acquired 8,328 firearms in the four Southwest border states (Arizona, California, New Mexico, and Texas) in fiscal year 2008 as evidence in support of criminal investigations (through abandonment, purchase, or seizure), related to its Southwest border enforcement efforts. However, a senior ATF official stated the agency is not able to readily retrieve data on whether a firearm was headed north or south, or, for example, whether an agent determined a firearm was being trafficked to Mexico or was seized for some other reason. In addition, seizures of firearms in other states that may relate to arms trafficking to Mexico are not reflected in the above number. Similarly, ICE reported to us it seized a total of 1,767 firearms in those same states in support of criminal investigations related to its Southwest border enforcement efforts in fiscal year 2008, including 152 firearms designated as having transited through or being destined for Mexico. ICE officials said agents have not consistently indicated in ICE’s data tracking system when a seizure relates to Mexico, so the latter number is likely less than the actual number of firearms seized related to Mexico. They added that seizures of firearms in other states that may relate to arms trafficking to Mexico also would not be reflected in the data. 
Additionally, while CBP reported to us it seized a total of 70 southbound firearms at official land ports of entry along the Southwest border in fiscal year 2008, it noted not all southbound weapons seizures necessarily relate to arms trafficking, such as in instances when an individual is arrested at the border due to an outstanding warrant and the individual also had a weapon. Cases initiated. Law enforcement agencies could not provide complete data on cases they initiated involving arms trafficking to Mexico. ATF reported to us it initiated 280 cases nationwide related to arms trafficking to Mexico in fiscal year 2008. However, a senior ATF official stated some cases involving weapons that were exported to another country, such as Guatemala, and were later recovered in crime in Mexico would not be included in the above number, because the intermediate location would be recorded as the destination country in ATF’s data tracking systems; therefore, the number provided is likely fewer than the actual number of cases. The official also noted ATF’s data systems do not capture information on reasons for initiating cases, such as whether a case was initiated based on information provided from a confidential informant or based on findings from an FFL inspection. ICE reported to us it initiated 103 cases involving Mexico-related weapons smuggling in fiscal year 2008. However, ICE officials stated some cases that do relate to arms trafficking to Mexico are not included in the data since ICE agents have not consistently indicated whether a case is related to Mexico in ICE’s data tracking system. They also noted their data systems do not capture information on specific reasons for initiating cases, such as whether a case was initiated following a highway interdiction on the U.S. side of the border or based on information provided from a confidential informant. However, ICE was able to provide a breakdown to us of cases by referring agency. 
For example, ICE reported 15 of the cases involving weapons smuggled to Mexico were initiated following a weapons seizure by CBP at an official port of entry, and 16 were initiated following a referral from ATF. Prosecutions. Agencies were also unable to provide complete data on prosecutions of cases involving arms trafficking to Mexico. Officials from DOJ’s Executive Office for U.S. Attorneys (EOUSA) stated their national database for tracking criminal cases does not have a category specific to Mexico arms trafficking cases. They said there is not a simple way to determine which cases involve arms trafficking to Mexico since cases may involve various defendants and charges, and no charges are specific to arms trafficking to Mexico. They added that, to date, most of the cases U.S. Attorneys Offices have prosecuted relating to arms trafficking to Mexico have been referred to U.S. Attorneys by ATF. ATF reported to us it referred 73 cases involving arms trafficking to Mexico for prosecution in fiscal year 2008. ATF officials stated although their data systems track the outcome of all cases, including firearms trafficking cases in general or, as another example, cases related to Southwest border violence (which may involve arms trafficking as well as other related offenses), they do not readily track the outcome of arms trafficking to Mexico cases specifically. However, based upon further review and analysis, ATF was able to generate some information for us on the outcome of the 73 cases: specifically, as of September 30, 2008, 22 cases were pending a prosecutorial decision, 46 had been accepted for prosecution, and 5 had not been accepted for prosecution. In addition, ATF reported 47 of the cases had been indicted, and 33 had resulted in convictions. 
While ICE was not able to provide data on the number of cases involving arms trafficking to Mexico that it referred for prosecution, it reported 66 cases involving Mexico-related weapons smuggling had been indicted, and 47 had resulted in convictions in fiscal year 2008. ICE officials noted some narcotics, money laundering, or human trafficking cases may also result in charges related to weapons smuggling that are not reflected in this data. They said compiling more complete and precise data related to prosecutions of cases involving arms trafficking to Mexico would require extensive documentary review and analysis. U.S. law enforcement agencies have provided some technical and operational assistance to Mexican counterparts to combat arms trafficking to Mexico. However, these efforts have been limited in scope and hampered by the incomplete use of ATF’s eTrace system and by a failure to target resources at identified needs. In addition, concerns about corruption among some Mexican government officials limit the United States’ ability to establish a full partnership with Mexican government entities in combating illicit arms trafficking to Mexico. U.S. law enforcement agencies have provided some assistance to Mexican counterparts in combating arms trafficking. As noted previously in this report, U.S. law enforcement agencies conduct their work in Mexico in cooperation with Government of Mexico counterparts under the Treaty on Cooperation Between the United States of America and the United Mexican States for Mutual Legal Assistance. ATF agents in Monterrey, for instance, have built working relationships with federal, state, and local law enforcement, as well as the Mexican military, in the Monterrey area. This type of outreach has given the United States the opportunity to provide Mexican government counterparts some technical and operational assistance on firearms trafficking. Technical assistance. 
ATF has provided training sessions on firearms identification, developing arms trafficking investigations, and the use of eTrace. For instance, according to ATF, from fiscal years 2007 through 2008, ATF trained 375 law enforcement officials on the use of eTrace, at a cost of just under $10,000. Government of Mexico officials told us the training was extremely helpful in improving the skills of the officers who received it. However, only a small percentage of officers received the training, and more training is needed, Government of Mexico officials told us. In addition, ATF has provided some equipment to Mexican government counterparts, such as providing forensics equipment to the State Crime Lab of Nuevo Leon in Monterrey, Mexico. Operational assistance. ATF currently has 3 agents in Mexico, and ICE has 12, though ICE agents are required to work on a wide variety of issues and, at the time of our field work in Mexico, none was exclusively dedicated to arms trafficking issues. In May 2009, ICE officials told us that one ICE agent in Mexico would now be dedicated to arms trafficking. Where they can, these ATF and ICE agents work with their Mexican counterparts to assist at crime scenes and to gain access to firearms information necessary to conduct gun traces. As part of this, ATF has worked with Mexican law enforcement to collect gun data for submission to eTrace. Once they have received the data on the guns through eTrace, ATF’s National Tracing Center in West Virginia conducts the gun traces and returns information on their findings to the submitting party. In addition, ATF uses that trace information to launch new investigations or inform existing ones. ATF officials told us these investigations are the means by which ATF shuts down arms trafficking networks. However, despite these efforts, overall ATF and ICE assistance has been limited, according to Mexican and U.S. government officials. 
For example, due to ATF’s resource limitations, it has provided only a portion of the training ATF officials told us is needed to federal, state, and local law enforcement and to the Mexican military. In addition, though ATF and ICE have provided operational assistance in investigations, and though ATF has assisted in the collection of firearms information for submission to ATF’s eTrace when Mexican law enforcement and military seize firearms, ATF and ICE have been significantly limited in what assistance they can provide. For instance, there are several firearms seizures in Mexico every week, but in a country as large as Mexico, neither ATF nor ICE has enough staff in multiple locations to assist with the vast majority of gun seizures that take place. Also, U.S. assistance has been limited due to the incomplete use to date of eTrace by Mexican government officials. The inputting of firearms information into eTrace provides an important tool for U.S. law enforcement to launch new, or to further existing, arms trafficking investigations in the United States, which can lead to the disruption of networks that traffic arms into Mexico, according to ATF officials. In addition, the data inputted into eTrace currently serves as the best data we found available for analyzing the source and nature of the firearms that are being trafficked and seized in Mexico. However, because Mexican government officials have only entered a portion of the information on firearms seized, the eTrace data only represents data from these gun trace requests, not from all the guns seized. U.S. 
and Mexican government and law enforcement officials told us Mexican government officials’ failure to submit all of the firearms tracing information could be attributed to several factors, including the following:
- Mexican officials only recently began to fully appreciate the long-term value to Mexico of providing gun trace information to ATF;
- the Mexican military serves as the central repository for all seized guns in Mexico, while the Mexican Attorney General’s office is responsible for maintaining information on seized firearms, and coordinating access to the guns in order to collect necessary information has presented some challenges, according to Mexican government officials;
- the Mexican Attorney General’s office is understaffed and has not had sufficient resources to clear the eTrace backlog, according to U.S. and Mexican government officials;
- only some of the Mexican Attorney General’s office staff had received ATF-provided training on identification of firearms and on using the eTrace system; and
- eTrace has been provided only in an English language version.
Recent trends in submissions of trace requests to ATF’s National Tracing Center indicate Mexican government officials have begun to input more information using eTrace. ATF officials attribute this increased use of eTrace by Mexican government officials to the training and outreach the agency has provided over that period of time, and they hope this number will continue to grow as Mexican government officials become more aware of the long-term benefits to Mexico of submitting firearms trace requests, participate in ATF firearms identification and eTrace training, and devote more resources to gathering the firearms information and entering it into eTrace. Nonetheless, the ability of Mexican officials to input data into eTrace has been hampered because a Spanish language version of eTrace has still not been deployed across Mexico. 
In September 2008, ATF and State officials told us eTrace would soon be deployed across Mexico. However, ATF officials told us that to date the eTrace system is still being adapted to include all planned changes—such as the ability to enter more than one last name for a suspect or other party and to enter addresses that are differently configured from those in the United States—and that they were not sure when Spanish eTrace would be deployed across Mexico. U.S. and Government of Mexico officials told us it was important to complete the development of Spanish eTrace and immediately deploy it across Mexico because providing it and the necessary training for it to all relevant parties in Mexico would likely improve Mexican government officials’ use of the system. In addition, according to U.S. law enforcement and embassy officials, no needs assessments regarding arms trafficking were conducted in advance of Merida Initiative funding and, as a result, some needs that have been identified have not been addressed. The United States has recently provided significant funding for assistance to the Government of Mexico under the Merida Initiative; however, the Initiative currently provides general law enforcement and counternarcotics assistance to Mexico but has not focused on arms trafficking. State told us that the initial allocation of assistance was kept general in order to get the money out for use in Mexico more quickly. Going forward, State told us that with additional time they could potentially develop and seek funding for more specific programs or efforts to assist Mexico in combating arms trafficking. State’s Narcotics Affairs Section (NAS) in Mexico City administers Merida Initiative funding for Mexico and had been able to use some of the Initiative’s monies in support of ad hoc arms trafficking initiatives conducted by U.S. law enforcement agencies and other U.S. entities at the embassy. 
However, there were specific needs identified by Mexican and U.S. government officials that were not being met, including the following: Mexican government officials we met with consistently stated their agencies needed training from U.S. law enforcement on firearms trafficking. They said ATF had provided some training that was very useful to their agencies, including training on identifying firearms, discovering trafficking trends, or developing firearms trafficking cases and that their agencies did not have their own courses on the issue. However, there had only been a few training sessions and only a small percentage of Mexican government officials to date had received the training. ATF, ICE, and embassy officials agreed more training was needed but said they had minimal resources to devote to address the problem. NAS officials at the embassy told us they were able to take some of the Merida Initiative money for building general capacity and use it to support some training with an arms trafficking application. However, these amounts were small, and the money was not designated in such a way that an arms trafficking curriculum or training program could be developed on a large scale and funded through Merida Initiative monies. Both U.S. and Mexican government officials told us designing and providing a comprehensive training program could be very helpful in boosting Mexican law enforcement capacity to combat arms trafficking. Mexican government officials, as well as U.S. law enforcement and embassy officials, told us another currently unmet need was the development of a bilateral, interagency investigative task force for arms trafficking. While the embassy uses a law enforcement working group to share general, nonoperational information on a whole range of law enforcement issues in Mexico, there was no group of U.S. and Mexican law enforcement officials working jointly at an operational and investigative level on combating arms trafficking. Mexican and U.S. 
government and embassy officials told us that such a task force would include a group of vetted Mexican law enforcement and government officials working jointly with U.S. counterparts in relevant law enforcement agencies, such as ATF, ICE, and others, on identifying, disrupting, and investigating arms trafficking on both the Mexican and U.S. sides of the border. Such types of vetted units, called Special Investigative Units, work with DEA on counternarcotics operations in Mexico. DEA officials we met with told us that these units are time, energy, and resource intensive, but that they are essential for success in their efforts. However, there is no dedicated money that can be used for establishing and maintaining such a group for combating arms trafficking. NAS officials we met with in Mexico City who were administering funds for the Merida Initiative told us they had been able to provide some funding for various projects that had an arms trafficking application. However, funding a standing bilateral, interagency task force would require significant money that would need to be consistently available year to year. As such, these officials told us that, to date, they had not been able to use Merida Initiative funding to develop and maintain such an arms trafficking task force. In addition, NAS officials said that when the embassy had supported the possibility of creating such a task force, ATF and ICE each insisted on leading such an effort and refused to work under the other’s leadership, preferring instead to run their own agency units with the Mexican government. Embassy officials told us they were unsure whether any such units would be created in the future without significant dedicated funding and agreement. Since taking office in December 2006, President Calderon has recognized the need to address the problem of organized crime and the corruption it creates throughout Mexican government and society. 
Calderon’s administration has reached out to the United States for cooperation and U.S. assistance in an unprecedented way. However, U.S. assistance to Mexico has been limited due to concerns about corruption among Mexican government entities, according to Mexican and U.S. government officials. According to Mexican government officials, corruption pervades all levels of Mexican law enforcement—federal, state, and local. For example, some high ranking members of federal law enforcement have been implicated in corruption investigations, and some high publicity kidnapping and murder cases have involved corrupt federal law enforcement officials. Furthermore, corruption is more of a problem at the state and local levels than federal, according to U.S. and Mexican government officials. The Mexican military, however, is generally considered to be less vulnerable to corruption than law enforcement, according to U.S. and Mexican government officials. As a result, the Calderon administration has used the military extensively to disrupt drug cartel operations and seize illicit firearms and to assist or replace local law enforcement when they are overwhelmed or deemed corrupt. For example, in late 2008, President Calderon’s administration terminated around 500 officers on Tijuana’s police force and brought in the military to fill the gap until new officers who had been sufficiently vetted could be hired and trained. U.S. government and law enforcement officials told us that corruption inhibits their efforts to ensure a capable and reliable partnership with Mexican government entities in combating arms trafficking. For instance, U.S. law enforcement officials we met with along the Southwest border and in Mexico told us they attempt to work with Mexican counterparts in law enforcement, the military, and Attorney General’s Office whenever possible. 
However, incidents of corruption among Mexican officials compel them to be selective about the information they share and with whom they share it. For example, in 2006, the Government of Mexico reported that it had dismissed 945 federal employees and suspended an additional 953, following aggressive investigations into public corruption. Similarly, CBP officials told us that, on the border, collaboration between Mexican and U.S. counterparts has been limited due to concerns about corruption among Government of Mexico customs officers. In fact, in one major border crossing location, CBP officers told us they had not been in contact with their Mexican customs counterparts and would not know who they could trust if they were. The Mexican military has been brought in to work along the border, due to the corruption within Mexican customs, according to Mexican and U.S. government officials. The Government of Mexico is implementing anticorruption measures, including polygraph and psychological testing, background checks, and salary increases for federal law enforcement and customs officers, and has implemented reforms to provide some vetting for state and local officers as well. However, these efforts are in the early stages and may take years to effect comprehensive change, according to Mexican and U.S. government officials.

While U.S. law enforcement agencies have developed initiatives to address arms trafficking to Mexico, none have been guided by a comprehensive, governmentwide strategy. Strategic plans for ATF and ICE raise the issue of arms trafficking generally or overall smuggling to Mexico, but neither focuses on arms trafficking to Mexico or lays out a comprehensive plan for addressing the problem. In our past work, we have identified key elements that constitute an effective strategy, including identifying needs and objectives and the resources necessary to meet them, as well as establishing mechanisms to monitor progress toward objectives.
In June 2009, the administration released its 2009 National Southwest Border Counternarcotics Strategy, which, for the first time, contains a chapter on arms trafficking to Mexico. We reviewed the strategy, and it contains some key elements of a strategy, such as setting objectives, but lacks others, such as performance measures for monitoring progress toward objectives. ONDCP officials said an appendix with an "implementation plan" for the strategy will be added in late summer of 2009 that will have some performance measures for its objectives. However, at this point, it is not clear whether the implementation plan will include performance indicators and other accountability mechanisms to overcome shortcomings raised in our report. In addition, in March 2009, the Secretary of Homeland Security announced a new DHS Southwest border security effort to significantly increase DHS presence and efforts along the Southwest border, including conducting more southbound inspections at ports of entry, among other efforts. However, it is unclear how the new resources that the administration has recently devoted to the Southwest border will be tied to the new strategy and implementation plan.

Strategic plans for ATF and ICE raise the issues of arms trafficking in general and overall smuggling to Mexico, but neither plan focuses on arms trafficking to Mexico or lays out a comprehensive plan for how the agencies would address the problem. In addition, the Merida Initiative does not include provisions that would constitute a strategy to combat arms trafficking.

ATF's current strategic plan for fiscal years 2004-2009 does not mention arms trafficking to Mexico.
The strategic plan lays out the strategic goal to "enforce Federal firearms laws in order to remove violent offenders from our communities and keep firearms out of the hands of those who are prohibited by law from possessing them." As part of this objective, the plan identifies one tactic to "partner with law enforcement agencies and prosecutors at all levels to develop focused strategies that lead to the investigation, arrest, and prosecution of…domestic and international firearms traffickers…and others who attempt to illegally acquire or misuse firearms." However, the strategic plan neither lays out how ATF will go about implementing the tactic or achieving the goal, nor does it include any performance metrics to measure performance and monitor progress. ATF officials told us that they are currently developing their new fiscal year 2010 strategic plan, which will include more information relevant to arms trafficking to Mexico; however, as ATF's fiscal year 2010 strategic plan was in draft form and subject to change, we could not determine whether the final version will contain key elements, such as needs assessments, clear definition of roles and responsibilities, or metrics to measure progress.

In its Project Gunrunner documentation, ATF set out the following strategic goal:

"Working with its domestic and international law-enforcement partners, ATF will deny the 'tools of the trade' to the firearms-trafficking infrastructure of the criminal organizations operating in Mexico through proactive enforcement of its jurisdictional areas in the affected border States in the domestic front, as well as through assistance and cooperative interaction with the Mexican authorities in their fight to effectively deal with the increase in violent crime."

ATF included certain action items, which provided some specific tasks for ATF to accomplish its strategic goals under Project Gunrunner.
Some examples of action items include the following:

The United States and Mexico establishing a point of contact for each ATF border field division who will meet regularly with the Mexican Attorney General's Office's representative to coordinate investigative and firearms-trafficking issues.

ATF and other DOJ components, such as DEA, the U.S. Marshals Service, and the FBI, and DHS components, such as ICE, operating along the border implementing investigative strategies and developing intelligence relating to trafficking into Mexico.

The United States and Mexico forming a consultative group of attorneys and law enforcement officials from both countries to address legal issues and policies involving firearms trafficking and enforcement strategies.

The United States exploring the availability of funding to provide technology and equipment to assist the government of Mexico in upgrading its firearms forensics analysis and tracing capabilities.

While this ATF Project Gunrunner document does contain useful strategic goals and specific action items to achieve those goals, key elements are missing, such as mechanisms that could measure and ensure progress toward these goals.

According to an ICE fact sheet describing the agency's Armas Cruzadas initiative:

"The mission of Armas Cruzadas is for U.S. and Mexican government agencies to synchronize bi-lateral law enforcement and intelligence-sharing operations in order to comprehensively identify, disrupt, and dismantle trans-border weapons smuggling networks.
The goals include (1) establishing a bilateral program to stop weapons smuggling; (2) coordinating operations; (3) developing intelligence about arms trafficking networks; (4) strengthening interagency cooperation; (5) promoting intelligence information exchange; and (6) implementing points of contact for information exchange." To meet these goals, ICE detailed some action items, which included creating a border violence intelligence cell; developing a vetted arms trafficking group; implementing a weapons virtual task force; reinvigorating the ICE Border Liaison Program; and leveraging investigation, interdiction, and intelligence. While ICE's fact sheet does contain relevant strategic goals and specific action items to achieve those goals, key elements of a strategy are missing as well, such as mechanisms to ensure progress toward the strategic goals.

In CBP's current strategic plan, there are no goals specific to arms trafficking to Mexico. The primary focus of CBP's plan is on preventing dangerous people and goods from getting into the United States, and the issue of preventing arms from going across the border into Mexico is not addressed. In other reports and publications, CBP mentions items it has seized on the border, including drugs, illicit currency, and even prohibited plant materials and animal products, but the agency does not mention illicit firearms. However, CBP officials told us they have an important role to play in combating arms trafficking to Mexico and will continue to increase their efforts to combat arms trafficking with new initiatives, including some in coordination with and support of operations involving ICE and other U.S. law enforcement agencies.

The Merida Initiative does not include provisions that would constitute a strategy to combat arms trafficking. While a bill in Congress to authorize the Merida Initiative included a "sense of Congress" that an "effective strategy to combat ...
illegal arms flows is a critical part of a United States … anti-narcotics strategy," a subsequent appropriations act, which makes reference to Merida, included no details on which agency or agencies should be responsible for developing and implementing such a strategy. And, as mentioned previously in the report, State has not dedicated funding for the Merida Initiative that targets illicit arms trafficking. The U.S. Embassy in Mexico, where the Merida Initiative funding is administered, also maintains a Mission Performance Plan to guide its efforts each fiscal year. This plan lays out goals of working with Government of Mexico partners on law enforcement issues including transborder issues, such as smuggling of arms. However, the plan contains neither detailed performance measures nor mechanisms to ensure collaboration across agencies on the issue of combating arms trafficking to Mexico.

We have previously identified several key elements of an effective strategy. These elements include identifying needs and objectives; defining roles and responsibilities for each party to meet those objectives; ensuring sufficient funding and resources necessary to accomplish those objectives; implementing mechanisms to facilitate coordination across agencies; and monitoring progress toward objectives and identifying needed improvements. We have found that having a strategy with elements such as these has the potential for greatly enhancing agency performance. For example, managers can use performance information to identify problems in existing programs, to try to identify the causes of problems, and to develop corrective actions.

In June 2009, the administration released its 2009 National Southwest Border Counternarcotics Strategy, which, for the first time, contains a chapter on arms trafficking to Mexico. By law, the Office of National Drug Control Policy (ONDCP) is required to issue a new strategy every 2 years.
The previous version of this document, from 2007, did not include any strategy to combat illicit arms trafficking to Mexico. According to ONDCP officials, initially, this new version did not include arms trafficking either, but in February, a working group, co-led by ATF and ICE, began working on an arms trafficking piece. The chapter states, in part:

"U.S. law enforcement organizations and intelligence agencies operate a variety of intelligence collection and analysis programs which are directly or indirectly related to weapons smuggling. The Department of Defense provides analytical support to some of these programs with regard to captured military weapons and ordnance. In order to provide better operational access and utility to law enforcement agencies, the U.S. Government will capitalize upon the existing law enforcement interagency intelligence center, EPIC, to reinforce rapid information sharing methods for intelligence derived from Federal, State, local and Government of Mexico illicit weapons seizures. Absent statutory limitations, plans should be made to move to a real-time data sharing methodology."

While the arms trafficking chapter of the strategy contains some key elements of a strategy, such as setting objectives, it lacks others, such as providing detailed roles and responsibilities for relevant agencies or performance measures for monitoring progress toward objectives. However, ONDCP officials said an appendix with an "implementation plan" for the strategy will be added in late summer of 2009 that will have more detailed actions for each agency to take, as well as some performance measures for each item under the objectives. Furthermore, ONDCP officials said there will be annual reporting that addresses performance toward the plan's goals within the National Drug Control Strategy's annual reporting to Congress.
However, at this point, it is not clear whether the implementation plan will include performance indicators and other accountability mechanisms to overcome shortcomings raised in our report. In addition, in March 2009, the Secretary of Homeland Security announced a new DHS Southwest border security effort to significantly increase DHS presence and efforts along the Southwest border, including conducting more southbound inspections at ports of entry, among other efforts. However, it is unclear how the new resources that the administration has recently devoted to the Southwest border will be tied to the new strategy and implementation plan. Combating arms trafficking has become an increasing concern to U.S. and Mexican government and law enforcement officials, as violence in Mexico has soared to historic levels, and U.S. officials have become concerned about the potential for increased violence brought about by Mexican DTOs on the U.S. side of the border. However, while this violence has raised concern, there has not been a coordinated U.S. government effort to combat the illicit arms trafficking to Mexico that U.S. and Mexican government officials agree is fueling much of the drug-related violence. Agencies such as ATF and ICE have made some efforts to combat illicit arms trafficking, but these efforts are hampered by a number of factors, including the constraints of the legal framework in which law enforcement agencies operate, according to agency officials, and poor coordination among agencies. In addition, agencies have not systematically and consistently gathered and reported certain types of data on firearms trafficking that would be useful to the administration and Congress to better target resources to combat arms trafficking to Mexico. Gaps in this data hamper the investigative capacity of law enforcement agencies. 
Further, a Spanish language version of ATF’s eTrace has been in development for months but has yet to be finalized; the lack of this new version of eTrace has impeded the use of eTrace by Mexican law enforcement officials, which limits data that could be used in investigations on both sides of the border and results in incomplete information on the nature of firearms trafficked and seized in Mexico. Quick deployment of eTrace across Mexico and training of the relevant officials in its use could increase the number of guns submitted to ATF for tracing each year, improving the data on the types and sources of firearms trafficked into Mexico and increasing the information that law enforcement officials have to investigate and build cases. U.S. and Mexican government officials in locations we visited told us that, while they have undertaken some efforts to combat illicit arms trafficking, they are concerned that without a targeted, comprehensive, and coordinated U.S. government effort, their efforts could fall short. In June 2009, the administration released its 2009 National Southwest Border Counternarcotics Strategy, containing a chapter on arms trafficking to Mexico. We reviewed the strategy’s chapter on arms trafficking and found that the chapter does contain some key elements of a strategy, such as setting objectives, but it lacks others, such as providing detailed roles and responsibilities for relevant agencies or performance measures for monitoring progress toward objectives. ONDCP officials said they will develop an implementation plan for the strategy in late summer of 2009 that will have more detailed actions for each agency to take, as well as some performance measures for each item under the objectives. However, at this point, it is not clear whether the implementation plan will include performance indicators and other accountability mechanisms to overcome shortcomings raised in our report. 
Furthermore, in March 2009, the administration announced more resources for the Southwest border, including more personnel and equipment for conducting southbound inspections. However, it is unclear how the new resources that the administration has recently devoted to the Southwest border will be tied to the new strategy and implementation plan. The current level of cooperation on law enforcement issues between the United States and Mexico under President Calderon’s administration presents a unique opportunity to work jointly to combat illicit arms trafficking. Taking advantage of this opportunity will require a unified, U.S. government approach that brings to bear all the necessary assets to combat illicit arms trafficking. We recommend that the U.S. Attorney General prepare a report to Congress on approaches to address the challenges law enforcement officials raised in this report regarding the constraints on the collection of data that inhibit the ability of law enforcement to conduct timely investigations. To further enhance interagency collaboration in combating arms trafficking to Mexico and to help ensure integrated policy and program direction, we recommend the U.S. Attorney General and the Secretary of Homeland Security finalize the Memorandum of Understanding between ATF and ICE and develop processes for periodically monitoring its implementation and making any needed adjustments. To help identify where efforts should be targeted to combat illicit arms trafficking to Mexico, we have several recommendations to improve the gathering and reporting of data related to such efforts, including that the U.S. Attorney General direct the ATF Director to regularly update ATF’s reporting on aggregate firearms trafficking data and trends; the U.S. 
Attorney General and the Secretary of Homeland Security, in light of DHS’s recent efforts to assess southbound weapons smuggling trends, direct ATF and ICE to ensure they share comprehensive data and leverage each other’s expertise and analysis on future assessments relevant to the issue; and the U.S. Attorney General and the Secretary of Homeland Security ensure the systematic gathering and reporting of data related to results of these efforts, including firearms seizures, investigations, and prosecutions. To improve the scope and completeness of data on firearms trafficked to Mexico and to facilitate investigations to disrupt illicit arms trafficking networks, we recommend that the U.S. Attorney General and the Secretary of State work with the Government of Mexico to expedite the dissemination of eTrace in Spanish across Mexico to the relevant Government of Mexico officials, provide these officials the proper training on the use of eTrace, and ensure more complete input of information on seized arms into eTrace. To support the 2009 Southwest Border Counternarcotics Strategy, we recommend the ONDCP Director ensure that the implementation plan for the arms trafficking chapter of this strategy (1) identifies needs and clearly defines objectives for addressing those needs, (2) identifies roles and responsibilities for meeting objectives that leverage the existing expertise of each relevant agency, (3) ensures agencies are provided guidance on setting funding priorities and providing resources to address those needs, (4) establishes mechanisms to facilitate coordination across agencies, and (5) employs monitoring mechanisms to determine and report on progress toward objectives and identifies needed improvements. We provided a draft of this report to the Departments of Homeland Security, Justice, and State and to the Office of National Drug Control Policy. DHS and State provided written comments, which are reproduced in appendixes III and IV. 
DHS generally agreed with our recommendations; however, DHS raised questions regarding our interpretation of certain data and the relationship between ICE and ATF. We disagree that our presentation of the data is misleading, and the evidence in the report clearly demonstrates coordination problems between ICE and ATF. State agreed with our recommendation that the U.S. Attorney General and the Secretary of State work with the Government of Mexico to expedite the dissemination of eTrace in Spanish across Mexico to the relevant Government of Mexico officials, provide these officials the proper training on the use of eTrace, and ensure more complete input of information on seized arms into eTrace. In addition, State added that the agency is funding a $5 million Forensics Laboratories project with the Government of Mexico’s Office of the Attorney General (PGR) for the successful investigation and prosecution of criminal cases. This funding, State said, will be used to provide state-of-the-art equipment and training, which directly support DOJ and DHS efforts in the disruption of firearms. DOJ provided no formal departmental comment on the draft of this report. However, ATF and DEA provided technical comments, which we incorporated throughout the report where appropriate. DHS and ONDCP also provided technical comments on our report, which we incorporated throughout the report where appropriate. We are sending copies of this report to interested congressional committees and to the Attorney General, the Secretaries of Homeland Security and State, and the Director of the Office of National Drug Control Policy. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
Other GAO contacts and staff acknowledgments are listed in appendix V. To identify data available on types of firearms trafficked to Mexico and the sources of these arms, we consulted U.S. and Mexican government databases, as well as research prepared by nongovernmental entities. We relied primarily on the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) data compiled by its National Tracing Center (NTC) since it contained the most detailed information on the types of illicit firearms seized in Mexico and where they had originated. NTC data on firearms seized in Mexico, however, is not comprehensive. The data is based chiefly on trace information submitted through ATF’s eTrace system. As mentioned earlier in this report, over the last 5 years, about one-quarter to one-third of the illicit firearms seized in Mexico had information submitted through eTrace, and not all of these were successfully traced. Notwithstanding its limitations, NTC data was sufficiently reliable to permit an analysis of where the firearms seized in Mexico that could be traced had been manufactured and whether they had been imported into the United States before arriving in Mexico. For those arms that were traced to a retail dealer in the United States before being trafficked to Mexico, NTC data also contained information on the states where they had originated. Based on the trace data and discussions with ATF and other law enforcement officials, we were able to develop an analysis of the type of retail transactions involved in the initial marketing of the firearms in the United States before they were trafficked to Mexico. NTC trace data also contained information allowing identification of the types of firearms (e.g., caliber and model) that were most commonly seized in Mexico and subsequently traced. We corroborated this information in extensive discussions with U.S. and Mexican law enforcement officials. 
However, as noted earlier in the report, because firearms seized in Mexico are not always submitted for tracing within the same year they were seized, it was not possible for us to develop data to track trends on the types of firearms trafficked or seized. Similarly, we were unable to obtain quantitative data from U.S. or Mexican government sources on the users of illicit firearms in Mexico. However, there was consensus among U.S. and Mexican law enforcement officials that most illicit firearms seized in Mexico had been in the possession of organized criminal organizations linked to the drug trade. The involvement of drug trafficking organizations in the smuggling of illicit firearms into Mexico was confirmed by law enforcement intelligence sources.

To learn more about trends in illicit firearms seizures in Mexico, we obtained data from the Mexican Federal Government's Planning, Analysis and Information Center for Combating Crime—Centro Nacional de Planeación, Análisis e Información para el Combate a la Delincuencia—(CENAPI) on seizures from 2004 to the first quarter of 2009. To determine the geographical distribution of firearms seized in Mexico, we obtained data from CENAPI on seizures by Mexican federal entity—31 states and the Federal District of Mexico City. We did not assess the reliability of data provided by CENAPI, but we considered this data generally acceptable to provide an overall indication of the magnitude and nature of the trends in arms seizures since 2004.

To identify key challenges confronting U.S. government efforts to combat illicit sales of firearms in the United States and to stem the flow of these arms across the Southwest border into Mexico, we interviewed cognizant officials from the Department of Justice's (DOJ) ATF, Executive Office for U.S. Attorneys (EOUSA), and the Drug Enforcement Administration (DEA); the Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement (ICE) and U.S.
Customs and Border Protection (CBP); and the Department of State (State) regarding their relevant efforts. We reviewed and analyzed DOJ and DHS documents relevant to U.S. government efforts to address arms trafficking to Mexico, including funding data provided to us by ATF, CBP, and ICE; the 1978 Memorandum of Understanding (MOU) between CBP and ATF and a draft version of a revised MOU between ATF and ICE; data from ATF, CBP, and ICE on firearms seizures; data from ATF and ICE on efforts to investigate and prosecute cases involving arms trafficking to Mexico; and agency reports and assessments related to the issue. We also reviewed relevant prior GAO reports, Congressional Research Service (CRS) reports and memorandums, and reports from DOJ’s Office of Inspector General related to ATF’s efforts to enforce federal firearms laws. We reviewed provisions of federal firearms laws that agency officials identified as relevant to U.S. government efforts to address arms trafficking to Mexico, including the Gun Control Act of 1968, the National Firearms Act of 1934, and the Arms Export Control Act of 1976. We did not review Mexican firearms laws and to the extent that we comment on these in this report, we relied on secondary sources. To explore challenges faced by U.S. agencies collaborating with Mexican authorities to combat illicit arms trafficking, we visited U.S.-Mexico border crossings at Laredo and El Paso, Texas, and San Diego, California. In these locations, we interviewed ATF, CBP, DEA, and ICE officials responsible for overseeing and implementing efforts to stem the flow of illicit arms trafficking to Mexico and related law enforcement initiatives. We observed U.S. government efforts to develop and share intelligence related to arms trafficking to Mexico at the El Paso Intelligence Center. We also conducted fieldwork in Mexico City, Monterrey, Nuevo Laredo, and Tijuana, Mexico. 
In Mexico, we met with ATF, CBP, DEA, ICE, and State officials working on law enforcement issues at the U.S. embassy and consulates. We interviewed Mexican government officials engaged in efforts to combat arms trafficking from the Attorney General's Office (Procuraduría General de la República), including CENAPI; the Ministry of Public Safety (Secretaria de Seguridad Pública); the Ministry of Defense (Secretaría de la Defensa Nacional); and Customs (Servicio de Administración Tributaria). Since we did not conduct fieldwork in a generalizable sample of locations along the Southwest border and in the interior of Mexico, our observations in these locations are illustrative but may not be representative of all efforts to address the issue.

To assess the U.S. government's strategy for addressing the issue of arms trafficking to Mexico, we reviewed strategic planning, internal guidance, policy, and procedures documents for relevant agencies and departments. Following the March 2009 decision to include a chapter on arms trafficking in the Southwest Border Counternarcotics Strategy, we met with Office of National Drug Control Policy (ONDCP) officials to discuss development of this document, and obtained a general overview. ONDCP officials also arranged for one of our team members to review the draft document.

Finally, to assess the reliability of data provided by ATF, CBP, and ICE on funding for efforts to address arms trafficking to Mexico, seizures of southbound firearms, and cases involving arms trafficking to Mexico, we reviewed and discussed the sources of the data with agency officials. We determined the program and project information provided to us were sufficiently reliable to provide an overall indication of the magnitude and nature of the illicit firearms trade and of the completeness of data agencies have related to their efforts to address the issue. Any financial data we reported were for background purposes only.
Western Hemisphere Subcommittee staff requested that we compare data on firearms seizures in Mexico and ATF firearms trace data to determine if ATF's trace data reflected the geographic distribution of firearms seizures in that country. Our analysis indicates that there is a strong positive correlation between the data we obtained from CENAPI on seizures by Mexican federal entity—that is, 31 states and the Federal District of Mexico City—for calendar year 2008, and ATF's firearms trace data linked to specific Mexican federal entities for fiscal year 2008. Eight of the top 10 Mexican federal entities for firearms seizures in 2008, according to CENAPI data—Baja California, Chihuahua, Guanajuato, Jalisco, Michoacan, Oaxaca, Tamaulipas, and the Federal District of Mexico City—also showed up among the top 10 Mexican federal entities where firearms traced by ATF were seized. Figure 8 shows that the Mexican federal entities where most firearms are seized are very similar to those submitting the most firearms trace requests.

In order to determine the geographic distribution of firearms seized in Mexico, we obtained data from CENAPI on seizures, by Mexican federal entity. According to CENAPI data, a total of 29,824 firearms were seized in Mexico in 2008. In order to ascertain the geographic distribution of firearms seized in Mexico that were traced by ATF, we obtained data from ATF linking firearms traced to the Mexican federal entities where they were seized. In fiscal year 2008, ATF traced 7,198 firearms seized in Mexico. Of these, 6,854 were linked to specific states or the Federal District. However, 344 firearms were traced in fiscal year 2008 that could not be linked to a specific state where they may have been seized. We excluded these from our analysis. We ranked the Mexican states and Federal District by the number of firearms seized according to the data provided by CENAPI, and we ranked them a second time according to the trace data provided by ATF.
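The ranking-and-correlation computation described above can be sketched in a few lines of Python. The per-entity counts below are hypothetical placeholders for six entities, not the actual CENAPI or ATF figures; the report's analysis of the real 2008 data yielded coefficients of 0.85 for the ranked data and 0.79 for the raw counts.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

def ranks(values):
    """Rank values from largest (rank 1) to smallest; ties keep input order."""
    order = sorted(range(len(values)), key=lambda i: -values[i])
    result = [0] * len(values)
    for rank, index in enumerate(order, start=1):
        result[index] = rank
    return result

# Hypothetical per-entity counts (placeholders, not CENAPI/ATF data):
seized = [2900, 2500, 1800, 1200, 900, 400]   # firearms seized, by entity
traced = [700, 650, 300, 350, 150, 90]        # firearms traced, by entity

# Correlate the rankings, then the raw counts, as described above.
rank_correlation = pearson(ranks(seized), ranks(traced))
raw_correlation = pearson(seized, traced)
```

Applying Pearson's formula to the two rank vectors is, absent ties, equivalent to computing a Spearman rank correlation, a standard way of comparing two orderings such as these.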
We then compared the two sets of data using a correlation analysis. See figure 9 below. The correlation coefficient for the data was 0.85, indicating a strong positive correlation. We also performed a correlation analysis for the raw data—that is, the number of firearms seized in Mexico, and the number of firearms traced by ATF, by Mexican federal entity. The correlation coefficient for those two sets of data was 0.79. We also examined the ratio of arms seized to arms traced, which ranged from 0.03 to 1.78 across the Mexican federal entities. Figure 9 shows a strong positive correlation between the number of firearms seized in Mexico and the number of firearms traced by ATF, by Mexican federal entity. The following are GAO’s comments on the Department of Homeland Security’s letter dated June 9, 2009. 1. We disagree that our use of the 87 percent statistic is misleading. Our report clearly states that the number of firearms traced by ATF represents a percentage of the overall firearms seized in Mexico. More importantly, ATF trace data for each year since 2004 identified that most of the firearms seized in Mexico and traced came from the United States. Our recommendation to the U.S. Attorney General and the Secretary of State to expedite further enhancement of eTrace and work with the Government of Mexico to expand its use is designed to shed further light on the origin of guns seized in Mexico. 2. We have added additional information to the text of the report to clarify ATF’s role in DHS’s recent assessment. While ICE stated it worked closely with ATF intelligence staff in developing the assessment, the senior ATF intelligence official ICE cited as its primary ATF contact for the assessment told us that ATF provided some information to ICE for its assessment, but ATF was not asked to provide comprehensive data and analysis or significant input into the assessment’s overall findings and conclusions. 
We found the assessment only includes a subset of trace statistics we were able to obtain from ATF on firearms recovered in crime in Mexico and traced over the last 5 years. The senior ATF intelligence official told us that it would make sense for future assessments to be developed jointly, in order to leverage more comprehensive data and analysis available from both agencies; however, he noted that until both agencies improve their interagency coordination, developing a joint assessment was unlikely. 3. We did not comment in the report on whether any of ICE’s databases enable the agency to capture, track, and provide statistical information on all ICE investigations, as well as associate seizures and enforcement actions against individuals linked to criminal behavior. However, as we noted in the report, ICE was unable to provide comprehensive statistical information specifically on cases involving arms trafficking to Mexico. 4. Contrary to DHS’s assertion that ICE and ATF enjoy an excellent working relationship in their efforts to combat arms trafficking to Mexico, ATF, ICE, and State officials we met with along the Southwest border, in Mexico, and at headquarters cited problems with ATF and ICE working well together. These officials included senior officials at ICE’s Office of International Affairs in Washington, from ICE’s Attaché office in Mexico City, and from ICE’s office at the U.S. Consulate in Tijuana, Mexico. 5. It was not within the scope of our audit to review specific ICE investigations or their disposition, and we did not comment on this in our report. Our report noted that, in general, ICE was not able to provide comprehensive data to us related to its efforts to address arms trafficking to Mexico. For instance, ICE was not able to provide complete data on the seizure of firearms destined for Mexico, the number of cases it initiated related to arms trafficking to Mexico, or the disposition of cases ICE submitted for prosecution. 
Also, DHS’s recent assessment of southbound weapons smuggling trends, which could be used to better understand the nature of the problem and to help plan and assess ways to address it, notes that it “does not provide an all inclusive picture of…firearms smuggling” from the United States to Mexico. In addition, as we noted in comment 2, it only contains a subset of the data we were able to obtain from ATF relevant to the issue. 6. As noted in the report, GAO has found that one of the key elements that should be part of any strategy is clearly identifying an agency’s objectives and establishing mechanisms for determining progress toward those objectives. Neither CBP’s strategic plan nor other CBP reports and publications mention illicit firearms, focusing instead on other CBP efforts to screen people and goods entering the United States. However, our report noted that CBP is involved in new Southwest border initiatives announced by DHS to significantly increase DHS presence along the border, including conducting more southbound inspections at ports of entry, among other efforts. The following are GAO’s comments on the Department of State’s letter dated June 5, 2009. 1. State agreed with our recommendation that the U.S. Attorney General and the Secretary of State work with the Government of Mexico to expedite the dissemination of eTrace in Spanish across Mexico to the relevant Government of Mexico officials, provide these officials the proper training on the use of eTrace, and ensure more complete input of information on seized arms into eTrace. In addition, State added that the department is funding a $5 million Forensics Laboratories project with the Government of Mexico’s Office of the Attorney General (PGR) for the successful investigation and prosecution of criminal cases, and we incorporated this information into the report. 
In addition to the individual named above, Juan Gobel, Assistant Director; Joe Carney; Virginia Chanley; Matt Harris; Elisabeth Helmer; Grace Lui; and J. Addison Ricks provided key contributions to this report. Technical assistance was provided by Joyce Evans, Theresa Perkins, Jena Sinkfield, and Cynthia Taylor.

In recent years, violence along the U.S.-Mexico border has escalated dramatically, due largely to the Mexican government's efforts to disrupt Mexican drug trafficking organizations (DTOs). U.S. officials note the violence associated with Mexican DTOs poses a serious challenge for U.S. law enforcement, threatening citizens on both sides of the border, and U.S. and Mexican law enforcement officials generally agree many of the firearms used to perpetrate crimes in Mexico are illicitly trafficked from the United States across the Southwest border. GAO was asked to examine (1) data on the types, sources, and users of these firearms; (2) key challenges confronting U.S. government efforts to combat illicit sales of firearms in the United States and stem the flow of them into Mexico; (3) challenges faced by U.S. agencies collaborating with Mexican authorities to combat the problem of illicit arms; and (4) the U.S. government's strategy for addressing the issue. GAO analyzed program information and firearms data and met with U.S. and Mexican officials on both sides of the border. Available evidence indicates many of the firearms fueling Mexican drug violence originated in the United States, including a growing number of increasingly lethal weapons. While it is impossible to know how many firearms are illegally smuggled into Mexico in a given year, about 87 percent of firearms seized by Mexican authorities and traced in the last 5 years originated in the United States, according to data from the Department of Justice's Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF). According to U.S. 
and Mexican government officials, these firearms have been increasingly powerful and lethal in recent years. Many of these firearms come from gun shops and gun shows in Southwest border states. U.S. and Mexican government and law enforcement officials stated most firearms are intended to support operations of Mexican DTOs, which are also responsible for trafficking arms to Mexico. The U.S. government faces several significant challenges in combating illicit sales of firearms in the United States and stemming their flow into Mexico. In particular, certain provisions of some federal firearms laws present challenges to U.S. efforts, according to ATF officials. Specifically, officials identified key challenges related to restrictions on collecting and reporting information on firearms purchases, a lack of required background checks for private firearms sales, and limitations on reporting requirements for multiple sales. GAO also found ATF and the Department of Homeland Security's (DHS) U.S. Immigration and Customs Enforcement, the primary agencies implementing efforts to address the issue, do not effectively coordinate their efforts, in part because the agencies lack clear roles and responsibilities and have been operating under an outdated interagency agreement. Additionally, agencies generally have not systematically gathered, analyzed, and reported data that could be useful to help plan and assess results of their efforts to address arms trafficking to Mexico. U.S. law enforcement agencies have provided some assistance to Mexican counterparts in combating arms trafficking, but these efforts face several challenges. U.S. law enforcement assistance to Mexico does not target arms trafficking needs, limiting U.S. agencies' ability to provide technical or operational assistance. In addition, U.S. assistance has been limited due to Mexican officials' incomplete use of ATF's electronic firearms tracing system, an important tool for U.S. arms trafficking investigations. 
Another significant challenge facing U.S. efforts to assist Mexico is corruption among some Mexican government entities. Mexican federal authorities are implementing anticorruption measures, but government officials acknowledge fully implementing these reforms will take considerable time, and it may take years to effect comprehensive change. The administration's recently released National Southwest Border Counternarcotics Strategy includes, for the first time, a chapter on combating illicit arms trafficking to Mexico. Prior to the new strategy, the U.S. government lacked a strategy to address arms trafficking to Mexico, and various efforts undertaken by individual U.S. agencies were not part of a comprehensive U.S. governmentwide strategy for addressing the problem. At this point, it is not clear whether ONDCP's "implementation plan" for the strategy, which has not been finalized, will include performance indicators and other accountability mechanisms to overcome shortcomings raised in our report.
Many of the 62 million people living in over 2,300 rural counties in the United States lack access to a supply of clean water and sanitary waste disposal facilities. In 1937, the Congress created a program that provided low-cost loans to ranchers, farmers, and rural residents of 17 arid and semiarid western states for water storage projects. Since that time, the Congress has changed the program to also fund water distribution systems and waste disposal facilities and to provide grant funds in addition to loans. Currently, the program, known as the Water and Waste Disposal Program, offers grants and loans to construct or modify water and/or sewer systems in rural communities that cannot obtain funding from other sources. Administered by the U.S. Department of Agriculture (USDA), this program is now the major federal program providing such loan and grant funds to rural America. USDA administers the Water and Waste Disposal Program—referred to in this report as the water and sewer program—through its Rural Utilities Service. To be eligible for this program, a rural community must have a population of 10,000 or less and be financially needy, meeting low-income criteria. USDA headquarters allocates both loan and grant funds to its state offices through an allocation formula that it established through regulations. The formula consists of three weighted factors: rural population (50 percent), rural poverty (25 percent), and rural unemployment (25 percent). No state may receive more than 5 percent of the total loan and grant funds initially allocated. About 10 percent of both loan and grant funds are set aside in a reserve pool for emergencies, cost overruns, and other unforeseen problems. Furthermore, twice a year, USDA headquarters withdraws to its reserve pool a portion of the unobligated loan and grant funds that may remain in a state’s accounts. 
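The allocation formula described above (50 percent rural population, 25 percent rural poverty, 25 percent rural unemployment, each measured as a state's share of the national total, with a 5-percent cap and a 10-percent reserve set-aside) can be sketched as follows. The state figures are hypothetical, and the sketch does not model how USDA redistributes amounts freed up by the cap or how the reserve pool is later drawn down.

```python
# Hedged sketch of USDA's three-factor allocation formula; all state
# figures are hypothetical, not actual Census or BLS data.

def state_shares(states, total_funds, cap=0.05, reserve_rate=0.10):
    """Allocate total_funds across states by the weighted-share formula."""
    nat_pop = sum(s["pop"] for s in states.values())
    nat_pov = sum(s["pov"] for s in states.values())
    nat_unemp = sum(s["unemp"] for s in states.values())
    allocatable = total_funds * (1 - reserve_rate)  # 10% held in reserve pool
    shares = {}
    for name, s in states.items():
        share = (0.50 * s["pop"] / nat_pop        # rural population (50%)
                 + 0.25 * s["pov"] / nat_pov      # rural poverty (25%)
                 + 0.25 * s["unemp"] / nat_unemp) # rural unemployment (25%)
        shares[name] = min(share, cap) * allocatable  # 5% cap per state
    return shares

# With only a few hypothetical states the 5% cap binds for all of them;
# spread across 50 states it would rarely bind.
alloc = state_shares(
    {"State A": {"pop": 3_000_000, "pov": 500_000, "unemp": 120_000},
     "State B": {"pop": 1_500_000, "pov": 400_000, "unemp": 90_000},
     "State C": {"pop": 800_000, "pov": 150_000, "unemp": 40_000}},
    total_funds=1_000_000_000)
```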
State offices can request pooled funds and receive funding above a state’s initial allocation; USDA headquarters determines how pooled funds will be distributed. Generally, these pooled funds are used to provide supplemental funding for projects that are ready to be approved. In the following fiscal year, the states receive their allocations on the basis of the formula, not on whether they spent the prior year’s allocation. In fiscal year 1995, USDA headquarters withdrew about $60 million in loan and grant funds as a result of the pooling process. The water and sewer program has been funded at an average of $1 billion per year for the last 6 fiscal years; funding in fiscal year 1994 was about $1.3 billion. USDA administers the water and sewer program through a network of state and district offices. USDA headquarters allocates the program’s funds to the state offices, which are responsible for general oversight of the program, including approval of district offices’ project and funding recommendations. District offices administer the loan and grant program at the local level and serve as the point of contact for communities seeking assistance. Through a preapplication process, a district office obtains preliminary information to determine a community’s eligibility for assistance and the proposed project’s feasibility. If the community meets these requirements and funds are available, the district office asks the community to prepare a full application package. The district office provides the state office with data on the project, including the application package and the district office’s recommendation for approval. Most state offices have approval authority for loans of up to about $3 million; they can approve grants of any dollar amount. Under certain conditions, state offices must obtain final approval through USDA headquarters. Generally, USDA finances water and sewer projects through a combination of loans and grants. 
In addition, other funds—such as those from federal or state agencies or the applicant community—may be combined with financing from USDA. USDA state and district offices determine the applicant’s eligibility and the project’s feasibility, including the reasonableness of user charges, which USDA headquarters officials interpret as an affordable charge. USDA state and district offices determine the community’s ability to repay a loan, including consideration of the community’s outstanding debt to USDA. These offices initially attempt to finance the project through a loan. Since USDA expects its loans to be fully repaid, district and state offices estimate what the average monthly user charges for the water and/or sewer services would have to be in order to sufficiently cover anticipated costs and avoid defaulting on the loan. Typically, a community repays its loan through monthly charges collected from the residents who use these services. If USDA state and district offices conclude that the loan amount would result in an onerous user charge, they consider replacing a portion of the loan with a grant to bring the user charge down to a more manageable level. In addition, officials in some states encourage the local community to obtain funding from other sources, such as state and/or other federal agencies, to reduce the amount of the USDA loan and grant funds needed. The grant amount that USDA state and district offices provide for a specific project can vary for several reasons—for example, the amount of grant funds on hand, the urgency for the project, and competing demands for grant funds within the district and across the state. There are two principal limits on the grant provided for a particular project. First, legislation limits the amount of the grant to 75 percent of the project development costs and provides for higher grants for projects in communities that have lower population and income levels. 
By regulation, USDA limits some communities having a somewhat higher median household income to a maximum grant of 55 percent of the project development costs. Second, under USDA regulations, grants cannot be so large that they cause average monthly user charges to be lower than those prevailing in the area. A state office may also fund a project at less than the allowable amount if its grant allocations are not sufficient to provide maximum grant funds for that project. To determine the yearly user charge for a project, USDA state and district offices consider costs in four categories: debt service, operations and maintenance, reserve fund, and other costs. These offices add the yearly debt service calculation—including outstanding USDA debt—to the yearly costs for operations and maintenance to arrive at the total yearly cost. When applicable, these offices also add the cost of maintaining a reserve fund (generally 10 percent of debt service), which is used to replace certain types of equipment that have a relatively short useful life. This reserve fund should not be large enough to build a substantial surplus. Ordinarily, the total reserve will be equal to one average annual loan installment, accumulating at a rate of one-tenth of the total each year. In addition, USDA state and district offices may consider other costs, such as funded depreciation and delinquent accounts. To arrive at the total grant amount, these offices determine how much debt service a community can afford. They then factor in amortization over a period of time, usually 40 years. In deciding on the mix of loan and grant funds to award for water and sewer projects, USDA state and district officials estimate the maximum size of the grant on the basis of a comparison of a community’s median household income with the state’s poverty level. 
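The user-charge arithmetic described above (debt service, plus operations and maintenance, plus a reserve of generally 10 percent of debt service, spread across the system's users) can be sketched with a standard level-payment amortization over the usual 40-year term. The loan amount, interest rate, operating costs, and user count below are hypothetical assumptions.

```python
# Illustrative sketch of the annual user-charge calculation; dollar
# figures and the interest rate are hypothetical.

def annual_debt_service(principal, rate, years=40):
    # Level annual payment that fully amortizes the loan over the term.
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

def annual_user_charge(loan, rate, o_and_m, users, other=0.0,
                       reserve_rate=0.10, years=40):
    debt = annual_debt_service(loan, rate, years)
    reserve = reserve_rate * debt   # reserve fund: generally 10% of debt service
    return (debt + o_and_m + reserve + other) / users

charge = annual_user_charge(loan=2_000_000, rate=0.05,
                            o_and_m=60_000, users=1_500)
```

If the resulting charge exceeds what comparable communities pay, the loan portion would be reduced and replaced with grant funds, lowering the debt-service term of the calculation.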
A community may not receive the maximum grant if further calculations of the debt service amount that the community can afford reveal that the grant should be less. A key factor in estimating affordability is determining how much the average customer can pay for water and/or sewer service on the basis of a community’s median household income. USDA officials may override this affordability measure and increase or decrease the grant amount to bring the user charge in line with the average charges paid by comparable communities for similar systems. However, officials may not change the grant amount if the change will result in user charges that are lower than those charged to customers in nearby communities. Representative William F. Clinger, Jr., asked us to review certain aspects of USDA’s Water and Waste Disposal Program. This report provides information on (1) funding levels for the program and the projects supported, (2) the formula that USDA uses to allocate loan and grant funds among its state offices, and (3) the approach that USDA state and district offices use to distribute funds within states. To address the first objective, we obtained access to the USDA database that contains information on the water and sewer program since its inception in the 1930s. We analyzed data for projects begun from fiscal year 1965 through June 1995—the period during which USDA was authorized to provide both grants and loans for water and sewer projects. We excluded (1) about 4,000 projects (with a value of about $1.3 billion in nominal dollars) from our analysis because USDA’s database did not provide the year in which the projects were begun and (2) about 3,000 additional projects because the database did not provide the dollar amounts for these loans and/or grants. We summarized, by state, information on USDA’s loans and grants and on other sources of funding. We converted amounts in the database to constant fiscal year 1994 dollars. 
We did not perform a reliability assessment of USDA’s database. To respond to the second objective, we reviewed the literature on allocation formulas used for distributing federal funds and spoke with experts in other federal and private agencies. We identified generally accepted criteria for the factors that should go into an allocation formula and compared these factors with those used for the current water and sewer allocation formula. We also analyzed allocation formulas used to distribute funding for other federal programs. To address the third objective, we reviewed files at USDA headquarters for a random sample of 120 projects receiving funding from fiscal year 1992 through fiscal year 1994. We selected 30 cases each from four of the five states that are the largest recipients of loan and grant funds (Mississippi, North Carolina, Ohio, and Pennsylvania). We analyzed the approach used to distribute funds within the states and identified variations in funding decisions. We visited these four states and talked with USDA water and sewer officials at the state level and with officials in 12 of USDA’s districts. We also talked with nine borrowers who had received grants or loans from USDA for water or sewer projects in two of these states. We performed our work from September 1994 through August 1995 in accordance with generally accepted government auditing standards. We provided copies of a draft of this report to USDA’s Rural Utilities Service for its comments. We met with several agency officials, including the Deputy Administrator of the Rural Utilities Service and the Director of the Water and Waste Disposal Division. These officials agreed that the information presented in the report is accurate. They provided new or clarifying information that we incorporated as appropriate. From fiscal year 1965 through June 1995, USDA supported the development of water and sewer projects in thousands of rural communities. 
The expenditures, number of projects, and average costs varied by state. On average, the water and sewer program provided about 70 percent of the funds for the projects that it supported. The remainder of the funds came from other sources such as the Environmental Protection Agency (EPA), states, and counties. Since fiscal year 1965, USDA has provided financial assistance to over 12,500 rural communities and almost 17,000 water and sewer projects. The number of projects supported and the amount of loan and grant funds provided varied, ranging from a low of two projects and about $5.6 million in the Western Pacific Territories to a high of more than 1,100 projects and $1.8 billion in expenditures in Texas. Furthermore, the average expenditure per project varied widely among the states. Table 2.1 shows the top five states in total expenditures and the average expenditure for each project in those states since fiscal year 1965. As table 2.1 shows, while total expenditures were comparable for three of the five states, the average expenditure per project and the average annual number of projects varied considerably. For example, the average expenditure per project in Ohio was about 2-1/2 times the expenditure in Mississippi. This occurs in part because the USDA office in Mississippi funded more water projects than did the USDA office in Ohio, which funded more sewer projects. In general, water projects are less costly than sewer projects. (App. I provides information on projects and expenditures by state.) Many projects that the water and sewer program supported also received funding from sources other than USDA, including the community itself, the state and county, and other federal sources, such as EPA. Figure 2.1 shows the amount and percentage of support provided by these sources and USDA. 
(Figure 2.1 data, in billions: USDA, $27.7; EPA, $6.3.) The $27.7 billion provided by USDA’s water and sewer program represents about 70 percent of the total expenditures on these projects from fiscal year 1965 through June 1995. The extent of all other funding sources varied widely by state—from 8 percent in New Jersey to 62 percent in Vermont. USDA officials in one of the four states we visited said they encouraged and aided project applicants in soliciting funds outside of the USDA program. Projects in that state and in two others that we visited averaged over 30 percent in all other sources of funding. Conversely, the fourth state we visited relied more heavily on USDA’s water and sewer funds, obtaining only 15 percent of funding from all other sources. (See app. I for sources of funding by state.) The current water and sewer formula—which is based on rural population, poverty, and unemployment—is easy to administer and draws on data that are readily available and directed toward rural areas. As we have reported on a number of previous occasions, experts in public finance have identified three criteria—need, ability to pay, and differences in cost—that are commonly considered in allocation formulas aimed at producing an equitable distribution of funds among states. USDA’s current formula may partially satisfy the first two criteria but does not address the third. Data on need, on the ability to pay, and on certain cost differences are available from the Bureau of the Census, EPA, the Department of the Treasury, and the Bureau of Labor Statistics. USDA’s water and sewer formula is easy to administer because of its simplicity and its use of factors that are based on readily available data. It consists of three weighted factors for each state: rural population (50 percent), rural poverty (25 percent), and rural unemployment (25 percent). Rural population is measured by a state’s rural share of population as a percentage of the national rural population. 
Rural poverty is measured by the state’s rural population below the poverty level as a percentage of the national rural population below the poverty level. Rural unemployment is measured by the state’s nonmetropolitan unemployed population as a percentage of the national nonmetropolitan unemployed population. USDA officials informed us that they use rural population, poverty, and unemployment in the allocation formula because the data are readily available from the Bureau of the Census and Bureau of Labor Statistics and do not require any further alterations. (For population and poverty levels, data are collected every 10 years; for unemployment rates, data are collected annually.) In addition, the data are directed toward rural areas. Public finance experts have identified three criteria that are commonly considered in allocation formulas aimed at producing an equitable distribution of funds among states. These criteria are the (1) need for services or projects, (2) ability of states to fund projects from their own resources, and (3) differences between the states in the cost of providing these services. Some federal allocation formulas consider one or more of these criteria in distributing program funding to the states, as discussed below. The Advisory Commission on Intergovernmental Relations reported that the “need for services” is the most common criterion used to allocate federal funds. Some formulas use direct indicators of this need. For example, the formula for the Highway Bridge Replacement and Rehabilitation Program is based on the number of a state’s bridges that are eligible for replacement or rehabilitation. Similarly, the formula for the Hazardous Waste Management State Program is based in part on a direct indicator of need—the number of hazardous waste management facilities in the state. This program assists states in transporting, treating, storing, and disposing of hazardous wastes. 
Indirect indicators of need, or proxies, may be used when direct factors are not available. For example, the Highway Planning and Construction, Interstate 4R Program formula contains a factor for vehicle miles traveled on interstate routes in a calendar year. This factor serves as a proxy for those interstate highways that are in the greatest need of repair. Indirect indicators of need often have the advantage of objectivity and prevent any perverse incentive effects that may result from the formula itself. However, when direct indicators of need are available, their use may more precisely target funds. A state’s ability to raise revenues from its own resources—its fiscal capacity—is also an important factor found in many federal allocation formulas. The rationale for including an ability-to-pay factor is that a greater share of funds should go to recipients who are least able to finance their needs from their own resources. Many federal and state grant programs over the past decade have included a measure of ability to pay in their formulas. Because it is readily available information, per capita income is the factor used almost exclusively to account for ability to pay. However, according to economists and other analysts, per capita income is not a comprehensive measure of ability to pay because it does not include other sources of income, such as corporate income and taxes paid by nonresidents (e.g., hotel and sales taxes). Therefore, using an indicator such as per capita income may understate states’ ability to pay. Several other factors could be used to develop a more comprehensive indicator of ability to pay, such as total taxable resources. This indicator, developed by the Department of the Treasury, is an average of per capita income and per capita gross state product. By averaging gross state product with personal income, total taxable resources covers more types of income than does personal income alone, including income received by nonresidents. 
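As described above, the total taxable resources indicator averages per capita personal income with per capita gross state product; the figures below are hypothetical illustrations, not Treasury data.

```python
# Sketch of the Treasury "total taxable resources" indicator as described:
# the average of per capita income and per capita gross state product.
# Dollar figures are hypothetical.

def total_taxable_resources(per_capita_income, per_capita_gsp):
    return (per_capita_income + per_capita_gsp) / 2

ttr = total_taxable_resources(21_000, 27_000)  # -> 24000.0
```

Because gross state product captures income produced in the state regardless of where its recipients live, this average covers more of a state's tax base than personal income alone.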
This measure is used in the formula specified in the 1987 reauthorization of the block grant for the Alcohol, Drug Abuse, and Mental Health Program. Many allocation formulas include an adjustment for cost disparities across states. Ideally, for these formulas to reflect cost differences fairly, they must incorporate factors that reflect differences between states in costs that are beyond the states’ direct control. One formula that includes an indicator to adjust for costs is the formula for the Highway Bridge Replacement and Rehabilitation Program. This formula considers the costs of replacing or improving bridges in different states. The current formula may partially reflect the need for services and the ability to pay for such services, but it does not reflect cost differences between the states. First, to the extent that a state’s relative need for services is proportional to rural population and poverty, the population and poverty factors may serve as a proxy for need. But the formula’s reliance on poverty data can result in more funding to a state that has more resources to help itself than its poverty data would indicate. Such a state may have both a relatively high average income and a high level of poverty. Also, poverty data are not adjusted for cost-of-living differences across states. Second, the formula partially provides a means for measuring a state’s ability to pay for needed water or sewer services. The current formula’s unemployment factor provides an indirect measure of a state’s financial capacity but does not directly address a state’s ability to pay for services. In addition, the use of the unemployment rate as a targeting mechanism cannot be expected to reflect the economic conditions of rural areas. According to USDA, rural workers are more likely to rely on two or more part-time jobs rather than one full-time job. These part-time jobs do not show up in unemployment statistics. 
Also, the unemployment rate may not be representative of the economic condition of self-employed farmers, whose employment status is unlikely to change in good or bad times. On the other hand, the current formula does not adjust for cost differences. It does not recognize that the costs for building and maintaining water and sewer projects differ from one state to another. These costs can differ because of state-to-state differences in labor costs or other inputs as well as the amount of resources needed to accomplish the project. For example, costs may be higher because of a harsh winter climate or the topography of certain states, making it necessary to bury water or sewer pipes more deeply or to drill through rocky terrain. Most data that could be incorporated into a formula that addresses a community’s need, ability to pay, and cost differences are currently available. Appendix II provides details on the availability of such data. Any changes that would incorporate such data, however, could alter the amounts of loan and grant funds that states receive. Depending on the factors selected and their respective assigned weights, changes could be significant. The ultimate results of any changes would depend upon assumptions about the relative importance of factors. We did not analyze how potential changes would affect individual states. USDA state and district officials have the authority to vary the amount of grant and loan funds that they award to communities eligible to receive funding for water and sewer projects. The officials may base their decisions on either the applicant communities’ median household income (MHI) or the user rates for similar systems. This flexibility in funding decisions has the advantage of allowing state and district offices to vary the mix of grant and loan funds among competing projects. This same flexibility results in different funding decisions for similar communities. 
USDA state and district officials decide whether to provide only a loan or a mix of loan and grant funds for water and sewer projects by determining what constitutes an affordable payment or average user charge. As discussed in chapter 1, if a loan by itself would result in a user charge that is too high, officials can reduce the loan’s amount by providing grant funds. The amount of the grant is ultimately determined by considering a community’s MHI or the results of a comparison between the proposed system and other similar systems. USDA officials advised us that most funding decisions are based on user charges for similar systems in the area, rather than on the community’s MHI. According to a number of USDA state and district officials with whom we spoke, the option of comparing similar communities and systems provides them with latitude in distributing funds within the state. This option allows them to provide more or fewer funds to projects, depending on the number and cost of projects competing for funds. Accordingly, these states could either fund multiple projects at reduced grant levels or fewer projects at higher levels. USDA offices in all four states chose the former—assisting a larger number of projects with relatively lower grant amounts. For example, USDA officials in one state told us that they had a 4- to 5-year backlog of projects totaling about $220 million. In this state, when choosing similar systems for comparison, officials were more likely to pick systems with higher user charges, thus establishing a lower grant amount for the project under consideration and spreading grant funds among competing projects. According to USDA headquarters, state, and district officials, selecting comparable communities and user charges is inherently judgmental. Water and sewer systems and user charges can differ because of such factors as the type and age of the system and the size and density of the population served. 
While the flexibility for selecting similar systems provides latitude in determining the amount of a grant that a particular project will receive, it also means that differing funding decisions may be made for similar communities. We identified variations in funding decisions both between and within the four states we visited. Table 4.1 provides information on four communities—one from each of the four states we visited. The district and state offices in each of the states based their funding decisions for these communities on the user charges for similar water and/or sewer systems in comparable communities within their respective states. The table presents project development costs, the community’s MHI, the community’s maximum grant eligibility, the grant’s amount based on MHI, the amount of the grant awarded, the annual user charges, and the community’s user charges used for comparison. The community in State 1 was eligible for a grant of up to 55 percent of its project development costs, on the basis of its median household income when compared with the state’s median household income. As table 4.1 shows, USDA provided this community with the maximum grant, about $3.4 million. USDA arrived at an annual user charge of $147, which was comparable with the annual charges of three other communities. In contrast, the community in State 2 was eligible for a 75-percent grant but received no grant funds even though it had a median household income similar to that of the first community. Without a grant, the community in State 2 projected a user charge of $376 annually, which was 2-1/2 times higher than the user charge for the first community. However, this user charge was comparable with the three communities that the district office had selected for comparison in that state. USDA made differing funding decisions for these two communities. While the community in State 2 was eligible for a larger grant than the community in State 1, it received no grant at all. 
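The grant-sizing logic just described (providing grant funds to reduce the loan until the projected annual user charge matches the level of comparable systems, up to the community's eligibility cap) can be sketched as follows. This is an illustrative simplification, not USDA's actual method: the interest rate, loan term, and household count are assumed values, since the report does not state the actual loan terms.

```python
# Illustrative sketch (not USDA's actual method) of sizing a grant so that a
# loan-funded water or sewer project yields an annual per-household user
# charge comparable to similar systems. Interest rate, loan term, and
# household count are assumed for illustration only.

def annual_loan_payment(principal, rate, years):
    """Level annual payment on a fully amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

def grant_to_match_charge(project_cost, target_annual_charge, households,
                          max_grant_share, rate=0.045, years=40):
    """Smallest grant, capped at the community's eligibility share, that
    brings the per-household annual user charge down to the target level
    observed in the comparison communities."""
    # Total annual payment the target user charges can service.
    affordable_payment = target_annual_charge * households
    # Present value of $1 per year over the loan term (annuity factor).
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    # Largest loan principal those payments can fully amortize.
    affordable_principal = affordable_payment * annuity_factor
    grant_needed = max(0.0, project_cost - affordable_principal)
    return min(grant_needed, max_grant_share * project_cost)
```

With these assumed terms, a community whose comparison charge is already high enough to service the full loan receives no grant, as happened with the community in State 2, while a low comparison charge drives the grant toward the 55- or 75-percent eligibility cap.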
Similarly, the communities in the other two states were each eligible for a grant of 75 percent, but the grant amounts differed. One community’s annual user charge of $182 was close to an average of the three communities selected for comparison, while the other community received a grant amount that resulted in an annual user charge lower than that of any of the three systems identified as similar. We also found variations in the approaches used within individual states to determine how much grant funding, if any, USDA would provide to a particular community. Table 4.2 presents information similar to that in table 4.1 for four communities within the same state. For these communities, USDA based its decisions on user charges in similar communities. On the basis of its MHI, Community A was eligible for a grant of up to 55 percent of its project development costs and received a grant of $572,000. In contrast, Community B was eligible for a 75-percent grant but received no grant funds. For Community A, the annual user charge was $396, which was higher than the annual user charges for the three communities used for comparison. For Community B, the annual user charge was $376, which was higher than two of the communities used for comparison. USDA made differing funding decisions for Communities A and B. Community A, which had a higher MHI than Community B, received a grant, while Community B received no grant. Our analysis also showed that Communities B, C, and D were eligible for grants up to 75 percent ($5.5 million, $2.4 million, and $657,000, respectively). While USDA compared similar communities to arrive at projected user charges for these three applicants, it provided no grant to Community B, almost the maximum grant to Community C ($2.3 million), and less than half the maximum grant to Community D ($300,000). 
Several USDA district officials in this state told us that they regularly choose systems for comparison that support a $30 to $35 monthly charge because they believe that user charges in this range are necessary to get the state office’s approval for the project. However, USDA state officials disagreed with the district officials’ views that a $30 to $35 monthly charge was expected. Nonetheless, in another state, USDA state and district officials told us that they emphasize having a consistent outcome for user charges in their state. They informed us that they expected the awards to projects to result in monthly user charges of about $30 for water projects and about $35 for sewer projects. Also, within each of the four states visited, USDA’s rationale for making grant determination decisions was often not documented in the files. For example, files on the projects frequently showed that the similar systems approach was used but the communities and user charges selected for comparison were not identified. | Pursuant to a congressional request, GAO reviewed the Department of Agriculture's (USDA) process for allocating and distributing loan and grant funds for water and sewer projects, focusing on: (1) funding levels for the Water and Waste Disposal Program; (2) the formula used to allocate funds among USDA state offices; and (3) the approach that USDA state offices use to distribute funds within states. 
GAO held that: (1) the USDA water and sewer program has provided loan and grant support totaling about $28 billion, supporting almost 17,000 projects, and assisting over 12,500 communities throughout the United States; (2) the three factors that USDA considers in its allocation formula for water and sewer funds are rural population, rural poverty, and rural unemployment; (3) no state may receive more than 5 percent of the total available funds in the initial allocation; (4) the allocation formula may partially reflect states' needs and ability to pay, but it does not reflect cost differences between states; and (5) although USDA state and district offices have considerable flexibility in determining the amount of grant assistance for individual projects under the current approach, this flexibility can result in differing funding decisions for similar communities. |
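The three-factor allocation formula and 5-percent cap noted above can be sketched as follows. This is an illustrative simplification, assuming each state's share is a weighted sum of its fraction of the national rural population, rural poverty, and rural unemployment totals; the actual weights, data definitions, and treatment of amounts trimmed by the cap are not specified in this excerpt.

```python
# Illustrative sketch, not USDA's actual formula: each state's share is a
# weighted sum of its fraction of the three national factor totals, with no
# state receiving more than 5 percent of funds in the initial allocation.
# The factor weights are hypothetical.

def allocate(total_funds, factors, weights, cap_share=0.05):
    """factors: state -> (rural_population, rural_poverty, rural_unemployment).
    weights: relative importance of each factor, summing to 1."""
    national_totals = [sum(f[i] for f in factors.values())
                       for i in range(len(weights))]
    shares = {
        state: sum(w * f[i] / national_totals[i]
                   for i, w in enumerate(weights))
        for state, f in factors.items()
    }
    # Cap each state's initial allocation at cap_share of total funds.
    return {state: total_funds * min(share, cap_share)
            for state, share in shares.items()}
```

Adding cost-of-service or ability-to-pay factors, as the report discusses, would amount to extending the factor tuple and weights; as the report notes, the resulting shifts in state amounts could be significant depending on the weights chosen.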
FAA has authority to authorize all UAS operations in the national airspace—military; public (academic institutions and federal, state, and local governments including law enforcement organizations); and civil (non-government including commercial). Currently, because a final rulemaking has not been completed, FAA allows UAS access to the national airspace only on a case-by-case basis. FAA provides access to the airspace through three different means: Certificates of Waiver or Authorization (COA): Public entities including FAA-designated test sites may apply for a COA. A COA is an authorization, generally for up to 2 years, issued by the FAA to a public operator for a specific UAS activity. Between January 1, 2014, and March 19, 2015, FAA approved 674 public COAs. Special Airworthiness Certificates in the Experimental Category (Experimental Certificate): Civil entities, including commercial interests, may apply for experimental certificates, which may be used for research and development, training, or demonstrations by manufacturers. Section 333 exemptions: Since September 2014, commercial entities may apply to FAA for exemptions under section 333 of the 2012 Act, Special Rules for Certain Unmanned Aircraft Systems. Section 333 requires the Secretary of Transportation to determine whether certain UASs may operate safely in the national airspace system prior to the completion of UAS rulemakings. FAA has granted such exemptions to 48 of 684 total applications (7 percent) from companies or other entities applying under section 333. These companies may apply to fly at their own designated sites or the test sites. While limited operations continue through these means of FAA approval, FAA has been planning for further integration. In response to requirements of the 2012 Act, FAA issued the UAS Comprehensive Plan and the UAS Integration Roadmap, which broadly map the responsibilities and plans for the introduction of UAS into the national airspace system. 
These plans provide a broad framework to guide UAS integration efforts. The UAS Comprehensive Plan described the overarching interagency goals and approach and identified six high-level strategic goals for integrating UAS into the national airspace. The FAA Roadmap identified a broad three-phase approach to FAA’s UAS integration plans—Accommodation, Integration, and Evolution—with associated priorities for each phase that provide additional insight into how FAA plans to integrate UAS into the national airspace system. This phased approach has been supported by both academics and industry. FAA plans to use this approach to facilitate further incremental steps toward its goal of seamlessly integrating UAS flight into the national airspace. Accommodation phase: According to the Roadmap, in the accommodation phase, FAA will apply special mitigations and procedures to safely facilitate limited UAS access to the national airspace system in the near term. Accommodation is to predominate in the near term, with appropriate restrictions and constraints to mitigate any performance shortfalls. UAS operations in the national airspace system are considered on a case-by-case basis. During the near term, R&D is to continue to identify challenges, validate advanced mitigation strategies, and explore opportunities to progress UAS integration into the national airspace system. Integration phase: The primary objective of the integration phase is establishing performance requirements for UAS that would increase access to the NAS. During the mid- to far-term, FAA is to establish new or revised regulations, policies, procedures, guidance material, training, and understanding of systems and operations to support routine NAS operations. FAA plans for the integration phase to begin in the near- to mid-term with the implementation of the small UAS rule and is to expand the phase further over time (mid- and far-term) to consider wider integration of a broader field of UASs. 
Evolution phase: In the evolution phase, FAA is to work to routinely update all required policy, regulations, procedures, guidance material, technologies, and training to support UAS operations in the NAS operational environment as it evolves over time. According to the Roadmap, it is important that the UAS community maintains the understanding that the NAS environment is not static and that many improvements are planned for the NAS over the next 13 to 15 years. To avoid obsolescence, UAS developers are to maintain a dual focus: integration into today’s NAS while maintaining cognizance of how the NAS is evolving. In February 2015, FAA issued a Notice of Proposed Rulemaking for the operations of small UASs—those weighing less than 55 pounds—that could, once finalized, allow greater access to the national airspace. To mitigate risk, the proposed rule would limit small UASs to daylight-only operations, confined areas of operation, and visual-line-of-sight operations. FAA’s release of this proposed rule for small UAS operations started the process of addressing additional requirements of the 2012 Act. See table 1 for a summary of the rule’s major provisions. FAA has also met additional requirements outlined in the 2012 Act pertaining to the creation of UAS test sites. In December 2013, FAA selected six UAS test ranges. According to FAA, these sites were chosen based on a number of factors including geography, climate, airspace use, and a proposed research portfolio that was part of the application. All UAS operations at a test site must be authorized by FAA through either the use of a COA or an experimental certificate. In addition, there is no funding from FAA to support the test sites. Thus, these sites rely upon revenue generated from entities, such as those in the UAS industry, using the sites for UAS flights. Foreign countries are also experiencing an increase in UAS use, and some have begun to allow commercial entities to fly UASs under limited circumstances. 
According to industry stakeholders, easier access to testing in these countries’ airspace has drawn the attention of some U.S. companies that wish to test their UASs without needing to adhere to FAA’s administrative requirements for flying UASs at one of the domestically located test sites, or obtaining an FAA COA. It has also led at least one test site to partner with a foreign country where, according to the test site operator, UAS test flights can be approved in 10 days. Since being named in December 2013, the six designated test sites have become operational, applying for and receiving authorization from FAA to conduct test flights. From April 2014 through August 2014, as we were conducting our ongoing work, each of the six test sites became operational and signed an Other Transaction Agreement with FAA. All flights at a test site must be authorized under the authority of a COA or under the authority of an experimental certificate approved by FAA. Between becoming operational in 2014 and March 2015, five of the six test sites received 48 COAs and one experimental certificate in support of UAS operations, resulting in over 195 UAS flights across the five test sites. These flights provide operations and safety data to FAA in support of UAS integration. While there are only a few contracts with industry thus far, according to test site operators these are important if the test sites are to remain operational. Table 2 provides an overview of test-site activity since the sites became operational. FAA officials and some test sites told us that progress has been made in part because of FAA’s and the sites’ efforts to work together. Test site officials meet every two weeks with FAA officials to discuss current issues, challenges, and progress. According to meeting minutes, these meetings have been used to discuss many issues, from training for designated airworthiness representatives to processing of COAs. 
In addition, test sites have developed operational and safety processes that have been reviewed by FAA. Thus, while FAA has no funding directed to the test sites to specifically support research and development activities, FAA dedicates time and resources to supporting the test sites, and FAA staff we spoke to believe test sites are a benefit to the integration process and worth this investment. According to FAA, its role is to ensure each test site sets up a safe testing environment and to provide oversight that guarantees each test site operates under strict safety standards. FAA views the test sites as a location for industry to safely access the airspace. FAA told us it expects to collect data obtained from the users of the test ranges that will contribute to the continued development of standards for the safe and routine integration of UASs. The Other Transaction Agreement between FAA and the test sites defines the purpose of the test sites as research and testing in support of safe UAS integration into the national airspace. As FAA and the test sites continue working together to define the test sites’ role and to ensure that each effectively supports the other and the test sites’ goal, we will continue to examine this progress and will report our final results later this year. As part of our ongoing work, we identified a number of countries that allow commercial UAS operations and have done so for years. In Canada and Australia, regulations pertaining to UAS have been in place since 1996 and 2002, respectively. According to a MITRE study, the types of commercial operations allowed vary by country. For example, as of December 2014, Australia had issued over 180 UAS operating certificates to businesses engaged in aerial surveying, photography, and other lines of business. In Japan, the agriculture industry has used UASs to apply fertilizer and pesticide for over 10 years. 
Furthermore, several European countries have granted operating licenses to more than 1,000 operators to use UASs for safety inspections of infrastructure, such as rail tracks, or to support the agriculture industry. The MITRE study reported that the speed of change can vary based on a number of factors, including the complexity and size of the airspace and the supporting infrastructure. In addition, according to FAA, the legal and regulatory structures are different and may allow easier access to the airspace in other countries for UAS operations. While UAS commercial operations can occur in some countries, there are restrictions controlling their use. We studied the UAS regulations of Australia, Canada, France, and the United Kingdom and found these countries impose similar types of requirements and restrictions on commercial UAS operations. For example, all of these countries except Canada require government-issued certification documents before UASs can operate commercially. In November 2014, Canada issued new rules creating exemptions for commercial use of small UASs weighing 4.4 pounds or less and from 4.4 pounds to 55 pounds. UASs in these categories can operate commercially without a government-issued certification but must still follow operational restrictions, such as a height restriction and a requirement to operate within line of sight. Transport Canada officials told us this arrangement allows them to use scarce resources to regulate situations of relatively high risk. In addition, each country requires that UAS operators document how they will ensure safety during flights, and their UAS regulations go into significant detail on subjects such as remote pilot training and licensing requirements. For example, the United Kingdom has established “national qualified entities” that conduct assessments of operators and make recommendations to the Civil Aviation Authority as to whether to approve that operator. 
If UASs were to begin flying today in the national airspace system under the provisions of FAA’s proposed rules, their operating restrictions would be similar to regulations in these other four countries. However, there would be some differences in the details. For example, FAA proposes restricting operations to altitudes below 500 feet, while Australia, Canada, and the United Kingdom restrict operations to similar altitudes. FAA’s proposed rules also require that FAA certify UAS pilots prior to commencing operations, while Canada and France do not require pilot certification. Table 3 shows how FAA’s proposed rules compare with the regulations of Australia, Canada, France, and the United Kingdom. While regulations in these countries require that UAS operations remain within the pilot’s visual line of sight, some countries are moving toward allowing limited operations beyond the pilot’s visual line of sight. For example, according to Australian civil aviation officials, they are developing a new UAS regulation that would allow operators to request a certificate allowing beyond line-of-sight operations. However, use would be very limited and allowed only on a case-by-case basis. Similarly, according to a French civil aviation official, France approves, on a case-by-case basis, very limited beyond line-of-sight operations. Finally, in the United States, there have been beyond line-of-sight operations in the Arctic, and NASA, FAA, and industry have successfully demonstrated detect-and-avoid technology, which is necessary for beyond line-of-sight operations. In March 2015, the European Aviation Safety Agency (EASA) issued a proposal for UAS regulations that creates three categories of UAS operations—open, specific, and certified. Generally, the open category would not require authorization from an aviation authority but would have basic restrictions including altitude and distance from people. 
The specific category would require a risk assessment of the proposed operation and an approval to operate under restrictions specific to the operation. The final proposed category, certified operations, would be required for higher-risk operations, specifically when the risk rises to a level comparable to manned operations. This category goes beyond FAA’s proposed rules by proposing regulations for large UAS operations and operations beyond the pilot’s visual line of sight. As other countries work toward integration, standards organizations from Europe and the United States are coordinating to try to ensure harmonized standards. Specifically, RTCA and the European Organization for Civil Aviation Equipment (EUROCAE) have joint committees focused on harmonization of UAS standards. We found during our ongoing work that FAA faces some critical steps to keep the UAS integration process moving forward, as described below: Issue final rule for small UASs: As we previously discussed, the NPRM for small UAS was issued in February 2015. However, FAA still must process the comments it receives on the NPRM and then issue a final rule for small UAS operations. FAA told us that it is expecting to receive tens of thousands of comments on the NPRM. Responding to these comments could extend the time to issue a final rule. According to FAA, its goal is to issue the final rule 16 months after the NPRM, but it may take longer. If this goal is met, the final rule would be issued in late 2016 or early 2017, about 2 years later than the 2012 Act required. FAA officials told us that the agency has taken a number of steps to develop a framework to efficiently process the comments it expects to receive. Specifically, the officials said that FAA has a team of employees assigned to lead the effort with contractor support to track and categorize the comments as soon as they are received. 
According to FAA officials, the challenge of addressing comments could be somewhat mitigated if industry groups consolidated comments, thus reducing the total number of comments that FAA must address. Implementation plan: The Comprehensive Plan and Roadmap provide broad plans for integration, but some have pointed out that FAA needs a detailed implementation plan to predict with any certainty when full integration will occur and what resources will be needed. The UAS Aviation Rulemaking Committee developed a detailed implementation plan to help FAA and others focus on the tasks needed to integrate UAS into the national airspace. The committee recognized the need for an implementation plan that would identify the means, necessary resources, and schedule to safely and expeditiously integrate civil UASs into the national airspace. The proposed implementation plan contains several hundred tasks and other activities needed to complete the UAS integration process. FAA stated it used this proposed plan and the associated tasks and activities when developing its Roadmap. However, unlike the Roadmap, an implementation plan would include specific resources and time frames to meet the near-term goals that FAA has outlined in its Roadmap. An internal FAA report from August 2014 discussed the importance of incremental expansion of UAS operations. While this report did not specifically propose an implementation plan, it suggested that for each incremental expansion of operations, FAA identify the necessary tasks, responsibilities, resources, and expected time frames. Thus, the internal report suggested FAA develop plans to account for all the key components of an implementation plan. The Department of Transportation’s Inspector General issued a report in June 2014 that contained a recommendation that FAA develop such a plan. FAA mentioned concerns regarding the augmentation of appropriations and limitations on accepting voluntary services. 
As a general proposition, an agency may not augment its appropriations from outside sources without specific statutory authority. The Antideficiency Act prohibits federal officers and employees from, among other things, accepting voluntary services except for emergencies involving the safety of human life or the protection of property. 31 U.S.C. § 1342. All operations conducted by the test sites must have a COA, and FAA requires the test sites to provide safety and operations data collected for each flight. Test site operators have told us incentives are needed to encourage greater UAS operations at the test sites. The operators explained that industry has been reluctant to operate at the test sites because, under the current COA process, a UAS operator has to lease its UAS to the test site, thus potentially exposing proprietary technology. With a special airworthiness certificate in the experimental category, the UAS operator would not have to lease its UAS to the test site, therefore protecting any proprietary technology. FAA is, however, working on providing additional flexibility to the test sites to encourage greater use by industry. Specifically, FAA is willing to train designated airworthiness representatives for each test site. These individuals could then approve UASs for a special airworthiness certificate in the experimental category for operation at a test site. As previously indicated, three test sites had designated airworthiness representatives aligned with the test site, but only one experimental certificate had been approved. More broadly, we were told that FAA could do more to make the test sites accessible. According to FAA and some test site operators, FAA is working on creating a broad area COA that would allow easier access to the test site’s airspace for research and development. Such a COA would allow the test sites to conduct the airworthiness certification, typically performed by FAA, and then allow access to the test site’s airspace. 
As previously stated, one test site received 4 broad area COAs that were aircraft specific. Officials from test sites we spoke with during our ongoing work were seeking broad area COAs that were aircraft “agnostic”—meaning any aircraft could operate under the authority of that COA. According to FAA officials, in an effort to make test sites more accessible, they are working to expand the number of test ranges associated with the test sites, but not to increase the number of test sites. Currently, test sites have ranges in 14 states. Public education program: UAS industry stakeholders and FAA have begun an educational campaign that provides prospective users with information and guidance on flying safely and responsibly. A public education campaign on allowed and safe UAS operations in the national airspace may ease public concerns about privacy and support a safer national airspace in the future. UASs operating without FAA approval, or model aircraft operating outside of the safety code established by the Academy of Model Aeronautics, potentially present a danger to others operating in the national airspace. To address these safety issues, FAA has teamed up with industry to increase public awareness and inform those wishing to operate UASs how to do so safely. For example, three UAS industry stakeholders and FAA launched an informational website for UAS operators. UASs are increasingly available online and on store shelves. Prospective operators—from consumers to businesses—want to fly and fly safely, but many do not realize that just because you can easily acquire a UAS, that does not mean you can fly it anywhere, or for any purpose. “Know Before You Fly” is an educational campaign that provides prospective users with information and guidance on flying safely and responsibly (see table 4). 
UAS and air traffic management: As FAA and others continue to address the challenges to UAS integration, they must also account for expected changes to the operations of the national airspace system as part of the Next Generation Air Transportation System (NextGen). FAA has stated that the safe integration of UAS into the national airspace will be facilitated by new technologies being deployed. However, according to one stakeholder, UASs present a number of challenges that the existing national airspace is not set up to accommodate. For example, unlike manned aircraft, UASs that currently operate under COAs do not typically follow a civil aircraft flight plan where an aircraft takes off, flies to a destination, and then lands. Such flights require special accommodation by air traffic controllers. Additionally, the air traffic control system uses navigational waypoints for manned aircraft, while UASs use Global Positioning System coordinates. Finally, if a UAS loses contact with its ground-control station, the air traffic controller might not know what the UAS will do to recover and how that may affect other aircraft in the vicinity. NextGen technologies, according to FAA, are continually being developed, tested, and deployed at the FAA Technical Center, and FAA officials are working closely with MITRE to leverage all available technology for UAS integration. Chairman Ayotte, Ranking Member Cantwell, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or dillinghamg@gao.gov. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Individuals making key contributions to this testimony include Brandon Haller, Assistant Director; Daniel Hoy; Eric Hudson; Bonnie Pignatiello Leer; and Amy Rosewarne. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | UAS—often called drones—are aircraft that do not carry a pilot but instead operate on pre-programmed routes or are manually controlled. Currently, UAS only operate in the United States with FAA approval on a case-by-case basis. However, in the absence of regulations, unauthorized UAS operations have, in some instances, compromised safety. The FAA Modernization and Reform Act of 2012 emphasized the need to integrate UAS into the national airspace by requiring that FAA establish requirements governing them. In response, FAA has taken a number of steps, most notably, issuing an NPRM for small UAS operations, and designating six UAS test sites which became operational in 2014 and have begun to conduct test flights. Other countries have started to integrate UAS as well, and many currently allow commercial operations. This testimony provides preliminary observations on 1) status of FAA's test sites, 2) how other countries have progressed integrating UAS for commercial purposes, and 3) critical steps for FAA going forward. This testimony is based on GAO's ongoing study examining issues related to UAS integration into the national airspace system for UAS operations. To conduct this work, GAO reviewed documents and met with officials from test sites, FAA, and industry stakeholders. 
Since becoming operational in 2014, the Federal Aviation Administration's (FAA) unmanned aerial systems (UAS) test sites have conducted over 195 flights across five of the six test sites. These flights provide operations and safety data that FAA can use in support of integrating UAS into the national airspace. FAA has not provided funding to the test sites in support of research and development activities but has provided staff time through, for example, biweekly meetings to discuss ongoing issues with test site officials. FAA staff said that the sites are a benefit to the integration process and worth this investment. GAO's preliminary observations found that other countries have progressed toward UAS integration and allow commercial use. GAO studied the UAS regulations in Australia, Canada, France, and the United Kingdom and found that these countries have similar rules and restrictions on commercial UAS operations, such as allowing line-of-sight operations only. In November 2014, Canada issued new rules creating exemptions for UAS operations based on size and relative risk. In addition, as of December 2014, Australia had issued over 180 UAS operating certificates to businesses engaged in aerial surveying, photography, and other lines of business. Under the provisions of FAA's proposed rules, operating restrictions would be similar to regulations in these other four countries. For example, all countries have UAS altitude restrictions of 500 feet or below.
Section 232 of the National Housing Act, as amended, authorizes FHA to insure mortgages made by private lenders to finance the construction or renovation of nursing homes, intermediate care facilities, board and care homes, and assisted living facilities. Congress established the Section 232 program in 1959 to provide mortgage insurance for the construction and rehabilitation of nursing homes. The Housing and Community Development Act of 1987 expanded the program to allow for the insuring of refinancing or purchase of FHA-insured facilities and, in 1994, HUD issued regulations implementing legislation to expand the program to allow for the insuring of assisted living facilities and the refinancing of loans for facilities not previously insured by FHA. Since 1960, FHA has insured 4,372 loans through the Section 232 program in all 50 states, the District of Columbia, the U.S. Virgin Islands, and Puerto Rico. As of the end of fiscal year 2005, there were 2,054 currently insured loans. FHA does not insure all residential care facilities, as there are approximately 16,500 nursing home facilities and over 36,000 assisted living facilities in operation. We did not identify any private mortgage insurance that is currently available for loans made to nursing homes or other similar facilities. According to HUD officials, the Section 232 program now exists, in part, to support the market for residential care facilities when private lenders are reluctant to finance such projects because of market conditions. The loans are advantageous to borrowers because they are nonrecourse loans, whereby the lender (in this case, the lender and the insurer, FHA) has no claim against the borrower in the event of default and can only recover the property. 
The loans are also generally long term (in some cases up to 40 years) and, according to HUD and lender officials, offer an interest rate that is, in many cases, lower than what private lenders offer for non-FHA insured loans made to nursing homes and other similar facilities. Additionally, FHA insures 99 percent of the unpaid principal balance plus accrued interest. HUD administers the Section 232 program through its field offices, with HUD headquarters oversight. HUD’s field structure consists of 18 Hub offices and 33 program centers. Generally, each Hub office has a number of program centers that report to it. Program centers administer multifamily programs within the states in which they are located or portions thereof. Hub offices also administer multifamily programs, as well as augment the operations of and coordinate workload between their program centers. Under Medicaid, states set their own nursing home payment rates (reimbursement rates), and the federal government provides funds to match states’ share of spending as determined by a federal formula. Within broad federal guidelines, states have considerable flexibility to set reimbursement rates for nursing homes that participate in Medicaid but are required to ensure that payments are consistent with efficiency, economy, and quality of care. Under Medicare, skilled nursing facilities receive a federal per diem payment that reflects the resident’s care needs and is adjusted for geographic differences in costs. While the decentralization of the program allows field offices some flexibility in their specific practices, the results of our visits to five field offices revealed differences in the extent to which field office staff were aware of current program requirements. Further, while individual offices had developed useful practices for implementing the program’s loan underwriting and monitoring requirements, they lack a mechanism for systematically sharing practices with other offices. 
We also found that field office officials were concerned about adequate current or future levels of staff expertise—a critical factor in avoiding unwarranted risk in the Section 232 program, because health care facility loans are generally more complicated and require more specialized expertise than loans insured under HUD’s other multifamily programs. Lack of awareness of current requirements and insufficient staff expertise can contribute to insuring loans with increased risks. Both factors are related to recommendations made in the HUD Office of Inspector General’s 2002 report that HUD has not fully addressed (see app. III for further information on weaknesses identified by HUD’s Inspector General). FHA has numerous underwriting requirements for loans insured under the Section 232 program; for example, facilities must provide evidence of market need and a real estate appraisal and must comply with limits on loan-to-value and debt service coverage ratios. FHA also requires a variety of reviews for monitoring Section 232 loans. (Loan underwriting and monitoring requirements, which can involve fairly complex reviews and analyses, are described in more detail in app. II.) According to HUD headquarters officials, the field offices that administer the Section 232 program are required to follow all program statutes and regulations, but the decentralization of the program allows field offices some flexibility in their specific practices. For example, individual field offices can designate how to staff the underwriting and monitoring of Section 232 loans, depending on such factors as loan volume relative to other multifamily programs, to fully utilize resources. HUD headquarters provides guidance on program policies and requirements; when necessary, reviews applications for certain types of loans, such as those submitted by nonprofit entities; and provides technical assistance or additional guidance and support if contacted by field offices. 
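The loan-to-value and debt service coverage ratios mentioned above are standard underwriting measures. The following is a minimal sketch of how a loan might be screened against them; the dollar figures and the MAX_LTV and MIN_DSCR thresholds are hypothetical assumptions for illustration, not HUD's actual Section 232 limits.

```python
# A minimal sketch of loan-to-value (LTV) and debt service coverage
# ratio (DSCR) screening. All dollar figures and the MAX_LTV / MIN_DSCR
# thresholds are hypothetical; they are not HUD's actual Section 232 limits.

def loan_to_value(loan_amount, appraised_value):
    """LTV: the mortgage amount as a share of the property's appraised value."""
    return loan_amount / appraised_value

def debt_service_coverage(net_operating_income, annual_debt_service):
    """DSCR: net operating income relative to required annual debt payments."""
    return net_operating_income / annual_debt_service

loan = 9_000_000          # proposed mortgage amount
value = 10_000_000        # appraised value of the facility
noi = 1_200_000           # annual net operating income
debt_service = 850_000    # annual principal and interest payments

ltv = loan_to_value(loan, value)
dscr = debt_service_coverage(noi, debt_service)

MAX_LTV = 0.90   # assumed ceiling: loan may not exceed 90% of value
MIN_DSCR = 1.11  # assumed floor: income must cover debt service with a cushion
passes = ltv <= MAX_LTV and dscr >= MIN_DSCR
print(f"LTV={ltv:.2f}, DSCR={dscr:.2f}, passes underwriting screen: {passes}")
```

The direction of the two tests is the substantive point: a lower LTV and a higher DSCR both indicate a less risky loan, which is why the program caps the former and sets a floor on the latter.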
HUD headquarters staff also conduct Quality Management Reviews, which are management reviews of field offices administering HUD programs and services. For these reviews, evaluators visit offices and coordinate subsequent reports. The process also involves reporting the status of follow-up corrective actions. While not focused on the Section 232 program, this process helps to oversee the program by reviewing the management of the field offices that administer it. We found that the five field offices that we visited varied in their understanding and awareness of policies related to the Section 232 program. For example, staff in two field offices said that their standard regulatory agreement (which serves as the basic insurance contract and spells out the respective obligations of FHA, the lender, and the borrower) did not include language that would require operators of insured facilities to submit financial statements on new loans. According to officials at HUD headquarters, field offices should be using language requiring these financial statements. HUD and most lender officials we interviewed told us that operator financial statements provide information on the legal entity operating the facility in cases where the borrower and the operator of the residential care facility are different entities. These officials also stated that, in such situations, borrower financial statements may not disclose expenses, income, and other financial information, and may only show the transactions between the borrower and operator, thus making operator financial statements a necessity. Also, HUD’s Inspector General identified HUD’s lack of a requirement for operators to submit financial statements electronically as part of an internal control weakness for the Section 232 program. Additionally, we found that the field offices that we visited were not always aware of specific notices that established new requirements or processes for the Section 232 program. 
For example: Four of the five field offices that we visited were not aware of a notice that disqualifies potential Section 232 borrowers if they have had a bankruptcy in their past. According to HUD headquarters officials, this policy is intended to protect HUD from insuring a potentially risky loan based on a borrower’s financial history. Officials at four of the five field offices we visited did not know about required addendums to the regulatory agreement regarding state licensing requirements for nursing homes. HUD developed these addendums to place a lien on a property’s operational documents, such as a Certificate of Need and state licenses, to prevent operators from taking these documents with them upon termination of a property’s lease. Without these documents, a facility may not be able to operate and, consequently, the property’s value would be greatly diminished. According to HUD headquarters officials, HUD headquarters communicates changes in the Section 232 program’s policies and procedures to field offices in a variety of ways besides sending formal notices. For example, HUD headquarters also posts some notices on a “frequently asked questions” section of a Web site available to field offices, lenders, attorneys, and others. HUD headquarters officials also conduct nationwide conference calls with the field offices in which various HUD multifamily programs, including the Section 232 program, are discussed. The conference calls are conducted separately for loan development staff and asset management staff that work, respectively, on the underwriting and monitoring of loans. HUD headquarters officials stated that these conference calls provide a forum to disseminate information to the field offices and for individual field offices to discuss any issues, questions, or concerns regarding any multifamily programs, including the Section 232 program. 
HUD headquarters officials stated that they plan to address the lack of awareness we observed by updating the “Multifamily Asset Management and Project Servicing Handbook” to clarify current policies and requirements for the Section 232 program. HUD is also planning to update the handbook to address the 2002 HUD Inspector General report, which found that HUD's current handbook was not specific to Section 232 nursing home operations. However, HUD officials told us the updates to the handbook would not be completed until the proposed revisions to the applicable regulatory agreements have been approved. The proposed revisions have been awaiting approval since August 2004, and it is not clear when the revised agreements will be approved. As discussed earlier, field offices have some flexibility in the practices that they use in administering the Section 232 program. In our visits to five field offices, we found a variety of practices that could be useful in the underwriting and monitoring of Section 232 loans if shared with other field offices. However, HUD currently does not have a systematic means of sharing this information among field offices. Officials in two of the five field offices we visited identified specific practices they had developed to carry out loan underwriting requirements. For example: Asset management staff, whose focus is monitoring the performance of loans that are already insured, are asked to review a variety of documents submitted in the underwriting process, such as financial statements and information on the occupancy of the facility. In one of the offices, staff members may contact relevant state officials, just before the closing of a loan, to verify that the state has not identified any quality of care deficiencies since the facility submitted the application for mortgage insurance. 
Officials in one office stated that they conduct an additional review before approving a loan application for mortgage insurance to ensure that all required steps, such as mortgage credit analysis and valuation, have been properly performed. According to the officials in these two offices, it is necessary to take these additional steps in order to adequately underwrite a loan under this program. They stated that the additional steps result in the better screening of loan risk and could result in the rejection of a risky loan they might otherwise approve. We found a similar variety of practices in the monitoring of Section 232 loans. In some cases, field offices we visited had taken additional steps beyond those required by HUD. For example: While HUD requires a review of the annual financial statements of insured facilities, two field offices that we visited require monthly financial accounting reports from facilities either for the first year of the loan or until the facility has reached stable occupancy. Two field offices had developed their own specialized checklists for monitoring Section 232 loans. These checklists were specifically designed for the oversight of residential care facility loans and included items such as the facility’s replacement reserve accounts and professional liability insurance, among other items. One of the offices had established a Section 232 working group, where underwriting and asset management staff met periodically to discuss loans in the portfolio and issues related to the overall management of the program in the field office. Additionally, three of the five field offices we visited had specialized staff with expertise in overseeing residential care facility loans. These were asset management staff whose primary or sole responsibility was oversight of the Section 232 portfolio. 
While HUD headquarters officials stated that they do not require management reviews of Section 232 facilities, three of the five field offices we visited conducted management reviews on some part of their Section 232 portfolios. One field office obtained the state annual inspection reports on its Section 232 facilities on a regular basis. According to officials in these offices, the unique characteristics associated with residential care facilities make the additional measures necessary. Officials in field offices we visited that had developed these specific practices stated that the practices result in better underwriting and monitoring of loans and could potentially help to prevent claims. However, HUD field offices do not have a systematic means by which to share information with other field offices about practices they have developed. While field office officials can raise concerns and issues through conference calls with HUD headquarters officials, most explained that these conference calls are not particularly designed for field offices to share practices with other field offices. Officials in the five field offices that we visited told us that they occasionally contact their counterparts in other field offices regarding loan processing or asset management questions or issues. Additionally, officials in some field offices said that they occasionally see their counterparts at regional lender conferences. However, aside from these forms of contact, there was no systematic method by which to learn about other field office practices. Consequently, officials in one field office are likely to be unaware of additional steps or practices taken by another field office that are intended to help officials improve underwriting or monitoring of Section 232 loans. Officials at all field offices that we visited told us that they could benefit from the sharing of such practices regarding underwriting and monitoring procedures established by different offices. 
Officials in two of the five field offices stated that a lack of expertise on residential care facility loans, either in underwriting or loan oversight, is a current concern in their office. They specifically noted a lack of expertise in residential care facilities and their overall management. Officials in all of the field offices that we visited stated that additional training on Section 232 loans would be beneficial to provide more knowledge and expertise, as there has been very little Section 232-specific training. In its 2002 report, HUD’s Office of Inspector General also identified that field office project managers did not have sufficient training on reviewing Section 232 loans and dealing with the issues unique to Section 232 properties. All of the private lenders we interviewed—those that offer non-FHA insured loans to residential care facilities and face similar risks to FHA— had a specialized group that conducted the underwriting of these loans. All of the individuals that conducted the underwriting of these loans were part of a health care lending unit that focused exclusively on loans made to health care facilities. According to the lenders, they believed it was necessary to have specialized staff underwriting such loans due to the unique nature of lending money to a facility that was designed for a residential health care business. Additionally, almost all of the private lenders we interviewed had specialized staff that monitored their residential care facility loans. According to lender staff we interviewed, nursing home and assisted living facility loans require an understanding of the market, trends, expenses, income, and other such unique characteristics associated with these types of facilities. 
While officials in only two of the five offices expressed concern about the expertise of current staff, officials in all field offices we visited stated that they are concerned about the ability to adequately staff the Section 232 program in the next 5 years. They stated that as older staff retire in the next 5 years or so, any expertise that such staff currently have will take time to replace. All of the field offices that we visited staffed the underwriting process for Section 232 loans similarly to that of other multifamily programs, based on workload and staff resources. However, while two field offices assigned their Section 232 properties, along with other multifamily properties, to general asset management staff for oversight, three field offices designated specific staff to oversee Section 232 properties. The latter offices did so because their officials believed, as did the private lenders we interviewed, that the properties require a certain level of knowledge and expertise associated with residential care facilities. Expertise in Section 232 loans allows for a better understanding of the distinct issues associated with oversight of residential care facilities. In one of the offices that had general asset management staff overseeing the portfolio, eight project managers shared responsibility for monitoring Section 232 properties in conjunction with other multifamily program properties. In contrast, in one of the offices with staff designated specifically for the Section 232 program, one member of the asset management staff was responsible for the entire Section 232 portfolio. Officials from the two field offices that have experienced staff specialized in monitoring Section 232 loans stated that they are concerned about losing their specialized staff over time and acknowledged that they will need to find replacements in order to continue to adequately monitor Section 232 loans. 
Their concern stems in part from the fact that Section 232 facilities, unlike other multifamily properties, require specialized knowledge and an understanding of the marketing, trends, and revenue streams associated with residential care facilities. According to officials in all of the field offices that we visited, monitoring of Section 232 loans, when compared with other FHA-insured multifamily programs, requires additional measures. Section 232 loans contain a complex business component—the actual assisted living service or the nursing service operating in a facility—making them different from other multifamily programs that are solely realty loans. Consequently, for Section 232 loans, field office officials monitor the financial health of the business, including expenses, income, and other such items. Some field office officials also stated that it is important to monitor the operator to ensure that the facility is adequately managed. Additionally, some field office officials stated that to ensure the facility is generating enough income, they have to monitor Medicare and Medicaid reimbursement rates, as well as occupancy rates. According to HUD headquarters officials, as part of its overall strategic human capital efforts, HUD is currently assessing the loss of human capital in field offices over time. However, this effort is not focused on the Section 232 program specifically but is intended to examine general human capital issues and needs. FHA requires field office officials, when processing applications for Section 232 mortgage insurance from existing state-licensed facilities, to review the most recent annual state-administered inspection report for the facilities, but does not require the continued monitoring of annual inspection reports for state-licensed facilities once it has insured them. Four of the five HUD field offices we visited do not routinely collect annual inspection reports for the insured facilities they oversee. 
While such reports are but one of several means of monitoring insured properties, FHA’s limited use of them may lead the agency to overlook potential indicators of risk for some of its insured loans. State inspections or surveys of residential care facilities may stem from state licensing requirements or the facilities’ participation in Medicare or Medicaid. Nursing homes are state licensed, while states vary in their licensing requirements for assisted living facilities. The Department of Health and Human Services’ Centers for Medicare & Medicaid Services requires that nursing homes receiving Medicare and Medicaid funding be federally certified, and all certified facilities are subject to annual federal inspections administered by the states. State survey agencies, under agreements between the states and the Secretary of Health and Human Services, conduct the annual federally required inspections. To complete the annual inspections, teams of state surveyors visit Medicare and Medicaid participating facilities and assess compliance with federal facility requirements, particularly whether care and services provided meet the assessed needs of the residents. These teams also assess the quality of care provided to residents of the facilities, looking at indicators such as the prevention of avoidable pressure sores, weight loss, or accidents. Overall, annual inspections provide a regular review of quality of care by officials with relevant backgrounds, such as registered nurses, social workers, dieticians, and other specialists. For facilities that are applying for mortgage insurance under the Section 232 program, FHA requires a copy of the state license needed to operate the facility and a copy of the latest state annual inspection report on the facilities’ operation. 
HUD’s “Multifamily Asset Management and Servicing Handbook” recommends that, once nursing home loans are insured under the program, HUD officials responsible for loan monitoring continue to review state annual inspection reports if they do not undertake management reviews of the facility. Management reviews focus on an insured facility’s financial indicators and general management practices, but, particularly if conducted on-site, could provide some information on issues related to the quality of care at a facility. Because of their wider scope, however, management reviews would not likely go into the same depth on quality of care issues as annual inspections. HUD headquarters officials told us that the handbook’s recommendation applies to all Section 232 facilities; further, HUD headquarters officials stated that management reviews for Section 232 properties should be conducted based on need and available resources. We found that two of the five field offices we visited did not conduct any regular management reviews and did not review annual inspection reports during loan monitoring. Of the three field offices that did conduct management reviews on some Section 232 properties, one also reviewed annual inspection reports during loan monitoring. Additionally, the offices that did not review annual inspection reports had little direct interaction with the state agencies. Private lenders overseeing non-FHA insured residential care facilities told us that they regularly conduct various levels of management reviews and review annual inspection reports on a consistent basis. FHA has emphasized the importance of ongoing coordination with state oversight agencies in its proposed revisions to its regulatory agreements, which require owners or operators of insured facilities to report any state or federal violations to FHA. 
HUD’s proposed revisions to the regulatory agreements also include a requirement that the owner or operator provide HUD with copies of annual inspection reports that can be used as part of loan monitoring. However, the proposed revisions to the regulatory agreements have yet to be approved. Serious quality of care deficiencies can have a variety of implications that affect cash flow streams, ranging from a related reduction in occupancy to the potential for civil money penalties and loss of licensing and reimbursements. Consequently, quality of care concerns can ultimately affect a facility’s financial condition. For many Section 232 properties, in particular nursing homes, state oversight of quality of care helps to determine whether a facility is licensed and eligible to receive Medicaid and Medicare reimbursements. This is particularly important to the Section 232 program because, as noted earlier in this report, Medicaid and Medicare reimbursements typically account for a significant portion of nursing home income. Federal or state annual inspection reports, to the extent that they are available for facilities, provide regular evaluations of nursing homes and other residential care facilities. As discussed earlier, annual inspections provide a review of quality of care by officials with relevant backgrounds. In a 2005 report, we found inconsistencies across states in conducting surveys and instances of state surveyors understating serious deficiencies in quality of care. Nonetheless, annual inspection reports serve as an important indicator of a property’s risk related to problems with the quality of care provided to residents. Annual inspection reports, coupled with other information such as facility staffing profiles, resident turnover, and data from financial statements, could assist HUD’s field offices in overseeing loan performance. 
Additionally, reviewing facilities’ quality of care records over time, as well as any corrective action plans needed to come into compliance with state and federal quality of care requirements, could further the field offices’ ability to identify loan performance risks. The reports may also prompt HUD field office officials to communicate with federal or state nursing home regulatory agencies for further information on facilities that appear to be high risk. These agencies may have available information on civil money penalties and sanctions, which serve as additional indicators of quality of care risk. Private lenders we spoke with acknowledged that annual inspection reports provided insight into the management of a facility and, coupled with other information, could help to assess financial risk. The Section 232 program represents a relatively small share of the broader GI/SRI Fund. However, program and industry trends show sources of potential risks that could affect the future performance of the Section 232 portfolio and the GI/SRI Fund. FHA uses a number of tools to mitigate risk to the program and to the fund. The Section 232 program is a relatively small share of the total GI/SRI Fund. HUD estimated that the program would represent only about 5.3 percent of the fund’s fiscal year 2006 commitment authority. Similarly, the Section 232 program represents a little less than 16 percent, or a little more than $12.5 billion, of the nearly $80 billion in unpaid principal balance in the GI/SRI Fund (see fig. 1). Despite its small size, a significant worsening in the performance of the Section 232 program could negatively affect the performance of the GI/SRI Fund. The extent, though, of the impact on the overall performance of the GI/SRI Fund would depend upon numerous factors, including changes in the size and performance of the other programs in the fund. 
As discussed below, several trends exist within the Section 232 program that pose potential risks to the Section 232 portfolio and, therefore, to the GI/SRI Fund. To identify potential trends in loan performance, we analyzed 5- and 10-year claim rates for Section 232 loans based on data that spanned from fiscal year 1960 through the end of fiscal year 2005, for the entire portfolio, as well as by type of loan purpose and type of insured facility. The analysis of the entire portfolio showed that the 10-year claim rates for more recent loan cohorts (loans originated between 1987 and 1991 and loans originated between 1992 and 1996) ranked among the highest historical cohort claim rates (see fig. 2). The 5-year claim rate for loans originated between 1997 and 2001 also ranked among the highest historical cohort claim rates. A continued increase in claim rates could have a negative effect on the performance of the GI/SRI Fund. Section 232 loans can have a loan purpose in one of two categories—new construction/substantial rehabilitation loans or refinance/purchase loans. New construction loans involve the construction of a new residential care facility. Substantial rehabilitation loans meet HUD criteria for substantial rehabilitation of a residential care facility, such as two or more building components being substantially replaced. Purchase loans are loans in which the borrower is acquiring an existing residential care facility, while refinance loans refinance an existing HUD-insured loan or a loan not previously insured by HUD. As described earlier in the report, HUD began to allow for the refinancing of FHA-insured facilities and non-FHA insured facilities in 1987 and 1994, respectively. 
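The cohort claim-rate measure described above can be sketched as follows. The loan records in this example are fabricated for illustration only; the actual analysis used FHA loan-level data spanning fiscal years 1960 through 2005.

```python
# A sketch of the cohort claim-rate calculation. The loan records below are
# fabricated for illustration; the actual analysis used FHA loan-level data
# spanning fiscal years 1960 through 2005.

def cohort_claim_rate(loans, cohort_start, cohort_end, horizon_years):
    """Share of loans originated in [cohort_start, cohort_end] that resulted
    in an insurance claim within horizon_years of origination."""
    cohort = [loan for loan in loans
              if cohort_start <= loan["origination_year"] <= cohort_end]
    claims = [loan for loan in cohort
              if loan["claim_year"] is not None
              and loan["claim_year"] - loan["origination_year"] <= horizon_years]
    return len(claims) / len(cohort) if cohort else 0.0

loans = [
    {"origination_year": 1997, "claim_year": 2001},  # claim within 5 years
    {"origination_year": 1998, "claim_year": None},  # still performing
    {"origination_year": 1999, "claim_year": 2008},  # claim, but after 5 years
    {"origination_year": 2000, "claim_year": None},  # still performing
]
rate = cohort_claim_rate(loans, 1997, 2001, horizon_years=5)
print(f"5-year claim rate for the 1997-2001 cohort: {rate:.0%}")  # 25%
```

Note that a claim occurring after the horizon (the 1999 loan here) does not count toward the 5-year rate, which is why 5- and 10-year rates for the same cohort can differ.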
When analyzing Section 232 loan data by loan purpose, we found that new construction/substantial rehabilitation loans have a higher 5-year claim rate than refinance/purchase loans for the most recent cohort for which data are available (see fig. 3). New construction/substantial rehabilitation loans originated between 1997 and 2001 also have the highest historical 5-year cohort claim rate for this type of loan. Because of the higher claim rates in recent years, continued monitoring will be important. In contrast, the number of refinance and purchase loans endorsed in the last 5 years is more than double those endorsed in the previous 5 years. The future impact of the refinance and purchase loans on the overall performance of the Section 232 program is uncertain since they have existed for a shorter period of time, and thus there are currently limited data available to assess the relative risk of claims. As discussed earlier in the report, HUD insures different types of residential care facilities that include nursing homes, intermediate care facilities, assisted living facilities, and board and care facilities. Assisted living facilities are relatively new to the portfolio, and the number of these loans has been increasing. Our analysis of Section 232 loan data by facility type found that board and care facilities had a slightly higher 10-year claim rate than nursing home facilities in the most recent cohorts; however, these loans remain a very small percentage of the active portfolio and are being made in decreasing numbers. There are limited data to observe claim trends on assisted living facilities, making their risk difficult to assess, but the 5-year claim rates for assisted living facilities have increased significantly in the most recent cohort years for which claim rate data are available (see fig. 4). A continued high claim rate in assisted living facilities could negatively affect the performance of the Section 232 program and the GI/SRI Fund. 
However, lenders and HUD officials told us that, although assisted living facilities had high claim rates in the past, they believe the market has stabilized and lessons have been learned. Another observable trend is the increase in the portion of loans in each cohort that is prepaid. (Prepayment occurs when a borrower pays a loan in full before the loan reaches maturity.) There have been 1,688 prepayments in the Section 232 program from 1960 through the end of fiscal year 2005, and loans that terminate do so overwhelmingly because of prepayment. Moreover, the proportion of loans that terminate due to prepayment within 10 years of origination is increasing. Specifically, the 10-year prepayment rates for the three most recent cohorts for which 10-year claim rates are available are more than double those of some earlier cohorts. As more borrowers prepay their loans, HUD loses future cash flows from premiums; thus, higher prepayment rates will likely decrease the net present value of cash flows. However, the decrease could be offset to the extent that higher prepayment rates result in fewer claims (a prepaid loan cannot result in a claim). Market concentration also poses some risks to the GI/SRI Fund. The Section 232 program is concentrated in several large markets and in loans made by relatively few lenders. As of 2005, five states (California, Illinois, Massachusetts, New York, and Ohio) held 51 percent of active Section 232 loan dollars and 38 percent of active loan properties (see fig. 5). New York holds close to 24 percent of the active loan dollars in the portfolio. This is an improvement since 1995, when we found that eight states accounted for 70 percent of the portfolio and New York accounted for 32 percent. However, the current market concentration could still pose risk to the portfolio if a sudden market change took place in one or more of the states with a larger percentage of the insured Section 232 loans.
We also found significant loan concentration among a small group of lenders. While a total of 109 lenders hold active loans, 6 hold over half of the active loan portfolio. GMAC Commercial Mortgage Corporation holds more than 17 percent of all active mortgages in the Section 232 program, the single largest share of any lender. This concentration potentially makes the program more vulnerable if one or a few large lenders encounter financial difficulty. The Section 232 program may also face risks from trends in the residential care industry at large, including uncertainty about sources of revenue and occupancy. Nursing home revenue is generated in large part from the Medicare and Medicaid programs, which make up 58 percent of national nursing home spending. Private lenders we interviewed that offer non-FHA-insured residential care facility loans explained that one of the primary reasons their loans have shorter terms than HUD’s is their perception of potential long-term uncertainty in the funding of the Medicaid and Medicare programs, which generally account for a large share of patient payments in nursing homes. We and others have reported that Medicare and Medicaid spending may not be sustainable at current levels. In our 2003 report on the impact of fiscal pressures on state reimbursement rates, however, we found that even in states that recently faced fiscal pressures, reimbursement rates remained largely unaffected. At that time, we concluded that any future changes to state reimbursement rates remained uncertain. If cuts occur in federal spending on Medicaid that shift costs from the federal government to state governments, states could contain costs by taking a number of steps, including freezing or reducing reimbursement rates to providers.
An ongoing tension exists, however, between what federal and state governments and the nursing home industry believe to be reasonable Medicare and Medicaid reimbursement rates for operating efficient and economic facilities that provide quality care to public beneficiaries. As the federal and state governments face growing long-term financial pressure on their budgets, these budgetary pressures may have some spillover effects on Medicare and Medicaid revenue streams for the nursing home industry. Uncertainty also exists about the future demand for residential care facilities and the corresponding effects on occupancy. As the number of Americans aged 65 and older increases at a rapid pace, lenders we interviewed projected an increased need for residential care facilities. Industry officials also noted a rise in alternatives to nursing home care, such as assisted living facilities and home and community-based care options. As patients choose alternative care options, traditional nursing homes may face occupancy challenges. Overall, these changes to nursing homes’ patient base may lower occupancy and income levels for nursing homes, including those in the Section 232 portfolio. However, these changes may positively affect the occupancy and income levels of other types of residential care facilities, including those in the Section 232 portfolio. As described elsewhere in this report, FHA uses a number of tools to mitigate risks to the program and to the GI/SRI Fund. These tools include imposing requirements prior to insuring loans to help prevent riskier loans from entering the Section 232 portfolio. FHA also uses various tools—such as reports on physical inspections of facilities, and financial and other information captured in data systems—to monitor the status of insured facilities and the performance of their loans.
Additionally, FHA officials use quality control reviews to mitigate risk for the program as a whole through two processes: Quality Management Reviews and Lender Qualifications and Monitoring Division reviews (the latter reviews are described in app. II). HUD’s model for estimating annual credit subsidies—which incorporates assessments of various risks that loan cohorts will face and includes assumptions consistent with Office of Management and Budget (OMB) guidance—does not explicitly consider the impacts of some potentially important factors. These factors include variables to capture the impact of prepayment penalties or restrictions on prepayments, the loan-to-value and debt service coverage ratios of Section 232 properties at the time of loan origination, and differences between types of residential care facilities. Further, the model does not fully capture the effects of changes in market interest rates on existing loans, and it uses proxy data that are not comparable to the loans in the Section 232 program. As a result, HUD’s model for estimating the program’s credit subsidy may result in over- or underestimation of costs. Federal law requires HUD to estimate a credit subsidy for its loan guarantees. The credit subsidy cost is the estimated long-term cost to the government of a loan guarantee, calculated on a net present value basis and excluding administrative costs. HUD estimates a credit subsidy for each loan cohort. This estimate reflects HUD’s assessment of various risks, based in part on the performance of loans already insured. Since 2000, HUD has annually estimated two credit subsidy rates for the Section 232 program, reflecting its two largest risk categories: loans for new construction and substantial rehabilitation, and loans for refinance and purchase. HUD uses an identical methodology for each estimate.
To estimate the initial subsidy cost of the Section 232 program, HUD uses a cash flow model to project the cash flows for all identified loans over their expected life. The cash flow model incorporates regression models and uses assumptions based on historical and projected data to estimate the amount and timing of claims, subsequent recoveries from these claims, prepayments, and premiums and fees paid by the borrower. The regression models incorporate various economic variables, such as changes in GDP, the unemployment rate, and 10-year bond rates. The model also breaks claim and prepayment data out into new construction and refinance loans, since these loans are expected to perform differently. HUD inputs its estimated cash flows into OMB’s credit subsidy calculator, which calculates the present value of the cash flows and produces the official credit subsidy rate. A positive credit subsidy rate means that the present value of cash outflows is greater than inflows, and a negative credit subsidy rate means that the present value of cash inflows is greater than cash outflows. For the Section 232 program, cash inflows include premiums and fees, servicing and repayment income from notes held in inventory, rental income from properties held in inventory, and sale income from notes and properties sold from inventory. Cash outflows include claim payments and expenses related to properties held in inventory. Since HUD began estimating the initial subsidy cost of the Section 232 program, it has estimated that the present value of cash inflows would exceed the outflows. As a result, the initial credit subsidy rates for the Section 232 program have been negative. However, estimates from more recent years showed that the negative subsidy rates on new construction and substantial rehabilitation loans have generally been shrinking, meaning that the projected difference between the program’s cash inflows and cash outflows was decreasing.
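The sign convention for the credit subsidy rate can be illustrated with a simplified present-value calculation. This sketch stands in for OMB’s credit subsidy calculator rather than reproducing it; the cash flows, discount rate, and disbursement amount are hypothetical.

```python
def credit_subsidy_rate(inflows, outflows, discount_rate, disbursement):
    """Net present value of (outflows - inflows), expressed as a share of
    the amount guaranteed. A positive rate means the PV of outflows
    (e.g., claim payments) exceeds the PV of inflows (e.g., premiums and
    fees); a negative rate means inflows exceed outflows."""
    pv = sum((out - inn) / (1 + discount_rate) ** t
             for t, (inn, out) in enumerate(zip(inflows, outflows), start=1))
    return pv / disbursement

# Hypothetical cohort, per year: premiums/fees coming in, claim payments going out
inflows = [120, 110, 100, 90, 80]
outflows = [10, 30, 60, 50, 40]
rate = credit_subsidy_rate(inflows, outflows, discount_rate=0.05, disbursement=10_000)
# Here inflows exceed outflows in every year, so the rate is negative,
# as it has been historically for the Section 232 program.
```

Shrinking negative rates, as described above, would correspond to the inflow and outflow streams moving closer together in present-value terms.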
In HUD’s most recent estimate (for the fiscal year 2007 cohort), the estimated cash inflows exceed the estimated cash outflows by a considerably greater margin than in any previous year’s estimate. This may reflect increased premiums for Section 232 loans; the President’s proposed budget for fiscal year 2007 specifies increases in mortgage insurance premiums for almost all FHA programs, including increasing the rate for Section 232 refinance and new construction loans to 80 basis points from 57 basis points. Figure 6 shows changes in the initial estimated credit subsidy rate over time for both loan categories. HUD’s model for estimating credit subsidy rates incorporates numerous variables, but the model’s exclusion of potentially relevant factors and its use of proxy data from another FHA loan program may negatively affect the quality of the estimates. Including additional information could enhance the model’s predictive value. According to some economic studies, prepayment penalties, or penalties associated with the payment of a loan before its maturity date, can significantly affect borrowers’ prepayment patterns. This is also important for claims, since a loan that is prepaid can no longer go to claim. HUD’s model does not explicitly consider the potential impact of prepayment penalties or restrictions, even though they can influence the timing of prepayments and claims and the collection of premiums. According to FHA officials, FHA does not place prepayment penalties on FHA-insured nursing home loans. However, under the Section 232 program’s regulations, a lender can impose a prepayment penalty charge and place a prepayment restriction on the mortgage’s term, amount, and conditions. We reviewed a sample of Section 232 loans and found that prepayment penalties and restrictions were consistently applied to these loans.
According to FHA officials and mortgage bankers, Section 232 loans typically carry 2 to 10 years of prepayment restrictions and 2 to 8 years of prepayment penalties. While FHA does not specifically maintain data on insured residential care facility financing terms, prepayment restrictions are specified on the mortgage note, which is available to FHA. Incorporating such data into the Section 232 program’s credit subsidy rate model could refine HUD’s credit subsidy estimate by enhancing the model’s ability to account for estimated changes in cash flows resulting from prepayment restrictions. According to HUD officials responsible for HUD’s cash flow model, prepayment penalties and restrictions are not incorporated into the model because HUD does not collect such data. HUD officials added that even though the cash flow model does not explicitly account for prepayment penalties and restrictions, its use of historical data implicitly captures trends that may occur as a result of them. The model’s projections are influenced by the average level of prepayment protection in the historical data but not by the trend. If prepayment penalties and other restrictions have changed over time in the past, or change in the future, then not incorporating this information could lead to less reliable estimates. Initial debt service coverage ratios are another important factor that may affect cash flows, as loans with lower initial debt service coverage ratios may be more likely to default and result in a claim payment. HUD’s cash flow model does not consider the initial debt service coverage ratio of Section 232 loans at the point of loan origination. By initial debt service coverage ratio, we are referring to the projected debt service coverage ratio that is considered during loan underwriting.
According to the HUD official responsible for HUD’s cash flow model, the initial debt service coverage ratio of a residential care facility is not included as part of the cash flow model because it (1) is not a cash flow, (2) does not vary, and (3) has no predictive value. We agree that a debt service coverage ratio is not a cash flow. However, initial debt service coverage ratios potentially affect relevant cash flows, as do other factors that are included in HUD’s model but are also not cash flows to HUD, such as prepayments. For example, the model considers estimated prepayments because they potentially affect future cash inflows from fees and future cash outflows from claim payments. Our analysis of available projected debt service coverage ratios, which include the amount of new debt being insured, shows that these ratios varied from 1.1 to 3.6. All other factors being equal, loans with a debt service coverage ratio of 3.6 are generally considered to have less risk than loans with a ratio of only 1.1. Economic theory suggests that the debt service coverage ratio is an important factor in commercial mortgage defaults. However, empirical studies show mixed results regarding the significance of the impact of debt service coverage ratios on commercial mortgage defaults. Some studies indicate that debt service coverage ratios are meaningful factors in modeling default risk and are helpful in predicting commercial mortgage terminations. Other studies find initial debt service coverage ratios to be statistically insignificant in modeling commercial mortgage defaults. These mixed results may be the consequence of relatively small sample sizes and model specification issues. Initial loan-to-value ratios are another important factor that may affect cash flows, as loans with higher initial loan-to-value ratios may be more likely to default and result in a claim payment.
By initial loan-to-value ratio, we are referring to the projected loan-to-value ratio that is considered during loan underwriting. HUD’s cash flow model also does not consider the initial loan-to-value ratio of Section 232 loans at the point of loan origination. According to the HUD official responsible for HUD’s cash flow model, the initial loan-to-value ratio of a Section 232 property is not included as part of the cash flow model because it does not vary and has no predictive value. However, our analysis of available projected loan-to-value ratios, which include the amount of new debt being insured, shows that these ratios varied from 66 percent to 95 percent. All other factors being equal, loans with a loan-to-value ratio of 66 percent are generally considered to have less risk than loans with a 95 percent loan-to-value ratio. While economic theory suggests that the loan-to-value ratio is an important factor in commercial mortgage defaults, empirical studies show mixed results regarding its significance. Some studies indicate that loan-to-value ratios are meaningful factors in modeling default risk and are helpful in predicting commercial mortgage terminations. Other studies find initial loan-to-value ratios to be statistically insignificant in modeling commercial mortgage defaults. These mixed results may be the consequence of relatively small sample sizes and model specification issues. The model’s ability to reliably forecast claim rates may also be enhanced by incorporating a variable indicating facility type into the regression analysis. HUD’s cash flow model does not explicitly consider differences in loan performance between types of facilities, such as nursing homes, assisted living facilities, and board and care facilities.
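The two underwriting ratios discussed above are simple quotients of figures projected at underwriting. A sketch with hypothetical facility figures, chosen to land near the riskier end of the ranges we observed:

```python
def debt_service_coverage_ratio(net_operating_income, annual_debt_service):
    """Projected net operating income divided by required annual debt
    payments; a higher ratio means more cushion to absorb revenue shocks."""
    return net_operating_income / annual_debt_service

def loan_to_value_ratio(loan_amount, appraised_value):
    """Loan amount as a share of the property's appraised value; a higher
    ratio means less borrower equity protecting the insurer."""
    return loan_amount / appraised_value

# Hypothetical facility near the riskier end of the observed ranges
dscr = debt_service_coverage_ratio(net_operating_income=550_000,
                                   annual_debt_service=500_000)
ltv = loan_to_value_ratio(loan_amount=9_500_000, appraised_value=10_000_000)
# dscr is 1.1 and ltv is 0.95, matching the riskiest values in the
# 1.1-to-3.6 and 66-to-95-percent ranges discussed in the text.
```

The point of the sketch is that both ratios are fixed, computable underwriting inputs, which is why they could be recorded electronically and tested as explanatory variables in a claim regression.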
However, our analysis of the most recent cohorts for which 5-year claim rates are available found the 5-year claim rates for assisted living facilities to be significantly higher than those for nursing homes (6.7 percent for nursing homes versus 13.6 percent for assisted living facilities). In addition, we found that HUD’s cash flow model generally incorporates the interest rate on the individual loans (the contract rate) and the prevailing market interest rate (captured by the 10-year bond rate) as separate variables. Economic theory suggests that, when modeling mortgage terminations, considering these two variables jointly as a single variable in the form of a ratio is the best way to capture the effects on existing loans when market interest rates change. For example, if market rates fall below the contract rate on existing Section 232 loans, then it may become more attractive for borrowers to prepay. However, if market rates fall but remain above the contract rates, then it may not become more attractive for borrowers to prepay. Using a ratio captures the distinction between these two examples because it compares the borrower’s cost under the existing contract rate with the cost of a new mortgage at the market interest rate. By generally considering the contract rate and market interest rate separately, HUD potentially loses the ability to capture this distinction and to predict large responses when market rates fall and small responses when market rates rise. HUD’s use of Section 207 loans as a proxy for Section 232 refinance loans could lead to less reliable credit subsidy estimates for the Section 232 program. HUD uses certain Section 207 loans—refinance loans for existing multifamily housing properties—as proxy data for the claim regression for Section 232 refinance loans. The Section 207 loans are not residential health care facility loans.
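The distinction the interest-rate ratio captures can be illustrated with hypothetical rates; this is not HUD's model, only the ratio form that economic theory suggests.

```python
def rate_ratio(contract_rate, market_rate):
    """Contract rate relative to the prevailing market rate. A ratio well
    above 1 means refinancing at the market rate is attractive to the
    borrower; a ratio at or below 1 means it is not."""
    return contract_rate / market_rate

contract = 0.070  # hypothetical contract rate on an existing loan

# Market rates fall below the contract rate: strong prepayment incentive
incentive = rate_ratio(contract, market_rate=0.050)     # ratio above 1

# Market rates fall but remain above the contract rate: no incentive,
# even though rates moved in the same direction as in the first case
no_incentive = rate_ratio(contract, market_rate=0.075)  # ratio below 1
```

In both cases market rates fell, but only the first crosses below the contract rate; the ratio crossing 1 marks the point at which prepayment becomes attractive, which two separate rate variables do not directly encode.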
According to HUD officials, HUD uses the Section 207 loans because there are insufficient data on Section 232 refinance loans. A HUD official told us that Section 207 loans were selected as proxy data because they are refinance loans and because they have performance similar to that of the Section 232 refinance loans, as indicated by the cumulative claim rates they calculated. Consideration of the basis for using proxy data is important. When using the experience of another agency or a private lender as a proxy, the Federal Accounting Standards Advisory Board (FASAB) suggests that an agency explain why this experience is applicable to the agency’s credit program and examine possible biases for which an adjustment is needed, such as different borrower characteristics. HUD could reasonably be expected to follow the FASAB guidance when using data from a different program at HUD. HUD told us that it did not compare borrower characteristics for Section 207 loans and Section 232 loans. A HUD official agreed that borrowers of Section 207 loans would not be expected to have characteristics similar to those of borrowers of Section 232 loans. HUD analyzed the comparability of Section 207 and Section 232 refinance loans using cumulative claim rate analysis, but we question the methodology the agency used to make this comparison. Additionally, we compared the refinance loans for each of the programs by calculating conditional claim and prepayment rates as well as 5-year cumulative claim and prepayment rates, and we found significant differences between the programs (see app. IV for a further description of HUD’s methodology and our comparison of the two programs). We question HUD’s use of Section 207 loans as a proxy for Section 232 loans, given the differences we observed. We cannot fully estimate the overall impact on the credit subsidy estimate, and the effects of the claim and prepayment rates could partially offset each other.
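The conditional rates used in our comparison divide the terminations in each policy year by the loans still active at the start of that year, rather than by the original cohort size. A sketch with hypothetical survival and termination counts:

```python
def conditional_rates(active_at_start, claims, prepayments):
    """Per-year conditional claim and prepayment rates: terminations during
    policy year t divided by the loans surviving to the start of year t.
    All three arguments are lists indexed by policy year."""
    claim_rates, prepay_rates = [], []
    for alive, c, p in zip(active_at_start, claims, prepayments):
        claim_rates.append(c / alive)
        prepay_rates.append(p / alive)
    return claim_rates, prepay_rates

# Hypothetical cohort of 1,000 loans tracked over three policy years
active = [1000, 960, 900]   # survivors at the start of each year
claims = [10, 12, 15]
prepays = [30, 48, 45]
c_rates, p_rates = conditional_rates(active, claims, prepays)
```

Because the denominator shrinks as loans terminate, conditional rates can diverge between two programs even when their cumulative rates look similar, which is one reason we questioned a comparison based on cumulative claim rates alone.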
The higher prepayment rates for Section 207 loans could lead to HUD underestimating future revenues for Section 232 loans (HUD would project that many of these loans would terminate, although they would actually remain active and pay premium revenue to HUD.) The lower claim rates on Section 207 loans could result in HUD estimating that fewer of its Section 232 loans would result in a claim and thus lead it to underestimate future costs. In the future, more data will be available on the actual performance of Section 232 refinance loans that can be used in estimating credit subsidy needs. To avoid using questionable proxy data in the interim, one possible approach, among others, would be to use a simpler estimation method, such as using average claim and prepayment rates over time as is done in estimating credit subsidy rates for the Section 242 Hospital Mortgage Insurance program. The Section 232 program is the only source of mortgage insurance for residential care facilities. Accordingly, it is important to ensure good program and risk management practices. While some field offices we visited had adopted practices to better manage risks of their Section 232 loans, varying awareness of program requirements and insufficient levels of staff expertise contribute to increased financial risk in the Section 232 program loan portfolio and thus the GI/SRI Fund. HUD has numerous underwriting and monitoring guidelines and policies to manage the risks of Section 232 loans. However, to the extent that field office staff do not accurately implement current underwriting and monitoring guidelines and policies, they potentially allow loans with unwarranted risks to enter the portfolio and may miss opportunities to identify problems with already- insured loans early enough to help prevent claims. 
Revising the “Multifamily Asset Management and Project Servicing Handbook” to include monitoring requirements specific to the Section 232 program, as the Office of Inspector General noted in its 2002 report, would help in this regard. So too would sharing additional practices that some field offices have undertaken to better manage risks in their program loans, such as involving asset management staff in the underwriting process. Moreover, adequately training staff to develop expertise on residential care loans and the industry could help assure proper underwriting and oversight of Section 232 loans, which tend to be more complex than those in other HUD multifamily programs. Field office officials’ concerns about their existing levels of staff expertise heighten the need for appropriate guidance and additional training specific to the Section 232 program, while the potential loss of specialized staff within the next 5 years underscores the need for HUD, in the context of its strategic human capital efforts, to assure adequate program expertise in the future. Although HUD recommends that field offices obtain and review annual inspection reports for licensed facilities insured by the program, four of the five offices we visited did not do so. By not routinely using the results of annual inspection reports on insured facilities subject to such inspections, in combination with other performance indicators, HUD may be missing important indicators of problems that could result in claims that might otherwise have been prevented. Reviewing inspection reports is also a means of obtaining relevant information about insured facilities that have not been the subject of FHA management reviews. HUD’s long-proposed revisions to its residential care facility regulatory agreement recognize the potential usefulness of information on state-administered inspections by requiring that owners or operators report inspection violations and supply HUD with copies of annual inspection reports.
The proposed revisions would also address a number of the internal control weaknesses identified in the HUD Inspector General’s 2002 report, but it remains unclear when the proposed revisions will be approved, leaving the program exposed to identified weaknesses in the interim. While the Section 232 program represents a relatively small portion of the GI/SRI Fund, it faces risks that could affect the performance of the loan portfolio and the fund. HUD uses a number of tools to mitigate risks, and it will be important to continue monitoring program trends and industry developments. Recent increases in the numbers of assisted living facility loans and refinance loans are a source of uncertainty, in that there are as yet limited data with which to assess their long-term performance. Similarly, industry trends and the availability of future Medicaid and Medicare funds are sources of uncertainty and heighten the need for HUD to have sufficient staff expertise with which to monitor future developments that could affect the program and ultimately the GI/SRI Fund. HUD’s model for estimating the program’s credit subsidy incorporates assessments of various risks that loan cohorts face, but it does not explicitly consider certain factors that could result in over- or underestimation of costs. These factors include prepayment penalties, lockout provisions, facility type, loan-to-value ratio, the debt service coverage ratio of loans at commitment, and the ratio of contract rates to market rates, which some economic studies suggest are potentially useful in modeling risks. Including such factors could enhance the credit subsidy estimates and provide HUD and the Congress with better cost data with which to assess the program. Additionally, HUD’s use of Section 207 refinance loans, which we do not find to be a good proxy for Section 232 refinance loans, could specifically contribute to over- or underestimation of the credit subsidy for the refinance loans in the program.
To ensure that field offices are aware of and implement current requirements and policies for the Section 232 Mortgage Insurance for Residential Care Facilities program, and to reduce risk to the GI/SRI Fund, we recommend that the Secretary of Housing and Urban Development direct the FHA Commissioner to take the following actions:

Revise the “Multifamily Asset Management and Project Servicing Handbook” in a timely manner to include monitoring requirements specific to Section 232 properties;

Establish a process for systematically sharing loan underwriting and monitoring practices among field offices involved with the Section 232 program;

Assure, as part of the department’s strategic human capital management efforts, sufficient levels of staff with appropriate training and expertise for Section 232 loans;

Incorporate a review of annual inspection reports for insured Section 232 facilities that are subject to federal or state inspections, even in the absence of a revised regulatory agreement; and

Complete and implement the revised regulatory agreements in a timely manner.

To potentially improve HUD’s estimates of the program’s annual credit subsidy, we recommend that the Secretary of Housing and Urban Development explore the value of explicitly factoring additional information into its credit subsidy model, such as prepayment penalties and restrictions, debt service coverage and loan-to-value ratios of facilities as they enter the program, facility type, and the ratio of contract rates to market rates. We also recommend that the Secretary of Housing and Urban Development specifically explore other means of modeling the performance of Section 232 refinance loans. We provided a draft of this report to HUD for review and comment.
In written comments from HUD’s Assistant Secretary for Housing-Federal Housing Commissioner, HUD generally concurred with our recommendations intended to ensure that field offices are aware of and implement current program requirements and policies. However, the agency disagreed with most parts of our recommendation related to HUD’s credit subsidy model. The Assistant Secretary’s letter appears in appendix V. HUD stated that it has initiated a full review of the Section 232 program and that GAO’s recommendations related to ensuring that field offices are aware of and implement current requirements are being incorporated into plans for revising the program. More specifically, HUD stated that it will draft and implement changes to the program handbook; will initiate staff training and assure that staff are adequately trained in underwriting and servicing policies; and plans to prepare a report addressing state and federal inspections, among other things, to enhance FHA participation in and oversight of insured health care mortgages. HUD also provided a timeline for completing and implementing the revised regulatory agreements. Concerning our recommendation that HUD explore the value of explicitly factoring additional information into its credit subsidy model, HUD stated that it agreed to take into account differences among types of residential care facilities in its modeling when it has sufficient historical data and if the data indicate that loan performance varies sufficiently by type of facility. However, HUD disagreed with considering other factors we suggested, as follows: Initial loan-to-value and debt service coverage ratios. HUD stated that (1) studies we cited in our draft report found these ratios to be statistically insignificant in predicting commercial mortgage defaults and (2) data are unavailable for this analysis.
We agree, as our draft report stated, that economic studies have shown mixed results regarding the significance of the impact of loan-to-value and debt service coverage ratios on commercial mortgage defaults, with some studies finding them to be significant predictors and others finding them to be insignificant predictors. We further stated that these mixed results may be the result of small sample sizes and model specification issues. Nevertheless, we continue to believe that HUD should explore the value of factoring initial loan-to-value and debt service coverage ratios into its credit subsidy model, and we did not change our recommendation. Regarding the second point, HUD has the data for analyzing loan-to-value and debt service coverage ratios in individual loan files and could include these data in its credit subsidy modeling by creating an electronic record of this information either for its entire portfolio or for a sample of the portfolio. Consequently, we did not change the recommendation. Factors potentially affecting prepayments. HUD disagreed with our suggestion that its credit subsidy model does not fully capture the effects of prepayment penalties, stating that its use of historical data captures the effect of prepayment penalties on project owners’ behavior. However, as we stated in the draft report, HUD’s use of historical data would not fully capture trends related to changes in prepayments. HUD also stated that it has tested using the difference between mortgage interest rates and the 10-year Treasury bond rates in its modeling of prepayments. However, our recommendation was to consider a ratio of these two interest rates, not the difference. As we noted in our report, economic theory suggests that the use of a ratio is the best way to capture the effects on existing loans when market interest rates change. Consequently, we did not change the recommendation. Use of Section 207 loans as proxy data for refinance loans.
HUD stated that it did not believe that the differences between Section 207 and Section 232 loans that our report noted justify concerns that residential care refinance loans are being improperly modeled and noted a lack of available data. We agree that sufficient relevant data on Section 232 refinance loan performance do not yet exist, but we continue to question the use of Section 207 loan data as a proxy. While we did not change the recommendation, we added language to our report suggesting that, until enough Section 232 refinance loan data are available, one possible approach, among others, would be to use a simpler estimation method, such as using average claim and prepayment rates over time, as is done in estimating credit subsidy rates for the Section 242 Hospital Mortgage Insurance program. We are sending copies of this report to the Secretary of the Department of Housing and Urban Development (HUD). We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at 202-512-8678 or woodd@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.
Our objectives were to examine (1) the Department of Housing and Urban Development’s (HUD) overall management of the program, including loan underwriting and monitoring; (2) the extent to which HUD’s oversight of insured health care facilities is coordinated with the states’ oversight of the quality of care provided by facilities; (3) the financial implications of the program for the General Insurance/Special Risk Insurance (GI/SRI) Fund, including risk posed by program and market trends; and (4) how HUD estimates the annual credit subsidy for the program, including the factors and assumptions used. In addition, we examined HUD’s actions in response to a HUD Inspector General report that concluded that HUD’s Office of Housing did not have adequate controls to effectively manage the Section 232 program; this information is summarized in appendix III. To examine HUD’s overall management of the Section 232 program, we obtained and reviewed program manuals, guidance, and documentation, including the “MAP Guide,” HUD’s Section 232 “Mortgage Insurance for Residential Care Facilities Handbook,” and HUD’s “Multifamily Asset Management and Project Servicing Handbook,” for loan processing procedures, underwriting policies and requirements, and oversight policies and requirements. We also interviewed HUD officials at HUD headquarters who are responsible for providing guidance and policies on loan underwriting and oversight, as well as three private lenders that offered FHA-insured Section 232 loans. In addition, we conducted site visits to five HUD field offices (Atlanta, Georgia; Buffalo, New York; Chicago, Illinois; Los Angeles, California; and San Francisco, California) and conducted interviews with HUD officials, including the Hub or acting Hub director, appraisers, mortgage credit analysts, and project managers who are responsible for Section 232 loan applications, underwriting, and oversight, as well as other Federal Housing Administration (FHA) programs.
We gathered relevant program documentation from each site visit. We also interviewed an official from one of HUD’s Multifamily Property Disposition Centers during our site visit to Atlanta. To capture a variety of Section 232 loan activity, we selected five HUD field offices on the basis of (1) the volume of Section 232 loans the field office had processed during fiscal year 2004 up to September 2005; (2) the dollar amount of Section 232 loans processed in the field office during fiscal year 2004 up to September 2005; (3) the timeliness of processing Section 232 loans during the last 2 years; (4) historical claim-rate data for the field office—that is, the rate at which Section 232 loans processed by the field office have gone to claim; (5) HUD’s suggestions for field office site visits; and (6) geographical dispersion. To better understand how private lenders that do not participate in the Section 232 program manage risks, we interviewed five private lenders that offered non-FHA-insured loans to residential health care facilities. We also interviewed representatives of three residential care facilities with FHA-insured Section 232 loans to better understand the borrowers’ perspective on the Section 232 program. To examine the extent to which HUD coordinated with states’ oversight of the quality of care provided by facilities, we reviewed FHA requirements for conducting management reviews and reviewing annual inspection reports. We also interviewed officials in FHA’s Office of Multifamily Development and Office of Asset Management and field office officials about policies for coordination between FHA and state residential care oversight and rate-setting agencies, as well as policies for review of annual inspection reports. In addition, we interviewed private lenders of FHA-insured and non-FHA-insured residential care facilities to better understand common industry practices for coordination between lenders and state residential care oversight and rate-setting agencies.
To examine the financial risks that the program poses to the GI/SRI Fund, we interviewed and obtained documentation from HUD’s Office of Evaluation and analyzed HUD data on program portfolio characteristics, including number of loans by cohort, current insurance in force, geographic and lender concentration of loans, and claims. We also analyzed the HUD data used for its refinance credit subsidy regression model. Specifically: To obtain the number of active and terminated loans and claim rate history, we analyzed data from extracts of HUD’s F47 database, a multifamily database. We obtained extracts from HUD in May 2005, September 2005, and February 2006. Unless otherwise indicated, all analyses of the F47 data in this report utilized the May 2005 extract with subsequent updates from the other extracts and were current as of the end of fiscal year 2005. To assess the reliability of the F47 database extract, we reviewed relevant documentation, interviewed agency officials who worked with the database, and conducted manual data testing, including comparison to published data. Because of the small number of loans endorsed in individual fiscal years, we conducted analyses of cohorts that were created by combining data from 5 to 6 fiscal years. For claim rate analyses, we analyzed 5- and 10-year claim rates for the data based on the date of loan termination. Our analyses found 13 loans for which the facility type could not be determined from the extract. FHA administrators were able to determine the facility type for all but one of these loans using the Development Application Processing (DAP) system. This one terminated loan was excluded from facility type endorsement and claim rate analysis and, therefore, had little impact on this report. We also determined that final endorsement date information was missing from 799 records.
Our analyses used only initial endorsement dates, which were available for every record; therefore, there was no impact on this report. We also determined that there were nine loans for which the facility type information was incorrect based on the endorsement date. FHA administrators checked in the DAP system and confirmed the correct facility type for these loans; therefore, there was no impact on the report. We determined the data to be sufficiently reliable for analysis of the number of active and terminated loans, as well as claim rates. To determine the proportion of the Section 232 Mortgage Insurance program’s commitment authority to the larger GI/SRI Fund’s commitment authority, we reviewed HUD’s fiscal year 2006 budget. To determine the proportion of the Section 232 Mortgage Insurance program’s unpaid principal balance to the larger GI/SRI Fund’s unpaid principal balance, we obtained the GI/SRI Fund’s unpaid principal balance as of December 31, 2005, from HUD’s Office of Evaluation. We also analyzed data from HUD’s Multifamily Data Web site, which is extracted from HUD’s F47 database, to determine the unpaid principal balance of Nursing Home Mortgage Insurance program loans as of December 31, 2005. To determine the geographic concentration of loan properties in the program, we analyzed data current as of the end of fiscal year 2005 from our extract of HUD’s F47 database. Our analysis determined that property state data were missing for 270 project numbers. FHA administrators informed us that loans endorsed more than 20 years ago, before electronic records were maintained, may be missing data that cannot be recovered. Our analyses of geographic concentration of loan properties utilized only one record with missing property state data; therefore, there was little impact on our findings. We determined the data to be sufficiently reliable for analysis of geographic loan concentration.
To determine the geographic concentration of loan dollars in the program, we analyzed data current as of December 31, 2005, from HUD’s Multifamily Housing Data Web site. To determine prepayment history in the program, we analyzed data from our F47 extract, current as of the end of fiscal year 2005. We also analyzed 5- and 10-year prepayment rates for the data based on the date of loan termination. To determine the appropriateness of using Section 207 refinance loans as proxy data in the Section 232 refinance loan credit subsidy regression model, we analyzed data from several extracts from HUD’s Office of Evaluation. The extracts contained the loan data used by HUD to calculate cumulative claim rates for Section 232 and 207 refinance loans endorsed from fiscal year 1992 through fiscal year 2005. The extracts did not include termination codes for all terminated loans. We determined termination code data for these loans from HUD data current as of December 31, 2005, from HUD’s Multifamily Housing Data Web site. We also combined the extracts to include all loans in one larger extract. In addition, we performed manual data reliability assessments of these extracts and determined that three loans should not have been included in the extracts because they had Section of the Act codes that were not within the parameters of our analysis as defined by the notes included in HUD’s extracts. These loans were not included in our analysis and, therefore, had no impact on our findings. We determined the data to be sufficiently reliable for analysis of the comparability of Section 207 refinance loans to Section 232 refinance loans. We conducted a literature review and interviewed numerous officials of lenders, residential care associations, and HUD to obtain information on risks due to health care market trends. We also searched for Inspectors General and agency reports through HUD Web sites.
Finally, we conducted a search on our internal Web site to identify previous work on the Section 232 program. To determine how HUD estimates the annual credit subsidy rate for the program, we reviewed documentation of HUD’s credit subsidy estimation procedures, reviewed the cash flow model for the program, and interviewed program officials from HUD’s Office of Evaluation and program auditors from the Office of Management and Budget (OMB). We also compared the assumptions used in HUD’s cash flow model with relevant OMB guidance and reviewed economic literature on modeling defaults to identify factors that are important for estimation. Additionally, we analyzed data provided by HUD field offices on initial loan-to-value ratios and debt service coverage ratios (at the time of loan application). We obtained the credit subsidy rates from the Federal Credit Supplement of the United States Budget. To review the actions HUD has taken in response to the HUD Inspector General’s 2002 report on the Section 232 program, we interviewed officials in HUD’s Office of Inspector General. In addition, we reviewed the HUD Inspector General’s 2002 report, as well as HUD’s Management Plan Status Reports for Implementation of Recommendation 1A of audit 2002-KC-0002. We also interviewed HUD headquarters officials, as well as field office officials during our site visits. Our review did not include an evaluation of underwriting criteria or the need for the program. We conducted our work in Atlanta, Georgia; Buffalo, New York; Chicago, Illinois; Los Angeles, California; San Francisco, California; and Washington, D.C., between February 2005 and April 2006, in accordance with generally accepted government auditing standards. The Department of Housing and Urban Development (HUD) currently processes a majority of Section 232 loans using the Multifamily Accelerated Processing (MAP) program and processes some loans under Traditional Application Processing (TAP).
Under MAP, the lender conducts the underwriting of the loan and submits a package directly to the Hub or program center for mortgage insurance. The Hub or program center reviews the lender’s underwriting and decides whether or not to provide mortgage insurance for the loan. New construction and substantial rehabilitation loans require a preapplication meeting at which HUD reviews required documentation up front. Under TAP, HUD, not the lender, is primarily responsible for the underwriting of the loan and determines whether or not to accept the loan. FHA has numerous underwriting requirements for loans made under the Section 232 program. Some examples include: Requiring documentation of a state-issued Certificate of Need (CON) for skilled nursing facilities and intermediate care facilities and, in states without a certificate of need procedure, an alternative study of market needs and feasibility. Requiring an appraisal of the facility (prepared by the lender under the MAP program) and a market study with comparable properties. Reviewing current or prospective operators of the residential care facility and ensuring that they meet certain standards. For example, FHA requires that operators of an assisted living facility have a proven track record of at least 3 years in developing, marketing, and operating either an assisted living facility or a board and care home. For new construction facilities specifically, FHA requires a business plan along with an estimate of occupancy rates and prospective reimbursement rates, including the percentage of patients whose costs are reimbursed through Medicare and Medicaid. For existing facilities applying for a refinance loan, FHA requires the submission of vacancy and turnover rates, current provider agreements for Medicare and Medicaid, and 3 years of balance sheets and operating statements, as well as the latest inspection report on the project’s operation.
Requiring limits on loan-to-value and debt service coverage ratios, which field office officials we interviewed identified as two of the more important financial ratios in the underwriting process. For example, for Section 232 loans, the loan-to-value ratio cannot exceed 90 percent for new construction loans and 85 percent for refinance loans. For loans processed under MAP, HUD field office officials are required to use MAP Guide checklists to ensure that lenders follow FHA’s underwriting requirements. These checklists contain guidelines for reviewing lender submissions and overall parameters that an application must meet. For example, field office officials use an appraisal review checklist in the MAP Guide to ensure that the submitted market study complies with MAP requirements. For applications processed under TAP, field office officials stated that they use checklists similar to those in the MAP Guide, because the MAP Guide incorporates many of the Section 232 underwriting requirements. For MAP loans, HUD headquarters has a Lender Qualifications and Monitoring Division (LQMD) that conducts reviews of loans. LQMD is responsible for evaluating lender qualifications and lender performance. It reviews and ultimately approves lenders requesting MAP lender approval for loan underwriting. The division also reviews a sample of lenders when a loan has defaulted or there is a need for additional lender oversight. While LQMD reviews are not specific to the Section 232 program, they help to monitor lenders participating in the program and ultimately help to reduce the number of risky loans that enter the portfolio. FHA requires field office staff to conduct a number of reviews for oversight of Section 232 loans. For example, staff address noncompliance items that are identified by HUD’s Financial Assessment Subsystem (FASS) for each facility.
Noncompliance items include, for example, unauthorized distribution of project funds or unauthorized loans from project funds. Using information from the annual financial statement, FASS’s computer model statistically calculates financial ratios, or indicators, for each facility and applies acceptable ranges of performance, weights, and thresholds for each indicator. FASS then generates a score for each facility based on these indicators, and this financial score represents a single aggregate financial measure of the facility. However, a HUD draft contractor study found that FASS did not adequately account for the unique nature of nursing homes in the Section 232 portfolio and, therefore, was a poor predictor of a nursing home going to claim. Field office officials we interviewed also review physical inspections conducted by HUD’s Real Estate Assessment Center (REAC), which is responsible for conducting physical assessments of all HUD-insured properties. Officials also ensure that the professional liability insurance requirement for facilities is met and conduct file reviews to identify any activities that warrant additional oversight. Additionally, officials in each field office we visited stated that they are required to monitor projects in HUD’s Real Estate Management System, the official source of data on HUD’s multifamily housing portfolio, and to conduct risk assessments on their properties at least once a year to identify those facilities that are designated as troubled or potentially troubled based on their physical inspection, financial condition, and other factors. Field offices also varied in their use of HUD’s Online Property Integrated Information Suite (OPIIS), a centralized resource for HUD multifamily data and property analysis.
According to officials at HUD headquarters, field office officials can use OPIIS to conduct a variety of portfolio analyses and to view risk assessments on their properties to better assist them in overseeing their portfolios. For example, OPIIS contains an Integrated Risk Assessment score that combines financial, physical, loan payment status history, and other data into a score that can be used to identify at-risk properties and prioritize workloads. However, four of the five field offices that we visited did not frequently use OPIIS. Some of these offices used the system to develop risk rankings for their properties or to obtain data about a property, but none of them regularly used the system for the monitoring of Section 232 loans. The one field office that used OPIIS more frequently did so because the system partly incorporates a loan risk and rankings system that the field office had previously developed for its own use. Officials in this field office stated that an issue with OPIIS is that it is not designed to capture important, specific financial information that is unique to some Section 232 loans, such as expenses on food or medication. In a 2002 report, the Department of Housing and Urban Development’s (HUD) Inspector General found that HUD’s Office of Housing did not have adequate controls to effectively manage the Section 232 program. Because of these weaknesses, the Inspector General found that HUD lacked assurance of the effective operation of Section 232 properties. The Inspector General noted that the Office of Housing had already taken steps to develop an action plan to address the weaknesses identified by a task force but that time frames had not yet been established. The Inspector General recommended that the Office of Housing establish specific time frames for implementing the corrective actions for the 10 weaknesses identified by the task force and that it monitor the actions to ensure timely and effective completion.
HUD officials developed a plan to correct the 10 control weaknesses identified by the Office of Housing, which included the current status of each action and specific target dates to complete the corrective actions. According to the Inspector General, HUD has taken action to address 2 of the 10 control weakness findings identified by the Office of Housing task force and for which the Inspector General recommended that timelines for corrective actions be established. The eight unresolved control weaknesses identified by the Office of Housing task force are all contingent upon approval of the proposed revisions to the regulatory agreements. However, the proposed revisions have been awaiting approval since August 2, 2004. According to HUD officials, the delay is a result of numerous administrative issues, including changes in FHA management and extended public comment periods. The two control weaknesses that have been addressed, and the respective corrective actions, involved loan underwriting. The Inspector General agreed with the Office of Housing task force, which found that HUD’s underwriting process for Section 232 properties needed to be strengthened and that HUD needed to complete market studies and background checks of applicants as part of the process. The Inspector General also agreed with the Office of Housing task force’s finding of potential problems associated with the nonrecourse nature of HUD Section 232 loans. In particular, the task force found that HUD needed to strengthen the regulatory agreements and underwriting process for Section 232 loans if these mortgages were to remain nonrecourse and to avoid a potential increase in the portfolio claim rate. HUD addressed these findings by adding requirements for operators, reviews of operators’ financial statements, and professional liability insurance. Furthermore, applications for projects that are considered marginal are rejected.
The eight remaining control weaknesses for which HUD has not fully completed its corrective actions are as follows: HUD lacks a handbook detailing monitoring requirements for nursing homes and assisted living facilities. The Inspector General found that HUD did not have a handbook specific to the Section 232 program with monitoring requirements to ensure that all facilities follow the applicable regulatory agreements and state and federal requirements. In our site visits to five field offices, we found inconsistencies in the extent to which oversight procedures were followed, such as requiring operators to submit financial statements. HUD plans to include Section 232 project monitoring requirements in the “Multifamily Asset Management and Project Servicing Handbook” once the proposed revisions to the applicable regulatory agreements have been approved. In addition, HUD headquarters officials told us that they plan to issue updated guidance on loan oversight for Section 232 properties while awaiting approval of the proposed revisions to the regulatory agreements. HUD’s regulatory agreement does not include specific requirements for Section 232 properties. The Inspector General found that the regulatory agreement for owners lacked requirements for Section 232 properties, such as compliance with Medicare and Medicaid guidelines. The Inspector General also found inconsistencies between the requirements for facilities operated by the owners and those operated by a separate entity. The Inspector General recognized that these omissions left HUD unable to control the activities of operators and ultimately created risk to the General Insurance/Special Risk Insurance Fund. HUD’s proposed revisions to the regulatory agreements have provisions that address these concerns; however, they are still awaiting approval.
The Financial Assessment Subsystem (FASS) does not allow owners and operators to submit annual financial statements electronically, denying HUD the ability to use the financial check and compliance feature in the system. The Inspector General found that the Real Estate Assessment Center’s (REAC) FASS did not include all Section 232 properties. Furthermore, operators were not required to submit annual financial statements electronically through the system. HUD headquarters officials noted that, while operators are unable to submit annual financial statements electronically, FASS has allowed electronic submissions from owners since the system’s inception. However, the Inspector General found that because operator financial statements are not required to be submitted electronically, HUD is unable to use the financial and compliance checks performed within the system to identify and follow up on deficiencies. HUD plans to modify FASS to allow electronic submission of operator financial statements; however, implementation has been delayed by funding problems and by the pending approval of the proposed revision to the operator regulatory agreement. The Office of Housing needs to improve monitoring and legal tools to provide early indication of possible default. The Office of Housing task force identified a need for improved monitoring and legal tools to provide early indication of potential default. To better understand issues related to monitoring loans, HUD’s Office of Evaluation completed several studies on Section 232 program performance. As of April 2006, all of these studies remain in draft form. Also, to aid in monitoring, HUD has proposed revisions to the applicable regulatory agreements to require that owners and operators submit annual inspection reports and inform HUD of state or federal violations. These reports can be an early indicator of quality-of-care concerns and a possible claim.
However, the proposed revisions to the regulatory agreements have not been made final. The Office of Housing staff needs additional training on servicing nursing homes and assisted living facilities. The Inspector General identified that project managers did not have sufficient training on reviewing Section 232 properties and dealing with the issues unique to Section 232 properties. HUD’s management plan states that, as of September 2004, REAC had conducted financial statement analysis for HUD hubs for the previous 2 fiscal years. HUD has also proposed training specific to Section 232 program financial analysis upon approval of revisions to the applicable regulatory agreements and subject to the availability of funds. However, HUD headquarters officials stated that there were very limited funds available for training. Certain conditions lead to loss of the Certificate of Need (CON) or license. The HUD Inspector General identified that, in some states, the CON and operating licenses may not transfer with the property. Consequently, an operator may hold these operational documents and take them away upon termination of the lease. Without these documents, a facility is not viable as a residential care facility, and its value is significantly diminished. This presents a large risk to HUD should the loan go to claim or should HUD have to acquire the property. HUD’s proposed revisions to the applicable regulatory agreements address this concern by categorizing these operational documents as part of the mortgaged properties. Receivables need to be included in the relevant legal documents to strengthen HUD's control over assets of the property in case of regulatory agreement violations. The Inspector General found that the Section 232 security agreement language was too broad to ensure that all property assets are covered by the mortgage.
To address this concern, HUD proposed revisions to the applicable regulatory agreements to include receivables in the personalty pledged as security for the mortgage. Additionally, HUD proposed added language in the owner regulatory agreement requiring the owner to execute a security agreement and financing statement covering all items of equipment and receivables. Field offices do not have consistent procedures for using different addendums for mortgages, regulatory agreements, and security agreements. The Office of Housing’s task force found inconsistencies in the field offices’ use of legal agreements between HUD and owners and operators, such as differing addendums to mortgages, regulatory agreements, and security agreements. We also found similar discrepancies during our five site visits. For example, only one office used addendums to HUD’s legal agreements to prevent operators from keeping operational documents, such as the CON and operating license, once the lease terminates. HUD has proposed revisions to the regulatory agreements, and once they are approved and implemented, all offices will use the same legal documentation. In the interim, HUD headquarters officials told us they plan to provide field offices with updated guidance on Section 232 loan oversight. As discussed earlier in this report, we question the Department of Housing and Urban Development’s (HUD) use of Section 207 loans as a proxy for Section 232 loans in the claim regression that is part of HUD’s credit subsidy estimates. This appendix provides greater detail on our analysis. Cumulative claim rates are generally compared for a set period of time and for loans from the same years of origination. However, HUD calculated the cumulative claim rates without making these adjustments, which confounds claim differences between programs with differences due solely to timing.
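The age adjustment at issue can be illustrated with a short sketch. All loan and claim counts below are hypothetical, and only the two formulas reflect the cumulative and conditional measures discussed in this appendix; prepayments are ignored for simplicity.

```python
# Hypothetical illustration of cumulative vs. conditional claim rates.
# All counts are invented; only the formulas mirror the measures
# discussed in this appendix (prepayments are ignored for simplicity).

def cumulative_claim_rate(total_claims, total_loans):
    """Unadjusted measure: all claims over all loans endorsed in the
    period, regardless of how long each loan has been insured."""
    return total_claims / total_loans

def conditional_claim_rates(loans_endorsed, claims_by_age_year):
    """Claims in each loan-age year divided by loans surviving to the
    start of that year, so only loans of the same age are compared."""
    rates, surviving = [], loans_endorsed
    for claims in claims_by_age_year:
        rates.append(claims / surviving)
        surviving -= claims  # survivors carry into the next age year
    return rates

# Two books of 1,000 loans with similar annual claim behavior; the older
# book has simply had more years in which claims could occur:
young_book = cumulative_claim_rate(total_claims=5, total_loans=1_000)
old_book = cumulative_claim_rate(total_claims=25, total_loans=1_000)
print(f"young: {young_book:.1%}, old: {old_book:.1%}")  # 0.5% vs. 2.5%

# Conditional rates by loan age avoid that distortion:
print(conditional_claim_rates(1_000, [5, 10, 15]))
```

The unadjusted comparison makes the older book look five times riskier even though its per-year claim behavior is comparable, which is the confounding effect the conditional analysis avoids.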
HUD calculated the cumulative claim rates for each program by taking the total number of loans that went to claim during a 14-year time period and dividing this by the total number of loans in that same time period. In this case, HUD was comparing a program that has been expanding over time, the Section 232 program, with a program that has had less loan volume in recent years, the Section 207 program. From 1992 to 1998, HUD insured 1,434 Section 207 loans. From 1999 to 2005, HUD insured 870 Section 207 loans. As a result, HUD has been comparing the claim rate of loans that have had very little time in which to default (Section 232 loans had an average age of 4 years) with the claim rate of loans that have had substantial time in which to default (Section 207 loans had an average age of 7.5 years). A comparison between two programs’ claim rates should allow for differences in the age of the loans. HUD officials also told us that they have not analyzed the comparability of these two loan types in terms of their prepayment rates. To examine the comparability of the Section 207 and Section 232 loans, we compared the conditional claim and prepayment rates of the two types of loans. An analysis of conditional claim and prepayment rates compares claim and prepayment probabilities for loans of the same age, so that comparisons based on loans of widely varying ages are avoided. We found that the Section 207 loans generally had lower and, in some cases, significantly lower conditional claim rates than the Section 232 loans. The differences were greater in the later years, when loans more often go to claim (see fig. 7). For example, the conditional claim rate for Section 207 loans in fiscal year 8 was 0.14 percent, as compared with a conditional claim rate of 3.88 percent for Section 232 loans in fiscal year 8. We found that Section 207 loans had generally higher, and sometimes significantly higher, conditional prepayment rates compared with Section 232 loans.
The differences were greater in the later years, when loans more often are prepaid (see fig. 8). For example, the conditional prepayment rate for Section 207 loans in fiscal year 8 was 21.72 percent as compared to a conditional prepayment rate of 11.25 percent for Section 232 loans in fiscal year 8 (making the conditional prepayment rate for the Section 207 loans 93 percent higher than the conditional prepayment rate for Section 232 loans). Additionally, we examined and compared cumulative 5-year claim and prepayment rates. Section 207 loans had a 5-year cumulative claim rate of 3 percent, while for the Section 232 loans it was approximately 6.7 percent. The 5-year cumulative prepayment rate for Section 207 loans was about 27 percent, while for Section 232 loans it was about 11 percent. In addition to the individual named above, Paul Schmidt, Assistant Director; Austin Kelly; Tarek Mahmassani; John McGrail; Andy Pauline; Carl Ramirez; Richard Vagnoni; Wendy Wierzbicki; and Amber Yancey-Carroll made key contributions to this report.

Through its Section 232 program, the Department of Housing and Urban Development’s (HUD) Federal Housing Administration (FHA) insures approximately $12.5 billion in mortgages for residential care facilities. In response to a requirement in the 2005 Consolidated Appropriations Conference Report and a congressional request, GAO examined (1) HUD’s management of the program, including loan underwriting and monitoring; (2) the extent to which HUD’s oversight of insured facilities is coordinated with the states’ oversight of quality of care; (3) the financial risks the program poses to HUD’s General Insurance/Special Risk Insurance (GI/SRI) Fund; and (4) how HUD estimates the annual credit subsidy cost for the program.
While HUD's decentralized program management allows its 51 field offices flexibility in their specific practices, GAO found differences in the extent to which staff in the five field offices it visited were aware of current program requirements. For example, four offices were unaware of required addendums to the programs' standard regulatory agreement. Further, while individual offices had developed useful practices for loan underwriting and monitoring, they lacked a mechanism for systematically sharing such practices with other offices. Also, field office officials were concerned about adequate current or future levels of staff expertise--a critical factor in managing program risk in that health care facility loans are complicated and require specialized knowledge and expertise. FHA requires a review of the most recent annual state-administered inspection report for state-licensed facilities applying for program insurance, and recommends, but does not require, continued monitoring of such reports for facilities once it has insured them. Four of the five HUD field offices GAO visited do not routinely collect annual inspection reports for their insured facilities. While the reports are but one of several monitoring tools, they provide potential indicators of future financial risk. HUD has proposed revising its standard regulatory agreements to require insured facility owners or operators to submit annual inspection reports and to report notices of violations. However, the proposed revisions have been awaiting approval since August 2004, and the implementation date is uncertain. The Section 232 program accounts for only about 16 percent of the GI/SRI Fund's total unpaid principal balance, but program and industry trends pose potential risks to the Section 232 program and to the GI/SRI Fund. 
For example, in recent years the program has insured increasing numbers of assisted living facility loans and refinancing loans, for which there are limited data available to assess long-term performance. Other potential risk factors include increasing prepayments (full repayment before loan maturity) and loan concentration in several large markets and among relatively few lenders. Projected shifts in demand for residential care facilities could affect currently insured facilities and the overall market for the types of facilities that HUD insures under the program. To estimate the program subsidy cost, HUD uses a model to project cash flows for each loan cohort (the loans originated in a given fiscal year) over its entire life. HUD's model does not explicitly or fully consider certain factors, such as loan prepayment penalties, interest rate changes, or differences in loans to different types of facilities, and uses some proxy data that is not comparable to Section 232 loans. The model's exclusion of potentially relevant factors and its use of this proxy data could affect the reliability of HUD's credit subsidy estimates.
Since 1796, the federal government has had a role in developing and funding surface transportation infrastructure such as roads and canals to promote the nation’s economic vitality and improve the quality of life for its citizens. In 1956, Congress substantially broadened the federal role in road construction by establishing the Highway Trust Fund, a dedicated source of federal revenue, to finance a national network of standardized highways, known as the Interstate Highway System. This system, financed and built in partnership with state and local government over 50 years, has become central to transportation in the United States. Currently, most federal surface transportation programs funded by the HTF span four major areas of federal investment: highway infrastructure, transit infrastructure and operations, highway safety, and motor carrier safety. Federal surface transportation funds are distributed either by a formula or on a discretionary basis through several individual grant programs. These grant programs are organized by mode and administered by four of DOT’s operating administrations—the Federal Highway Administration (FHWA), the Federal Transit Administration (FTA), the National Highway Traffic Safety Administration (NHTSA), and the Federal Motor Carrier Safety Administration (FMCSA). The modal administrations work in partnership with the states and other grant recipients to administer federal surface transportation programs. For example, the federal government currently provides financial assistance, policy direction, technical expertise, and some oversight, while state and local governments are ultimately responsible for executing transportation programs by matching and distributing federal funds and by planning, selecting, and supervising infrastructure projects and safety programs while complying with federal requirements. Appendix II provides further information on the current and historical operation of these federal surface transportation programs.
Additionally, the federal government provides financial assistance for other surface transportation programs such as intercity passenger rail, which has received over $30 billion of federal support since its inception in 1971. However, this program is financed and operated separately from other surface transportation programs, and an in-depth discussion of federal intercity passenger rail assistance is not included in this report. Increases over the past 10 years in transportation spending at all levels of government have improved the physical condition of highways and transit facilities to some extent, but congestion has worsened and safety gains have leveled off. According to the most recent DOT data, between 1997 and 2004 total highway spending per year by federal, state, and local governments grew by 22.7 percent in constant dollars. During this time, DOT reported some overall improvements in physical condition for road systems and bridges. For example, the percentage of vehicle miles traveled per year on pavement in “good” condition increased from 39.4 percent to 44.2 percent, and the percentage of deficient bridges fell from 29.6 percent in 1998 to 26.7 percent in 2004. At the same time, incidents such as the Minneapolis bridge collapse in August 2007 indicate that significant challenges remain. Furthermore, despite increases in investment levels and some improvements in physical condition, operational performance has declined. For example, during the same period the average daily duration of travel in congested conditions increased from 6.2 hours to 6.6 hours, and the extent and severity of congestion across urbanized areas also grew.
Transportation safety has improved considerably over the past 40 years, and although motor vehicle and large truck fatality rates have generally continued to fall modestly since the mid-1990s, the improvements yielding the greatest safety benefits (e.g., vehicle crashworthiness requirements and increases in safety belt use) have already occurred, making future progress more difficult. Furthermore, demand on transportation facilities nationwide has grown considerably since our transportation systems were built and is projected to increase in the coming decades as population, income levels, and economic activity continue to rise. According to the Transportation Research Board, an expected population growth of 100 million people could double the demand for passenger travel by 2040. Similarly, freight traffic is expected to climb by 92 percent from 2002 to 2035. These trends have the potential to substantially deepen the strain on the existing system, increasing congestion, and decreasing the reliability of our transportation network—with potentially severe consequences ranging from the economic impact of wasted time and fuel to the environmental and health concerns associated with increased fuel emissions. Moreover, at the current fuel tax rate, revenues to support the HTF may not be sufficient to sustain it. Currently, trust fund receipts are growing and will continue to grow with increased traffic. However, the purchasing power of the dollar has declined with inflation, and the federal motor fuel tax rate has not increased since 1993. In addition, more fuel-efficient and alternative-fuel vehicles are using less taxable motor fuel per mile driven. Recent legislation has authorized spending that is expected to outstrip the growth in trust fund receipts. 
According to a recent estimate from CBO, the remaining balance in the Highway Account of the Highway Trust Fund will be exhausted in 2009, and in fiscal year 2009 projected highway spending will exceed revenue by $4 to $5 billion. In January 2008 the National Surface Transportation Policy and Revenue Study Commission released a report with several recommendations to place the trust fund on a sustainable path, as well as reform the current structure of the nation’s surface transportation programs. The recommendations include significantly increasing the level of investment by all levels of the government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current program more performance- and outcome-based and mode-neutral, among other things. To finance the additional investment, the Commission recommended raising the current federal fuel tax rate by 25 to 40 cents per gallon on an incremental basis equivalent to an increase of 5 to 8 cents per gallon per year for 5 years. It also said that states would have to raise revenue from a combination of higher fuel taxes and other sources. In addition to raising the fuel tax, the Commission recommended a number of other user-based fees such as tolling, congestion pricing, and freight fees to provide additional revenue for transportation improvements. Three members of the Commission disagreed with some of the findings and recommendations of the Commission report. For example, the minority view disagreed with the Commission’s recommendations on expanding the federal role and increasing the federal fuel tax, among others. Rather, the minority view proposed sustaining fuel taxes at the current levels, refocusing federal investment on two areas of national interest, and providing the states with greater regulatory flexibility, incentives, and the analytical tools to allow adoption of market-based reforms on their highway systems.
We have ongoing work assessing the Commission’s proposal and other reauthorization proposals and will be issuing a report in 2008. Although most surface transportation funds are still directed to highway infrastructure, the federal role in surface transportation has broadened over the past 50 years to incorporate goals beyond highway construction, and federal surface transportation programs have grown in number and complexity. The resulting conglomeration of program structures reflects a variety of federal approaches for setting priorities, distributing federal funds, and sharing oversight responsibility with state and local partners for surface transportation programs. The HTF was established in 1956 to provide federal funding for Interstate highway construction and other infrastructure improvements based on the “user-pay principle”—that is, users of transportation systems should pay for the systems’ construction through highway user fees such as taxes on motor fuels, tires, and trucks. However, since 1956, the federal role in surface transportation has expanded beyond funding Interstate construction and highway infrastructure to include grant programs that address other transportation, societal, and environmental goals. For example, although most HTF expenditures continue to support highway infrastructure improvements (see fig. 1), Congress established new federal grants for highway safety and transit during the 1960s and added a motor carrier safety grant program during the 1980s. Furthermore, Congress has since expanded the initial basic grant programs in each of these areas to incorporate a variety of different goals. For example, the highway program has expanded to include additional programs to fund air quality improvements, Interstate maintenance, and safety-related construction improvements (see fig. 2).
Federal transit assistance expanded from a single grant program that funded capital projects to multiple programs that provide general capital and operating assistance for urban and rural areas, as well as numerous specialized grants with goals ranging from supporting transit service for the elderly, persons with disabilities, and low-income workers to promoting the use of alternative fuels (see fig. 3). Federal safety assistance has also expanded from funding general state highway and motor carrier safety programs and enforcement activities to additionally funding many specialized grants to address specific issues. For example, federal highway safety assistance currently includes several grant programs to address specific accident factors (e.g., alcohol-impaired driving) and safety data gaps (see fig. 4). Similarly, the number of federal motor carrier assistance programs has increased to include several grants for improving data collection, supporting commercial driver’s license programs and funding border enforcement activities (see fig. 5). Consequently, federal funds currently support a wide variety of goals and modes beyond the initial federal focus on highway infrastructure, ranging from broad support for transit in urban areas, to targeted grants to increase seat-belt usage. Furthermore, Congress has also expanded the scope of federal safety goals to include specific legislative changes at the state level. For example, in accepting certain federal-aid highway infrastructure funds, states must enact certain laws to improve highway safety or face penalties in the form of either withholdings or transfers in their federal grants. Over the past 30 years, penalty or incentive provisions have been used to encourage states to enact laws that establish a minimum drinking age of 21 years, a maximum blood alcohol level of 0.08 to determine impaired driving ability, and mandatory seat belt usage, among others (see fig. 
4), with transfer or withholding penalties as high as 10 percent of a state’s designated highway infrastructure funds. While most states have chosen to adopt laws that comply with many of these provisions, some remain subject to certain penalties. For example, as of January 2008, 11 states are penalized for not enacting an open container law and 11 are penalized for not enacting a repeat offender law. As federal goals have broadened, Congress has added new federal procedural requirements for infrastructure projects and programs and agencies have issued more complex rules to address these additional federal goals. For example, Congress established cooperative urban transportation planning as a matter of national interest and passed legislation in 1962 requiring all construction projects to be part of a continuing, comprehensive, and cooperative planning process between state and local governments. In another example, grant recipients may be required to conduct environmental assessments for many federally funded transportation projects to comply with the federal environmental goals established by the National Environmental Policy Act of 1969 (NEPA). Other federal requirements may include compliance with the Americans with Disabilities Act, nondiscrimination clauses in the Civil Rights Act of 1964, labor standards mandated by the Davis-Bacon Act, and Buy America procurement provisions, among others. Although behavior-oriented safety programs and activities are generally not subject to construction-related requirements, Congress has required that agencies address additional federal goals in safety-related rulemaking processes. For example, to address national environmental objectives, Congress expanded NHTSA’s regulatory scope in highway safety to include establishing regulations for corporate average fuel economy standards, in addition to issuing rules in areas such as tire-safety standards and occupant-protection devices (e.g., seat belts). 
Similarly, to address other areas of national concern, Congress has broadened FMCSA’s regulatory authority in motor carrier safety to include household goods movement, medical requirements for motor carrier operators, and greater oversight of border and international safety. Furthermore, when establishing federal standards in these areas, regulatory agencies such as NHTSA and FMCSA may be subject to increasingly rigorous requirements for analysis and justification associated with a wide range of federal legislation and executive orders including NEPA, Executive Order 12866 requiring cost-benefit analysis for proposed rules, Executive Order 13211 requiring consideration of the effects of government regulation on energy, and the Unfunded Mandates Reform Act of 1995, among others. Program expansion over the past 50 years has created a variety of grant structures and established different federal approaches for setting priorities and distributing federal funds across surface transportation programs. These approaches, which range from formula grants to dedicated spending provisions, give state and local governments varying degrees of discretion in allocating federal funds. As in the past, most surface transportation programs are jointly administered by the federal government in partnership with state or local governments, but in recent years the federal government has increasingly delegated oversight responsibility to state and local governments. Federal approaches for setting priorities and distributing funds currently range from giving state and local governments broad discretion in allocating highway infrastructure funds to directly targeting specific federal goals through the use of incentive grants and penalty provisions in safety programs. 
In 1956 federal surface transportation funds were distributed to the states through four formula grant programs that provided federal construction aid for certain eligible highway categories (e.g., Interstate, primary, and secondary highways and urban extensions). The states, in turn, matched and distributed funds at their discretion, within each program’s eligibility requirements. Within the highway program, this federal-state partnership has changed in response to considerable increases in state and local authority and flexibility since 1956. Largely because of revisions to federal highway programs in the 1990s, state and local governments currently have greater discretion to allocate the majority of their federal highway funds according to state and local priorities. For example, core highway programs such as the Surface Transportation Program and the National Highway System program have broader goals and project eligibility requirements than earlier highway infrastructure grant programs. Although funds continue to be distributed by formula to the states for individual programs based on measures of highway use, the extent of a state’s highway network, or other factors, six core highway programs, as figure 6 demonstrates, permit the states to transfer up to 50 percent of their apportioned funds, with certain restrictions, to other eligible highway programs. Furthermore, although the process for calculating the distributions is complex for some programs, the end result of most highway program formulas is heavily influenced by minimum apportionment and “equity” requirements. For fiscal year 2008, each state’s share of formula funds will be at least 92 percent of its relative revenue contribution to the Highway Account of the Highway Trust Fund. According to FHWA estimates, the equity requirements will provide approximately $9 billion in highway funds to the states in addition to the amount distributed by formula through the individual grant programs.
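The minimum-guarantee ("equity") adjustment described above can be sketched roughly as follows. This is a one-pass simplification with hypothetical figures, not the statutory computation, which involves additional steps and program-by-program rules:

```python
def equity_bonus(apportioned, contributions, floor=0.92):
    """Sketch of an equity ('minimum guarantee') adjustment: any state
    whose share of total formula funds falls below `floor` times its
    share of trust-fund contributions receives extra funds to close the
    gap. One-pass simplification: targets are computed against the
    original formula total rather than iterated as funds are added."""
    total_funds = sum(apportioned.values())
    total_contrib = sum(contributions.values())
    bonus = {}
    for state, amount in apportioned.items():
        # Target dollars = floor * (state's contribution share) * total funds
        target = floor * (contributions[state] / total_contrib) * total_funds
        bonus[state] = max(0.0, target - amount)
    return bonus
```

For instance, a state contributing half of all Highway Account revenue but apportioned only 40 percent of formula funds would receive a bonus lifting it to 46 percent of the total (0.92 times its 50 percent contribution share).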
Over $2 billion of these additional funds will have the same broad eligibility requirements and transfer provisions of the Surface Transportation Program. Moreover, flexible funding provisions within highway and transit programs allow certain infrastructure funds to be used interchangeably for highway or transit projects. Major transit infrastructure grants currently range from broad formula grants that provide capital and operating assistance, such as the Block Grants Program (Urbanized Area Formula Grants), to targeted discretionary grants for new transit systems, such as New Starts and Small Starts, that require applicants to compete for funding based on statutorily defined criteria. For example, projects must compete for New Starts funds on the basis of cost-effectiveness, potential mobility improvements, environmental benefits, and economic development effects, among other factors. Additionally, smaller formula grants direct funds to general goals such as supporting transit services for special populations like elderly, disabled, and low-income persons. Unlike most surface transportation funding, which is distributed through the states, most transit assistance is distributed directly to local agencies, since transit assistance was originally focused on urban areas. Current major highway and motor carrier safety grants include formula grants to provide general assistance for state highway safety programs and improving motor carrier safety and enforcement activities, such as Highway Safety Programs (402) and Motor Carrier Safety Assistance Program (MCSAP) Grants. They also include targeted discretionary grants such as Occupant Protection Incentive Grants and Border Enforcement Grants. Additionally, they include penalty provisions, such as Open Container Requirements (154) and Minimum Penalties for Repeat Offenders for Driving While Intoxicated or Driving Under the Influence (164), designed to address specific safety areas of national interest. 
Unlike formula-based funding, some of the discretionary grants, such as the Safety Belt Performance Grants, directly promote national priorities by providing financial incentives for meeting specific performance or safety activity criteria (e.g., enforcement, outreach). Additionally, penalty provisions such as those associated with Open Container laws and MCSAP Grants promote federal priorities by either transferring or withholding state highway infrastructure funds from states that do not comply with certain federal provisions. For example, in 2007, penalty provisions transferred over $217 million of federal highway infrastructure assistance to highway safety programs in the 19 states and Puerto Rico that were penalized for failure to enact either open container or repeat offender laws. Finally, Congress provides congressionally directed spending for surface transportation through specific provisions in legislation or committee reports. While estimates of the precise number and value of these congressional directives vary, observers agree that they have grown dramatically. For instance, the Transportation Research Board found that congressional directives have grown from 11 projects in the 1982 reauthorization act to over 5,000 projects in the 2005 reauthorization act. Most federal surface transportation programs continue to be jointly administered by the federal and state, or local governments, but the federal government has increasingly delegated oversight responsibility to state and local governments. This trend is most pronounced for highway infrastructure programs; however, it has also occurred in federal transit and safety programs. For example, when Interstate construction began, the federal government fully oversaw all federally funded construction projects, including approving design plans, specifications, and estimates, and periodically inspecting construction progress. 
In 1973, Congress authorized DOT to delegate oversight responsibility to states for compliance with certain federal requirements for noninterstate projects. During the 1990s, Congress further expanded this authority to allow states and FHWA to cooperatively determine the appropriate level of oversight for federally funded projects, including some Interstate projects. Currently, based on a stewardship agreement with each state, FHWA exercises full oversight over a limited number of Federal-aid Highway projects, constituting a relatively limited amount of highway mileage. States are required to oversee all Federal-aid Highway projects that are not on the National Highway System, which constitutes a large majority of the road mileage receiving federal funds, and states oversee design and construction phases of other projects based on an agreement between FHWA and the state. Full federal oversight for transit projects is limited to major capital projects that cost over $100 million, and grant recipients are allowed to self-certify their compliance with certain federal laws and regulations for other projects. Although state and local grant recipients have considerable oversight authority, FHWA and FTA both periodically review the recipients’ program management processes to ensure compliance with federal laws and regulations. State and local government responsibilities for overseeing transportation planning processes have also grown in recent decades. Although such responsibilities predate federal transportation assistance programs, since 1962, the federal government has made compliance with numerous planning and project selection requirements a condition for receiving federal assistance. During the 1970s, federal requirements grew in range and complexity and, in some cases, specified how state and local governments should conduct planning activities. However, since the 1980s, state and local governments have had greater flexibility to fulfill federal planning requirements.
For example, in 1983, urban transportation planning regulations were revised to reduce the level of direct federal involvement in state and local planning processes, and state and local agencies were allowed to self-certify their compliance with federal planning requirements. Similarly, although the federal government identified specific environmental and economic factors to be considered in the planning process as part of the surface transportation program legislation enacted in 1991 and subsequently amended in 1998, these requirements give state and local governments considerable discretion in selecting analytical tools to evaluate projects and make investment decisions based on their communities’ needs and priorities. The states have also been given greater oversight responsibility for safety programs as federal agencies have shifted from direct program oversight to performance-based oversight of state safety goals. For example, since 1998, NHTSA has not approved state highway safety plans or projects, but instead focuses on a state’s progress in achieving the goals it set for itself in its annual safety performance plan. Under this arrangement, a state must provide an annual report that outlines the state’s progress towards meeting its goals and performance measures and the contribution of funded projects toward meeting its goals. If a state does not meet its established safety goals, NHTSA and the state work cooperatively to create a safety improvement plan. FMCSA uses a similar approach to oversee state motor carrier safety activities. Starting in 1997, the states were required to identify motor carrier safety problems based on safety data analysis, target their grant activities to address these issues, and report on their progress toward the national goal of reducing truck crashes, injuries, and fatalities. 
Much as FHWA and FTA do for their grant programs, both NHTSA and FMCSA periodically review state management processes for compliance with federal laws and regulations. Many federal surface transportation programs do not effectively address identified transportation challenges such as growing congestion. While program goals are numerous, they are sometimes conflicting and often unclear—which contributes to a corresponding lack of clarity in the federal role. The largest highway, transit, and safety grant programs distribute funds through formulas that are typically not linked to performance and, in many cases, have only an indirect relationship to needs. Mechanisms generally do not link programs to the federal objectives they are intended to address, in part due to the wide discretion granted to states and localities in using most federal funds. Furthermore, surface transportation programs often do not employ the best tools and approaches available, such as rigorous economic analysis for project selection and a mode-neutral approach to planning and investment. The federal role in surface transportation is unclear, in part because program goals are often unclear. In some cases, stated goals may be contradictory or may come into direct conflict. For example, it may not be possible to improve air quality while spurring economic development with new highway construction. With the proliferation of goals and programs discussed in the previous section of this report, the federal role varies from funding improvements in specific types of infrastructure (such as the National Highway System) to aiming at specific outcomes (such as reducing highway fatalities). At a recent expert panel on transportation policy convened by the Comptroller General, experts cited the lack of focus of the federal role in transportation as a problem, and some stakeholders have also made similar criticisms. In some policy areas, the federal role is limited despite consensus on goals. 
For example, freight movement is widely viewed as a top priority, yet no clear federal role has been established in freight policy. DOT’s draft Framework for a National Freight Policy, issued in 2006, is a step toward clarifying a federal role and strategy, but it lacks specific targets and strategies and criteria for achieving them. Current approaches to planning and financing transportation infrastructure do not effectively address freight transportation issues—few programs are directly aimed at freight movement, and funding is based on individual modes, but freight moves across many modes. Similarly, despite statutes and regulations that identify an intermodal approach that provides connections across modes as a goal of federal transportation policy, there is currently only one federal program specifically designed for intermodal infrastructure, and all the funds available for the program are congressionally designated for specific projects. The federal government also lacks a defined role in or mechanism for aiding projects that span multiple jurisdictions. The discretion and differing priorities of individual states and localities can make it difficult to coordinate large projects that involve more than one state or local sponsor. There have been some successful multijurisdictional transportation initiatives, such as the FAST Corridor across several metropolitan areas in Washington State, but a lack of established political or administrative mechanisms for cooperation, combined with the large degree of state and local autonomy in transportation decision-making, is an obstacle to such “megaprojects.” At a hearing of the National Surface Transportation Policy and Revenue Study Commission in New York City, an expert on the regional economy cited the Tappan Zee Bridge in New York State as an example of the obstacles such projects can face. 
Neighboring Connecticut wants the bridge’s capacity expanded, but there is currently no established mechanism that allows Connecticut to help move the project forward. In testimony for the Commission, stakeholders such as the U.S. Chamber of Commerce and the American Association of Port Authorities cited fostering interjurisdictional coordination as a key federal role, and AASHTO has also highlighted the need for improved multijurisdictional coordination mechanisms in its reports on the future of federal transportation policy. At times, DOT has undertaken new activities without assessing the rationale for a federal role. For example, the agency made short sea shipping of freight a priority, but did not first examine the effect of federal involvement on the industry or identify obstacles to success and potential mitigating actions. Without a consistent approach to identifying the rationale for a federal role, DOT is limited in its ability to evaluate potential investments and determine whether short sea shipping—or another available measure—is the most effective means of enhancing freight mobility. Most federal surface transportation programs lack links between funding and performance. Federal funding for transportation has increased significantly in recent years, but because spending is not explicitly linked to performance, it is difficult to assess the impact of these increases on the achievement of key goals. During this period of funding increases, the physical condition of the highway system has improved, but the system’s overall performance has decreased, according to available measures of congestion. DOT has established goals under the Government Performance and Results Act (GPRA) of 1993 that set specific benchmarks for performance outcomes such as congestion and highway fatalities. 
However, these performance measures are not well-reflected in individual grant programs because disbursements are seldom linked to outcomes—most highway funds are apportioned without relationship to the performance of the recipients. The largest transit and safety programs also lack links to performance. States and localities receive the same disbursement regardless of their performance at, for example, reducing congestion or managing project costs. As a result, the incentive to improve return on investment—the public benefits gained from public resources expended—is reduced. Safety and some transit grants are more directly linked to goals than highway infrastructure programs, and several incorporate performance measures. Whereas highway infrastructure programs tend to focus on improving specific types of facilities such as bridges, highway safety programs and, to a lesser extent, transit programs are more often designed to achieve specific objectives. For instance, the goal of the Job Access and Reverse Commute transit program is to make jobs more accessible for welfare recipients and other low-income individuals. Likewise, under the Section 402 State and Community Highway Safety Grant Program, funds must be used to further the goal of reducing highway fatalities. To some extent, transit and safety programs also have a more direct link to needs because their formulas do not incorporate equity adjustments that seek to return funds to their source. Furthermore, several highway safety and motor carrier safety grants make use of performance measures and incentives. For example, under the Motor Carrier Safety Assistance Program, some funds are set aside for incentive grants that are awarded using five state performance indicators that include, among others, large truck-involved vehicle fatality rates, data sharing, and commercial driver’s license verification. Most highway transportation programs lack links to need as well as performance.
As discussed above, most grant funds are instead distributed according to set formulas that typically have an indirect relation to need. As a result, grant disbursements for these programs not only fail to reflect performance, but they may also not reflect need. Some of the formula criteria, such as population, are indirect measures of need, but the equity bonus and minimum apportionment criteria are not related to need, and exert a strong influence on formula outcomes. Certain programs, such as the Highway Bridge Replacement and Rehabilitation Program, which bases disbursements on the cost of needed repairs, use more direct measures. In general, however, the link between needs and federal highway funding is weak. Besides lacking links between funding and performance, federal surface transportation programs generally lack mechanisms to tie state actions to program goals. DOT does not have direct control over the vast majority of activities that it funds; instead, states and localities have wide discretion in selecting projects to fund with federal grants. Federal law calls the federal-aid highway program a “federally-assisted state program,” and specifies that grant funds “shall in no way infringe on the sovereign rights of the States to determine which projects shall be federally financed.” In addition, states have broad flexibility in using more than half of federal highway funds as a result of a combination of programs with wide eligibility (such as the Surface Transportation Program) and the ability to transfer some funds between highway programs. Furthermore, “flex funding” provisions allow transfers between eligible highway and transit programs; between 1992 and 2006, states used this authority to transfer $12 billion from highway to transit programs. While these provisions give states the discretion to pursue their own priorities, the provisions may impede the targeting of federal funds toward specific national objectives. 
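The weak link between formula criteria and need described above can be sketched with a simple, hypothetical apportionment model. Everything in this sketch is invented for illustration (the function, the 0.5 percent floor, and the state shares); it does not reproduce any actual program formula, only the general mechanism by which a minimum-apportionment floor can override need-based criteria:

```python
def apportion(formula_shares, total_funds, floor_share=0.005):
    """Distribute funds by each state's formula share, but guarantee
    every state a minimum share, a simplified stand-in for minimum
    apportionment provisions. The floor pulls funds away from the
    formula criteria regardless of measured need."""
    floored = {s: max(v, floor_share) for s, v in formula_shares.items()}
    scale = sum(floored.values())
    return {s: total_funds * v / scale for s, v in floored.items()}

# A state whose formula share is far below the floor still receives
# the floor amount, diluting the link between funding and need.
grants = apportion({"A": 0.9, "B": 0.001}, 100.0)
```

Under these invented numbers, state B's grant is several times what its formula share alone would yield, showing how floor provisions can dominate formula outcomes.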
Federal rules for transferring funds between highway programs are so flexible that the distinctions between individual programs have little meaning. To some extent, the Federal-aid Highway program functions as a cash transfer, general purpose grant program, not as a tool for pursuing a cohesive national transportation policy. Transit and safety grants, in contrast, are more linked to goals because they do not allow transfers among programs to the same degree. Safety grants are linked to goals because states must use data on safety measures to create performance plans that structure their safety investments, yet states are still able to set their own goals, develop their own programs, and select their own projects. Performance measures are also used in allocating funding in several highway safety grant programs, providing an even more direct link to goals. In some areas, federal surface transportation programs do not use the best tools and approaches available. Rigorous economic analysis, applied in benefit-cost studies, is a key tool for targeting investments, but does not drive transportation decision-making. While such analysis is sometimes used, we have previously reported that it is generally only a small factor in a given investment decision. Furthermore, statutory requirements of the planning and project selection processes—such as public participation procedures or NEPA requirements that may be difficult to translate into economic terms—can interfere with the use of benefit-cost analysis. Decision makers often also see other factors as more important. In a survey of state DOTs that we conducted in 2004 as part of that same study, 34 said that political support and public opinion are factors of great or very great importance in the decision to recommend a highway project, while 8 said that the ratio of benefits to costs was a factor of great or very great importance. 
Economic analysis was more common for transit projects, largely because of the requirements of the competitive New Starts grant program, which uses a cost-effectiveness measure. However, the New Starts program constitutes only 18 percent of transit funding authorizations under the Safe, Accountable, Flexible, and Efficient Transportation Equity Act – A Legacy for Users (SAFETEA-LU). There are also few formal evaluations of the outcomes of federally-funded projects. As a result, policymakers miss a chance to learn more about the efficacy of different approaches and projects. Such evaluations are especially important because highway and transit projects often have higher costs and lower usage than estimated beforehand. New Starts is also the only transportation grant program that requires before-and-after studies of outcomes. The modal basis of transportation funding also limits opportunities to invest scarce resources as efficiently as possible. Instead of being linked to desired outcomes, such as mobility improvements, funds are “stovepiped” by transportation mode. Although, as discussed above, states and localities have great flexibility in how they use their funds, this modal structure can still discourage investments based on an intermodal approach and cross-modal comparisons. Reflecting the separate federal transportation funding programs, many state and local DOTs are organized into several operating administrations with responsibilities for particular modes. Because different operating administrations oversee and manage separate funding programs, these programs often have differing timelines, criteria, and matching fund requirements, which can make it difficult for public planners to pursue the goal—stated in law and DOT policy—of an intermodal approach to transportation needs.
For example, a recent project at the Port of Tacoma (Washington) involved widening a road and relocating rail tracks to improve freight movement on both modes, but it was delayed because highway funding was available but rail funding was not. Moreover, despite the wide funding flexibility within the highway program and between the highway and transit programs, many funds are dedicated on a modal basis, and state and local decision makers may choose projects based on the mode eligible for federal funding. Experts on the Comptroller General’s recent transportation policy panel cited modal stovepiping as a problem with the current federal structure, saying that it inhibits consideration of a range of transportation options. State officials have also criticized stovepiping, both in AASHTO policy statements and individually. For instance, a state transportation official told a hearing of the National Surface Transportation Policy and Revenue Study Commission that modal flexibility should be increased to allow states to select the best project to address a given goal. The federal government is not equipped to implement a performance-based approach to transportation funding in many areas because it lacks comprehensive data. Data on outcomes—ideally covering all projects and parts of the national transportation network, as well as all modes—would be needed in order to consider performance in funding decisions. Presently, data on key performance and outcome indicators are often absent or flawed. For example, DOT does not have a central source of data on congestion—the available data are stovepiped by mode—and some congestion information for freight rail is inaccessible because it is proprietary and controlled by railroad companies. Likewise, FTA does not possess reliable and complete data on transit safety. A partial exception is highway safety, for which NHTSA and FMCSA have data on a variety of outcomes, such as traffic fatalities.
NHTSA employs this information to help states set priorities, FMCSA uses it to target enforcement activities, and both agencies use it to monitor states’ progress toward achieving their goals and to award incentive grants. However, the safety data that states collect are not always timely, complete, and consistent. For example, a review of selected states found that some of the information in their databases was several years old. Tools to make better use of existing infrastructure have not been deployed to their full potential, in part because their implementation is inhibited by the current structure of federal programs. Research has shown that a variety of congestion management tools, such as Intelligent Transportation Systems (ITS) and congestion pricing, are effective ways of increasing or better utilizing capacity. Although such tools are increasingly employed by states and localities, their adoption has not been as extensive as it could be given their potential to decrease congestion. One factor contributing to this slow implementation is the lack of a link between funding and performance in current federal programs—projects with a lower return on investment may be funded instead of congestion management tools such as ITS. Furthermore, DOT’s measures of effects fall short of capturing the impact of ITS on congestion, making it more difficult for decision makers to assess the relative worth of alternative solutions. State autonomy also contributes to the slowed rollout of these tools. Even though federal funding is available to encourage investment in ITS, states often opt for investments in more visible projects that meet public demands, such as capacity expansion. Federal investment in transportation may lead to the substitution of federal spending for state and local spending. One strategy that Congress has used to meet the goals of the Federal-aid Highway program has been to increase federal investment.
However, not all of the increased federal investment has increased the total investment in highways, in part because Congress cannot prevent states and localities from using some of their own highway funds for other purposes when they receive additional federal funds. We reported, on the basis of our own modeling and a review of other empirical studies, that increased federal highway grants influence states and localities to substitute federal funds for funds they otherwise would have spent on highways. Specifically, we studied the period from 1983 through 2000 and our model suggests that over the entire time period, states substituted about 50 cents of every dollar increase in federal highways grants for funds they would have spent on highways from their own resources. For the latter part of that period, 1992 through 2000, we estimated a substitution rate of about 60 cents for every dollar increase in federal aid. These results were consistent with other study findings and indicate that substitution is reducing the impact of federal investment. Federal grant programs have generally not employed the best tools and approaches to reduce this potential for substitution—maintenance of effort requirements and higher nonfederal matching requirements, discussed in the next section of this report. One reason for the high rate of substitution for the Federal-aid Highway program is that states typically spend more than the amount required to meet federal matching requirements—generally 20 percent. Thus, states can reduce their own highway spending and still obtain increased federal funds. Finally, congressionally directed spending may not be an ideal means of allocating federal grant funds. Some argue that Members of Congress are good judges of investment needs in their districts, and some congressional directives are requested by states. 
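The substitution effect estimated in our modeling can be illustrated with a back-of-the-envelope calculation. This is a minimal sketch: the 50-cent and 60-cent rates are the report's estimates, while the dollar amounts and the function name are invented for illustration:

```python
def net_investment_increase(federal_increase, substitution_rate):
    """Estimate the increase in total highway investment when federal
    grants rise, given the rate at which states substitute federal
    funds for spending from their own resources."""
    state_cutback = federal_increase * substitution_rate
    return federal_increase - state_cutback

# At a substitution rate of about 0.50, a $100 million increase in
# federal grants raises total highway investment by only about
# $50 million; at the 0.60 rate estimated for 1992 through 2000,
# by only about $40 million.
print(net_investment_increase(100.0, 0.50))
print(net_investment_increase(100.0, 0.60))
```

The same arithmetic explains why maintenance-of-effort requirements, which cap how far states can cut their own spending, can raise the net impact of a federal dollar.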
However, officials from FHWA and FTA have stated that congressional directives sometimes displace their priority transportation projects by providing funds for projects that would not have been chosen in a competitive selection process. For example, FHWA officials stated that some congressional directives listed in the Projects of National and Regional Significance program would not have qualified for funding in a merit-based selection process. Officials from three state departments of transportation also noted that inflexibilities in the use of congressionally directed funds limit the states’ ability to implement projects and efficiently use transportation funds by, for example, providing funding for projects that are not yet ready for implementation or providing insufficient funds to complete particular projects. However, an official from one state department of transportation noted that although congressional directives can create administrative challenges, they often represent funding that the state may not have otherwise received. The solvency of the federal surface transportation program is at risk because expenditures now exceed revenues for the Highway Trust Fund, and projections indicate that the balance of the Highway Trust Fund will soon be exhausted. According to the Congressional Budget Office, the Highway Account will face a shortfall in 2009, the Transit Account in 2012. The rate of expenditures has affected its fiscal sustainability. As a result of the Transportation Equity Act for the 21st Century (TEA-21), Highway Trust Fund spending rose 40 percent from 1999 to 2003 and averaged $36.3 billion in contract authority per year, and the upward trend in expenditures continued under SAFETEA-LU, which provided an average of $57.2 billion in contract authority per year. 
Congress also established a revenue-aligned budget authority (RABA) mechanism in TEA-21 to help assure that the Highway Trust Fund would be used to fund projects instead of accumulating large balances. When revenues into the Highway Trust Fund are higher than forecast, RABA ensures that additional funds are apportioned to the states. The RABA provisions were written so that the adjustments could work in either direction—going up when the trust fund had greater revenues than projected and down when revenues did not meet projected levels. However, when the possibility of a downward adjustment occurred in fiscal year 2003 as a result of lower-than-projected trust fund revenues, Congress chose to maintain spending at the fiscal year 2002 level. If the RABA approach is kept in the future, allowing downward adjustments could help with the overall sustainability of the fund. While expenditures from the trust fund have grown, revenues into the fund have not kept pace. The current 18.4 cents per gallon fuel tax has been in place since 1993, and the buying power of the fixed cents-per-gallon amount has since been eroded by inflation. The reallocation to the Highway Trust Fund of 4.3 cents of federal fuel tax previously dedicated to deficit reduction provided an influx of funds beginning in 1997. However, this influx has been insufficient to sustain current funding levels. In addition, if changes are not made in policy to compensate for both the increased use of alternative fuels that are not currently taxed and increased fuel economy, fuel tax revenues, which still account for the majority of federal transportation financing, may further erode in the future. A sound basis for reexamination can productively begin with identification of and debate on underlying principles. 
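The erosion of the fixed cents-per-gallon fuel tax can be illustrated with a simple deflation calculation. This is a hedged sketch: the 18.4-cent rate is from the report, but the cumulative inflation figure below is an assumption chosen purely for illustration, not an official statistic:

```python
def real_value(nominal_cents, cumulative_inflation):
    """Purchasing power of a fixed cents-per-gallon tax after a given
    cumulative rise in prices, expressed in base-year cents."""
    return nominal_cents / (1.0 + cumulative_inflation)

# If prices had risen 40 percent since 1993 (an illustrative
# assumption), the fixed 18.4-cent rate would buy what roughly
# 13.1 cents bought in 1993.
print(round(real_value(18.4, 0.40), 1))
```

Because the rate is fixed in nominal cents, any positive inflation steadily shrinks the real revenue per gallon, independent of the separate erosion from rising fuel economy and untaxed alternative fuels.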
Through our prior work on reexamining the base of government, our analysis of existing programs, and other prior reports, we identified a number of principles that could help drive reexamination of federal surface transportation programs and an assessment of options for restructuring the federal surface transportation program. The appropriateness of these options will depend on the underlying federal interest and the relative potential of the options to develop sustainable strategies addressing complex national transportation challenges. These principles are as follows:
- Create well-defined goals based on identified areas of federal interest.
- Establish and clearly define the federal role in achieving each goal.
- Incorporate performance and accountability for results into funding decisions.
- Employ best tools and approaches to emphasize return on investment.
- Ensure fiscal sustainability.
Determining the federal interest involves examining the relevance and relative priority of existing programs in light of 21st century challenges and identifying emerging areas of national importance. For instance, increases in passenger and freight travel have led to growing congestion, and this strain on the transportation system is expected to grow with population increases, technology changes, and the globalization of the economy. Furthermore, experts have suggested that federal transportation policy should recognize emerging national and global imperatives such as reducing the nation’s dependence on foreign fuel sources and minimizing the impact of the transportation system on global climate change. Given these and other challenges, it is important to assess the continued relevance of established federal programs and to determine whether the current areas of federal involvement are still areas of national interest.
Key to such an assessment is how narrowly or broadly the federal interest in the nation’s transportation system should be defined and whether the federal interest is greater in certain areas of national priority: Should federal spending and programs be more focused on specific national interests such as interstate freight mobility or on broad corridor development? Is there a federal interest in local issues such as urban congestion? If so, are there more distinct ways in which federal transportation spending and programs could address local issues that would enhance inherent local incentives and choices? To what extent should federal transportation policy address social concerns such as mobility for disadvantaged persons and transportation safety? If environmental stewardship is part of the federal interest, how might federal transportation policy better integrate national long-term goals related to energy independence and climate change? The proliferation of federal surface transportation programs has, over time, resulted in an amalgam of policy interests that may not accurately reflect current national concerns and priorities. Although policymakers have attempted to clarify federal transportation policy in the past and an FHWA Task Force has called for focusing federal involvement on activities that clearly promote national objectives, current policy statements continue to cover a wide spectrum of broadly defined federal interests ranging from promoting global competitiveness to improving citizens’ quality of life. While these federal programs, activities, and funding flows reflect the interests of various constituencies, they are not as a whole aligned with a strategic, coherent, and well-defined national interest. In short, the overarching federal interest has blurred. Once the federal interest has been refocused and more clearly defined, policymakers will have a foundation for allocating scarce federal resources according to the level of national interest. 
With the federal interest in surface transportation clearly defined, policymakers can clarify the goals for federal involvement. The more specific, measurable, achievable, and outcome-based the goals are, the better the foundation will be for allocating resources and optimizing results. Even though some federal transportation safety programs are linked to measurable outcome-based goals, such as achieving a specific rate of safety-belt use to reduce traffic fatalities, the formula funding for general improvements to transit facilities or highway systems is generally provided without reference to achieving specific outcomes for federal involvement. For example, the guidelines for state and local recipients’ use of the largest highway and transit formula grant funds, such as the Surface Transportation Program or Block Grant Program (Urbanized Area Formula Grants), are based on broad project eligibility criteria. These criteria involve the type of highway or type of work (e.g., transit capital investment versus operating assistance) rather than the achievement of clearly defined and measurable outcomes. Furthermore, although DOT has already established some outcome measures as part of its strategic planning process, its agencywide goals and outcomes cover a vast array of activities and are generally not directly linked to project selection or funding decisions for most highway funding and the largest transit and safety programs. Without specific and measurable outcomes for federal involvement, policymakers will have difficulty determining whether certain programs are achieving desired results. After identifying the federal interest and federal goals, policymakers can clearly define the federal government’s role in working toward each goal and define that role in relation to the roles of other levels of government and other stakeholders. This would involve an examination of state and local government roles, as well as of the federal role. 
Following such an examination, the current relationship between the federal and other levels of government could change. For example, in the federal-aid highway program, the current “partnership” between the federal government and the states is based on an explicit recognition of state sovereignty in the conduct of the program, and the states have considerable flexibility in moving funds within this program. By contrast, highway safety programs operate under a grantor-grantee relationship and for transit the grantees are largely local units of government, although the role of states has grown. An examination of these programs could change these relationships, since different federal goals may require different degrees and types of federal involvement. Where the federal interest is greatest, the federal government may play a more direct role in setting priorities and allocating resources, as well as fund a higher share of program costs. Conversely, where the federal interest is less evident, state and local governments could assume more responsibility. Functions that other entities may perform better than the federal government could be turned back to the states or other levels of government. Given the already substantial roles states and localities play in the construction and operation of transportation facilities, there may be areas that no longer call for federal involvement and funding could be reassessed. Notably, we have reported that the modal focus of federal programs can distort the investment and decision-making of other levels of government and a streamlining of federal goals and priorities could better align programs with desired outcomes. Turning functions back to the states has many other implications. For example, states would likely have to raise additional revenues to support the increased responsibilities. 
While states might be freer to allocate funds internally without modally stovepiped federal funding categories, some states could face legal funding restrictions. For example, some states prohibit the use of highway funds for transit purposes, so if a transit program were returned to the states, alternative taxes would have to be raised or the laws would have to be changed. Until a program or function is actually turned back to the states or localities, it is uncertain how these other levels of government will perform. For example, if highway safety programs were turned back to the states, it is not known whether states would continue to target the same issues that they currently choose to address under federally-funded programs or would emphasize different issues. Likewise, if a program that targets a specific area such as urban transit systems is turned back to the states, there is no assurance that the states would continue to fund this area. Turning programs back to the states would have far-reaching consequences, as discussed in appendix III. Observers have argued that certain issues, such as urban mobility, are essentially metropolitan in character and therefore should be addressed by metropolitan regions, rather than by states or cities. In addition, regional organizations can promote collaborative decision-making and advance regional coordination by creating a forum for stakeholders to address problems of mutual concern and to engage in information and resource sharing. Metropolitan Planning Organizations (MPOs) currently perform this function for surface transportation. While MPOs do receive some federal funding for operations, they are not regional governments and generally do not execute projects. Addressing these regional problems remains difficult in the absence of more powerful regional governmental bodies. The development of more powerful regional entities could create new opportunities to address regional transportation problems.
Once federal goals and the federal role in surface transportation have been clarified, significant opportunities exist to incorporate performance and accountability mechanisms into federal programs. Tracking specific outcomes that are clearly linked to program goals could provide a strong foundation for holding grant recipients responsible for achieving federal objectives and measuring overall program performance. In particular, substituting specific performance measures for the federal procedural requirements that have increased over the past 50 years could help to shift federal involvement in transportation from the current process-oriented approach to a more outcome-oriented approach. Furthermore, shifting from process-oriented structures such as mode-based grant programs to performance-based programs could improve project selection by removing barriers to funding intermodal projects and giving grantees greater flexibility to select projects based on the project’s ability to achieve results. Directly linking outcome-based goals to programs based on clearly defined federal interests would also help to clarify federal surface transportation policy and create a foundation for a transparent and results-based relationship between the federal government and other transportation stakeholders. Accountability mechanisms can be incorporated into grant structures in a variety of ways. For example, grant guidelines can establish uniform outcome measures for evaluating grantees’ progress toward specific goals, and grant disbursements can depend in part on the grantees’ performance instead of set formulas. Thus, if reducing congestion was an established federal goal, outcome measures for congestion such as travel time reliability could be incorporated into infrastructure grants to hold states and localities responsible for meeting specific performance targets. 
Similarly, if increasing freight movement was an established federal goal, performance targets for freight throughput and travel time in key corridors could be built into grant programs. Performance targets could either be determined at the national level or, where appropriate, in partnership with grantees—much as DOT has established state performance goals for highway safety and motor carrier safety assistance. Incentive grants or penalty provisions in transportation grants can also create clear links between performance and funding and help hold grantees accountable for achieving desired results. For example, the current highway and motor carrier safety incentive grants and penalty provisions can be used to increase or withhold federal grant funds based on the policy measures that states enact and the safety outcomes they achieve. Depending on the federal interest and established goals, these types of provisions could also be used in federal infrastructure grants. In addition, a competitive selection process can help hold recipients accountable for results. For example, DOT’s competitive selection process for the New Starts and Small Starts transit programs requires projects to meet a set of established criteria and mandates post-construction evaluations to assess project results. To better ensure that other discretionary grant programs are aligned with federal interests and achieve clearly defined federal transportation goals, Congress could establish specific project selection criteria for those programs and require that they use a competitive project selection process. For instance, key freight projects of national importance could be selected through such a competitive process that would identify those investments that are most crucial to national freight flows.
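One way incentive and penalty provisions could link disbursements to performance is sketched below. The sketch is entirely hypothetical: the function, the 5 percent adjustment rate, and the congestion-reduction target are invented for illustration and are not drawn from any existing program:

```python
def performance_adjusted_grant(base_amount, target, actual,
                               adjustment_rate=0.05):
    """Pay a bonus when a grantee meets its performance target and
    withhold the same share when it falls short, an illustrative
    incentive/penalty structure rather than an actual program rule."""
    if actual >= target:
        return base_amount * (1.0 + adjustment_rate)
    return base_amount * (1.0 - adjustment_rate)

# On a $200 million base grant with a 10 percent congestion-reduction
# target, a state achieving 12 percent would receive $210 million;
# one achieving only 6 percent would receive $190 million.
print(performance_adjusted_grant(200.0, 0.10, 0.12))
print(performance_adjusted_grant(200.0, 0.10, 0.06))
```

A real program would need to define how targets are set (nationally or with each grantee) and how outcomes are measured, but the structure shows how funding could move with results rather than formula shares alone.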
DOT also recently selected metropolitan areas for Urban Partnership Agreements, which are not tied to a single grant program but do provide recipients with financial resources, regulatory flexibility, and dedicated technical support in exchange for their adoption of aggressive congestion-reduction strategies. When a national competition is not feasible, Congress could require a competitive selection process at the state or local level, such as those required for the Job Access and Reverse Commute Program. This program, however, lacks the statutorily defined selection criteria used to select projects for the New Starts and Small Starts programs. The effectiveness of any overall federal program design can be increased by promoting and facilitating the use of the best tools and approaches. Within broader federal program structures that fit the principles we discuss in this report, a number of specific tools and approaches can be used to improve results and return on investment, which is increasingly necessary to meet transportation challenges as federal resources become even more constrained. We and others have identified a range of leading practices, discussed below; however, their suitability varies depending on the level of federal involvement or control that policymakers desire for a given area of policy. Rigorous economic analysis is recognized by experts as a useful tool for evaluating and comparing potential transportation projects. Benefit-cost analysis gives transportation decision makers a way to identify projects with the greatest net benefits and compare alternatives for individual projects. By translating benefits and costs into quantitative comparisons to the maximum extent feasible, these analyses provide a concrete way to link transportation investments to program goals. However, in order for benefit-cost analysis to be effective, it must be a key factor in project selection decisions and not seen simply as a requirement to be fulfilled.
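A minimal sketch of the kind of benefit-cost comparison described above follows; the two projects, the dollar figures (in millions), and the 7 percent discount rate are purely illustrative.

```python
# Minimal sketch of a benefit-cost comparison across alternatives for
# one corridor. Projects, figures, and the discount rate are
# illustrative only.

def npv(flows, rate):
    """Net present value of a stream of annual flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def benefit_cost_ratio(benefits, costs, rate=0.07):
    """Ratio of discounted benefits to discounted costs; values above
    1.0 indicate benefits exceed costs at the chosen rate."""
    return npv(benefits, rate) / npv(costs, rate)

# Two hypothetical alternatives: a capital-heavy highway widening and
# a cheaper transit expansion with somewhat lower annual benefits.
widen_highway = {"benefits": [0, 40, 40, 40, 40], "costs": [120, 5, 5, 5, 5]}
add_transit = {"benefits": [0, 30, 35, 35, 35], "costs": [80, 8, 8, 8, 8]}

for name, p in [("widen_highway", widen_highway), ("add_transit", add_transit)]:
    print(name, round(benefit_cost_ratio(p["benefits"], p["costs"]), 2))
```

On these illustrative numbers the transit alternative yields the higher ratio—exactly the kind of cross-modal comparison that stovepiped, mode-based funding streams prevent decision makers from acting on.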
A complementary type of tool is outcome evaluation, which is already required for New Starts transit projects. Such evaluations would be useful in identifying leading practices and understanding project performance, especially since the available information indicates that the costs of highway and transit projects are often higher than originally anticipated. It should be recognized, however, that benefit-cost comparisons and other analyses do not necessarily identify the federal interest—many local benefits from transportation investments are not net benefits in national terms. For example, economic development may provide financial benefits locally, but nationally the result may be largely a redistribution of resources rather than a net increase. Accordingly, in emphasizing return on federal investment, the relationship of investments to national goals must be considered along with locally based calculations of benefit and cost. Because current programs are generally based on specific modes, it is difficult to plan and fund intermodal links and projects that involve more than one mode, despite a consensus among experts and DOT itself that an intermodal approach is needed. A number of strategies could be used to move toward an intermodal approach. For example, policy could be changed to allow a single stream of funding to pay for all aspects of a corridor-based project—even if the improvements include such diverse measures as highway expansion, transit expansion, and congestion management. DOT recently created competitive Urban Partnership Agreements, which award grants for initiatives that address congestion through congestion pricing, transit, telecommuting, and ITS elements. Finally, decision makers cannot make full use of cross-modal project comparisons, such as those developed through benefit-cost analysis, if funding streams remain stovepiped.
Better management of existing capacity is another strategy that has proved successful, primarily on highways; it is useful because of the growing cost and, in some cases, the impracticality of building additional capacity. We have reported that implementing ITS technology can improve system performance. Congestion pricing of highways, where toll rates change according to demand, is another such leading practice. From an economic perspective, congested highways are generally “underpriced.” Although the social cost of using a roadway is much higher at peak usage times, this higher cost is usually not reflected in what drivers pay. When toll rates increase with demand, some drivers respond to higher peak-period prices by changing the mode or time of their travel for trips that are flexible. This tool can increase the speed of traffic and has the potential to increase capacity as well—an evaluation of the variably priced lanes of State Route 91 in Orange County, California, showed that although the priced lanes represent only 33 percent of the capacity of State Route 91, they carry an average of 40 percent of the traffic during peak travel times. Although the Value Pricing Pilot Program encourages the use of this tool, tolling is prohibited on most Interstate highways by statute. Broader support in policy could increase the adoption of congestion pricing, improving the efficiency and performance of the system. Public-private partnerships are another tool that may benefit public sponsors by bringing private-sector financing and efficiencies to transportation investments, among other potential advantages. Specifically, private investors can help public agencies improve the performance of existing facilities, and in some cases build new facilities without directly investing public funds. At the same time, such partnerships also present potential costs and trade-offs, but the public sector can take steps to protect the public interest.
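The demand-responsive tolling described above, in which rates rise as volumes approach capacity (as on the State Route 91 priced lanes), can be sketched as a simple feedback rule; the function, step size, target volume, and traffic counts below are hypothetical, not an actual toll authority's algorithm.

```python
# Sketch of a demand-responsive toll rule in the spirit of variably
# priced lanes. Step size, bounds, targets, and volumes are
# hypothetical.

def adjust_toll(current_toll, volume, target_volume,
                step=0.25, floor=1.00, ceiling=10.00):
    """Raise the toll when observed volume exceeds the free-flow
    target; lower it when the lane is clearly underused."""
    if volume > target_volume:
        new_toll = current_toll + step
    elif volume < 0.9 * target_volume:
        new_toll = current_toll - step
    else:
        new_toll = current_toll
    return min(max(new_toll, floor), ceiling)

# As peak-period volumes exceed the target, the toll ratchets upward,
# then eases as some drivers shift the mode or time of their travel.
toll = 4.00
for observed_volume in [1700, 1750, 1650, 1400, 1200]:
    toll = adjust_toll(toll, observed_volume, target_volume=1500)
print(toll)
```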
For example, when evaluating the public interest of public-private partnerships, the public sector can employ qualitative public interest tests and criteria, as well as quantitative tests such as Value for Money and Public Sector Comparators, which are used to evaluate if entering into a project as a public-private partnership is the best procurement option available. Such formal assessments of public interest are used routinely in other countries, such as Australia and the United Kingdom, but use of systematic, formal processes and approaches to the identification and assessment of public interest issues has been more limited in the United States. Since public interest criteria and assessment tools generally mandate that certain aspects of the public interest are considered in public-private partnerships, if these criteria and tools are not used, then aspects of public interest might be overlooked. Although these techniques have limitations, they are able to inform public decision making—for instance, the Harris County, Texas, toll authority conducted an analysis similar to a public-sector comparator, and the results helped inform the authority’s decision not to pursue a public-private approach. Tools can also be used in designing grants to help increase the impact of federal funds. One such tool is maintenance of effort requirements, under which state or local grantees must maintain their own level of funding in order to receive federal funds. Maintenance of effort requirements could discourage states from substituting federal support for funds they themselves would otherwise have spent. However, our past work has shown that maintenance of effort requirements should be indexed to inflation and program growth in order to be effective. Matching requirements are another grant design tool that can be adjusted to increase the impact of federal programs.
The allowable federal share covers a substantial portion of project costs—often 80 percent—in many transportation programs, especially for highways. Increasing the state share can help induce recipients to commit additional resources. For example, NHTSA’s Occupant Protection grant program provides 75 percent federal funding the first year, but reduces the federal share to 25 percent in the fifth and sixth years to shift the primary financing responsibility to the states. Data collection is a key tool to give policymakers information on how the transportation system is functioning. Data on the system and its individual facilities and modes are useful in their own right for decision making, but are also essential to enable other effective approaches, such as linking grant disbursements to grantees’ performance. As discussed previously, DOT does not have complete data in some crucial areas; the effective use of data in safety programs, despite problems, demonstrates the potential of more comprehensive data gathering to improve evaluations and induce improved performance in the surface transportation system. A restructured federal program could increase the application of these and other leading tools and approaches by providing incentives for or requiring their use in certain circumstances. For example, in competitive discretionary grant programs, the application of specific tools and approaches could be considered in evaluating proposals, just as the use of incentives or penalties could be considered in noncompetitive grant programs. The Motor Carrier Safety Assistance Program already employs this approach—one factor considered in awarding incentive funds is whether states provide commercial motor vehicle safety data for the national database. The use of certain tools and approaches could also simply be required in order to receive federal funds under relevant transportation grant programs. 
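The two grant design tools discussed above, an inflation-indexed maintenance-of-effort floor and a declining federal match share, can be illustrated together; all figures are hypothetical, and the linear shape of the match schedule between the first and fifth years is an assumption (the report cites only the 75 percent and 25 percent shares).

```python
# Illustration of two grant design tools: an inflation-indexed
# maintenance-of-effort (MOE) floor and a declining federal match
# share. All figures are hypothetical; the intermediate-year match
# values are assumptions.

def moe_threshold(base_spending, inflation_rates):
    """Index the grantee's baseline spending by cumulative inflation,
    so flat nominal spending cannot satisfy the requirement."""
    for rate in inflation_rates:
        base_spending *= (1 + rate)
    return base_spending

def meets_moe(state_spending, base_spending, inflation_rates):
    return state_spending >= moe_threshold(base_spending, inflation_rates)

def federal_share(year, start=0.75, end=0.25, phase_out_year=5):
    """Federal share stepping down linearly from `start` in year 1 to
    `end` by the phase-out year."""
    if year >= phase_out_year:
        return end
    step = (start - end) / (phase_out_year - 1)
    return start - step * (year - 1)

# A state holding spending flat at $500 million fails an MOE test
# indexed to three years of 3 percent inflation.
print(meets_moe(500.0, 500.0, [0.03, 0.03, 0.03]))
print(federal_share(1), federal_share(5))
```

Indexing the MOE floor keeps states from satisfying the requirement through nominal spending that falls in real terms, while the declining match shifts primary financing responsibility to the grantee over time.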
However, if federal programs were restructured to be based on performance and outcomes, states would have more incentive to implement such tools and approaches on their own. Under such a scenario, an appropriate federal role could be to facilitate their identification and dissemination. Transportation financing, and the Highway Trust Fund in particular, faces an imbalance of revenues and expenditures and other threats to its long-term sustainability. In considering sustainable sources of funds, the user-pay principle is often cited as an appropriate pricing mechanism for transportation infrastructure. While fuel taxes do reflect usage, they are not an exact user-pay mechanism, and they do not convey to drivers the full costs of their use of the road. These taxes are not tied to the time when drivers actually use the road or which road they use. Taxes and fees should also be equitably assigned and reflect the different costs imposed by different users. The trucking industry pays taxes and fees for the highway infrastructure it uses, but its payments generally do not cover the costs it imposes on highways, thereby giving the industry a competitive price advantage over railroads, which use infrastructure that they own and operate. An alternative to fuel taxes would be to introduce mileage charges on vehicles—Oregon is pilot testing the technology to implement this approach. Finally, the use of congestion pricing to reflect the much greater cost of traveling congested highways at peak times will help optimize investment by providing market cues to policymakers. Concerns about funding adequacy have led state and local governments to search for alternative revenue approaches, including alternative financing vehicles at the federal level, such as grant anticipation revenue vehicles, grant anticipation notes, state infrastructure banks, and federal loans.
These vehicles can accelerate the construction of projects, leverage federal assistance, and provide greater flexibility and more funding techniques. However, they are also different forms of debt financing. This debt ultimately must be repaid with interest, either by highway users—through tolls, fuel taxes, licensing, or vehicle fees—or by the general population through increases in general fund taxes or reductions in other government services. Highway public-private partnerships show promise as an alternative, where appropriate, to help meet growing and costly transportation demands. Highway public-private partnerships have resulted in advantages, from the perspective of state and local governments, such as the construction of new infrastructure without using public funding, and obtaining funds by extracting value from existing facilities for reinvestment in public transportation and other public programs. However, there is no “free” money in public-private partnerships. Highway financing through public-private partnerships also is largely a new source of borrowed funds that must be repaid to private investors by road users, over what could be a period of several generations. Finally, the sustainability of transportation financing should also be seen in the context of broader fiscal challenges. In a time of growing structural deficits, constrained state and local budgets, and looming Social Security and Medicare spending commitments, the resources available for discretionary programs will be more limited. The federal role in transportation funding must be reexamined to ensure that it is sustainable in this new fiscal reality. The long-term pressures on the Highway Trust Fund and the governmentwide problem of fiscal imbalance highlight the need for a more efficient, redesigned program based on the principles we have identified.
The sustainability of surface transportation programs depends not only on the level of federal funding, but also on the allocation of funds to projects that provide the best return on investment and address national transportation priorities. Using the tools and approaches for improving transportation programs that we have discussed could also help surface transportation programs become more fiscally sustainable and more directly address national transportation priorities. The National Surface Transportation Policy and Revenue Study Commission (National Commission) issued its final report in January 2008. The report recommended significantly increasing the level of investment by all levels of government in surface transportation, consolidating and reorganizing the current programs, speeding project delivery, and making the current program more performance-based and mode-neutral, among other things. However, several commissioners offered a dissenting view on some of the Commission’s recommendations, notably the level of investment, size of the federal role, and the revenue sources recommended. The divergent views of the commission members indicate that while there is a degree of consensus on the need to reexamine federal surface transportation programs, there is not yet a consensus on the form a restructured surface transportation program should take. The principles that we discussed for examining restructuring options are a sound basis on which this discussion can take place. These principles do not prescribe a specific approach to restructuring, but they do provide key attributes that will help ensure that a restructured surface transportation program addresses current challenges. The current federal approach to addressing the nation’s surface transportation problems is not working well. 
Despite large increases in expenditures in real terms for transportation, the investment has not resulted in a commensurate improvement in the performance of the nation’s surface transportation system, as congestion continues to grow, and looming problems from the anticipated growth in travel demand are not being adequately addressed. The current collection of flexible but disparate grant programs that characterizes the existing approach is the result of a patchwork evolution of programs over time, not a result of a specific rationale or plan. This argues for a fundamental reexamination of the federal approach to surface transportation problems. In cases where there is a significant national interest, maintaining strong federal financial support and a more direct federal involvement in the program may be needed. In other cases, functions may best be carried out by other levels of government or not at all. There may also be instances where federal financial support is desirable but a more results-oriented approach is appropriate. In addition, it is important to recognize that depending on the transportation issue and the desired goals, different options and approaches may best fit different problems. Reforming the current approach to transportation problems will take time, but a vision and strategy are needed to begin the process of transforming to a set of policies and programs to effectively address the nation’s transportation needs and priorities. The current system evolved over many years and involves different modes, infrastructure and safety issues, and extends widely into the operations of state and local governments. Given the proliferation of programs and goals previously discussed, refocusing federal programs is needed to address the shortfalls of the current approach. Focusing federal programs around a clear federal interest is key.
Well-defined goals based on identified areas of federal interest would establish what federal participation in surface transportation is designed to accomplish. A clearly defined federal role in achieving these goals would give policymakers the ability to direct federal resources proportionately to the level of national interest. Once this is accomplished, a basis exists to reexamine the current patchwork of programs, test their continued relevance and relative priority, potentially devolve programs and policies that are outdated or ineffective, and modernize those programs and policies that remain relevant. Once those areas of federal interest are known, tying federal funds to performance and having mechanisms to test whether goals are met would help create incentives for state and local governments to improve their performance and the performance of the transportation system. Both incentive programs and sanctions are possible models for better tying performance to outcomes. Having more federal programs operate on a competitive basis and selecting projects based on their potential benefits could also help tie federal funds to performance. There also is a need to improve the use of analytical tools in the selection and evaluation of the performance of projects. Better use of tools such as benefit-cost analysis and using return on investment as a criterion for the selection of individual projects can help identify the best projects. Specifically, the use of a return on investment framework will help to emphasize that federal financial commitments to transportation infrastructure projects are, in fact, long-term capital investments designed to achieve tangible results in a transparent fashion. Finally, a fundamental problem exists in the fiscal sustainability of surface transportation programs as a result of the impending shortfall in the Highway Trust Fund.
The trust fund is the primary source of federal support to state and local governments across highways, transit, and surface transportation safety programs. This fiscal crisis is fundamentally based on the balance of revenues and expenditures in the fund, and thus either reduced expenditures, increased revenues, or a combination of the two is now needed to bring the fund back into balance. Finally, given the scope of needed transformation, the shifts in policies and programs may need to be done incrementally or on a pilot basis to gain practical lessons for a coherent, sustainable, and effective national program and financing structure to best serve the nation for the 21st century. To improve the effectiveness of the federal investment in surface transportation, meet the nation’s transportation needs, and ensure a sustainable commitment to transportation infrastructure, Congress should consider reexamining and refocusing surface transportation programs to be responsive to these principles so that they: have well-defined goals with direct links to an identified federal interest; institute processes to make grantees more accountable by establishing more performance-based links between funding and program outcomes; institute tools and approaches that emphasize the return on the federal investment; and address the current imbalance between federal surface transportation revenues and spending. We provided copies of a draft of this report to DOT for its review and comment. In an email on February 22, 2008, DOT noted that surface transportation programs could benefit from restructured approaches that apply data-driven, performance-oriented criteria to enable the nation to better focus its resources on key surface transportation issues. DOT officials generally agreed with the information in this report, and they provided technical clarifications, which we incorporated as appropriate.
We will send copies of this report to interested congressional committees and the Secretary of Transportation. Copies will also be available to others upon request and at no cost on GAO’s website at www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or heckerj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. We were asked to (1) provide an historical overview of the federal role in surface transportation and the goals and structures of federal surface transportation programs funded by the Highway Trust Fund, (2) summarize conclusions from our prior work on the structure and performance of these and other federal programs, and (3) identify principles to help assess options for focusing the future federal role and the structure of federal surface transportation programs. We focused our work on programs funded by the Highway Trust Fund (HTF) because it is the primary vehicle for federal financing of surface transportation, receiving nearly all federal fuel tax revenue; it is also a focus of most proposals to reform the current federal role. We examined the Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), Federal Transit Administration (FTA), and National Highway Traffic Safety Administration (NHTSA) as part of this study; we did not look at two other DOT agencies that receive HTF funds, the Research and Innovative Technology Administration (RITA) and the Federal Railroad Administration (FRA). RITA was excluded because it focuses on federal research, in contrast to our focus on federal-state programs; FRA was excluded because the portion of HTF funds that it receives is so small that it cannot be compared to the other operating agencies.
To provide an historical overview of the federal role in surface transportation and the goals and structures of federal surface transportation programs, we drew information from statutes, especially transportation authorization laws; regulations; budget documents; agency reports; and literature on transportation policy by outside experts. We interviewed officials in DOT’s modal administrations, including FHWA, FMCSA, FTA, and NHTSA, in order to help clarify agency goals, roles, and structures. We also interviewed representatives of stakeholder groups such as the American Association of State Highway and Transportation Officials (AASHTO) and the American Public Transit Association (APTA). To describe conclusions that we and others have drawn about the current structure and performance of these federal programs, we reviewed relevant GAO reports on specific transportation programs, as well as reports that looked at broader issues of performance measurement, oversight, grant design, and other related issues. We also reviewed reports, policy statements, and other materials from stakeholder groups and other organizations. Additionally, we reviewed materials from hearings held by the National Surface Transportation Policy and Revenue Study Commission. Finally, we sought the views of transportation experts, including the 22 who participated in a forum convened by the Comptroller General in May 2007 that included public officials, private-sector executives, researchers, and others. To review policy options for addressing the federal role, we identified options from previous proposals, both those originating in Congress and presidential administrations, as well as those presented by stakeholder groups such as AASHTO.
We also reviewed options discussed in previous GAO reports, as well as testimony and other materials generated by the National Surface Transportation Policy and Revenue Study Commission, which the Congress also tasked to examine the federal approach to surface transportation programs. In addition, to complement our appendix III discussion of the implications of turning over responsibility for surface transportation to the states, we analyzed the potential fiscal impact of turning over most elements of the federal transportation program to the states. We obtained DOT data on state grant disbursements and calculated total federal grant receipts for each state and the District of Columbia. We limited our analysis to grant programs funded by the HTF, because the federal fuel taxes that would be eliminated or sharply reduced under this scenario are deposited almost exclusively in the HTF. We also omitted discretionary grants because they are a small portion of federal transportation grants and often vary significantly from year to year in a given state. Separately, we obtained state fuel consumption data from DOT. In order to calculate the extent to which individual states would have to raise their fuel taxes to maintain the same level of spending if federal grants were eliminated, we divided the total grant receipts (as described above) for each state by the number of gallons of highway fuel used in that state in the prior year. This calculation yielded the per-gallon increase in state taxes that would be needed to maintain spending, assuming it would be implemented evenly across all types of fuel. Because diesel and gasoline are taxed at different federal rates, and represent different shares of total usage in each state, we used a weighted average to calculate the current effective per-gallon federal fuel tax rate in each state. We then expressed the per-gallon tax rate results in terms of change from the current federal tax rate. 
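The per-gallon calculation described above can be sketched as follows; the grant total and fuel volumes are hypothetical (the report used actual DOT grant and fuel-consumption data), while the 18.4 and 24.4 cents per gallon figures are the statutory federal gasoline and diesel rates.

```python
# Sketch of the appendix III calculation: the per-gallon state fuel
# tax increase needed to replace eliminated federal grants, and the
# weighted-average effective federal rate. State figures below are
# hypothetical.

FEDERAL_GAS_TAX = 0.184     # dollars per gallon of gasoline
FEDERAL_DIESEL_TAX = 0.244  # dollars per gallon of diesel

def required_increase(total_grants, gallons_gasoline, gallons_diesel):
    """Per-gallon state tax increase needed to maintain spending,
    applied evenly across all fuel types."""
    return total_grants / (gallons_gasoline + gallons_diesel)

def effective_federal_rate(gallons_gasoline, gallons_diesel):
    """Weighted-average federal rate, reflecting each fuel's share of
    total consumption in the state."""
    total = gallons_gasoline + gallons_diesel
    return (FEDERAL_GAS_TAX * gallons_gasoline
            + FEDERAL_DIESEL_TAX * gallons_diesel) / total

# Hypothetical state: $600 million in HTF grants, 2.5 billion gallons
# of gasoline, and 0.5 billion gallons of diesel consumed.
increase = required_increase(600e6, 2.5e9, 0.5e9)
effective = effective_federal_rate(2.5e9, 0.5e9)
print(round(increase, 3), round(effective, 3), round(increase - effective, 3))
```

Expressing the result as a change from the current effective federal rate, as the last line does, shows whether a state would come out ahead or behind if it replaced federal grants with its own fuel tax.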
Where we had not previously assessed the reliability of the source data, we conducted a limited data reliability analysis and found the data suitable for the purpose of this analysis. We conducted this performance audit between April 2007 and February 2008 in accordance with Generally Accepted Government Auditing Standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence that provides a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Federal assistance for highway infrastructure is distributed through several grant programs, known collectively as the Federal-Aid Highway Program. Both Congress and DOT have established multiple broad policy goals for the Federal-Aid Highway Program, which provides financial and technical assistance to states to construct, preserve, and improve eligible federal-aid highways. The program’s current goals include safety, efficiency, mobility, congestion relief, interstate and international commerce, national security, economic growth, environmental stewardship, and sustaining the nation’s quality of life. The Federal-Aid Highway Program currently consists of seven core formula grant programs and several smaller formula and discretionary grant programs. The majority of Highway Trust Fund revenues are distributed through the core formula grant programs to the states for a variety of purposes, including road construction and improvements, Interstate highway and bridge repair, air pollution mitigation, highway safety, and equity considerations. Broad flexibility provisions allow states to transfer funds between core highway programs and to the Federal Transit Administration (FTA) for eligible transit projects. 
Highway Trust Fund revenues are also distributed through the smaller formula and discretionary grant programs, which cover a wide range of projects, including border infrastructure, recreational trails, and safe routes to schools. Congress has also designated funds for specific projects. For example, according to the Transportation Research Board, SAFETEA-LU—the most recent reauthorization legislation—contained over 5,000 dedicated spending provisions. The Federal-Aid Highway Program is administered through a federal-state partnership. The federal government, through FHWA, provides financial assistance, policy direction, technical expertise, and some oversight. FHWA headquarters provides leadership, oversight, and policy direction for the agency, FHWA state division offices deliver the bulk of the program’s technical expertise and oversight functions, and five FHWA regional service resource centers provide guidance, training, and additional technical expertise to the division offices. In turn, state and local governments execute the programs by matching and distributing federal funds; planning, selecting, and supervising projects; and complying with federal requirements. Currently, based on stewardship agreements with each state, FHWA exercises full oversight on a limited number of federal-aid projects. States are required to oversee all federal-aid highway projects that are not on the National Highway System, and states oversee design and construction phases of other projects based on an agreement between FHWA and the state. FHWA also reviews state management and planning processes. Many state and local government processes are driven by federal requirements, including not only highway-specific requirements for transportation planning and maintenance, but also environmental review requirements and labor standards that are the result of separate federal legislation designed to address social and environmental goals.
Since its reauthorization under the Federal-Aid Highway Act of 1956, the Federal-Aid Highway Program has grown in size, scope, and complexity as federal goals for the program have expanded. In 1956, the primary focus of the Federal-Aid Highway Program was to help states finance and construct the Interstate Highway System to meet the nation’s needs for efficient travel, economic development, and national defense. The Federal-Aid Highway Program made funds available to states for road construction and improvements through four formula programs—one program for each of four eligible road categories—with a particular focus on the Interstate system. Yet the Federal-Aid Highway Program has also served as a mechanism to achieve other societal goals. For example, the 1956 Act requires that states adhere to federal wage and labor standards for any state construction project using federal-aid funds. In successive reauthorizations of the program, Congress has increased program requirements to achieve other societal goals such as civil rights, environmental protection, urban planning, and economic development. Besides increasing compliance requirements, Congress has authorized new grant programs to achieve expanded program objectives. For example, Congress authorized new core grant programs to address Interstate highway maintenance, environmental goals, and safety. In response to controversy over the distribution of highway funds between states that pay more in federal taxes and fees than they receive in federal aid (donor states) and states that receive more in federal aid than they contribute (donee states), Congress established and strengthened equity programs that guarantee states a minimum relative return on their payments into the Highway Account of the HTF. Additionally, Congress has further expanded the program’s scope by authorizing highway funds for additional purposes and uses, such as highway beautification, historic preservation, and bicycle trails.
The federal-state partnership has evolved as programs have changed to give states and localities greater funding flexibility. For example, in 1991, when Interstate construction was nearly complete, Congress restructured the Federal-Aid Highway Program to promote a more efficient and flexible distribution of funds. Specifically, under the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA), Congress substantially increased flexibility by consolidating road-category grant programs, creating a surface transportation block grant, and establishing broad flexible fund transfer provisions between highway programs and transit—a structure that remains today. At the same time, Congress altered the established federal-state partnership by increasing the authority of metropolitan planning organizations—local governmental planning bodies—in federally mandated planning processes. The federal-state partnership has further evolved as Congress has delegated federal oversight responsibilities to state and local governments, but has assumed a greater role in project selection. When Interstate construction began, the federal government provided direct oversight during the construction and maintenance phases of projects and ensured that the states complied with federal requirements. By 1973, states could self-certify compliance with most federal grant requirements, and during the 1990s, Congress further expanded this authority to allow states and FHWA to cooperatively determine the appropriate level of oversight for federally funded projects, including some Interstate projects. While reducing the federal role in oversight, Congress has increased its role in project selection—traditionally a state and local responsibility—through congressional directives. For example, according to the Transportation Research Board, there were over 5,000 directives in the latest reauthorization from 2005, up from 1,850 in 1998 and 11 in 1982.
As the Federal-Aid Highway Program has grown in size and complexity, so too has the federal administrative structure, although some shifting or consolidation of responsibilities has occurred. Before FHWA was created in 1967, its predecessor, the Bureau of Public Roads, established a decentralized administrative structure and a field office in each state, reflecting the close partnership between the federal government and the states. Moreover, as the number of Federal-Aid Highway Program requirements and the scope of the program increased, the agency, which initially had an engineering focus, hired a wide range of specialists, including economists, landscape architects, planners, historians, ecologists, safety experts, civil rights experts, and others. When DOT was formed in 1967, new motor carrier and traffic and vehicle safety functions were assigned to FHWA. These functions have since shifted to NHTSA and FMCSA, although FHWA continues to collaborate on these issues and retains responsibility for highway infrastructure-related safety projects and programs. In 1998, FHWA consolidated its organization by eliminating its nine regional offices and establishing regional service resource centers, as well as devolving responsibility for state projects and programs entirely to the FHWA division offices in each state. For fiscal year 2009, FHWA requested funding for 2,861 full-time-equivalent staff divided among headquarters, 5 regional service resource centers, and 55 division offices. Both Congress and DOT have established multiple broad policy goals for FTA, which provides financial and technical assistance to local and state public agencies to build, maintain, and operate mass transportation systems.
FTA’s current statutory goals include (1) promoting the development of efficient and coordinated urban transportation systems that maximize mobility, support economic development, and reduce environmental and energy consumption impacts, and (2) providing mobility for vulnerable populations in both urban and rural areas. DOT’s six strategic goals also apply to FTA: safety, congestion mitigation, global connectivity, environmental stewardship, security and preparedness, and organizational excellence. Currently, FTA divides its major capital and operating assistance programs into two categories: formula and bus grants, which are funded entirely from HTF’s Mass Transit Account, and capital investment grants, which are financed using general revenue. The formula and bus grants provide capital and operating assistance to transit agencies and states through a combination of seven relatively large and five smaller formula and discretionary grants. Under these grants, the federal government generally provides 80 percent of the funding and the locality provides 20 percent, with certain exceptions. The capital investment grants provide discretionary capital assistance for the construction of new fixed-guideway and corridor systems and extensions of existing systems. Funds for new fixed-guideway systems are distributed through the New Starts and Small Starts grant programs and are awarded to individual projects through a competitive selection process. Although the statutory federal match for the New Starts and Small Starts programs is 80 percent, agency officials stated the actual federal match is closer to 50 percent due to high levels of state and local investment and the competitive selection process, which favors projects that require a lower federal match. FTA also provides financial support for research and planning activities.
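The cost-sharing arrangement described above reduces to simple arithmetic. The following is a minimal illustrative sketch, not part of any agency system; the function name and dollar figures are hypothetical, and actual matches vary by program and statutory exception:

```python
def split_project_cost(total_cost, federal_share=0.80):
    """Divide a transit project's cost between federal and local sources.

    The 0.80 default reflects the general 80 percent statutory match;
    New Starts awards in practice average closer to 50 percent.
    (Function name and figures are hypothetical illustrations.)
    """
    federal = total_cost * federal_share
    local = total_cost - federal
    return federal, local

# A hypothetical $10 million formula-grant project at the 80/20 split:
fed, loc = split_project_cost(10_000_000)  # 8,000,000 federal, 2,000,000 local

# The same project at the roughly 50 percent match typical of New Starts awards:
fed_ns, loc_ns = split_project_cost(10_000_000, 0.50)
```

The same arithmetic applies to the highway formula grants discussed earlier, with the federal share parameter adjusted to each program's statutory match.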
Funds for research are allocated on a discretionary basis out of the General Fund, and planning funds are taken from the Mass Transit Account of the Highway Trust Fund and distributed to states by formula. In addition to the funding they obtain through these programs, states may transfer a portion of certain highway program funds to FTA for eligible transit expenses. According to the most recent DOT data, in 2004, 28.1 percent of the funding for transit was system-generated through fares or other charges, and the remaining funds came from local (34.6 percent), state (19.7 percent), and federal (17.6 percent) sources. Approximately 75 percent of federal transit assistance is directed to capital investments, and the remainder is directed to other eligible expenses such as operating expenses. In contrast to federal highway infrastructure programs, which are administered through a federal-state partnership, federal transit programs are generally administered through a federal-local partnership, although rural programs are administered at the state level. The federal government, through FTA headquarters and 10 FTA regional offices, provides financial assistance, establishes requirements, performs oversight, and conducts research. Grant recipients such as local transit agencies are responsible for matching federal funds and for planning, selecting, and executing projects while complying with federal requirements. The degree of federal oversight varies across programs and among grant recipients. Currently, full federal oversight is limited to major capital projects that cost over $100 million, and local and state grant recipients are allowed to self-certify their compliance with certain federal laws and regulations. For example, FTA conducts periodic reviews of program management processes for recipients of Block Grants Program (Urbanized Area Formula Grants) funds and provides direct project management oversight for recipients of New Starts funding. 
In addition, FTA conducts discretionary reviews of grantees’ compliance with requirements in other areas such as financial management or civil rights and uses a rating system to determine the level of oversight needed for each grantee. FTA employees work with external contractors to conduct project management and program management process reviews. For fiscal year 2009, FTA requested funding for 526 full-time-equivalent staff, divided among its 10 regional offices and headquarters. From the modern transit program’s inception as part of the Urban Mass Transportation Act of 1964 (UMTA), Congress justified federal funding for mass transportation capital improvements as a means to address pressing urban problems such as urban decay, traffic congestion, and poor development planning. Federal capital assistance was distributed to local governments on a discretionary basis to help urban areas improve and expand urban mass transportation systems. Congress also established federal transit programs to achieve other societal goals. For example, UMTA required grant recipients to provide labor protections for transit employees and relocation assistance for individuals displaced by transit projects. Later federal legislation increased grant requirements to achieve other societal goals such as civil rights, environmental protection, and economic development. In addition to increasing compliance requirements, Congress has authorized new grant programs and broadened program eligibility requirements to promote expanding objectives. For example, federal transit assistance expanded during the 1970s to include grant programs designed to meet social and transportation-related goals such as improving mobility in rural areas and making public transportation more accessible for the elderly and the disabled.
More recently, Congress has further broadened the scope of programs to include making transportation to jobs more accessible for welfare recipients and low-income individuals and providing transit service within public parks and lands. Although federal transit funding was initially provided on a discretionary basis from the General Fund of the Treasury, many of the newer programs make funds available through formulas, and highway user fees have replaced general revenues as the major source of transit assistance since the creation of the Mass Transit Account of the Highway Trust Fund in 1983. In addition, Congress has broadened the scope of federal transit assistance to include operating expenses and capital maintenance as well as capital expenses. For example, concerns about growing operating deficits among transit agencies led Congress to authorize the use of federal funds for transit operating expenses in 1974. Although federal support for operating expenses in urbanized areas has since declined, operating assistance is still available for areas with a population of less than 200,000. The federal-local relationship in transit has evolved as Congress has expanded federal involvement in transit and increased state and local government authority and flexibility in using federal funds. For example, in 1978, Congress expanded federal transit assistance to rural areas and made state governments responsible for receiving and distributing these funds. According to agency officials, states previously played a limited role in transit projects because the federal government worked directly with urban areas and transit agencies. In 1991, Congress increased local authority by expanding the role of metropolitan planning organizations in project selection and transportation planning. At the same time, Congress substantially increased state and local authority to transfer funds between highway and transit programs. 
The combination of additional transfer authority and the gradual shift toward apportioning funds through formulas rather than individual project awards has increased flexibility for both state and local transit grant recipients. In addition, state and local government oversight responsibilities have increased for federal transit grants, much as they have for federal highway infrastructure grants, with self-certification procedures for compliance with federal laws and regulations, and additional federal compliance requirements such as those for environmental review. Federal highway safety and motor carrier safety assistance programs are separately administered by NHTSA and FMCSA. The primary statutory policy goals of these programs are directed toward reducing accidents, and the bulk of NHTSA’s and FMCSA’s financial support and research, education, rulemaking, and enforcement activities falls under DOT’s strategic goal of improving safety. Although FHWA and FTA exercise rulemaking authority in the administration of their programs, rulemaking and enforcement are primary tools that NHTSA and FMCSA use to reduce accidents and their associated damages. Highway safety and motor carrier safety grant programs are similarly organized. Both use a basic formula grant to provide funding to states for safety programs, enforcement activities, and related expenditures, coupled with several targeted discretionary grants. Currently, almost 40 percent of authorized federal highway safety assistance is distributed by formula to states through the State and Community Highway Safety Grant Program (Section 402), which supports a wide range of highway safety initiatives at the state and local level. This basic program is augmented by several smaller discretionary grant programs that mostly target funds to improve safety through the use of measures such as seat belts and child safety restraints, among others.
Most of these discretionary grants provide states with financial incentives for meeting specific performance or safety activity criteria. For example, to be eligible for Alcohol-Impaired Driving Countermeasures Incentive grants, most states must either have a low alcohol fatality rate or meet programmatic criteria for enforcement, outreach, and other related activities. In addition to discretionary grants, Congress has authorized highway safety provisions that penalize states that do not comply with certain federal provisions by either transferring or withholding their highway infrastructure funds. These penalty provisions can provide a substantial amount of additional funding for state safety activities. For example, in 2007, penalty provisions transferred over $217 million of federal highway infrastructure assistance to highway safety programs in the 19 states and Puerto Rico that were penalized for failure to meet federal criteria for either open container requirements or minimum penalties for repeat offenders for driving while intoxicated or under the influence. The majority of federal motor carrier safety funds are distributed by formula to states through the Motor Carrier Safety Assistance Program (MCSAP), which provides financial assistance to states for the enforcement of federal motor carrier safety and hazardous materials regulations. In addition, several smaller discretionary programs are targeted to achieve specific goals such as data system improvements and border enforcement, among others. Some of these grants require states to maintain a level of funding for eligible motor carrier safety activities to reduce the potential for federal funds to replace state financial support. Finally, FMCSA sets aside MCSAP funds to support high-priority areas such as audits of new motor carrier operations.
Unlike the highway safety grants, most of these discretionary programs do not have statutorily defined performance or outcome-related eligibility criteria, and funds are allocated at the agency’s discretion. States that do not comply with federal commercial driver licensing requirements may have up to 5 percent of their annual highway construction funds withheld in the first fiscal year and 10 percent in the second fiscal year of violation. However, these withheld funds, unlike the funds withheld or transferred under some highway safety penalty provisions, are not available to the penalized states for motor carrier safety activities. Like highway infrastructure grants, most federal highway safety and motor carrier safety grants are jointly administered through a federal-state partnership. Through NHTSA and FMCSA, the federal government provides funds, establishes and enforces regulations, collects and analyzes data, performs oversight, conducts research, performs educational outreach, and provides technical assistance. In turn, states provide matching funds, develop and execute safety and enforcement plans and programs, distribute funds to other governmental partners, collect and analyze data, and comply with federal grant and reporting requirements. Both NHTSA and FMCSA use a performance-based approach to grant oversight. Each agency reviews state safety plans, which establish specific performance goals, and then monitors states’ progress towards achieving their goals. Because these efforts rely on the accuracy and completeness of state safety data, both NHTSA and FMCSA emphasize state data collection and analysis in the administration of their grant programs. In addition to their annual safety performance reviews, NHTSA and FMCSA conduct periodic management and compliance reviews of grant recipients. NHTSA and FMCSA also each have a substantial regulatory role. 
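The commercial driver licensing penalty described above lends itself to a worked example. The sketch below is illustrative only; the function name and dollar figures are hypothetical, and the assumption that the 10 percent rate continues beyond the second year goes beyond what the statute as summarized here specifies:

```python
def max_withholding(annual_highway_funds, years_in_violation):
    """Maximum federal-aid highway funds withheld under the commercial
    driver licensing penalty: up to 5 percent in the first fiscal year
    of violation and up to 10 percent in the second (assumed here to
    continue at 10 percent for any later years).
    (Function name and dollar figures are hypothetical illustrations.)
    """
    withheld = []
    for year in range(1, years_in_violation + 1):
        rate = 0.05 if year == 1 else 0.10
        withheld.append(annual_highway_funds * rate)
    return withheld

# A state receiving a hypothetical $200 million per year that remains out
# of compliance for two years risks $10 million, then $20 million:
penalties = max_withholding(200_000_000, 2)
```

Unlike the transferred highway safety penalty funds discussed earlier, these amounts would simply be lost to the state rather than redirected to safety activities.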
NHTSA establishes and enforces safety standards for passenger vehicles in areas such as tire safety, occupant protection devices, and crashworthiness, and also issues fuel economy standards. FMCSA establishes and enforces standards for motor carrier vehicles and operations, hazardous materials, household goods movement, commercial vehicle operator medical requirements, and international motor carrier safety. NHTSA conducts testing, inspection, analysis, and investigations to identify noncompliance with vehicle safety standards and, if necessary, initiates a product recall. FMCSA conducts compliance reviews of motor carriers’ operations at their places of business as well as roadside inspections of drivers and vehicles, and can assess a variety of penalties, including fines and cessation orders, for noncompliance. Both NHTSA and FMCSA rely on data to target their enforcement activities. NHTSA and FMCSA use different organizational structures to administer their grant programs. NHTSA has both a headquarters office and 10 regional offices. Headquarters staff develop policy and programs and provide technical assistance to regional staff. Regional staff review and approve state safety plans and provide technical assistance. According to agency officials, since NHTSA does not provide the same level of technical assistance as FHWA, a regional rather than a state division structure is appropriate to NHTSA’s needs. For fiscal year 2009, NHTSA requested funding for 635 full-time-equivalent staff divided among its headquarters and regional offices. Similar to FHWA, FMCSA has a field structure of 4 regional service centers and 52 division offices. Headquarters staff establish and communicate agency priorities, issue policy guidance, and carry out financial management activities. Regional service centers act as an intermediary between headquarters and division offices by clarifying policy and organizing training and goal-setting meetings for MCSAP grants.
Division offices have primary responsibility for overseeing state motor carrier safety programs and work closely with the states to develop commercial vehicle safety plans. These offices also monitor state progress and grant expenditures. For fiscal year 2009, FMCSA requested funding for 1,119 full-time-equivalent staff divided among its headquarters and field offices. In broad terms, both federal highway safety and motor carrier safety programs have followed a similar path since their inception. Both federal highway safety and motor carrier safety activities were components of the federal highway program before separate modal agencies were established within DOT. Each state-assistance program began as a single basic formula grant that was then expanded to include smaller targeted discretionary grants. Additionally, Congress has given states greater flexibility to set their own priorities within the parameters of national safety goals, and both NHTSA and FMCSA have adopted a performance-based approach to grant oversight. Although broader environmental and social goals have had less of an impact on federal safety grant programs, the scope and administrative complexity of highway safety and motor carrier safety regulatory functions have expanded to incorporate these goals. Because of growing concerns about vehicle safety and traffic accidents, the National Traffic and Motor Vehicle Safety Act and Highway Safety Act established highway safety as a separate grant program and regulatory function in 1966. Two major grants provided federal highway safety assistance in 1966: the State and Community Highway Safety (Section 402) grants and Highway Safety Research and Development (Section 403) grants. Section 402 grants distributed federal assistance to states by formula to support the creation of state highway safety programs and the implementation of countermeasures to address behavioral factors in accidents.
State safety programs were required to meet several uniform federal standards to be eligible for funding and avoid withholding penalties. Section 403 grants provided discretionary federal funding for research, training, technical assistance, and demonstration projects. Although originally administered by the Department of Commerce, federal highway safety grants and regulatory authority were transferred to FHWA upon its creation in 1967. In 1970, FHWA’s National Highway Safety Bureau became a separate agency within DOT and was renamed the National Highway Traffic Safety Administration. Since 1966, Congress has increased state and local government authority and flexibility to set and fund safety priorities by removing some federal grant requirements and restrictions, and by relying more on incentive-based discretionary grants to achieve national safety goals. For example, the uniform federal standards first established in 1966 for state highway safety programs funded by Section 402 grants became guidelines in 1987, and in 1998, Congress amended federal oversight procedures from direct oversight of state safety programs to selective oversight of state safety goals based on state performance. Additionally, Congress has removed dedicated spending restrictions on Section 402 funds and replaced some of them with separate incentive grant programs. For example, provisions that required a percentage of Section 402 funds to be dedicated to 55 mph speed limit enforcement, school bus safety, child safety restraints, and seat belt use have been discontinued. Some of the priorities addressed by these spending restrictions have become separate incentive programs designed to reward state performance and activities in these areas rather than limit the availability of Section 402 funds.
However, in certain priority areas, Congress has provided additional incentives for state compliance by authorizing penalty provisions to withhold or transfer state highway infrastructure funds for failure to meet specific safety criteria. Unlike federal highway and transit infrastructure grants, NHTSA’s grants have not been as directly affected by emerging national social and environmental goals, although Congress has incorporated these goals into NHTSA’s regulatory processes. States must comply with several broad federal requirements such as nondiscrimination policies to receive federal safety funds. However, these requirements have not increased the administrative complexity of highway safety grants to the same extent as infrastructure grants because most safety activities funded through NHTSA do not require construction. For example, state safety activities such as enforcement of traffic laws and accident data collection are generally not subject to construction-related requirements, such as environmental assessments and construction contract labor standards, which apply to highway and transit infrastructure programs. Similarly, Congress has added only one targeted highway safety grant program to specifically address a social goal unrelated to safety—the reduction of racial profiling in law enforcement—and one grant provision requiring states to ensure accessibility for disabled persons on all new roadside curbs. In contrast, federal social and environmental goals have had a greater impact on NHTSA’s regulatory processes. For example, in response to the energy crisis during the 1970s, Congress gave NHTSA authority to set corporate average fuel economy standards. Furthermore, the agency’s rulemaking process is subject to executive orders and regulations designed to serve legislatively established social and environmental goals, including requirements under NEPA and the Paperwork Reduction Act and requirements to consider energy effects and unfunded mandates.
Before FMCSA was established as a separate modal administration within DOT in 1999, federal motor carrier safety functions were administered by both the former Interstate Commerce Commission and FHWA. Until 1982, the federal government regulated motor carrier safety but did not provide financial assistance to states for enforcement. The Surface Transportation Assistance Act of 1982 authorized the Secretary of Transportation to make grants to the states for the development or implementation of state programs to enforce federal and state commercial motor vehicle regulations. This authorization became the foundation for the basic MCSAP grant. Since 1982, Congress has expanded the number and scope of motor carrier grant programs and requirements to meet emerging areas of concern, including border enforcement, vehicle and driver information systems, commercial driver license oversight, and safety data collection. Congress has also set aside grant funds for purposes such as high-priority areas and new entry audits. Additionally, grant eligibility requirements have increased. For example, state enforcement plans must meet 24 criteria to be eligible for a basic MCSAP grant today, compared with 7 criteria when the program started in 1982. Although grant requirements have increased, Congress has given states some flexibility to set enforcement priorities by restructuring the programs to become performance-based and allowing states to tailor their activities to meet their particular circumstances, provided these activities work toward national goals. Additionally, FMCSA follows a performance-based approach to grant oversight. Like highway safety grant programs, motor carrier safety grant programs have undergone fewer structural and administrative changes in response to emerging national social and environmental concerns than have federal highway and transit infrastructure grant programs.
Although states must adhere to broad requirements to receive federal funds, some of these requirements, such as those calling for environmental assessments, are not relevant for safety activities that do not involve construction. Furthermore, Congress has not added any specific grant programs or grant requirements exclusive to motor carrier safety assistance that directly address other social and environmental goals. FMCSA’s regulatory and enforcement scope has expanded considerably over time. Much of this expansion is related directly to safety, but Congress has also incorporated other policy goals into FMCSA’s regulatory functions. For example, hazardous materials transport, commercial driver licensing programs, and operator medical requirements have become additional areas of FMCSA regulation and enforcement that directly relate to safety. However, Congress has also given FMCSA regulatory authority for consumer protection in interstate household goods movement, which does not specifically address reducing motor carrier-related fatalities. Additionally, FMCSA’s rulemaking process is subject to executive orders and regulations designed to meet legislatively established social and environmental goals. A fundamental reexamination of surface transportation programs begins with identifying issues in which there is a strong federal interest and determining what the federal goals should be related to those issues. Once the federal interest and goals have been identified, the federal role in relation to state and local governments can be clearly defined. For issues in which there is a strong federal interest, ongoing federal financial support and direct federal involvement could help meet federal goals. But for issues in which there is little or no federal interest, programs and activities may better be devolved to other levels of government or to other parties. 
In some cases, it may be appropriate to “turn back” activities and programs to state and local governments if they are best suited to perform them. Many surface transportation programs are funded from a dedicated source—the Highway Trust Fund. Devolving federal responsibility for programs could entail simultaneously relinquishing the federal revenue base, in this case, revenues that go into the Highway Trust Fund. A turnback of federal programs, responsibilities, and funding would have many implications and would require careful decisions to be made at the federal, state, and local levels. These implications and decisions include the following: At the federal level, it would need to be determined (1) what functions would remain and (2) how federal agencies would be structured and staffed to deliver those programs. In deciding what functions would remain, the extent of federal interest in the activity compared to the extent of state or local interest should be considered. Furthermore, in deciding how to staff and deliver programs, for agencies with a large field presence, like FHWA and FMCSA, it would have to be determined what their responsibilities would be. At all levels of government, it would need to be determined how to handle a variety of other federal requirements that are tied to federal funds, such as the requirements for state highway safety programs related to impaired driving and state and metropolitan planning roles. At the federal level, Congress would have to decide whether to keep the requirements, and if so, how to ensure that they are met without federal funds to provide incentives or to withhold with sanctions. If the effect of a turnback is to relinquish requirements, then states and localities would have to decide what kind of planning and other requirements they want to have and how to implement them.
At the state and local levels, it would need to be determined (1) whether to replace federal revenues with state taxes and (2) what types of programs to finance. Deciding whether to replace federal revenues with state taxes may be difficult because states also face fiscal challenges, and replacing revenues would have different effects on different states. For example, if states decided to raise fuel taxes, some states could simply replace the current federal tax with an equivalent state tax, but other states might have to levy additional state taxes at a much higher level than the current federal tax. States would also have the option of using other revenue sources such as vehicle registration fees or expanded use of tolling. With states deciding what types of programs to continue, there is no way to predict which federal programs would be replaced with equivalent state programs. Finally, while states may gain flexibility in how they deliver projects, in some cases states could actually lose some flexibility they currently have using federal funds—for example, the flexibility to move funds between highway and transit programs. The functions that would remain at the federal level would be determined by the level of federal interest. Some functions are financed from the Highway Trust Fund but exist because of broader commitments. For example, the federal government owns land managed by agencies such as the Bureau of Land Management, Bureau of Indian Affairs, and the Forest Service. The responsibility for funding and overseeing construction of roads on these lands is within DOT, specifically within FHWA’s federal lands division. It is unlikely that the federal government would assign the responsibilities to construct roads on federal lands to state or local government.
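The revenue-replacement question raised above can be made concrete with a simple calculation. The sketch below is illustrative only: the function name, fuel volumes, and aid amounts are hypothetical, although the 18.4-cents-per-gallon figure is the actual federal gasoline tax rate:

```python
def replacement_state_rate(federal_aid_received, gallons_taxed):
    """Per-gallon state fuel tax rate needed to replace forgone federal
    highway aid if both the federal fuel tax and federal grants ended.
    (Function name and figures are hypothetical illustrations.)
    """
    return federal_aid_received / gallons_taxed

FEDERAL_GAS_TAX = 0.184  # dollars per gallon, the actual federal gasoline rate

# Illustrative figures: both states tax 5 billion gallons per year.
# A donee state receiving $1.1 billion in aid would need a rate above
# the federal rate it replaces; a donor state receiving $800 million
# could get by with a lower one.
donee_rate = replacement_state_rate(1_100_000_000, 5_000_000_000)  # 0.22
donor_rate = replacement_state_rate(800_000_000, 5_000_000_000)    # 0.16
```

The asymmetry illustrates why revenue replacement would be straightforward for some states and politically difficult for others.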
Thus, the decision may be whether, in a restructured federal program, to continue to finance this responsibility from federal gas taxes or to shift responsibility to the managing agency, but not whether the responsibility would be turned over to another level of government. In another area, the federal government takes a defined role in response to disasters, as exemplified in the Robert T. Stafford Disaster Relief and Emergency Assistance Act. Similarly, the Emergency Relief program provides funds to states and other federal agencies for the repair or reconstruction of federal-aid highways that have been damaged or destroyed by natural disasters or catastrophic failures. This is a long-established federal function; Congress has provided funds for the emergency repair of roads since at least 1928. Given the ongoing federal commitment to respond to disasters, it is likely that emergency relief would remain a federal function. Devolving other programs would depend on how the federal interest and the federal role were defined. For example, maintaining systems such as the Interstate highways or the National Highway System could be designated as part of the national interest. The effect of various turnback scenarios on DOT modal agencies would depend on how expansively the federal role is defined. For example, FHWA in fiscal year 2008 had about 1,400 personnel in field offices, or about half of its total staff. FHWA maintains a division office in each state that provides oversight of state programs and projects as defined in a stewardship agreement between the state and the division office. The division offices may provide project-level oversight in some cases or delegate that responsibility to the state. Division offices also review state DOTs’ programs and processes to ensure that states have adequate controls in place to effectively manage federally assisted projects.
Thus, if a substantial portion of federal highway programs were turned back to the states, the greatest effect might be felt at the division office level, as the oversight activities of these offices might largely be considered for elimination. However, certain functions and offices could remain, such as the Office of Federal Lands Highways, which provides funding and oversight for highways on federal lands and constitutes, counting both headquarters and field staff, about one-fourth of all FHWA staff. Other functions, such as the Emergency Relief program or environmental oversight, might remain and require a field office presence of some type. A reduced or eliminated division office structure might be warranted, or residual functions might suggest a regional structure. Even under an extensive turnback scenario, FHWA might retain a technical support function, along with its five existing resource center locations. The effects on other DOT agencies of a general turnback of transportation grants would vary and would hinge on what activities the agencies would continue to perform. For example, assuming FMCSA’s inspection activities continued, the significant numbers of field staff required to perform those functions would remain. If NHTSA’s safety grants to the states for purposes such as reducing impaired driving or increasing seat belt use were turned back, the functions of NHTSA field staff would need to be reviewed, as these staff would no longer be needed for grant oversight. However, NHTSA could still retain its regulatory and research responsibilities, such as those related to fuel economy standards, automotive recalls, and crash testing, among others, and might need to retain the associated staff. In some programs, federal funding is contingent on actions taken by states. In the highway safety area, the federal government has applied both incentives and sanctions based on state actions.
In the past, these strategies have been used to encourage states to enact laws that establish a minimum drinking age of 21 years and a maximum blood alcohol level of 0.08 for determining impaired driving. In addition, Safety Belt Performance Grants promote national priorities by providing financial incentives for meeting certain specific performance or safety activity criteria. Penalty provisions, such as those associated with Open Container laws and Motor Carrier Safety Assistance Program grants, promote federal priorities by transferring or withholding a state’s federal funds if the state does not comply. If such programs were turned back to the states and these incentive and sanction programs were eliminated, there would not appear to be a substitute basis for the federal government to influence state actions. Extensive state and metropolitan planning requirements could also be affected by a turnback of the highway program. Federal laws and requirements specify an overall approach for transportation planning that states and regional organizations must follow in order to receive federal funds. This approach includes involving numerous stakeholders, identifying state and regional goals, developing long- and short-range state and metropolitan planning documents, and ensuring that a wide range of transportation planning factors are considered in the process. Without this structure, it is not clear what form planning processes might take at the state level, or what role, if any, the federal government would have in relation to planning activities. At the local level, metropolitan planning organizations (MPOs) came into being largely as a result of federal planning requirements, and MPO activities are funded in part through the current federal-aid program. In general, the role MPOs would play after a turnback of the federal program is unclear and would need to be redefined.
The status of existing planning requirements and the amount of federal funding for MPOs, if any, would have to be determined. If the effect of a turnback is to relinquish requirements, then states and localities would have to decide what kind of planning and other requirements they want to have and how to establish those requirements as a matter of policy. In addition, a turnback of federal surface transportation programs would necessitate a review of which federal requirements still apply. As a condition of receiving federal funds, states must adhere to federal regulations such as those covering contracting practices. For example, under the current highway program, states must comply with the provisions of the Disadvantaged Business Enterprise Program, which requires that a certain percentage of contracts be awarded to socially or economically disadvantaged firms, such as minority- and women-owned businesses. Yet another area requiring review would be the applicability of federal environmental requirements. Federal laws not predicated on the receipt of federal funds would still apply, and in some cases states have environmental regulations requiring their own environmental process. States would also have to decide whether to replace federal revenues with state taxes. This decision would have different effects on different states because some states contribute more in taxes than they get back in program funds, and vice versa. In the highway context, these are referred to as donor and donee states. However, a turnback might require states to replace Highway Trust Fund revenues for transit programs and safety grants as well as for highways. For some states, replacing federal revenues with state taxes sufficient to continue funding existing federal programs would result in a net decrease in fuel taxes, while for others it would result in a net increase—in some cases a substantial one.
This raises questions about whether surface transportation programs would continue at the same funding level under a turnback, because states face their own long-term fiscal challenges and the fiscal capacity of states varies. Other factors could also affect outcomes at the state level. For example, there is no way to reliably predict the extent to which “tax competition” between states—efforts to keep taxes lower as a way of attracting business—would occur. We considered the implications of a relatively complete turnback of federal grant programs, including highway, transit, and safety grants. In the following example, almost all federal surface transportation programs funded through the Highway Trust Fund would be turned back to the states, with the exception of Federal Lands and Emergency Relief. To provide a consistent basis for comparison, we assumed that states would substantially continue the current programs and activities that now receive federal funding and that states would raise their fuel taxes to provide the additional revenues needed to cover the cost of these programs and activities. However, if a turnback of the federal program were actually to occur, the outcome would almost certainly differ from these results, because states would not necessarily elect to replace all current federal programs or finance the same programs and activities from their own resources. Furthermore, states might not elect to replace federal revenue with state fuel taxes, as states have options for raising revenue other than fuel taxes. For example, a state might choose to raise vehicle registration fees or increase the use of tolling.
The illustrative analysis of this turnback scenario showed that 27 states could achieve the same funding level as they currently receive through federal transportation grants with taxes lower than the existing federal tax, while 23 states and the District of Columbia would require taxes higher than the existing federal tax, or other revenue sources, to achieve full replacement value. Table 1 lists the net change in per-gallon fuel taxes that would occur if the federal fuel tax were eliminated and states replaced Highway Trust Fund grants with their own fuel taxes. States in table 1 with a negative value would need to raise state taxes less than the current federal tax level, and states with a positive value would need to raise state taxes more than the current federal tax level, or obtain other revenue sources. Although table 1 shows that a similar number of states would likely require net increases and net decreases, the range is much wider among states that would require a net increase. While some states, such as Virginia and Arizona, would likely end up with modest net decreases in fuel taxes of up to 6 cents per gallon under this scenario, nine states and the District of Columbia would face increases of more than twice that amount. Mississippi and Alaska would require comparatively extreme net increases of more than 30 cents per gallon, and the District of Columbia of over $1 per gallon. These results reflect the cumulative effect of many factors, such as the “donor-donee” distinctions between states, equity and minimum apportionment adjustments from the Highway Trust Fund, the various allocations made to states for safety, and allocations to states and localities for transit programs. In general, states would have great flexibility in how they use funds under a turnback approach.
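The replacement calculation underlying this analysis can be sketched as follows. The grant totals and gallons of taxed fuel below are hypothetical, and for simplicity the sketch uses only the 18.4-cent-per-gallon federal gasoline tax rather than the full mix of Highway Trust Fund taxes:

```python
# Illustrative sketch of the turnback replacement calculation: the
# per-gallon state tax needed to replace a state's federal grants,
# net of the repealed federal tax. All state figures are hypothetical.

FEDERAL_TAX_CENTS = 18.4  # federal gasoline tax, cents per gallon

def net_tax_change(annual_federal_grants, annual_gallons_taxed):
    """Cents per gallon of new state tax needed to replace federal
    grants, minus the repealed federal tax. A negative result means a
    net decrease in fuel taxes for that state's motorists."""
    replacement_cents = 100 * annual_federal_grants / annual_gallons_taxed
    return replacement_cents - FEDERAL_TAX_CENTS

# A hypothetical "donor" state: modest grants relative to fuel use.
print(round(net_tax_change(9.0e8, 6.0e9), 1))   # -3.4 (net decrease)
# A hypothetical "donee" state: large grants relative to fuel use.
print(round(net_tax_change(6.0e8, 1.2e9), 1))   # 31.6 (net increase)
```

The same grant amount can thus produce a decrease in one state and a substantial increase in another, depending on how much taxed fuel each state consumes.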
States would have greater flexibility to develop their own programs and approaches without being limited to the current federal program categories, and would have greater discretion to define and fund projects that best suit their needs. In addition, there would be no congressionally directed spending. To the extent that federal programs affect the targeting of funds, states might shift funds to different projects. However, the current federal-aid program already gives states great discretion in setting priorities and selecting projects. In contrast, the current federal program may provide some states with flexibility they otherwise would not have. For example, some federal highway programs provide that funds may be transferred (flexed) between highway and transit programs. However, under a turnback of surface transportation programs, this flexibility could be lost in some states. For example, some states have constitutional provisions that require all fuel taxes to be spent solely on roads, thus making transit and safety programs ineligible barring constitutional change. Such states would have to revise certain laws and constitutional provisions or develop alternative sources of revenue in order to replace federal funds. In addition to the individual named above, other key contributors to this report were Steve Cohen, Assistant Director; Lauren Calhoun; Robert Ciszewski; Jay Cherlow; Elizabeth Eisenstadt; Teague Lyons; Josh Ormond; and Lisa Van Arsdale. The following are GAO products pertinent to the issues discussed in this report. Other products may be found at GAO’s Web site at www.gao.gov. Surface Transportation: Preliminary Observations on Efforts to Restructure Current Program. GAO-08-478T. Washington, D.C.: February 6, 2008. Freight Transportation: National Policy and Strategies Can Help Improve Freight Mobility. GAO-08-287. Washington, D.C.: January 7, 2008. Highlights of a Forum: Transforming Transportation Policy for the 21st Century. GAO-07-1210SP. 
Washington, D.C.: September 19, 2007. Railroad Bridges and Tunnels: Federal Role in Providing Safety Oversight and Freight Infrastructure Investment Could Be Better Targeted. GAO-07-770. Washington, D.C.: August 6, 2007. Motor Carrier Safety: Preliminary Information on the Federal Motor Carrier Safety Administration’s Efforts to Identify and Follow Up with High-Risk Carriers. GAO-07-1074T. Washington, D.C.: July 11, 2007. Intermodal Transportation: DOT Could Take Further Actions to Address Intermodal Barriers. GAO-07-718. Washington, D.C.: June 20, 2007. Intercity Passenger Rail: National Policy and Strategies Needed to Maximize Public Benefits from Federal Expenditures. GAO-07-15. Washington, D.C.: November 13, 2006. Freight Railroads: Industry Health Has Improved, but Concerns about Competition and Capacity Should Be Addressed. GAO-07-94. Washington, D.C.: October 6, 2006. Public Transportation: New Starts Program Is in a Period of Transition. GAO-06-819. Washington, D.C.: August 30, 2006. Freight Transportation: Short Sea Shipping Option Shows Importance of Systematic Approach to Public Investment Decisions. GAO-05-768. Washington, D.C.: July 29, 2005. Rail Transit: Additional Federal Leadership Would Enhance FTA’s State Safety Oversight Program. GAO-06-821. Washington, D.C.: July 26, 2006. Intermodal Transportation: Potential Strategies Would Redefine Federal Role in Developing Airport Intermodal Capabilities. GAO-05-727. Washington, D.C.: July 26, 2005. 21st Century Challenges: Reexamining the Base of the Federal Government. GAO-05-325SP. Washington, D.C.: February 2005. Homeland Security: Effective Regional Coordination Can Enhance Emergency Preparedness. GAO-04-1009. Washington, D.C.: September 15, 2004. Freight Transportation: Strategies Needed to Address Planning and Financing Limitations. GAO-04-165. Washington, D.C.: December 19, 2003. Surface and Maritime Transportation: Developing Strategies for Enhancing Mobility: A National Challenge. GAO-02-775.
Washington, D.C.: August 30, 2002. Highway Infrastructure: Interstate Physical Conditions Have Improved, but Congestion and Other Pressures Continue. GAO-02-571. Washington, D.C.: May 31, 2002. Highway Public-Private Partnerships: More Rigorous Up-front Analysis Could Better Secure Potential Benefits and Protect the Public Interest. GAO-08-44. Washington, D.C.: February 8, 2008. Federal-Aid Highways: Increased Reliance on Contractors Can Pose Oversight Challenges for Federal and State Officials. GAO-08-198. Washington, D.C.: January 8, 2008. A Call For Stewardship: Enhancing the Federal Government’s Ability to Address Key Fiscal and Other 21st Century Challenges. GAO-08-93SP. Washington, D.C.: December 2007. Public Transportation: Future Demand Is Likely for New Starts and Small Starts Programs, but Improvements Needed to the Small Starts Application Process. GAO-07-917. Washington, D.C.: July 27, 2007. Surface Transportation: Strategies Are Available for Making Existing Road Infrastructure Perform Better. GAO-07-920. Washington, D.C.: July 26, 2007. Motor Carrier Safety: A Statistical Approach Will Better Identify Commercial Carriers That Pose High Crash Risks Than Does the Current Federal Approach. GAO-07-585. Washington, D.C.: June 11, 2007. Public Transportation: Preliminary Analysis of Changes to and Trends in FTA’s New Starts and Small Starts Programs. GAO-07-812T. Washington, D.C.: May 10, 2007. Older Driver Safety: Knowledge Sharing Should Help States Prepare for Increase in Older Driver Population. GAO-07-413. Washington, D.C.: April 11, 2007. Older Driver Safety: Survey of States on Their Implementation of Federal Highway Administration Recommendations and Guidelines, an E-Supplement. GAO-07-517SP. Washington, D.C.: April 11, 2007. Performance and Accountability: Transportation Challenges Facing Congress and the Department of Transportation. GAO-07-545T. Washington, D.C.: March 6, 2007.
Transportation-Disadvantaged Populations: Actions Needed to Clarify Responsibilities and Increase Preparedness for Evacuations. GAO-07-44. Washington, D.C.: December 22, 2006. Federal Transit Administration: Progress Made in Implementing Changes to the Job Access Program, but Evaluation and Oversight Processes Need Improvement. GAO-07-43. Washington, D.C.: November 17, 2006. Truck Safety: Share the Road Safely Pilot Initiative Showed Promise, but the Program’s Future Success Is Uncertain. GAO-06-916. Washington, D.C.: September 8, 2006. Public Transportation: Preliminary Information on FTA’s Implementation of SAFETEA-LU Changes. GAO-06-910T. Washington, D.C.: June 27, 2006. Intermodal Transportation: Challenges to and Potential Strategies for Developing Improved Intermodal Capabilities. GAO-06-855T. Washington, D.C.: June 15, 2006. Federal Motor Carrier Safety Administration: Education and Outreach Programs Target Safety and Consumer Issues, but Gaps in Planning and Evaluation Remain. GAO-06-103. Washington, D.C.: December 19, 2005. Large Truck Safety: Federal Enforcement Efforts Have Been Stronger Since 2000, but Oversight of State Grants Needs Improvement. GAO-06-156. Washington, D.C.: December 15, 2005. Highway Safety: Further Opportunities Exist to Improve Data on Crashes Involving Commercial Motor Vehicles. GAO-06-102. Washington, D.C.: November 18, 2005. Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations. GAO-06-52. Washington, D.C.: November 2, 2005. Highway Congestion: Intelligent Transportation Systems Promise for Managing Congestion Falls Short, and DOT Could Better Facilitate Their Strategic Use. GAO-05-943. Washington, D.C.: September 14, 2005. Highlights of an Expert Panel: The Benefits and Costs of Highway and Transit Investments. GAO-05-423SP. Washington, D.C.: May 6, 2005.
Federal-Aid Highways: FHWA Needs a Comprehensive Approach to Improving Project Oversight. GAO-05-173. Washington, D.C.: January 31, 2005. Highway and Transit Investments: Options for Improving Information on Projects’ Benefits and Costs and Increasing Accountability for Results. GAO-05-172. Washington, D.C.: January 24, 2005. Highway Safety: Improved Monitoring and Oversight of Traffic Safety Data Program Are Needed. GAO-05-24. Washington, D.C.: November 4, 2004. Surface Transportation: Many Factors Affect Investment Decisions. GAO-04-744. Washington, D.C.: June 30, 2004. Highway Safety: Better Guidance Could Improve Oversight of State Highway Safety Programs. GAO-03-474. Washington, D.C.: April 21, 2003. Executive Guide: Leading Practices in Capital Decision Making. GAO/AIMD-99-32. Washington, D.C.: December 1998. Congressional Directives: Selected Agencies’ Processes for Responding to Funding Instructions. GAO-08-209. Washington, D.C.: January 31, 2008. Highway and Transit Investments: Flexible Funding Supports State and Local Transportation Priorities and Multimodal Planning. GAO-07-772. Washington, D.C.: July 26, 2007. State and Local Governments: Persistent Fiscal Challenges Will Likely Emerge within the Next Decade. GAO-07-1080SP. Washington, D.C.: July 18, 2007. Highway Emergency Relief: Reexamination Needed to Address Fiscal Imbalance and Long-Term Sustainability. GAO-07-245. Washington, D.C.: February 23, 2007. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Highway Finance: States’ Expanding Use of Tolling Illustrates Diverse Challenges and Strategies. GAO-06-554. Washington, D.C.: June 28, 2006. Highway Trust Fund: Overview of Highway Trust Fund Estimates. GAO-06-572T. Washington, D.C.: April 4, 2006. Federal-Aid Highways: Trends, Effect on State Spending, and Options for Future Program Design. GAO-04-802. Washington, D.C.: August 31, 2004. U.S. Infrastructure: Funding Trends and Federal Agencies’ Investment Estimates.
GAO-01-986T. Washington, D.C.: July 23, 2001. Federal Budget: Choosing Public Investment Programs. GAO/AIMD-93-25. Washington, D.C.: July 23, 1993.

Surface transportation programs need to be reexamined in the context of the nation's current unsustainable fiscal path. Surface transportation programs are particularly ready for review as the Highway Trust Fund faces a fiscal imbalance at a time when both congestion and travel demand are growing. As you requested, this report (1) provides an overview of the federal role in surface transportation and the goals and structures of federal programs, (2) summarizes GAO's conclusions about the structure and performance of these programs, and (3) provides principles to assess options for focusing future surface transportation programs. GAO's study is based on prior GAO reports, stakeholder reports and interviews, Department of Transportation documents, and the views of transportation experts. Since federal financing for the interstate system was established in 1956, the federal role in surface transportation has expanded to include broader goals, more programs, and a variety of program structures. To incorporate additional transportation, environmental, and societal goals, federal surface transportation programs have grown in number and complexity. While some of these goals have been incorporated as new grant programs in areas such as transit, highway safety, and motor carrier safety, others have been incorporated as additional procedural requirements for receiving federal aid. Broad program goals, eligibility requirements, and transfer provisions give states and local governments substantial discretion for allocating most highway infrastructure funds. For transit and safety programs, broad basic grant programs are augmented by programs that either require a competitive selection process or use financial incentives to directly target federal funds toward specific goals or safety activities.
Many current programs are not effective at addressing key transportation challenges such as increasing congestion and freight demand. They generally do not meet these challenges because federal goals and roles are unclear, many programs lack links to needs or performance, and the programs often do not employ the best tools and approaches. The goals of current programs are numerous and sometimes conflicting. Furthermore, states' ability to transfer highway infrastructure funds among different programs is so flexible that some program distinctions have little meaning. Moreover, programs often do not employ the best tools and approaches; rigorous economic analysis is not a driving factor in most project selection decisions, and tools to make better use of existing infrastructure have not been deployed to their full potential. Modally stovepiped funding can impede efficient planning and project selection and, according to state officials, congressionally directed spending may limit the states' ability to implement projects and efficiently use transportation funds. A number of principles can help guide the assessment of options for transforming federal surface transportation programs. These principles include: (1) ensuring goals are well defined and focused on the federal interest, (2) ensuring the federal role in achieving each goal is clearly defined, (3) ensuring accountability for results by entities receiving federal funds, (4) employing the best tools and approaches to emphasize return on targeted federal investment, and (5) ensuring fiscal sustainability. Given the sustainability and performance issues of current programs, it is an opportune time for Congress to more clearly define the federal role in transportation and improve progress toward specific, nationally defined outcomes.
Given the scope of needed transformation, it may be necessary to shift policies and programs incrementally or on a pilot basis to gain practical lessons for a coherent, sustainable, and effective national program and financing structure to best serve the nation for the 21st century.
Congress appropriates federal assistance grant funds to executive branch agencies, which then use funding formulas to distribute federal assistance to states or local entities. These funding formulas are typically established through statute and expressed as one or more equations containing one or more variables. Executive branch agencies also use formulas to determine the amount of federal matching grants for jointly funded federal assistance programs where the amount of the federal match varies among the states based upon the formula calculation. For example, Medicaid’s Federal Medical Assistance Percentage (FMAP) is determined through a statutory formula based on each state’s per capita income relative to U.S. per capita income. Various statutory or administrative provisions can also modify the amount that would otherwise be determined under a formula. These provisions may be included to avoid disruptions that could be caused by year-to-year changes in funding, to cover fixed costs of a program, or for other reasons. Congress can use formula grants to target funds to achieve federal assistance program objectives by including specific variables in the formulas that relate to the programs’ objectives. For example, for a program intended to serve a specific segment of the population, the formula may contain variables that measure or identify that subset of the population. Therefore, the formula for a program designed to provide services for children in low-income areas may contain variables that identify the total number of children living in poverty in a certain area. Historically, many formulas have relied at least in part on decennial census and related data as a source of these variables. The decennial census collects, among other things, information on whether a residence is owned or rented, as well as respondents’ sex, age, and race.
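The kind of targeting described above, in which a formula variable such as the number of children living in poverty drives each state's share of an appropriation, can be sketched as follows (the state labels and counts are hypothetical):

```python
# Minimal sketch of a formula grant: the appropriation is divided in
# proportion to each state's share of the targeting variable, here the
# number of children living in poverty. All figures are hypothetical.

def allocate(appropriation, poverty_counts):
    """Return each state's grant, proportional to its share of the total."""
    total = sum(poverty_counts.values())
    return {state: appropriation * count / total
            for state, count in poverty_counts.items()}

shares = allocate(1_000_000, {"A": 300, "B": 500, "C": 200})
print(shares["B"])  # state B holds 500 of 1,000 children -> 500000.0
```

Absent other factors, a change in one state's count changes not only its own grant but, because the shares must sum to one, every other state's grant as well.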
To update decennial population counts, the Bureau’s Population Estimates Program produces population estimates for each year following the last published decennial census, as well as for past decennials, using administrative records such as birth and death certificates and federal tax returns. Census-related data stem from the decennial census and the Bureau’s population estimates and include (1) surveys with statistical samples designed to represent the entire population using data from the decennial census or its annual updates and (2) statistics derived from decennial census data, its annual updates, or census-related surveys. Two census-related surveys produced by the Bureau are the American Community Survey (ACS) and the Current Population Survey (CPS). The ACS is an annual survey of about 3 million housing units that collects information about people and housing, including information previously collected during the decennial census. The CPS is a monthly survey of about 50,000 households conducted by the Census Bureau for the Bureau of Labor Statistics that provides data on the labor force characteristics of the U.S. population. Supplemental questions also produce estimates on a variety of topics, including school enrollment, income, previous work experience, health, employee benefits, and work schedules. Federal agencies use census data, annual updates, and surveys based on these data to produce other statistics used in federal assistance grant formulas. For example, the Bureau of Economic Analysis produces per capita income data—a derivative of decennial census data—by dividing personal income by the population obtained from census population estimates. Per capita income is used to calculate Medicaid’s FMAP. Another derivative is the Fair Market Rent (FMR) that the Department of Housing and Urban Development calculates and uses to determine payment standard amounts for the Section 8 Housing Choice Voucher Program.
The FMR for a particular area is based on decennial census data or on other surveys, such as the ACS, for the years between censuses. Our analysis showed that each of the 10 largest federal assistance programs in fiscal years 2008 and 2009 relied at least in part on decennial census and related data to determine funding. For fiscal year 2008, this totaled about $334.9 billion, representing about 73 percent of total federal assistance. We considered funding to be based on decennial census and related data if any part of the funding formula or eligibility requirements relied on these data sources. Table 1 shows the fiscal year 2008 obligations for the 10 largest federal assistance programs in that year. For fiscal year 2009, the estimated obligations of the 10 largest federal assistance programs totaled about $478.3 billion, representing about 84 percent of total federal assistance. This amount included about $122.7 billion funded by the Recovery Act and about $355.6 billion funded by other means. The 10 largest federal assistance programs in fiscal year 2009 included a new program added by the Recovery Act—the State Fiscal Stabilization Fund. Table 2 shows the fiscal year 2009 estimated obligations for the 10 largest federal assistance programs and how much the Recovery Act increased that amount. Decennial census and related data play an important role in funding for the largest federal assistance programs. However, changes in population do not necessarily result in an increase or decrease in funding. Based on our prior work and related research on formula grants, we identified some of the factors that could affect the role of population in grant funding formulas. We found that factors related to the formula equation(s), as well as factors that modify the amount that a state or local entity would otherwise receive under the formula, could affect the role of population in grant funding formulas.
Further, the extent to which one particular factor can affect the role of population in grant funding varied across programs. Although at least one factor that could affect the role of population in grant funding formulas was present in each program in our review, the number and combination of factors varied across programs. To illustrate how these factors can be used in formula grant funding, we selected examples from the federal assistance programs in our review. The examples presented below are illustrative and do not necessarily indicate the relative importance of a factor compared with the other factors present. All of the programs in our review included one or more grants with formulas containing variables other than total population. Absent other factors, funding based on these formulas will be affected less by changes in population than funding based on formulas that rely solely on total population. The State Fiscal Stabilization Fund formula is based on total population and a subset of total population—states’ shares of individuals aged 5 to 24 relative to total population. The Federal Transit Program grants for urbanized areas with populations of 200,000 or more are based on total population and variables related to the level of transit service provided. TANF supplemental grants are awarded based on a formula with multiple variables. According to the Department of Health and Human Services’ Administration for Children and Families (ACF), which administers the program, supplemental grants are awarded to states with exceptionally high population growth in the early 1990s, historic welfare grants per poor person lower than 35 percent of the national average, or a combination of above-average population growth and below-average historic welfare grants per poor person. Medicaid’s FMAP is based on a 3-year average of a state’s per capita income relative to U.S. per capita income, with per capita income defined as personal income divided by total population.
The FMAP is affected by both changes in population and personal income. Because changes in population and personal income are not necessarily correlated, the effect of a population change may be diminished or increased by a change in personal income. Finally, because per capita income is squared—that is, multiplied by itself—in the formula, the effect of a population change may be greater than if per capita income were not squared. In addition to the number of variables, the number of equations can also affect the role of population in grant funding formulas. Under CDBG, metropolitan counties and cities are eligible for the greater of the amounts calculated under two different equations. The variables in the first equation are population, extent of poverty, and extent of overcrowded housing. The variables in the second equation are population growth lag, extent of poverty, and age of housing. The use of the dual equation structure and the variables other than population in each equation reduce the effect of population changes on grant funding. Some formulas also include base amounts that are set at the amount of funding in a specified prior year, with the remainder of funding calculated according to a formula. For programs with set base amounts, only a portion of the funding might be affected by a change in population. Because appropriation amounts can change from year to year, the base amount will represent a smaller share of the total grant if appropriations increase, making total grant funding more responsive to a change in population. When appropriations decrease, the share of the overall funding subject to the formula is lower, lessening the effect of a change in population on total funding. Some programs we reviewed contained such base amounts in their funding formula. 
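The FMAP mechanics described above can be sketched in a few lines. Under the statutory formula, the FMAP equals 1 minus 0.45 times the square of the ratio of state per capita income to U.S. per capita income, subject to the 50 percent floor and 83 percent ceiling discussed later in this report. The income figures in the example below are hypothetical.

```python
def fmap(state_pci: float, us_pci: float) -> float:
    """Federal Medical Assistance Percentage (statutory formula).

    FMAP = 1 - 0.45 * (state per capita income / U.S. per capita income)**2,
    bounded by a 50 percent floor and an 83 percent ceiling.
    """
    rate = 1 - 0.45 * (state_pci / us_pci) ** 2
    return min(max(rate, 0.50), 0.83)

# A state at exactly the national average per capita income: 1 - 0.45 = 55 percent.
print(round(fmap(40_000, 40_000), 3))  # 0.55

# Because the income ratio is squared, a high-income state's computed rate
# falls quickly; the 50 percent floor then takes over (as it has for
# Connecticut). Here the unclamped value would be about 0.118.
print(fmap(56_000, 40_000))  # 0.5

# Per capita income is personal income divided by total population, so
# population growth alone (personal income held constant) lowers per capita
# income and raises the FMAP.
```

The squaring is why a population change can have a larger effect on the FMAP than it would under a linear formula, while the floor and ceiling can suppress that effect entirely for states at either bound.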
Under IDEA Part B, generally each state first receives the same amount it received for fiscal year 1999 for the program for children aged 3 through 21, and, for the program for children aged 3 through 5, the amount the state received in fiscal year 1997. For the remainder of the state’s funding in a given year, (1) 85 percent is based on the state’s share of the 3 through 21 year old population for the school-aged program, and the 3 through 5 year old population for the preschool program and (2) 15 percent is based on the state’s share of those children living in poverty. In another example, the Head Start program guarantees the same base amount as in the prior year. The remainder of the funding is allocated to cost of living increases and Indian and migrant and seasonal Head Start programs depending upon the amount remaining. According to ACF, which administers Head Start, when the increase in appropriation is large enough to allow for expansion of Head Start, those funds are calculated based on the relative share of children aged 3 and 4 living in poverty in each state. Some factors modify the amount that a state would otherwise receive under the funding formula and could affect the role of population in grant funding formulas. The factors include the following: (1) hold harmless provisions and caps; (2) small state minimums; and (3) funding floors and ceilings. Hold Harmless Provisions/Caps: Hold harmless provisions and caps limit the amount of a decrease or increase from a prior year’s funding. Hold harmless provisions guarantee that the grantee will receive no less than a specified proportion of a previous year’s funding. 
If a population change resulted in a decrease in funding below a designated amount, the hold harmless provision would raise the amount of funding above what the grantee would otherwise have received under the formula, and the amount of the increase would be deducted from the funding amounts of grantees not affected by the hold harmless provision. Title I includes a hold harmless provision guaranteeing that the amount made available to each local educational agency (LEA) will be no less than 85 to 95 percent of the previous fiscal year’s funding, depending on the LEA’s school-age child poverty rate. Similarly, caps—also known as “stop gains”—limit the size of an annual increase as a proportion of a previous year’s funding amount or federal share. If a population change resulted in an increase in funding above a certain amount, the cap would limit the effect of the population change. Under IDEA Part B, no state’s allocation is to exceed the amount the state received under this section for the preceding fiscal year multiplied by the sum of 1.5 percent and the percentage increase in the amount appropriated under this section from the preceding fiscal year. Small-State Minimums: Small-state minimums guarantee that each state will receive at least a specified amount or percentage of total funding. These minimums typically benefit smaller states that would otherwise receive allocations below the minimum. However, whether a state is considered “small” depends upon the program and is not necessarily based directly on a state’s population or geographic size. Several components within the federal-aid highway program contain such state minimums. For example, there is a statutory 0.5 percent state minimum on the annual apportionment from the Highway Trust Fund to the Surface Transportation Program for states having less than a specified threshold of qualifying roads, vehicle miles traveled on those roads, and taxes paid into the fund. 
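The modifying factors described above act after the formula amount is computed. The sketch below is a simplified illustration, not any program’s actual statute: the 85 percent hold harmless, 1.5 percent cap, and 0.5 percent minimum are default values borrowed loosely from the Title I, IDEA Part B, and highway program examples, and real programs apply only some of these adjustments (the IDEA cap, for instance, also varies with appropriation growth).

```python
def adjusted_allocation(formula_amount: float, prior_year: float,
                        total_funding: float,
                        hold_harmless: float = 0.85,
                        cap_growth: float = 0.015,
                        state_minimum: float = 0.005) -> float:
    """Apply post-formula adjustments to a hypothetical state's allocation.

    hold_harmless: floor as a share of the prior year's funding
    cap_growth:    limit on growth over the prior year's funding
    state_minimum: guaranteed share of total program funding
    """
    amount = formula_amount
    # Hold harmless: no less than 85 percent of last year's amount.
    amount = max(amount, hold_harmless * prior_year)
    # Cap ("stop gain"): no more than 1.5 percent growth over last year.
    amount = min(amount, (1 + cap_growth) * prior_year)
    # Small-state minimum: at least 0.5 percent of total program funding.
    amount = max(amount, state_minimum * total_funding)
    return amount

# A population drop cuts the formula amount from 100 to 70, but the hold
# harmless provision limits the decrease to 85.
print(adjusted_allocation(70.0, prior_year=100.0, total_funding=10_000.0))
```

In each case the adjustment replaces the formula result with a value tied to prior-year funding or total appropriations, which is precisely why these provisions dampen the effect of a population change.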
When state minimums are applied, grant funding formulas may be affected less by changes in population. Floors/Ceilings: Floors and ceilings are lower and upper limits placed on the amount a state can receive under a formula. If a change in population results in funding under the formula falling below the floor, the state would be guaranteed the amount of the floor. If a population change results in the state exceeding the ceiling, the state could not receive more than the ceiling amount. The federal government’s share of Medicaid expenditures ranges from 50 percent (floor) to 83 percent (ceiling). Although 1973 was the most recent year that any state was affected by the ceiling, states often benefit from the FMAP floor. In fiscal year 2009, 13 states received the minimum 50 percent matching rate. In our 2003 report on federal formula grant funding, we found that in 2002, under the statutory formula, which is based on the ratio of a state’s per capita income relative to U.S. per capita income, Connecticut would have received a 15 percent federal matching rate. Despite Connecticut’s relatively high per capita income—a calculation based in part on population—Connecticut received a 50 percent federal match. For Connecticut, in this particular year, the floor affected the role of population in the amount of the federal match. Similarly, because CHIP’s matching formula is based on the Medicaid FMAP, CHIP’s enhanced FMAP is also affected by Medicaid’s floor and ceiling. For example, if a state’s Medicaid FMAP were set by the 50 percent floor, the state’s CHIP enhanced matching percentage would be 65 percent. As a result, funding for states benefiting from the floor would be affected less by changes in population. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 7 days from the report date. At that time, we will send copies to interested parties. 
The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2757 or goldenkoffr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objectives were to determine (1) how much the federal government obligates to the largest federal assistance programs based on the decennial census and related data and how the Recovery Act changed that amount, and (2) what factors could affect the role of population in grant funding formulas. To answer our objectives, we identified 11 federal assistance programs representing the 10 largest programs in each of the fiscal years 2008 and 2009, based on the dollar amounts obligated as reported in the President’s budget, issued in May 2009: Office of Management and Budget (OMB), Analytical Perspectives, Budget of the United States Government, Fiscal Year 2010 (Fiscal Year 2010 budget), Table 8-4, Summary of Programs by Agency, Bureau, and Program. We believe that these data are sufficiently reliable for purposes of our review. We included the following programs in our review: Children’s Health Insurance Program; Community Development Block Grants and Neighborhood Stabilization Program; Education State Grants, State Fiscal Stabilization Fund; Federal Transit Formula Grants Programs; Head Start; Highway Planning and Construction; Individuals with Disabilities Education Act, Part B; Medicaid; Section 8 Housing Choice Vouchers; Temporary Assistance for Needy Families; and Title I Grants to Local Education Agencies. 
To determine whether a program’s funding relied on decennial census and related data, we reviewed statutes, GAO reports, the Catalog of Federal Domestic Assistance (CFDA), Congressional Research Service (CRS) reports, and agency Web pages and reports related to each of the programs. For purposes of our analysis, we defined census and related data as (1) data obtained from the decennial census and annual updates; (2) census-related surveys, that is, those surveys that base their samples on the decennial census; or (3) their derivatives, that is, statistics produced from data contained in the decennial census or a census-related survey. We considered funding to be based on census or related data if any part of the funding formula or eligibility requirements relied on these data sources. For the programs that relied at least in part on census and related data, we summed the total obligation amounts reported in the Fiscal Year 2010 budget, Table 8-4, Summary of Programs by Agency, Bureau, and Program, as well as Table 8-6, Summary of Recovery Act Grants by Agency, Bureau, and Program. Because the actual obligations for fiscal year 2009 for each of these programs are not yet available from OMB, we are reporting the estimated fiscal year 2009 obligations reported in the Fiscal Year 2010 budget. We did not independently verify or assess the extent to which an agency actually distributes funds according to the statutory formula. We did not identify all possible uses of decennial census and related data to fund the selected programs. We did not conduct any simulations to determine the extent to which the funding formulas relied on any particular variable. 
To determine what factors could affect the role of population in grant funding formulas, we drew on our prior work related to formula grants (see the list of related GAO products at the end of this report) and other research on formula grants to identify factors that illustrate the different ways grant funding amounts can be affected. To obtain illustrative examples of how the factors are used in the selected programs, we reviewed statutes, GAO reports, the CFDA, CRS reports, and agency Web pages and reports related to each of the programs. We asked the responsible agencies to confirm the accuracy of the information being reported on the existence of the factors in, and descriptions of, each program. We received responses on each of the 11 programs. We did not identify all possible factors that could affect the amount of grant funding. The presentation of these factors is not intended to suggest that they are the most important, either generally or to the specific programs listed here. The number of times a factor or a program is cited in the reported examples does not indicate any judgment about the factor or the program. The presence of a factor in statute does not indicate that the factor is either significant or relevant to actual funding for the program. We conducted our work from June 2009 to December 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for our findings and conclusions. Children’s Health Insurance Program (CHIP): CHIP is a federal-state matching grant program administered by the Department of Health and Human Services’ Centers for Medicare & Medicaid Services (CMS). 
The program provides funding for states to cover children (and in some states pregnant women) who lack health insurance and whose families’ low to moderate income exceeds Medicaid eligibility levels. Each state has a different federal match level based on the Medicaid Federal Medical Assistance Percentage (FMAP), called the enhanced FMAP. Community Development Block Grant (CDBG) program: The Department of Housing and Urban Development provides CDBG funding to communities to develop decent housing, suitable living environments, and economic opportunities for people of low and moderate income. Funds are distributed among communities using a formula based on indicators of community development need. Education State Grants, State Fiscal Stabilization Fund (State Fiscal Stabilization Fund): The State Fiscal Stabilization Fund program is a new one-time appropriation under the American Recovery and Reinvestment Act of 2009. It is administered by the U.S. Department of Education. The funds are intended to help (1) stabilize state and local government budgets in order to minimize and avoid reductions in education and other essential public services; (2) ensure that local educational agencies and public institutions of higher education have the resources to avert cuts and retain teachers and professors; and (3) support the modernization, renovation, and repair of school and college facilities. According to the Department of Education, states participating in the program must provide a commitment to advance essential education reforms to benefit students from early learning through post-secondary education. Federal Transit Formula Grants Programs: Administered by the Department of Transportation’s Federal Transit Administration, these grant programs provide capital and operating assistance for public transit systems. 
Three of the major formula federal assistance programs are the following: (1) the Urbanized Area Formula Program, which makes federal resources available to areas with populations of 50,000 or more and to governors for transit capital and operating assistance and for transportation-related planning; (2) the Nonurbanized Area Formula Program, which provides formula funding to states for the purpose of supporting public transportation in areas with populations of less than 50,000; and (3) the Capital Investment—Fixed Guideway Modernization Program, which may be used for capital projects to maintain, modernize, or improve fixed guideway systems. Head Start: Head Start is administered by the Department of Health and Human Services’ Administration for Children and Families (ACF) and provides grants directly to over 1,600 local agencies. Head Start provides funds for early childhood development services to low-income children and their families. These services include education, health, nutrition, and social services to prepare children to enter kindergarten and to improve the conditions necessary for their success later in school and life. Highway Planning and Construction: The Department of Transportation’s Federal Highway Administration (FHWA) administers the Highway Planning and Construction Program, also known as the federal-aid highway program. According to FHWA, the federal-aid highway program provides federal financial resources and technical assistance to state and local governments for planning, constructing, preserving, and improving federal-aid eligible highways. The federal-aid eligible highway system includes the National Highway System (NHS), a network of about 163,000 miles of roads that comprises only 4 percent of the nation’s total public road mileage but carries approximately 45 percent of the nation’s highway traffic, as well as an additional 1.1 million miles of roads that are not on the NHS but are eligible for federal aid. 
Individuals with Disabilities Education Act (IDEA) Part B: The Department of Education has responsibility for oversight of IDEA and for ensuring that states are complying with the law. IDEA Part B grants provide funding for special education and related services for children and youth ages 3 to 21. IDEA Part B governs how states and public agencies provide special education and related services to more than 6.5 million eligible children and youth with disabilities. To receive IDEA Part B funding, states agree to comply with certain requirements regarding appropriate special education and related services for children with disabilities. Medicaid: CMS provides federal oversight of state Medicaid programs. Medicaid is a health insurance program jointly funded by the federal government and the states. Generally, eligibility for Medicaid is limited to low-income children, pregnant women, parents of dependent children, the elderly, and people with disabilities. The federal government’s share of a state’s expenditures for most Medicaid services is called the Federal Medical Assistance Percentage (FMAP). Federal Medicaid funding to states is not limited, provided the states contribute their share of program expenditures. Section 8 Housing Choice Vouchers: The Section 8 Housing Choice Voucher Program is one of three key rental subsidy programs of the Department of Housing and Urban Development. The program is administered by local public housing agencies and provides rental vouchers to very low-income families to obtain decent, safe, and affordable housing. Following the discontinuation of funds for new construction of public housing and project-based Section 8, the Section 8 Housing Choice Voucher program has been the primary means of providing new rental assistance on a large scale. The program currently serves over 2 million families. 
Temporary Assistance for Needy Families (TANF): TANF is administered by ACF and provides funding to states through four grants: a basic block grant, supplemental grants, and two contingency (recession-related) funds. These grants are intended to (1) provide assistance to needy families with children so they can live in their own homes or relatives’ homes; (2) end parents’ dependence on government benefits through work, job preparation, and marriage; (3) reduce out-of-wedlock pregnancies; and (4) promote the formation and maintenance of two-parent families. Title I Grants to Local Education Agencies (LEA): Title I is administered by the Department of Education and provides financial assistance to LEAs that target funds to the schools with the highest percentages of low-income families. Schools use Title I funds to provide additional academic support and learning opportunities to help low-achieving children master challenging curricula and meet state standards in core academic subjects. Federal funds are currently allocated through four statutory formulas that are based primarily on census poverty estimates and the cost of education in each state, as measured by each state’s expenditure per elementary and secondary student. In addition to the individual named above, Ty Mitchell, Assistant Director; Robert Dinkelmeyer; Gregory Dybalski; Amber G. Edwards; Robert L. Gebhart; Lois Hanshaw; Andrea J. Levine; Victor J. Miller; Melanie H. Papasian; and Tamara F. Stenzel made key contributions to this report. Formula Grants: Census Data Are among Several Factors That Can Affect Funding Allocations. GAO-09-832T. Washington, D.C.: July 9, 2009. 2010 Census: Population Measures Are Important for Federal Funding Allocations. GAO-08-230T. Washington, D.C.: October 27, 2007. Community Development Block Grant Formula: Options for Improving the Targeting of Funds. GAO-06-904T. Washington, D.C.: June 27, 2006. 
Federal Assistance: Illustrative Simulations of Using Statistical Population Estimates for Reallocating Certain Federal Funding. GAO-06-567. Washington, D.C.: June 22, 2006. Community Development Block Grant Formula: Targeting Assistance to High-Need Communities Could Be Enhanced. GAO-05-622T. Washington, D.C.: April 26, 2005. Federal-Aid Highways: Trends, Effect on State Spending, and Options for Future Program Design. GAO-04-802. Washington, D.C.: August 31, 2004. Medicaid Formula: Differences in Funding Ability among States Often Are Widened. GAO-03-620. Washington, D.C.: July 10, 2003. Formula Grants: 2000 Census Redistributes Federal Funding Among States. GAO-03-178. Washington, D.C.: February 24, 2003. Title I Funding: Poor Children Benefit Though Funding Per Poor Child Differs. GAO-02-242. Washington, D.C.: January 31, 2002. Formula Grants: Effects of Adjusted Population Counts on Federal Funding to States. GAO/HEHS-99-69. Washington, D.C.: February 26, 1999. Federal Grants: Design Improvements Could Help Federal Resources Go Further. GAO/AIMD-97-7. Washington, D.C.: December 18, 1996. Block Grants: Characteristics, Experience, and Lessons Learned. GAO/HEHS-95-74. Washington, D.C.: February 9, 1995. Formula Programs: Adjusted Census Data Would Redistribute Small Percentage of Funds to States. GAO/GGD-92-12. Washington, D.C.: November 7, 1991.

Many federal assistance programs are funded by formula grants that have historically relied at least in part on population data from the decennial census and related data to allocate funds. In June 2009, the Census Bureau reported that in fiscal year 2007 the federal government obligated over $446 billion through funding formulas that rely at least in part on census and related data. Funding for federal assistance programs continues to increase. 
The Government Accountability Office (GAO) was asked to determine (1) how much the federal government obligates to the largest federal assistance programs based on the decennial census and related data, and how the Recovery Act changed that amount; and (2) what factors could affect the role of population in grant funding formulas. To answer these objectives, GAO identified the 10 largest federal assistance programs in each of the fiscal years 2008 and 2009 based on data from the President's fiscal year 2010 budget. GAO reviewed statutes, agency reports, and other sources to obtain illustrative examples of how different factors could affect the role of population data in grant funding. GAO's analysis showed that each of the 10 largest federal assistance programs in fiscal years 2008 and 2009 relied at least in part on the decennial census and related data--that is, data from surveys with designs that depend on the decennial census, or statistics, such as per capita income, that are derived from these data. For fiscal year 2008, this totaled about $334.9 billion, representing about 73 percent of total federal assistance. For fiscal year 2009, the estimated obligations of the 10 largest federal assistance programs totaled about $478.3 billion, representing about 84 percent of total federal assistance. This amount included about $122.7 billion funded by the Recovery Act and about $355.6 billion funded by other means. Several factors can affect the role of population in grant funding formulas. When a formula includes variables in addition to total population, the role of population in the grant funding formula is smaller than if the formula relied solely on total population. All of the programs in GAO's review included one or more grants with formulas containing variables other than total population, such as the level of transit service provided. In addition, other factors can modify the amount that a state or local entity would have otherwise received under the formula. 
These factors include (1) hold harmless provisions and caps; (2) small state minimums; and (3) funding floors and ceilings. With the application of these factors, grant funding may be less affected, or entirely unaffected, by changes in population.
To be considered eligible for benefits for either SSI or DI as an adult, a person must be unable to perform any substantial gainful activity by reason of a medically determinable physical or mental impairment that is expected to result in death or that has lasted or can be expected to last for a continuous period of at least 12 months. Work activity is generally considered to be substantial and gainful if the person’s earnings exceed a particular level established by statute and regulations. The process of determining eligibility for SSA disability benefits is complex, fragmented, and expensive. The current decision-making process involves an initial decision and up to three levels of administrative appeals if the claimant is dissatisfied with the decision. The claimant starts the process by filing an application either online, by phone or mail, or in person at any of SSA’s 1,300 field offices. If the claimant meets the non-medical eligibility criteria, the field office staff forwards the claim to one of the 54 federally funded, but primarily state-run, Disability Determination Service (DDS) offices. DDS staff—generally a team composed of disability examiners and medical consultants—obtains and reviews medical and other evidence as needed to assess whether the claimant satisfies program requirements, and makes the initial disability determination. If the claimant is not satisfied with the decision, the claimant may ask the DDS to reconsider its finding. If the claimant is dissatisfied with the reconsideration, the claimant may request a hearing before one of SSA’s federal administrative law judges in an SSA hearing office. If the claimant is still dissatisfied with the decision, the claimant may request a review by SSA’s Appeals Council. The complex and demanding nature of this process is reflected in the relatively high cost of administering the DI and SSI programs. 
Although SSI and DI program benefits account for less than 20 percent of the total benefit payments made by SSA, they consume nearly 55 percent of the annual administrative resources. SSA has experienced difficulty managing its complex disability determination process, and consequently faces problems in ensuring the timeliness, accuracy, and consistency of its disability decisions. Although SSA has made some gains in the short term in improving the timeliness of its decisions, the Commissioner has noted that it still has “a long way to go.” Over the past 5 years, SSA has slightly reduced the average time it takes to obtain a decision on an initial claim from 105 days in fiscal year 1999 to 97 days in fiscal year 2003, and significantly reduced the average time it takes the Appeals Council to consider an appeal of a hearing decision from 458 to 294 days over the same period. However, the average time it takes to receive a decision at the hearings level has increased by almost a month over the same period, from 316 days to 344 days. According to SSA’s strategic plan, these delays place a significant burden on applicants and their families and an enormous drain on agency resources. Lengthy processing times have contributed to a large number of pending claims at both the initial and hearings levels. While the number of initial disability claims pending has risen more than 25 percent over the last 5 years, from about 458,000 in fiscal year 1999 to about 582,000 in fiscal year 2003, the number of pending hearings has increased almost 90 percent over the same time period, from about 312,000 to over 591,000. Some cases that are in the queue for a decision have been pending for a long time. For example, of the 499,000 cases pending in June 2002 at the hearings level, about 346,000 (69 percent) were over 120 days old, 167,000 (33 percent) were over 270 days old, and 88,500 (18 percent) were over 365 days old. 
In addition to the timely processing of claims, SSA has also had difficulty ensuring that decisions regarding a claimant’s eligibility for disability benefits are accurate and consistent across all levels of the decision-making process. For example, the Social Security Advisory Board has reported wide variances in rates of allowances and denials among DDSs, which may indicate that DDSs are applying SSA standards and guidelines differently. In fiscal year 2000, the percentage of DI applicants whose claims were allowed by a DDS ranged from a high of 65 percent in New Hampshire to a low of 31 percent in Texas, with a national average of 45 percent. In addition, the high percentage of claimants awarded benefits upon appeal may indicate that adjudicators at the hearings level are arriving at different decisions on similar cases than the DDSs. In fiscal year 2000, about 40 percent of the applicants whose cases were denied at the initial level appealed, and about two-thirds of those who appealed were awarded benefits. Awards granted on appeal occur in part because decision-makers at the initial level use a different approach to evaluate claims and make decisions than those at the appellate level. In addition, the decision-makers at the appeals level may reach a different decision because the evidence in the case differs from that reviewed by the DDS. We are currently reviewing SSA’s efforts to assess consistency of decision-making between the initial and the hearings levels. Moreover, in 2003, we reported on possible racial disparities in SSA’s disability decision-making at the hearings level from 1997 to 2000 between white and African-American claimants not represented by attorneys. Specifically, among claimants without attorneys, African-American claimants were significantly less likely to be awarded benefits than white claimants. 
We also found that other factors—including the claimant’s sex and income and the presence of a translator at a hearing—had a statistically significant influence on the likelihood of benefits being allowed. In addition to difficulties with the timeliness, accuracy, and consistency of its decision-making process, SSA’s disability programs face the more fundamental challenge of being mired in concepts from the past. SSA’s disability programs remain grounded in an approach that equates impairment with an inability to work despite medical advances and economic and social changes that have redefined the relationship between impairment and the ability to work. Unlike some private sector disability insurers and social insurance systems in other countries, SSA does not incorporate into its initial or continuing eligibility assessment process an evaluation of what is needed for an individual to return to work. In addition, employment assistance that could allow claimants to stay in the workforce or return to work—and thus potentially to remain off the disability rolls—is not offered through DI or SSI until after a claimant has gone through a lengthy determination process and has proven his or her inability to work. Because applicants are either unemployed or only marginally connected to the labor force when they apply for benefits, and remain so during the eligibility determination process, their skills, work habits, and motivation to work are likely to deteriorate during this long wait. In SSA’s most recent attempt to improve its determination process, the Commissioner, in September 2003, set forth a strategy to improve the timeliness and accuracy of disability decisions and foster return to work at all stages of the decision-making process. SSA’s Commissioner has acknowledged that the time it now takes to process disability claims is unacceptable. 
The Commissioner has also recognized that going through such a lengthy process to receive benefits would discourage individuals from attempting to work. To speed decisions for some claimants, the Commissioner plans to initiate an expedited decision for claimants with more easily identifiable disabilities, such as aggressive cancers. Under this new approach, expedited claims would be handled by special units located primarily in SSA’s regional offices. Disability examiners employed by the DDSs to help decide eligibility for disability benefits would be responsible for evaluating the more complex claims. To increase decisional accuracy, among other approaches, the strategy will require DDS examiners to develop more complete documentation of their disability determinations, including explaining the basis for their decisions. The strategy also envisions replacing the current SSA quality control system with a quality review that is intended to provide greater opportunity for identifying problem areas and implementing corrective actions and related training. The Commissioner has predicated the success of her claims process improvement strategy on enhanced automation. In 2000, SSA issued a plan to develop an electronic disability folder and automated case processing systems. According to SSA, the technological investments will result in more complete case files and the associated reduction of many hours in processing claims. SSA also projects that the new electronic process will result in significantly reduced costs related to locating, mailing, and storing paper files. SSA is accelerating the transition to its automated claims process, known as AeDib, which will link together the DDSs, SSA’s field offices, and its Office of Hearings and Appeals. According to the Commissioner, the successful implementation of the automated system is essential for improving the disability process. 
Beyond steps to improve the accuracy and timeliness of disability determinations, the Commissioner’s strategy is also consistent with our 1996 recommendations to develop a comprehensive plan that fosters return to work at all stages of the disability process and integrates, as appropriate: (1) intervening earlier to return workers with disabilities to the workplace, (2) identifying and providing return-to-work services tailored to individual circumstances, and (3) structuring cash and medical benefits to encourage return to work. The Commissioner has proposed a series of demonstrations that would provide assistance to applicants to enhance their productive capacities, thus potentially reducing the need for long-term benefits for some. The demonstrations include early interventions to provide benefits and employment supports to some DI applicants, and temporary allowances to provide immediate, but short-term, cash and medical benefits to applicants who are highly likely to benefit from aggressive medical care. In addition, demonstrations will provide health insurance coverage to certain applicants throughout the disability determination process. While the Commissioner’s proposed approaches for improving the disability determination process appear promising, challenges, including automation, human capital, and workload growth, have the potential to hinder the strategy’s success. Automation. We have expressed concerns about AeDib, which could affect successful implementation of the Commissioner’s strategy. Our recent work noted that SSA had begun its national rollout of this system based on limited pilot testing and without ensuring that all critical problems identified in its pilot testing had been resolved. Further, SSA did not plan to conduct end-to-end testing to evaluate the performance of the system’s interrelated components. 
SSA has maintained that its pilot tests will be sufficient for evaluating the system; however, without ensuring that critical problems have been resolved and conducting end-to-end testing, SSA lacks assurance that the interrelated electronic disability system components will work together successfully. Additionally, while SSA has established processes and procedures to guide its software development, the agency could not provide evidence that it was consistently applying these procedures to the AeDib initiative. Further, while SSA had identified AeDib system and security risks, it had not finalized mitigation strategies. As a result, the agency may not be positioned to effectively prevent circumstances that could impede AeDib’s success. To help improve the potential for AeDib’s success, we have made a number of recommendations to SSA, including that the agency resolve all critical problems identified, conduct end-to-end testing, ensure user concurrence on software validation and systems certifications, and finalize AeDib risk mitigation strategies. Key human capital challenges. We have also expressed concerns about a number of issues surrounding human capital at the DDSs that could adversely affect the Commissioner’s strategy. The more than 6,500 disability examiners in the DDSs who help make initial decisions about eligibility for disability benefits are key to the accuracy and timeliness of SSA’s disability determinations. The critical role played by the DDS examiners will likely become even more demanding in the future if the DDSs are responsible for adjudicating only the more complex claims, as envisioned by the Commissioner. Yet, we recently found that the DDSs face challenges in retaining examiners and enhancing their expertise. High examiner turnover. According to the results of our survey of 52 DDSs, over half of all DDS directors said that examiner turnover was too high. 
We also found that examiner turnover was about twice that of federal employees performing similar work. Nearly two-thirds of all directors reported that turnover had decreased overall staff skill levels and increased examiner caseloads, and over one-half of all directors said that turnover had increased DDS claims-processing times and backlogs. Two-thirds of all DDS directors cited stressful workloads and noncompetitive salaries as major factors that contributed to turnover. Difficulties recruiting staff. More than three-quarters of all DDS directors reported difficulties in recruiting and hiring enough people who could become successful examiners. Of these directors, more than three-quarters reported that such difficulties contributed to decreased accuracy in disability decisions or to increases in job stress, claims-processing times, examiner caseload levels, backlogs, and turnover. More than half of all directors reported that state-imposed compensation limits contributed to these hiring difficulties, and more than a third of all directors attributed hiring difficulties to other state restrictions, such as hiring freezes. Gaps in key knowledge and skill areas. Nearly one-half of all DDS directors said that at least a quarter of their examiners need additional training in areas critical to disability decision-making, such as assessing symptoms and credibility of medical information, weighing medical opinions, and analyzing a person’s ability to function. Over half of all directors cited factors related to high workload levels as obstacles to examiners receiving additional training. Lack of uniform staff standards. SSA has not used its authority to establish uniform human capital standards, such as minimum qualifications for examiners. Currently, requirements for new examiner hires vary substantially among the states. Over one-third of all DDSs can hire new examiners with a high school diploma or less. 
Despite the workforce challenges facing them, a majority of DDSs do not conduct long-term, comprehensive workforce planning. Moreover, SSA’s workforce efforts have not sufficiently addressed current and future DDS human capital challenges. SSA does not link its strategic objectives to a workforce plan that covers the very people who are essential to accomplishing those objectives. While acknowledging the difficulties SSA faces as a federal agency in addressing human capital issues in DDSs that report to 50 state governments, we have recommended that SSA take several steps to address DDS workforce challenges to help ensure that SSA has the workforce with the skills necessary for the Commissioner’s strategy to be successful. These include developing a nationwide strategic workforce plan addressing issues such as turnover in the DDS workforce, gaps between current and required examiner skills, and qualifications for examiners. Future workload growth. According to SSA’s strategic plan, the most significant external factor affecting SSA’s ability to improve service to disability applicants is the expected dramatic growth in the number of applications needing to be processed. Between 2002 and 2012, SSA expects the DI rolls to grow by 35 percent, with applications rising as baby boomers enter their disability-prone years. Over the same period, more modest growth is expected in the SSI rolls. SSA estimates that, between 2002 and 2012, the number of SSI recipients with disabilities will rise by about 16 percent. The challenges SSA faces in keeping up with its workload have already forced agency officials to reduce efforts in some areas. 
For example, the Commissioner explained that in order to avoid increasing the time disability applicants have to wait for a decision, she chose to focus on processing new claims rather than keeping current with Continuing Disability Reviews (CDRs), the periodic reviews of beneficiaries’ cases to ensure they are still eligible for disability benefits. In fiscal year 2003, SSA did not keep current with the projected CDR caseload. The Commissioner says that this situation will continue in fiscal year 2004, despite the potential savings of $10 for every $1 invested in conducting CDRs. However, in reducing the focus on CDRs, not only is SSA forgoing cost savings, but the agency is also compromising the integrity of its disability programs by potentially paying benefits to disability beneficiaries who are no longer eligible to receive them. In closing, as stated earlier, SSA is at a crossroads and faces a number of challenges in its efforts to improve and reorient its disability determination process. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at this time. For further information regarding this testimony, please contact Robert E. Robertson, Director, Education, Workforce, and Income Security at (202) 512-7215, or Shelia Drake, Assistant Director, at (202) 512-7172. Michael Alexander, Barbara Bordelon, Kay Brown, Beverly Crawford, Marissa Jones, Valerie Melvin, Angela Miles, and Carol Dawn Petersen made key contributions to prior work covered by this testimony. Electronic Disability Claims Processing: SSA Needs to Address Risks Associated With Its Accelerated Systems Development Strategy. GAO-04-466. Washington, D.C.: March 2004. Social Security Administration: Strategic Workforce Planning Needed to Address Human Capital Challenges Facing the Disability Determination Services. GAO-04-121. Washington, D.C.: January 27, 2004. 
SSA Disability Decision Making: Additional Steps Needed to Ensure Accuracy and Fairness of Decisions at the Hearings Level. GAO-04-14. Washington, D.C.: November 12, 2003. High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 2003. Major Management Challenges and Program Risks: Social Security Administration. GAO-03-117. Washington, D.C.: January 2003. Social Security Disability: Disappointing Results From SSA’s Efforts to Improve the Disability Claims Process Warrant Immediate Attention. GAO-02-322. Washington, D.C.: February 2002. Social Security Disability: Efforts to Improve Claims Process Have Fallen Short and Further Action Is Needed. GAO-02-826T. Washington, D.C.: June 11, 2002. SSA Disability: Other Programs May Provide Lessons for Improving Return-to-Work Efforts. GAO-01-153. Washington, D.C.: January 12, 2001. SSA Disability Redesign: Actions Needed to Enhance Future Progress. GAO/HEHS-99-25. Washington, D.C.: March 12, 1999. Social Security Disability: SSA Must Hold Itself Accountable for Continued Improvement in Decision-making. GAO/HEHS-97-102. Washington, D.C.: August 12, 1997. SSA Disability: Return-to-Work Strategies from Other Systems May Improve Federal Programs. GAO/HEHS-96-133. Washington, D.C.: July 11, 1996. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Delivering high-quality service to the public in the form of fair, timely, and consistent eligibility decisions for disability benefits is one of SSA’s most pressing challenges. 
This testimony discusses (1) the difficulties SSA faces managing disability claims processing; (2) the outmoded concepts of SSA’s disability program; and (3) the Commissioner’s strategy for improving the disability process and the challenges it faces. SSA is at a crossroads in its efforts to improve and reorient its disability determination process. Although SSA has made some short-term gains in timeliness, its disability decisions continue to take a long time to process: individuals who initially are denied disability benefits and who appeal still have to wait almost an additional year before a final hearing decision is made. In addition, evidence suggests that inconsistencies continue to exist between decisions made at the initial level and those made at the hearings level. Also, SSA’s disability programs are grounded in an outdated concept of disability that has not kept up with medical advances and economic and social changes that have redefined the relationship between impairment and the ability to work. Furthermore, employment assistance that could allow claimants to stay in the workforce or return to work—and thus potentially to remain off the disability rolls—is not offered through DI or SSI until after a claimant has gone through a lengthy determination process and has proven his or her inability to work. Finally, the Commissioner has developed a strategy to improve the disability determination process, including the timeliness and consistency of decisions. 
While this strategy appears promising, we believe that several key challenges have the potential to hinder its progress, including risks to successfully implementing a new electronic disability folder and automated case processing systems; human capital problems, such as high turnover, recruiting difficulties, and gaps in key knowledge and skills among disability examiners; and an expected dramatic growth in workload.
The U.S. agricultural sector—renowned for its productivity—owes much of its success to a continuing flow of improved crop varieties that produce higher yields and better withstand pests, diseases, and climate extremes. The genes necessary for these improved crops are contained in plant germplasm—the material in seeds or other plant parts that controls heredity. To maintain a high level of agricultural productivity, plant breeders need access to an ample supply of germplasm with diverse genetic characteristics so that they can continue to develop plant varieties that will provide increased yields and better resist pests, diseases, and environmental stresses. However, the diversity of germplasm available to present and future generations of breeders has been reduced by several factors, including the widespread use of genetically uniform crops in commercial agriculture and the destruction of natural habitats, such as forests, that have been important sources of germplasm. In the United States, the National Plant Germplasm System (NPGS), primarily administered by the U.S. Department of Agriculture (USDA), maintains germplasm collections for over 85 crops at 22 sites nationwide and in Puerto Rico. These collections contain numerous germplasm samples and provide breeders with access to germplasm with a broad range of genetic traits. In addition to maintaining the collections, NPGS is responsible for acquiring germplasm, developing and documenting information that describes the germplasm in the collections, and distributing germplasm to plant breeders and other users in the United States and worldwide. Germplasm collections are an important source of genetic material for plant breeders targeting specific traits, such as higher yield, increased resistance to disease and pests, good taste, improved nutritional quality, and environmental and climatic hardiness. 
To be of greatest use, these collections need to be genetically diverse, thereby giving breeders more possibilities to find the traits they need to develop improved crop varieties. In addition, information on germplasm traits and other related information (e.g., site of origin of the germplasm) should be obtained and documented, and the germplasm must be adequately preserved to be of optimal use to potential users. Diverse germplasm has played a key role in increasing food security through enhanced crop productivity and reduced crop vulnerability to pests and diseases. For example: According to a survey on the use of germplasm in 18 crops grown in the United States from 1976 to 1980, from 1 percent (sweet clover) to 90 percent (sunflower and tomato) of the crop varieties had been improved in part by the use of germplasm from wild relatives of the cultivated crops. The high productivity of modern wheat—resistant to many pests, diseases, and other stresses—results from combining germplasm from various varieties of wheat grown around the world to create improved wheat varieties. For example, one well-known germplasm sample from Turkey has been a source of resistance for three different types of disease—common bunt, stripe rust, and snow mold. This germplasm also has the ability to establish vigorous seedlings in hot, dry soils that deter the emergence of many other varieties. Most of the genes for insect and disease resistance in tomatoes come from a related wild species that originated outside of the United States. Germplasm from wild species is also a source of tolerance to environmental stress, such as drought. In particular, the discovery of resistance to a soil-borne organism known as the root-knot nematode has made the difference between growing or not growing tomatoes in many subtropical areas of the United States (such as southern California and Florida). 
In addition to providing a source of genetic diversity for plant breeders, germplasm collections serve as an archive for rare and endangered crop species. The loss of biodiversity worldwide has made the need for these collections all the more compelling. Expanding human populations, urbanization, deforestation, destruction of the environment, and other factors threaten many of the world’s plant genetic resources. These resources are vital to the future of agricultural productivity and the world’s food security. Many national and international collections have been established to rescue and conserve these resources for future use. In breeding plant germplasm into a narrowing genetic base of highly productive crop varieties, breeders have also reduced the genetic diversity of these crops, making them more uniform. Genetic uniformity in breeding also results when breeders inadvertently eliminate certain traits (such as resistance to disease and pests) that do not contribute directly to the desired characteristic (such as high yield) for which they were searching. While the resulting genetic uniformity can offer substantial advantages in both the quantity and quality of a commercial crop, it can also make crops more vulnerable to pests, diseases, and environmental hazards. A narrow genetic base presents the potential danger of substantial crop loss if a crop’s genetically uniform characteristics are suddenly and adversely affected by disease, insects, or poor weather. The risk of loss through the genetic vulnerability of uniform, common-origin planted crops is a serious concern. Such losses have occurred in the past. The Irish potato famine of the 1840s was a major factor in the death, impoverishment, and emigration of millions of Irish people. A single variety of the potato became Ireland’s staple food after its arrival from South America in the eighteenth century. 
The widespread use of this single variety increased the potato crop’s vulnerability to a previously unknown blight, which devastated a number of successive potato harvests. While the United States has not experienced such a widespread loss, several sizable crop failures have occurred as a result of a crop’s vulnerability to a particular disease. For example, in the late 1950s and early 1960s, about 70 percent of the wheat crop in the Pacific Northwest was wiped out by a disease known as stripe rust. In 1970, a disease known as the southern corn leaf blight swept from the southeastern United States to the Great Plains, costing farmers 15 percent of their corn crop that year. U.S. agriculture is based on crops that originated from areas outside of the United States. For example, as shown in figure 1.1, corn originated in Mexico and Guatemala, wheat in the Near East (in such countries as Iran), and soybeans in China. Crops of economic importance that are native to the United States are limited and include sunflowers, cranberries, blueberries, strawberries, and pecans. Thus, almost all the germplasm needed to increase the genetic diversity of U.S. agriculture comes from foreign locations. While immigrants to the United States, including the first colonists from Europe, brought seeds with them, native North Americans had already introduced corn, beans, and other crops from Central and South America. Today, to obtain new germplasm for U.S. collections, plant breeders and researchers often rely on collections located in foreign countries or on plant exploration trips to the centers of origin for their crops. Between 1986 and 1996, an estimated 75 percent of the germplasm samples added to NPGS’ collections were obtained from foreign countries. Although plant exploration trips are an important source of germplasm, most of the germplasm in NPGS has been obtained from existing collections both in the United States and in foreign national and international collections. 
Some of the U.S. and foreign collections belong to universities and private companies. Other foreign collections include (1) an international collection based in 16 international agricultural research centers that is administered by the Consultative Group on International Agricultural Research and (2) foreign national collections. The international agricultural research centers, located primarily in developing countries, specialize in research intended to enhance the nutrition and well-being of poor people through sustainable improvements in the productivity of agriculture, forestry, and fisheries. These centers, according to the International Plant Genetic Resources Institute, have together assembled the world’s largest international collection of plant genetic resources for food and agriculture. They account for a significant proportion, possibly over 30 percent, of the world’s unique germplasm samples maintained in collections away from their native environment. The international research centers are funded by voluntary contributions, and their plant germplasm has historically been freely available to any user. Moreover, users have not applied intellectual property protection to the material. The United States works cooperatively with these centers to support international activities to preserve germplasm. For example, U.S. germplasm facilities maintain duplicate collections for some of the international centers to provide for secure backup. In addition, U.S. scientists help various centers screen germplasm for resistance to pests and pathogens and serve in scientific liaison roles between the centers and the U.S. Agency for International Development. Finally, many countries, including most European nations, maintain germplasm collections. These national collections vary considerably in terms of the quality of preservation, organizational structure, the number of crops preserved, and the access provided to requesters. 
One of the largest collections of plant germplasm in the world is maintained at Russia’s Vavilov Institute of Plant Industry, named for Nikolai Vavilov, the Russian scientist who pioneered the study of plant genetic resources. The National Plant Germplasm System is primarily a federally and state-supported effort aimed at maintaining supplies of germplasm with diverse genetic traits for use in breeding and scientific research. While NPGS has been evolving since USDA established its plant-collecting program in 1898, the main components of NPGS were not established until the passage of the Agricultural Marketing Act of 1946. The act also provided a legal basis for state and federal cooperation in managing crop genetic resources. The current organizational structure of NPGS—a geographically dispersed network of germplasm collections administered primarily by USDA’s Agricultural Research Service (ARS)—emerged in the early 1970s. Although ARS provides the lion’s share of support for NPGS, the system is also supported by the agricultural experiment stations at the state level. In addition, private industry provides some support for selected projects and develops and transfers germplasm in the form of plant hybrids and varieties to farmers and other consumers. NPGS’ major activities are (1) acquiring germplasm, (2) developing and documenting information on the germplasm in its collections, and (3) preserving the germplasm. (See table 1.1.) NPGS also distributes samples, free of charge, on request to plant breeders and other scientists. NPGS maintains about 440,000 germplasm samples for over 85 crops. In 1996, NPGS distributed about 106,000 germplasm samples to requesters in the United States and in 94 countries; it received about 7,800 germplasm samples, about 5,000 of which originated in foreign countries. NPGS is responsible for developing characterization information—data on traits such as plant structure and color that are little influenced by the environment. 
However, other information critical to the use of NPGS germplasm and documented in the Germplasm Resources Information Network (GRIN) is generally developed outside of NPGS. (GRIN, a database of NPGS’ holdings, is available to scientists and researchers worldwide.) For example, most evaluation data, which document traits typically affected by environmental conditions (e.g., plant yield and disease resistance), are developed outside of NPGS. These data are particularly important in providing plant breeders with the information they need to select the specific germplasm samples they seek from the sometimes thousands of possible choices offered by NPGS. Passport data, often provided by the person or organization that collected or supplied the germplasm, document the geographic origin and ecological conditions of its site of origin. Other germplasm collections in the United States—beyond NPGS’—are maintained by private companies, institutions such as universities and state agricultural experiment stations, and nonprofit organizations such as the Seed Savers Exchange. Some of these collections, as well as some foreign collections, are not freely available to users of germplasm. Although NPGS could not provide information on the number, size, and condition of all of these collections, they represent a substantial germplasm pool. NPGS maintains collections at 22 sites throughout the United States and in Puerto Rico. In addition, staff at 10 other sites work cooperatively with NPGS but do not receive NPGS funding. NPGS also maintains the National Seed Storage Laboratory (NSSL) and the National Germplasm Resources Laboratory (NGRL). Figure 1.2 shows the locations of these sites and laboratories. While most NPGS collections are maintained at sites that house germplasm for numerous crops, NPGS also has five sites that specialize in crop-specific collections, such as potatoes or soybeans. 
In addition, NPGS has nine sites that are national clonal germplasm repositories and four that maintain genetic stock collections. The four regional plant introduction stations are responsible for maintaining many of the major seed-reproducing species held by NPGS. In total, as of June 1997, they accounted for almost half of the germplasm samples maintained in NPGS collections. NPGS sites generally contain either “backup” or “active” collections, depending on the storage objectives. Backup collections maintain germplasm for long-term conservation, and active collections maintain germplasm for short- to medium-term conservation and distribution. Germplasm is maintained either as seeds or as living plants. The latter category is generally referred to as “clonal” germplasm and includes fruit trees, sugarcane, and strawberries. Clonal germplasm is likely to lose some of its distinct genetic characteristics when reproduced from seed; therefore, it is reproduced asexually from its own plant parts. Clonal germplasm can be costly to preserve. Some fruit trees, for example, may require isolation to prevent loss from pests as well as screened protection and other measures to ensure the normal development of plants or to keep the fruit free of pests. At each site, crop curators and other staff are responsible for maintaining the germplasm collections. Curators regenerate (or replenish) germplasm samples by growing additional plants from seed or other plant parts to ensure that an adequate number of samples are available for (1) distribution to plant breeders, research scientists, and institutions and (2) storage in long-term collections. In the process of regeneration, curators must ensure that each plant generation is as genetically similar to its predecessor as possible. During regeneration, curators also document certain plant characteristics (such as plant height and color) if this information is not already available. 
Curators and other staff are responsible for entering information about each germplasm sample into GRIN. The National Seed Storage Laboratory (NSSL) at Fort Collins, Colorado, maintains the long-term backup collection of seeds for NPGS and some non-NPGS collections located in the United States and foreign countries and conducts research on preserving plant germplasm. NSSL’s storage facilities were modernized and expanded fourfold in 1992, with high-security vaults to protect the germplasm against natural disasters. The collection duplicates (or backs up) many of the germplasm samples in NPGS’ active collections in the event that the germplasm kept in active collections is lost. Germplasm can be lost for a variety of reasons, including natural disasters or degeneration through inadequate storage. Seeds preserved at NSSL are kept in colder, more secure conditions (i.e., sealed, moisture-proof containers in vaults at –18 degrees Celsius or containers over liquid nitrogen at –160 degrees Celsius) that preserve them longer than seeds preserved at many active sites. With few exceptions, such as apple buds that can be preserved in liquid nitrogen, NSSL does not back up clonal germplasm. Clonal collections may be backed up—in greenhouses, as tissue culture, or through cryopreservation—at the same sites as their active collections. The National Germplasm Resources Laboratory, located in Beltsville, Maryland, contains several units that support NPGS. The Plant Exchange Office—with extensive input from the Crop Germplasm Committees (CGC) and NPGS’ crop curators—is responsible for setting priorities for the germplasm needs of NPGS’ collections. Furthermore, the Office coordinates plant exploration trips, facilitates germplasm exchanges with other collections, and documents the entry of germplasm into NPGS, including its passport data. 
In addition, the Germplasm Resources Information Network/Database Management Unit manages GRIN, NPGS’ database, which provides information for users and managers, such as passport information on NPGS samples. ARS’ Plant Germplasm Quarantine Office works with USDA’s Animal and Plant Health Inspection Service (APHIS) in administering the National Plant Germplasm Quarantine Center in Beltsville, Maryland. These sites test specific types of imported germplasm for pests and pathogens before the germplasm is introduced into the United States. All plant germplasm coming into the United States must comply with federal quarantine regulations intended to prevent the introduction of pests and pathogens that are not widespread in the United States. APHIS writes, interprets, and enforces quarantine regulations, while ARS is generally responsible for providing research support, including the development of tests for pests and pathogens. In addition, ARS, through a 1986 Memorandum of Understanding with APHIS, maintains and tests germplasm that falls into the “prohibited” quarantine category. NPGS’ activities are supported at the federal level primarily by ARS, with additional support provided by states’ land grant universities through their agricultural experiment stations. Many of NPGS’ collections have been jointly developed and maintained by federal and state scientists at states’ agricultural experiment stations, and most NPGS sites are located on experiment station properties. State universities provide in-kind support in the form of services, personnel, and facilities. In addition, private industry provides limited support, such as regenerating germplasm at company sites or funding special projects. In fiscal year 1996, NPGS funding was $23.3 million. 
Of this amount, $19.5 million was provided by ARS; $1.4 million by USDA’s Cooperative State Research, Education, and Extension Service; $1.3 million by APHIS; $0.8 million (in-kind support) from the states’ agricultural experiment stations; and $0.3 million from other nonfederal sources. Included in the ARS funding was $3.9 million for plant collection activities—germplasm acquisition, quarantine, and classification—and $15.6 million for such activities as preservation, documentation, and distribution. From fiscal years 1992 through 1996, ARS’ funding for NPGS was essentially level; however, calculated in constant dollars, funding declined by 14 percent. Over the same period, NPGS’ germplasm collections increased by 10 percent. While ARS has the primary responsibility for managing NPGS, no single individual or entity has overall authority for managing the entire system. Within ARS, numerous officials and committees have different levels of authority and responsibility for components of the system. ARS’ National Program Leader for Plant Genetic Resources has a broad range of leadership responsibilities for the system, including developing budget proposals, planning resource allocations among the NPGS sites, and addressing international issues affecting germplasm. The program leader also participates in and is advised by various groups that make recommendations concerning NPGS’ operations and policies. The program leader, however, has limited authority for the budgets, projects, or management of each NPGS site. Responsibility for these activities rests with (1) ARS’ area directors, who have direct oversight responsibility and authority for the NPGS sites located within their areas of jurisdiction, (2) NPGS’ site leaders, and (3) ARS’ national program staff. 
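The fiscal year 1996 funding figures reported above can be cross-checked with a short script; the figures (in millions of dollars) are taken directly from the text, while the variable and label names are ours:

```python
# Reported sources of NPGS funding, fiscal year 1996, in millions of dollars.
# Labels are our shorthand for the entities named in the report.
sources = {
    "ARS": 19.5,
    "Cooperative State Research, Education, and Extension Service": 1.4,
    "APHIS": 1.3,
    "state agricultural experiment stations (in-kind)": 0.8,
    "other nonfederal sources": 0.3,
}

# The components should sum to the $23.3 million total reported for FY 1996.
total = round(sum(sources.values()), 1)
print(f"Total NPGS funding, FY1996: ${total} million")  # $23.3 million

# Within the ARS contribution, the two activity categories reported
# ($3.9 million for collection; $15.6 million for preservation and related
# activities) should sum to the ARS figure.
ars_collection = 3.9     # acquisition, quarantine, classification
ars_preservation = 15.6  # preservation, documentation, distribution
assert round(ars_collection + ars_preservation, 1) == sources["ARS"]
```

Both checks hold: the five sources sum to the $23.3 million total, and the two ARS activity categories sum to ARS’ $19.5 million contribution.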
In particular, the area directors coordinate some site program reviews, conduct performance ratings for key administrative staff, hire personnel, and manage discretionary funding for NPGS sites located in their jurisdiction. Because of the broad array of crops represented in NPGS’ collections—each requiring specific scientific and technical expertise—NPGS relies on 40 Crop Germplasm Committees (CGC) to provide expert advice on technical matters relating to germplasm activities. Among other things, the CGCs are expected to provide recommendations on the management of the germplasm collections for their crops, including setting priorities for acquisition and evaluation research. CGC members—representing ARS, universities, and the private sector—include plant breeders, NPGS curators, pathologists, and other scientists who are experts on specific crops. A crop committee can represent one crop group or several. For example, the soybean CGC provides advice on soybeans, while the leafy vegetable CGC is responsible for lettuce, spinach, chicory, and celery. (See app. III for a listing of the CGCs and the crops for which they are responsible.) These committees generally meet about once a year and issue reports on the status of their respective collections. However, they receive no funding for their work or related expenses. GAO and National Research Council reports, dating as far back as 1981, have cited management and organizational shortcomings and needs that have hindered NPGS’ overall effectiveness. In 1981, for example, GAO concluded that insufficient management attention by USDA to germplasm collection, storage, and maintenance had endangered the preservation of germplasm in the United States. Another GAO report, issued earlier that year, recommended that USDA centralize control over the Department’s genetic resources and develop a comprehensive plan for their use. 
In 1990, GAO reported that ARS had difficulty in setting priorities and allocating funding among the various plant germplasm management activities. In a comprehensive evaluation of NPGS issued in 1991, the National Research Council concluded that NPGS had no discernible structure and organization for managing and setting priorities for its activities, formulating national policies, or developing budgets to act on emerging priorities. The Council made many recommendations, including that USDA strengthen NPGS by centralizing its management and budgeting functions and by establishing clear goals and policies for NPGS’ leadership to use in developing long-range plans. Other recommendations included expanding the capacity of NSSL and providing financial support to the CGCs. During the 1990s, USDA has made several changes to address some of the operational shortcomings discussed above. In particular, it has expanded NSSL’s long-term, secure storage facility fourfold. Furthermore, NPGS’ sites with active collections are making greater use of –18 degree Celsius storage to improve germplasm preservation. In addition, NPGS’ GRIN database has been substantially improved by the addition of such features as a new search function and access to users through the Internet. We surveyed the members of the 40 CGCs for their views on the sufficiency of NPGS’ principal activities—acquiring germplasm to ensure the diversity of the collections in order to reduce crop vulnerability, developing and documenting information on germplasm, and preserving germplasm. Specifically, we surveyed the 680 members of the CGCs—including 38 additional experts identified by USDA. The median CGC response rate was 86 percent, and all NPGS curators participated in the survey. We conducted this survey from November 1996 through March 1997. 
In addition, we obtained information about NPGS’ major activities—acquisition, development and documentation of information, and preservation—from interviews with the following: two acting National Program Leaders for Plant Genetic Resources; several NGRL officials responsible for plant exploration, quarantine, and GRIN; the Director, National Plant Germplasm Quarantine Center, APHIS; the Director and research leaders, NSSL; the site leaders of the four regional plant introduction stations and the Davis, California clonal repository; a number of curators and breeders at various NPGS sites; and ARS budget staff. We visited NGRL and APHIS officials in Beltsville, Maryland; two of the four regional plant introduction stations (Ames, Iowa, and Griffin, Georgia); the National Soybean Collection, Urbana, Illinois; and NSSL in Fort Collins, Colorado. We also interviewed officials from USDA’s Economic Research Service; Pioneer Hi-Bred International, Inc., a large seed producer; the Department of State; and the Agency for International Development. In addition, we reviewed (1) NPGS program documents, including planning and budget documents; (2) acquisition and preservation data (based on GRIN data) provided to us by NGRL officials, as well as preservation data provided by officials from the four plant introduction stations; (3) CGC reports; (4) site and program reviews; and (5) documents from the Food and Agriculture Organization of the United Nations and from international sources related to germplasm access. We did not verify the accuracy and reliability of the data provided by NPGS. We conducted our review from July 1996 through September 1997 in accordance with generally accepted government auditing standards. We provided USDA with a draft of our report for review and comment. These comments and our response to them are in appendix IV. 
Most CGCs reported that the overall diversity in freely available germplasm collections—including NPGS’—is sufficient for reducing their crops’ vulnerability. Nonetheless, they ranked the acquisition of additional germplasm as a top priority for NPGS, thereby underscoring the importance they place on having maximum genetic diversity in NPGS’ collections. A number of issues may be contributing to the CGCs’ emphasis on acquiring germplasm for the NPGS collection. For example, most CGCs said that at least one of the four types of germplasm that generally constitute their collections is inadequate; each type contains genetic material that plays an important role in a collection’s overall diversity. Most CGCs considered acquiring more germplasm to be a top priority; however, problems with some countries have hindered access to potential sources of new germplasm in those areas. In addition, certain provisions in the Convention on Biological Diversity, which entered into force in 1993, may place constraints on the use of and access to some foreign germplasm in the future. Even when NPGS acquires new germplasm, its release to breeders and research scientists has sometimes been delayed as a result of problems in USDA’s management of the quarantine process. According to many CGCs whose germplasm generally undergoes the most intensive quarantine testing, the process has resulted in the delayed release and, to a lesser extent, the loss of some germplasm. When all freely available collections were taken into account, almost three-quarters of the CGCs reported that these collections are sufficiently diverse for reducing the vulnerability of their crops. For the NPGS collections alone, just over half the CGCs reported that the genetic diversity of their NPGS collections is sufficient to reduce crop vulnerability. 
Nonetheless, the CGCs overall viewed the acquisition of additional germplasm as a top NPGS priority—out of 14 germplasm-related activities—in the event of additional funding. Several concerns highlighted by the CGCs in our survey may contribute to the importance they place on increased acquisition. These concerns include the lack of diversity within specific parts of their collections and the potential loss of germplasm that is endangered in nature or in at-risk collections (e.g., collections of scientists who are retiring). When all freely available collections (including NPGS’) were considered, 29 of the 40 CGCs reported that the genetic diversity in the collections for their crops is sufficient for reducing crop vulnerability. Major crops—such as corn, wheat, and soybeans—are in this category. The sufficiency of the collections declined somewhat when only NPGS collections were considered: Twenty-two CGCs, or just over half, reported that the NPGS collections for their crops have sufficient genetic diversity overall to reduce crop vulnerability. (See fig. 2.1.) Nine CGCs said that the genetic diversity of the NPGS collection for their crops is insufficient for reducing crop vulnerability: grapes, cool season food legumes, sweet potatoes, cucurbits (e.g., squash and melons), tropical fruit and nut, walnuts, herbaceous ornamentals, prunus (e.g., peach and cherry trees), and woody landscape. In addition, nine CGCs said that their collections have neither sufficient nor insufficient diversity. While over half the CGCs believed that the genetic diversity of their NPGS germplasm collections for their crops is sufficient, they all reported that it is moderately to extremely important to increase the diversity of their NPGS collections. 
The importance the CGCs placed on increasing diversity is underscored by the high priority given to germplasm acquisition in the event of additional funding—of 14 germplasm-related activities, the CGCs, on average, gave acquisition the highest ranking. (Fig. 2.2 shows the average ranking that CGCs gave to each activity, with 1 being the highest possible ranking.) All 40 CGCs stated that they knew of germplasm samples that would increase the genetic diversity of the NPGS collections and that should be added to them. For example, the Wheat CGC’s 1996 report to NPGS cited three critical collection needs for the NPGS wheat collection and specified where much of this germplasm could be obtained, including landraces (seeds passed down by farmers from one generation to another to produce desired plant characteristics) from Guatemala, where they have not been collected before, and wild wheat relatives from Albania, Greece, and the former Yugoslavia. Similarly, the Sweet Potato CGC wanted to enhance the limited genetic diversity of the NPGS sweet potato collection by obtaining a representative sample of germplasm from the International Potato Center in Peru. This collection contains about 6,500 germplasm samples of sweet potato, compared with about 1,170 in the NPGS collection. Although most CGCs reported that their NPGS collections overall are sufficiently diverse at this time, they cited several concerns with the collections that may account for the importance they place on increased acquisition. First, most CGCs reported that at least one of the following types of germplasm in their collections is insufficiently diverse for reducing crop vulnerability: wild and weedy relatives of cultivated crops, landraces, and genetic stocks. Only obsolete and current cultivars, the fourth type of germplasm samples in a collection, are considered to be sufficient by most CGCs. 
Specifically:

- Wild and weedy relatives of crops were reported to be insufficient by almost half the CGCs, including those for major crops such as corn and soybeans. Wild relatives have often been used to improve crops, such as tomatoes, and sometimes to develop new ones.
- Landraces—many of which are grown from selected quality seed passed down by farmers from one generation to another—were reported to be insufficient by 12 of the 40 CGCs. Landraces are rich sources of genes for traits such as resistance to pests and pathogens.
- Genetic stocks are insufficiently diverse, according to over half the CGCs, including those for major crops such as alfalfa, peanuts, and grapes. While genetic stock material is essential to genetic research, according to NPGS officials, it has generally played a minor role in commercial breeding programs. However, it is expected to become increasingly important in breeding programs that use molecular genetic tools to manipulate and transfer genes to create new products, according to the National Research Council.
- Obsolete and current cultivars are sufficient for reducing the vulnerability of their crops, according to most CGCs. Only five CGCs cited insufficiencies in this area.

Furthermore, 39 CGCs said that NPGS should place increased emphasis on acquiring germplasm endangered in nature or acquiring germplasm from collections at risk, such as the Vavilov collection in Russia or the collections of scientists who are retiring. If such collections are not obtained and preserved, their germplasm may be lost. Finally, 37 CGCs reported that certain plants are becoming extinct or hard to find. NPGS’ acquisition policy is to rely heavily on the 40 CGCs and the NPGS curators to assess the adequacy of their respective germplasm collections and recommend areas where additional acquisition may be needed. 
However, NPGS has not developed a comprehensive, long-term plan to establish critical acquisition needs for its germplasm collections and priorities for collection trips to fill those needs. Currently, NPGS’ collection trips are based primarily on proposals that are submitted to NPGS’ Plant Exchange Office by federal and university scientists and endorsed by the appropriate CGCs. In addition, staff from the Plant Exchange Office occasionally make or participate in collection trips. However, some exploration trips are funded by other USDA or non-USDA federal agencies. According to NPGS officials in the Plant Exchange Office, some germplasm collections are more frequently targeted for collection trips than others because (1) the gaps in some collections are better known and (2) some collections have more assertive champions—e.g., a germplasm curator, CGC, or other interested scientist who aggressively seeks out collection opportunities. This approach may overlook the needs of some crops. For example, according to the head of the Plant Exchange Office, 16 of the CGCs’ reports state acquisition needs only in a general fashion and therefore are of limited value for planning or setting acquisition priorities. The head of the office acknowledged the need to develop a long-term plan that would reflect collection priorities for each crop. He noted that such a plan would use existing funds more efficiently and help ensure that the needs of all crops are being addressed. NPGS has been working to develop such a plan for several years, but progress has been slow because the office has lacked the resources to adequately staff the project and provide needed scientific expertise. The initial plan, which is intended to be flexible to accommodate changing needs and conditions, is expected to be completed by spring 1998. Concerns about NPGS’ acquisition planning process are long-standing. 
For example, over 15 years ago, GAO recommended that a long-range plan be developed to address gaps in germplasm collections and objectives for collecting or otherwise acquiring needed germplasm. In 1991, the National Research Council recommended, among other things, that NPGS develop a comprehensive plan for plant exploration. The Council noted that in the past, the lack of an exploration plan resulted in some crops receiving attention, while others went unserved. Although CGCs want to acquire more germplasm, most reported that difficulties between the United States and some foreign countries have hindered NPGS’ efforts to obtain the germplasm needed to increase the diversity of its collections. For example, the Soybean CGC report indicated that relations between the United States and North Korea have hindered the CGC from obtaining germplasm from North Korea. The report stated that the few soybean germplasm samples from North Korea in NPGS’ collection were either obtained more than 60 years ago or have been received since then through third parties. Several other CGC reports—including those for sugarbeets, peas, and wheat—cited difficulties in obtaining germplasm from the Middle East. The Wheat CGC, for example, noted that Iran, a country with which the United States does not have diplomatic relations, holds potentially valuable wheat germplasm. In addition, issues relating to the ownership and use of foreign germplasm have become more problematic as a result of the entry into force of the Convention on Biological Diversity in 1993. Prior to the Convention, germplasm from most countries, other than those where access was restricted, had generally been available to requesters. However, the Convention recognizes the sovereign rights of nations over their natural resources and their rights to exchange these resources under terms mutually agreeable to the nation and the germplasm recipient. 
Officials from NPGS, the State Department, the Agency for International Development, and the World Bank observed that access to plant germplasm could be reduced as a result of these provisions but that the full impact of the Convention may be unknown for a number of years. However, one likely result of the Convention will be the increased use of material transfer agreements—contracts that require germplasm users to agree to certain conditions in exchange for the use of the germplasm. These agreements may require, for example, that the requester not seek intellectual property rights or claim ownership over the germplasm. USDA officials will sign material transfer agreements only if their terms are consistent with NPGS’ policy to provide users with free and open access to germplasm. A number of problems related primarily to USDA’s overall management of the germplasm quarantine program have hampered the program’s effectiveness and resulted in delays in the release of some germplasm. While most CGCs reported that U.S. quarantine regulations and processes have been effective in reducing the introduction of pests and pathogens into the United States, 13 CGCs, most of whose germplasm often undergoes more intensive scrutiny in quarantine, reported problems with the timeliness of the quarantine process, and 5 reported problems with the release of viable germplasm. While the CGC for prunus (e.g., cherry and peach trees) reported that USDA’s quarantine regulations and processes have been very ineffective in both of the above areas, CGCs for crops such as apples, pears, potatoes, and corn also reported problems. All plant germplasm coming into the United States must comply with federal quarantine regulations intended to prevent the introduction of pests and pathogens not widespread in the United States. 
These regulations range from a category requiring only visual inspection at the port of entry for germplasm such as the seeds of most vegetables and flowers, to a category—known as “prohibited”—requiring that the germplasm be sent to a quarantine facility for testing or observation before release. Although less than 3 percent of the world’s plant species are in this latter category, it includes a wide range of crops: all or most clonally propagated prunus, apples, pears, potatoes, sugarcane, strawberries, sweet potatoes, grapes, certain woody landscape plants, and grasses as well as the seeds of wheat, corn, and rice from some regions where there are serious diseases not already in the United States. Thirteen CGCs—most of whose germplasm is often in the prohibited category—reported that USDA’s management of the quarantine process hinders the timely acquisition of viable germplasm. In addition, ARS officials told us that some germplasm has died while in quarantine because it was poorly maintained. The specific types of problems identified by the CGCs, ARS and APHIS officials, and ARS reviews included (1) poor production practices during quarantine, (2) inadequate facilities or sites, and (3) the types of testing procedures that are currently in use. Eleven CGCs, representing such germplasm collections as prunus, apples, pears, potatoes, and sweet potatoes, reported that poor crop production practices—such as inadequate watering, soil preparation, and weeding—during quarantine hinder the timely acquisition of viable germplasm. Furthermore, an internal review of tree-growing practices at the Maryland quarantine facility, conducted in 1996 by a horticultural scientist at the request of ARS, noted the death of several thousand fruit trees planted between 1993 and 1995. The review cited improper horticultural practices as a major cause of many of the deaths and recommended improved practices. 
When trees in quarantine are not properly maintained, they may die and their germplasm will need to be imported again. For example, an ARS scientist at the quarantine office estimated that about 20 percent of all prunus germplasm samples brought into the country in the past 10 years had died because they did not receive proper horticultural care. In addition, poor production practices have kept trees from maturing sufficiently to permit testing, thereby delaying the release of germplasm. Such delays have occurred with the germplasm of prunus, apple, pear, and quince trees. For example, since 1991, the release of hundreds of germplasm samples for apple, pear, quince, and prunus trees has been delayed as a result of inadequate horticultural practices, according to the ARS scientists at the quarantine office who test and monitor these trees. Delays for most of the clonal apple, pear, and quince germplasm have been about 8 to 10 years. Furthermore, the average time for the unconditional release of prunus germplasm at the Maryland quarantine facility has been about 10 years; however, generally no more than 4 years should be required, according to APHIS officials. ARS officials expect that the agency will not unconditionally release apple, pear, quince, or prunus clonal material until the year 2000 or later because of horticultural practices that have resulted in the lack of mature trees needed for testing. Thirteen CGCs—including those for prunus, pears, corn, and rice—reported that conditions at the quarantine facilities used to grow their plants hinder the timely release of viable germplasm. Problems with quarantine facilities were also reported in ARS reviews in 1994 and 1996. The 1996 review stated that conditions at the quarantine facilities in Maryland were not conducive to promoting plant health. 
For example, it noted that the Maryland site’s soil was unsuitable for growing trees and recommended the installation of space heaters in the screenhouses to keep the temperature slightly above freezing. In addition, a plant breeder on the pear CGC said that the Beltsville facility is not ideal for pears or apples because the climate of the mid-Atlantic region is conducive to the development of fire blight, a serious bacterial disease that is difficult to control once trees are infected. Sixteen CGCs—including the prunus, apple, pear, corn, wheat, rice, and potato CGCs—reported that required testing procedures hinder the timely acquisition (e.g., introduction and distribution) of viable germplasm for their crops. While ARS is responsible for developing new tests, APHIS must approve the tests that are used as well as the release of germplasm from quarantine. Nearly all of the quarantine testing procedures currently in use date back to the early 1980s or before. These procedures involve testing for pathogens such as viruses and other infectious agents. For some crops, testing begins by closely observing the quarantined plants for symptoms of disease during plant growth and subjecting the plants to a battery of tests for latent pathogens. Some tests for trees can take considerable time because the tree must first bear fruit before tests can be completed. For example, tests on prunus trees generally require a minimum of about 3, and no more than 4, years to complete, according to APHIS officials. More sophisticated testing methods using molecular techniques to identify pathogens are being developed, and some are already available. These tests could save considerable time in quarantine as well as the costs associated with caring for the plants during that time. Such tests could also curtail the loss of germplasm that is associated with longer quarantine periods, according to APHIS and ARS officials. 
ARS has developed, and APHIS has approved, molecular tests for potato viruses; these tests have cut quarantine testing from 2 years to 1, according to an ARS scientist. In addition, APHIS is currently reviewing newly developed molecular tests for detecting certain diseases in prunus that would allow the conditional release of prunus in about 18 months, on average. ARS is also working on the development of molecular tests for certain sweet potato pathogens. However, some plant breeders are concerned that the development and approval of new testing methods have been unduly slow. A 1994 review of the germplasm quarantine office, conducted by ARS and university scientists at the request of ARS, noted that virtually all popular new apple and pear tree clones of foreign origin enter the United States illegally, without pathogen testing. It stated that both ARS and APHIS needed to adopt policies that would make pathogen testing more responsive to the needs of the deciduous fruit industry and its associated germplasm collections and CGCs. According to most CGCs, NPGS collections for their crops lack sufficient information on germplasm traits to facilitate the germplasm’s use in crop breeding. Specifically, these CGCs raised concerns about two types of information—evaluation and characterization. Evaluation information describes traits (such as yield and resistance to disease) of particular interest to plant breeders, while characterization information describes traits (such as plant structure, seed type, and color) that are little influenced by environmental conditions. Most CGCs reported that passport data—a third type of information that describes, among other things, the site of origin of the germplasm—are sufficient for breeding crops. 
NPGS officials acknowledged that gaps exist in needed information, in part because the information has not been developed and in part because the information that has been developed has not always been entered into NPGS’ centralized database—the Germplasm Resources Information Network (GRIN). They noted, however, that given their limited resources, the day-to-day tasks of preserving germplasm to maintain its viability take precedence over developing and documenting information. Three-quarters of the CGCs reported insufficiencies with evaluation information, and almost half found characterization information insufficient for crop-breeding purposes. On the other hand, most CGCs reported that passport information is sufficient for crop-breeding purposes. Several NPGS managers told us, however, that passport information—particularly for older samples—is not adequate for NPGS’ internal planning and management. Breeders need comprehensive evaluation information to select germplasm with the traits they are seeking from the myriad of germplasm samples. According to the National Research Council, evaluation is a prerequisite for the use of germplasm—germplasm samples that are not evaluated remain mostly curiosities. In developing evaluation data, scientists test germplasm samples for various traits under a wide range of conditions. Although the preliminary evaluation of traits is generally considered an NPGS activity, most evaluations are part of the research that accompanies breeding programs and are conducted and funded primarily through other ARS programs and universities. In addition, industry conducts and funds a small amount of germplasm evaluation for NPGS. Thirty of the 40 CGCs reported that the evaluation information on their NPGS collections is somewhat or very insufficient for crop breeding, and only 3 reported that it is somewhat sufficient—the alfalfa, sugarbeets, and tropical fruit and nut CGCs. 
Figure 3.1 shows the sufficiency of evaluation information, as reported by the 40 CGCs. The CGCs reported that the trait most likely to have been evaluated—of the five traits we asked for their views on—is “resistance to pests and pathogens considered to be a serious risk.” Even so, less than half the CGCs reported that their germplasm has been evaluated to a moderate extent for this trait, and only one reported evaluation to a great extent. For the remaining four evaluation traits, 35 to 38 CGCs reported their germplasm had been evaluated only to some, little, or no extent. These traits include tolerance to abiotic stresses, such as salt or drought, considered a serious risk; quality characteristics, such as flavor or appearance; production characteristics, such as yield; and root stock traits (root stocks are used in grafting clonal crops). (See fig. 3.2.) While identifying shortcomings in the evaluation information, almost half of the CGCs said that NPGS’ management of evaluation data has improved since about 1990. (In addition, 20 CGCs said that there has been no change, and 1 said it has worsened.) Characterization information is developed at the NPGS site where each germplasm sample is maintained. It is generally the responsibility of NPGS curators to develop characterization information when they regenerate germplasm samples. Nineteen of the 40 CGCs reported that characterization information on their NPGS germplasm is somewhat or very insufficient for crop breeding. These 19 included the committees for some economically important crops, such as cotton, grapes, and peanuts. Only nine CGCs reported that characterization information for their crops’ germplasm is somewhat sufficient for breeding. Figure 3.3 shows the sufficiency of characterization information, as reported by the CGCs. In addition, over half the CGCs said that NPGS’ management of characterization data has improved since 1990. Passport information includes the data on the plant’s classification, the location of the germplasm sample’s origin, and the ecology of that location. 
This information is essential for assessing the quality of the collections and for using and managing these collections. NPGS uses the data to ensure, for example, that it does not unnecessarily collect samples that have previously been collected from the same location. Passport data are generally the first data obtained on a new germplasm sample and are often provided by the donor when the germplasm is given to NPGS. However, much germplasm is donated to NPGS without complete passport information. Although NPGS’ passport information may be incomplete, the CGCs were considerably more positive about the passport information than about either evaluation or characterization information. As shown in figure 3.4, almost three-quarters of the CGCs reported that passport information for their crops is somewhat or very sufficient for crop-breeding purposes. Only five CGCs reported passport information to be somewhat insufficient for breeding. Furthermore, three-quarters of the CGCs said that NPGS’ management of passport data has improved since about 1990. Although most CGCs found passport information to be somewhat or very sufficient for crop-breeding purposes, NPGS officials told us that it is not sufficient for their internal planning for germplasm acquisition. About two-thirds of NPGS’ samples lack passport data on the location of origin, according to the GRIN data provided by NPGS officials. This information is key to pinpointing areas where germplasm has already been collected, thereby minimizing the possibility of unnecessarily collecting material already in the NPGS collection. Origin information also assists in targeting sites for future collection trips. Furthermore, according to NPGS officials, even when location information is available, it is sometimes inaccurate or incomplete. GRIN data, for example, show that 90 percent of NPGS’ samples have no information on the latitude and longitude of the site of origin. 
Incomplete passport information also makes it more difficult for curators to determine which samples are unique and which are duplicates. Identification of duplicate samples is necessary to avoid needless duplication of costly germplasm-related activities, such as preservation, characterization, and evaluation. Curators for about half of the crop collections reported that it is moderately to extremely important to decrease the duplication of samples in their NPGS collection. For example, the sorghum curator estimated that about 10 to 25 percent or more of the samples in the sorghum collection are duplicates. He added that the elimination of these duplicates would be expensive and time-consuming because many samples lack complete passport data.

While some information has not been developed because of resource constraints, even data that have been developed have not always been entered into GRIN. NPGS officials told us that developing, obtaining, and documenting information in GRIN are lower priorities than preserving the germplasm collections, and in some cases, these activities are outside the system’s control. Thirty-nine CGCs estimated that, on average, 50 percent of existing, useful evaluation data on their collections are not in GRIN. According to the NPGS managers of several sites and ARS officials who oversee crop-specific research programs, gaps in evaluation data for NPGS germplasm result from a variety of factors, including the large amount of germplasm that needs to be evaluated, the resource-intensive nature of evaluations, and limited resources. In addition, most germplasm evaluations are conducted outside of NPGS, primarily by ARS and university scientists who do not always provide NPGS with the resulting information for entry into GRIN. Thus, even when evaluation data exist, they are not always available through GRIN. Some scientists who conduct germplasm evaluations are funded by ARS and are required to submit their evaluation results to NPGS.
However, other scientists, not funded by ARS, conduct evaluations as part of their larger research objectives. According to a former National Program Leader for Plant Genetic Resources, some of these evaluations merit inclusion in GRIN; however, he said that NPGS does not have a clear policy on the curators’ responsibility in obtaining this information. Several CGC reports developed for NPGS have identified the need to enter additional evaluation information into GRIN. For example, the 1996 corn CGC report stated that much evaluation data had accumulated without being entered into GRIN or otherwise disseminated. Furthermore, according to the 1996 CGC report for cucurbits (e.g., squash, watermelon, cucumbers), NPGS has had relatively few requests for watermelon germplasm, in part because of the lack of relevant evaluation data in GRIN. In addition, NPGS does not have a process for tracking whether scientists under agreement with ARS to evaluate NPGS germplasm have submitted evaluation data for entry into GRIN. As a result, NPGS has little assurance that the results of these ARS-supported evaluations are entered into GRIN. While several NPGS managers said they believe that most of this information is in GRIN, NPGS is nonetheless developing a system to track the information. The system is expected to be completed by early 1998.

Finally, some passport information—for example, the location of origin—cannot be developed because the germplasm samples were provided many years ago, and it would be very difficult or impossible to reconstruct the missing data. In addition, some passport information may be available but has not been added to GRIN.

Although GRIN may not have complete data, 36 CGCs reported that it effectively provides information about their NPGS germplasm collections. Thirty-seven CGCs reported that NPGS’ management of GRIN had improved since about 1990, making it the NPGS activity that was cited most frequently as having improved.
According to several NPGS officials responsible for managing germplasm activities, preserving germplasm to keep it viable is of more fundamental importance than developing information and making it available. In addition, the total number of germplasm samples in NPGS’ collections has increased about 29 percent from 1986 through 1996, according to the GRIN data provided by an NPGS official. With larger collections come greater demands on curators’ time and resources. Therefore, the development and documentation of characterization information, which is done primarily by NPGS curators, occurs only as time permits. A case in point is the cucurbit collection. The CGC for cucurbits reported that characterization and evaluation information is insufficient for breeding of its crops. However, the curators for these crops reported that some cucurbit regeneration backlogs had increased and that between 5 and 40 years would be required to regenerate various parts of this collection given current resources.

Preservation activities—including viability testing, germplasm regeneration, and secure, long-term backup storage of germplasm—have not kept pace with the preservation needs of the collections. First, only minimal viability testing—testing that determines the amount of live germplasm in a sample—has been conducted at some sites, including two plant introduction stations that account for over one-fourth of NPGS’ germplasm samples. Viability testing is needed to determine when germplasm should be reproduced to prevent the loss of the sample. Second, NPGS has significant backlogs for regenerating germplasm at all four plant introduction stations. Regeneration—reproducing germplasm to obtain sufficient numbers of viable seeds—is essential, particularly when viability is known to be low or has not been tested. Third, over one-third of NPGS’ germplasm is not backed up in NPGS’ National Seed Storage Laboratory (NSSL), which provides secure, long-term storage for the system.
Germplasm that is not backed up at NSSL is at greater risk of being lost.

NPGS’ standards require that viability testing be conducted as often as is needed for each species. Managers of three plant introduction stations stated that the germplasm in their collections should be tested every 5 to 10 years, depending on the species and the storage conditions for the germplasm. Viability testing is important to determine when the sample is at risk of being lost. According to NPGS data and NPGS officials, the amount of testing at some locations—including two of the four plant introduction stations—is insufficient. These two stations account for more than one-quarter of NPGS’ active collection. The stations—in Griffin, Georgia, and Pullman, Washington—had tested less than one-fourth of their germplasm from 1986 through 1996. A curator at the Griffin station cited a specific consequence of the failure to test for viability on a regular basis—all 10 samples of recently tested butternut squash were dead. The collection had not been tested for many years. As a result, he feared that much or all of this collection of about 500 samples—the only one of its kind in NPGS—may be dead. While agreeing that viability testing is important, the Griffin and Pullman station managers told us that, given their large regeneration backlogs, focusing their limited resources on regenerating germplasm is more likely to preserve diversity in the collections than testing it. Other obstacles cited as reasons for infrequent testing include the large numbers of different species to test and the lack of testing methods for some of them.

NSSL also conducts viability tests on the germplasm it maintains in long-term storage. At NSSL, 82 percent of its samples have been tested, 69 percent from 1985 through 1996.
Of the 18 percent never tested, 61 percent do not have enough seeds for testing, and 39 percent are part of a backlog that has not yet been processed because of the lack of resources, according to NSSL data and NPGS officials. While NPGS’ data indicate that viability testing is not conducted as often as it should be, responses to our survey on the sufficiency of viability testing were mixed. Only 4 of the 40 CGCs we surveyed reported that NPGS’ viability testing activities are insufficient for their crops, although 29 indicated that the current staff levels for testing (as well as for regeneration) have hindered the preservation of their collections. However, when we examined the responses of the curators alone—who are responsible for maintaining and preserving the collections and are most knowledgeable about their condition—curators for part or all of 16 of 38 crop collections (including major crops such as corn, alfalfa, and cotton) reported that viability testing for their crop collections is insufficient. For example, the curator responsible for over 80 percent of the corn collection reported that regeneration and viability testing are somewhat insufficient and should be the first priority in case of additional funding.

Regeneration is necessary to ensure that NPGS has an adequate supply of viable seeds. NPGS generally schedules a sample for regeneration when the viability of the sample is low—i.e., more than 35 percent of the sample’s seeds are dead—or the quantity of seeds is too low for distribution. NPGS has significant backlogs of germplasm requiring regeneration. According to NPGS officials, large backlogs may cause the loss of diversity in collections or prevent distribution to users and to NSSL for secure backup.
NPGS officials from two plant introduction stations told us that, generally, their sites’ germplasm that is low in viability or quantity should be regenerated within 2 to 5 years in order to minimize the loss of diversity in their collections over the long term. However, it may take as much as 75 to 100 years for the samples at these two locations that need regenerating to be regenerated, according to NPGS curators. Table 4.1 shows the estimated number of years required to regenerate samples, at current resource levels, for various crops at the four plant introduction stations, as of spring 1997. Some of these estimates are understated because they do not include the regeneration that would be required to provide germplasm for secure backup at NSSL and properly regenerated material for users. As table 4.1 shows, of the four plant introduction stations, the Pullman, Washington, location has the biggest backlog in terms of the percentage of samples requiring regeneration. Such regeneration is important not only for preservation of diversity but also for supplying seed to NSSL for long-term, secure backup.

Several factors contribute to these backlogs. The biggest single factor is the limited number of permanent employees and seasonal laborers available to manage and carry out the necessary field and greenhouse activities, according to NPGS officials. Furthermore, at some locations, facilities for regeneration are inadequate, and at others the growing conditions for germplasm are less than ideal for producing good yields of high-quality seed. For some collections, these regional climatic conditions also contribute to the development of pests and pathogens, which can hinder the preservation and use of germplasm. To overcome these problems and increase its capacity to regenerate quality seed, NPGS recently established a new site—at Parlier, California—that is in an arid region with a long growing season.
The Department has requested increased funding for genetic resources research in the fiscal year 1998 budget, part of which is to increase regeneration capability, according to an NPGS official. CGC responses to our survey regarding the sufficiency of regeneration activities were similar to those on viability testing. Only 7 of the 40 CGCs we surveyed reported that NPGS’ regeneration activities are insufficient for their collections, although 29 CGCs reported that the lack of staff for regeneration and viability testing had hindered the preservation of their collections. When we examined the responses of the curators (those most knowledgeable about the collections’ conditions), curators for part or all of 15 of 39 crop collections reported that regeneration is insufficient for part or all of their crop collections. The curator responsible for most of NPGS’ corn collection reported that regeneration is insufficient and that the 15-year regeneration backlog for corn placed an important part of this collection at risk of losing diversity.

Although NPGS’ policy requires that all seed samples in active collections be backed up at NSSL, over one-third are not. Furthermore, methods to ensure the secure backup of most clonal germplasm have not yet been developed. Backup is needed to provide protection against losses at the active sites resulting from (1) deterioration, which generally occurs more rapidly in seeds stored at active sites, or (2) human error, extreme weather, equipment failure, flood, fire, vandalism, or other catastrophes. Sixty-one percent of the approximately 440,000 seed samples at NPGS’ active sites are backed up at NSSL, where they are stored at –18 degrees Celsius or in containers over liquid nitrogen to slow deterioration. Of these backed-up samples, 44 percent do not meet NPGS’ standards and goals for seed quantity and viability (at least 65 percent of the seeds should be viable).
The seed samples not stored at NSSL are at increased risk of deterioration because seeds generally deteriorate much more rapidly at active sites, which generally store germplasm at warmer temperatures—5 degrees Celsius. According to NPGS officials, seeds have not been adequately backed up primarily because of the large regeneration backlogs at active sites. That is, until the sites regenerate germplasm, they often do not have a sufficient number or quality of seeds to send to NSSL for backup storage. In addition, even when they have sufficient quantities of seeds, some sites have not sent the seeds to NSSL because before they can be sent, the sites must reinventory the germplasm samples and repackage the seeds. According to NPGS officials, these activities use resources that are in short supply. In addition, NSSL has its own 16-month backlog of about 27,000 samples that must be processed (which includes viability testing) before being placed in secure, long-term storage.

The backup of clonal samples is even more limited, with only 4 percent of the approximately 30,000 samples at the active sites backed up at NSSL. This limited backup occurs because the methods for providing secure, long-term storage for most clonal germplasm have not yet been developed. Clonal germplasm may be backed up—in greenhouses as living plants, as tissue culture, or through cryopreservation—at the active site where the primary collection is maintained. Thus, in case of a natural disaster, disease, or other catastrophe, both the active and backup samples could be destroyed. For example, in 1992, over 2,000 germplasm samples were lost at NPGS’ Miami facility following Hurricane Andrew. These samples were not backed up at another NPGS site or at NSSL. Included in this group were about 30 percent of the mango and avocado collections and about 50 percent of the site’s ornamental collection (e.g., palm trees). The storm uprooted the trees, and they could not be successfully replanted.
The curator for these crops stated that most of this material will not be replaced because of resource constraints, difficulties in locating the material, and difficulties in getting foreign collections to provide replacement samples. CGC responses to our survey regarding the sufficiency of backup storage of germplasm varied. Only 6 of the 40 CGCs surveyed reported that NPGS’ activity in the area of backup storage/preservation is insufficient for their crop collections. In contrast, the curators for part or all of 15 of 40 crop collections reported that NPGS’ activity in the area of backup storage/preservation of their crop collection is insufficient. The curators for the collections of six major crops—corn, soybeans, wheat, alfalfa, potato, and cotton—reported no insufficiencies in this area.

Pursuant to a congressional request, GAO surveyed the 680 members of the 40 crop germplasm committees (CGC) for their views on the sufficiency of the National Plant Germplasm System’s (NPGS) principal activities, focusing on: (1) acquiring germplasm to ensure the diversity of the collections in order to reduce crop vulnerability; (2) developing and documenting information on germplasm; and (3) preserving germplasm.
GAO noted that: (1) just over half of the CGCs reported that the genetic diversity contained in NPGS’ collections is sufficient to reduce the vulnerability of their crops; (2) considering both this collection and all other freely available collections, almost three-quarters of the committees said that the diversity in these collections is sufficient for reducing their crops’ vulnerability; (3) at the same time, the committees identified several concerns affecting the diversity of their collections, and they ranked the acquisition of germplasm as the highest priority for the germplasm system if more funding becomes available; (4) current acquisition efforts are hindered by problems in obtaining germplasm from some countries and by the Department of Agriculture’s (USDA) management of the quarantine system, which has contributed to the loss of germplasm and delays in its release for certain plants; (5) according to the crop committees, many of the system’s collections lack sufficient information on germplasm traits to facilitate the germplasm’s use in crop breeding; (6) officials of the germplasm system acknowledged that some information on plant traits, such as resistance to disease or plant structure, has not been developed because it is considered to be a lower priority than preserving germplasm; in other instances, the information has been developed by scientists outside of the system and has not been provided for entry into the database; (7) preservation activities—viability testing, regeneration, and the long-term backup storage of germplasm—have not kept pace with the preservation needs of the collections; (8) only minimal viability testing—testing the seeds in a sample to determine how many are alive in order to prevent the loss of the sample—has occurred at two of four major locations; (9) in addition, the system has significant backlogs for regenerating (that is, replenishing) germplasm at the four major locations; and (10) over one-third of the system’s
germplasm is not stored in the system’s secure, long-term storage facility, thereby increasing the risk that samples located around the nation could be lost through environmental damage or other catastrophes.
WIA created a comprehensive workforce investment system that brought together multiple federally funded employment and training programs into a single system, called the one-stop system. Prior to the enactment of WIA, services were often provided through a fragmented employment and training system. One-stop centers serve two customers—job seekers and employers—and WIA provided flexibility to states and local areas to develop approaches that best meet local needs. In its redesign of the system, WIA created three new programs—Adult, Dislocated Workers, and Youth—that provide a broad range of services including job search assistance, assessment, and training for eligible individuals. The WIA programs provide for three levels of service for adults and dislocated workers—core, intensive, and training. Core services include basic services such as job search and labor market information; intensive services include activities such as comprehensive assessments and case management. Training is provided through individual training accounts that participants can use to pay for training they select from a list of eligible providers. In serving employers, one-stops have the flexibility under WIA to provide a variety of tailored services, including customized screening and referral of qualified participants in training to employers. In addition to establishing the three new programs, WIA required that services from these programs, along with those of a number of other employment and training programs, be provided through the one-stop system so that jobseekers, workers, and employers could find assistance at a single location. Table 1 shows these mandatory programs and their administering federal agencies. Labor is responsible for providing guidance to states and localities on delivering services through the one-stop system, and states, through state workforce boards, have a number of responsibilities for the workforce system statewide, such as developing state plans.
WIA requires that each state have one or more local workforce investment areas (designated by state governors), each governed by a local workforce investment board. To help align employment and training programs with the needs of employers, WIA requires that local boards include representatives from one-stop partner programs, local educational entities, labor organizations, community-based organizations, and economic development agencies. It also requires that a majority of the members be representatives of local businesses and that the local board chairman be a representative of a local business. Through these requirements, WIA gave business representatives a key role in deciding how services should be provided and overseeing operations at one-stop centers. Local workforce boards are also responsible for coordinating workforce investment activities with economic development strategies, and developing relationships with employers. The local workforce boards also select the entities to operate one-stop centers and conduct oversight of the one-stop system. In addition to the mandatory programs, local workforce boards have the flexibility to include other programs in the one-stop system. Labor suggests that these additional, or optional, programs may help one-stop systems better meet specific state and local workforce development needs. Over $40 billion in federal funding has been provided for Adult, Dislocated Worker, and Youth programs since fiscal year 2000. In 2009, Congress passed the American Recovery and Reinvestment Act (Recovery Act), which included a one-time addition of $3.15 billion for the three programs through program year 2010. Compared to fiscal year 2000, annual WIA funding declined in nominal terms by about 24 percent in fiscal year 2011.
While the federal funds for Adult, Dislocated Worker, and Youth programs that flow to states and then to local areas through statutory formulas—WIA formula funds—are intended to support services and training for individual jobseekers, WIA provides for certain set-asides for statewide workforce activities, as well. Governors may reserve up to 15 percent of these program funds for statewide activities, referred to in this report as WIA Governor’s set-aside. WIA requires states to use the Governor’s set-aside for activities such as statewide evaluations of workforce programs and incentive grants for local areas, and allows them to fund a wide array of activities, including innovative training programs for incumbent workers, research and demonstration projects, and capacity building and technical assistance. For program year 2011, Congress reduced the 15 percent Governor’s set-aside to 5 percent. Additionally, governors may reserve up to 25 percent of the Dislocated Worker program funds for “rapid response” activities, to help serve employers and workers facing layoffs and plant closings.

Labor has also funded three employment and training grant initiatives: the High Growth Job Training Initiative beginning in 2001, the Community-Based Job Training Initiative beginning in 2005, and the Workforce Innovation in Regional Economic Development (WIRED) initiative beginning in 2006. These grants were designed to identify the workforce and training needs of growing, high-demand industries and engage workforce, industry, and educational partners to develop innovative solutions to workforce challenges. Between 2001 and 2007, Labor spent almost $900 million on these initiatives.

Beyond the WIA programs, Commerce runs programs that can support workforce training. For example, the department’s Hollings Manufacturing Extension Partnership (MEP) provides technical assistance upon request to manufacturers, including advice on workforce practices or skills training in some cases.
MEP clients reported having invested about $270 million of their own funds in workforce training in 2009.

In order to receive their full funding allocations, states must report on the performance of their three WIA programs. WIA requires performance measures that gauge program results for jobseekers in the areas of job placement, retention, earnings, and skill attainment. In addition, WIA requires measures of customer satisfaction for jobseekers and for employers, which may be collected through surveys. WIA holds states accountable for achieving their performance levels by tying those levels to financial sanctions and incentive funding.

While the 14 selected initiatives varied in terms of their purpose, sector, and partners involved, the boards and their partners cited common factors that facilitated and sustained collaboration (see fig. 1). For example, virtually all of these collaborations grew out of efforts to address critical workforce needs of multiple employers, typically in a specific sector, rather than focusing on individual employers. Additionally, the partners in these initiatives made extra effort to understand and work with employers so they could tailor services such as jobseeker assessment, screening, and training to address specific employer needs. In all cases, the partnerships included workforce boards, employers, and education and training providers, and in some cases, they also included local school districts, regional organizations that promoted economic development, state agencies, or labor unions. Partners remained engaged in these collaborative efforts because they continued to produce a range of results for employers, jobseekers and workers, and the workforce system and other partners.

Workforce boards, employers, education and training providers, and other partners in the 14 initiatives focused on urgent and commonly shared employer needs that served as a catalyst for collaboration.
Virtually all of the initiatives focused on ways to supply workforce skills that were commonly needed by multiple employers in a specific sector. The urgency of their needs ranged from a shortage of critical skills in health care and manufacturing to the threat of layoffs and business closures (see table 2). In San Bernardino, California, for example, some companies were at risk of layoffs and closures because of declining sales and other conditions, unless they received services that included retraining for their workers. Also, in Rochester, Minnesota, the turnover rates at long-term care facilities threatened their ability to comply with state regulations and remain open. In one case, in Gainesville, Florida, employers joined with the board and others to tackle the need to create additional jobs by embarking on an initiative to develop entrepreneurial skills. See appendix II for a profile of each initiative.

In most cases, workforce boards worked with employers and other partners to determine the scope of the problem, which galvanized the partners’ commitment to collaborate. In Northern Virginia, for example, officials told us that, individually, local hospitals were experiencing difficulties recruiting skilled workers and anticipated that their needs would become more acute due to pending retirements among their existing workers. However, until the initiative, there had been no attempt to quantify the staffing needs overall among the hospitals. The partners commissioned a study that projected a shortage of 17,000 nurses and other health care workers in 23 occupations—a finding that reinforced the partners’ commitment to collaborate on a common strategy, according to local officials. In some initiatives, the partners obtained information about employers’ needs through studies or focus groups.
In a few cases, boards used previously developed state models to engage employers in specific industries and regions and identify information about their critical workforce needs. For example, in both Seattle and southeast Michigan, employers participated in models—known as “Skill Panels” or “Skills Alliances”—that Washington State and Michigan had developed for this purpose. By focusing on common employer needs across a sector, the boards and their partners also produced innovative labor force solutions that, in several cases, had evaded employers who were trying to address their needs individually, according to those we interviewed. In several cases, employers cited the recruitment costs they incurred by competing against each other for the same workers. By working together to develop the local labor pool they needed, the employers were able to reduce recruitment costs in some cases. Describing the competitive approach to labor supply that had prevailed before collaboration, one employer said, “When we steal staff, we don’t add people to the workforce, and it increases the cost of doing business.”

In addition to finding common ground among employers, some initiatives also forged greater collaboration among education and training providers. As with the employers, the focus on common needs galvanized support, leading the educational institutions in some cases to share curricula or agree to recognize courses offered by other participating institutions. For example, in southeast Michigan’s initiative in automotive manufacturing, education partners agreed to grant credit for courses offered by other participating institutions. In Seattle’s health care initiative, a community college official said that area colleges had competed against each other and worked with employers independently, but collaboration helped them think more in terms of scale and as a system.
In several cases, the initiatives involved multiple workforce boards as partners to address issues on a regional basis. For example, the health care initiative in Cincinnati involved the efforts of four boards across three states—Ohio, Kentucky, and Indiana—because board officials recognized that the local labor market includes communities in each state. In Wichita, Kansas, the aircraft manufacturing initiative included two boards—one serving the six counties around Wichita, and the other serving 63 rural counties in the western half of the state. To address initial concerns that the larger Wichita-based board might benefit disproportionately, both boards took steps to ensure that they would each benefit from and have access to resources. In doing so, according to board staff, they were able to expand the recruitment pool and promote job openings in a wider area. Their collaboration also fostered communications about new market opportunities for companies.

Officials from many boards emphasized the importance of securing leaders who had the authority or the ability, or both, to persuade others of the merits of a particular initiative. In some cases, workforce board or one-stop staff took the lead. In San Bernardino, California, for example, a leader from the one-stop’s Business Services unit used feedback from local employers to persuade the board of the need for layoff aversion services, such as training manufacturers’ incumbent workers in more efficient production techniques. In Lancaster, Pennsylvania, workforce staff helped public school teachers gain state approval for an agricultural curriculum by obtaining the state’s designation of agricultural careers as “high priority occupations,” and as a result, convinced previously wary partners that collaborating with the workforce system would be beneficial. In some cases, other partners led efforts to launch the initiative.
For example, to address a critical need for health care workers in Northern Virginia, a community college president personally marshaled support from area hospital chief executive officers and other local leaders. Employers also provided leadership that was instrumental in persuading other employers and partners to join the initiatives. In Greensboro, North Carolina, officials reported that many employers joined round-table discussions after their competitors encouraged them to participate. Many partners also emphasized the value of having a leader whose perceived neutrality could help build trust among parties who were otherwise competitors. Officials from several initiatives also said that neutral leaders helped different partners understand how their mutual interests could be served through collaboration. In Seattle, for example, officials said that having such a neutral convener helped to both manage partners’ relationships and focus on opportunities for systematic change, and that without such leadership, their efforts would have likely produced smaller, more isolated projects. The leaders who served as neutral conveners varied among the initiatives. For several initiatives, including Seattle, the local boards served this purpose. In southeast Michigan, the state workforce agency was viewed as a neutral convener. All of the boards and their partners leveraged resources in addition to or in lieu of WIA formula funds to launch or sustain their initiatives. In some cases, partners were able to use initial support, such as discretionary grants, to attract additional resources. For example, in Golden, Colorado, the board leveraged a Labor WIRED grant of slightly more than $285,000 to generate an additional $441,000 from other partners. WIA formula funds were typically used to support some training, as well as pre-employment assessment and screening.
According to several workforce board directors, however, WIA funds were not sufficient to meet the training needs of employers and jobseekers in their areas. Various partners contributed resources to meet these training needs and sustain their collaboration. For example, projects that stemmed from Seattle’s Health Care Sector Panel have had a wide variety of support, including WIA formula funds, Governor’s set-aside funds, federal H-1B monies, and state funds, among others. In addition to public funds, in all cases that we reviewed, employers demonstrated their support by contributing as well. For example, in Northern Virginia, health care employers offered matching funds to help persuade the state legislature to provide financial support, according to local officials. In Cincinnati, Ohio, all participating employers committed tuition reimbursement dollars for workers’ course costs, including remedial education, and in Greensboro, North Carolina, employer matching funds supported a sector-specific training center. Employers also provided in-kind support, such as equipment, use of facilities, and training during working hours. Table 3 lists the most frequently used resources to build and sustain initiatives. In all cases, the initiatives of the boards and their partners provided employer-responsive services to actively involve employers and keep them engaged in the collaborative process. Some boards and their partners employed staff with industry-specific knowledge to better understand and communicate with employers. In Madison, Wisconsin, for example, the board designated some one-stop staff as industry specialists. According to officials, these specialists better served employers because they understood what the available jobs entailed, what training was required, and what to look for on a resume for specific occupations when screening jobseekers.
Similarly, in Chicago, to help meet the needs of manufacturers, staff of a sector-based center specialized in certain kinds of manufacturing, such as food processing, plastics and paper, and metal. These examples are consistent with the finding of a prior GAO report that one-stop staffs were better able to respond to labor shortages when they had a strong understanding of the employment and training needs in specific industries. In other initiatives, boards and partners gained employers’ confidence in the collaboration by tailoring services such as jobseeker assessment and screening services to address specific employers’ needs. For example, in Greensboro, North Carolina, the board staff provided expedited services for an aircraft company by designing a web-based recruitment tool and customized assessment process within 48 hours and quickly screening over 2,400 initial applicants, according to a board official. Similarly, to better match jobseekers with specific job openings, the sector-based center in Chicago worked closely with employers to review and validate employers’ own assessment tools, or develop new ones, and administer them on behalf of the employers. Center staff said they could screen 50 candidates and winnow them to the top 5, which saved employers time in the hiring process. One employer noted that without the center’s help, they would have needed at least two human resource specialists to do this job. Boards and their partners also strengthened collaborative ties with employers by making training services more relevant and useful to them. In some cases, employers provided direct input into training curricula. For example, in Wichita, employers from the aviation industry worked closely with education partners to develop a training curriculum that met industry needs and integrated new research findings on composite materials. 
In southeast Michigan, education partners adjusted the course content—while the training was under way—in response to shifting industry needs. Another way that some initiatives met employers’ training needs was to provide instruction that led to industry-recognized credentials. For example, in San Bernardino, a training provider integrated an industry-recognized credential in metalworking into training to make it more relevant for employers. To address employers’ long-term training needs, some initiatives incorporated career pathways, in which training is sequenced and linked to additional training in a way that supports career advancement. In Seattle, employers reported that this approach to training could play an important role in reducing employer turnover costs associated with certain workers such as medical assistants, who formerly had few options for career advancement. In addition, some boards engaged employers in collaborative efforts by ensuring that certain jobseekers had adequate basic skills to meet employers’ needs. In some cases, this entailed working closely with the adult education system. In Seattle and Madison, for example, the boards adapted existing state approaches to help jobseekers improve both basic and occupational skills simultaneously, by including a basic skills instructor in training at least part of the time or by using curricula that basic skills and occupational instructors had developed together. In San Bernardino, where employers had been struggling to find skilled machinists, the training provider incorporated basic math and basic blueprint reading in the 630 hours of training that it provided to jobseekers. To address employers’ need for a long-term supply of workers, some boards and their partners also included youth in their initiatives.

For example, in Gainesville, Florida, the TechQuest initiative enrolled WIA youth participants in 8 weeks of business-oriented learning to introduce them to technology-oriented businesses while improving their reading and writing skills. Another way that boards and their partners facilitated collaboration with employers was by reaching beyond WIA’s primary mission of serving the unemployed and including employed workers or jobseekers not receiving services under WIA. Several officials described this broad approach as driven by the employer’s need for certain skills, irrespective of a jobseeker’s status under WIA. In Northern Virginia, for example, the partners first identified employers’ needs—in this case, a projected local shortage of health care workers—and then developed a broad, responsive strategy that incorporated WIA as a key but not the sole element. As a result, the partners’ efforts to increase the supply of skilled workers extended beyond WIA to the general population, while including some workers who had individual training accounts under WIA. Similarly, staff of the sector-based center in Chicago explained that their practice of screening and referring qualified jobseekers—both those who received services under WIA and those who did not—was responsive to manufacturers’ need for specific skills. In other cases, boards and their partners focused largely on employers’ current workers to meet specific employer needs while also serving some workers receiving services under WIA. The manufacturing initiatives in Golden, Colorado, and southeast Michigan, for example, focused on upgrading the skills of incumbent workers while at the same time retooling the skills of some dislocated workers. Boards also facilitated collaboration by minimizing administrative burden for employers and other partners.
Board staff with several initiatives said that complex administrative processes would have discouraged employers from collaborating in the initiatives. In some cases, boards and their partners streamlined data collection or developed shared data systems to enhance efficiency. For example, in Cincinnati, the partners developed a shared data system to more efficiently track participants, services received, and outcomes achieved across multiple workforce providers in the region. Additionally, in southeast Michigan, an industry official reported that the administrative burden would have been “almost insurmountable” for employers if they had not collaborated with the workforce system and received administrative support. For several initiatives, one-stop staff or board members said they managed administrative processes because the level of documentation could be intimidating and discourage collaboration. Another way many initiatives minimized administrative burden was by designating a single point of contact or having a program manager to streamline communication channels, which facilitated collaboration among disparate partners. For example, in Golden, Colorado, a partner reported that having a program manager who was focused and responsive to employers’ concerns improved stakeholder relations and helped employers understand the benefits of collaboration. In other cases, some partners limited the frequency, length, and focus of meetings to use partners’ time most effectively. For example, in Northern Virginia, officials said they opted for biannual meetings focused on outcomes, needs, and employer support to accommodate the busy schedules of hospital executives and college presidents. For employers, the partnerships produced diverse results that generally addressed their need for critical skills in various ways. In some cases, employers cited an increased supply of skilled labor. 
In Northern Virginia, for example, the partners said that the initiative produced an increased supply of nurses and other health care workers. In other cases, employers said the initiatives helped reduce their recruitment and retention costs. For example, in Rochester, Minnesota, one employer, a long-term care facility, reported a reduction in turnover from 60 percent to 6 percent in 1 year. In Cincinnati, Ohio, according to an independent study, employers who participated in the health care initiative realized about $4,900 in cost savings per worker hired. Other employers cited less quantifiable benefits. For example, in Gainesville, Florida, one employer said that participants in the entrepreneurship training initiative had acquired the specialized skills needed for new ventures that were otherwise difficult for them to find. In a few cases, employers credited the initiative with helping to increase their competitiveness. In the Wichita, Kansas, area and southeast Michigan, where employers focused on upgrading skills to keep pace with changes in technology, officials told us that the initiatives had helped them develop a talent base that could help them compete internationally. For jobseekers and workers, the partnerships produced results that mainly reflected job placement and skill attainment. For example, in Wichita, of the 1,195 workers who were trained in the use of composite materials in aircraft manufacturing, 1,008 had found jobs in this field. In some cases where incumbent workers were promoted after having received services to support their career progression, as in Seattle’s health care initiative, new openings were created for entry-level jobseekers. In Gainesville, Florida, some participants who had completed the initiative’s entrepreneurship training had started seven new companies. Elsewhere, such as in San Bernardino’s layoff aversion initiative, workers retained jobs when 426 jobs were saved. 
In addition, some of those who attained new skills included jobseekers with barriers to employment. For example, in San Bernardino, officials reported that 32 percent of the trainees who had completed the Technical Employment Training for machinists faced barriers to employment, such as a disability or criminal record. For the workforce system, the partnerships led to results such as increased participation by employers in the workforce system, greater efficiencies, and models of collaboration that could be replicated. Officials with several initiatives said they had generated repeat employer business or that the number and quality of employers’ job listings had increased, allowing the workforce system to better serve jobseekers. In some cases, officials also cited efficiency improvements. For example, in Cincinnati, an education partner cited cost savings achieved when two colleges shared a training program, splitting certain costs. Some partners also applied their lessons learned through the partnership to new initiatives. For example, the Rochester board’s pre-employment health care initiative has been replicated in manufacturing, while officials in Golden, Colorado, replicated elements of their manufacturing initiative in the energy sector. Although all of the boards were successful in their collaborative efforts, they identified some challenges they needed to overcome. For example, some boards cited challenges related to using WIA formula funds to address diverse employer needs. Staff from most, but not all, of the boards also said that WIA performance measures do not directly reflect their efforts to work with and engage employers. However, we found that many of the boards developed a number of their own performance measures to assess their services to employers. Some boards identified challenges in using WIA funds to address employers’ need for workers at various skill levels. 
For example, staff from some boards said it was challenging to collaborate with employers using WIA formula funds because they could generally only use these funds to train eligible jobseekers. Furthermore, WIA prioritizes funding for intensive services and training for low-income individuals when funding for adult employment and training activities is limited. The director of one board said that pursuing comprehensive strategies for an entire economic sector can be challenging, because WIA funds are typically used for lower-skilled workers, and employers in the region wanted to attract a mix of lower- and higher-skilled workers. To address this challenge, the director noted that the board used a combination of WIA and other funds to address employers’ needs for a range of workers. In at least five of the initiatives, board officials described services to incumbent workers as a major part of their successful collaboration with employers, yet some boards were challenged to piece together funds to do this. WIA formula funds generally cannot be used to train incumbent workers, with certain exceptions. For instance, employed workers who need training to reach self-sufficiency may be served with WIA formula funds. In addition, WIA Governor’s set-aside funds may be used for innovative programs for incumbent workers. Among the initiatives that served incumbent workers, the most common funding sources were employer contributions and state funds. In addition, states may apply to Labor for waivers to provide layoff aversion services to incumbent workers with both WIA formula and reserve rapid response funds. One board used WIA rapid response funds for layoff aversion services because the state had a waiver for this purpose. Even in cases where boards used some WIA funds to serve incumbent workers, most boards had to find other funds to address this employer need. 
Because WIA formula funds for training are primarily designed to pay for services for individual jobseekers through individual training accounts, some boards found it challenging to fund other training-related activities. For example, some board staff said they needed to use other funding to develop new curriculum for their initiatives. Boards can use WIA funds to develop curriculum as part of “customized” training for employers. However, this type of training has some restrictions, such as requiring employers to pay for 50 percent of the training, which could be a deterrent for some employers. In other cases, board staff reported that they wanted to provide training for an entire cohort, but boards usually cannot use WIA funds to contract for training services. WIA requires training to be provided through individual training accounts with limited exceptions, such as when the services include on-the-job training provided by an employer or when the local board determines there is an insufficient number of training providers in the area. See 29 U.S.C. 2964(d)(4)(G)(ii). Staff from a majority, but not all, of the boards said that WIA performance measures do not directly reflect their efforts to work with and engage employers. The current WIA performance measures primarily focus on local-level outcomes for jobseekers, except for a required measure of employer satisfaction, which may be assessed through a survey of a sample of employers. While this survey provides a high-level indicator of employer satisfaction with WIA-funded programs, the statewide sample that is reported to Labor does not provide information on each local area’s performance. Thus, board staff explained, the sole employer measure does little to acknowledge or reward local boards that successfully engage employers. For example, one board’s staff said that although they prioritize working with employers, they do not get credit for this under WIA.
On the other hand, the executive director of one board noted that their successful efforts to engage employers were reflected in improved outcomes for jobseekers. However, WIA does not preclude local efforts to develop measures of employer engagement, and we found that many of the boards track a variety of their own employer measures. In Chicago, staff track measures such as the number of new employers served every year and the hiring rate among jobseekers that they had referred to employers. In two initiatives where the boards collaborated with Commerce’s MEP program, the boards collected data on measures such as increased employer sales and employers’ satisfaction with initiative services. Furthermore, one board’s executive director expressed appreciation for the flexibility to adopt measures of employer engagement that best align with local needs. Other examples of the types of measures boards reported using to gauge employer engagement or satisfaction include interview-to-hire ratio from initiative jobseeker referrals, retention rate of initiative-referred hires, number of businesses returning for services, number of new business contacts, employers’ willingness to invest time or funding in initiatives, employer satisfaction with the initiative, and increased or retained employer sales. These examples are consistent with prior GAO work. In a 2004 report, we found that about 70 percent of local areas nationwide reported that they required one-stop centers to track some type of employer measure, such as the number of employers that use one-stop services, how many hire one-stop customers, and the type of services that employers use. To help boards form partnerships with employers, colleges, and other partners, Labor has conducted webinars and shared information on the uses of and opportunities for funding such collaborations. One webinar explained how to work with Commerce’s MEP program to obtain manufacturing expertise on behalf of local employers.
Other webinars hosted by Labor have suggested how boards can use various discretionary grants, such as WIRED grants, to help support their collaborative initiatives. In addition, Labor has featured the innovative work of several local boards in forming partnerships, such as the approach to career pathways used by the board in Madison, Wisconsin. To help boards optimize their use of WIA funds without overstepping program rules, Labor has issued guidance on such topics as entrepreneurship training, working with economic development entities, and training incumbent workers. Labor has also engaged in collaborative efforts with other federal agencies that could help support local collaboration. For example, Labor is working with the Department of Education and other federal agencies to identify existing industry-recognized credentials and relevant research projects, and has issued guidance to help boards increase credential attainment among workforce program participants. In addition, Labor has recently worked with Commerce and the Small Business Administration to fund a new discretionary $37 million grant program called the Jobs and Innovation Accelerator Challenge to encourage collaboration and leveraging funds. This program encourages the development of industry clusters, which are networks of interconnected firms and supporting institutions that can help a region create jobs. These grants are also intended to help local boards and others leverage existing federal resources by, for example, encouraging boards to develop collaborative relationships with career and technical education providers that receive federal funds. A total of 16 federal agencies will provide technical resources to help leverage existing agency funding, including the 3 funding agencies listed above. In September 2011, Labor announced the 20 regions that will receive grant funds. Labor estimates the grants will result in the creation of 4,800 jobs. 
As discussed previously, board staff stressed the importance of leveraging resources to facilitate collaboration, but Labor has not made information it has collected on effective practices for doing so easily accessible. For example, Labor maintains a website for sharing innovative state and local workforce practices called Workforce3One, which has some examples of leveraging funding at the local level. However, the website does not group these examples together in an easy-to-find location, as it does for other categories such as examples of innovative employer services or sector-based strategies. In addition, Labor funded evaluations of two of its grant programs and identified how local grantees leveraged funding from educational institutions, businesses and employers, and industry associations, among other sources, as well as the levels of funding that grantees leveraged and planned to leverage. Other research, such as reviews of boards’ coordination with faith-based community organizations, has also addressed how local workforce initiatives can leverage resources through outside organizations to help jobseekers obtain and retain employment. However, this information on leveraging resources has not been compiled and disseminated in one location, such as on the Workforce3One website, so interested parties must search for and read through separate reports. Regarding local measures of employer engagement, Labor acknowledged the value of such a measure in 2006 and later developed a method for local boards to track and report on employers that use the one-stop system, but has postponed its implementation indefinitely. Labor officials said they plan to defer changes to their performance reporting system until WIA is reauthorized to avoid spending time and money on a system that may not meet new requirements.
At a time when the nation continues to face high unemployment in the wake of the recent recession, it is particularly important to consider ways to better connect the workforce investment system with employers to meet local labor market needs. Indeed, that connection is fundamental to the vision of WIA, which created the one-stop system, in part, to help meet employers’ needs and ensure that one-stop training and services are aligned with those needs. The 14 local initiatives profiled in this report illustrate how workforce boards collaborated with partners to develop innovative and employer-driven services that helped address urgent local workforce needs. The variety of ways in which they helped employers meet their needs, and the results they yielded, may be testimony to the viability and potential of WIA’s vision for local partnership: critical skill needs were met, individuals obtained or upgraded their skills, and the local system of workforce programs was reinvigorated by increased employer participation. Furthermore, the common factors that facilitated these collaborations, as well as the key challenges they encountered, can be instructive for future collaborations and efforts to enhance the workforce investment system. Labor has taken several important steps that support local initiatives like these through guidance, technical assistance, and collaborative efforts with other federal agencies, such as the new grant program with Commerce and the Small Business Administration. However, while Labor has also collected relevant information on effective strategies that local boards and partners have used to leverage resources, it has not compiled this information or made it readily accessible. All the boards we profiled emphasized the importance of leveraging resources to build innovative new partnerships.
Moreover, as the workforce system and its partners face increasingly constrained resources, it will be important for local boards to have at their disposal information on how boards have effectively leveraged funding sources. Without such information, boards seeking to emulate such strategies may lack the information they need to benefit from lessons learned, augment their resources, and explore innovative new ways to collaborate with partners. To better support the capacity of the local workforce investment system to collaborate with employers and other partners, the Secretary of Labor should compile information on workforce boards that effectively leverage WIA funds with other funding sources and disseminate this information in a readily accessible manner. This might involve compiling lessons learned from prior research or evaluations of grant programs and new grant initiatives such as the one with Commerce and the Small Business Administration, and disseminating this information by creating a super category for this topic on the Workforce3One website to group examples in one place. We provided a draft of the report to the Departments of Labor, Education, and Commerce for review and comment. Labor provided a written response (see app. III). Labor and Education also provided technical comments, which we incorporated as appropriate. Commerce did not provide comments. In addition, we provided workforce board officials from each of the 14 initiatives with a draft of pertinent sections and incorporated their technical comments as appropriate. Labor agreed with our findings and recommendation. Specifically, Labor’s response noted that our findings validate the department’s position that stronger partnerships between employers and the workforce system lead to improved employment and retention outcomes for workers. 
Labor also noted efforts to serve both employers and workers more effectively, and cited recent technical assistance it has provided to help the workforce system engage employers. In addition, Labor cited efforts to work across federal agencies to help align resources and programs to ensure effective service delivery. Labor also noted states’ critical role in engaging employers. Labor concurred with GAO’s recommendation, and added that it is examining ways to improve the identification and dissemination of promising practices. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of Labor, as well as the Secretaries of Education, Health and Human Services, Housing and Urban Development, and Commerce, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. A list of related GAO products is included at the end of this report. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or at sherrilla@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our review focused on (1) the factors that facilitated innovative collaborations among workforce boards, employers, and others; (2) the major challenges to collaboration; and (3) the actions Labor has taken to support local workforce boards in their collaborative efforts. 
To obtain background information, we interviewed experts on the Workforce Investment Act of 1998 (WIA), workforce development, and economic development; reviewed numerous studies, reports, and other literature; and reviewed relevant federal laws, regulations, guidance, and other documentation. We interviewed officials from five key federal agencies: the Departments of Labor, Education, Health and Human Services, and Housing and Urban Development, which administer the programs required by WIA to participate in the one-stop system, as well as officials from the Department of Commerce (Commerce), which administers relevant economic development programs. We interviewed officials from the National Governor’s Association, the National Association of Workforce Boards, and the National Association of State Workforce Agencies. To identify relevant studies, we searched for literature in various bibliographic databases, including ProQuest, Social SciSearch, Social Sciences Abstracts, NTIS, PAIS, and EconLit. We also asked the experts we interviewed to recommend additional studies. To learn about what factors facilitated innovative collaborations among workforce boards, employers, and other partners, as well as the challenges, we interviewed officials from 14 local initiatives in which local workforce boards achieved positive results by collaborating with employers, educational partners, and others. To identify these selected sites, we asked officials from the five key federal agencies and national experts representing 20 organizations to identify what they viewed as the most promising and innovative local initiatives in which local workforce boards collaborated effectively with employers and other partners to achieve positive results. We received nominations from the Departments of Labor, Commerce, and Health and Human Services.
Officials from the Department of Housing and Urban Development told us that the agency currently has no programs that it considers mandatory one-stop partners and was not able to provide any nominations. Officials from the Department of Education provided state-level information, but told us that they were not able to provide local nominations. The nonfederal experts we contacted consisted of representatives from the following organizations, the majority of whom provided nominations:

- American Association of Community Colleges
- Aspen Institute: Workforce Strategies Initiative
- Center for Law and Social Policy, Inc.
- Corporation for a Skilled Workforce
- Council for Community and Economic Research
- Council of State Administrators of Vocational Rehabilitation
- Insight Center for Community Economic Development: National Network of Sector Partners
- John J. Heldrich Center for Workforce Development
- National Association of Development Organizations
- National Association of State Directors of Career Technical Education
- National Association of State Workforce Agencies
- National Association of Workforce Boards
- National Coalition for Literacy
- National Governor’s Association
- Social Policy Research Associates
- U.S. Chamber of Commerce: Institute for a Competitive Workforce

Experts were selected based on their knowledge of workforce development or economic development activities at the local level. Specifically, we identified them through a review of relevant studies, their participation in conferences focused on workforce or economic development issues at the local level, previous assistance to GAO on other reports, and the recommendation of other experts. We asked the experts and federal agency officials to identify promising local initiatives that engaged the workforce board as a strategic partner and involved at least one economic development partner, education partner, and major Department of Labor (Labor) program required by WIA to provide services through the one-stop system.
We also specified that the nominated initiatives be data-driven, involve collaborative strategic planning, and demonstrate positive results. To ensure that our nomination criteria were clear and appropriate for our objectives, we solicited input from the Departments of Labor and Education, as well as from associations with expert knowledge of workforce issues, specifically the National Governor’s Association and the Council for Community and Economic Research. More than 89 initiatives or sponsoring organizations from 28 states were nominated, and we selected 14 initiatives, corresponding to 13 boards, for in-depth review. Two of the 14 initiatives were led by the same workforce board. To assist our selection, we first used a data collection instrument to gather additional information on 29 of the nominated initiatives, which were chosen based on factors such as the number of nominations they received, the results reported by the nominators, and their geographic diversity. From these, we then selected 14 initiatives for our review. These 14 initiatives should be viewed as our selection of diverse, promising efforts rather than as a comprehensive list of initiatives that were more innovative than those not selected. Our selections were intended to represent initiatives that targeted different industries, demonstrated evidence of replicability, and served a variety of WIA populations. We also considered the characteristics of the workforce areas, such as unemployment rates and geographic locations. In some cases where workforce boards engaged with employers in more than one sector, we asked board officials to identify the initiative that demonstrated the most positive results. In one case, we selected the sector ourselves to achieve a range of targeted industries. 
At our selected sites, we conducted semi-structured interviews with local and state workforce officials, representatives of educational institutions, training providers, economic development officials, and employers. We conducted in-person interviews during seven site visits to Chicago, Illinois; Golden, Colorado; Lancaster, Pennsylvania; Madison, Wisconsin; San Bernardino, California; Seattle, Washington; and Vienna, Virginia. In some cases, we also toured training facilities and workplaces. We conducted interviews for the remaining sites via videoconference and by telephone. We asked interviewees to tell us how the initiative started, what funding sources were used to start and sustain it, and what it had accomplished. To supplement the testimonial evidence employers and officials provided regarding the results of these initiatives, we requested and analyzed written documents such as performance reports, return-on-investment analyses, external evaluations, and other information to examine the results of the initiatives. However, we did not assess the reliability of these data, and we use the reported results for illustrative purposes only. We also asked interviewees for their views on what had facilitated collaboration, the challenges involved, and the implications for WIA reauthorization. The information we obtained about the selected initiatives is not generalizable to other local workforce boards. To determine the actions Labor has taken to support local workforce boards in these types of collaborative efforts, we interviewed Labor, Education, and Commerce officials. We conducted our work between November 2010 and January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The following profiles provide snapshots of the 14 initiatives we reviewed. We gathered data from a variety of sources, including initiative staff, board members, education and training providers, employers, state officials, relevant reports, and the Bureau of Labor Statistics. In each profile, we provide an overview of the initiative and describe some of the factors that facilitated collaboration. We also provide some background information, including the key partners, the workforce challenge, and the role of the board. While the profiles identify some common themes among the initiatives, they are not comprehensive descriptions of each initiative. We also provide information on the funding sources used for each initiative and highlights of the results that partners reported. The reported results generally reflected the diverse goals of these initiatives, which often included the goals of WIA programs as well as other federal grants or programs. Officials from each initiative were given the opportunity to review the information for accuracy.

Overview
In about 2000, the Workforce Boards of Metropolitan Chicago began to form strategic alliances with manufacturers and other partners to try to increase the supply of highly skilled workers needed in local manufacturing. Stakeholders said the efforts culminated in the City of Chicago issuing a request for proposal for a manufacturing sector center. A community-based organization, the Instituto del Progreso Latino, responded to the request for proposal and now runs ManufacturingWorks. 
The center provides a range of services to employers, including customized training development, jobseeker recruitment projects, comprehensive assessments of manufacturers’ needs, links with training providers to help employers find qualified candidates, and post-placement services to help employers retain newly placed workers, staff said. The center principally works with employers who meet certain standards for pay and working conditions. The center’s staff identified the following as key to its effectiveness:

Industry-knowledgeable staff: Center staff specialize in sub-sectors, such as metal, plastic and paper, and food processing, which helps build credibility with manufacturers, staff said.

Screening of potential candidates: Screening helps ensure that employers get job candidates with the right skills, and that they therefore succeed in their new positions, stakeholders said.

Workforce challenge
Manufacturing is an important sector for the Chicago region; however, employers reported that they could not find enough skilled workers, according to initiative staff. Additionally, jobs in manufacturing require a higher skill level than they did previously, according to local stakeholders. Stakeholders in Chicago identified a need for 10,000 new and replacement manufacturing workers in the Chicago area each year.

The workforce investment board’s role
The Chicago Workforce Investment Board began promoting the idea of a manufacturing sector initiative in the late 1990s, a stakeholder said. At the same time, the stakeholder explained, the board also began to organize much of its other activity around specific sectors. The board held workforce summits with representatives from employers, labor, nonprofits, and educational partners to identify reasons for the shortage of trained, skilled workers in manufacturing and other sectors.

Key federal and state support
Federal WIA formula funds and another federal grant funded some of the program’s operations. 
However, the business services were funded with other, outside funds, according to city staff. Funding sources also included other federal grants or funds and employer cash or in-kind contributions.

Benefits for employers: Center staff reported that they have served hundreds of different employers since 2005, including more than 100 different employers in each of 2006, 2007, 2008, and 2009. On average, manufacturers hired 1 out of every 1.6 jobseekers referred to them by ManufacturingWorks in program year 2010, a key indicator of success, according to initiative partners.

Benefits for workforce system and other partners: ManufacturingWorks staff reported that the center tracks a number of “system relationship measures,” including the number of job requests they share with the entire Chicago workforce system and the jobseeker referrals ManufacturingWorks gets from the rest of the system.

Workforce challenge
Local hospitals in Cincinnati were competing against each other for workers and experiencing high turnover. According to a local official, they realized that this was raising the cost of doing business overall.

Shared data system: A shared data system tracked participants, services received, and outcomes achieved across multiple workforce providers.

Program manager: The program manager administered the grants and helped communication among partners.

The workforce investment boards’ role
The four participating boards assessed jobseekers’ skills and used individual training accounts to pay for training in some cases.

Key federal and state support
Labor provided funding for the initiative through Recovery Act funds for high-growth and emerging industries and through a discretionary grant for community college-based job training. Employer cash or in-kind contributions also supported the initiative.

Benefits for employers: A study that compared the costs and benefits of participating in this initiative estimated that benefits for one participating employer exceeded costs by over $200,000 over an 8-year period. 
Additionally, that study found that participating employers realized an estimated net benefit of about $4,900 for every worker hired through the initiative, with about half of these savings attributable to lower turnover and recruitment costs. Collaboration among the education partners also ensured a consistent approach to training.

Benefits for workforce system and other partners: Education partners said that graduation rates that generally exceeded 80 percent helped them achieve cost savings, because more individuals continued to advanced courses, which have fixed costs. In addition, they said they were able to reduce duplication among their educational programs and streamline clinical programs, thereby achieving additional cost savings and reducing the time needed for credential completion by 1 year for certain participants. The partners said they are considering expanding the initiative to include employers in long-term care, and the board said it has replicated elements of this initiative in other sectors, such as manufacturing and construction.

FloridaWorks (Workforce Investment Board)

Overview
In 2009, a community partnership led by the Gainesville Area Chamber of Commerce convened local leaders to discuss the area’s economic growth strategy and agreed to make job creation a priority. To this end, the local workforce board and other partners developed several projects—called “quests”—to help impart entrepreneurial skills to three populations: (1) unemployed, highly skilled jobseekers; (2) Temporary Assistance for Needy Families (TANF) cash recipients; and (3) WIA Youth. All three projects were first piloted in 2011.

Startup Quest: The board partnered with the University of Florida’s Office of Technology Licensing to provide entrepreneurship training to 83 unemployed or dislocated high-skilled professionals. 
These individuals worked in 13 hand-selected teams, each with finance, marketing, operations, and other management skills, under the direction of a successful entrepreneur/mentor/chief executive officer for a total of 10 weeks. The Office of Technology Licensing made available new technologies to take to market. Working with this office, each mentor selected a technology, such as a sinkhole sensor or an inner ear implant, and the teams then developed market analyses. At the end of the project, participants pitched their analyses to venture capitalists.

Workforce challenge
A board official said that the community had substantially more unemployed individuals than available jobs. The lack of employment opportunities affected jobseekers across the skill spectrum. Officials also reported that the area’s economic development was negatively affected by the high-school dropout rate and by the difficulty local entrepreneurs faced in finding workers able to succeed in a startup environment.

Opportunity Quest: The board provided 65 TANF cash recipients with training to help them discern their talents, learn marketing techniques, and develop business concepts. Topics included networking, marketing, customer service, and finance.

TechQuest: In partnership with the local school board, staff selected 47 at-risk high school students and enrolled them in WIA Youth for 8 weeks of business-oriented learning. The project’s goal was twofold: to introduce students to technology-oriented businesses, and to improve their reading and writing skills. A local entrepreneur taught the classes, and students earned iPads for completing the training. Students also received support from a local community college, the Chamber of Commerce, and multiple local business owners.

Some key factors of the projects were:

Program manager/single point of contact: A program manager and a single point of contact for employers helped ensure partners quickly adjusted to changing plans. 
Screening of candidates: Startup Quest participants were screened at multiple points to ensure that they could commit to the training and had an appropriate educational background.

The workforce investment board’s role
The board applied for the grants that funded training, and planned and developed the quests. Also, officials said that the board’s ongoing contract for business services with the Chambers of Commerce made the quests possible.

Key federal and state support
Each of the entrepreneurship projects was funded through separate grants, including $175,500 for Startup Quest from a Recovery Act state-level grant; $101,000 for TechQuest from the WIA Governor’s set-aside; and $300,000 from Recovery Act state-level funds for Opportunity Quest. Officials told us that they could not have established and implemented the projects without this funding. In addition, the University of Florida’s Office of Technology Licensing is part of a Department of Commerce program. Employer cash or in-kind contributions also supported the projects.

Benefits for employers: Mentors for the Startup Quest project reported that a majority of participants are now skilled in startup management. One entrepreneur explained that working in a startup is much different from working with an established employer, and that Startup Quest participants understand the urgency and needs of such businesses, a rare and valued skill set. Officials also reported that the TechQuest project helped introduce employers to the potential talent of WIA Youth, as evidenced by the fact that several employers offered internships to students.

Benefits for workforce system and other partners: Startup Quest provided market analyses for the technologies to the University of Florida’s Office of Technology Licensing, which an official said could be useful for future activities. As a result of the Opportunity Quest project, a new business incubator service is being established at the one-stop in a rural area. 
An official also said that Opportunity Quest changed the one-stop’s management processes: now entrepreneurship is considered another option for jobseekers, particularly those with more independent personalities.

Overview
Manufacturing is a key industry in Colorado, ranging from food processing to chemicals to fabricated metals, according to local officials. In 2007, the Jefferson County Workforce Center commissioned a focus group study of local manufacturing employers to learn more about the staffing and human resource needs of the manufacturing industry and how the workforce system could better serve them. As a result of the focus group and other communication with industry partners, the local workforce system decided to provide training in “lean” (i.e., more efficient) manufacturing to dislocated and incumbent workers of small to mid-size manufacturing companies. They selected the Colorado Association for Manufacturing and Technology, an affiliate of Commerce’s Manufacturing Extension Partnership, as the training provider.

Workforce challenge
An official said that the economic downturn greatly increased the number of local dislocated workers. In addition, employers reported that they needed to cut costs and increase their workforce’s soft skills, including the ability to take initiative, work in teams, and communicate effectively. The initiative’s goals were to (1) help businesses become more profitable; (2) provide individuals with training certifications that help them upgrade their employment positions or salary ranges; and (3) provide employer-defined critical foundational skills. The training emphasized “lean” manufacturing, in which workers received training in more efficient techniques and problem solving. An official explained that the principles guiding such training are not new concepts, but they can be new to individual manufacturers. 
Some key factors of the initiative included:

Program manager: Partners reported that having a program manager improved stakeholder relations. The program manager helped employers understand the benefits of the initiative and, through various forms of outreach—including overview sessions and e-mails—solidified the business community’s commitment to the initiative.

Limited or action-oriented meetings: Holding a limited number of meetings with employers was useful, according to partners.

Industry-recognized credentials: Credentials provided independent verification of jobseekers’ abilities to employers.

The workforce investment board’s role
One-stops overseen by the board—Jefferson County Workforce Center and Workforce Boulder County—focused on identifying funding sources and clarifying the requirements related to federal funding.

Key federal and state support
Of the approximately $725,000 project budget, slightly more than $284,000 was provided through a WIRED grant from the Department of Labor. The balance was achieved through leveraging the WIRED grant to obtain other funds, including other federal grants or funds and employer cash or in-kind contributions.

Benefits for jobseekers and workers: The initiative saved or created 81 jobs, though one employer cautioned that the data on jobs created could be difficult to attribute to the initiative, especially in large companies. As of August 2011, at least 20 of the 63 dislocated workers were reported to be employed (not all responded to a survey to determine their employment status). Participants attributed a variety of additional results to the training they received, including receiving more interviews and obtaining a higher starting salary.

Benefits for employers: The employers served by this initiative reported that the training resulted in approximately $2.7 million in decreased costs and other savings, and slightly more than $9 million in increased sales. 
One employer reported that the initiative helped workers become problem solvers, and that it helped change the culture of his company. He said that formerly, the company culture was one in which employees were given direction. The training focused on developing a leaner culture in which employees were encouraged to be proactive, and as a result they developed a new product.

Benefits for workforce system and other partners: As part of the initiative, the workforce centers developed what they termed a “speed-dating” service, during which a variety of employers conducted interviews with dislocated workers. According to an official, this approach has been adapted for the workforce system’s youth services, and for a state program. The partners also formed a resource group for manufacturers, which included representatives of the community colleges, the Colorado Association for Manufacturing and Technology, the workforce system, and other partners. The group meets once a month to review companies with challenges and identify opportunities for growth. In addition, workforce system officials reported that they are replicating elements of the initiative in a new energy-sector initiative.

Overview
In response to mounting job losses in the area, the board identified transportation, distribution, and logistics as a target sector for a WIRED grant in 2006. The board chose this sector for its job creation potential, according to a local official, and also took into account the region’s existing assets, such as the presence of several aircraft and trucking companies, proximity to major highways, and the airport’s potential to become the center of a distribution network. According to a board official, some trucking and transportation companies, such as Volvo Logistics and Old Dominion Freight Line, have increased their staff in the area. 
Furthermore, as this official explained, as more employers locate in the region, they are likely to bring in companies in their supply chain, which could spur additional growth in manufacturing. In addition, as one employer explained, this sector was chosen because it is a component that every industry needs to be successful. Although the initiative has had as many as 100 industry partners, according to board staff, a recent focus was helping Honda Aircraft Company recruit jobseekers, following the firm’s 2007 decision to locate in the area. The board helped the company identify qualified jobseekers, and a community college designed a course of study, which met Federal Aviation Administration standards, to train jobseekers to become airframe mechanics. While immediate recruitment needs have been a recent focus, the initiative has also featured career fairs for youth. Partners said their collaboration continued after Labor’s WIRED grant expired in 2010. Some key elements were:

Screening of candidates: For Honda Aircraft Company, the board facilitated three rounds of assessments for over 2,400 initial applicants, ensuring that those who passed had the necessary skills for the positions.

Industry-recognized credentials: According to a board official, the primary relevant credentials include commercial drivers’ licenses and Federal Aviation Administration certification for the training of aircraft mechanics, and most training programs assist in preparing jobseekers for credential testing.

Employer input into curriculum: Employers developed the curriculum for basic logistics training, now provided by several local colleges and universities.

Workforce challenge
According to local reports, the region around Greensboro, North Carolina, had experienced severe job losses, in part stemming from losses in manufacturing. The board and other partners selected certain sectors, including the transportation, distribution, and logistics sector, for their job creation potential. 
The workforce investment board’s role
According to board and company officials, board staff provided expedited service for Honda Aircraft Company by designing and implementing a web-based recruitment and assessment tool within 48 hours. Partners said the board was key to collaboration, as it facilitated employers’ understanding of their common needs.

Key federal and state support
Labor’s WIRED grant provided critical support for the collaboration, according to the partners. A foundation created by the state legislature also supported training as well as a survey of industry needs.

Board staff and other partners provided information about the results of the initiative for jobseekers and workers, for employers, and for the workforce system overall:

Benefits for jobseekers and workers: According to a board official, individual jobseekers have found jobs in this sector, although the board could not provide specific job placement data. In addition, while production delays and economic conditions have restrained hiring in some cases, new jobs are anticipated, in part due to Honda Aircraft Company’s decisions to manufacture aircraft in Guilford County and to locate a maintenance facility in the area, according to a board official. In addition, according to this official, other employers in the sector, such as TIMCO Aviation Services and FedEx, have been hiring or expect to do so.

Benefits for employers: Employers said the initiative helped them reduce their recruitment costs, although they did not quantify the amount. In addition, an employer board member highlighted employers’ continued interest in collaboration by noting that, since the WIRED grant expired, the initiative has been sustained in part by continued employer contributions.

Benefits for workforce system and other partners: A board official said they have developed similar web-based recruitment tools for other companies. 
In addition, they said they have developed a closer relationship with local economic development agencies, which helps them work together to serve employers. Board officials said they have replicated elements of this initiative in their efforts to serve employers in the manufacturing sector. Additionally, partners said the initiative led to the formation of the North Carolina Center for Global Logistics, to help meet the training needs of employers in the transportation, distribution, and logistics sector on an ongoing basis.

Overview
In early 2007, facing new challenges in a critical sector, a diverse group of partners formed the Lancaster County Center of Excellence in Production Agriculture. The center serves as a clearinghouse for information, and also provides training through a variety of vendors on topics such as dairy feeding, poultry flock management, and manure hauling. To meet the needs of workers with limited English, the center has contracted with other partners to develop workplace literacy materials. The center also provides subsidies for farmers to attend events such as the Pennsylvania Dairy Summit, training workshops, and other events in the dairy and poultry industries that help keep farmers abreast of best practices. Some key factors of the initiative are:

Program manager: A coordinator serves as a point of contact for the center, and works with industry organizations to reach more farmers.

Industry-knowledgeable staff: Employers and other partners said that the one-stop staff members’ knowledge of both government processes and industry needs was helpful.

Workforce challenge
Initiative partners explained that in recent decades, local farmers have increased the size and scope of their operations in order to remain economically viable. However, family labor is often no longer sufficient to run these businesses, and technological advances require workers with new skills (see photos). 
Farmers hired jobseekers with limited English skills for some entry-level jobs, but struggled to communicate with them. Moreover, workers needed to be trained for middle-management positions that did not previously exist, and farmers needed to remain knowledgeable about best practices in their industries in order to remain competitive.

The workforce investment board’s role
Through partners, the board helps employers access incumbent worker training, largely in the dairy and poultry industries. It also makes information about jobs in agriculture available through the one-stop. An official also said that the board serves as a neutral party that can forge connections, perform evaluation, and approach problems systemically. The board has reinforced connections to employers by serving as staff to the Agriculture Council, a policy body that houses the Center of Excellence in Production Agriculture. The board also supports four additional centers in areas such as long-term care and packaging operations.

Key federal and state support
The state’s Industry Partnership grant provided substantial support over the years. Employer cash and in-kind contributions also supported the center.

Benefits for jobseekers and workers: Local agricultural businesses do not typically have many workers, so the center is able to train more workers by focusing on the overall sector’s needs instead of the needs of individual employers. From project years 2005 to 2010, the initiative trained approximately 3,000 individuals, 654 of whom received training in 2010.

Benefits for employers: In project year 2010, 400 employers received services. An official explained that it can be challenging to quantify the benefits for employers, because some do not wish to share information that they consider to be proprietary. However, one employer noted that the financial support provided by employers is an indicator of satisfaction. 
In project years 2008-2010, participating employers contributed approximately $134,000 in cash and approximately $345,000 in in-kind contributions.

Benefits for workforce system and other partners: Officials explained that developments in the agriculture industry sparked interest in the renewable energy sector. As farms increase the quantity of livestock, manure removal presents a new challenge. Methane digesters can remove the manure and provide farmers with a new source of income. Therefore, the Center of Excellence in Production Agriculture spun off its agriculture projects related to renewable energy and created a new Center of Excellence in Renewable Energy to ensure that area employers remain competitive.

Overview
Starting in 1999, the Workforce Development Board of South Central Wisconsin began to reassess its training approach after seeing low training program completion rates among jobseekers, board officials said. The board decided to adopt a career pathways approach throughout its one-stop operation and to focus training on seven sectors. This approach provides education and training progressively so that students can gain skills and advance in an occupation or industry as they complete successive training.

Workforce challenge
Companies, particularly in the health care and manufacturing sectors, indicated they were having a hard time finding new employees. Additionally, the board wanted to address low training program completion rates among jobseekers.

Industry-knowledgeable staff: The board directed the one-stop to organize its operations around industry specialists. These specialists develop knowledge of available jobs in their fields so they can ensure jobseekers have the right training for these openings. 
Single point of contact: Board staff reported that there is a partnership coordinator who collaborates with board staff and industry partners to help meet industry needs and ensures that the board is preparing workers for occupations that are in demand.

The workforce investment board’s role
Instead of using the career pathways approach exclusively for training programs, the board worked to use the model for all services provided by the workforce system. In adopting the career pathways approach to its one-stop operation, the board funded the pathways training with WIA funds. The board also convened the partners from selected industries to discuss the needs of a given sector as a whole.

Key federal and state support
The board uses WIA funding, including the WIA Governor’s set-aside, to support individual training accounts for jobseekers to pursue career pathways training. However, to test this approach with prototypes, the board first used other funds, such as federal and state grants.

Benefits for jobseekers or workers: One of the major benefits of the career pathways approach is that it has reduced student dropout rates in training programs, board staff said. According to board staff, preliminary results indicate that students pursuing “stackable,” or cumulative, credentials have an 85 percent rate of completion, compared to a 65 percent rate of completion for other students.

Benefits for employers: Through informal feedback, board staff have concluded that employers value getting the right jobseekers with the training they need.

Benefits for workforce system and other partners: Employers have become more aware of the public resources available to them through the one-stop, and are more likely to work with the federally funded workforce system to find and train new and incumbent employees, according to an evaluation of board efforts by the University of Wisconsin-Madison Center on Wisconsin Strategy.

Workforce Development, Inc. 
(Workforce Investment Board) Workforce challenge Local employers in the health care industry, particularly rural long-term care providers, have had a difficult time finding sufficient staff for their operations and had identified high turnover rates as a problem. Screening of candidates: One-stop staff screen jobseekers before they enter the health care academy, and through this process, some jobseekers realize that health care careers are not a good personal choice for them. In those cases, one-stop staff then help these jobseekers explore other possible careers. This screening keeps health care employers engaged because the jobseekers who complete the training and get hired are more likely to succeed. Single point of contact: Employers contact health care program coordinators if they are seeking a new hire, which allows employers to find recent health care academy graduates with skills that meet their needs. Industry-knowledgeable staff: The program coordinators are also nurses, which helps build credibility with employers. The workforce investment board’s role The board determined that training additional workers for jobs in health care was a key local need, and has supported the pre-employment training with both WIA and non-WIA funds. The board’s health care subcommittee provides industry input, and includes people who are not on the full workforce board, according to board officials. Key federal and state support The board has mostly funded the academy with grants from the state and federal government. For those who are eligible, the board also uses WIA formula funds to pay for Pre-employment Healthcare Academy classes. The board has used the pre-employment academy approach in the manufacturing and energy sectors in the past, and other workforce boards have used the health care academy as a model in other sectors such as manufacturing. 
The board has estimated there has been a greater than 6-to-1 return on investment from the health care academy, factoring in taxes paid and foregone public assistance for newly employed graduates. Overview The Business Services unit overseen by the local workforce board recognized that area manufacturers were closing or under stress. To provide assistance, in 2009, Business Services offered workshops that addressed a variety of business needs, according to board staff. Following the feedback received at those workshops, however, staff realized that more in-depth assistance was needed to help employers prevent layoffs. At the direction of the local workforce board, Business Services issued a request for proposals to provide employers with services to improve the efficiency of their processes and reduce their costs, and selected CMTC, an affiliate of Commerce’s Manufacturing Extension Partnership, as the service provider. Workforce challenge One-stop staff reported that the local economy struggled during the recent economic downturn, with high unemployment and numerous foreclosures. Local businesses were increasingly unable to meet their financial obligations. Business closures were becoming increasingly prevalent, primarily due to decreasing sales. To avoid further layoffs, businesses said they needed to reduce operating costs and increase sales. CMTC provided intervention services to small and medium-sized at-risk manufacturers. Sometimes, these services included assistance with marketing or helping an employer achieve specific industry certifications. However, partners agreed that when an employer needed to improve efficiency, there was also a training component for incumbent workers. Almost all of the participating employers received services that addressed workforce issues, according to officials. Workforce services included training incumbent workers in more efficient techniques. 
Some key factors of this collaboration were: Streamlined data collection: Partners reported that the intake form for employers interested in layoff-aversion services was kept short, but still addressed essential data needed for program requirements and performance measures. Single point of contact: Each employer was assigned a consultant to provide personalized services. Industry-knowledgeable staff: CMTC was familiar with the business community’s needs, according to an official. The workforce investment board’s role Officials told us that the board’s decision to fund the Business Services unit (even in lean budget years) proved critical, as it established relationships with employers that allowed for better communication with the workforce system. Key federal and state support The training was funded through Recovery Act and WIA Dislocated Worker Rapid Response funds. The training was provided under a waiver from Labor that allowed the board to use Rapid Response funds for layoff-aversion purposes. Reported results CMTC provided services to 15 businesses and training to incumbent workers as needed. Using an economic impact tool, officials estimated that this intervention added $5.7 million to the local economy. Benefits for jobseekers or workers: According to the reported MEP results, as of May 2011, the employers reported that 71 new jobs were created as a result of this initiative, and 426 jobs were saved. Benefits for employers: For individual employers reporting cost savings over a 12-month period, benefits ranged from $50,000 to $1,000,000. Collectively, employers also reported that total sales increased by more than $3.3 million during the same period. A workforce official estimated that approximately 80 percent of the businesses would have closed without intervention. 
A representative from a business that received CMTC’s services said that his company had been in a precarious financial position that put the employees’ jobs at risk, but now the company is on a growth trajectory. On a scale of 1 to 10—with 10 as the highest level of satisfaction—the participating employers reported an average satisfaction score of 9.6. Benefits for workforce system and other partners: An official told us that the Business Services unit streamlined its operations as a result of the partnership. For example, staff simplified intake forms and kept the project organized in a way that reflected business practices. Another benefit identified by officials was an increase in mutual understanding and awareness between business and the workforce system. As evidence that the partnership was strengthened, an official told us that a representative of one of the employers that received services wanted to join the board. Moreover, the partnership created new opportunities for the workforce investment system. For example, a representative of a small business told us that his company expected to hire new workers by the end of 2011, and that he would consider using the workforce system to hire them. Technical Employment Training, Inc. Workforce challenge According to employers, local companies have been struggling to find skilled machinists, despite the county’s high unemployment rate. One company had recruited skilled workers from Switzerland and another had created an additional, lower-ranking position for new hires lacking the necessary entry-level skills. Screening of candidates: Applicants are screened prior to the training, and before being referred to companies for interviews, they are screened again by a placement specialist familiar with the employers’ needs. Industry-recognized credentials: Students earn National Institute for Metalworking Skills certifications. 
The workforce investment board’s role The board approved the core contents of the training, offered technical assistance and program development guidance, referred trainees to the program, and required that the classes provide industry-recognized credentials upon completion. The board also funded the first TET class with a Recovery Act-funded contract. Key federal and state support The Recovery Act funded 30 of 36 trainees in the first TET class. The San Bernardino City Employment and Training Agency used Recovery Act funds to fund the second class. The California State Department of Rehabilitation funded part of the third class. Reported results The average completion rate for the three training classes is 90 percent. Board officials noted that 32 percent of the trainees who completed TET’s training faced a barrier to employment, such as a criminal record, a positive drug test, or a physical disability. Some reported results of the initiative include: Benefits for jobseekers or workers: On average, nearly 75 percent of graduates are employed or full-time students, which an official said was particularly notable because of the area’s high unemployment rate. Of those who graduated from the first class, 85 percent are now employed or full-time students. Moreover, officials told us that some workers who previously struggled to find employment because of criminal records are now employed. In addition, some employers told us that they hired TET graduates at higher wages than other new hires. Employers were pleased by the quality of the employees graduating from TET. They noted the importance of soft skills, such as work ethic, as well as technical skills. Employers told us that TET trainees made fewer mistakes, and they did not have to spend as much time training TET graduates in order for them to reach proficiency. 
Officials reported that employers consistently return to TET to interview and hire recent graduates. Benefits for workforce system and other partners: The board reported that of the 30 trainees funded by the Recovery Act in the first TET class, 28 completed the program. One official told us that keeping dropout rates low is a more efficient use of the workforce system’s funds. Overview In 2002, the Washington State Hospital Association and Seattle’s local workforce board convened a collaborative panel of representatives from hospitals, labor groups, local colleges, and other partners to address worker shortages in the local health care industry. This group came to be known as the Health Care Sector Panel, and their work subsequently launched a collection of mutually reinforcing projects which share the goal of maintaining both the short- and long-term pipelines of health care workers. A few of these projects are discussed below. Workforce challenge According to a 2003 report that Seattle’s local workforce board and the Washington State Hospital Association produced for the Health Care Sector Panel, health care facilities throughout the state faced a critical lack of staff, including registered nurses, licensed practical nurses, and radiology technicians, among others. Patients were turned away from 55 percent of the state’s emergency departments because of nursing shortages. The health care workforce was retiring faster than it could be replaced, colleges throughout the state had long waiting lists for courses in health care fields, and up to 30 percent of students in some health care training programs were dropping out before completing their training. Officials said that it would have been difficult to increase the pipeline of Seattle’s health care workers without providing additional training and support to those already in health care jobs so that they could progress to more advanced work. 
For example, one project positioned career specialists on-site at hospitals and other facilities to provide career counseling and subsequent training to incumbent staff. To add new workers to the pipeline, WIA youth were enrolled in a college-level Certified Nursing Assistant program with some wraparound services to help them navigate the educational and employment systems. For example, WIA case managers were paired with coordinators on college campuses to help youths register for classes, apply for financial aid, and utilize student services. Rather than seeking training only for individuals, the board also used Recovery Act and state matching funds to purchase blocks of training from seven local colleges in order to seat entire cohorts of trainees in classrooms. Each trainee was paired with a board-funded employment specialist, and in some cases the board worked with the colleges to shape the curriculum. In addition, the board also found that requiring certain students to complete basic skills education before beginning vocational training impeded course completion, and therefore worked with the colleges to integrate basic skills and vocational instruction by placing a basic skills instructor in some vocational classes. Work stemming from the Health Care Sector Panel continues, although the panel itself is not active. A key factor of their collaboration was: Limited or action-oriented meetings: Officials said that short, focused meetings helped keep leaders involved. Professional facilitators were brought in for the first few panel meetings to ensure that time was well-spent. The workforce investment board’s role The board has been involved in sector work on an ongoing basis. Employers who were already serving on the board also participated on the Health Care Sector Panel and helped identify and attract additional partners. 
For the individual health care projects, the board also served as a neutral convener, planner, project manager, and as a data collector and reporter. The board particularly focused on job placements that allowed jobseekers to attain self-sufficiency, and adopted a tool called the Self-Sufficiency Calculator to measure those results. Key federal and state support From the beginning, projects that stemmed from the Health Care Sector Panel have had a wide variety of funding support. For example, career advancement projects have been supported by WIA formula funds, Governor’s set-aside funds, federal H-1B grant monies, and state funds, among others. A youth training project was piloted with Governor’s set-aside funds and has been sustained through WIA Youth formula funds and a state program that allows high-school students to earn college credit. Training for cohorts of health care jobseekers was purchased with Recovery Act and state matching funds. In October 2010, the board was awarded a 5-year, $11 million grant from the U.S. Department of Health and Human Services to support work in the health care sector. Benefits stemming from the Health Care Sector Panel’s work include the replication of the sector panel approach in other industries, and relationships with employers that generate knowledge and a positive regard for the workforce system. An education official also said that this type of collaboration changed the way hospitals and colleges communicate and conduct business in the Seattle area. Partners agreed that as a result of the initiative, hospitals and colleges approach workforce problems in a more systematic fashion, and colleges have stronger industry connections. Southeast Michigan Community Alliance Michigan Works! (Workforce Investment Board) Overview In response to employer-defined needs, the state workforce agency created the Michigan Academy for Green Mobility Alliance (MAGMA). 
The academy’s mission is to develop courses to help provide rapid skill growth in emerging green technologies in vehicle design. MAGMA Advisory Group members reported that collaboration helped the “Big 3”—Ford, Chrysler, and General Motors—work together and see the benefits of cooperating with competitors to train new workers. The academy classes are offered to incumbent and displaced engineers and technicians (see photo). Initiative staff identified the following as integral to collaboration: Industry-knowledgeable staff: Initiative staff became knowledgeable about the engineering competencies needed in the automotive industry, to help build credibility with employers. Employer input into curriculum: Employers have suggested changes to the courses and can provide information about their current and future workforce needs, initiative staff said. Workforce challenge In 2007, according to a state official, an automotive employer projected a need for about 500 electrical engineers with skills in designing hybrid vehicles. Unable to find such electrical engineers locally, the company reported that it would need either to recruit them from overseas or to send its workers abroad for training. More generally, the state recognized that growth in production of “green” mobility products such as electric cars created an increased need for workers trained to work on the new technology. The workforce investment board’s role MAGMA board members reported that the workforce board helped administer grants and managed the administration of MAGMA. Additionally, Southeast Michigan Community Alliance Michigan Works! is the lead board for the six other workforce boards in the initiative, initiative staff reported. Key federal and state support To fund the initiative, the Southeast Michigan Community Alliance Michigan Works! 
applied for statewide grants, which came from WIA Governor’s set-aside funds, according to initiative staff. Staff said they also used WIA formula dollars from several boards, employer contributions, and another state grant for incumbent worker training. Reported results Initiative staff reported the following benefits from the initiative: Benefits for jobseekers or workers: According to initiative staff, 312 people completed the training from the fall of 2009 through January 2011, including 30 dislocated workers, and 281 additional students had enrolled for 2011 summer/fall academy classes. Benefits for employers: According to initiative staff, MAGMA will serve 15 companies in the summer and fall of 2011, including Chrysler, Ford, and General Motors. Initiative staff said they believe that employers’ continued encouragement of their workers to participate in training is evidence that the process is meeting their needs. Furthermore, employers continue to provide information on their training needs. Benefits for workforce system and other partners: According to board staff, employers have now embraced the local workforce system, which they had not done in a sustained way in the past. Also, according to board staff, the initiative has helped the workforce system better align its resources with employers’ information about their workforce needs in a way that is more directly associated with employment opportunities. In addition, a state official noted that the effort spurred other similar initiatives, such as convening employers to discuss their skill needs in the area of battery storage technology. Overview The goal of this initiative, known as NoVaHealthFORCE, is to establish a long-term, sustainable, business-driven strategy to address the area’s shortage of nurses and allied health professionals. 
The initiative’s partners—the board, local hospitals, and educational institutions—commissioned a report to study the problem and developed an action plan to address it in different ways. They sought to increase capacity within the health care training and education system, develop and sustain an ongoing supply of persons interested in entering health care careers, and nurture innovation, such as by developing a forum for best practices. Workforce challenge Northern Virginia’s health care employers faced workforce shortages in nursing and 23 other health occupations, including therapists and technicians in a variety of fields. Vacancy rates of 5 percent or higher are significant in the health care industry, and the area’s vacancy rate had reached 10 percent for nonmanagerial registered nurses, for example. As a result, employers faced an increasing cost of labor as they tried to hire the limited number of workers away from one another. Moreover, officials noted that some hospital units might be forced to close in the event of a staffing shortage. Officials cautioned that merely providing subsidies for additional students to receive training would have been an insufficient response to the workforce shortage: they needed to address the lack of available faculty for training, increase the availability of clinical training sites, and improve the preparedness of youth interested in health care careers. Since 2006, the Virginia General Assembly and six regional health care providers have provided financial support to increase the number of nursing faculty. The initiative’s participating employers contributed other forms of support as well. For example, employers provided subsidies to develop new radiation oncology and sonography programs. Education officials said these programs could not have been developed without those subsidies. 
Initiative partners said they had integrated new partners, such as local elected officials and the Chamber of Commerce, by citing the business costs incurred with increased health care costs. Meanwhile, they identified the following areas as some of the elements that supported their collaboration: Limited or action-oriented meetings: Biannual roundtable meetings provide chief executive officers, educational leaders, and other partners the opportunity to talk about results and about the next stages of implementation without overburdening their schedules. Program manager: Partners said that having a full-time program manager was important to the initiative. Streamlined data collection: The initiative’s quarterly data collection and reporting system focuses on key metrics to generate discussions about partners’ needs. The workforce investment board’s role The board helped the other partners prepare an action plan, and a nonprofit arm of the board served as the initiative’s fiscal agent. Partners said that the board was perceived as a neutral arbiter that could bring people together to solve regional problems. Officials said that the board’s neutrality was critical to ensuring the participation of competitors within the business and education communities. Key federal and state support Hospitals pledged money to support new programs at local colleges if the state would match the funds, which it did. Labor’s Community-Based Job Training grant supported efforts to expose secondary students to careers in health care, among other activities. The initiative helped increase the supply of skilled workers, as reflected by education partners’ expansion of their capacity to train nurses by approximately 35 percent. In addition, to help meet employers’ needs, two institutions developed an accelerated program of study in nursing. 
Generally, the partners agreed that the urgency of employers’ needs had abated, although they anticipated future needs as retirements of existing workers begin to accelerate. Benefits for workforce system and other partners: The workforce system benefited from strengthened relationships with employers, as indicated by one former hospital executive’s appreciation for the role the board had played in helping to identify qualified workers. Also, partners reported that the strategies they had developed could be applied to other sectors. Workforce Alliance of South Central Kansas (Workforce Investment Board) Workforce challenge Local companies, including Boeing, Raytheon, Cessna, and Bombardier, identified an impending shortage of skilled workers, driven both by imminent retirements and by the need to upgrade workers’ skills and maintain competitiveness as a result of the increasing use of composite materials in aircraft manufacturing, according to local stakeholders. Single point of contact: Partners said that establishing a single point of contact for employers was key to facilitating collaboration. Employer input into curriculum: Employers had direct input into the curriculum, and aimed to accelerate the integration of research findings into training and production. The workforce boards’ role The boards provided case management and career guidance, screening and placement in training, and convened community partnerships. Key federal and state support Labor’s WIRED grant supported training, equipment, and curriculum costs. The U.S. Departments of Commerce and Housing and Urban Development, and the Small Business Administration provided funds for the facility. The state also provided funding for construction. 
Partners said that employers now see the workforce system as more valuable than before, and that one-stop staff provide better advice to individuals on careers in aviation. Additionally, according to the partners, the experience gained during the initiative was applied to other activities, such as the establishment of a center for advanced manufacturing and an effort to leverage the new skills in composites to grow a new medical device industry cluster, based on the use of composite materials in orthopedic devices, such as knee and hip replacements. In addition to the individual named above, Laura Heald (Assistant Director) and Chris Morehouse (Analyst-in-Charge) led the engagement. Aron Szapiro and Alison Hoenk also made significant contributions to this report in all facets of the work. In addition, Jean McSween assisted with methodology, Rhiannon Patterson lent subject matter expertise, Jessica Botsford provided legal support, Susan Bernstein and James Bennett provided assistance with writing and graphics, and Charles J. Ford and Kathy Leslie also made significant contributions to this report. Factors for Evaluating the Cost Share of Manufacturing Extension Partnership Program to Assist Small and Medium-Sized Manufacturers. GAO-11-437R. Washington, D.C.: April 4, 2011. Multiple Employment and Training Programs: Providing Information on Colocating Services and Consolidating Administrative Structures Could Promote Efficiencies. GAO-11-92. Washington, D.C.: January 13, 2011. English Language Learning: Diverse Federal and State Efforts to Support Adult English Language Learning Could Benefit from More Coordination. GAO-09-575. Washington, D.C.: July 29, 2009. Workforce Investment Act: Labor Has Made Progress in Addressing Areas of Concern, but More Focus Needed on Understanding What Works and What Doesn’t. GAO-09-396T. Washington, D.C.: February 26, 2009. 
Employment and Training Program Grants: Labor Has Outlined Steps for Additional Documentation and Monitoring but Assessing Impact Still Remains an Issue. GAO-08-1140T. Washington, D.C.: September 23, 2008. Workforce Development: Community Colleges and One-Stop Centers Collaborate to Meet 21st Century Workforce Needs. GAO-08-547. Washington, D.C.: May 15, 2008. Workforce Investment Act: One-Stop System Infrastructure Continues to Evolve, but Labor Should Take Action to Require That All Employment Service Offices Are Part of the System. GAO-07-1096. Washington, D.C.: September 4, 2007. Workforce Investment Act: Additional Actions Would Further Improve the Workforce System. GAO-07-1051T. Washington, D.C.: June 28, 2007. Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005. Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005. Workforce Investment Act: Employers Are Aware of, Using, and Satisfied with One-Stop Services, but More Data Could Help Labor Better Address Employers’ Needs. GAO-05-259. Washington, D.C.: February 18, 2005. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. Workforce Training: Almost Half of States Fund Worker Training and Employment through Employer Taxes and Most Coordinate with Federally Funded Programs. GAO-04-282. 
Washington, D.C.: February 13, 2004. Workforce Investment Act: Exemplary One-Stops Devised Strategies to Strengthen Services, but Challenges Remain for Reauthorization. GAO-03-884T. Washington, D.C.: June 18, 2003.

As the United States continues to face high unemployment in the wake of the recent recession, federally funded workforce programs can play an important role in bridging gaps between the skills present in the workforce and the skills needed for available jobs. The Workforce Investment Act of 1998 (WIA) sought to strengthen the connection between workforce programs and employers, but GAO's prior work has found that collaboration remains a challenge. With WIA currently awaiting reauthorization, GAO reviewed (1) factors that facilitated innovative collaborations among workforce boards, employers, and others; (2) major challenges to collaboration; and (3) actions the Department of Labor (Labor) has taken to support local collaborative efforts. GAO examined 14 local initiatives identified by experts as among the most promising or innovative efforts in which local workforce boards collaborated effectively with employers and other partners to achieve positive results. GAO interviewed representatives of the 14 initiatives and officials from five federal agencies. GAO also reviewed reports on the initiatives and relevant federal laws, regulations, and other documents. Workforce board officials and their partners in the 14 initiatives cited a range of factors that facilitated building innovative collaborations. Almost all of the collaborations grew out of efforts to address urgent workforce needs of multiple employers in a specific sector, such as health care, manufacturing, or agriculture, rather than focusing on individual employers. Additionally, the partners in these initiatives made extra effort to understand and work with employers so they could tailor services such as jobseeker assessment, screening, and training to address specific employer needs. 
For example, in Greensboro, North Carolina, board staff provided expedited services for an aircraft company that had just moved to the area by designing a web-based recruitment tool and customized assessment process within 48 hours and screening over 2,400 initial applicants. In all the initiatives, partners remained engaged in these collaborative efforts because they continued to produce a wide range of reported results, such as an increased supply of skilled labor, job placements, reduced employer recruitment and turnover costs, and averted layoffs. For example, in Cincinnati, Ohio, employers who participated in the health care initiative realized almost $5,000 in estimated cost savings per worker hired, mainly due to lower turnover and recruitment costs, according to an independent study. While these boards were successful in their efforts, they cited some challenges to collaboration that they needed to overcome. Some boards were challenged to develop comprehensive strategies to address diverse employer needs with WIA funds. For example, some boards' staff said that while their initiatives sought to meet employer needs for higher-skilled workers through skill upgrades, WIA funds can be used to train current workers only in limited circumstances, and the boards used other funding sources to do so. Staff from most, but not all, boards also said that WIA performance measures do not reflect their efforts to engage employers. Many of these boards used their own measures to assess their services to employers, such as the number of new employers served each year or the hiring rate for jobseekers they refer to employers. Labor has taken various steps to support local collaborations, such as conducting webinars and issuing guidance on pertinent topics, and contributing to a new $37 million grant program to facilitate innovative regional collaborations. Many of the boards we reviewed cited leveraging resources as a key to facilitating collaboration. 
However, while Labor has collected information on effective practices for leveraging resources, it has not compiled this information and made it easy to access. To better support the capacity of the local workforce investment system to collaborate with employers and other partners, Labor should compile information on workforce boards that effectively leverage WIA funds with other funding sources and disseminate this information in a readily accessible manner. Labor agreed with our findings and recommendation. |
Floods are the most frequent natural disasters in the United States, causing billions of dollars of damage annually. In 1968, Congress created NFIP to address the increasing cost of federal disaster assistance by providing flood insurance to property owners in flood-prone areas, where such insurance was either not available or prohibitively expensive. Since its inception, the NFIP has been a key component of the nation’s efforts to minimize or mitigate the financial impact of flood damage on property owners and limit federal expenditures after floods occur. Community participation is central to NFIP’s success. In order to participate in the program, communities must adopt and agree to enforce floodplain management regulations to reduce future flood damage. In exchange, NFIP makes federally backed flood insurance available to homeowners and other property owners (for example, farmers and other businesses) in these communities. As of May 2014, about 22,052 communities were participating in the program. Property owners can purchase flood insurance to cover both buildings and contents for residential and nonresidential properties. Insurable structures must have two or more outside rigid walls and a fully secured roof that is affixed to a permanent site. NFIP’s maximum coverage limit for residential policyholders is $250,000 for buildings and $100,000 for contents. For nonresidential policyholders, the maximum coverage is $500,000 for buildings and $500,000 for contents. Agricultural structures are considered nonresidential structures, so items such as grain stored in a bin or a tractor stored in a shed are covered by contents coverage. Policyholders purchase separate policies for each structure they insure. Deductibles range from $1,000 to $5,000 on residential structures and $1,000 to $50,000 on nonresidential structures. When NFIP was created, property owners were not required to buy flood insurance, so participation was voluntary. 
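The coverage caps described above can be expressed as a small lookup. The following is our own illustrative sketch (the names `NFIP_LIMITS` and `max_coverage` are ours, not FEMA's), showing how a requested coverage amount is bounded by the program's statutory maximums:

```python
# Illustrative sketch of the NFIP coverage limits described in the text.
# The structure and names here are our own, not FEMA's.
NFIP_LIMITS = {
    # occupancy type: (building limit, contents limit) in dollars
    "residential": (250_000, 100_000),
    "nonresidential": (500_000, 500_000),
}

def max_coverage(occupancy: str, building: int, contents: int) -> tuple[int, int]:
    """Return the insurable (building, contents) amounts, capped at NFIP limits."""
    b_cap, c_cap = NFIP_LIMITS[occupancy]
    return min(building, b_cap), min(contents, c_cap)

# A grain bin (nonresidential) holding $600,000 of stored grain: contents
# coverage tops out at the $500,000 nonresidential limit.
print(max_coverage("nonresidential", 400_000, 600_000))  # (400000, 500000)
```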
Congress amended the original law in 1973 to require some property owners to purchase flood insurance in certain circumstances (mandatory purchase requirement). The mandatory purchase requirement applies to owners of properties located in SFHAs in participating communities with mortgages held by federally regulated lenders or federal agency lenders, or who receive direct financial assistance for acquisition or construction purposes. Individuals in SFHAs who receive federal disaster assistance after September 23, 1994, for flood losses to real or personal property are also required to purchase and maintain flood insurance on the property as a condition for receiving future disaster assistance. The 2014 Act permits residential policyholders to forgo coverage for detached structures that do not serve as residences. The 1973 Act also added certain requirements that, according to FEMA officials, were intended to encourage community participation in NFIP. Specifically, communities are required to adopt and agree to enforce adequate floodplain management regulations as a condition of participation in NFIP. In exchange, flood insurance and certain federal disaster assistance will be made available to property owners in the community. Community ordinances or regulations must be consistent with NFIP’s minimum regulatory requirements, although communities may exceed the minimum criteria by adopting more comprehensive regulations. The following are some of the key NFIP building requirements and alternatives for new and substantially improved or substantially damaged structures located in riverine SFHAs. Elevation. All new and substantially improved or substantially damaged structures must be elevated to or above the base flood elevation (BFE). The BFE is the projected level that flood water is expected to reach or exceed during a flood with an estimated 1 percent chance of occurring in any given year. 
The flood depth—the height to which structures should be built—is calculated as the difference between the BFE and the ground elevation, which is established by topographic surveys. Dry flood-proofing. Nonresidential structures, including agricultural structures, may be flood-proofed instead of elevated. Nonresidential structures that are dry flood-proofed are designed to be watertight below the BFE. Wet flood-proofing. FEMA also has guidance to allow communities to grant some categories of nonresidential structures, including certain agricultural structures, an exception from the requirement that certain structures be elevated or dry flood-proofed. This variance enables certain structures to be wet flood-proofed—applying permanent or contingent measures to a structure and/or its contents that prevent or provide resistance to damage from flooding by allowing flood waters to enter the structure. FEMA has instructed communities that variances may be issued for certain types of agricultural structures located in wide, expansive floodplains that are used solely for agricultural purposes, such as storage, harvesting, or drying. These types of structures include grain bins, corn cribs, general purpose barns open on at least one side, and buildings that store farm machinery and equipment. FEMA bases premium rates for NFIP policies on a property’s risk of flooding and several other factors. Specifically, FEMA uses location and property characteristics, such as flood zone designation, elevation of the property relative to the property’s BFE, building type (e.g., residential or nonresidential), number of floors, presence of a basement, and the year of construction relative to the year of a community’s original flood map. Additionally, FEMA uses data on prior claims, coverage amount, and policy deductible amount. NFIP has historically had two types of flood insurance premium rates: those that reflect the full risk of flooding to a property (full-risk rates) and those that do not.
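The flood-depth relationship described above reduces to a one-line calculation; a minimal sketch (the function name is ours):

```python
# Minimal sketch of the relationship described in the text:
# flood depth = base flood elevation (BFE) minus surveyed ground elevation.
def flood_depth(bfe_ft: float, ground_elevation_ft: float) -> float:
    """Depth of the base flood above ground, in feet (0 if ground is above the BFE)."""
    return max(0.0, bfe_ft - ground_elevation_ft)

# A BFE of 48 ft over ground surveyed at 45 ft means structures must be
# elevated (or dry flood-proofed) roughly 3 ft above grade.
print(flood_depth(48.0, 45.0))  # 3.0
```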
Properties that have not been charged property-specific full-risk rates have included those with grandfathered and subsidized rates. The largest number of subsidized policies has been for properties built before the initial flood insurance rate maps became available. The authority for subsidized rates was included in the National Flood Insurance Act of 1968 as an incentive to encourage participation in the program. In July 2012, Congress enacted the Biggert-Waters Act, which made significant changes to FEMA’s ability to charge subsidized rates. These changes phased out existing subsidies for certain types of properties through 25 percent annual premium increases until the full-risk rate is reached, including business properties, residential properties that are not a primary residence, properties that have experienced or sustained substantial damage exceeding 50 percent of fair market value or substantial improvement exceeding 30 percent of fair market value, and severe repetitive loss properties. For other properties, the Biggert-Waters Act raised the cap on annual premium rate increases from 10 percent to 20 percent, by risk class. The Biggert-Waters Act also prohibited subsidies from being extended for homes sold to new owners and removed them if properties were not covered or had a lapse in coverage after the date of enactment of the act as a result of the policyholders’ deliberate choice. However, the 2014 Act reinstated premium subsidies for properties that were purchased after July 6, 2012, and properties not insured as of July 6, 2012. It also generally limited annual increases in property-specific premium rates to 18 percent for policies not covered by the 25-percent increases by the Biggert-Waters Act, although it changed the substantial improvement threshold to 50 percent from the Biggert-Waters Act’s 30 percent. 
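Under the phase-out described above, a subsidized premium compounds by 25 percent a year until it reaches the full-risk rate. A hedged sketch with illustrative dollar figures (these are not actual NFIP rates):

```python
import math

# Sketch of the Biggert-Waters phase-out described in the text: how many
# annual increases it takes a subsidized premium to reach the full-risk rate.
# Dollar amounts below are illustrative, not actual NFIP figures.
def years_to_full_risk(current: float, full_risk: float,
                       annual_increase: float = 0.25) -> int:
    """Smallest number of annual increases after which the premium reaches full risk."""
    if current >= full_risk:
        return 0
    return math.ceil(math.log(full_risk / current) / math.log(1 + annual_increase))

# A $1,000 subsidized premium with a $3,000 full-risk rate takes 5 years
# of 25-percent increases (1000 * 1.25**5 is about 3,052).
print(years_to_full_risk(1_000, 3_000))  # 5
# Under the 2014 Act's 18-percent cap for other policies, the same gap
# would take longer to close.
print(years_to_full_risk(1_000, 3_000, annual_increase=0.18))  # 7
```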
The 2014 Act does not remove the phase-out for policies covering nonprimary residences, severe repetitive loss properties, and business properties, among others. The Biggert-Waters Act also generally prohibited the grandfathering of rates after future remapping and required any rate increases stemming from future remapping to be phased in over time. However, the 2014 Act eliminated the Biggert-Waters Act’s changes to grandfathering provisions, but included a provision which may prohibit grandfathering in limited situations. FEMA creates maps that show the degree of flood hazard so that properties in participating communities can be assigned actuarial premium rates—that is, rates that reflect the full risk of flooding—for insurance purposes. Flood maps also show SFHAs for which communities must adopt and enforce building requirements as part of their NFIP participation. Lending institutions use flood maps to identify properties that are required to have flood insurance and to help ensure that the owners buy and maintain it. FEMA engineers create flood maps using statistical information such as data for river flow, storm tides, hydrologic/hydraulic analyses, and rainfall and topographic surveys. The results of the topographic and flood hazard analyses are combined and integrated into digital maps that depict floodplain boundaries and the projected height of the base flood—the flood level that has a 1 percent chance of being equaled or exceeded in any given year. NFIP establishes flood zone designations through its mapping process (see table 1). Areas designated as A, AE, V, or VE zones have a high risk of flooding and are considered SFHAs. Areas designated as V or VE zones are located along the coast and have an additional hazard associated with storm waves. Areas with a moderate to low risk of flooding are designated as B, C, or X zones. Areas where flood risk is possible but undetermined are designated as D zones.
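The "1 percent chance in any given year" definition of the base flood implies a much larger cumulative risk over time, a figure often quoted over the life of a 30-year mortgage. A short worked example:

```python
# Worked example of the cumulative risk implied by the 1-percent-annual-chance
# base flood defined in the text: the chance of at least one base flood in
# n years is 1 - (1 - p)**n.
def chance_of_base_flood(years: int, annual_prob: float = 0.01) -> float:
    """Probability of at least one base flood in the given number of years."""
    return 1 - (1 - annual_prob) ** years

# Over a 30-year mortgage, a property in an SFHA faces roughly a 26 percent
# chance of experiencing at least one base flood.
print(round(chance_of_base_flood(30), 3))  # 0.26
```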
For the purpose of our study, we are considering areas with flood zone designations beginning with an A to be high-risk riverine floodplains. FEMA is required by statute to assess the need to revise and update all floodplain areas and flood risk zones at least every 5 years. The agency has undertaken two initiatives to update and modernize its flood maps. Until 2003, flood maps were created and stored in paper format. From 2003 to 2008, FEMA spent $1.2 billion to upgrade the nation’s flood maps to digital format as part of the Map Modernization initiative. Through this program, FEMA created digital flood maps for more than 92 percent of the population. In fiscal year 2009, FEMA began a 5-year initiative—Risk MAP—to improve the quality of data used in flood mapping. FEMA’s goals for the initiative include addressing gaps in flood hazard data; increasing public awareness of risk; and supporting mitigation planning by state, local, and tribal entities. Risk MAP’s primary areas of focus are coastal flood hazard areas, areas affected by levees, and significant riverine flood hazards. Risk MAP received $325 million in appropriations in fiscal year 2009, but appropriations have declined since, falling to about $216 million in fiscal year 2014. According to FEMA officials, the agency prioritizes its mapping projects based on needs and risk and balances them with available funding. Need is determined by assessing current flood data and changes since the last update. Risk is assessed largely by population and the number of structures and their exposure to flood hazards. While rural and agricultural areas may have needs identified, they are generally low risk and thus may not be a high priority for map updates. According to FEMA officials, low-risk areas are more likely to receive approximated mapping studies than detailed mapping studies. Approximated mapping studies are not based on the same quality or quantity of data as are detailed studies.
Maps made using approximated studies also do not show the BFE. This may require that communities or property owners in those areas obtain a BFE from local or state officials, developers, or other organizations. They may also develop their own BFE by hiring an engineer or surveyor or using guidance provided by FEMA, according to FEMA officials. However, according to FEMA officials, some rural or agricultural areas would have been a part of these mapping efforts because, for Risk MAP, FEMA maps on a watershed basis—a watershed being a large area of land that may include both populated and unpopulated areas. Flood maps also reflect whether an area is protected by an accredited levee. In order to have a levee accredited, the owners or community officials must demonstrate that the levee system provides adequate flood protection and has been adequately maintained by submitting an engineering certification indicating that the levee complies with established criteria. If a levee receives accreditation, property owners in the area it protects may not be subject to the mandatory purchase requirement if the area is not mapped as an SFHA. In some cases, areas behind accredited levees are still prone to flooding due to a lack of interior drainage or flooding from other sources and will therefore still be mapped as an SFHA, resulting in the property owners behind that levee still being required to purchase flood insurance. Because FEMA does not identify whether floodplains are in urban or rural areas for the purposes of administering NFIP, we used available data to estimate the location of rural communities and agricultural areas in riverine SFHAs. We defined rural areas as areas that are not considered urbanized areas or urban clusters using U.S. Census Bureau data. We defined agricultural areas as those counties with 50 percent or more of their land areas used in agriculture, according to USDA’s Atlas of Rural and Small-Town America.
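The study's working definitions in the preceding sentences can be summarized as a simple classifier. This helper is our own construction, assuming a county's Census urban flag and USDA agricultural land share are already in hand:

```python
# Sketch of this report's working definitions (the helper is ours):
# rural = not a Census urbanized area or urban cluster;
# agricultural = a county with 50 percent or more of its land in agriculture.
def classify_county(in_census_urban_area: bool,
                    agricultural_land_share: float) -> set[str]:
    """Return the labels this study would apply to a county."""
    labels = set()
    if not in_census_urban_area:
        labels.add("rural")
    if agricultural_land_share >= 0.50:
        labels.add("agricultural")
    return labels

# A non-urban county with 62 percent of its land in agriculture gets both labels.
print(sorted(classify_county(False, 0.62)))  # ['agricultural', 'rural']
```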
Figure 2 shows the location of riverine SFHAs according to FEMA’s flood map data in the areas we defined as agricultural areas and rural communities. Our analysis of FEMA data showed that the population mapped in rural and agricultural SFHAs stayed about the same during FEMA’s Map Modernization initiative, though certain areas saw increases or decreases. Overall, the population in rural and agricultural SFHAs increased by 0.11 percent through Map Modernization, while the population in urban SFHAs decreased by 0.8 percent. Based on interviews with floodplain management officials, farmers, and others in selected communities, the effects of NFIP’s building requirements for agricultural structures have generally varied. To comply with these requirements, new or substantially improved nonresidential structures in high-risk areas must be elevated or dry flood-proofed. FEMA guidance issued in 1993 noted that communities could allow wet flood-proofing, which permits water to flow through a structure, for some nonresidential structures, including certain types of agricultural structures located in vast, expansive floodplains. However, the agency acknowledged that the methods included in the guidance do not cover all of the different types of agricultural structures located in vast floodplains with deep flood depths and may not reflect the changes in the size and scale of farm operations in recent years. Without additional guidance from FEMA, farmers may face challenges in effectively complying with its building requirements. We found that the effects of NFIP building requirements varied in selected communities and that the requirements negatively affected certain farmers who were located in vast floodplains with relatively deep flood depths.
We selected eight geographically diverse locations in SFHA riverine floodplains in California, Louisiana, North Carolina, and North Dakota that supported crops or livestock requiring onsite agricultural structures. Representatives from FEMA, USDA, and national floodplain management and farm organizations told us that they were unaware of any farmers in these states or others who faced negative effects on their operations from the NFIP building requirements (e.g., elevation, dry flood-proofing, or wet flood-proofing for certain nonresidential structures). State and local floodplain managers we spoke with from Louisiana, North Carolina, and North Dakota also said that they were not aware of any widespread concerns that farmers were having with NFIP’s building requirements or of any negative effects the requirements might be having on agricultural expansion. Similarly, 12 farmers in the communities we selected concurred with these views and generally told us that they had not been adversely affected by NFIP building requirements. However, state and local floodplain managers we spoke with from California said that some farmers in their state had been negatively affected by the requirements. The California state floodplain manager told us that the affected farmers typically lived and operated in agricultural areas behind levee systems that trapped water and had deep flood depths—up to 15 feet in some areas, compared with 1 to 6 feet in other states. The deep flood depths make it difficult for the farmers to build new structures in accordance with NFIP requirements because of the cost and complexity of elevating and dry or wet flood-proofing the new structures. This challenge is especially acute in several counties along the lower Sacramento River, including Sutter and Yolo Counties, where building requirements had affected farmers’ ability to expand or rebuild agricultural structures, according to the California state floodplain manager.
In addition, representatives of an agricultural floodplain management group whose members are primarily from California’s Central Valley said that farmers they represented were concerned about the financial and technical feasibility of elevating or flood-proofing some agricultural structures to meet NFIP’s building requirements. The 11 farmers we spoke to in these two communities shared these concerns and told us that they had experienced similar negative effects due to the NFIP building requirements. Two key factors may partly explain the differing views of farmers in California as compared to those in the other selected rural and agricultural communities regarding the effects of NFIP building requirements. First, SFHAs in the two California communities have greatly increased in size in recent years compared to the other communities (see fig. 3). According to FEMA, the increase was mainly a result of areas behind unaccredited levees at risk of flooding being remapped into SFHAs. Second, the requirement to elevate or dry flood-proof structures above the BFE is harder to meet in the California communities because the flood depth is up to 15 feet in certain areas, compared to the other selected communities in North Dakota and Louisiana, whose flood depths range from 1 to 6 feet. Farmers in Louisiana, North Carolina, and North Dakota generally have been able to expand their operations in areas outside of SFHAs. For example, local floodplain managers in Duplin and Tyrrell Counties (North Carolina) told us that large livestock processing plants were usually built outside of SFHAs after Hurricane Floyd in 1999 destroyed millions of livestock in the state. Because of the severe damage from this hurricane, the state encouraged farmers to build their agricultural structures outside of SFHAs whenever possible.
In addition, according to some farmers we spoke to in the selected Louisiana communities, at least a portion of their farmland was in non-SFHA areas, and they built or expanded their agricultural structures in those areas. As a result, they were not required to comply with the NFIP building requirements because those structures were not built in SFHAs. Further, four farmers in the Louisiana communities told us that they generally built their agricultural structures at the highest points on their farms, areas that were outside the SFHA. (Updated levee analysis can result in levee de-accreditation—that is, a determination that a levee no longer meets federal design, construction, maintenance, and operation standards to provide protection from a major flood. Subsequently, areas behind the levees can be remapped into SFHAs. See 44 C.F.R. §§ 65.10, 65.14.) In contrast, a walnut farmer in Sutter County could not build outside the SFHA because he could not process crops that far from the harvest area (which lay inside the SFHAs); the walnuts could be damaged during transport. We also found that the California farmers from our selected communities experienced greater challenges in elevating structures than farmers in other areas. Local floodplain managers from the selected communities in Louisiana, North Carolina, and North Dakota told us farmers in their communities typically needed to raise building foundations by just a few feet (which they were generally able to do by adding fill dirt) to meet the BFE requirements for structures built inside SFHAs. Farmers we spoke to also concurred with these views. For example, a farmer from Louisiana’s St. Landry Parish who grows rice and soybeans and raises crawfish told us that although most of his structures were outside of the SFHA, he took precautionary steps to elevate them all—those outside it as well as those within it—by at least 2 feet based on his experience with regular flooding in the past and estimated future flooding trends.
However, in both Sutter and Yolo Counties in California, the flood depths were relatively deeper (up to 15 feet in some areas). The Sutter County floodplain manager explained that elevating a structure 3 or more feet could require a base, or building pad, that occupied much more square footage than the structure. It could require additional land to build a slope that was not too steep to allow access to the structure. A slope that was too steep could present an obstacle for truck and equipment movement, making it impractical to conduct business. Further, 7 farmers there told us that it was technically difficult and cost prohibitive to elevate structures to the required height. According to state and local floodplain managers and farmers we spoke with, farmers in Sutter and Yolo Counties who were subject to the NFIP building requirements were also facing challenges flood-proofing their new or substantially expanded agricultural structures to comply with NFIP building requirements. FEMA allows new, substantially improved, or substantially damaged nonresidential structures, including agricultural structures, to be dry flood-proofed (made watertight below the BFE). However, according to FEMA guidance, dry flood-proofing is often feasible only when the flood depth is less than around 3 feet, because deeper flood depths produce pressure on structures that may crack the walls or cause them to collapse. In addition, a local floodplain manager and a farmer told us that, regardless of the flood depth, it would be difficult to dry flood-proof structures used for rice and fruit drying because these buildings needed large openings for fan exhausts to dry the crops and prevent moisture from spoiling them (see fig. 4). 
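The Sutter County floodplain manager's point about building pads can be made concrete with simple geometry: a fill pad must extend beyond the structure on every side by (slope ratio x fill height), so the pad's footprint grows quickly with the required elevation. This is our own sketch; the 4:1 side slope is an illustrative assumption, not an NFIP figure.

```python
# Rough geometry behind the building-pad challenge described in the text.
# A pad with uniform side slopes extends past the structure by
# (slope_ratio * fill height) on each side; the 4:1 ratio is illustrative.
def pad_area_sqft(width_ft: float, length_ft: float, fill_height_ft: float,
                  slope_ratio: float = 4.0) -> float:
    """Footprint of an earthen fill pad with uniform side slopes around a structure."""
    extra = 2 * slope_ratio * fill_height_ft  # added to each plan dimension
    return (width_ft + extra) * (length_ft + extra)

# A 50 x 80 ft barn (4,000 sq ft) elevated 3 ft on 4:1 slopes needs a
# 74 x 104 ft pad -- nearly double the barn's footprint.
print(pad_area_sqft(50, 80, 3))  # 7696.0
```

Shallower (less steep) slopes ease truck and equipment access but enlarge the pad further, which is why deep flood depths can make elevation impractical on a working farm.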
FEMA has provided guidance on wet flood-proofing as an alternative to elevation and dry flood-proofing for certain nonresidential structures, including agricultural structures, but officials recognize that this guidance still may not be sufficient for assisting farmers in riverine floodplains with deep flood depths. After a catastrophic flood in the Midwest in 1993 demonstrated the need for alternative methods of meeting building requirements, FEMA issued guidance that same year allowing certain structures that cannot be elevated or dry flood-proofed to be wet flood-proofed, allowing water to flow through a building while minimizing damage to the structure and its contents. However, wet flood-proofing may not be viable for certain agricultural structures. For example, according to Sutter County’s floodplain manager, USDA and the Food and Drug Administration have requirements for the water-tight storage of certain farm products, making wet flood-proofing not a viable option. The walnut farmer from Sutter County whom we spoke to further explained that as a result of these requirements, he had to seal the structure to prevent cross-contamination of different crops, something that is important for allergy sufferers. Another farmer told us that if water could get into openings, so could pests that would damage crops. Further, crops such as rice would be ruined if moisture entered the structure. Furthermore, FEMA’s current guidance does not take into account important changes to the agricultural industry that have occurred in recent years. According to FEMA and USDA officials, the agricultural industry has become more consolidated, which has greatly increased the size and scale of farm operations. For example, supporting agricultural structures are now much more expensive to build and replace and may represent unique challenges not envisioned in the existing guidance.
Such changes in the agricultural industry underscore the need for FEMA to periodically update and provide additional guidance that reflects current conditions. The absence of current guidance on alternative methods has led some farmers to “work around” the building requirements. Six farmers we interviewed in Yolo and Sutter Counties in California told us that they worked around the building requirements while trying to expand their businesses. Two farmers in these communities told us that they had quickly built their facilities before flood map revisions placed their farms in SFHAs. A nursery farmer in Sutter County built a laboratory in an existing warehouse to avoid building a separate structure, although he lost the warehouse function. Three of the farmers said that instead of building new structures, they were careful to make incremental additions or repairs that were below NFIP’s substantial improvement threshold. Two of the farmers also told us that, rather than building anything separately, they attached every expansion to an existing structure, thus sacrificing space for loading and unloading. Because it is costly or, in certain circumstances, not technically feasible to comply with current NFIP building requirements, some farmers in our selected California communities were concerned about future expansion after recent map updates. Three farmers cited the importance of agriculture to the local economy and said that agriculture was the best use for floodplains. However, these workarounds may not fully address the long-term expansion needs of these farmers, and more importantly, the workarounds may ultimately defeat the purpose of the NFIP building requirements because they may increase the risks of flood damage to the structures.
FEMA officials stated that it is their practice to update technical guidance as needed and recognized that the challenges some farmers faced in expanding or building agricultural structures in SFHAs might call for additional approaches for complying with NFIP building requirements. Officials explained that FEMA has not updated the guidance for wet flood-proofing in over 20 years because the agency thought the guidance covered the types of agricultural structures that could be feasibly wet flood-proofed. However, FEMA has identified the need for better ways to protect structures, especially in wide, expansive floodplains where flood depths may range from a few feet to 20 feet or more. In particular, FEMA officials said they would like to further evaluate the vulnerability of structures and their contents to flood hazards and identify how mitigation measures, such as elevation, dry and wet flood-proofing, and other measures could be used to minimize flood damage. FEMA also plans to solicit input from structure manufacturers and from farmers. FEMA officials told us that they intend to begin updating all technical bulletins, including the 1993 bulletin, in the next 18 months; however, they are at a preliminary stage and have not yet identified resources for such a study or determined its scope and time frames for completion. In addition, FEMA officials told us that, although a recent statutory mandate in the 2014 Act for providing new guidelines on alternatives to elevation is specifically required for residential structures, they plan to issue broader guidance that could apply to nonresidential structures as well. Without updating and providing additional guidance, FEMA is missing an opportunity to help farmers who face challenges in effectively complying with its building requirements, especially if more agricultural production areas are remapped into SFHAs.
Such guidance may be needed not only by farmers in the selected California communities we reviewed but also in other similar agricultural areas across the country. Specifically, FEMA officials noted that there are other agricultural areas in vast riverine floodplains with deep flood depths across the country—some up to 37 feet—including Southwest Illinois, Northeast Arkansas, Southwest Mississippi, Southeast North Carolina, and Northwestern Missouri. Some stakeholders from selected communities stated that NFIP’s building requirements in SFHAs could contribute to the long-term economic decline of some small towns in rural areas. The local floodplain manager from Yolo County told us that in addition to difficulties in building and expanding agricultural structures, demand for farm worker housing is strong, and the requirement that new or substantially improved homes be elevated up to or above the BFE, which can be up to 15 feet, adds significantly to the already high price of housing. The floodplain manager stated that NFIP building restrictions that make it infeasible to build or expand agricultural structures, including farm worker housing, could reduce both the tax base and the economic stability of the county by driving agricultural businesses elsewhere. However, according to FEMA, the current building requirements are effective in reducing flood-related damage and the loss of life because of specific requirements, such as elevation. Further, according to FEMA, properties that adhere to building requirements sustain less damage and as a result, may have lower insurance premiums, which in turn could make insurance rates more affordable and attract broader participation in the program. Farmers and rural residents we interviewed in Yolo County expressed similar concerns about the economic viability of their communities.
For example, one farmer told us that a small nearby town that had been remapped into an SFHA would likely have trouble attracting viable businesses to keep the community thriving, because the building restrictions meant that businesses could only take over existing structures. Some residents of Yolo County also told us their fire station needed a new roof, which would have been considered a substantial improvement because its cost would have exceeded 50 percent of what the building was worth. However, according to the residents, the county had not allowed permits for any new buildings or substantial improvements to existing buildings since the 2012 map update because FEMA had not designated the BFE for the community. For these reasons, and because undertaking a substantial improvement would have meant elevating or dry flood-proofing the fire station, the town had to do minimal repairs, keeping the costs under the substantial improvements threshold. The mandatory purchase requirement and premium changes resulting from remapping and the elimination of subsidies and grandfathered rates appear to have affected rural home markets more than they have farming operations. For example, some homes affected by these changes might have lost value and become harder to sell and some development has been halted according to some state and local floodplain managers, rural residents, and developers we spoke with. Further, farmers often did not need to buy flood insurance on some structures because they were able to provide their own financing or take other measures, such as obtaining a loan only on land without structures. 
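The 50-percent substantial improvement test that constrained the fire station repairs above can be sketched as a simple comparison (the function name and dollar figures are ours, for illustration):

```python
# Sketch of the "substantial improvement" test discussed in the text:
# work whose cost reaches 50 percent of the structure's market value
# triggers NFIP building requirements for the whole structure.
# Function name and dollar amounts are illustrative.
def is_substantial_improvement(work_cost: float, market_value: float,
                               threshold: float = 0.50) -> bool:
    """True if the proposed work crosses the substantial improvement threshold."""
    return work_cost >= threshold * market_value

# A $120,000 roof on a building worth $200,000 crosses the threshold,
# which is why repairs were instead kept below it.
print(is_substantial_improvement(120_000, 200_000))  # True
print(is_substantial_improvement(90_000, 200_000))   # False
```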
The mandatory purchase requirement and potential premium rate increases associated with recent map updates and, in some cases, legislative changes to NFIP are likely to affect the residential real estate markets in rural areas more than the farming operations in those same areas, according to state floodplain managers and other stakeholders in our selected communities. Representatives from national farm organizations were unaware of any effects of the mandatory purchase requirement on farmers, and local floodplain managers, agricultural lenders, and 12 farmers we spoke with in the selected communities generally agreed that mandatory purchase requirements had not affected agricultural land values. However, all of the state floodplain managers with whom we spoke had heard concerns about the effects on the rural residential real estate market of increased rates resulting from the elimination of some subsidies and grandfathering provisions. In addition, some local floodplain managers, agricultural lenders, and five farmers we spoke with expect that being mapped into an SFHA would have a negative impact on the value of residential housing in certain communities, either now or in the future. For instance, an agricultural lender who worked in both selected communities in Louisiana said that being mapped into an SFHA would decrease the value of residential homes on the market in rural communities because of the increased cost of flood insurance premiums. Also, a resident with whom we spoke who lived in a rural part of Louisiana’s Rapides Parish said that being mapped into an SFHA had reduced the value of his house and made it more difficult to sell, because prospective buyers would see it as prone to flooding.
Similarly, in Walsh County, North Dakota, three residents told us that the requirement to buy flood insurance and the rate increases seen in their community after the SFHA was expanded in a 2012 map update had nearly halted the residential real estate market in their community. One resident said that he had tried to move but could not, because potential buyers walked away when they realized his home was in an SFHA. Some concerns were also raised about the overall affordability of NFIP insurance for homeowners mapped into SFHAs. Representatives of the Property Casualty Insurers Association of America told us that remapping would likely cause some affordability concerns as more areas were moved into high-risk zones. However, they noted that remapping would likely not impact residents of rural areas any differently than it would remapped residents in urban areas. Similarly, two residents of Walsh County, North Dakota, told us that the rate increases associated with their recent map change had made it hard for them to afford to remain in their homes. Concerns were also raised about the affordability of insurance premiums and the impact on the housing market once the phasing out of subsidized rates established in the Biggert-Waters Act and the elimination of grandfathering provisions began, but some of these concerns may no longer be relevant, because the 2014 Act amended sections of the Biggert-Waters Act that would have resulted in rate increases for some residential policyholders. At the same time, local floodplain managers and residents of some selected communities said that NFIP insurance requirements associated with being in an SFHA could lead to positive outcomes for rural towns, including more mitigation actions and less development in the floodplain.
For instance, the local floodplain manager of Duplin County, North Carolina, said that the few homeowners in the SFHA who had not elevated their homes would probably choose to do so, since mitigation actions could lower premium rates. Similarly, a resident of Walsh County, North Dakota, who was concerned about rate increases after being mapped into an SFHA, said that he and some of his neighbors had already elevated their homes above the BFE or were considering elevating them. In addition, the local floodplain managers from Sutter County, California, and Duplin County, North Carolina, both stated that inhibiting development in SFHAs could help manage the adverse impacts of floods and help meet one of FEMA’s mitigation goals. We heard about areas in most of our selected communities where development had begun prior to a map update but was halted when the areas were remapped into SFHAs. For example, in Yolo County, California, and St. Landry Parish, Louisiana, we visited developments that had been partially built before being remapped into SFHAs. The developers in both areas said that the elevation requirements and probable decline in the value of the homes because of the flood insurance requirements would make further development economically infeasible. In both cases, the developers were not sure what would happen to the undeveloped land. We also heard from local floodplain managers in Duplin County, North Carolina, and Yolo and Sutter Counties in California that being mapped into an SFHA had halted development in parts of their counties. While the lack of development in SFHAs may be beneficial for floodplain management, the local floodplain managers and other stakeholders in Yolo and Sutter Counties in California noted the possible negative effects of being remapped into SFHAs—including changes in building requirements and insurance costs—on residents of small rural towns.
As with building requirements, members of the selected communities said that insurance costs associated with being remapped into an SFHA could contribute to the long-term economic decline of some small towns. For instance, the local floodplain manager in Yolo County, California, told us that the town with the unfinished development that we discussed previously would probably enter a long, slow decline, in part because of recent changes in building requirements and insurance costs resulting from being remapped into an SFHA. He added that not only was it no longer economically feasible to develop certain areas within the town’s borders, but also most of the town’s inhabitants were farm workers who could not afford flood insurance for their houses. However, he said that NFIP requirements were only one factor affecting the economic future of this town. In addition, he noted that changes to building requirements and insurance costs resulting from being remapped into an SFHA would not impact all small towns in the same way and that other towns in the community would prosper despite being remapped into SFHAs. An agricultural lender we spoke with in Yolo County agreed that being remapped into SFHAs could have long-term economic impacts on rural towns that depended on the agricultural economy, because farm businesses that were already operating on thin profit margins could be hurt by the additional cost of flood insurance. This is because farmers must accept the market price for their crops, and therefore it may be difficult to pass the price of flood insurance on to their customers, according to one farmer and one lender we spoke with in California. In addition, the local floodplain manager in Sutter County said that some small businesses that supported agriculture, such as a local tractor dealership, had already seen premium rate increases due to the Biggert-Waters Act eliminating their subsidies.
He believed that some of these small businesses would have to close because they would not be able to afford the full-risk rates for business structures. Like NFIP’s building requirements, the mandatory purchase requirement and changes in flood insurance premiums have had limited effects on the farmers we spoke with in the selected communities, except for some in California. Many of those we spoke with—including FEMA and USDA officials, representatives of national farming organizations and a floodplain management organization, all state floodplain managers, and one insurance industry organization—were not aware of farm businesses that had been adversely impacted by flood insurance costs. However, representatives of an agricultural floodplain management group, whose members were primarily from California’s Central Valley, said that their members were concerned that the cost of flood insurance on their structures in areas that had recently been remapped into SFHAs could make their businesses unsustainable. For example, according to a rice farmer in California, recent mapping updates placed his structures in an SFHA, raising his flood insurance premiums substantially. He said that his flood insurance premiums were now his third largest production expense. Three farmers in Yolo and Sutter Counties and the local floodplain manager in Sutter County were also concerned about rate increases they expected in the next year as NFIP moved toward full-risk rates. However, six farmers we spoke with in the California communities told us that their flood insurance premiums were a very small portion of their total production cost. In addition, some of the farmers from these communities chose to purchase flood insurance even though they were not required to do so and considered it another cost of doing business. According to state floodplain managers for most of the selected communities, many farmers were not required to insure their structures, for varying reasons.
For instance, in the two Louisiana communities we reviewed, all but one of the farmers with whom we spoke had farm structures only on parts of their land that lay outside SFHAs. None of these farmers voluntarily purchased flood insurance on these structures. In North Carolina, the floodplain manager said that many farms in the state were sponsored by large corporations that funded the construction of any necessary structures, and as a result farmers did not need loans that might include a mandatory purchase requirement. In contrast, the floodplain manager from California said that institutions that provided loans to farmers for structures, such as rice or prune dryers, might require flood insurance as a condition of the loan, even if they were not required to do so. (Among other requirements, buildings with two or more outside rigid walls and a fully secured roof that are affixed to a permanent site are considered insurable structures, according to NFIP regulations. 44 C.F.R. § 59.1.) One farmer told us that he had delayed planting a new crop because he lacked the cash to do so and did not want to take out a loan because he would have had to purchase flood insurance. He said that he expected it would take him 2 years to raise the needed money. Also, almost all (five of six) of the agricultural lenders with whom we spoke had concerns about requiring farmers to purchase flood insurance on farm structures that had little or no value, such as dilapidated sheds or chicken coops. These lenders told us that this issue was their most significant concern in implementing the mandatory purchase requirement for farm loans. These structures often provide little to no economic value to farmers, and lenders said that they would not require insurance on them in the absence of the mandatory purchase requirement because they did not need to use the structures as collateral. Two of the lenders told us that they had lost business because of this requirement.
Further, one lender told us that it was difficult to determine the replacement value of a building that the appraiser valued at zero or in some cases did not even include in the appraisal. One lender told us that in these situations their loan officers worked with the farmers to exclude the structures from the mortgage to avoid the mandatory purchase requirement. Local floodplain managers, farmers, and lenders identified several options to help farmers located in SFHAs manage NFIP requirements for building new or substantially improved structures and lower the cost of NFIP insurance. The most commonly cited option involved exempting agricultural structures from NFIP building requirements and the mandatory purchase requirement. Other options included charging insurance premiums based on an area’s historical flood losses, accounting for some level of protection by certain unaccredited levees, providing need-based assistance to farmers and rural residents, and increasing funding for mitigation efforts. However, FEMA officials, experts from national floodplain management and city and regional planning organizations, and academics told us that many of these options carried risks and might run counter to NFIP’s objectives. Exempt Agricultural Structures. The most commonly cited option from farmers and local lenders, mainly from California and Louisiana, involved exempting new agricultural structures and those needing substantial improvements from NFIP building requirements and the mandatory purchase requirement. Legislation has been proposed that would relax NFIP requirements for some agricultural structures, including the Agricultural Structures Building Act of 2013, which aims to allow farmers to repair, expand, and construct agricultural structures in SFHAs without elevating them.
In addition, one group has advocated the creation of a separate agricultural zone that would not require expensive elevation and dry flood-proofing but would require wet flood-proofing of certain structures. Some farmers from Sutter and Yolo Counties in California told us that they did not believe that the flood risk for their areas was high, since these counties have not experienced a major flood since the 1950s. These farmers said that they would be willing to assume all risks and opt out of federal disaster relief if they could expand and construct buildings without being required to follow NFIP building requirements. However, experts from national floodplain management organizations and academics told us that such exemptions were counter to the objectives of NFIP and carried significant risks. For example, one expert indicated that it might be difficult to differentiate agricultural structures from other nonresidential structures that may also store agricultural products (e.g., a corner store or a large industrial facility that may also store grain in an adjacent warehouse). He said that the tendency would be to classify any structure that could be remotely related to agriculture as an agricultural structure. Further, experts we spoke with indicated that such an exemption could set a precedent, leading others to ask for similar exemptions. FEMA officials shared these views, adding that FEMA had no legal authority to allow farmers or any other specific population group to opt out of disaster relief. According to FEMA officials, allowing farmers to assume all risks and not receive disaster relief would require further legislative changes to the Stafford Disaster Relief and Emergency Assistance Act. Furthermore, one of the primary goals of FEMA’s building requirements is to help reduce flood-related property damage.
Complying with FEMA’s building requirements would reduce flood-related losses and lower insurance premiums for compliant structures, according to FEMA officials. They added that this reduction in turn may help attract broader participation in the program. Exempting structures may defeat this goal and encourage farmers to build noncompliant structures in high-risk areas that may inadvertently cause damage to nearby communities, according to officials. For example, agricultural structures that do not adhere to building requirements—that is, that are not elevated or flood-proofed—could be washed downstream, creating blockages that could cause additional flooding in communities there. Both FEMA and the experts told us that while farmers might view their choices as affecting only themselves, flood mitigation needed to be considered holistically from the perspective of risks to the larger community. Further, experts indicated that exempting structures may reinforce farmers’ potential misperceptions of their flood risks. Charge Insurance Premiums Based on Historical Losses to Flooding. Some farmers, rural residents, state and local floodplain managers, and other organizations have suggested creating a variable premium rate structure based on historical flood risks in different areas. For example, some farmers from California told us that they should pay lower flood insurance premiums than others residing in areas that the farmers consider more flood-prone, such as coastal areas, as these farmers had not experienced flooding since the 1950s and did not perceive their flood risks as significant. However, according to FEMA, premium rates are determined by flood zone, among other factors, and policyholders in high-risk coastal areas (V zones) already pay higher rates than policyholders in other zones. Further, FEMA stated that flood maps already account for historical floods, in addition to other factors.
According to the national floodplain management expert we spoke with, some states that had so far collected less in claims from NFIP than other states might welcome this option. However, the expert also noted that people tended to underestimate their long-term flood risks. Exempt Low-Value Agricultural Structures. As mentioned earlier, lenders from four of the selected communities suggested giving lenders the flexibility to decide whether a farmer needed flood insurance on low-value agricultural structures. Some lenders told us that they did not need to use the low-value structures as collateral. Experts indicated that this option could be further explored, provided that independent third parties appraised the structures and confirmed their values. FEMA officials also noted that federal financial regulators, not the agency, set the standards for insurance requirements for low-value structures and that FEMA did not have the authority to dictate to lenders what they could do. According to FEMA, in some instances lenders may require insurance even though it may not be required under the law. Therefore, farmers may face the prospect of paying for flood insurance coverage on properties that have low value. Account for Some Protection Provided by Unaccredited Levees. According to a floodplain manager from Sutter County, California, and others, unaccredited levees still provide some protection, and insurance premiums should reflect this fact. The experts we spoke with said that this option would help adjust insurance rates and provide more flexibility for policyholders in adhering to NFIP building requirements and mandatory purchase requirements. FEMA recognizes that unaccredited levee systems may still provide some measure of protection against flooding and has developed Levee Analysis and Mapping Procedures (LAMP) to account more precisely for the level of protection levees provide when mapping flood risk.
LAMP’s goal is not to reduce insurance rates but to use the best scientific methodologies to more accurately determine flood risks and help ensure that premiums are based on the most accurate determination of flood risk. For example, LAMP may determine that an area around the levee should be in zone D (a non-SFHA area with undetermined risks). The levee may still technically not be accredited, but structures located in zone D have no mandatory purchase requirement or building requirements because the zone is not considered an SFHA. Policyholders in this zone would not be required by law to purchase insurance, but FEMA strongly advises that they do. However, some experts said that determining the safety of levees was difficult. FEMA officials noted that while LAMP allowed for a more detailed analysis of unaccredited levees, this analysis might not always result in lower BFEs, smaller SFHAs, or reduced NFIP premiums. FEMA and other experts emphasized that levees were never 100 percent safe and that communities needed to acknowledge the possibility that any levee—including those that are accredited to provide protection for a 1 percent annual event—could fail. Provide Need-Based Assistance. Some farmers also cited need-based assistance as an option to help those who could not afford NFIP premiums to meet the insurance requirements. In general, stakeholders agreed that this option warranted further exploration, since flood insurance has been an affordability issue for many people. (We have previously identified targeted assistance or subsidies based on the financial need of policyholders as an option to consider to reduce the financial impact of subsidies on NFIP. See GAO, Flood Insurance: More Information Needed on Subsidized Properties, GAO-13-607 (Washington, D.C.: July 3, 2013).) However, according to FEMA officials, the agency currently does not have the statutory authority or resources to provide need-based and targeted assistance to help property owners with NFIP insurance premiums.
As required by the Biggert-Waters Act and the 2014 Act, the National Academy of Sciences is studying the issue of affordability but has not yet produced its report. FEMA officials said that it would be premature to comment on how need-based assistance might operate. FEMA supports a variety of flood mitigation activities that are designed to reduce the risk of flood damage and the financial exposure of NFIP. These activities, which are mostly implemented at the state and local levels, include hazard mitigation planning; the adoption and enforcement of floodplain management regulations and building codes; and the use of hazard control structures such as levees, dams, and floodwalls or natural protective features such as wetlands and dunes. Additionally, property-level mitigation options include elevating a building to or above the area’s base flood elevation, relocating the building to an area of less flood risk, or purchasing and demolishing the building and turning the property into green space. However, criteria for mitigation funding tend to favor large communities with high population densities, according to FEMA officials. The officials indicated that, in general, agricultural areas and rural communities may be unlikely to meet these criteria and thus may have difficulty obtaining mitigation funding. A number of rural and agricultural areas have recently been mapped into SFHAs. Farmers with new or substantially improved structures in these areas must now comply with NFIP building requirements, and farmers in some locales—specifically counties that we visited in California—face challenges meeting them. Based on information from FEMA, complying with NFIP’s building requirements may be a broader problem applicable to agricultural communities that have vast floodplains with deep flood depths similar to those in California. The two options for complying with the program’s building requirements—elevating and dry flood-proofing—are not always feasible for certain structures in these types of locations.
For example, farmers in areas with deep flood depths cannot realistically elevate large structures to meet FEMA requirements and may not be able to dry flood-proof all structures. With regard to wet flood-proofing for some nonresidential structures, including certain agricultural structures, FEMA last updated its guidance for granting such variances in 1993. Although FEMA typically updates guidance as needed and acknowledges the challenges some farmers face, it has not updated its guidance with alternatives for complying with building requirements in over 20 years, or expanded it to reflect changes in the agricultural industry. Updated and detailed guidance that provides alternative mitigation methods for protecting agricultural structures from flooding and takes into account relevant changes to the agricultural industry would be an important step in assisting farmers in identifying feasible alternatives for complying with building requirements in expansive floodplains with deep flood depths. As FEMA determines the scope of its efforts to revise its existing guidance, we recommend that the Secretary of the Department of Homeland Security (DHS) direct the Administrator of FEMA to update existing guidance to include additional information on and options for mitigating the risk of flood damage to agricultural structures to reflect recent farming developments and structural needs in vast and deep floodplains. We provided a draft of this report to DHS for its review and comment. DHS provided written comments that are presented in appendix IV. In its comments, DHS concurred with our recommendation to update existing guidance to include additional information on and options for mitigating the risk of flood damage to agricultural structures to reflect recent farming developments and structural needs in vast, deep floodplains. In particular, the letter noted that FEMA recognizes that agriculture is a good use of the floodplain.
Further, changes in the agricultural industry and the diversity of agricultural structures are important to recognize in future guidance. FEMA stated that it is working to determine the best approach to update its guidance but has not yet determined a completion date. FEMA also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly release the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to FEMA and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. This report discusses (1) the effects on farmers and rural residents of the National Flood Insurance Program’s (NFIP) building requirements for agricultural and residential structures, (2) the effects of the mandatory purchase requirement and changes in premium rates, and (3) options that have been proposed to address any issues resulting from changes to NFIP requirements and stakeholders’ views on these proposals. We focused our review on riverine rural and agricultural floodplains and excluded coastal areas. For all objectives, we analyzed relevant laws and statutory requirements, such as the mandatory purchase requirement for properties located in SFHAs, as well as Federal Emergency Management Agency (FEMA) regulations and policies, including building requirements for properties located in special flood hazard areas (SFHA), flood mapping modernization efforts, and the analysis and mapping procedures for unaccredited levees.
We reviewed the Biggert-Waters Flood Insurance Reform Act of 2012 (the Biggert-Waters Act), including provisions to phase out some premium subsidies. We also reviewed provisions of the Homeowner Flood Insurance Affordability Act of 2014 (2014 Act) that repealed or altered portions of the Biggert-Waters Act. We identified and reviewed research on the effects of NFIP requirements on farmers and rural residents. (Levees are man-made structures, usually earthen embankments, designed and constructed in accordance with sound engineering practices to contain, control, or divert the flow of water to provide protection from temporary flooding. 44 C.F.R. § 59.1. Levees that are accredited by FEMA can result in a community being mapped in a flood zone with a lower risk than it would be without the accredited levee.) We interviewed representatives of national floodplain management and city and regional planning organizations (i.e., American Planning Association, Association of State Floodplain Managers, and National Association of Flood & Stormwater Management Agencies). We interviewed academics in the areas of floodplain management, officials from FEMA’s Mapping, Insurance, Building Science and Flood Management Branches, and officials from the Department of Agriculture’s (USDA) Economic Research Service and Rural Development branches. In addition, we interviewed representatives of the Agricultural Floodplain Management Alliance (AFMA), whose members are primarily in California, and representatives of the insurance industry. To identify the locations of rural and agricultural areas in SFHAs, we distinguished rural and agricultural land areas from urban land areas. FEMA does not make such a distinction for the purposes of administering NFIP. To make these distinctions, we first analyzed data from the U.S. Census Bureau (2010) and USDA’s Atlas of Rural and Small Town America (2007) to determine the rural and agricultural areas within the United States.
We defined rural areas as areas that were not considered urbanized areas or urban clusters using Census data, and agricultural areas as counties where 50 percent or more of the land area was used for farming. We considered all other areas as urban (see fig. 5). We reviewed information available online from the Census and USDA web sites on the data quality assurance processes for these data. We concluded that the Census and USDA data that we used were sufficiently reliable for the purpose of using them as a base for this determination. We provided FEMA the data on rural and agricultural areas described above. FEMA mapping specialists combined the data we provided with FEMA’s flood map data. For the rural and agricultural areas with maps that had been converted to a digital format as of February 2014, FEMA mapped the SFHAs. For the rural and agricultural areas whose flood maps had not yet been converted to digital format as of February 2014, FEMA showed these areas on the map. FEMA excluded areas with coastal flood zones from the map. To determine the number and percentage of policyholders located in rural and agricultural riverine SFHAs, we determined which ZIP codes were in the rural, agricultural, and urban areas. If 50 percent or more of the land area of a ZIP code was within a rural or agricultural area, we considered it a rural or agricultural ZIP code. We analyzed FEMA’s policy data as of September 30, 2013 (the most recently available fiscal year-end data), to determine how many policies were zoned in an SFHA in the ZIP codes we deemed rural or agricultural using the method described above. We excluded policies with a coastal flood zone designation because the scope of this study was riverine flooding.
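The 50-percent land-area rule used above to classify ZIP codes can be expressed as a short function. This is a simplified sketch with hypothetical area figures; the actual analysis relied on GIS overlays of Census, USDA, and FEMA data:

```python
def classify_zip(rural_ag_area, total_area):
    """Classify a ZIP code as rural/agricultural when 50 percent or more of
    its land area falls within rural or agricultural territory; otherwise
    treat it as urban."""
    if total_area <= 0:
        raise ValueError("total land area must be positive")
    return "rural/agricultural" if rural_ag_area / total_area >= 0.5 else "urban"

# Hypothetical ZIP codes: (land area overlapping rural/agricultural territory,
# total land area), in square miles.
zips = {"11111": (80.0, 100.0), "22222": (30.0, 100.0)}
labels = {z: classify_zip(overlap, total) for z, (overlap, total) in zips.items()}
print(labels)  # {'11111': 'rural/agricultural', '22222': 'urban'}
```

Policies were then counted against these labels: a policy in an SFHA whose ZIP code carried the rural/agricultural label was included in the riverine rural and agricultural tally.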
To determine the percentage of the population mapped into or out of SFHAs because of FEMA’s Map Modernization initiative, we analyzed available FEMA data on the number of people that received a map change at the Census Block Group level under this initiative. We determined which Census Block Groups were in rural and agricultural ZIP codes and compared the number of people that received a change in SFHA designation in those Census Block Groups to population data from the 2010 Census, which was also provided by FEMA. We reviewed documentation on how the data were collected and interviewed a FEMA official on the usability of the data. We determined these data were sufficiently reliable for our purposes. To assess any effects of NFIP’s building requirements and the mandatory purchase requirement on farmers and rural residents, we conducted case studies in eight selected NFIP communities. We selected these communities using the following criteria: crop and livestock production requiring nonresidential farm structures or nearby on-farm processing (e.g., rice, corn, soybeans, cotton, sugar beets, hogs, chickens, and cattle (dairy)); some agricultural land located in SFHAs that was prone to flooding; and geographic variation (e.g., East Coast, West Coast, the South, and the Midwest) of the riverine agricultural areas located in SFHAs across the country. We selected California, Louisiana, North Carolina, and North Dakota as key states. We then interviewed the four state floodplain managers from these states to obtain their views on any effects NFIP building requirements and the mandatory purchase requirement have had or could have on farmers and rural residents. In addition, we solicited their input, as well as additional input from three state agricultural extension specialists in California, Louisiana, and North Carolina, in identifying two additional communities in their states that met our criteria.
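The Map Modernization population comparison described above amounts to a ratio: people whose SFHA designation changed in a set of Census Block Groups, divided by the total 2010 Census population of those groups. A minimal sketch with invented block-group figures:

```python
def percent_remapped(remapped_by_group, population_by_group):
    """Percentage of the population whose SFHA designation changed, summed
    across a set of Census Block Groups (e.g., those in rural and
    agricultural ZIP codes)."""
    total_pop = sum(population_by_group.values())
    if total_pop == 0:
        raise ValueError("no population in the selected block groups")
    return 100.0 * sum(remapped_by_group.values()) / total_pop

# Hypothetical block groups located in rural/agricultural ZIP codes.
remapped = {"bg1": 120, "bg2": 30}      # people with an SFHA designation change
population = {"bg1": 1500, "bg2": 1500}  # 2010 Census population
print(percent_remapped(remapped, population))  # 5.0
```

The same calculation applies whether people were mapped into or out of SFHAs; the two directions would simply be tallied separately.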
The eight selected communities were: Sutter County, California; Yolo County, California; Rapides Parish, Louisiana; St. Landry Parish, Louisiana; Duplin County, North Carolina; Tyrrell County, North Carolina; Cass County, North Dakota; and Walsh County, North Dakota. We interviewed eight local floodplain managers and five agricultural extension service officials in the suggested communities to obtain their views on the effects of NFIP on farmers and rural residents. We also requested the help of the floodplain managers and extension personnel in identifying local farmers and rural residents with properties located in SFHAs. The local officials helped us identify a total of 24 farmers and 10 rural residents from the selected communities. Although we provided the officials with guidance on the characteristics of persons to be identified, we did not independently verify that all of our criteria were met and acknowledge that some selection bias may be present, since we relied on local officials to select the farmers to participate in our study. We contacted the people identified in each community. We conducted structured interviews with all farmers and rural residents who, according to local officials, had been remapped into SFHAs and could provide firsthand perspectives on any challenges they faced in complying with NFIP’s building requirements and the mandatory purchase requirement. We also discussed identified options to address these challenges. We spoke with some farmers and rural residents who had been remapped into SFHAs after their community’s initial flood map had been established and some farmers and rural residents who were not currently mapped into an SFHA. We also spoke with six agricultural lenders about the effect insurance requirements had on farmers and rural residents and with two developers about the effects of the requirements on rural communities.
We then summarized all interviews and analyzed them by category of questions: NFIP building requirements, the mandatory purchase requirement, effects on the community, and options to address these challenges. Table 2 shows, for each of the eight selected communities, the number of farmers and rural residents with whom we spoke and the major crops produced by those farmers. We could not obtain the same number of interviews in each community because the local floodplain managers and agricultural extension specialists who provided referrals gave us different numbers and types of contacts in each of the selected communities. In addition, the relationships between the local floodplain manager and the contacts sometimes differed, and in some cases a relationship may have affected whether we could obtain an interview with a given person. For example, some successful contacts served on community water management task forces with the local floodplain manager. We visited California and Louisiana and interviewed the local farmers and residents in person. For the other two states (North Carolina and North Dakota), we interviewed the farmers and rural residents by telephone. The purpose of our extensive work in these selected communities was to illustrate and more fully understand farmers’ and residents’ experiences in dealing with NFIP’s requirements. Our individual interviews were not designed to demonstrate the extent of an issue as a survey might, and we determined that personal contact would prove more reliable in completing interviews with this rural population. In addition, through individual interviews we were able to obtain a more complete understanding of each person’s perspective, the reasons for their opinions or attitudes on specific topics, and their insights into concerns related to NFIP requirements, all of which supplemented the information provided by state and local NFIP officials.
The combination of our study design, targeted research questions, multiple sources of information, selected representative communities, and systematic analyses supports greater generalizability of our findings. Nevertheless, due to the differing nature of communities and their responses to the NFIP requirements, a possibility exists that had we selected different communities we might have found some different results. We believe that the patterns and consistency of our findings within and across our selected cases support the widespread applicability of our findings. To identify options to address any challenges farmers and rural residents faced in complying with NFIP’s building requirements and the mandatory purchase requirement, we gathered suggestions from the local NFIP administrators, local lenders, farmers, and rural residents that we met with during our case studies. We then asked experts from flood management and city and regional planning organizations, cognizant academics, and officials from FEMA to comment on the ideas that we gathered and summarized their views. To determine historical NFIP premium and claims amounts, we analyzed annual NFIP premium data for years 1994-1998 and 2000-2013 and the NFIP claims database as of September 30, 2013 (the most recently available fiscal year-end data). We adjusted these premium and claim amounts for inflation to report them in constant 2014 dollars. We conducted electronic testing, including checks for outliers and missing data. We also interviewed FEMA officials on the usability and reliability of the data and reviewed our past assessments of these data. We determined these data were sufficiently reliable for our purposes. We determined the premiums and claims attributable to rural and agricultural areas and to urban areas using the ZIP codes for rural, agricultural, and urban areas that we identified using the method described above.
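The inflation adjustment to constant 2014 dollars amounts to scaling each nominal amount by a price-index ratio. The sketch below illustrates the calculation; the index values are hypothetical placeholders, not the deflator series actually used in the analysis.

```python
# Sketch of restating nominal dollar amounts in constant 2014 dollars.
# PRICE_INDEX holds hypothetical placeholder values; the actual analysis
# used an inflation index not reproduced in this appendix.

PRICE_INDEX = {1994: 148.2, 2013: 233.0, 2014: 236.7}

def to_2014_dollars(amount: float, year: int) -> float:
    """Scale a nominal amount from `year` into constant 2014 dollars."""
    return amount * PRICE_INDEX[2014] / PRICE_INDEX[year]

# $1 million of 1994 premiums, restated in 2014 dollars
print(round(to_2014_dollars(1_000_000, 1994)))
```

Applying this scaling uniformly to premiums and claims keeps the claims-to-premiums comparisons across years on a consistent dollar basis.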
We used 2007 agricultural data and 2010 rural and urban data as the base years for determining whether a ZIP code area was rural, agricultural, or urban. As a result, we may under-represent the premiums and claims attributable to the rural and agricultural areas for earlier years because urban areas have tended to grow larger over time. Data were not available for 1999 and the years prior to 1994 that would allow us to determine premium amounts comparable to those we reported for 1994 through 2013. FEMA told us that the available premium data for 1999 and years prior to 1994 were for all policies that had been in place during the year, as opposed to the policies in force at a specific point in time of each year. Using these data would have resulted in overstated premiums. Also, FEMA told us that in some of the earlier years ZIP codes were not reported consistently by the insurance companies. In some years, ZIP codes were not available at all (1978–1981, 1983, and 1992). We analyzed FEMA data on National Flood Insurance Program (NFIP) premiums and claims from 1994 through 2013 (except 1999) to determine the claims FEMA paid to, and the premiums FEMA collected from, rural and agricultural riverine areas and urban riverine areas. We also analyzed the total premiums and claims for rural and agricultural areas and urban areas on a state-by-state basis for this time period. Overall, our analysis of premiums and claims indicates that in both rural and agricultural and urban areas nationwide, policyholders have historically received more in claims than they have paid in premiums. However, flooding is a highly variable event, with losses differing widely from year to year. Therefore, analysis of historical data can lead to unreliable conclusions about the actual flood risk faced by a given state or area. Also, catastrophic events greatly impact the long-term aggregate experience of a state.
While the difference between premiums and claims in rural and agricultural and urban areas is not a meaningful measure of whether policyholders are paying premiums commensurate with their risk (NFIP premiums are intended to cover operating expenses as well as losses, among other reasons), it provides additional descriptive information. Table 3 shows NFIP premiums and claims of policyholders in rural and agricultural areas from 1994 through 2013 (except 1999). This information provides some indication of the trends over this period for rural areas. Similarly, table 4 provides 1994-2013 (except 1999) premium and claims data for urban areas. Table 5 includes available premium and claims data by year in the rural and agricultural riverine areas of each state. Because comparable 1999 premium data were not available, the ratio of claims to premiums for some states may be distorted. In 1999, some states on the east coast experienced large losses from Hurricane Floyd, likely resulting in high claim amounts. According to FEMA, for example, NFIP policyholders in the state of North Carolina received over $141 million in claims between September 1999 and June 2000. If the premiums and claims for 1999 were included, the ratio of claims to premiums for states affected by Hurricane Floyd could have been larger. Table 6 provides the same premium and claims information for urban areas by state. Additional study would be required to determine whether policyholders in some states with lower losses are paying higher premiums than is appropriate for their risk while others are paying too little. For example, our analysis did not control for differences in the type of policy purchased, such as the mix of certain property types across states and insurance coverage amounts, which could affect both premiums and claims. In addition, we did not control for differences in the mix of subsidized and full-risk policies or the impact of subsidized premiums on our results.
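The claims-to-premiums ratios underlying tables 5 and 6 reduce to summing premiums and claims by state and dividing. The sketch below uses made-up records; the actual figures come from FEMA's premium and claims files (with 1999 premiums unavailable).

```python
# Sketch of the state-level claims-to-premiums ratio behind tables 5 and 6.
# The records below are hypothetical; the actual figures come from FEMA's
# premium and claims files.

from collections import defaultdict

records = [  # (state, premiums, claims), hypothetical amounts
    ("NC", 10_000_000, 14_100_000),
    ("NC", 5_000_000, 2_000_000),
    ("ND", 3_000_000, 6_000_000),
]

premiums = defaultdict(float)
claims = defaultdict(float)
for state, prem, clm in records:
    premiums[state] += prem
    claims[state] += clm

for state in sorted(premiums):
    print(f"{state}: claims/premiums = {claims[state] / premiums[state]:.2f}")
```

A ratio above 1.0 means policyholders in that state received more in claims than they paid in premiums over the period, which, as noted above, is descriptive rather than a measure of risk-appropriate pricing.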
As we have reported previously, some states have a relatively large number or proportion of subsidized properties that generally would lead to higher expected claims relative to premiums. The limitations in setting full-risk rates that we discussed in the prior report could result in systematic mispricing relative to risk that becomes apparent only over long periods. Further, the analysis conducted for this report included both subsidized and full-risk properties, and so the results should be considered in this context. The following are some basic characteristics of the selected communities: Sutter County, California; Yolo County, California; Rapides Parish, Louisiana; St. Landry Parish, Louisiana; Duplin County, North Carolina; Tyrrell County, North Carolina; Cass County, North Dakota; and Walsh County, North Dakota. Tables 7 to 14 show, for each individual community, the total number of National Flood Insurance Program (NFIP) policies, the number of policies in a special flood hazard area (SFHA), the number of miles of levees in the county, and the top agricultural commodities in the county. Figures 6 to 11 show FEMA’s flood maps for the counties, when available. In addition to the contact named above, Triana McNeil and Jill Naamane (Assistant Directors); Simin Ho (Analyst in Charge); Emily Chalmers; William Chatlos; Barbara El Osta; Melissa Kornblau; John Mingus; Marc Molino; and Ruben Montes De Oca made key contributions to this report.

NFIP helps protect property in high-risk floodplains by, among other things, requiring communities that participate in the program to adopt floodplain management regulations, including building requirements for new or substantially improved structures such as elevating, dry flood-proofing, or wet flood-proofing structures. GAO was asked to evaluate the possible effects of NFIP, including its building requirements, on farmers in riverine areas that have a high risk of flooding.
This report examines, among other things, the effects of building requirements on farmers in high-risk areas and options that could help address any challenges farmers face. To do this work, GAO analyzed laws, regulations, and FEMA policy and claims data; interviewed 12 state and local floodplain managers, 24 farmers, and 6 lenders in 8 selected communities in California, Louisiana, North Carolina, and North Dakota (selection based on geographic diversity, presence of high-risk flood areas, and type of farming that required on-site structures); and interviewed flood management and planning experts and FEMA officials. The effects of the National Flood Insurance Program's (NFIP) building requirements for elevating or flood-proofing agricultural structures in high-risk areas varied across selected communities, according to interviews GAO conducted with floodplain managers and farmers. Specifically: Floodplain managers and 12 farmers in selected rural communities with whom GAO spoke in Louisiana, North Carolina, and North Dakota generally were not concerned about these requirements. Most of these farmers told GAO that they had land outside the high-risk areas where they could build or expand their structures, or they could elevate their structures relatively easily. Floodplain managers in selected California communities told GAO that farmers in their communities had been adversely affected by the building requirements. They said that most farm land was in high-risk areas and elevation of structures would be difficult and costly—due to the relatively deep flood depths, structures would be required to be elevated up to 15 feet to comply with the building requirements. They also indicated that some structures were difficult to make watertight below the projected flood level (dry flood-proofing). 
According to a California floodplain manager and several farmers with whom GAO spoke, the farmers who were adversely affected by the building requirements have had to work around outdated Federal Emergency Management Agency (FEMA) guidance that does not fully address the challenges of vast and relatively deep floodplains or reflect industry changes. For example, the 1993 guidance from FEMA allowed an alternative flood-proofing technique (wet flood-proofing) that permits water to flow through certain agricultural structures in expansive high-risk areas. However, farmers in the California communities told GAO this was not a viable option because pests might enter openings and contaminate crops stored inside. FEMA typically updates guidance as needed but acknowledged the need for additional guidance that covers all of the different types of agricultural structures and reflects recent developments in the size and scale of farm operations, including supporting structures that were expensive to build and replace. Additional and more comprehensive guidance would allow FEMA to better respond to recent developments and structural needs in vast and deep floodplains. Some local floodplain managers, farmers, and lenders from the selected communities identified options to help farmers manage the challenges of building or expanding agricultural structures in high-risk areas, but many of the options would entail certain risks and may run counter to the objectives of NFIP. For example, one commonly cited option calls for exempting agricultural structures from building requirements, with farmers assuming all of the flood risk and opting out of federal disaster relief. Both FEMA and the experts noted such an exemption could set a precedent, leading others to ask for similar exemptions. Further, FEMA officials stated that the agency had no legal authority to allow farmers or any other group to opt out of disaster relief. 
The Administrator of FEMA should update existing guidance on mitigating the risk of flood damage to agricultural structures to include additional information that reflects recent farming developments and structural needs in vast and deep floodplains. FEMA agreed with the recommendation.
On average, about 3 people have died and about 8 people have been injured each year over the last 10 years in natural gas transmission pipeline incidents. The number of incidents has increased from 77 in 1996 to 122 and 200 in 2004 and 2005, respectively, mostly reflecting more frequent occurrence of property damage. Much of this increase may be attributed to increases in the price of gas (which has the effect of lowering the reporting threshold) over the past several years and to damage as a result of hurricanes in 2005. As a means of enhancing the security and safety of gas pipelines, the 2002 act included an integrity management structure that, in part, requires that operators of gas transmission pipelines systematically assess for safety risks the portions of their pipelines located in highly populated or frequently used areas, such as parks. Safety risks include corrosion, welding defects and failures, third-party damage (e.g., from excavation equipment), land movement, and incorrect operation. The act requires that operators perform these assessments (called baseline assessments) on half of the pipeline mileage in highly populated or frequented areas by December 2007 and the remainder by December 2012. Those pipeline segments potentially facing the greatest risks are to be assessed first. Operators must then repair or replace defective pipelines. Risk-based assessments are seen by many as having a greater potential to improve safety than focusing on compliance with safety standards regardless of the threat to pipeline safety. The act further provides that pipeline segments in highly populated or frequented areas must be reassessed for safety risks at least every 7 years. PHMSA’s regulations implemented the act by requiring that operators reassess their pipelines for corrosion damage every 7 years, using an assessment technique called confirmatory direct assessment. 
Under these regulations, and consistent with industry national consensus standards, operators must also reassess their pipeline segments for any safety risk at least every 5, 10, 15, or 20 years, depending on the pressure under which the pipeline segments are operated and the condition of the pipeline. There are about 900 operators of about 300,000 miles of gas transmission and gathering pipelines in the United States. As of December 2005, according to PHMSA, 429 of these operators reported that about 20,000 miles of their pipelines lie in highly populated or frequented areas (about 7 percent of all transmission pipeline miles). Operators reported that they had as many as about 1,600 miles and as few as 0.02 miles of pipeline in these areas. PHMSA, within the Department of Transportation, administers the national regulatory program to ensure the safe transportation of gas and hazardous liquids (e.g., oil, gasoline, and anhydrous ammonia) by pipeline. The agency attempts to ensure the safe operation of pipelines through regulation, national consensus standards, research, education (e.g., to prevent excavation-related damage), oversight of the industry through inspections, and enforcement when safety problems are found. PHMSA employs about 165 staff in its pipeline safety program, about half of whom are pipeline inspectors who inspect gas and hazardous liquid pipelines under integrity management and other more traditional compliance programs. Nine PHMSA inspectors are currently devoted to the gas integrity management program. In addition, PHMSA is assisted by inspectors in 48 states, the District of Columbia, and Puerto Rico. While the gas integrity management program is still being implemented, early indications suggest that it enhances public safety by supplementing existing safety standards with risk-based management principles. 
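The reassessment timing rules described above (PHMSA's 7-year corrosion-focused check via confirmatory direct assessment, and the consensus standard's 5-, 10-, 15-, or 20-year maximum intervals tied to operating pressure) can be captured in a small lookup. The stress-band labels below are placeholders, not the standard's actual operating-stress thresholds.

```python
# Illustrative lookup of the reassessment intervals described above. The
# stress-band labels are placeholders; the consensus standard ties the 5-,
# 10-, 15-, or 20-year maximum to specific operating-stress thresholds.

MAX_INTERVAL_YEARS = {"highest": 5, "high": 10, "moderate": 15, "low": 20}
CORROSION_CHECK_YEARS = 7  # PHMSA's confirmatory direct assessment interval

def reassessment_due(last_assessment_year: int, stress_band: str) -> dict:
    """Return the years by which the next full reassessment and the next
    corrosion-focused check are due for a pipeline segment."""
    return {
        "full_reassessment": last_assessment_year + MAX_INTERVAL_YEARS[stress_band],
        "corrosion_check": last_assessment_year + CORROSION_CHECK_YEARS,
    }

print(reassessment_due(2005, "low"))      # full reassessment 2025, corrosion check 2012
print(reassessment_due(2005, "highest"))  # full reassessment 2010, corrosion check 2012
```

As the example suggests, for lower-stress segments the 7-year corrosion check comes due well before the consensus standard's full reassessment, which is the tension operators raise later in this statement.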
Prior to the integrity management program, there were, and still are, minimum safety standards that operators must meet for the design, construction, testing, inspection, operation, and maintenance of gas transmission pipelines. These standards apply equally to all pipelines and provide the public with a basic level of protection from pipeline failures. However, minimum standards do not require operators to identify and address risks that are specific to their pipelines nor do they require operators to assess the integrity of their pipelines. While some operators did assess the integrity of some of their pipelines, others did not. Some pipelines have been in operation for 40 or more years with no assessment. The gas integrity management requirements, finalized in 2004, go beyond the existing safety standards by requiring operators, regardless of size, to routinely assess pipelines in highly populated or frequented areas for specific threats, take action to mitigate the threats, and document management practices and decision-making processes. Representatives from the pipeline industry, safety advocate groups, and operators we have contacted agree that the integrity management program enhances public safety. Some operators noted that, although the program’s requirements can be costly and time consuming to implement, the benefits to date are worth the cost. The primary benefit identified was the comprehensive knowledge the program requires all operators to have of their pipeline systems. For example, under integrity management, operators must gather and analyze information about their pipelines in highly populated or frequented areas to get a complete picture of the condition of those lines. This includes developing maps of the pipeline system and information on corrosion protection, exposed pipeline, threats from excavation or other third-party damage, and the installation of automatic shut off valves. 
Another benefit cited was improved communications within the company. Investigations of pipeline incidents have shown that, in some cases, an operator possessed information that could have prevented an incident but had not been shared with employees who needed it most. Integrity management requires operators to pull together pipeline data from various sources within the company to identify threats to the pipelines, leading to more interaction among different departments within pipeline companies. Finally, integrity management focuses operator resources in those areas where an incident could have the greatest impact. While industry and operator representatives have provided examples of the early benefits of integrity management, operators must report semiannually on performance measures that should quantitatively demonstrate the impact of the program over time. These measures include the total mileage of pipelines and the mileage of pipelines assessed in highly populated or frequented areas, as well as the number of repairs made and leaks, failures, and incidents identified in these areas. In the 2 years that operators have reported the results of integrity management, they have assessed about 6,700 miles of their 20,000 miles of pipelines located in highly populated or frequented areas and they have completed 338 repairs that were immediately required and another 998 repairs that were less urgent. While it is not possible to determine how many of these needed repairs would have been identified without integrity management, it is clear that the requirement to routinely assess pipelines enables operators to identify problems that may otherwise go undetected. For example, one operator told us that it had complied with all the minimum safety standards on its pipeline, and the pipeline appeared to be in good condition.
The operator then assessed the condition of a segment of the pipeline under its integrity management program and found a serious problem, causing it to shut the line down for immediate repair. One of the concerns most frequently cited by the 25 operators we contacted was uncertainty about the level of documentation needed to support their gas integrity management programs. PHMSA requires operators to develop an integrity management program and provides a broad framework for the elements that should be included in the program. Each operator must develop and document specific policies and procedures to demonstrate its commitment to compliance and implementation of the integrity management requirements. In addition, an operator must document any decisions made related to integrity management. For example, an operator must document how it identified the threats to its pipeline in highly populated or frequented areas and who was involved in identifying the threats, their qualifications, and the data they used. While the operators we contacted did not disagree with the need to document their policies and procedures, some said that the detailed documentation required for every decision is very time-consuming and does not contribute to the safety of pipeline operations. Moreover, they are concerned that they will not know if they have enough documentation until their program has been inspected. After conducting 11 inspections, PHMSA found that, while operators are doing well in conducting assessments and making the identified repairs, they are having difficulty overall in the development and documentation of their management processes. Another concern raised by most of the operators is the requirement to reassess their pipelines at least every 7 years. I will discuss the 7-year reassessment requirement in more detail shortly.
As part of our assessment of the integrity management program, we are also examining how PHMSA and state pipeline agencies plan to oversee operator implementation of the program. To help federal and state inspectors prepare for and conduct integrity management inspections, PHMSA developed detailed inspection protocols tied to the integrity management regulations and a series of training courses covering the protocols and other relevant topics, such as corrosion and in-line inspection. Furthermore, in response to our 2002 recommendation, PHMSA has been working to improve its communication with states about their role in overseeing integrity management programs. For example, PHMSA’s efforts include (1) inviting state inspectors to attend federal inspections, (2) creating a website containing inspection information, and (3) providing a series of updates through the National Association of Pipeline Safety Representatives. I am pleased to report that preliminary results from an ongoing survey of state pipeline agencies (with more than half the states responding thus far) show that the majority of states that reported believe that the communication from PHMSA has been very or extremely useful in helping them understand their role and responsibilities in conducting integrity management inspections. Nationwide, pipeline operators reported to PHMSA that they have found, on average, about one problem requiring immediate repair or replacement for every 20 miles of pipeline assessed in highly populated or frequented areas. Operators we contacted recognize the benefits of reassessments; however, almost all would prefer following the industry national consensus standards that use safety risk, rather than a prescribed term, for determining when to reassess their pipelines. 
Most operators expect to be able to acquire the services and tools needed to conduct these reassessments, including during an overlap period when they are starting to reassess pipeline segments while completing baseline assessments. As discussed earlier, as of December 2005, operators nationwide have notified PHMSA of 338 problems that required immediate repair in the 6,700 miles they have assessed—about one immediate repair required for every 20 miles of pipeline assessed in highly populated or frequented areas. The number of immediate repairs may be due, in part, to some operators systematically assessing their pipelines for the first time as a result of the 2002 act. Of the 25 transmission operators and local distribution companies that we contacted, most told us that they found few safety problems that required reducing pressure and performing immediate repairs during baseline assessments covering (1) about 3,000 miles of pipeline in highly populated or frequented areas and (2) about 35,000 miles outside of these areas. (See fig. 1.) Most operators reported finding pipelines in good condition and free of major defects, requiring only minor repairs or recoating. A few operators found more than 10 immediate repairs. Operators nonetheless found these assessments valuable in determining the condition of their pipelines and finding damage. Most of the operators told us that, if the 7-year reassessment requirement was not in place, they would respond to the conditions that they identified during baseline assessments by reassessing their pipelines every 10, 15, or 20 years, based on industry consensus standards. These baseline assessment findings suggest that—at least for the operators we contacted—the 7-year requirement is conservative. However, the 7-year reassessment requirement may be more appropriate for higher-stress pipelines than for lower-stress pipelines.
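The "one immediate repair for every 20 miles" figure cited above follows directly from the reported totals, as a quick arithmetic check shows:

```python
# Quick check of the repairs-per-mile figure reported above: 338 immediate
# repairs across the 6,700 miles assessed in highly populated or frequented
# areas works out to about one repair every 20 miles.

immediate_repairs = 338
miles_assessed = 6_700

miles_per_repair = miles_assessed / immediate_repairs
print(f"about one immediate repair per {round(miles_per_repair)} miles")
```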
The 7-year reassessment requirement is generally more consistent with scientific- and engineering-based intervals for pipelines operating under higher stress. Higher-stress transmission pipelines are typically those that transport natural gas across the country from a gathering area to a local distribution company. For higher-stress pipelines, the industry consensus standard sets maximum reassessment periods at 5 or 10 years, depending on operating pressure. PHMSA does not collect information in such a way that would allow us to readily estimate the percentage of all pipeline miles in highly populated or frequented areas that operate under higher pressure. The 25 operators that we contacted told us that about three-fourths of their pipeline mileage in highly populated or frequented areas operated at higher pressures. Finally, industry data suggest that roughly 250,000 miles of the 300,000 miles (over 80 percent) of all transmission pipelines nationwide may operate at higher pressure. Some operators told us that the 7-year reassessment requirement is conservative for pipelines that operate under lower stress. This is especially true for local distribution companies that use their transmission lines mainly to transport natural gas under lower pressures for several miles from larger cross-country lines in order to feed smaller distribution lines. They pointed out, for example, that in a lower-pressure environment, pipelines tend to leak rather than rupture. Leaks involve controlled, slow emissions that typically create little damage or risk to public safety. Most local distribution companies we spoke with reported finding few, if any, conditions during baseline assessments that would necessitate another assessment within 7 years. As a result, if the 7-year requirement did not exist, the local distribution companies would likely reassess every 15 to 20 years following industry consensus standards.
Some of these operators pointed out that third-party damage poses the greatest threat to their systems. Operators added that third-party damage can happen at any time and that prevention and mitigation measures are the best ways to address it. Operators viewed a risk-based reassessment requirement, such as the one in the consensus standard, as valuable for public safety. Operators of both higher-stress and lower-stress pipelines indicated a preference for a risk-based reassessment requirement based on engineering standards rather than a prescriptive one-size-fits-all standard. Such a risk-based reassessment standard would be consistent with the overall thrust of the integrity management program. Some operators noted that reassessing pipeline segments with few defects every 7 years takes resources away from riskier segments that require more attention. While PHMSA’s regulations require that pipeline segments be reassessed only for corrosion problems at least every 7 years using a less intensive assessment technique (confirmatory direct assessment), some operators point out that it has not worked out that way. They told us that, if they are going to the effort of assessing pipeline segments to meet the 7-year reassessment requirement, they will typically use more extensive testing—for both corrosion and for other problems—than required, because doing so will provide more comprehensive information. Thus, in most cases, operators plan to reassess their pipelines by using in-line inspections or direct assessment for problems in addition to corrosion sooner than required under PHMSA’s rules. Most operators and inspection contractors we contacted told us that the services and tools needed to conduct periodic reassessments will likely be available to most operators.
All of the operators reported that they plan to rely on contractors to conduct all or a portion of their reassessments, and some have signed, or would like to sign, long-term contracts that extend contractor services through a number of years. However, few have scheduled reassessments with contractors, since reassessments are several years in the future and operators are concentrating on baseline assessments. Nineteen of the 21 operators that reported both baseline and reassessment schedules to us said that they primarily plan to use in-line inspection or direct assessment to reassess segments of their pipelines located in highly populated or frequented areas. In-line inspection contractors that we contacted reported that there is capacity within the industry to meet current and future operator demand. Unlike the in-line inspection method, which is an established practice that many operators had used on their pipelines at least once prior to the integrity management program, the direct assessment method is new to both contractors and operators. Direct assessment contractors told us that there is limited expertise in this field, and one contractor said that newer contractors coming into the market to meet demand may not be qualified. The operators planning to use direct assessment for their pipelines are generally local distribution companies with smaller-diameter pipelines that cannot accommodate in-line inspection tools. An industry concern about the 7-year reassessment requirement is that operators will be required to begin reassessments in 2010 while still within the 10-year period (2003-2012) for conducting baseline assessments. Industry was concerned that this overlap of assessments and reassessments from 2010 through 2012 could create a spike in demand for contractor services, with operators having to compete for a limited number of contractors to carry out both.
The industry worried that operators might not be able to meet the reassessment requirement and that it was unnecessarily burdensome. However, most operators that we contacted do not anticipate such a spike, because baseline assessment activity should decrease as they begin to conduct reassessments. (See fig. 2.) Operators will have conducted a large number of baseline assessments between 2005 and 2007 in order to meet the statutory deadline for completing at least half of their baseline assessments by December 2007 (2 years before the predicted overlap). There has also been concern about whether baseline assessments and reassessments would affect natural gas supply if pipelines are taken out of service or operated at reduced pressures while repairs are being made. We are addressing this issue and will report on it in the fall. Recently, PHMSA reassessed its approach for enforcing pipeline safety standards in response to our concern that it lacked a comprehensive enforcement strategy. In August 2005, PHMSA adopted a strategy that focuses on using risk-based enforcement, increasing knowledge of and accountability for results, and improving its own enforcement activities. The strategy also links these efforts to goals to reduce and prevent incidents and damage, in addition to providing for periodic assessment of results. While we have neither reviewed the revised strategy in depth nor examined how it is being implemented, our preliminary view is that it is a reasonable framework that is responsive to the concerns we raised in 2004. PHMSA has established overall goals for its enforcement program to reduce incidents and damage due to operators' noncompliance. PHMSA also recognizes that incident and damage prevention is important, and its strategy includes a goal to influence operators' actions to this end.
To meet these goals, PHMSA has developed a multipronged strategy that is directed at the pipeline industry and stakeholders (such as state regulators) and at ensuring that its own processes make effective use of its resources. First, PHMSA's strategy calls for using risk-based enforcement to, among other things, take enforcement actions that clearly reflect potential risk and seriousness and deal severely with significant operator noncompliance and repeat offenses. Second, the strategy calls for increasing knowledge and accountability for results through such actions as (1) soliciting input from operators, associations, and other stakeholders in developing and refining regulations, inspection protocols, and other guidance; (2) clearly communicating expectations for compliance and sharing lessons learned; and (3) assessing operator and industry compliance performance and making this information available. Third, the strategy, among other things, calls for improving PHMSA's own enforcement activities by developing comprehensive guidance tools, training inspectors on their use, and effectively using state inspection capabilities. Finally, to understand progress being made in encouraging pipeline operators to improve their level of safety and, as a result, reduce accidents and fatalities, PHMSA will annually assess its overall enforcement results as well as various components of the program. Some of the program elements that it may assess are inspection and enforcement processes, such as the completeness and availability of compliance guidance, the presentation of operator and industry performance data, and the quality of inspection documentation and evidence. Our work to date suggests that PHMSA's gas integrity management program should enhance pipeline safety, and operators support it. We have not identified major issues that need to be addressed at this time.
We expect to provide additional insights into these issues when we report to this Subcommittee and others this fall. Because the program is in its early phase of implementation, PHMSA is learning how to oversee the program and operators are learning how to meet its requirements. Similarly, operators are in the early stages of assessing their pipelines for safety problems. This means that the integrity management program will be going through this shakedown period for another year or two as PHMSA and operators continue to gain experience. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or the other Members of the Subcommittee might have. For further information on this testimony, please contact Katherine Siggerud at (202) 512-2834 or siggerudk@gao.gov. Individuals making key contributions to this testimony were Jennifer Clayborne, Anne Dilger, Seth Dykes, Maria Edelstein, Heather Frevert, Matthew LaTour, Bonnie Pignatiello Leer, James Ratzenberger, and Sara Vermillion. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

About a dozen people are killed or injured in natural gas transmission pipeline incidents each year. In an effort to improve upon this safety record, the Pipeline Safety Improvement Act of 2002 requires that operators assess pipeline segments in about 20,000 miles of highly populated or frequented areas for safety risks, such as corrosion, welding defects, or incorrect operation. Half of these baseline assessments must be done by December 2007, and the remainder by December 2012.
Operators must then repair or replace any defective pipelines, and reassess these pipeline segments for corrosion damage at least every 7 years. The Pipeline and Hazardous Materials Safety Administration (PHMSA) administers this program, called gas integrity management. This testimony is based on ongoing work for Congress, as required by the 2002 act. The testimony provides preliminary results on the safety effects of (1) PHMSA's gas integrity management program and (2) the requirement that operators reassess their natural gas pipelines at least every 7 years. It also discusses how PHMSA has acted to strengthen its enforcement program in response to recommendations GAO made in 2004. GAO expects to issue two reports this fall that will address these and other topics. Early indications suggest that the gas transmission pipeline integrity management program enhances public safety by supplementing existing safety standards with risk-based management principles. Operators have reported that they have assessed about 6,700 miles as of December 2005 and completed 338 repairs for problems they are required to address immediately. Operators told GAO that the primary benefit of the program is the comprehensive knowledge they must acquire about the condition of their pipelines. For some operators, the integrity management program has prompted such assessments for the first time. Operators raised concerns about (1) their uncertainty over the level of documentation that PHMSA requires and (2) whether their pipelines need to be reassessed at least every 7 years. The 7-year reassessment requirement is generally consistent with the industry consensus standard of at least every 5 to 10 years for reassessing pipelines operating under higher stress (higher operating pressure in relation to wall strength). The majority of transmission pipelines in the U.S. are estimated to be higher stress pipelines. 
However, most operators told GAO that the 7-year requirement is conservative for pipelines that operate under lower stress because they found few problems requiring reassessments earlier than the 15 to 20 years under the industry standard. Operators GAO contacted said that periodic reassessments are beneficial for finding and preventing problems, but they favored reassessments based on severity of risk rather than a one-size-fits-all standard. Operators did not expect that the existence of an "overlap period" from 2010 through 2012, when operators will be conducting baseline assessments and reassessments at the same time, would create problems in finding resources to conduct reassessments. PHMSA has developed a reasonable enforcement strategy framework that is responsive to GAO's earlier recommendations. PHMSA's strategy is aimed at reducing pipeline incidents and damage through direct enforcement and through prevention involving the pipeline industry and stakeholders (such as state regulators). Among other things, the strategy entails (1) using risk-based enforcement and dealing severely with significant noncompliance and repeat offenses, (2) increasing knowledge and accountability for results by clearly communicating expectations for operators' compliance, (3) developing comprehensive guidance tools and training inspectors on their use, and (4) effectively using state inspection capabilities.
CMS and states jointly administer the Medicaid program. States have flexibility within broad federal parameters for designing and implementing their Medicaid programs. For example, state Medicaid programs must cover certain populations and benefits—known as mandatory populations and benefits—but may choose to also cover other populations and benefits—known as optional populations and benefits. Also, states may choose different payment and delivery systems to provide benefits to Medicaid beneficiaries, such as fee-for-service or managed care. Under a fee-for-service system, health care providers claim reimbursement from state Medicaid programs for services rendered to Medicaid beneficiaries. Under a Medicaid managed care system, states contract with managed care organizations to provide or arrange for medical services, and prospectively pay the organizations a per person, or capitated, payment. In turn, the managed care organizations pay providers, such as hospitals and physicians, for services provided to Medicaid enrollees. States contract with managed care organizations to provide a comprehensive set of services to Medicaid beneficiaries; states may also contract with limited benefit plans to provide a defined set of services, such as mental health services. CMS reviews and approves states' plans to implement their Medicaid programs. Medicaid is an important source of mental health services for millions of vulnerable individuals. In 2009, an estimated 5.8 million adult Medicaid beneficiaries, or 30 percent of all adult Medicaid beneficiaries, were diagnosed with some type of mental illness. States that provide mental health services through limited benefit plans establish these delivery systems under Medicaid waivers. Sections 1115 and 1915(b) of the Social Security Act allow the Secretary of Health and Human Services to waive certain federal requirements, such as limitations on a state's ability to require certain beneficiaries to enroll in managed care.
States that provide Medicaid benefits through managed care arrangements, including limited benefit plans, must comply with certain requirements, including those requiring states to establish procedures to monitor managed care program operations. Federal regulations require states to ensure that managed care organizations coordinate mental and physical health care services; however, states have the option to exempt limited benefit plans from these requirements. States seeking to provide Medicaid services through managed care waivers must submit an application to CMS and obtain approval. Medicaid managed care waivers approved under section 1915(b) of the Social Security Act—the most common type of waiver states use to establish limited benefit plans to provide mental health services to beneficiaries—are typically authorized for a period of two years. At the end of the waiver period, the state can request an extension of the waiver or submit an application for a new waiver. Care coordination is broadly defined as the integration of patient care activities between two or more providers involved in a patient's care with the goal of facilitating the appropriate delivery of services. Activities to coordinate mental and physical health care services include sharing information—such as medical records, test and lab results, and prescribed medications—across providers and care delivery sites. States and limited benefit plans can implement specific care practices with the purpose of facilitating communications and the sharing of information. Care coordination is particularly important for Medicaid beneficiaries with mental illnesses because they are more likely to have other medical conditions requiring ongoing physical health care services than beneficiaries without mental illnesses.
For example, in 2003, 14 percent of all Medicaid beneficiaries had a costly medical condition, such as diabetes and heart disease; however, among adults who were receiving mental health services, 21 percent had such medical conditions. Across the 13 states that contracted with limited benefit plans to provide mental health services to adult Medicaid beneficiaries, the enrollment levels, total payments, and services provided varied. States can enroll different adult populations—such as individuals who are blind, disabled, or have developmental disabilities—in limited benefit plans, which could contribute to variation in the number of adults enrolled, as well as the level of capitated payments states made to these plans. State officials reported that about 4.4 million adult Medicaid beneficiaries were enrolled in limited benefit plans, about 48.6 percent of all adult Medicaid beneficiaries in the 13 states. The number of adults enrolled in these plans ranged from about 93,000 beneficiaries in Kansas to over 1.1 million beneficiaries in Pennsylvania. (See fig. 1.) Across these 13 states, the percentage of adult Medicaid beneficiaries enrolled in limited benefit plans ranged from about 10.7 percent in Florida to about 93.0 percent in Colorado. (See app. II for more information on enrollment in Medicaid and limited benefit plans in the 13 states.) Capitated payments to limited benefit plans providing mental health services to adult Medicaid beneficiaries also varied across the 13 states in fiscal year 2012. State officials reported that capitated payments to these plans totaled about $5.6 billion, or about 9.0 percent of total Medicaid payments for all adult beneficiaries in these 13 states. Capitated payments to these plans ranged from about $86.5 million in New Mexico to almost $2.0 billion in Michigan.
As a share of total Medicaid payments for adult Medicaid beneficiaries, capitated payments to limited benefit plans providing mental health services ranged across the 13 states from 1.3 percent in Florida to 21.9 percent in Michigan (see fig. 2). (See app. III for more information on Medicaid payments in the 13 states.) States also reported variations in the scope of services provided by limited benefit plans to adult Medicaid beneficiaries in fiscal year 2012. Specifically, 3 states contracted with limited benefit plans to provide only mental health services, while 10 states contracted with limited benefit plans to provide both mental health services and services for substance use disorder. Two of these 10 states—Oregon and Utah—contracted with some plans to provide only mental health services and contracted with other plans to provide both mental health services and services for substance use disorders. (See table 1.) The four selected states we studied generally took three steps to facilitate the coordination of mental and physical health care services, but specific activities varied. CMS’s efforts to facilitate the coordination of mental and physical health care services focused primarily on reviewing states’ federal waiver documents and contracts with limited benefit plans. The steps that selected states—Florida, Kansas, Michigan, and Washington—generally took to facilitate the coordination of mental and physical health care included (1) incorporating care coordination requirements in the contracts with limited benefit plans; (2) implementing additional steps to coordinate care; and (3) monitoring limited benefit plans’ implementation of care coordination. The four states’ contracts with limited benefit plans included general provisions regarding the types of entities that health plans are required to coordinate with in order to manage beneficiaries’ health care needs. 
Additionally, each state’s contracts specified the particular coordination activities the limited benefit plans were required to perform. All four states required limited benefit plans to coordinate mental and physical health services with a broad range of providers and other entities. Each state required its limited benefit plans to coordinate with physical health care providers and community organizations, mental health providers, and the state’s Medicaid agency. All four states required some coordination with physical health care providers and community organizations. For example, in Michigan, in addition to coordinating with beneficiaries’ primary care providers, limited benefit plans were required to coordinate with public and private community agencies that provide social support and other non-health care services to individuals with mental illnesses. States’ contracts also required limited benefit plans to coordinate with a wide range of mental health providers that were part of the plans’ network of providers, including mental health providers and case managers that provide ancillary services. For example, Washington required its limited benefit plans to coordinate with social workers to ensure that they shared information on health care needs and services. States also required health plans to coordinate with other state departments and agencies whose clients include Medicaid beneficiaries. For example, Florida required its limited benefit plans to have agreements with state agencies responsible for serving the homeless to ensure coordination and avoid duplication of services. In Kansas, the state required limited benefit plans to coordinate with other state agencies, along with local and regional agencies whose clients included Medicaid beneficiaries. Michigan required its limited benefit plans to coordinate with the criminal justice system, including police/sheriffs, court personnel, and attorneys. 
All four states’ contracts with limited benefit plans required the plans to undertake specific activities to facilitate coordination of mental and physical health care services. These required activities generally fell into the following categories: sharing information and establishing communications between providers and across care settings; identifying patients’ mental and physical health care needs and creating individual care plans; and developing measures and collecting data on coordination of care. All four states required limited benefit plans to implement mechanisms to share information or standardize communications between providers and across care settings. For example, contracts in Florida, Michigan, and Washington required limited benefit plans to develop written plans or agreements outlining when, and in some cases how, care will be coordinated between mental health, primary care, and other providers. Kansas’ contract required all limited benefit plan network providers to request a standardized release of information form from each beneficiary to allow providers to coordinate with primary care physicians and other treatment team members. All four states required limited benefit plans to identify beneficiaries’ health care needs, including mental and physical health care needs; develop individual care plans for beneficiaries to address all needs identified; and update these care plans on a regular basis. One part of the individual care plan development and updating process included coordinating with primary care and other providers as needed, and making appropriate referrals. For example, Washington’s contract required limited benefit plans to create and then update these plans every 180 days, identify patient mental and physical health care needs, ensure coordination between systems that are meeting patients’ needs, and require providers to make appropriate referrals to health care providers when medical concerns are identified.
Two of the four states required limited benefit plans to collect and submit data to monitor care coordination. For example, Kansas required limited benefit plans to collect data from network providers and enrollees to monitor care coordination; and Florida required limited benefit plans to collect information on follow-up services enrollees received within seven days of discharge from all inpatient facilities for a mental health diagnosis. Officials from all four states reported taking additional steps beyond contract requirements to encourage coordination and further integration of mental and physical health care. For example, in 2012, officials from all four states reported that their state implemented Medicaid policies allowing providers to bill for two services, such as mental and physical health care services, in one day for the same beneficiary. In doing so, limited benefit plans can work to integrate and improve access to care by providing mental and physical health services at the same location and allowing both providers to receive reimbursement for services furnished on the same day. Since 2012, officials from two of the four states reported that they have taken steps to further integrate mental and physical health care services. Officials from Kansas reported that in January 2013 the state stopped providing mental health services through a limited benefit plan and implemented a comprehensive managed care arrangement providing both mental and physical health care services to Medicaid beneficiaries, and officials from Florida reported that the state is taking steps to implement similar arrangements in 2014. During 2011 and 2012, all four states we studied conducted a variety of reviews that were either directly focused on the coordination of mental and physical health care services or assessed such coordination as part of broader reviews. States conducted monitoring through five different types of reviews, the use of which varied by state. 
Desk reviews are state Medicaid agency evaluations of documents and reports that plans, including limited benefit plans, submit. States generally conduct these types of reviews at their Medicaid offices. All four states reported that they conduct some form of desk review of reports and data related to the coordination of mental and physical health care services received under limited benefit plans. The states did not issue reports on the findings of these desk reviews. External quality reviews (EQR) are federally required reviews conducted by independent organizations, called External Quality Review Organizations (EQROs), with expertise in assessing the quality of and access to care provided by managed care plans. Federal law requires EQROs to annually review Medicaid managed care plans, including limited benefit plans that provide inpatient services. These reviews assess limited benefit plans’ strengths and weaknesses with respect to quality, timeliness, and access to health care services provided to Medicaid beneficiaries. All three states in which limited benefit plans were subject to these reviews—Florida, Michigan, and Washington—had EQRs conducted that included an assessment, in either 2011 or 2012, of limited benefit plans’ compliance with care coordination contract requirements. In Michigan, the EQR findings did not include any specific results on care coordination in 2011, but found that the state and limited benefit health plans were in compliance with managed care requirements and contractual agreements. In Washington, the EQR findings in 2011 questioned the effectiveness of one limited benefit plan’s intervention to increase the quality of care coordination, including the validity of the methods used to assess care coordination. In Florida, officials reported that the EQR is conducted annually in conjunction with two process improvement projects. 
One limited benefit plan in this state participated in a project that examined the documentation of services in an effort to improve communication and coordination of services between physical and mental health providers in limited benefit plans. This project is still ongoing. Internal onsite reviews are reviews conducted by the state Medicaid agency at provider or managed care plan offices, or at locations where Medicaid beneficiaries receive services. Three of the four states—Florida, Michigan, and Kansas—conducted onsite reviews during the 2011 through 2012 time period. Florida officials reported that the state conducted onsite reviews annually to evaluate limited benefit plans’ administrative and clinical compliance, including care coordination. Some of these reviews identified needed improvements; for example, in a 2012 review of one limited benefit plan in Florida, the state found that the plan needed to establish a process for providing Medicaid beneficiaries immediate access to psychiatric services upon their release from a jail or juvenile detention facility to ensure prescribed medications were available. Michigan officials reported that their reviews occurred biennially and, in part, assessed limited benefit plans’ capacity to coordinate physical and mental health services for Medicaid beneficiaries. Some of these reviews identified needed improvements; for example, in a 2011 review of one limited benefit plan, the state was unable to find evidence of communication and coordination between a psychiatric hospital and a Medicaid beneficiary’s primary care physician or health plan and recommended that the limited benefit plan devise a coordination plan. In 2011 and 2012, Kansas conducted an onsite review of the one limited benefit plan the state contracted with to examine plan guidelines used to identify and authorize the coordination of care for beneficiaries with high needs, including those with certain mental health diagnoses.
In regard to care coordination, the state found that the limited benefit plan met its contractual requirements and had an acceptable process to identify Medicaid beneficiaries with high physical and mental health needs, and provided these beneficiaries with ongoing care coordination between their mental health providers and other providers and entities delivering treatment and service. State reviewers also found that the plan had coordinated its activities with other social service, disability, and welfare systems, including the state’s criminal justice and disability agencies. Independent assessments are federally required reviews through which states operating Medicaid managed care programs under section 1915(b) waivers must evaluate and maintain data regarding the cost-effectiveness of their programs, the effect of the programs on beneficiaries’ access to services, and the quality of services provided under the program. At a minimum, a state is required to conduct an independent assessment for its first two waiver periods. Of the four states, Kansas was the only state required to conduct an independent assessment during the time period of our review because it began providing mental health services through a contract with a limited benefit plan under a waiver in 2007. Kansas’ 2009 assessment found that a small percentage of Medicaid beneficiaries enrolled in the limited benefit plan received care coordination, and recommended that the state should increase the share of beneficiaries receiving these services. The 2011 assessment found improvement in the state’s care coordination. Specifically, the report noted that the state expanded and enhanced its care coordination activities in an effort to increase the share of Medicaid beneficiaries in the limited benefit plan whose care was coordinated. Focused care coordination studies examine a state’s coordination of mental health and physical health care services. 
Michigan was the only state to conduct a focused care coordination study of limited benefit plans during the time period of our review. The study, which was conducted by an independent organization under contract, examined Medicaid utilization patterns to assess whether care coordination occurred. While the report did not draw specific conclusions on whether care coordination occurred or make recommendations to the state, it did indicate that Medicaid beneficiaries with serious mental health diagnoses had more emergency room visits, ambulatory visits, and inpatient admissions than other groups. We found that CMS did not take direct steps to facilitate the coordination of mental and physical health care services for adult Medicaid beneficiaries enrolled in limited benefit plans because its role is to provide oversight of, and technical assistance to, the states in carrying out their Medicaid programs. In its oversight role, the agency reviewed and approved state-submitted managed care waiver applications and contracts with limited benefit plans providing mental health services, some of which contain care coordination provisions. Federal regulations require managed care plans to coordinate mental and physical health care services and identify persons with special health care needs. However, states providing health care services through limited benefit plans may exempt these plans from these requirements. States requiring limited benefit plans to comply with these requirements must include these details in their waiver applications and assure that plans comply with these rules. CMS officials told us that the agency reviews the waiver application’s care coordination provisions as part of its broader review of states’ waiver applications. The officials added that CMS also reviews the results of EQRs of states’ programs, and provides final agency approval for contracts between states and limited benefit plans.
Beyond these activities, CMS officials indicated that their role was to provide technical assistance to states, as needed; for example, when states are designing their programs. CMS’s regional offices also provided oversight of states’ contracting with limited benefit plans; however, none of their activities directly assessed care coordination provided by limited benefit plans. In the four regional offices we studied, we found that the number and frequency of regional office reviews of limited benefit plans varied. Regional officials we spoke to cited several activities ranging from periodic managed care calls with states, to comprehensive and focused onsite and desk reviews to ensure limited benefit plans met all federal managed care requirements. Officials in one region explained that CMS generally conducted additional regional office reviews on limited benefit plans that were newer or raised special concerns. The Department of Health and Human Services reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or yocomc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In addition to the contact named above, Tim Bushfield, Assistant Director; Shaunessye Curry; Kristin Ekelund; Sandra George; and Drew Long made key contributions to this report.

Medicaid is the largest payer of mental health services in the United States and Medicaid spending on such services is likely to grow. 
Some states provide mental health services to Medicaid beneficiaries separately from physical health care services through contracts with limited benefit plans, which are paid on a per person basis to provide a defined set of services. While using these plans to provide mental health services may control costs, it can also increase the risk that these services will not be coordinated with physical health care services. Coordinated care is important for Medicaid beneficiaries with mental illnesses because they are more likely than others to have ongoing health conditions. GAO was asked for information on states' use of Medicaid managed care. In this report, GAO examined (1) the extent to which states provide mental health services through limited benefit plans and (2) the steps states and CMS have taken to facilitate the coordination of mental and physical health care services for adult beneficiaries enrolled in these plans. GAO collected information on enrollment, payments, and services from the 13 states that contracted with limited benefit plans to provide mental health services to adult beneficiaries. GAO also selected 4 states based on, among other criteria, the number of beneficiaries enrolled in limited benefit plans. GAO reviewed documents from the 4 states and CMS, and interviewed officials to identify steps taken to coordinate care. The Department of Health and Human Services provided technical comments, which GAO incorporated, as appropriate. Thirteen states reported that in fiscal year 2012 they paid a total of about $5.6 billion to limited benefit plans to provide mental health services to about 4.4 million adult Medicaid beneficiaries. States can enroll different populations--such as adults who are blind, disabled, or have developmental disabilities--in limited benefit plans, which could contribute to the variation in the number of adults enrolled and level of capitated payments made across the 13 states. 
Four selected states--Florida, Kansas, Michigan, and Washington--took three steps to facilitate the coordination of mental and physical health care services: 1. incorporating care coordination requirements in limited benefit plan contracts; 2. implementing additional steps to coordinate care, such as policies that included incentives to coordinate care; and 3. monitoring limited benefit plans' implementation of care coordination. GAO found that the Centers for Medicare & Medicaid Services (CMS) did not take direct steps to facilitate care coordination, because its role is to oversee and provide technical assistance. In its oversight role, CMS reviewed and approved state-submitted documents, such as contracts with mental health limited benefit plans, some of which contained care coordination requirements.
In order to be adequately prepared for a major public health threat, state and local public health agencies need to have several basic capabilities, whether they possess them directly or have access to them through regional agreements. Public health departments need to have disease surveillance systems and epidemiologists to detect clusters of suspicious symptoms or diseases in order to facilitate early detection of disease and treatment of victims. Laboratories need to have adequate capacity and necessary staff to test clinical and environmental samples in order to identify an agent promptly so that proper treatment can be started and infectious diseases prevented from spreading. All organizations involved in the response must be able to communicate easily with one another as events unfold and critical information is acquired, especially in a large-scale infectious disease outbreak. In addition, plans that describe how state and local officials would manage and coordinate an emergency response need to be in place and to have been tested in an exercise, both at the state and local levels as well as at the regional level. Local health care organizations, including hospitals, are generally responsible for the initial response to a public health emergency, be it a bioterrorist attack or a naturally occurring infectious disease outbreak. In the event of a large-scale infectious disease outbreak, hospitals and their emergency departments would be on the front line, and their personnel would take on the role of first responders. Because hospital emergency departments are open 24 hours a day, 7 days a week, exposed individuals would be likely to seek treatment from the medical staff on duty. Staff would need to be able to recognize and report any illness patterns or diagnostic clues that might indicate an unusual infectious disease outbreak to their state or local health department. 
Hospitals would need to have the capacity and staff necessary to treat severely ill patients and limit the spread of infectious disease. In addition, hospitals would need adequate stores of equipment and supplies, including medications, personal protective equipment, quarantine and isolation facilities, and air handling and filtration equipment. The federal government also has a role in preparedness for and response to major public health threats. It becomes involved in investigating the cause of the disease, as it is doing with SARS. In addition, the federal government provides funding and resources to state and local entities to support preparedness and response efforts. CDC’s Public Health Preparedness and Response for Bioterrorism program provided funding through cooperative agreements in fiscal year 2002 totaling $918 million to states and municipalities to improve bioterrorism preparedness and response, as well as other public health emergency preparedness activities. HRSA’s Bioterrorism Hospital Preparedness Program provided funding through cooperative agreements in fiscal year 2002 of approximately $125 million to states and municipalities to enhance the capacity of hospitals and associated health care entities to respond to bioterrorist attacks. Among the other public health emergency response resources that the federal government provides is the Strategic National Stockpile, which contains pharmaceuticals, antidotes, and medical supplies that can be delivered anywhere in the United States within 12 hours of the decision to deploy. Officials view influenza vaccine as the cornerstone of efforts to prevent and control annual influenza outbreaks as well as pandemic influenza. Deciding which viral strains to include in the annual influenza vaccine depends on data collected from domestic and international surveillance systems that identify prevalent strains and characterize their effect on human health. 
Antiviral drugs and vaccines against influenza are expected to be in short supply if a pandemic occurs. Antiviral drugs, which can be used against all strains of influenza, have been as effective as vaccines in preventing illness from influenza and have the advantage of being available now. HHS assumes shortages will occur in a pandemic because demand is expected to exceed current rates of production and increasing production capacity of antiviral drugs can take at least 6 to 9 months, according to manufacturers. In the cities we visited, state and local officials reported varying levels of public health preparedness to respond to an infectious disease outbreak. They recognized gaps in preparedness elements such as communication and were beginning to address them. Gaps also remained in other preparedness elements that have been more difficult to address, including the response capacity of the workforce and the disease surveillance and laboratory systems. In addition, we found that the level of preparedness varied across the cities. Jurisdictions that had multiple prior experiences with public health emergencies were generally more prepared than those with little or no such experience prior to our site visits. We found that regional planning was lacking between states. States were working on their own plans for receiving and distributing the Strategic National Stockpile and for administering mass vaccinations. States and local areas were addressing gaps in public health preparedness elements, such as communication, but weaknesses remained in other preparedness elements, including the response capacity of the workforce and the disease surveillance and laboratory systems. Gaps in capacity often are not amenable to solution in the short term because either they require additional resources or the solution takes time to implement. We found that officials were beginning to address communication problems. 
For example, six of the seven cities we visited were examining how communication would take place in a public health emergency. Many cities had purchased communication systems that allow officials from different organizations to communicate with one another in real time. In addition, state and local health agencies were working with CDC to build the Health Alert Network (HAN), an information and communication system. The nationwide HAN program has provided funding to establish infrastructure at the local level to improve the collection and transmission of information related to public health preparedness, including preparedness for a bioterrorism incident. Goals of the HAN program include providing high-speed Internet connectivity, broadcast capacity for emergency communication, and distance-learning infrastructure for training. State and local officials for the cities we visited recognized and were attempting to address inadequacies in their surveillance systems and laboratory facilities. Local officials were concerned that their surveillance systems were inadequate to detect a bioterrorist event and all of the states we visited were making efforts to improve their disease surveillance systems. Six of the cities we visited used a passive surveillance system to detect infectious disease outbreaks. However, passive systems may be inadequate to identify a rapidly spreading outbreak in its earliest and most manageable stage because, as officials in three states noted, there is chronic underreporting and a time lag between diagnosis of a condition and the health department’s receipt of the report. To improve disease surveillance, six of the states and two of the cities we visited were developing surveillance systems using electronic databases. Several cities were also evaluating the use of nontraditional data sources, such as pharmacy sales, to conduct surveillance. 
Three of the cities we visited were attempting to improve their surveillance capabilities by incorporating active surveillance components into their systems. However, work to improve surveillance systems has proved challenging. For example, despite initiatives to develop active surveillance systems, the officials in one city considered event detection to be a weakness in their system, in part because they did not have authority to access hospital information systems. In addition, various local public health officials in other cities reported that they lacked the resources to sustain active surveillance. Officials from all of the states we visited reported problems with their public health laboratory systems and said that they needed to be upgraded. All states were planning to purchase the equipment necessary for rapidly identifying a biological agent. State and local officials in most of the areas that we visited told us that the public health laboratory systems in their states were stressed, in some cases severely, by the sudden and significant increases in workload during the anthrax incidents in the fall of 2001. During these incidents, the demand for laboratory testing was significant even in states where no anthrax was found and affected the ability of the laboratories to perform their routine public health functions. Following the incidents, over 70,000 suspected anthrax samples were tested in laboratories across the country. Officials in the states we visited were working on other solutions to their laboratory problems. States were examining various ways to manage peak loads, including entering into agreements with other states to provide surge capacity, incorporating clinical laboratories into cooperative laboratory systems, and purchasing new equipment. 
One state was working to alleviate its laboratory problems by upgrading two local public health laboratories to enable them to process samples of more dangerous pathogens, and establishing agreements with other states to provide backup capacity. Another state reported that it was using the funding from CDC to increase the number of pathogens the state laboratory could diagnose. The state also reported that it has worked to identify laboratories in adjacent states that are capable of being reached within 3 hours over surface roads. In addition, all of the states reported that their laboratory response plans were revised to cover reporting and sharing laboratory results with local public health and law enforcement agencies. At the time of our site visits, shortages in personnel existed in state and local public health departments and laboratories and were difficult to remedy. Officials from state and local health departments told us that staffing shortages were a major concern. Two of the states and cities that we visited were particularly concerned that they did not have enough epidemiologists to do the appropriate investigations in an emergency. One state department of public health we visited had lost approximately one-third of its staff because of budget cuts over the past decade. This department had been attempting to hire more epidemiologists. Barriers to finding and hiring epidemiologists included noncompetitive salaries and a general shortage of people with the necessary skills. Shortages in laboratory personnel were also cited. Officials in one city noted that they had difficulty filling and maintaining laboratory positions. People who accepted the positions often left the health department for better-paying positions. Increased funding for hiring staff cannot necessarily solve these shortages in the near term because for many types of laboratory positions there are not enough trained individuals in the workforce. 
According to the Association of Public Health Laboratories, training laboratory personnel to provide them with the necessary skills will take time and require a strategy for building the needed workforce. We found that the overall level of public health preparedness varied by city. In the cities we visited, we observed that those cities that had recurring experience with public health emergencies, including those resulting from natural disasters, or with preparation for National Security Special Events, such as political conventions, were generally more prepared than cities with little or no such experience. Cities that had dealt with multiple public health emergencies in the past might have been further along because they had learned which organizations and officials need to be involved in preparedness and response efforts and moved to include all pertinent parties in the efforts. Experience with natural disasters raised the awareness of local officials regarding the level of public health emergency preparedness in their cities and the kinds of preparedness problems they needed to address. Even the cities that were better prepared were not strong in all elements. For example, one city reported that communications had been effective during public health emergencies and that the city had an active disease surveillance system. However, officials reported gaps in laboratory capacity. Another one of the better-prepared cities was connected to HAN and the Epidemic Information Exchange (Epi-X), and all county emergency management agencies in the state were linked. However, the state did not have written agreements with its neighboring states for responding to a public health emergency. Response organization officials were concerned about a lack of planning for regional coordination between states of the public health response to an infectious disease outbreak. 
As called for by the guidance for the CDC and HRSA funding, all of the states we visited organized their planning on the basis of regions within their states, assigning local areas to particular regions for planning purposes. A concern for response organization officials was the lack of planning for regional coordination between states. A hospital official in one city we visited said that state lines presented a “real wall” for planning purposes. Hospital officials in one state reported that they had no agreements with other states to share physicians. However, one local official reported that he had been discussing these issues and had drafted mutual aid agreements for hospitals and emergency medical services. Public health officials from several states reported developing working relationships with officials from other states to provide backup laboratory capacity. States have begun planning for use of the Strategic National Stockpile. To determine eligibility for the CDC funding, applicants were required to develop interim plans to receive and manage items from the stockpile, including mass distribution of antibiotics, vaccines, and medical materiel. However, having plans for the acceptance of the deliveries from the stockpile is not enough. Plans have to include details about dividing the materials that are delivered in large pallets and distributing the medications and vaccines. Of the seven states we visited, five states had completed plans for the receipt and distribution of the stockpile. One state that was working on its plan stated that it would be completed in January 2003. Only one state had conducted exercises of its stockpile distribution plan, while the other states were planning to conduct exercises or drills of their plans sometime in 2003. In addition, five states reported on their plans for mass vaccinations and seven states reported on their plans for large-scale administration of smallpox vaccine in response to an outbreak. 
Some states we visited had completed plans for mass vaccinations, whereas other states were still developing their plans. The mass vaccination plans were generally closely tied to the plans for receiving and administering the stockpile. In addition, two states had completed smallpox response plans, which include administering mass smallpox vaccinations to the general population, whereas four of the other states were drafting plans. The remaining state was discussing such a plan. However, only one of the states we visited had tested its plan for conducting mass smallpox vaccinations in an exercise. Our recent work shows that progress in improving public health response capacity has lagged in hospitals. Although most hospitals across the country reported participating in basic planning activities for large-scale infectious disease outbreaks, few have acquired the medical equipment resources, such as ventilators, to handle large increases in the number of patients that may result from outbreaks of diseases such as SARS. At the time of our site visits, we found that hospitals were beginning to coordinate with other local response organizations and collaborate with each other in local planning efforts. Hospital officials in one city we visited told us that until September 11, 2001, hospitals were not seen as part of a response to a terrorist event but that the city had come to realize that the first responders to a bioterrorism incident could be a hospital’s medical staff. Officials from the state began to emphasize the need for a local approach to hospital preparedness. They said, however, that it was difficult to impress the importance of cooperation on hospitals because hospitals had not seen themselves as part of a local response system. The local government officials were asking them to create plans that integrated the city’s hospitals and addressed such issues as off-site triage of patients and off-site acute care. 
According to our survey of over 2,000 hospitals, 4 out of 5 hospitals reported having a written emergency response plan for large-scale infectious disease outbreaks. Of these hospitals with emergency response plans, most include a description of how to achieve surge capacity for obtaining additional pharmaceuticals, other supplies, and staff. Almost all hospitals reported participating in community interagency disaster preparedness committees. Our survey showed that hospitals have provided training to staff on biological agents, but fewer than half have participated in exercises. Most hospitals we surveyed reported providing training about identifying and diagnosing symptoms for the six biological agents identified by the CDC as most likely to be used in a bioterrorist attack. While at least 90 percent of hospitals reported providing training for smallpox and anthrax, approximately three-fourths of hospitals reported providing training about plague, botulism, tularemia, and hemorrhagic fever viruses. Fewer than half the hospitals reported participating in drills or exercises related to bioterrorism. Most hospitals lack adequate equipment, isolation facilities, and staff to treat a large increase in the number of patients for an infectious disease such as SARS. To prevent transmission of SARS in health care settings, CDC recommends that health care workers use personal protective equipment, including gowns, gloves, respirators, and protective eyewear. SARS patients in the United States are being isolated until they are no longer infectious. CDC estimates that patients require mechanical ventilation in 10 to 20 percent of SARS cases. In the seven cities we visited, hospital, state, and local officials reported that hospitals needed additional equipment and capital improvements—including medical stockpiles, personal protective equipment, quarantine and isolation facilities, and air handling and filtering equipment—to enhance preparedness. 
Five of the states we visited reported shortages of hospital medical staff, including nurses and physicians, necessary to increase response capacity in an emergency. One of the states we visited reported that only 11 percent of its hospitals could readily increase their capacity for treating patients with infectious diseases requiring isolation, such as smallpox and SARS. Another state reported that most of its hospitals have little or no capacity for isolating patients diagnosed with or being tested for infectious diseases. According to our hospital survey, availability of medical equipment varied greatly among hospitals, and few hospitals seemed to have adequate equipment and supplies to handle a large-scale infectious disease outbreak. While most hospitals had at least 1 ventilator per 100 staffed beds, 1 personal protective equipment suit per 100 staffed beds, or an isolation bed per 100 staffed beds, half of the hospitals had fewer than 6 ventilators per 100 staffed beds, 3 or fewer personal protective equipment suits per 100 staffed beds, and fewer than 4 isolation beds per 100 staffed beds. Federal and state influenza pandemic response plans, another important component to public health preparedness, are in various stages of completion and do not consistently address the problems related to the purchase, distribution, and administration of supplies of vaccines and antiviral drugs during a pandemic. CDC has provided interim draft guidance to facilitate state plans, but final federal decisions necessary to mitigate the effects of potential shortages of vaccines and antiviral drugs have not been made. Until such decisions are made, the timeliness and adequacy of response efforts may be compromised. Federal and state officials have not finalized plans for responding to pandemic influenza. 
To foster state and local pandemic planning and preparedness, CDC first issued interim planning guidance in draft form to all states in 1997, outlining general federal and state planning responsibilities. Thirty-four states are actively preparing a pandemic response plan, and many are integrating these plans with existing state plans to respond to natural or man-made disasters, such as floods or a bioterrorist attack. Although to a certain extent planning efforts for other emergencies can be used for pandemic response, additional planning is important to deal with specific aspects of a pandemic response. This includes developing plans to address the large-scale emergency needs of an entire population, including mass distribution and administration of limited vaccines and drugs, with an uncertain amount of available resources. In the most recent version of its pandemic influenza planning guidance for states, CDC lists several key federal decisions related to vaccines and antiviral drugs that have not been made. These decisions include determining the amount of vaccines and antiviral drugs that will be purchased at the federal level; the division of responsibility between the public and private sectors for the purchase, distribution, and administration of vaccines and drugs; and how population groups will be prioritized and targeted to receive limited supplies of vaccines and drugs. In each of these areas, until federal decisions are made, states will not be able to develop strategies consistent with federal action. The interim draft guidance for state pandemic plans says that resources can be expected to be available through federal contracts to purchase influenza vaccine and some antiviral agents, but some state funding may be required. The amounts of antiviral drugs to be purchased and stockpiled are yet to be determined, even though these drugs are available and can theoretically be used for both treatment and prevention during a pandemic. 
CDC has indicated in its interim draft guidance that the policies for purchasing, distributing, and administering vaccines and drugs by the private and public sector will change during a pandemic, but some decisions necessary to prepare for these expected changes have not been made. During a typical annual influenza response, influenza vaccine and antiviral drug distribution is primarily handled directly by manufacturers through private vendors and pharmacies to health care providers. During a pandemic, however, CDC interim draft guidance indicates that many of these private-sector responsibilities may be transferred to the public sector at the federal, state, or local levels, and priority groups within the population would need to be established for receiving limited supplies of vaccines and drugs. State officials are particularly concerned that a national plan has not been issued with final recommendations for how population groups should be prioritized to receive vaccines and antiviral drugs. In its interim draft guidance, CDC lists eight population groups that should be considered in establishing priorities among groups for receiving vaccines and drugs during a pandemic. The list includes such groups as health care workers and public health personnel involved in the pandemic response, persons traditionally considered to be at increased risk of severe influenza illness and mortality, and preschool and school-aged children.

Following the bioterrorist events of the fall of 2001, there has been concern that the nation may not be prepared to respond to a major public health threat, such as the current outbreak of Severe Acute Respiratory Syndrome (SARS). Whether a disease outbreak occurs naturally or is due to the intentional release of a harmful biological agent by a terrorist, much of the initial response would occur at the local level, particularly hospitals and their emergency departments. 
Efforts to plan for worldwide influenza pandemics are useful for understanding public health preparedness for other large-scale outbreaks. GAO was asked to examine (1) the preparedness of state and local public health agencies and organizations for responding to a large-scale infectious disease outbreak, (2) the preparedness of hospitals for responding to a large-scale infectious disease outbreak, and (3) federal and state efforts to prepare for an influenza pandemic. This testimony is based on GAO's report, Bioterrorism: Preparedness Varied across State and Local Jurisdictions, GAO-03-373 (Apr. 7, 2003), a survey of hospitals GAO conducted to assess their level of emergency preparedness, and information updating GAO's prior report on federal and state planning for an influenza pandemic, Influenza Pandemic: Plan Needed for Federal and State Response, GAO-01-4 (Oct. 27, 2000). The efforts of state and local public health agencies to prepare for a bioterrorist attack have improved the nation's capacity to respond to infectious disease outbreaks and other major public health threats, but gaps in preparedness remain. GAO found workforce shortages and gaps in disease surveillance and laboratory facilities. The level of preparedness varied across cities GAO visited. Jurisdictions that have had multiple prior experiences with public health emergencies were generally more prepared than others. GAO found that regional planning was generally lacking between states but that states were developing their own plans for receiving and distributing medical supplies for emergencies, as well as plans for mass vaccinations in the event of a public health emergency. GAO found that many hospitals lack the capacity to respond to large-scale infectious disease outbreaks. Most hospitals across the country reported participating in basic planning activities for large-scale infectious disease outbreaks and training staff about biological agents. 
However, most hospitals lack adequate equipment, isolation facilities, and staff to treat a large increase in the number of patients that may result. Federal and state officials have not finalized plans for responding to pandemic influenza. These plans do not consistently address problems related to the purchase, distribution, and administration of supplies of vaccines and antiviral drugs that may be needed during a pandemic.
The Trade Promotion Coordinating Committee (TPCC) is a cabinet-level interagency committee chaired by the Secretary of Commerce. It began meeting in 1993, and it has met at least once annually, except during an 18-month period between 1999 and 2001. The TPCC also encouraged the formation of various interagency staff-level working groups. These groups have met or communicated more frequently. The TPCC has a staff of three or four Commerce trade professionals, located in Commerce's International Trade Administration. The TPCC has no independent budget and no specific authority to direct its member agencies. Nine key TPCC member agencies provide a range of specific trade promotion programs for exporters. The Departments of Commerce and Agriculture identify export opportunities and conduct trade promotion activities. The U.S. Export-Import Bank (Eximbank) and the Overseas Private Investment Corporation (OPIC) help businesses participate in riskier markets by providing financing and insurance for exports or development projects. The Small Business Administration (SBA) provides export training and loans for small- and medium-sized businesses desiring to export. The U.S. Trade Representative (USTR) and the Department of State seek to create and maintain open markets for U.S. exports and investments. Although it is not actually a trade agency, the U.S. Agency for International Development (USAID) seeks to promote economic growth; help developing country governments make economic reforms and identify changes to laws, regulations, and banking systems; and provide firm-level assistance to small businesses, thus allowing these countries to become more attractive trade and investment partners to the United States. In developing country markets, basic infrastructure and capital equipment are also essential to assist market growth. The U.S.
Trade and Development Agency (TDA) supports the planning of infrastructure development and trade capacity building in such areas as energy or transportation systems, with the expectation that U.S. exporters will later have opportunities to bid on these projects for which TDA has provided support. Total export promotion funding (excluding that for OPIC and USAID) declined slightly, from nearly $2.4 billion to $2 billion between fiscal years 1996 and 2001, but rose to $2.5 billion in fiscal year 2002. The resource allocations among TPCC agencies did not change significantly over the last 5 years. The Department of Agriculture continued to have the largest share of total export promotion funding in fiscal year 2002, as it did in fiscal year 1996. (See fig. 1.) Eximbank and the Department of Commerce still have the second and third largest funding levels among the agencies. During this period, the funding levels of the two agencies whose overseas staff identify and develop export opportunities for U.S. firms seeking to export—Commerce's Foreign Commercial Service (FCS) and Agriculture's Foreign Agricultural Service (FAS)—increased. The FCS budget increased about 14 percent, and the FAS budget increased by about 8 percent between fiscal years 1996 and 2002. However, beginning in 1998, these agencies' administrative costs increased due to the implementation of the Department of State's Interagency Cost Sharing System. According to Commerce, these costs made fewer funds available for export services. The Export Enhancement Act of 1992 requires that the TPCC develop and implement an annual national export strategy that, among other things: establishes a set of federal priorities supporting U.S. exports; develops a plan to align federal programs with established priorities; and proposes an annual unified federal trade promotion budget.
The act did not provide the TPCC with specific authority to create a unified export promotion budget, which would include the reallocation of agency resources to support the national export strategies. In practice, the TPCC facilitates interagency discussions of trade issues, coordinates interagency responses to administrative or congressional inquiries, and prepares the mandated national export strategy. The TPCC's annual national export strategies have identified broad priorities, but they have not discussed agencies' specific export promotion goals, such as increasing exports in a TPCC-targeted market, or assessed progress made toward achieving the committee's broad priorities. The TPCC has limited ability to affect the alignment of export promotion resources across agencies, but some agencies have aligned resources to support the TPCC's broad priorities. The TPCC's annual strategies, called "national export strategies," describe export promotion efforts and outline broad priorities. However, they do not identify specific goals and associated agency responsibilities. The first several strategies identified numerous markets and sectors of export promise as priorities. For example, the 1994 national export strategy identified a broad set of 10 markets, called the big emerging markets (BEM), where the TPCC expected exports to grow over the next several decades. In 1995, the strategy gave additional attention to the traditional export markets of Japan, Canada, and Western Europe, and 3 years later the strategy expanded its targets further, emphasizing increasing exports to nontraditional markets in Latin America and Asia. However, none of these strategies discussed agencies' specific goals for targeted markets or regions or outlined various agencies' responsibilities in addressing goals over the coming years. One of the regions identified as having promise as a market for U.S. exporters, for example, was Central and Eastern Europe.
Citing Poland’s and Turkey’s rapid transitions to market economies, their pent-up demand for western goods, and their desire to join the European Union (EU), the 1997 strategy noted that these countries drove regional trade and, together with the Czech Republic and Hungary, were regional focal points for U.S. trade and investment. It identified the most competitive sectors for U.S. companies as well as regional barriers to U.S. exports. Later strategies also discussed broad trade objectives. Again, however, they did not identify specific goals or agency responsibilities in implementing the strategy. The TPCC’s successive strategies lack continuity in addressing identified issues, and they have not assessed progress made toward achieving the TPCC’s broad export priorities. For example, the 1997 strategy noted that the United States was losing market share in Eastern Europe to the EU, but the 1998 strategy did not report on any changes in this condition. Instead, it discussed U.S. market share in the EU countries. Nor did the 1998 strategy update specific objectives for Central and Eastern Europe identified earlier, such as addressing bribery, negotiating for international product standards in Poland, or identifying specific barriers to trade in Turkey. Rather, the 1998 strategy focused on Europe and the challenges of competing in the EU market. The next strategy, for 2000, did not discuss East European markets but highlighted the opportunities in China. The 2002 strategy did not specifically discuss China. The TPCC has been unable to identify common performance measures because it has not achieved consensus on how agencies should measure export program results. Moreover, it has not reviewed agencies’ annual performance reports under the Government Performance and Results Act of 1993. This act mandates that the Office of Management and Budget (OMB) require federal agencies to develop performance measures and assess performance. 
Since 1994, the TPCC has called for the development of common measures to evaluate trade promotion performance. The TPCC identified three common measures of success—the amount of new exports, the number of new jobs, or the value of sales resulting from exported services—to help assess agencies’ export promotion programs. However, the TPCC’s 2000 national export strategy noted that generally the indicators that each agency developed to measure its performance differed from those of other agencies, as well as from the cross-cutting measures developed for prior TPCC reports. Without common indicators, it is not possible to trace performance over time in achieving TPCC priorities. The overall effect is that it is not clear whether federal export promotion resources are being used most productively. The TPCC has sought to move toward developing a unified federal trade promotion budget and has worked with OMB to participate in the budget process. However, with no authority to reallocate resources among the agencies and occasional agency resistance to its guidance, the TPCC has provided limited direction over the use of export promotion resources used to support the strategy. Moreover, the most dramatic resource changes occurred within its own agency, Commerce, but even Commerce did not fully support all the targeted markets. Finally, resource allocations have been affected by other factors, such as foreign policy initiatives, the need to provide broad country coverage, and the agencies’ emphasis on pursuing export opportunities in the most accessible overseas markets. The TPCC has sought to propose a unified federal trade promotion budget by making recommendations to the President, through OMB, on selected export promotion budget matters. 
The TPCC also obtained OMB approval to screen member agencies' high-priority trade promotion initiatives in 1999; however, this effort was limited in that it highlighted only individual agency priorities and did not serve as an examination of how agencies' trade promotion programs and budgets overall were most productively used to support the strategy. Agencies have continued to submit their proposed budgets separately to OMB, and agency representatives told us that their agencies would resist any TPCC "clearance" of their budgets. For example, in 1999, USAID decided not to participate in the TPCC budget reviews, even after representatives of the TPCC Chairman specifically requested that it do so. USAID representatives with whom we spoke did not view their programs as having a commercial application, although some TPCC member agencies consider some types of USAID technical assistance, such as energy or environmental projects, as possible precursors to potential exports of U.S. services. The TPCC indicated to its members that using this process would make favorable funding decisions more likely; however, OMB was not always responsive to TPCC recommendations. For example, of 10 items submitted by the TPCC to OMB for funding, only 2 received full funding, 4 received partial funding, and 4 were not funded. The TPCC has not consistently used this process and did not submit a list of priorities to OMB in 2001 or 2002. Based on the TPCC's strategy of targeting big emerging markets, we analyzed the shift in staffing allocations to these markets. Generally, FCS and FAS have shifted their staffing allocations to support TPCC-identified priority markets. For example, in fiscal year 1996, 32 percent of FCS overseas staff were located in the BEMs. In the same year, 23 percent of overseas staff were located in the group of industrialized countries called the Group of Six (G-6) countries (excluding the United States).
In fiscal year 2001, the distribution changed to 37 percent in the BEMs and 17 percent in G-6 countries. (See fig. 2.) The distribution of FAS's staff among G-6 countries, BEM countries, and all other countries with FAS offices also shifted between fiscal years 1996 and 2001. In fiscal year 1996, 26 percent of FAS staff in overseas offices were located in BEM countries, and 22 percent were in G-6 countries. In fiscal year 2001, the distribution changed to 29 percent in BEM countries and 21 percent in G-6 countries. (See fig. 3.) With respect to the TPCC priority markets that we visited, Poland and Turkey, Commerce's FCS staff increases have been smaller. In its 1997 national export strategy, the TPCC noted that the United States was losing market share in some of the BEMs and directed that, where resources allow, the TPCC agencies target the more promising of these markets. It identified Poland as one of four BEMs with the greatest market potential. Commercial staff in Poland initially rose from 9 to 18 between fiscal years 1996 and 1997; however, the Commerce Inspector General noted in a 1997 report that this level was not sufficient. The Inspector General reported that Poland did not get increased resources, like other BEMs did, because FCS headquarters did not consider European countries a priority. In its report to the Congress in September 2001, the Inspector General, citing declines in exports from the United States to Poland, recommended that the post develop a missionwide strategy that reflects U.S. priorities and objectives. At the time of our visit in 2002, the FCS post staff level was 14. Nor did Commerce significantly increase staffing in Turkey, the other BEM in our study of the TPCC's 1997 strategy for Central Europe, where FCS staff levels fluctuated between 13 and 15 between fiscal years 1996 and 2001, and a key position was vacant for more than a year.
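The staffing shifts just described can be summarized as percentage-point changes between fiscal years 1996 and 2001. The following is a minimal sketch using the rounded shares reported above, not exact personnel data:

```python
# Share (percent) of overseas staff by market group, as reported in the text
# (rounded figures; actual staffing tables appear in apps. V and VI).
fcs = {"BEM": {1996: 32, 2001: 37}, "G-6": {1996: 23, 2001: 17}}
fas = {"BEM": {1996: 26, 2001: 29}, "G-6": {1996: 22, 2001: 21}}

def shift(series):
    """Percentage-point change between fiscal years 1996 and 2001."""
    return series[2001] - series[1996]

fcs_shifts = {group: shift(s) for group, s in fcs.items()}
fas_shifts = {group: shift(s) for group, s in fas.items()}
# FCS shifted 5 points toward the BEMs and 6 points away from the G-6;
# FAS shifted 3 points toward the BEMs and 1 point away from the G-6.
```

On these rounded shares, Commerce's reallocation toward the targeted markets was roughly twice the size of Agriculture's.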
Other factors, such as foreign policy initiatives; the need to provide minimum coverage for a broad set of countries; and agency emphasis on pursuing exports in open, accessible foreign markets, have also affected the decisions that agencies make regarding resource allocations. In response to foreign policy initiatives, agencies have reallocated staff overseas and established offices, as illustrated by the following examples. In 1994, Congress directed the executive branch to develop an Africa trade and development policy, and in 2000 Congress enacted the African Growth and Opportunity Act (P.L. 106-200, title I). The act offers trade and other economic benefits to sub-Saharan countries that are committed to certain economic reforms. As a result, the Trade and Development Agency requested funding for a training initiative in Nigeria. Moreover, FCS increased its staffing level at its offices in sub-Saharan Africa. For example, FCS opened an office in Ghana in fiscal year 2000. In fiscal year 2002, FCS expects to increase staffing in Ghana and plans to open an office in Senegal. In 1999, TDA and OPIC established the Caspian Finance Center in Turkey. The center supports two national interests and offers opportunities for U.S. businesses. First, the development of the rich Caspian Sea energy reserves, estimated at 178 billion barrels or more of oil, would reduce U.S. reliance on more volatile sources of oil; and second, the transport of oil over a western route through Turkey would economically benefit this vital U.S. ally. One staff person from each of these agencies shares space with FCS in Ankara to help U.S. companies identify, evaluate, and finance commercially viable projects in the region. In 1999, the executive branch pledged to help stabilize and revitalize southeastern Europe by developing a strategy for trade and investment in the region.
To support the President’s regional initiative, OPIC and TDA, with TPCC support, have had full-time regional representatives in Zagreb since about March 2000, and Commerce has increased its staff there by one. The office serves as a local point of contact, information, and support for U.S. investors in the region. In addition, both FCS and FAS have had to spread their overseas staffing to cover a broad range of countries. During fiscal years 1998 through 2001, Commerce’s FCS opened 21 new offices overseas—many in the newly independent former Soviet states and African countries where export markets are in the early stages of development. During the same time period, Agriculture’s FAS opened four offices but closed five offices overseas, with an overall decline of nearly 7 percent in the total number of staff located overseas. (App. V contains the number of staff at FCS offices by country from fiscal year 1996 to fiscal year 2001. App. VI contains the number of staff at FAS offices by country during the same fiscal years.) Finally, to meet U.S. exporters’ market preferences and increase exports, FCS has maintained relatively high staff levels in more mature markets that have open, accessible, and regularized trading relationships, compared to the often more difficult TPCC-targeted developing country markets, according to an FCS official. FCS decreased staff levels in three of the six G-6 markets between fiscal years 1996 and 2001, but 2001 staffing levels are higher in each of the G-6 countries than in six of the BEMs. For example, Argentina, Hong Kong, Poland, South Africa, South Korea, and Turkey each have fewer staff than Italy, the country with the lowest staff level among the G-6 countries. Moreover, staff levels have increased dramatically in the United Kingdom and Canada—where the number of FCS staff grew by 63 percent and 33 percent, respectively. 
The TPCC has improved interagency coordination in many areas, but as it recognized in its May 2002 national export strategy, it has not completed implementing several of its early initiatives to coordinate export promotion programs aimed at better delivery of federal export services. The TPCC has improved the delivery of export services by collocating export finance services and establishing a network to assist U.S. businesses in addressing barriers to trade. However, the TPCC did not complete its efforts to clarify and make more readily available the numerous resources available to exporters, in part because the TPCC did not consistently meet at the cabinet level to address these issues. In October 2001, the committee reconvened at the cabinet level to readdress these issues, and it is currently working to alleviate exporters' confusion over the export process by (1) instituting cross-agency staff training, (2) improving the dissemination of trade information, and (3) improving outreach to new-to-export businesses. Overall, we found that, for overseas export promotion activities in the countries we visited, FCS staff serve as focal points in coordinating other agency efforts. With interagency cooperation, the TPCC achieved some early successes in coordinating member agencies' export promotion activities, as the following examples show. The TPCC recommended the establishment of an "advocacy coordinating network" to develop a system of high-level government advocacy, in coordination with the private sector, for U.S. firms seeking contracts from other governments. Created in 1993, this advocacy center is a unit within Commerce and functions as a coordinated, interagency effort. The TPCC recommended that the agencies work together to create "one-stop shops" so that exporters could receive assistance from several agencies in one location. In 1994, the TPCC established a network of U.S. Export Assistance Centers that grew to 19 Centers by 1999.
The Centers are staffed by Commerce, the SBA and, in some cases, the Eximbank to provide centralized export assistance. The TPCC recommended that the Eximbank and SBA streamline their pre-export Working Capital programs to make them more customer focused and to take advantage of the agencies' comparative strengths. In 1994, the agencies began the process of sharing the coverage of their similar loan programs and established a network of private sector lenders to support small businesses. The TPCC recommended that the export promotion agencies create a country commercial plan that combined disparate TPCC agency documents into one coordinated country report on commercial activities. This led to the creation of "country commercial guides" for prospective exporters or investors to use. More recently, in 1998, the TPCC developed a coordinated response to the Asian financial crisis, in response to direction from executive branch officials. The TPCC has not completed its original efforts to streamline the numerous federal export services available to exporters, in part because the TPCC did not consistently meet at the cabinet level to address these issues. From June 1999 through October 2001, the TPCC did not meet at the cabinet level and, as a result, the TPCC was less active in coordinating agency efforts. During this period some staff-level working groups continued to address trade promotion issues and work on publishing the national export strategy, but they were not able to complete work implementing the earlier recommendations. Key issues that continue to need resolution include the following. The TPCC identified the need for cross-agency training so that agency staffs would be knowledgeable enough about the export promotion programs of the other agencies to explain them to potential exporters. In 1999, the TPCC requested but did not receive funding from OMB for such training at overseas posts.
At the five overseas posts that we visited, several of the staff that are responsible for providing U.S. firms with information on exporting said that they are not fully familiar with other agencies' programs. Most FCS domestic and overseas staff acknowledge the need for training to better understand the needs of exporters, and FCS is attempting to institute an exchange program to address this issue. However, cross-agency training has not been systematically conducted. The national export strategy for 2002, released in May 2002, renews the 1993 call to improve cross-training among TPCC agencies in order to provide better service for U.S. exporters. The TPCC has recognized that improvements were needed in the accuracy, acquisition, and dissemination of information available to exporters. To provide this information, 19 TPCC agencies created Internet Web sites that identify trade assistance programs and, in some cases, export leads. However, according to the TPCC survey and focus groups, businesses have found these sites to be too numerous, difficult, and time-consuming to navigate. (See app. III for descriptions of programs that provide trade leads.) Moreover, some of the overseas FCS staff told us that some U.S. firms get frustrated when directed to another agency for assistance. The TPCC's 2002 national export strategy addresses this difficulty, stating that a new TPCC task force is working to simplify and consolidate the various trade information Web sites. The TPCC was also unable to coordinate training programs for new-to-export firms. Our 2001 report noted that the TPCC was unaware of duplicative, new-to-export training programs that the U.S. Export Assistance Centers provided. One of these programs was a new initiative within the Department of Commerce that had not been specifically coordinated with the TPCC. In October 2001 the TPCC met and recognized the need to continue work on problems identified earlier, as well as to examine some new issues.
The TPCC conducted a survey of U.S. businesses and found that the export process was still confusing to potential exporters. To address these issues, the TPCC made recommendations in its 2002 strategy, several of which were similar to those made in the TPCC's earlier strategies. TPCC agencies generally coordinated their overseas export promotion activities through contacts with the FCS. In the countries we visited, FCS staff served as focal points to coordinate various agencies' day-to-day export activities. In addition to supporting Commerce programs, they worked in support of other U.S. trade agencies, such as TDA, the Eximbank, and OPIC, as well as visiting trade missions from various states and visitors from other U.S. agencies. Typical FCS assistance provided to U.S. government or business visitors included preparing country commercial briefings, researching market sectors, scheduling and attending appointments, arranging for transportation and translation services, and generally assisting in representing U.S. trade interests overseas. Overseas U.S. business representatives with whom we spoke cited numerous ways in which FCS and other embassy staff worked together to overcome the many foreign bureaucratic obstacles they encountered in trying to export. In the countries we visited, for example, FCS staff did the following: FCS staff in Poland coordinated eight visits by TDA officials, three visits by Department of Commerce officials, a visit by Eximbank officials, a trade association visit, two state delegation visits, and a presidential visit in 2001. FCS staff in Turkey coordinated the attendance of the President and the Secretaries of State and Energy at a security summit in fiscal year 2000. FCS staff in Turkey prepared background information for the administration on the impact of proposed Turkish policy on U.S.-developed energy projects and edited a paper on telecommunication issues for Commerce's Market Access and Compliance Division.
They also assisted TDA and its contractors in arranging meetings with high-level Turkish officials, providing them with information on potential projects. FCS staff in the Czech Republic arranged a trade event in Prague for the Governor of Pennsylvania that included 36 U.S. firms in 2001. While we found that the various agencies' overseas staffs would benefit from cross-agency training to understand various agencies' programs, we also found that these agencies collaborated on issues that affected exporters, such as market entry, regulation changes, and contract bidding. As members of the ambassadors' interagency country teams, commercial officers shared information about U.S. export activities and became aware of broader political and economic concerns affecting the export environment. In the countries that we visited, these teams met at least weekly. FCS staff, embassy economic officers, agricultural attachés, and embassy political/military officers were aware of each other's in-country activities and felt that they worked well together. The Department of Commerce, SBA, Eximbank, TDA, OPIC, and USAID have programs that assist small- and medium-sized enterprises (SMEs). According to the TPCC, SMEs may have limited resources to address the complex issues associated with exporting, and U.S. government agencies can help fill this information gap. These U.S. government agencies can provide market information, guarantee export loans, identify business opportunities, fund risk and credit insurance, and advocate on behalf of U.S. firms. Commerce and SBA focus on providing help to SMEs as their core business. Commerce data show that for the five countries we visited, SMEs represented almost 91 percent of the firms that foreign posts helped during fiscal year 2001. Commerce's Advocacy Center coordinates the actions of TPCC agencies to work on behalf of U.S. firms dealing with foreign governments, complex bidding rules, and regulatory regimes.
From November 1993 through fiscal year 2002, the Advocacy Center reported 685 successes, of which 173 (25 percent) involved SMEs. The Advocacy Center valued the contracts won by SMEs at $3.9 billion (about 3 percent of the total value of contracts won). SBA provides credit and capital assistance, procurement and government contracting help, and entrepreneurial development assistance to small business exporters. To promote small business exports, the SBA offers three export loan guarantee programs: the Export Working Capital Program, the International Trade Program, and the Export Express Program. According to SBA, in fiscal year 2001 it guaranteed 425 export loans worth an estimated $167 million, or about 1.8 percent of the total loan guarantees of $9.1 billion provided by the agency. The Eximbank provides SMEs with pre-export financing from commercial lenders through its Export Working Capital Program. According to the Eximbank, almost 18 percent of the value of its fiscal year 2001 loan authorizations (more than $1.6 billion) went to SMEs, and almost 80 percent of the loans it made in fiscal year 2001 benefited SMEs. Based on Eximbank data, the value of fiscal year 2001 Export Working Capital loans that benefited SMEs averaged about $1.5 million. The Eximbank also issued 1,723 export credit insurance policies to small businesses in fiscal year 2001. These represented 98 percent of Eximbank insurance policies and totaled more than $900 million. As for TDA, all consultant contracts for desk studies, definitional missions, and feasibility studies are with either small- or medium-sized enterprises. TDA reported that small- and medium-sized business participation in its programs for fiscal year 2000 amounted to 48 percent of total TDA obligations. Small businesses were also involved in 40 percent of OPIC's fiscal year 2000 programs. According to OPIC, it funded 16 projects involving SMEs in fiscal year 2000, totaling $265 million.
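The SME shares cited above are simple ratios of the reported dollar figures. A minimal sketch follows (figures in millions of dollars, rounded as reported in the text; the implied Eximbank total is an inference from those rounded figures, not a number reported here):

```python
# Figures (in millions of dollars) rounded as reported in the text.
sba_export_guarantees = 167     # SBA export loan guarantees, fiscal year 2001
sba_total_guarantees = 9_100    # all SBA loan guarantees, fiscal year 2001

def share(part, whole):
    """Percentage that `part` represents of `whole`."""
    return part / whole * 100

sba_share = share(sba_export_guarantees, sba_total_guarantees)
# about 1.8 percent, matching the figure cited above

exim_sme_value = 1_600          # Eximbank authorization value going to SMEs
# If SMEs received almost 18 percent of authorization value, the implied
# total is roughly $8.9 billion (an inference, not a reported figure).
implied_exim_total = exim_sme_value / 0.18
```

The same ratio check applies to the other agencies' shares (for example, 173 of 685 Advocacy Center successes is about 25 percent).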
USAID’s technology transfer programs—the Global Technology Network and the Eastern Europe Partnership for Environmentally Sustainable Economies (EcoLinks)—also have SME participation. Both programs provide trade leads for SMEs, and EcoLinks provides travel and project grants. (See app. III for a description of agencies' various export promotion programs.) Our review of the TPCC's national export strategies indicated that the strategies have not provided clear and consistent guidance for federal agencies' export promotion programs and that some of the problems identified remain to be fixed. The TPCC has not used its annual export strategies to identify specific agencies' goals and responsibilities or to examine how agencies' resources are aligned, and it is not clear whether export promotion resources are being used most productively. In its 2002 strategy, the recently energized TPCC identified several key areas for improved agency coordination, some of which address problems initially recognized in the TPCC's 1993 strategy, including the need for (1) cross-training agency personnel so they are knowledgeable about other agency programs, (2) improving exporters' access to timely and accurate trade information, and (3) expanding outreach and trade education for new-to-export firms. Without identification in the national export strategy of how these renewed initiatives are to be accomplished, it is not clear how the TPCC will overcome the problems experienced previously. To assist federal agencies in making the best use of federal export promotion resources and to assist U.S. exporters, we recommend that the Chairman of the TPCC ensure that its national export strategies consistently (1) identify agencies' specific goals within the strategies' broad priorities, (2) identify how agencies' resources are allocated in support of their specific goals, and (3) analyze progress made in addressing the recommendations in the committee's prior annual strategies.
We received written comments on our draft report from the TPCC Secretariat, which incorporated TPCC member agencies’ input (see app. VI). The TPCC agreed with the report’s call for the TPCC to provide clear and consistent strategic guidance from year to year, to identify agency- specific goals and responsibilities, and to report regularly on progress made toward achieving recommendations. The TPCC noted that it is committed to providing periodic reports to Congress on the implementation of its recommendations, including specific agency goals and associated responsibilities. It expects that the first of such progress reports will be sent in October to the Senate Banking Committee and the House International Relations Committee. The TPCC noted that its 2002 National Export Strategy was based on a survey of exporters and potential exporters and that regular direct input from such TPCC customers now provides continuity and consistency in the TPCC’s strategic approach. It stated that changes in strategic approach will now be made in response to the changing needs of exporters. We agree that exporter needs should help define the national export strategy. We also believe that the TPCC needs to provide leadership to the various federal agencies involved in export promotion so that the government strategy better clarifies goals, agency responsibilities, and associated resource allocations. The TPCC noted that our draft report was misleading in its reference to the TPCC’s not completing its implementation of earlier TPCC recommendations. It stated that it had made progress in establishing interagency services such as the Trade Information Center and the Export.gov Web site and that it did not expect its work ever to be completed. It also noted that training is another area where agencies are constantly striving to innovate and improve. We believe these areas are important, and we agree that continuous improvement is desirable. 
We commend the TPCC for its recently renewed efforts to implement earlier TPCC recommendations. The TPCC also noted that the report’s attention to the “big emerging markets” detracted from the report’s otherwise valid findings on consistency and follow-up. According to the TPCC, the world economy, as well as the economies in these countries, has changed markedly since the TPCC’s 1994 report. Our purpose in selecting the October 1997 National Export Strategy’s Central and East European regional strategy for closer review was to examine how the various export promotion agencies coordinated their efforts in implementing a TPCC strategy over a 5-year period—not to review the actual results of export activities. Many of the export agencies were represented in this region, including USAID. In addition, the region included two special offices designed to facilitate interagency coordination—the Caspian Finance Center in Turkey and the Southeastern Europe Initiative office in Croatia. The Department of Commerce also provided written technical comments, which we incorporated into the report as appropriate. As you requested, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Chairman of the Senate Committee on Small Business and Entrepreneurship and the Ranking Minority Member of the House Committee on Small Business and interested congressional committees. We are also sending copies to the Chairman of the TPCC. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Key contributors to this assignment were Virginia Hughes, Patricia Martin, Judith Knepper, Victoria Lin, Ernie E. Jackson, and Rona Mendelsohn. 
The Ranking Minority Member of the Senate Committee on Small Business, as well as the Chairman of the House Committee on Small Business, asked us to determine if the Trade Promotion Coordinating Committee’s (TPCC) strategies have helped to focus U.S. export promotion efforts. Specifically, we assessed (1) whether the committee’s strategy has established export priorities, assessed progress made toward achieving the strategy’s priorities, and proposed an alignment of federal resources in support of these priorities; and (2) whether the committee has made progress in coordinating the various agencies’ export promotion programs. In addition, we identified how the various agencies are including small- and medium-sized businesses in their export promotion programs. To determine if the committee had established export priorities and whether federal resources had been aligned in support of these priorities, we analyzed the TPCC’s and federal agencies’ responsibilities under the Export Enhancement Act of 1992 and the Export Enhancement Act of 1999. We also obtained and analyzed the TPCC’s national export strategies for 1993 through 2002 to see if, in an effort to increase exports, the committee targeted specific markets, identified agencies’ export goals, and reported on the progress made by agencies in implementing the committee’s strategy. We also spoke with TPCC member agency officials from the Departments of Agriculture, Commerce, and State; the U.S. Export-Import Bank; the Overseas Private Investment Corporation; the Small Business Administration; the U.S. Agency for International Development; and the U.S. Trade and Development Agency about the usefulness of the strategy in defining their programs. 
To determine if the TPCC member agencies aligned their resources to support the strategy, we obtained and analyzed, but did not verify, the budget and staffing data reported (1) in the national export strategies and (2) by the Department of Commerce’s International Trade Administration and the Department of Agriculture’s Foreign Agricultural Service, the two major agencies with overseas staffs. We obtained documentation from officials of these agencies in both domestic and overseas offices regarding the staffing process and resource allocations. We also obtained and reviewed interagency memorandums as well as the minutes of TPCC’s meetings and met with TPCC officials and member agency representatives. To identify other factors affecting agencies’ resource allocations, we obtained and reviewed documents on congressional or administration directives, analyzed their impact on overseas staffing, and spoke with agency officials responsible for staffing about the impact of these initiatives on resource allocations. In addition, we analyzed the Departments of Commerce and Agriculture’s staffing patterns in the mature Group of Six (G-6) markets and compared them with TPCC-targeted big emerging markets. To determine if the TPCC was evaluating member agencies’ progress in implementing the broad priorities identified in the national export strategies, we obtained and analyzed member agency Government Performance and Results Act submissions, analyzed the performance sections of the national export strategies for fiscal years 1993 and 2002, and discussed with TPCC and various member agency officials the reasons that the TPCC did not evaluate agency performance. 
To determine how the committee has coordinated the various agencies’ export promotion programs, we examined agency trade promotion events, obtained and analyzed TPCC interagency minutes and other correspondence, and discussed with TPCC officials and member agency representatives how issues raised within the committee were resolved. We conducted separate interviews on these matters with key officials from the Departments of Agriculture, Commerce, and State; the U.S. Export-Import Bank; the Overseas Private Investment Corporation; the Small Business Administration; the U.S. Agency for International Development; the U.S. Trade and Development Agency; and the U.S. Trade Representative in Washington, D.C. To determine how overseas agencies coordinated their efforts to implement the strategy, we selected five countries in Central and Eastern Europe, a region identified in the TPCC’s 1997 strategy as targeted for growth. The locations represent markets at various stages of maturity. We visited Warsaw, Poland, and Ankara and Istanbul, Turkey—two of the designated big emerging markets; Zagreb, Croatia—a country in the early stages of making a transition to a market economy; and Prague, Czech Republic, a country in a more advanced transitional phase. To contrast the U.S. support provided to exporters in a mature market, we also visited Berlin, Germany, a country with the world’s seventh largest industrial market economy and the United States’ third largest trading partner. We asked for copies of the strategy and discussed it with agency officials responsible for export promotion. We obtained documents from and interviewed officials representing the Departments of Agriculture, Commerce, Defense, and State; the Overseas Private Investment Corporation; the U.S. Agency for International Development; and the U.S. Trade and Development Agency. 
The documents we obtained included mission and commercial strategic plans, correspondence between agencies, quarterly reports, e-mails, and computerized trade lead data showing interagency interactions. To determine whether U.S. agencies are including small- and medium-sized businesses in their export promotion programs, we obtained documents from the Departments of Commerce, Defense, and State; the Small Business Administration; the Overseas Private Investment Corporation; the U.S. Agency for International Development; the U.S. Trade and Development Agency; and the Eximbank. The documents we obtained included agency annual reports, reports to Congress, agency performance plans, and data provided by these agencies outlining the nature and degree of small- and medium-sized enterprises’ participation in their programs. We also obtained documents identifying the use of databases and programs by small- and medium-sized businesses. We did not verify the data. We also obtained and discussed with key officials of the U.S. Agency for International Development reports on small business participation in its Global Technology Network and EcoLinks programs. Because we focused primarily on export promotion programs related to commercial exports, we did not examine in detail small business participation in Agriculture programs. In addition, we interviewed key agency officials representing Commerce, the Eximbank, the Overseas Private Investment Corporation, and the U.S. Trade and Development Agency at headquarters and at posts in the five countries in Central and Eastern Europe about the participation in agency programs by small- and medium-sized businesses. We performed our work from September 2001 through May 2002 in accordance with generally accepted government auditing standards. Each of the nine major trade promotion agencies offers specific services that, together with those of other agencies, provide the exporter with help throughout the export process. 
As shown in table 1, more than one agency may be active in providing these general types of export services. Federal agencies provide export training for potential new exporters; information on promising markets and export processes, as well as specific trade leads; opportunities to participate in trade events that match buyers and sellers; export finance and insurance for exports and investments in risky markets; and government-to-government advocacy on behalf of specific companies encountering trade barriers or bidding (as the sole U.S. bidder) on foreign government procurements. The Department of Commerce’s Foreign Commercial Service (FCS) offices serve as focal points for other export agencies operating overseas. Both Commerce and SBA assist potential exporters with export training classes and one-on-one export counseling. Commerce and SBA staff at the 19 U.S. Export Assistance Centers sponsor export training, often assisted by other agencies, such as the Export-Import Bank (Eximbank), as well as state and private sector trade organizations. The training introduces potential exporters to general information about foreign markets, assists in the development of sound market plans, explains possible funding sources, and sometimes provides opportunities to participate in agency-led trade missions. The Departments of Commerce and Agriculture, TDA, and USAID are the major providers of trade information and specific trade leads. Commerce provides industry analysis and policy support at headquarters, as well as support in foreign markets through FCS and its Trade Development unit. FCS staff routinely provide general research services, such as industry sector analyses and specific market insights, to customers at no charge. 
They also provide a range of fee-for-service products, including customized (flexible) market research, international company profiles, and Commerce’s Gold Key service, which matches qualified buyers with U.S. firms. Trade Development staff provide market reports, seminars, and regular one-on-one market counseling, and they manage the Trade Information Center’s 1-800-USATRADE hotline and Web-based information service. A small U.S. manufacturer of cooling and heating system parts used Commerce’s Gold Key service to meet with potential buyers in Poland. Following the initial sale, a U.S. Export Assistance Center continued to provide the firm additional trade leads generated through the market research produced by FCS staff in Poland. The firm went on to make additional sales in Poland. Exporters can also find a broad range of trade-related information at the federal government’s Export.gov Web site in addition to two Commerce Web sites. Subscription-based STAT-USA provides a broad range of information, including trade leads and market information, as well as access to the National Trade Data Bank. The annual fee for this service is $175, with quarterly subscriptions also available. Commerce’s BUYUSA program matches U.S. sellers with foreign buyers and provides U.S. firms the option of publishing on the site the firm’s electronic catalog of available products. The basic service costs $400, with the enhanced catalog service ranging between $1,075 and $2,000. Currently, about 3,400 U.S. and 19,000 foreign firms are registered at BUYUSA. According to one Commerce official, BUYUSA permits small businesses to gain low-cost access to prescreened foreign buyers. USAID funds two programs that seek to link U.S. small- and medium-sized exporters with business opportunities in USAID-specific sectors. The Global Technology Network (GTN) links U.S. 
agribusiness, environment and energy, health, and information technology firms with opportunities that support USAID development goals in Africa, Asia and the Near East, Central and Southeast Europe, and Latin America and the Caribbean. GTN automatically notifies (at no charge) registered U.S. firms of qualified business opportunities. These business opportunities include direct purchases, agent/distributor agreements, joint ventures, and franchise agreements. According to USAID, the GTN program generated 44 transactions totaling about $10 million in fiscal year 2001 and, for the first 6 months of fiscal year 2002, 38 transactions valued at $30.1 million. USAID’s Eastern European Partnership for Environmentally Sustainable Economies (EcoLinks) program addresses environmental issues in the regions of Europe and Eurasia, which are struggling to balance economic and environmental concerns. EcoLinks is a form of technical assistance that focuses on technology transfer by promoting partnerships between businesses, municipalities, and associations within the region and between the region and the United States. The program focuses on three interrelated sets of activities: (1) partnership grants, (2) technology transfer and investment, and (3) an information technology initiative. EcoLinks program grants fund (1) initial matchmaking meetings between prospective partners and (2) project grants that test the viability of potential environmental projects. According to USAID data, EcoLinks generated four deals totaling $0.4 million in fiscal year 2001. An example of an EcoLinks grant follows. An EcoLinks grant helped rebuild a Croatian meat processing wastewater treatment plant destroyed by war. A small U.S. water management firm assisted plant managers in restoring the facility. The new plant will reduce water consumption by 30 percent and has already reduced the amount of waste produced in processing and cut the plant’s operating costs by 20 percent. 
TDA also provides market information to U.S. exporters and investors. It provides grants for feasibility studies, whose contractors are primarily small- and medium-sized U.S. businesses. TDA also sponsors conferences that familiarize foreign decision makers with U.S. goods and services, build business relationships, and encourage and assist U.S. firms in exporting to developing and middle-income countries. To illustrate: A TDA contractor organized a “Building Infrastructure for Tourism Development” conference in May 2002 in Istanbul, Turkey. The conference focused on the Eurasian region and, according to the meeting subcontractor, drew approximately 300 people, including about 50 U.S. firms and 75 to 100 foreign firms. Like Commerce, the Department of Agriculture provides a full range of information and services to agricultural exporters, including market information, trade leads, and other help, such as a Web-based training module. Trade events bring buyers and sellers together or provide them with information; they include, but are not limited to, trade missions, trade fairs, catalog shows, reverse trade missions, and seminars. Two units in Commerce’s International Trade Administration—the U.S. and Foreign Commercial Service and Trade Development—share responsibility for coordinating trade events. Other federal agencies also organize trade events that focus on specific sectors. For example, Agriculture sponsors trade missions for agricultural products; Energy, the Environmental Protection Agency, and USAID have sponsored events related to the energy and environmental technology sectors; and the SBA organizes a few trade missions annually for small businesses. Trade events are also sponsored by other federal agencies, including the Eximbank, TDA, and the Departments of State and Transportation, as well as by states. 
For all of these other entities, Commerce provides essential support by doing market research and arranging foreign business community contacts for the associated trade events. Based on a cursory review of world trade events data maintained by Commerce, we found that Commerce sponsors the large majority of all trade events. Of the four Central and Eastern European countries in which we did fieldwork (Croatia, the Czech Republic, Poland, and Turkey), Poland had the most trade events (23) during fiscal years 1996 through 2000. In contrast, Germany, a more mature European market, was the destination for 164 trade events during the same time period. Major infrastructure projects require years of negotiation with foreign governments, are costly, and are risky because their returns depend on operating revenues that can be affected by economic or political turmoil. Three agencies, the Eximbank, OPIC, and TDA, work on separate aspects of trade finance in markets where commercial funding is not readily available. The Eximbank provides U.S. firms with financing and insurance for exports in markets where commercial financing is limited or unavailable due to risk. The Eximbank has an export working capital program that provides loans for, and guarantees lenders’ financing of, pre-export production of goods. It also provides project finance for exporters or project sponsors that need financing for exports to large foreign infrastructure projects, such as oil and gas refineries. Exporters’ goods must contain over 50 percent U.S. content. OPIC is a self-sustaining agency that provides loans and guarantees to investors in overseas developing markets. OPIC’s political risk insurance and loans help U.S. businesses of all sizes invest and compete in developing nations worldwide. 
Specifically, OPIC insures investments overseas against a broad range of political risks, finances businesses overseas through loans and loan guaranties, finances private investment funds that provide equity to businesses overseas, and advocates for the interests of the American business community overseas. TDA provides planning assistance for foreign development projects that might offer sales opportunities for U.S. exporters. TDA’s primary tool for such projects, feasibility studies, evaluates the technical, legal, economic, environmental, and financial aspects of a potential project. In developing markets, for example, TDA approaches foreign governments or municipalities considering privatizing national assets such as energy plants or constructing an airport terminal and offers to have a small U.S. firm study the feasibility of the project or to provide technical assistance such as air controller training. If the foreign government or municipality agrees to the project, it may use U.S. firms or equipment. If the market poses repayment risks and commercial financing is scarce, OPIC may provide the U.S. firm with loans or insurance, while the Eximbank may provide loans or guarantees for the equipment used in the project. An example follows. In 1997, the U.S. government supported building the Baku-Ceyhan oil pipeline from Azerbaijan through Turkey to a Mediterranean port and on to lucrative markets in Western Europe. In 1998, TDA provided grant money to assist the Turkish government with the legal and financial negotiations of the deal. FCS staff in Turkey have continued to advocate for U.S. companies bidding on the Turkish portion of the project, and the Eximbank is now considering a guarantee covering U.S. goods and services for part of the project. Operating since November 1993, Commerce’s Advocacy Center assists U.S. firms when they encounter difficulty in winning foreign government procurements. The center coordinates the actions of the relevant U.S. 
agencies in a specific procurement. Top-level U.S. government officials work with their foreign counterparts to ensure a level playing field during all phases of the procurement process, and the Advocacy Center coordinates the timing of the actions, which may be official contacts via letters, telephone calls, or personal visits by one or more high-level U.S. officials. Examples of problems that the center addresses include foreign firms’ pursuit of contracts using assistance from their home governments to persuade foreign government officials to buy their equipment or services; unfair treatment by government decisionmakers, preventing firms from having a chance to compete; and bidding offers that may be tied up in bureaucratic red tape, resulting in lost opportunities and providing an unfair advantage to a competitor. In considering requests for assistance from a U.S. firm, the Advocacy Center confirms that the international transaction is in the national interest, that the U.S. content of the potential procurement is at least 50 percent, and that the firm is the only U.S. bidder. When more than one U.S. firm is bidding on the procurement, U.S. officials will advocate with the foreign government for U.S. participation but not for any one U.S. firm. Advocacy Center officials advocate for both large and small U.S. businesses, and a Commerce publication states that when large U.S. firms win a procurement bid, their U.S. suppliers—often small- and medium-sized businesses—also benefit. For example, the Advocacy Center has worked with the Boeing Corporation in its successful efforts to win contracts in Cyprus, Morocco, and South Africa. According to Commerce, Boeing has in excess of 500 suppliers covering all 50 states. According to Commerce, the center recommitted itself in July 2002 to expanding its support of small- and medium-sized businesses. 
The efforts of center managers with responsibility for these businesses will be coordinated by a Small Business Advocate and Advisor to the Director. The center has also launched a plan for extended outreach and will work to define the target market to which advocacy services can be realistically and effectively offered.

[Table: numbers of overseas offices and staff by agency, including the U.S. Agency for International Development, and total budget for export promotion activities (excluding OPIC and USAID programs); countries shown are the big emerging markets as defined in TPCC’s 1994 National Export Strategy.]

The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. 
Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily E-mail alert for newly released products” under the GAO Reports heading.

Ten years ago, to coordinate the activities of the various federal agencies involved in export promotion and to ensure better delivery of services to potential exporters, Congress established the interagency Trade Promotion Coordinating Committee (TPCC) under the Export Enhancement Act of 1992. Among other things, the act required the committee to develop a governmentwide strategic plan that (1) establishes priorities for federal activities in support of U.S. export activities and (2) proposes the annual, unified federal trade promotion budget that supports the plan. TPCC’s annual national export strategies identify broad priorities for promoting U.S. trade, but they do not discuss agencies’ specific goals or assess progress made. In its initial strategies, the committee identified 10 regionally dispersed priority markets for future trade promotion efforts, but it did not discuss agencies’ specific goals or report later on progress made in increasing exports to these markets. Shifting to a regional approach 3 years later in its 1997 strategy, the committee identified Central and Eastern Europe as a region where U.S. government assistance to U.S. exporters would be important in increasing market share. Again the strategy did not discuss agencies’ specific goals, nor did later strategies report on progress made or even cover consistent topics from year to year. Without regular assessments of progress, it is not clear whether export promotion resources are being used most productively in support of the strategy. 
Furthermore, the committee has limited ability to align agency resources with its strategy. The committee has made modest but inconsistent progress in coordinating federal agencies’ export promotion efforts. In its first national export strategy in 1993, the committee identified coordination weaknesses and recommended improvements, most of which required interagency consensus to implement. Although many of its initial recommendations were implemented during the committee’s first few years, some have not been implemented--for example, the need for improved agency staff training and improved trade information services to bring clarity to the export process, as well as the need for expanded outreach and trade education for new-to-export firms.
DOE is responsible for a diverse set of missions, including nuclear security, energy research, and environmental cleanup. These missions are managed by organizations within DOE and largely carried out by M&O contractors at various DOE sites. According to federal budget data, NNSA is one of the largest organizations in DOE, overseeing nuclear weapons, nuclear nonproliferation, and naval reactors missions at its sites. With an $11 billion budget in fiscal year 2012—nearly 40 percent of DOE’s total budget—NNSA is responsible for providing the United States with safe, secure, and reliable nuclear weapons in the absence of underground nuclear testing and maintaining core competencies in nuclear weapons science, technology, and engineering. Ensuring a safe and reliable nuclear weapons stockpile is an extraordinarily complicated task and requires state-of-the-art experimental and computing facilities, as well as the skills of top scientists in the field. To its credit, NNSA consistently accomplishes this task, as evidenced by the successful assessment of the safety, reliability, and performance of each weapon type in the nuclear stockpile since its creation. To support these capabilities into the future, in 2011, the administration announced plans to request $88 billion from Congress over the next decade to operate and modernize the nuclear security enterprise. As discussed earlier, work activities to support NNSA’s national security missions are largely carried out by M&O contractors. This arrangement has historical roots. Since the Manhattan Project produced the first atomic bomb during World War II, NNSA, DOE, and predecessor agencies have depended on the expertise of private firms, universities, and others to carry out research and development work and efficiently operate the facilities necessary for the nation’s nuclear defense. 
Currently, DOE spends 90 percent of its annual budget on M&O contracts, making it the largest non-Department of Defense contracting agency in the government. DOE generally regulates the safety of its own nuclear facilities and operations at its sites. In contrast, the Nuclear Regulatory Commission (NRC) generally regulates commercial nuclear facilities, and the Occupational Safety and Health Administration (OSHA) generally regulates worker safety at commercial industrial facilities. However, because of the dangerous nature of work conducted at many sites within the nuclear security enterprise—handling nuclear material such as plutonium, manufacturing high explosives, and various industrial operations that use hazardous chemicals—oversight of the nuclear security enterprise is multifaceted. First, DOE policy states that its M&O contractors are expected to develop and implement an assurance system, or system of management controls, that helps ensure the department’s program missions and activities are executed in an effective, efficient, and safe manner. Through these assurance systems, contractors are required to perform self-assessments as well as identify and correct negative performance trends. Second, NNSA site offices, which are collocated with NNSA sites, oversee the performance of M&O contractors. Site office oversight includes communicating performance expectations to the contractor, reviewing the contractor’s assurance system, and conducting contractor performance evaluations. Third, DOE’s Office of Health, Safety, and Security—especially its Office of Independent Oversight—conducts periodic appraisals to determine if NNSA officials and contractors are complying with safety and security requirements. Fourth, NNSA receives safety assessments and recommendations from other organizations, most prominently the Safety Board—an independent executive branch agency created by Congress to assess safety conditions and operations at DOE’s defense nuclear facilities. 
To address public health and safety issues, the Safety Board is authorized to make recommendations to the Secretary of Energy, who may then accept or reject, in whole or in part, the recommendations. If the Secretary of Energy accepts the recommendations, the Secretary must prepare an implementation plan. DOE and some of its contractors have viewed this multifaceted oversight to be overly burdensome. To address this issue, in March 2010 the Deputy Secretary of Energy announced a reform effort to revise DOE’s safety and security directives and modify the department’s oversight approach to “provide contractors with the flexibility to tailor and implement safety and security programs without excessive federal oversight or overly prescriptive departmental requirements.” In the memorandum announcing this effort, the Deputy Secretary noted that burdensome safety requirements were affecting the productivity of work at DOE’s sites and that reducing this burden on contractors would lead to measurable productivity improvement. The Deputy Secretary noted that DOE’s Office of Health, Safety and Security in 2009 had begun reforming its approach to enforcement and oversight. Similar to, but independent of DOE’s safety and security reform effort, in February 2011, NNSA initiated its “governance transformation” project, which involved revising the agency’s business model to, among other things, place more reliance on contractors’ self-oversight through its contractor assurance system to ensure such things as effective safety and security performance. NNSA’s non-nuclear Kansas City Plant completed implementation of this new business model, and other NNSA sites—such as the Nevada National Security Site and the Y-12 National Security Complex—were in the process of implementing it, too, when the Y-12 security incident occurred. 
In response to the Y-12 security breach, NNSA, the DOE Office of Inspector General, and the DOE Office of Independent Oversight performed multiple investigations and reviews of the incident. These reviews identified numerous problems with NNSA's and its contractors' performance, including: physical security systems, such as alarms; protective force (i.e., NNSA's heavily armed contractor guard forces) training and response; failures to correct numerous known problems; and weaknesses in contract and resource management. In addition, at the request of the Secretary of Energy, an independent panel, composed of three former executives from federal agencies and the private sector, and an NNSA Security Task Force found broader and systemic security issues across the nuclear security enterprise. In December 2012, the Secretary's panel analyzed various models for providing security at DOE and NNSA sites but generally found that improvements to the security culture, management, and oversight were necessary, in addition to having an effective organizational structure. In addition, the leader of the NNSA Security Task Force testified before the House Armed Services Committee in February 2013 about significant deficiencies in NNSA's entire security organization, oversight, and culture. In response to the Y-12 security incident and these findings, DOE and NNSA took a number of immediate actions, including repairing security equipment, reassigning key security personnel, and firing the Y-12 protective force contractor. In February 2013, the Acting NNSA Administrator committed to implementing a three-tiered oversight process involving contractor self-assessment, NNSA evaluation of site performance, and independent oversight by DOE's Office of Independent Oversight. The Acting Administrator testified before the House Armed Services Committee that she believed that such actions would help instill a culture that embraces security as an essential element of NNSA's missions. 
In assessing DOE’s actions to address the security breakdowns at Y-12, a central question will be whether these latest actions taken will produce sustained improvements in security at Y-12 and across the nuclear security enterprise. As we and others have reported, DOE has a long history of security breakdowns and an equally long history of instituting responses and remedies to “fix” these problems. For example, in examining the Y-12 security incident, NNSA’s former Acting Chief of Defense Nuclear Security and the leader of the NNSA’s Security Task Force testified in February 2013 about problems with NNSA’s federal security organization including poorly defined roles and responsibilities for its headquarters and field security organizations, inadequate oversight and assessments of site security activities, and issues with overseeing contractor actions and implementing improvements. As noted in table 1, 10 years ago we reported on very similar problems, and since that time DOE has undertaken numerous security initiatives to address them. We have not evaluated these recent initiatives but we have ongoing work to evaluate them as part of our review on security reform for the Subcommittee, which we will complete later this year. It is also important to note that NNSA’s long-standing security problems are not limited to Y-12. DOE’s and NNSA’s work with nuclear materials such as plutonium and highly enriched uranium, nuclear weapons and their components, and large amounts of classified data require extremely high security, however, as we and DOE have reported, NNSA and DOE have a long history of poor security performance across the nuclear security enterprise, most notably at Los Alamos and Livermore national laboratories, as well as ongoing struggles to sustain security improvements, including information security. 
As we noted in our September 2012 testimony, Los Alamos National Laboratory (Los Alamos) experienced a number of high-profile security incidents in the previous decade that were subject to congressional hearings, including some held by this Subcommittee. Many of these incidents focused on Los Alamos's inability to account for and control its classified resources. These incidents included the transfer or removal of classified information from authorized work areas or the laboratory itself, the temporary loss of two hard drives containing nuclear weapon design information, and difficulties in accounting for classified removable electronic media. In addition to these well-publicized incidents, security evaluations through 2007 identified other persistent, systemic security problems at Los Alamos. These problems included weaknesses in controlling and protecting classified resources, inadequate controls over special nuclear material, inadequate self-assessment activities, and weaknesses in the process that Los Alamos uses to ensure it corrects identified security deficiencies. Partly as a result of these findings, as we reported in 2008, Los Alamos underwent a 10-month shutdown of operations in 2004 and experienced a change in contractors in 2005. Moreover, the Secretary of Energy issued a compliance order in 2007 requiring Los Alamos to implement specific corrective actions to, among other things, address long-standing deficiencies in its classified information programs. We reported in January 2008 and testified before this Subcommittee in September 2008 that Los Alamos had experienced a period of improved security performance but that it was too early to determine whether NNSA and Los Alamos could sustain this level of improvement. 
In March 2009, we reported on numerous and wide-ranging security deficiencies at Lawrence Livermore National Laboratory (Livermore), particularly in the ability of Livermore's protective forces to ensure the protection of special nuclear material and the laboratory's protection and control of classified matter. We also identified Livermore's physical security systems, such as alarms and sensors, and its security program planning and assurance activities, as areas needing improvement. Weaknesses in Livermore's contractor self-assessment program and the Livermore Site Office's oversight of the contractor contributed to these security deficiencies at the laboratory. According to one DOE Office of Independent Oversight official, both programs were "broken" and missed even the "low-hanging fruit." The laboratory took corrective action to address these deficiencies, but we noted that better oversight was needed to ensure that security improvements were fully implemented and sustained. In September 2012, NNSA and Livermore completed efforts to move the site's most sensitive nuclear material to other sites, thereby easing the site's security requirements. We also have reported extensively on NNSA's challenges in maintaining effective information security, particularly at Los Alamos. For example, in June 2008, we reported that significant information security problems at Los Alamos had received insufficient attention. The laboratory had over two dozen initiatives under way that were principally aimed at reducing, consolidating, and better protecting classified resources. However, the laboratory had not implemented complete security solutions to address either the problems of classified parts storage in unapproved storage containers or weaknesses in its process for ensuring that actions taken to correct security deficiencies were completed. 
In addition, in October 2009 we reported that Los Alamos needed to better protect its classified network. Specifically, we found that significant weaknesses remained in protecting the confidentiality, integrity, and availability of information stored on and transmitted over its classified computer network. Moreover, we found that the laboratory's decentralized approach to information security program management had led to inconsistent implementation of policy.

DOE and NNSA have experienced significant safety problems at their sites, and recent efforts to reform safety protocols and processes have not demonstrated sustained improvements. As we testified in September 2012 before this Subcommittee, long-standing DOE and NNSA management weaknesses have contributed to persistent safety problems at NNSA's national laboratories. For example, in October 2007, we reported that nearly 60 serious accidents or near misses had occurred at NNSA's national laboratories since 2000. These accidents included worker exposure to radiation, inhalation of toxic vapors, and electrical shocks. Although no one was killed, many of these accidents caused serious harm to workers or damage to facilities. As we also reported, at Los Alamos in July 2004, an undergraduate student who was not wearing required eye protection was partially blinded in a laser accident. Our review of nearly 100 safety studies—including accident investigations and independent assessments by the Safety Board and others issued since 2000—found that the contributing factors to these safety problems generally fell into three key categories: (1) relatively lax laboratory attitudes toward safety procedures, (2) laboratory inadequacies in identifying and addressing safety problems with appropriate corrective actions, and (3) inadequate oversight by NNSA site offices. DOE's Office of Inspector General has also raised concerns about safety oversight by NNSA's site offices. 
Specifically, the Inspector General reported in June 2011 that NNSA's Livermore Site Office was not sufficiently overseeing its contractor to ensure that corrective actions were fully and effectively implemented for a program designed to limit worker exposure to beryllium, a hazardous metal essential for nuclear operations. DOE has undertaken a number of reforms to address persistent safety concerns. In March 2010, the Deputy Secretary of Energy announced a reform effort to revise DOE's safety and security directives. The reform effort was aimed at modifying the department's oversight approach to "provide contractors with the flexibility to tailor and implement safety and security programs without excessive federal oversight or overly prescriptive departmental requirements." As we reported to this Subcommittee in April 2012, this reform effort reduced the number of safety-related directives from 80 to 42 by eliminating or combining requirements the department determined were unclear, duplicative, or too prescriptive and by encouraging the use of industry standards. However, as we noted in September 2012 before this Subcommittee, DOE's safety reforms did not fully address safety concerns that we, as well as others, have identified in the areas of quality assurance, safety culture, and federal oversight and, in fact, these reforms may have actually weakened independent oversight. We stated, for example, that while DOE policy notes that independent oversight is integral to help ensure the effectiveness of safety performance, DOE's Office of Independent Oversight staff must now coordinate their assessment activities with NNSA site office management to maximize the use of resources. This arrangement raised our concern about whether Office of Independent Oversight staff would be sufficiently independent from site office management. 
In our April 2012 report, we recommended, among other things, that DOE develop a detailed reform plan and clearly define the oversight roles and responsibilities of DOE's Office of Independent Oversight staff to ensure that their work is sufficiently independent from the activities of DOE site office and contractor staff. DOE has taken steps to respond to these recommendations, including developing a plan aimed at improving safety management and drafting a memo from the Secretary of Energy reconfirming the department's commitment to independent oversight of safety and security. A November 2012 report by DOE's Office of Independent Oversight raised concerns about safety culture issues at NNSA's Pantex Plant. Among the concerns were reluctance by workers to raise safety problems for fear of retaliation and a perception that cost took priority over safety. At an October 2012 public hearing in Knoxville, Tennessee, the Safety Board noted that safety controls to prevent or mitigate consequences from accidents had not been fully incorporated into the design of a new uranium processing facility at Y-12. The Safety Board noted the facility's safety basis—a technical analysis that identifies potential accidents and hazards associated with a facility's operations and outlines controls to mitigate or prevent their impact on workers and the public—did not adequately address controls to protect workers or the public in the case of an earthquake or small fires, and did not adequately calculate reasonably conservative radiation exposure consequences, calculations that could have led to building greater safety into the facility's design. The Safety Board further noted that these deficiencies raise the potential for significant impacts on public and worker safety. 
A January 2013 Office of Independent Oversight report reviewing the Los Alamos Site Office's assessment of the contractor's corrective action system found that the contractor had not implemented effective corrective actions for identified safety system problems. This report noted that the site office concluded that more than half of the 62 safety system items needing corrective action had been closed without adequate action or sufficient documentation. Moreover, in October 2012, NNSA issued a Preliminary Notice of Violation to a Los Alamos contractor for repeated electrical safety problems. NNSA's notice stated that insufficient oversight of subcontractor work by the contractor's safety staff was among the contributing factors. NNSA fined the contractor $262,500.

A basic tenet of effective management is the ability to complete projects on time and within budget. DOE has taken a number of actions to improve management of projects, including those overseen by NNSA. For example, DOE has updated project and contract management policies and guidance in an effort to improve the reliability of project cost estimates, better assess project risks, and better ensure project reviews that are timely and useful and identify problems early. In addition, in December 2010, the Deputy Secretary of Energy convened a DOE Contract and Project Management Summit to discuss strategies for additional improvement in contract and project management. The participants identified barriers to improved performance and reported in April 2012 on the status of initiatives to address these barriers. DOE has continued to release guides for implementing its revised order for Program and Project Management for the Acquisition of Capital Assets (DOE O 413.3B), such as for cost estimating, using earned value management, and forming project teams. 
Further, DOE has taken steps to enhance project management and oversight by requiring peer reviews and independent cost estimates for projects with values of more than $100 million and by improving the accuracy and consistency of data in its central project repository. DOE has made progress in managing nonmajor projects—those costing less than $750 million—and in recognition of this progress, we narrowed the focus of our high-risk designation to major contracts and projects. Specifically, as we noted in our October 2012 report on cleanup projects of DOE's Office of Environmental Management (EM) funded by the American Recovery and Reinvestment Act, at the time of our analysis, 78 of 112 projects had been completed. Of those completed projects, 92 percent met the performance standard of completing project work scope without exceeding the cost target by more than 10 percent, according to EM data. However, we made four recommendations to DOE in this report aimed at improving how EM manages and documents projects, particularly with respect to establishing key performance parameters such as project scope targets and baselines for cost and schedule. DOE concurred with all of our recommendations, recognizing that improvements could be made and that lessons learned from these projects can be applied to EM's broader portfolio of projects and activities. In addition, in December 2012, we reported that EM and NNSA were making some progress in managing the 71 nonmajor construction and cleanup projects that we reviewed, which are expected to cost an estimated $10.1 billion in total. For example, we identified some NNSA and EM nonmajor projects that used sound project management practices, such as the application of effective acquisition strategies, to help ensure the successful completion of these projects. 
We also recommended that NNSA and EM clearly define, document, and track the scope, cost, and completion date targets for each of their nonmajor projects and that EM clearly identify critical occupations and skills in its workforce plans. NNSA and EM agreed with these recommendations. In March 2012, we reported that NNSA's now-deferred project to construct a new plutonium research facility at Los Alamos had experienced significant cost increases (GAO, Modernizing the Nuclear Security Enterprise: New Plutonium Research Facility at Los Alamos May Not Meet All Mission Needs, GAO-12-337 (Washington, D.C.: Mar. 26, 2012)). Most recently, in September 2011, NNSA had estimated that the facility would cost from $4.2 billion to $6.5 billion to construct—a nearly seven-fold cost increase from the original estimate. In April 2010, we reported that weak management by DOE and NNSA had allowed the cost, schedule, and scope of ignition-related activities at the National Ignition Facility to increase substantially. We reported that, since 2005, ignition-related costs had increased by around 25 percent—from $1.6 billion in 2005 to over $2 billion in 2010—and that the planned completion date for these activities had slipped from the end of fiscal year 2011 to the end of fiscal year 2012 or beyond. Ten years earlier, in August 2000, we had reported that poor management and oversight of the National Ignition Facility construction project at Lawrence Livermore National Laboratory had increased the facility's cost by $1 billion and delayed its scheduled completion date by 6 years. In March 2010, we reported that NNSA's Mixed-Oxide Fuel Fabrication Facility currently being constructed at DOE's Savannah River Site in South Carolina had experienced delays, but project officials said that they expected to recover from these delays by the end of 2010 and planned for the start of operations on schedule in 2016. 
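The cost-growth figures quoted above reduce to simple arithmetic. The sketch below is illustrative, not GAO's methodology; the only numbers taken from the text are the National Ignition Facility's ignition-related costs of $1.6 billion (2005) and roughly $2.0 billion (2010).

```python
# Helpers to sanity-check cost-growth figures like those quoted in the text.

def percent_increase(old: float, new: float) -> float:
    """Percentage growth from an old cost estimate to a new one."""
    return (new - old) / old * 100.0

def fold_increase(old: float, new: float) -> float:
    """How many times larger the new estimate is than the old one."""
    return new / old

nif_2005, nif_2010 = 1.6, 2.0  # NIF ignition-related costs, billions of dollars
print(f"NIF ignition cost growth: {percent_increase(nif_2005, nif_2010):.0f}%")
print(f"Fold increase: {fold_increase(nif_2005, nif_2010):.2f}x")
```

On these numbers, the growth works out to 25 percent (a 1.25-fold increase), consistent with the "around 25 percent" reported in the testimony.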
In addition, after spending about $730 million on design, NNSA has cancelled the pit disassembly and conversion facility and is now planning to use existing facilities at DOE's Savannah River and Los Alamos sites and to add equipment to the mixed-oxide facility. NNSA is working on a cost and schedule estimate for the use of these existing facilities and for adding the additional equipment. We have also issued several reports on the technical issues, cost increases, and schedule delays associated with NNSA's efforts to extend, through refurbishment, the operational lives of nuclear weapons in the stockpile. For example, in March 2009, we reported that NNSA and the Department of Defense had not effectively managed cost, schedule, and technical risks for the B61 nuclear bomb and the W76 nuclear warhead refurbishments. For the B61 life extension program, NNSA was only able to stay on schedule by significantly reducing the number of weapons undergoing refurbishment and abandoning some refurbishment objectives. Earlier, in December 2000, we similarly had reported that refurbishment of the W87 strategic warhead had experienced significant design and production problems that increased its costs by over $300 million and caused schedule delays of about 2 years. In conclusion, the actions that DOE and NNSA have taken to address weaknesses in oversight of security, safety, and contract and project management are very important, but problems persist. While we have noted progress in the area of project management, we also observe that NNSA and DOE EM have not begun a new major project since taking these actions. The Y-12 security incident was an unprecedented event for the nuclear security enterprise and perhaps indicates that NNSA's organizational culture, over a decade after the agency was created to address security issues, still has not embraced security as an essential element of its missions. 
In terms of safety, DOE has recently taken the initiative to examine the safety culture at its sites. We believe, as do other organizations, including the DOE Inspector General and Safety Board, that a "hands off, eyes on" oversight approach for security, safety, and contract and project management is insufficient and unwarranted until the department can demonstrate sustained improvement in all three areas. We will continue to monitor DOE's and NNSA's implementation of actions to resolve their safety, security, and contract and project management difficulties and to assess the impact of these actions. Chairman Murphy, Ranking Member DeGette, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions you may have at this time. If you or your staff have any questions about this testimony, please contact me at (202) 512-3841 or trimbled@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Allison Bawden, Jonathan Gill, and Kiki Theodoropoulos, Assistant Directors; and Nancy Kintner-Meyer, Michelle Munn, and Jeff Rueckhaus, Senior Analysts. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

DOE and NNSA are responsible for managing nuclear weapon- and nonproliferation-related national security activities in national laboratories and other sites and facilities, collectively known as the nuclear security enterprise. Major portions of NNSA's mission are largely carried out by contractors at each site. 
GAO has designated contract management of major projects (i.e., those $750 million or more) at DOE and NNSA as a high-risk area. Progress has been made, but GAO continues to identify security and safety problems at DOE and NNSA sites as well as project and contract management problems related to cost and schedule overruns on major projects. This testimony addresses DOE's and NNSA's oversight of (1) security performance, (2) safety performance, and (3) project and contract management in the nuclear security enterprise. It is based on prior GAO reports issued from August 2000 to December 2012. DOE and NNSA continue to act on the numerous recommendations GAO has made to improve management of the nuclear security enterprise. GAO will continue to monitor DOE's and NNSA's implementation of these recommendations. The Department of Energy (DOE) and the National Nuclear Security Administration (NNSA), a separately organized agency within DOE, continue to face challenges in ensuring that oversight of security activities is effective. For example, in July 2012, after three trespassers gained access to the protected security area directly adjacent to one of the nation's most critically important nuclear weapon-related facilities, the Y-12 National Security Complex, DOE and NNSA took a number of immediate actions. These actions included repairing security equipment, reassigning key security personnel, and firing the Y-12 protective force contractor. As GAO and others have reported, DOE has a long history of security breakdowns and an equally long history of instituting remedies to fix these problems. For example, 10 years ago, GAO reported on inconsistencies among NNSA sites in how they assess contractors' security activities and, since that time, DOE has undertaken security initiatives to address these issues. GAO is currently evaluating these security reform initiatives. 
DOE and NNSA continue to face challenges in ensuring that oversight of safety performance activities is effective. DOE and NNSA have experienced significant safety problems at their sites, and recent efforts to reform safety protocols and processes have not demonstrated sustained improvements. Long-standing DOE and NNSA management weaknesses have contributed to persistent safety problems at NNSA's national laboratories. For example, in October 2007, GAO reported that nearly 60 serious accidents or near misses had occurred at NNSA's national laboratories since 2000. DOE has undertaken a number of reforms to address persistent safety concerns. For example, in March 2010, the Deputy Secretary of Energy announced a reform effort to revise DOE's safety and security directives. However, GAO reported in September 2012 that DOE's safety reforms did not fully address continuing safety concerns that GAO and others identified in the areas of quality assurance, safety culture, and federal oversight and, in fact, may have actually weakened independent oversight. DOE and NNSA have made progress but need to make further improvements to their contract and project management efforts. DOE has made progress in managing nonmajor projects--those costing less than $750 million--and in recognition of this progress, GAO narrowed the focus of its high-risk designation of DOE's Office of Environmental Management (EM) and NNSA to major contracts and projects. Specifically, as GAO noted in its December 2012 report on 71 DOE EM and NNSA nonmajor projects, GAO found the use of some sound management practices that were helping ensure successful project completion. However, major projects continue to pose a challenge for DOE and NNSA. 
For example, in December 2012, GAO reported that the estimated cost to construct the Waste Treatment and Immobilization Plant in Washington State had tripled to $12.3 billion since its inception in 2000, and the scheduled completion date had slipped by nearly a decade to 2019. Also, in March 2012, GAO reported that a now-deferred NNSA project to construct a new plutonium facility in Los Alamos, New Mexico, could cost as much as $5.8 billion, a nearly six-fold cost increase.
ATSA applied the personnel management system of the Federal Aviation Administration (FAA) to TSA employees, and further authorized TSA to make any modifications to the system it considered necessary. Therefore, similar to FAA, TSA is exempt from many of the requirements imposed and enforced by OPM—the agency responsible for establishing human capital policies and regulations for the federal government—and, thus, has more flexibility in managing its executive workforce than many other federal agencies. For example, compared to agencies operating under OPM's regulations, TSA is not limited in the number of permanent TSES appointments and limited term TSES appointments it may make and the types of positions limited term TSES appointments may be used for. Also, TSA has more discretion in granting recruitment, relocation, or retention incentives to TSES staff than other agencies have for SES staff (see table 1). One benefit available to career-appointed SES in OPM-regulated agencies is that once they are accepted into the SES of their agency, they can apply for and obtain SES positions in other OPM-regulated executive branch agencies without undergoing the merit staffing process. DHS and OPM signed an agreement in February 2004 which also allows career-appointed TSES staff the benefit of applying to SES positions without being subject to the merit staffing process. Under the provisions of the agreement, TSA must ensure that all TSES staff selected for their first career TSES appointment (1) are hired using a process that encompasses merit staffing principles and (2) undergo the ECQ-evaluation process. 
Consistent with OPM regulations, a hiring process that encompasses federal merit staffing requirements should include: public notice of position availability, identification of all minimally eligible candidates, identification of position qualifications, rating and ranking of all eligible candidates using position qualifications, determination of the best qualified candidates (a "best qualified list"), selection of a candidate for the position from among those best qualified, and certification of a candidate's executive and technical qualifications.

TSES Positions within TSA

TSA has consistently employed more senior executives than any other DHS component agency; however, as shown in table 2, from fiscal years 2005 through 2008, TSA went from being one of the DHS components with the highest numbers of executive staff per nonexecutive staff, to one of the components with the fewest executive staff per nonexecutive staff. Specifically, out of eight DHS components, TSA had the third highest number of executives per nonexecutive staff in 2005; however, by fiscal year 2008, TSA had the third lowest number of executives per nonexecutive staff. Compared with DHS overall, TSA had the same number of executives per nonexecutive staff as DHS in 2005, but over the 4-year period, TSA's ratio of executive to nonexecutive staff declined, while that of DHS increased. Moreover, the number of TSA executive staff per nonexecutive staff was consistently lower than that of all cabinet-level departments for fiscal years 2005 through 2008 (see table 2). TSA has employed approximately equal numbers of TSES staff in both headquarters and in the field, where its operational mission of securing the nation's transportation system is carried out (see table 3). 
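The component comparison above reduces to a simple ratio of executive to nonexecutive staff. A minimal sketch of that calculation follows; the component names are real but the headcounts are invented for illustration (table 2 of the report holds the actual figures):

```python
# Hypothetical headcounts -- NOT the actual table 2 figures.
components = {
    "TSA":  {"exec": 100, "nonexec": 50_000},
    "CBP":  {"exec": 120, "nonexec": 40_000},
    "FEMA": {"exec": 60,  "nonexec": 10_000},
}

def execs_per_1000(counts: dict) -> float:
    """Executives per 1,000 nonexecutive staff."""
    return counts["exec"] / counts["nonexec"] * 1000

# Rank components from the most to the fewest executives per nonexecutive staff.
ranked = sorted(components, key=lambda c: execs_per_1000(components[c]), reverse=True)
for name in ranked:
    print(f"{name}: {execs_per_1000(components[name]):.1f} executives per 1,000 staff")
```

Normalizing by nonexecutive headcount is what lets the report compare a large component like TSA against much smaller ones on equal footing.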
TSES positions in the field include federal security directors (FSDs), who are responsible for implementing and overseeing security operations, including passenger and baggage screening, at TSA-regulated airports; area directors, who supervise and provide support and coordination of federal security directors in the field; special agents in charge, who are part of the Federal Air Marshal Service and are generally located at airports to carry out investigative activities; and senior field executives, who work with FSDs and other federal, state, and local officials to manage operational requirements across transportation modes. Headquarters executive positions generally include officials responsible for managing TSA divisions dedicated to internal agency operations, such as the Office of Human Capital or the Office of Legislative Affairs, and external agency operations, such as the Office of Security Operations and the Office of Global Strategies. TSES attrition for fiscal years 2004 through 2008 was at its highest (20 percent) in fiscal year 2005, due to a surge in resignations for that fiscal year. The rate of attrition among TSES staff for fiscal years 2004 through 2008 was consistently lower than the rate of attrition among all DHS SES, but, until 2008, higher than the SES attrition rate for all other cabinet-level departments. TSA human capital officials acknowledge that attrition among TSES staff has been high in the past—which they attribute to the frequent turnover in administrators the agency experienced from its formation in fiscal year 2002 through mid-2005—and noted that since TSA has had more stable leadership, attrition has declined. Central Personnel Data File (CPDF) data for fiscal years 2004 through 2008 show that attrition among TSES staff rose from fiscal year 2004 to fiscal year 2005—peaking at 20 percent in fiscal year 2005—and has declined each year thereafter, measuring 10 percent in 2008. 
Attrition includes separations due to resignations, retirements, expiration of a limited term appointment, terminations, or transfers to another cabinet-level department. The rate of attrition among TSES headquarters staff was generally more than double that of TSES staff in the field. Specifically, in fiscal years 2004, 2005, 2006, and 2008, TSES attrition in headquarters was 26, 28, 28, and 14 percent respectively, compared to TSES attrition in the field, which was 8, 13, 10, and 6 percent respectively (see fig. 1). With regard to the manner in which TSES separated (through resignation, retirement, expiration of a limited term appointment, termination, or transfer to another cabinet-level department), our analysis of CPDF data shows that resignations were the most frequent type of TSES separation, accounting for almost half of total separations over the 5-year period and about two thirds of all separations during fiscal years 2005 and 2006 (see table 4). Also, over the 5-year period, transfers and retirements tied for the second-most frequent type of TSES separation, while expiration of a limited term appointment and “other” were the least common separation types for TSES. TSA human capital officials acknowledged that attrition among TSES staff has been high at certain points in TSA’s history. They noted that frequent turnover in administrators since TSA’s creation in 2002 through mid-2005 was the likely catalyst for much TSES attrition, and that once Administrator Hawley, who served the longest term of any TSA Administrator, was appointed, attrition among TSES staff declined. As shown in figure 2, the rate of attrition among TSES staff for fiscal years 2004 through 2008 was consistently lower than the rate of attrition among all DHS SES. 
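The attrition rates discussed here are, in essence, separations divided by headcount, broken out by separation type and duty location. The sketch below mirrors that calculation on invented records; it is illustrative only, not the actual CPDF analysis:

```python
from collections import Counter

# Invented separation records -- illustrative only, not CPDF data.
separations = [
    {"type": "resignation", "location": "headquarters"},
    {"type": "resignation", "location": "field"},
    {"type": "retirement",  "location": "headquarters"},
    {"type": "transfer",    "location": "field"},
    {"type": "resignation", "location": "headquarters"},
]
headcount = {"headquarters": 25, "field": 50}  # TSES on board, also invented

# Overall attrition rate: total separations / total TSES staff.
total_staff = sum(headcount.values())
overall_rate = len(separations) / total_staff * 100

# Rates by duty location, mirroring the headquarters-vs-field comparison.
by_location = Counter(s["location"] for s in separations)
location_rates = {loc: by_location[loc] / headcount[loc] * 100 for loc in headcount}

# Share of separations by type (resignations dominated in FY2005-2006).
by_type = Counter(s["type"] for s in separations)
print(f"Overall attrition: {overall_rate:.1f}%")
print("By location:", location_rates)
print("By type:", dict(by_type))
```

With these made-up records, headquarters attrition comes out well above the field rate, echoing the pattern in figure 1, where headquarters attrition was generally more than double that of the field.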
On the other hand, from fiscal years 2004 through 2006, the TSES rate of attrition was higher than the overall SES attrition rate for all other cabinet-level departments, but in 2008, the rate was slightly lower than the rate for other cabinet-level departments. When comparing attrition rates by separation type, we found that TSA had higher rates of executive resignations than DHS in 2005 and 2006; in particular, the rate of TSES resignations in 2005 (13 percent) was almost twice that of DHS SES (7 percent). TSA also had consistently higher rates of executive resignations than other cabinet-level departments for fiscal years 2004 through 2008 (see fig. 3). TSA human capital officials reiterated that many of these resignations were likely influenced by frequent turnover among TSA administrators, and that it is natural to expect that some executive staff would choose to leave the agency after a change in top agency leadership. They also explained that TSA’s high number of resignations could, in part, reflect TSES staff who opted to resign in lieu of being subject to disciplinary action or having a termination on their permanent record. Regarding other separation types, TSA’s TSES had lower rates of retirements for fiscal years 2004 through 2008 than SES in DHS and all cabinet-level departments. However, rates of transfers among TSES were about the same as those among SES in DHS and cabinet-level departments. For the same time period, TSA’s attrition rate for TSES terminations and expiration of term appointments was 3 percent or less, whereas the rate for DHS and all other cabinet-level departments was 1 percent or less.

In interviews with 46 of 95 TSES who separated from TSA during fiscal years 2005 through 2008, most reported adverse reasons for leaving the agency—that is, a reason related to dissatisfaction with some aspect of their TSA experience, as opposed to a nonadverse reason, such as to spend more time with family or pursue another professional opportunity.
Perceptions regarding the impact of TSES separations on TSA operations varied among TSA staff who directly reported to separated TSES staff members, TSES supervisors, and stakeholder groups representing industries that collaborate with TSA on security initiatives. Some of these groups reported that TSES attrition had little or no impact on the agency’s ability to implement transportation security initiatives, while others identified negative effects on agency operations, such as a lack of program direction and uncertainty and stress among employees.

In addition to obtaining information on the manner by which TSES staff separated from the agency, such as through resignation or retirement, we also sought more detailed information on the factors that led staff members to separate. For example, for TSES staff members who left the agency through retirement, we sought information on any factors, beyond basic eligibility, that compelled them to leave the agency. According to TSA officials, one of the primary reasons for attrition among TSES has been the large number of TSES term appointees employed by the agency, who, by the very nature of their appointment, are expected to leave TSA, generally within 3 years. However, as shown earlier in table 4, only 4 TSES appointees separated from TSA due to the expiration of their appointments for fiscal years 2004 through 2008, and TSA reported hiring a total of 76 limited term appointees over this period. TSA human capital officials later explained that when the time period for a limited term appointment concludes, the reason for the staff member’s separation is recorded in his or her personnel file as a type of “termination.” For this reason, TSES on limited term appointments often leave the agency before their terms expire in order to avoid having “termination” on their personnel record, among other reasons.
To better understand the reasons for TSES separations, and the extent to which they may have been influenced by TSES limited term appointments, we requested TSA exit interview data that would provide more in-depth explanations as to why the former TSES staff members left the agency. Since TSA had documented exit interviews for only 5 of the 95 TSES staff members who separated from TSA during fiscal years 2005 through 2008, we interviewed 46 of these former TSES staff to better understand the reasons why they left the agency. As stated previously, because we selected these individuals based on a nonprobability sampling method, we cannot generalize about the reasons for all TSES separations during fiscal years 2005 through 2008. However, these interviews provided us with perspectives on why nearly half of these TSES staff left TSA.

Of the 46 former TSES staff members we interviewed, 33 cited more than one reason for leaving TSA. Collectively, the 46 interviewees gave between one and six reasons each for separating, with an average of two reasons per interviewee. Ten of 46 interviewees identified only nonadverse reasons for leaving TSA, 24 identified only adverse reasons, and 12 cited both adverse and nonadverse reasons. Nonadverse reasons were those not related to dissatisfaction with TSA, such as leaving the agency for another professional opportunity or to spend more time with family. Adverse reasons were those related to dissatisfaction with some aspect of the TSES staff member’s experience at TSA. As shown in table 5, we identified three categories of nonadverse reasons and nine categories of adverse reasons for why TSES staff left TSA. By discussing only the perspectives of former TSES, we may not be presenting complete information regarding the circumstances surrounding their separation from TSA.
However, because we agreed not to disclose to TSA the identities of the respondents we spoke with, we did not obtain TSA’s viewpoint on these separations; doing so would have risked revealing an interviewee’s identity.

Of the TSES staff we interviewed who reported leaving TSA for nonadverse reasons, 14 of the 46 reported leaving for another professional opportunity, such as a position in a security consulting firm. Seven of the 46 reported separating from TSA because of personal reasons, such as the desire to spend quality time with family, and 4 of the 46 TSES told us they separated from the agency because they were employed on re-employed annuitant waivers, which expired after 5 years.

Of the TSES staff we interviewed who reported leaving TSA for adverse reasons, 14 of the 46 cited dissatisfaction with the leadership style of top management as a reason they left the agency. These interviewees defined top leadership as the TSA Administrator or those reporting directly to him, such as Assistant Administrators. In addition to issues with management style, 10 of the 14 responses focused specifically on top leadership’s communication style and cited instances in which top management had not communicated with other TSES staff and, in some cases, with lower-level staff. For example, one former FSD reported that new policies and procedures were implemented by headquarters with little or no notice to the field. He explained that in some cases, he learned that headquarters had issued new policies or procedures when the media called to ask questions about them. Another TSES interviewee reported that communication occurred between the administrator and a core group, but all other staff received only “bits and pieces of information.” Other examples provided in this category were more general. For example, 3 interviewees reported they were compelled to leave the agency due to a specific TSA Administrator’s more hierarchical management style.
Thirteen of the 46 former TSES staff we interviewed stated that some of their colleagues lacked executive-level skills or were selected for positions based on personal relationships with administrators or other TSES staff. Specifically, 12 of the 13 interviewees in this category stated their colleagues lacked the necessary qualifications for the position. For example, one interviewee mentioned that an individual with a rail background was put in charge of a TSA division that focused on aviation policy. In addition, 6 of the 13 TSES staff in this category stated that many in the TSES were hired based on personal relationships, as opposed to executive qualifications. As discussed previously, unlike many other federal agencies, TSA is not required to adhere to merit staffing principles when hiring for limited term TSES positions. However, TSA has agreed to adhere to merit staffing principles when hiring for career TSES positions in accordance with the OPM-DHS interchange agreement. The former TSES staff we interviewed did not always provide us with the names of the colleagues who they believed were not hired in accordance with merit staffing principles. Additionally, documentation related to the hiring of TSES staff who joined the agency prior to March 2006 was not generally available. Therefore, we were not able to conduct an independent assessment of whether the TSES in question should have been hired, and subsequently were hired, in accordance with merit staffing principles. However, later in this report, we discuss the extent to which TSA documented its adherence to merit staffing principles when hiring for career TSES positions in 2006 and 2008, such that an independent third party could make this type of assessment in the future.

Thirteen of the 46 TSES staff we interviewed cited dissatisfaction with the authority and responsibilities of their position as a reason for leaving.
Specifically, 7 TSES staff members reported being dissatisfied with the limited authority associated with their position. For example, during a period when contractors, as opposed to FSDs, were responsible for hiring TSA airport employees, one former FSD explained that he arrived at the interview site to observe the interview and testing process for the transportation security officer candidates, but was not allowed to enter the facility, even though he would be supervising many of the individuals who were hired. The remaining 6 TSES reported that they were either dissatisfied with the duties and responsibilities of their position, or they became dissatisfied with their position after (1) they were reassigned to a less desirable position or (2) they believed their position lost authority over the course of their employment. For example, regarding the latter, one former TSES staff member reported that after his division was subsumed within another, he became dissatisfied with no longer having the ability to report directly to the administrator or implement policies across the agency, and subsequently left the agency.

Twelve of the 46 TSES staff we interviewed cited disagreement with top leadership’s priorities or decisions as a reason for separation. Seven of the 12 TSES staff in this category disagreed with a specific management decision. For example, one former TSES staff member reported leaving the agency when top leadership decided to discontinue a process for evaluating candidates for a certain TSA position, which the former TSES staff member believed was critical to selecting appropriate individuals for the position. The other 5 staff in this category questioned agency priorities. For example, one TSES staff member believed that TSA focused on aviation security at the expense of security for other modes of transportation, while another commented that agency priorities had shifted from a security focus to one that was centered on customer service.
Eleven of the 46 TSES staff we interviewed reported that they were frustrated with numerous agency reorganizations and frequent changes in TSA administrators. For example, one TSES staff member reported that during her tenure she experienced six physical office changes along with multiple changes to duties and responsibilities, making it difficult to lead a cohesive program in the division. We conducted an analysis of TSA organization charts from calendar years 2002 through 2008, and found that TSA underwent at least 10 reorganizations over this period. Furthermore, the charts reflected 149 changes in the TSES staff in charge of TSA divisions. Also, TSA was headed by several different administrators from 2002 through mid-2005—specifically, a total of 4 within its first 5 years of existence. TSA human capital officials acknowledged that the many reorganizations and leadership changes the agency has experienced since its formation have led to many TSES staff separations.

With regard to some of the remaining adverse reasons:

Nine of the 46 TSES staff told us they separated from the agency because they believed that TSA executives and employees were treated in an unprofessional or disrespectful manner. For example, one TSES staff member reported that upon completion of a detail at another federal agency, he returned to TSA and learned that his TSES position had been backfilled without his knowledge.

Nine of the 46 TSES staff reported they were either terminated or pressured to leave the agency. We reviewed TSA-provided data on separations, and found that 3 of the 9 TSES in this category were actually terminated. The 6 who were not terminated reported that they were pressured to leave the agency. Specifically, 4 of the 6 reported that they were forced out of the agency after being offered positions that TSA leadership knew would be undesirable to them due to the location, duties, or supervisor associated with the position.
Finally, 2 of the 6 TSES reported they were compelled to resign after being wrongly accused of misconduct or poor performance.

Five of the 46 TSES staff we spoke with reported either insufficient or inequitable pay as a reason for separating from the agency. In one case, a TSES staff member told us that, unlike his peers, he did not receive any bonuses or pay increases even though he was given excellent performance reviews. TSA provided us with data on the total amount of bonuses awarded to each TSES staff member employed with the agency during fiscal years 2005 through 2008. Agency documentation reflects that these bonuses were awarded to recognize performance. Of the 95 TSES who separated during this 4-year period, 55 were awarded performance bonuses, and the total amount of these awards ranged from $1,000 to $44,000. Of the 141 TSES who were employed with TSA during fiscal years 2005 through 2008, 92 were awarded performance bonuses, and the total amount of these awards ranged from at least $4,800 to $85,000. Another interviewee told us that he left TSA due, in part, to his perception that TSES staff doing aviation security work were paid more than TSES staff such as himself who worked in other, nonaviation transportation modes.

While some attrition harms agency operations, such as through the loss of historical knowledge or expertise, the separation of other staff can have a positive impact on agency operations—such as when an executive is not meeting performance expectations. To identify the potential impact of TSES separations on agency operations, we conducted interviews with TSA staff who were direct reports to and immediate supervisors of TSES staff members who left the agency. We also interviewed representatives of seven transportation security associations.
While we would not expect any of these individuals to have a full understanding of the impact that TSES attrition had on the agency, we believe that presenting the views of superiors, subordinates, and external agency stakeholders enables us to offer additional perspective on this issue. We found that the direct reports, supervisors, and external stakeholders had varying views regarding the impact that TSES attrition has had on TSA. Specifically, of the 22 direct reports we interviewed, 13 stated that TSES attrition had little or no impact on TSA’s programs and policies, whereas 8 others cited negative effects, such as delays in the development and implementation of agency programs. Two programs that direct reports identified as negatively affected by TSES attrition were the Secure Flight and Transportation Worker Identification Credential (TWIC) programs. In addition, 12 of the direct reports stated that TSES attrition had little or no impact on the functioning of their particular division, although 10 cited negative effects such as a lack of communication regarding the direction of the division and its goals; difficulties in building relationships with ever-changing supervisors; and decreased morale.

Regarding our interviews with the 7 supervisors of TSES staff who have since left the agency, 6 reported that TSES attrition had little or no impact on TSA’s programs and policies, but one stated that TSES separations caused a lack of vision and direction for program development. Additionally, 4 supervisors did not believe that TSES attrition had negative impacts on the functioning of a specific division, but 3 stated that it did, noting that separations cause uncertainty and stress among employees, which negatively affects morale.
With regard to our interviews with seven industry associations representing the various stakeholders affected by TSA programs and policies (for example, airports, mass transit systems, and maritime industries), four industry associations could not identify a negative impact attributable to turnover among TSES staff. The remaining three stakeholders reported delayed program implementation and a lack of communication from TSA associated with TSES turnover. TSA human capital officials noted that they were generally pleased that many of the supervisors, direct reports, and stakeholders we interviewed stated that the impact of TSES turnover on agency operations was minimal. In particular, they interpreted this as evidence that their succession planning efforts—to identify, develop, and select successors who are the right people with the right skills for leadership and other key positions—are working as intended, and minimizing the impact of turnover on agency operations.

By affording separating TSES the opportunity to complete an exit survey, TSA has taken steps to address attrition that are consistent with internal control standards and effective human capital management practices. Nevertheless, the current survey instrument does not allow TSES staff leaving the agency to identify themselves as executive-level staff, hence preventing the agency from isolating the responses of TSES staff and using the data to address reasons for TSES attrition. In addition, the agency has implemented other measures to improve overall management of its TSES corps that are consistent with effective human capital management practices and internal control standards, such as issuing an official handbook that delineates human capital policies applying to the TSES, implementing a succession plan, and incorporating merit-based staffing requirements (which are intended to ensure fair and open competition for positions) into its process for hiring executive staff.
However, inconsistent with internal control standards, TSA did not always clearly document its implementation of merit staffing requirements. According to TSA officials, in January 2008, TSA began collecting data on the reasons for TSES separation through an exit interview process, asking questions specifically designed to capture the experiences of executive-level staff. The interview was administered by TSA human capital officials. According to a TSA official, after we requested access to this information in September 2008, TSA ceased conducting these exit interviews due to concerns that the format would not provide for anonymity of former TSES staff members’ responses.

According to standards for internal control in the federal government, as part of its human capital planning, management should consider how best to retain valuable employees to ensure the continuity of needed skills and abilities. Also, we have reported that collecting and analyzing data on the reasons for attrition through exit interviews is important for strategic workforce planning. Such planning entails developing and implementing long-term strategies for acquiring, developing, and retaining employees, so that an agency has a workforce in place capable of accomplishing its mission.

In March 2009, TSA, recognizing the importance of such a process to its management of TSES resources, announced it was affording separating TSES staff the opportunity to complete an exit survey. Specifically, TSA officials reported that they would use the agency’s National Exit Survey instrument, which has been in use for non-TSES staff since November 2005. We reviewed the survey instrument, which consists of 21 questions (20 closed-ended and 1 open-ended) concerning the staff member’s experience at TSA and the specific reasons for separation, and found that it generally covered all the reasons for separation identified by the 46 separated TSES staff we interviewed.
Although TSA’s National Exit Survey responses are submitted anonymously (thereby allaying TSA’s concerns with the previous TSES exit interview process), respondents are given the opportunity to identify what position they held at TSA, such as “Transportation Security Officer (TSO),” by selecting from a pre-set list of position titles. However, TSA does not list “TSES” among the answer choices, which precludes TSES staff who fill out the survey from identifying their position rank. TSA officials explained that they do not allow TSES staff to self-identify because, given the small number of TSES staff who leave the agency in a given year, it may be possible to determine the identity of a particular TSES respondent. However, according to TSA’s documented policy for analyzing exit survey data, survey responses will not be analyzed by position if the total number of respondents in that position is fewer than five. When we discussed this issue with TSA human capital officials, they stated that, in light of this policy, they may consider allowing TSES staff members to identify themselves as such when filling out the survey. Without the ability to isolate the responses of TSES staff from those of other staff, it will be difficult for TSA to use the results of the exit survey to identify reasons for attrition specific to TSES staff, hindering the agency’s ability to develop a strategy for retaining talented TSES staff with specialized skills and knowledge and for ensuring continuity among the agency’s leadership.

TSA has also sought to manage attrition among TSES by decreasing its use of limited term TSES appointments. TSA officials believe that the agency’s use of limited term appointments has contributed to higher attrition among TSES staff.
TSA’s Chief Human Capital Officer stated that during the agency’s formation and transition to DHS, TSA made more liberal use of limited term appointments, as it was necessary to quickly hire those individuals with the executive and subject area expertise to establish the agency. The official explained that as the agency has matured, and since it now has a regular executive candidate development program, it has made fewer limited term appointments. TSA data on the number of limited term TSES appointed (hired) per fiscal year from 2004 through 2008 show that the agency’s use of limited term appointments has generally been decreasing, both in number and as a proportion of all new TSES appointments. Specifically, the number of new limited term appointments was highest in fiscal year 2004, representing over half of all TSES appointments for that fiscal year; in fiscal year 2008, TSA made six TSES limited term appointments, representing a sixth of all new appointments for that fiscal year (see table 6).

TSA has implemented a number of steps to help attract and retain TSES staff. In November 2008, TSA issued a TSES handbook delineating human capital policies and procedures applicable to TSES staff. Prior to this, a comprehensive policy document did not exist. According to standards for internal control in the federal government, management should establish good human capital policies and practices for hiring, training, evaluating, counseling, promoting, compensating, and disciplining personnel in order to maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. Moreover, these policies and practices should be clearly documented and readily available for examination.
TSA has had documented policies and procedures in place for such things as reassignments, transfers, and terminations since December 2003, and for the performance assessment of its TSES staff since July 2003. However, in November 2008, TSA issued a more comprehensive management directive delineating the agency’s human capital policies and procedures for TSES that, in addition to the areas listed above, also covers details to other agencies, reinstatements, compensation, work schedules, leave, awards and recognition, disciplinary actions, and workforce reductions. TSA stated that its goal is to ensure that all current TSES staff members are aware of and have copies of the management directive. The directive, along with TSA’s stated commitment to increasing TSES access to this information, should help provide TSES staff with a more accurate and complete understanding of the applicable human capital management authorities, flexibilities, policies, and procedures.

TSA also developed a succession plan in 2006 to improve its overall human capital management of TSES staff. TSA’s succession planning efforts provide for a more systematic assessment of position needs and staff capabilities. Specifically, the plan targets 81 positions (both TSES and pay-band) and identifies the leadership and technical competencies required for each. The program is designed to recruit talented TSA staff in lower-level positions as possible candidates for these positions and encourage them to apply for entrance into a Senior Leadership Development Program (SLDP) where, upon acceptance, program participants are to receive special access to training and development experiences. Moreover, program participants are to have their executive core qualifications approved by OPM upon completion of the program, making them eligible for noncompetitive placement into vacant TSES positions.
We have previously reported that succession planning can enable an agency to remain aware of and be prepared for its current and future needs as an organization, including having a workforce with the knowledge, skills, and abilities needed for the agency to pursue its mission.

To better manage its TSES program, TSA also established in 2006 a hiring process for TSES staff that incorporates merit staffing requirements; however, the agency lacked documentation that would demonstrate whether it is consistently following these requirements. Although TSA has more human capital flexibilities with regard to hiring than most federal agencies, the agency, on its own initiative, sought to incorporate various merit staffing requirements into its hiring process. Merit staffing requirements help to ensure that competition for executive positions is fair and transparent, and that individuals with the necessary technical skills and abilities are selected for positions—which was a concern for 13 of the 46 former TSES we interviewed. While TSA human capital officials asserted that TSA has always hired qualified TSES staff in accordance with merit staffing requirements, these officials also acknowledged that for most of TSA’s existence, the agency did not have a documented process for doing so.

In January 2006, TSA established an Executive Resources Council (ERC), which was chartered to advise the TSA Administrator and Deputy Administrator on the recruitment, assessment, and selection of executives, among other things.
TSA’s ERC charter requires that merit staffing be used when hiring for TSES positions by incorporating certain merit staffing requirements into its procedures, namely public notice of position availability; identification, rating, and ranking of eligible candidates against position qualifications; determination of a list of best qualified candidates, with the final selection coming from among those best qualified; and the agency’s certification of the final candidate’s qualifications. According to internal control standards, internal controls and other significant events—which could include the hiring of TSES staff—need to be clearly documented, and the documentation should be properly managed and maintained.

To determine the extent to which TSA documented its implementation of the merit staffing procedures, we reviewed case files for evidence that merit staffing procedures were followed for the selection of 25 career TSES appointments in calendar year 2006 (the year the TSES staffing process was established) and 16 TSES staff in calendar year 2008 (the most recent full calendar year for which documentation was available). We could not review documentation prior to this period because TSA explained that its hiring decisions were not consistently documented prior to the establishment of its ERC process in March 2006. Based upon our review, we found that for 20 of the 25 career TSES who were hired competitively in calendar year 2006 and for 8 of the 16 TSES who were hired competitively in calendar year 2008, documentation identifying how TSA implemented at least one of the merit staffing procedures was either missing or unclear. For example, in our review of one 2008 case file, we found that the person selected for the position had not previously held a career executive-level position, but we did not find documentation indicating on what basis the person had been rated and ranked against other candidates applying for the position.
Absent such documentation, it is uncertain whether the appointment comported with TSA’s hiring process. Moreover, OPM regulations establishing merit staffing requirements, upon which TSA based its staffing process, provide that agencies operating under merit staffing requirements must retain such documentation for 2 years to permit reconstruction of merit staffing actions. Table 7 identifies the specific merit staffing procedures required by TSA’s hiring process for which documentation was either missing or unclear.

TSA human capital officials told us that a lack of documentation within case files does not necessarily indicate that merit staffing procedures were not followed for a particular staffing decision. Specifically, TSA stated that because the TSES staffing process consists of multiple levels of review, including review by both the TSA and DHS Executive Resources Councils, regardless of the lack of documentation, the agency has reasonable assurance that merit staffing principles have been followed. While TSA officials may believe that the agency has these assurances internally, by ensuring that there is complete and consistent documentation of its TSES staffing decisions, TSA can better demonstrate to an independent third party, the Congress, and the public that the way in which it hires for TSES positions is fair and open, that candidates are evaluated on the same basis, that selection for the position is not based on political or other non-job-related factors, and that executives with the appropriate skill sets are selected for positions.

Given the broad visibility of its mission to secure our nation’s transportation system, it is important that TSA maintain a skilled workforce led by well-qualified executives. As TSA prepares to bring on a new administrator, it would be beneficial to address some of the circumstances that led the former TSES staff members we interviewed to separate.
TSA has taken steps to address attrition among TSES staff and to improve overall management of its TSES workforce. However, some modifications to these efforts could be beneficial. For example, TSA’s planned effort to conduct exit surveys of TSES staff—consistent with human capital best practices—is intended to provide TSA with more comprehensive data on the reasons why TSES staff decided to leave the agency. However, the method by which TSA has chosen to collect these data—anonymous surveys in which the separating TSES do not disclose their level of employment—will not provide TSA with the reasons why TSES staff, in particular, left the agency, thereby rendering the data less useful for addressing TSES attrition. TSA has also implemented a process to hire TSES staff, which incorporates procedures based upon merit staffing requirements in order to ensure that candidates for career TSES appointments are evaluated and hired on the basis of their skills and abilities as opposed to personal relationships—which was a concern among some former TSES staff we interviewed. By more consistently documenting whether and how it has applied merit staffing procedures when filling career TSES positions, TSA can better demonstrate that its hiring of TSES is fair and merit-based, as intended.

To address attrition among TSES staff and improve management of TSES resources, we recommend that the TSA Administrator take the following two actions: (1) ensure that the National Exit Survey, or any other exit survey instrument TSA may adopt, can be used to distinguish between responses provided by TSES staff and other staff, so that the agency can determine why TSES staff, in particular, are separating from TSA; and (2) require that TSA officials involved in the staffing process for TSES staff fully document how they applied each of the merit staffing principles required by TSA when evaluating, qualifying, and selecting individuals to fill career TSES positions.
On October 7, 2009, we received written comments on the draft report, which are reproduced in full in appendix IV. TSA concurred with our recommendations and has taken action to implement them. In addition, TSA, as well as OPM, provided technical comments on the draft report, which we incorporated as appropriate. With regard to our recommendation that TSA allow TSES staff to identify themselves as such when filling out the National Exit Survey, TSA stated that it has revised Question 27 of the National Exit Survey—”What is your pay band?”—to include “TSES” as a response option. Regarding our second recommendation that TSA fully document how it applied merit staffing principles when evaluating, qualifying, and selecting individuals to fill career TSES positions, TSA stated that it has established a checklist for proper documentation and will conduct an internal audit of TSES selection files on a quarterly basis. While TSA agrees that it should document its adherence to merit staffing principles, it raised a question about our analysis by stating that we regarded documentation of TSA’s certification of the candidate’s executive and technical qualifications as deficient if there was not both a signed letter from the selecting official and a signed Executive Resources Council recommendation, even when contemporaneous records existed. However, TSA’s statement is not accurate. To clarify, we considered documentation of this merit staffing principle complete if there was both a signed letter from the selecting official and a description of the candidate’s executive and technical qualifications. Therefore, even if the signed ERC recommendation was not present, if other contemporaneous records were provided to us attesting to the candidate’s executive and technical qualifications, we would have given TSA credit for this. 
We found that for 2006, of the 11 staffing folders that we determined had incomplete documentation of TSA’s adherence to the agency certification principle, 4 were only missing the signed certification by the selecting official, 5 were only missing the description of the candidate’s qualifications, and 2 of the folders were missing both the signed letter from the selecting official as well as a description of the candidate’s executive and technical qualifications. The one folder we identified from 2008 as having incomplete documentation of TSA’s certification of the candidate was missing a description of the candidate’s qualifications. The absence of critical documentation makes it difficult to support TSA’s statement that it has implemented a rigorous process for executive resources management consistent with effective human capital management practices and standards for internal control. TSA also stated that it was unable to respond to the reasons we reported for why former TSES staff left the agency, because the responses were anonymous. We did not provide TSA with the names of the former TSES staff with whom we spoke because we believe that if the former TSES staff we interviewed had known that we were going to share their names with TSA, they would have been less candid and forthcoming in their responses. We would also like to note that TSA would not have had to rely on the information we obtained from former TSES staff regarding their reasons for leaving if TSA had consistently been conducting exit interviews or exit surveys between 2005 and 2008, which is the period of time during which those we interviewed left the agency. We will send copies of this report to the appropriate congressional committees and the Acting Assistant Secretary for TSA. The report will also be available at no charge on our Web site at http://www.gao.gov. 
If you have any further questions about this report, please contact me at (202) 512-4379 or lords@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Most executive branch agencies—including most Department of Homeland Security (DHS) agencies—have a Senior Executive Service (SES), which is composed of individuals selected for their executive leadership experience and subject area expertise who serve in key agency positions just below presidential appointees. However, because the Transportation Security Administration (TSA) is exempt from many of the requirements imposed and enforced by the Office of Personnel Management (OPM)—the agency responsible for establishing human capital policies and regulations for the federal government—TSA’s executives are part of the Transportation Security Executive Service (TSES), which is distinct from the SES of other agencies. The explanatory statement accompanying the DHS Appropriations Act, 2008, directed GAO to “report on the history of senior executive service-level career turnover since the formation of TSA.” Accordingly, we addressed the following questions regarding TSA’s TSES staff: 1. What has been the attrition rate among TSES staff for fiscal years 2004 through 2008, and how does it compare to attrition among SES staff in other DHS components and cabinet-level departments? 2. What reasons did former TSES staff provide for leaving TSA, and how do current TSA officials and stakeholders view the impact of TSES attrition on TSA’s operations? 3. To what extent are current TSA efforts to manage TSES attrition consistent with effective human capital practices and standards for internal control in the federal government? More details about the scope and methodology of our work to address each of these principal questions are presented below. 
To calculate attrition for TSES staff and SES staff in DHS overall (excluding TSA) as well as other cabinet-level departments, we analyzed fiscal year 2004 through 2008 data from OPM’s Central Personnel Data File (CPDF), a repository of selected human capital data for most Executive Branch employees, including separations data. We selected this time period because 2004 was the first full fiscal year during which TSA was a part of DHS after transferring from the Department of Transportation in March 2003, and thus a more meaningful starting point for comparing TSES attrition to SES attrition at other federal agencies. Also, at the time of our review, 2008 was the most recently completed fiscal year for which attrition data were available in CPDF. The individuals who we classified as senior executive staff who attrited, or separated, from their agencies were those with CPDF codes that: identified them as senior executive staff, specifically TSES, SES, or SES equivalent staff and indicated that they had separated from their agency of employment through resignation, transfer to another cabinet-level department, retirement, termination, expiration of term appointment, or “other” separation type. We did not include TSES or SES staff who made intradepartmental transfers (such as transferring from TSA to U.S. Customs and Border Protection (CBP), which is another DHS agency) when calculating attrition because these data were not readily available in CPDF. We calculated the executive attrition rates (both SES and TSES) for each fiscal year by dividing the total number of executive separations for a given fiscal year by the average of (1) the number of senior executive staff in the CPDF as of the last pay period of the fiscal year prior to the fiscal year for which the attrition rate was calculated and (2) the number of senior executive staff in CPDF as of the last pay period of the fiscal year in which the attrition occurred. 
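The fiscal-year rate described above can be sketched as a small calculation. This is an illustrative sketch only: the function name and the example figures are assumptions, not data from this review.

```python
def attrition_rate(separations, headcount_prior_fy_end, headcount_fy_end):
    """Separations in a fiscal year divided by the average of the executive
    headcounts recorded at the last pay period of the prior fiscal year and
    at the last pay period of the fiscal year in which the attrition occurred,
    expressed as a percentage."""
    average_headcount = (headcount_prior_fy_end + headcount_fy_end) / 2
    return 100 * separations / average_headcount

# Illustrative figures only: 20 separations, 95 executives at the end of the
# prior fiscal year, 105 at the end of the fiscal year in question.
rate = attrition_rate(20, 95, 105)  # average headcount is 100, so 20.0 percent
```

Averaging the two year-end headcounts, rather than using a single point-in-time figure, smooths the effect of mid-year growth or shrinkage in the executive corps.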
To place the TSA’s senior executive attrition rate in context, we compared it to the overall DHS SES attrition rate (excluding TSA) and the overall SES attrition rate for all other cabinet- level departments (excluding DHS). We did not calculate senior executive attrition rates for individual component agencies within DHS (such as for U.S. Secret Service) because the total number of senior executive staff for most of these components for a given fiscal year was fewer than 50. We generally do not to calculate rates or percentages when the total population for any unit is less than 50. Given that we could not provide rates for all DHS components, we decided not to compare TSES attrition to SES attrition for individual DHS components; however we do provide data on the number and type of executive separations for each DHS component in appendix II. For additional context, we compared the attrition rate for TSES staff who worked in TSA headquarters to those who worked in field locations for fiscal years 2004 through 2008. The CPDF does not identify whether a TSES staff person is considered headquarters or field staff, but does include codes that identify the physical location of each TSES position, including the location of TSA’s headquarters building. As such, we considered headquarters TSES staff to be all TSES staff assigned location codes for TSA’s headquarters building. In addition, using CPDF location codes, we identified all TSES staff working in the Washington D.C. area (Washington, D.C., and nearby counties in Virginia and Maryland) who were not assigned location codes for TSA headquarters, and asked TSA to identify which of these individuals were considered headquarters staff. All TSES staff not identified as headquarters staff were considered field staff. We believe that the CPDF data are sufficiently reliable for the purposes of this study. Regarding the CPDF, we have previously reported that governmentwide data from the CPDF were 97 percent or more accurate. 
To identify the reasons for TSES staff attrition, we selected a nonprobability sample of 46 former TSES staff members to interview from a TSA-provided list of 95 TSES staff members who separated from the agency during fiscal years 2005 through 2008. TSA provided us with the last-known contact information for each of these individuals. We searched electronic databases, such as LexisNexis, or used Internet search engines to obtain current contact information for these individuals if the information TSA provided was outdated. We determined that the TSA-provided list of 95 former TSES staff was sufficiently reliable for the purposes of this study. To make this determination, we compared TSA data on TSES staff separations with the number of TSES separations identified in CPDF and found that both sources reported sufficiently similar numbers of TSES staff separations per fiscal year. We attempted to select former TSES staff based on a probability sample in order to generalize about the reasons for TSES separation. Of the 46 interviewees, 31 were selected based upon a randomized list of the 95 separated TSES created to select a probability sample. We were unable to obtain an acceptable response rate for our sample, thus we determined we would continue interviewing until we had obtained responses from about half of the 95 separated TSES staff. We selected the remaining 15 interviewees in our sample of 46 in such a way that the proportion of interviewees with the following three characteristics—fiscal year of separation (2005 through 2008), manner of separation (resignations, retirements, etc.), and job location (headquarters or field)—would be about the same as the proportion of the 95 TSES staff members who separated during fiscal year 2005 through 2008 who had those characteristics. 
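One way to read the proportional matching just described is as a simple quota computation over each characteristic. The stratum counts below are invented for illustration and are not the actual distribution of the 95 separated TSES staff.

```python
total_separated = 95     # size of the TSA-provided list of separated TSES staff
interviews_planned = 46  # target sample size

# Assumed (not actual) counts of separated TSES staff by fiscal year of separation.
strata = {"FY2005": 32, "FY2006": 25, "FY2007": 21, "FY2008": 17}

# Target number of interviewees per stratum, proportional to the population,
# so the sample's composition roughly mirrors the larger population's.
quotas = {year: round(interviews_planned * count / total_separated)
          for year, count in strata.items()}
```

Rounding each stratum independently means the quotas may not sum exactly to the planned total, which is one reason the final sample can only approximate the population's proportions.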
For example, if one-third of the 95 former TSES staff TSA identified left the agency in fiscal year 2005, then our goal was to ensure that approximately one-third of the 46 former TSES we interviewed left in 2005. We were not always successful in obtaining interviews with staff possessing some of the characteristics required to make our sample population resemble the larger population; however, for most characteristics, our sample of 46 generally had the same proportions as the larger population of TSES (see table 8). To obtain our sample of 46 TSES, we contacted a total of 70 of the 95 separated TSES, and of these 70, 24 did not respond to our request for an interview. Specifically, 16 of these nonresponses were from our attempt to select a probability sample. After we began selecting TSES for interviews based on the three characteristics—fiscal year of separation (2005 through 2008), manner of separation (resignations, retirements, etc.), and job location (headquarters or field)—we encountered an additional 8 nonresponses. Since we determined which former TSES staff to interview based on a nonprobability sample, we cannot generalize the interview results to all TSES staff who separated from TSA from fiscal years 2005 through 2008. However, these results provided us with an indication of the range of reasons why nearly half of the TSES staff who separated from TSA during this time period left the agency. To ensure consistency in conducting our interviews with separated TSES staff members, we developed a structured interview guide of 24 questions that focused on senior-level executives’ reasons for separation and their opinions on how TSA could better manage attrition. We conducted 3 of the 46 interviews in person at GAO headquarters and the remainder via telephone. Our question on the reasons for separation was open-ended; therefore, to analyze the responses to this question, we performed a systematic content analysis. 
To do so, our team of analysts reviewed all responses to this question, proposed various descriptive categories in which TSES reasons for leaving TSA could be grouped based upon themes that emerged from the interview responses, and ultimately reached consensus on the 12 categories listed in table 9 below. To determine which categories applied to a particular response provided by the former TSES staff members we interviewed, two analysts independently reviewed interview responses and assigned categories to the data; there was no limit to the number of categories the analysts could assign to each response. If the two analysts assigned the same categories, we considered the reasons for separation agreed upon. If they determined different categories applied, a third analyst reviewed the interview data and independently assigned categories. If the third analyst assigned the same category as one of the other reviewers, we considered the reason for separation to be the agreed-upon category. If all three analysts assigned different categories, we coded the reason for separation as “unclassified.” Of the 46 responses we received to our question regarding reasons why the former TSES we interviewed separated from TSA, the initial two analysts agreed upon the categories for 37 TSES staff members’ responses. For all 9 responses in which there was disagreement, a third analyst who reviewed the data agreed with the category assigned by one of the other two analysts. One of the general categories we established for why TSES separated from TSA was dissatisfaction with numerous agency reorganizations. To identify the number of reorganizations TSA experienced since its creation, and the movement of TSES staff associated with these reorganizations, we analyzed 10 organization charts provided to us by TSA covering calendar years 2002 through 2008. 
These charts identified only high-level TSA organizational divisions and the TSES staff member (usually an Assistant Administrator) who headed each division. To identify movement of TSES staff, we compared the charts in chronological order and counted the number of changes in the TSES staff person heading the division from one chart to the next. In conducting our analysis, we did not determine whether changes in TSES staff from one chart to the next were directly attributable to TSA’s reorganizations because we did not have the resources to investigate the specific circumstances surrounding each of the 149 changes. Another of the general categories we established for why TSES staff separated from TSA was dissatisfaction due to their perception of receiving insufficient or inequitable pay. TSA provided us data on the total amount of bonuses received by TSES staff employed with TSA during fiscal years 2005 through 2008. We analyzed these data to identify the number of TSES staff who received bonuses and the range of these cumulative payments for staff who separated and for those who did not separate during this period. For TSES staff who received, in addition to bonuses, relocation, retention, and recruitment payments, TSA provided us with a single sum for all these payments. For these TSES staff, we could not identify the amount of the bonus from other payments made for recruitment, retention, or relocation purposes. Thus, we excluded from our analysis any individual receiving payments for recruitment, retention, or relocation, in addition to bonuses. Specifically, we excluded data for 4 TSES staff who separated during fiscal years 2005 through 2008, and 34 TSES staff who were employed throughout the 4-year period. 
Although we assessed TSA data on the number of TSES staff separations for fiscal years 2005 through 2008 and found them reliable, we were not able to assess the reliability of the specific amounts of supplemental pay TSA reported giving to TSES over this time period because some of these data were not recorded within the CPDF for comparison. However, we confirmed with TSA that the data provided were applicable to all TSES employed over the fiscal year 2005 through 2008 time period. To address the impact of TSES attrition, we interviewed supervisors of separated TSES, employees who were direct reports to—that is, employees who were directly supervised by—separated TSES staff, and industry associations representing some of the various transportation sectors (aviation, surface, and maritime) that collaborate with TSA on transportation security initiatives. To conduct interviews with supervisors, we asked TSA to identify TSES supervisors who were still with TSA and who supervised any TSES who separated during fiscal years 2005 through 2008. TSA identified nine TSES staff still at the agency who had supervised other TSES staff; we requested interviews with eight of these supervisors and conducted seven interviews. We asked the supervisors to identify the impact, if any, of the TSES separation(s) on 1) development or implementation of TSA programs or initiatives and 2) external stakeholder relations. Two analysts then performed a systematic content analysis to determine if the responses to our interview questions portrayed a positive impact, negative impact, or little to no impact. The analysts agreed in their determinations for all seven interviews. 
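The two-coder agreement rule used throughout these content analyses, with a third analyst as tie-breaker where needed, can be sketched as follows. This is a simplified illustration that treats each response as receiving a single category, whereas the actual analysis allowed multiple categories per response; the category names are assumed.

```python
def resolve_category(coder1, coder2, tie_breaker=None):
    """Two analysts code a response independently; a third resolves disagreements.

    If the first two coders agree, their category stands. Otherwise, the
    tie-breaker's category stands only if it matches one of the first two;
    if all three differ, the response is coded as unclassified.
    """
    if coder1 == coder2:
        return coder1
    if tie_breaker in (coder1, coder2):
        return tie_breaker
    return "unclassified"

# Illustrative calls (category names are invented):
resolve_category("pay", "pay")                           # agreement stands
resolve_category("pay", "leadership", "leadership")      # tie-breaker sides with coder 2
resolve_category("pay", "leadership", "reorganization")  # all differ: unclassified
```

Requiring independent coding before comparison, rather than joint coding, is what makes the reported agreement counts a meaningful check on the reliability of the categories.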
To identify direct reports for interviews, we asked the former TSES we interviewed to provide us with names of employees who reported directly to them when they were in TSES positions and who they believed were still TSA employees; among the 25 former TSES staff who responded to our inquiry, we were given names of 52 TSA employees who had reported directly to these TSES staff during their tenure at TSA. Though this selection method relied upon the recommendations of separated TSES staff, we attempted to adjust for any bias the TSES staff may have had when recommending these individuals by ensuring that the direct reports we interviewed were evenly distributed across the following three categories: 1) reported to TSES staff who left TSA for only nonadverse reasons; 2) reported to TSES staff who left TSA for a combination of nonadverse and adverse reasons; and 3) reported to TSES staff who left TSA for adverse reasons only. We then selected 26 direct reports for interviews from among the three groups. We were able to conduct a total of 22 interviews: 5 from the nonadverse category; 4 from the nonadverse/adverse category; and 13 from the adverse only category. We conducted 9 of the 22 interviews in person at TSA headquarters with only ourselves—and no other TSA employee—present in the room; we conducted the remainder of direct report interviews via telephone, with a TSA staff person online throughout the call. This staff person was the TSA liaison, whose responsibility is to ensure that GAO receives access to requested documentation and interviews for a given engagement. Though the TSA liaison had no supervisory authority over the direct report staff we interviewed, the presence of this individual during the phone call could have inhibited the responses of the direct report interviewees we spoke with via telephone. 
We asked the direct reports to describe the impact, if any, of a TSES supervisor’s separation on their individual responsibilities and the efforts underway in their particular division. We then performed a systematic content analysis of their responses in the same manner as our content analysis of separated TSES interviews. The two analysts reviewing the direct report interviews agreed in their determinations for all 22 interviews. Finally, to obtain perspectives from industry stakeholders, we interviewed seven TSA transportation industry groups. We identified these industry groups based on our experience in the field of transportation security and by canvassing GAO analysts working in the area of transportation security for other contacts. We requested interviews with 13 industry stakeholder groups and either received written responses or obtained interviews with 7—specifically 3 aviation associations, 1 surface transportation association; and 3 maritime transport associations. We asked the stakeholders to identify whether they were aware of turnover among TSES staff, how they knew turnover had occurred, and how it impacted a specific policy or program they were working with TSA to implement. Two analysts then performed a systematic content analysis on the responses, and there was no disagreement between their determinations. Although the direct report, supervisor, and industry stakeholder interviews provided important perspectives on impact of executive attrition, the results could not be generalized, and therefore, do not represent the views of the entire population of each group. To gather information on TSA efforts to address attrition, we interviewed the Assistant Administrator and the Deputy Assistant Administrator of TSA’s Human Capital Office to learn about the various initiatives they have underway to address attrition and to improve management of their executive resources. 
These officials identified several initiatives, which we assessed, including a reinstated exit interview process, decreased use of limited term appointments, and recent release of a comprehensive handbook delineating TSES human capital policies, succession planning, and the establishment of a merit-based staffing process. To assess the exit survey process, we consulted prior GAO reports that address the use of exit interview data in workforce planning. We reviewed exit interviews TSA conducted under its previous process (specifically, five interviews dating from January 2008 through September 2008), and examined TSA’s data collection tool for conducting these interviews. We also reviewed the National Exit Survey instrument that TSA is presently using to conduct exit interviews of TSES staff, and conducted interviews with TSA human capital officials on the agency’s plans for implementing this process. To determine whether TSA has decreased its use of TSES limited term appointments, we reviewed TSA-provided data on the number of limited term appointments the agency made for fiscal years 2004 through 2008, and reviewed CPDF data on the total number of TSES staff hired for fiscal years 2004 through 2008. We were not able to determine the reliability of these data because some TSA data on limited term appointments were not recorded within CPDF. To determine the extent to which TSA’s handbook for TSES human capital policies and its succession plan were consistent with effective human capital practices and internal control standards, we reviewed criteria in prior GAO reports, as well as the standards for internal control in the federal government. We reviewed TSA management directives for TSES staff from fiscal year 2003 through fiscal year 2008 (one of which is the November 2008 handbook), as well as TSA’s succession plan (both the 2006 and 2008 versions). 
To identify the extent to which TSA has implemented its succession plan, we also reviewed TSA data on the number of staff who completed executive-level training identified within its succession plan and spoke with human capital officials responsible for compiling these data. Finally, to determine the extent to which TSA has been following merit-based staffing requirements for hiring TSES staff, we first reviewed documentation delineating TSA’s hiring process, specifically its Executive Resource Council (ERC) charter. To determine the merit staffing requirements TSA’s ERC process should encompass, we reviewed applicable OPM regulations addressing merit staffing. We identified seven merit staffing requirements that should have been reflected within TSA’s hiring process, and therefore, within its documentation of hiring decisions (see table 7). To ensure that the seven requirements we identified were an appropriate standard for assessing TSA’s performance of merit staffing, we reviewed OPM’s audit procedures for merit staffing and found that OPM requires agencies operating under its jurisdiction to document performance of these seven requirements. In addition, TSA officials also confirmed that these were the key merit staffing requirements they followed and agreed that these should be reflected within documentation for TSES hiring decisions. To determine whether TSA was documenting its performance of the seven merit staffing requirements, we reviewed all case files for competitively filled, career appointments to TSES positions for calendar years 2006 and 2008—a total of 41 case files. We reviewed case files for competitively filled, career appointments specifically because TSA has committed to using merit staffing for these hiring decisions; thus, we could expect to find documentation of TSA’s performance of merit staffing procedures within these files. 
We did not review case files from 2007, because we were interested in comparing how TSA followed merit staffing requirements when it initially established its ERC process in 2006, with how it followed them more recently in 2008—the most recent full calendar year when we undertook our review. After we provided the draft report to DHS for comment on July 27, 2009, TSA officials informed us that they had additional documentation to demonstrate that the agency had adhered to the merit staffing principle of agency certification of the candidate’s executive and technical qualifications for more TSES career positions than the number identified in our draft report. TSA provided this additional documentation to us on September 4, 2009. Although this documentation had not been kept in the files we reviewed, we assessed the additional documentation and revised our report accordingly. We conducted this performance audit from April 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. The following tables provide data for fiscal years 2004 through 2008 on the number of senior executive staff who attrited—or separated—from the Transportation Security Administration (TSA); other selected Department of Homeland Security (DHS) agencies; and all cabinet-level departments, excluding DHS. In this report, we define attrition as separation from an agency by means of resignation, termination, retirement, expiration of appointment, or transfer to another cabinet-level department. 
Senior executive staff members in TSA are those individuals who are part of the Transportation Security Executive Service (TSES), and senior executives for other DHS agencies and cabinet-level departments are those individuals who are part of the Senior Executive Service (SES) or who hold SES-equivalent positions (for those agencies within cabinet-level departments that, like TSA, do not have SES). The DHS agencies for which we provide SES attrition data are those with operational missions, namely the Federal Emergency Management Agency (FEMA), U.S. Customs and Border Protection (CBP), U.S. Coast Guard (USCG), U.S. Citizenship and Immigration Services (USCIS), U.S. Immigration and Customs Enforcement (ICE), and U.S. Secret Service (USSS). We also provided SES attrition data for “DHS Headquarters,” which includes all DHS executive staff in positions serving departmentwide functions, such as those involving financial or human capital management. We do not report rates and percentages for populations under 50. Although the executive populations of TSA and some DHS components for fiscal years 2004 through 2008 numbered more than 50 individuals (namely CBP, DHS Headquarters, and Rest of DHS), most DHS components had fewer than 50 executives during this period. So that the presentation of our data would be uniform, we chose to present the attrition data in tables 11, 13, 15, 17, and 19 in total figures for all DHS components. In addition to the contact named above, Kristy Brown (Assistant Director) and Mona Blake (Analyst-in-Charge) managed this assignment. Maria Soriano, Kim Perteet, and Janet Lee made significant contributions to the work. Gregory Wilmoth, Catherine Hurley, and Christine San assisted with design, methodology, and data analysis. Tom Lombardi and Jeff McDermott provided legal support. Adam Vogt provided assistance with report preparation. 
The Transportation Security Administration's (TSA) Transportation Security Executive Service (TSES) consists of executive-level staff serving in key agency positions just below political appointees. Committees of Congress have raised questions about the frequency of turnover within the TSES and have directed GAO to examine turnover among TSES staff. Accordingly, this report examines: (1) TSES attrition and how it compares with that of Senior Executive Service (SES) staff in other DHS components and cabinet-level departments, (2) the reasons TSES staff separated from TSA, and (3) TSA efforts to manage TSES attrition consistent with effective management practices. To answer these objectives, GAO analyzed data within the Office of Personnel Management's Central Personnel Data File, reviewed TSA human capital policies and procedures, and interviewed former TSES staff. The results of these interviews are not generalizable, but represent the views of about half the TSES staff who separated from fiscal years 2005 through 2008. Separation data from fiscal years 2004 through 2008 show that attrition among TSA's TSES staff was consistently lower than the rate of attrition among all DHS SES staff and, through 2007, higher than SES attrition for all other cabinet-level departments. Separations among TSES staff peaked at 20 percent in fiscal year 2005, but declined each year thereafter, and resignations (as opposed to retirements, terminations, transfers to other cabinet-level departments, or expirations of a term appointment) were the most frequent type of TSES separation over this period. In interviews with 46 former TSES staff, the majority (36 of 46) identified at least one adverse reason (that is, a reason related to dissatisfaction with some aspect of their experience at TSA) for leaving, as opposed to a nonadverse reason (such as leaving the agency for another professional opportunity). 
The two most frequently cited reasons for separation were dissatisfaction with the leadership style of the TSA administrator or those reporting directly to him (14 of 46) and to pursue another professional opportunity (14 of 46). To better address TSES attrition and manage executive resources, TSA has implemented measures consistent with effective human capital management practices and standards for internal control in the federal government. These measures include, among other things, reinstating an exit survey and establishing a process for hiring TSES staff that encompasses merit staffing requirements. However, TSA could improve upon these measures. For example, due to TSA officials' concerns about respondents' anonymity, TSA's new exit survey precludes TSES staff from identifying their position. Without such information, it will be difficult for TSA to identify reasons for attrition specific to TSES staff. Moreover, inconsistent with internal control standards, TSA did not document its adherence to at least one merit staffing procedure for 20 of 25 TSES hired in calendar year 2006 and 8 of 16 TSES hired in calendar year 2008. Although there are internal mechanisms that provide TSA officials reasonable assurance that merit staffing principles are followed, better documentation could also help TSA demonstrate to an independent third party, the Congress, and the public that its process for hiring TSES staff is fair and open. |
USPS is an independent establishment of the executive branch mandated to provide postal services to bind the nation together through the personal, educational, literary, and business correspondence of the people. Established by the Postal Reorganization Act of 1970, USPS is one of the largest organizations in the nation; in fiscal year 2004, USPS reported revenues of $69 billion and expenses of $66 billion. USPS handles more than 200 billion pieces of mail annually. The Postal Reorganization Act of 1970 shifted postal ratemaking authority from Congress to USPS and the independent PRC. When USPS wishes to change domestic postal rates and fees, it must submit its proposed changes and supporting material—including supporting ratemaking data on USPS costs, revenues, and mail volumes—to PRC. By law, PRC must hold a proceeding referred to as a “rate case.” Any interested party can participate in a rate case by filing a notice of intervention with PRC. The notice enables the party to submit material to PRC, as well as ask written questions of USPS. PRC also provides an opportunity for public hearings in which USPS witnesses appear and can be cross-examined by PRC and other interested parties. PRC generally must issue a recommended decision on postal rates and fees within 10 months of the inception of a rate case. USPS Governors may approve, allow under protest, reject, or modify PRC’s recommended decision. Proposed postal rates must be sufficient for USPS to meet its mandate to break even, which requires that postal rates and fees shall provide sufficient revenues so that USPS’s total estimated income and appropriations will equal as nearly as practicable USPS’s total estimated costs. In addition, each class of mail or type of postal service is required by law to cover its direct and indirect costs (attributable costs), as well as make a reasonable contribution to covering overhead costs (institutional costs). 
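The statutory pricing constraints described above, that rates in total must allow USPS to break even and that each class of mail must cover its attributable costs while contributing to institutional costs, reduce to simple arithmetic. The following sketch illustrates those two checks using hypothetical figures (the class names, revenues, and costs are invented for illustration and are not actual USPS data):

```python
# Illustrative sketch (hypothetical figures, not actual USPS data) of the
# statutory pricing constraints: each class must cover its attributable
# costs and contribute to institutional (overhead) costs, and total
# revenue must cover total cost (the break-even requirement).

def class_contribution(revenue, attributable_cost):
    """Return a class's contribution to institutional costs.

    A negative value means the class fails to cover even its own
    attributable (direct and indirect) costs.
    """
    return revenue - attributable_cost

def breaks_even(class_revenues, class_attributable_costs, institutional_costs):
    """True if total revenue covers total attributable plus institutional cost."""
    total_revenue = sum(class_revenues.values())
    total_cost = sum(class_attributable_costs.values()) + institutional_costs
    return total_revenue >= total_cost

# Hypothetical example: three classes of mail, figures in $ billions.
revenues = {"First-Class": 36.0, "Standard": 19.0, "Periodicals": 2.2}
costs = {"First-Class": 16.0, "Standard": 12.0, "Periodicals": 2.0}
institutional = 27.0  # overhead not attributed to any class

contributions = {c: class_contribution(revenues[c], costs[c]) for c in revenues}
```

In this toy example each class makes a positive contribution and the system as a whole breaks even; in an actual rate case these checks are applied to PRC-reviewed cost, revenue, and volume data for each subclass.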
PRC has long interpreted this requirement to apply to subclasses of mail. USPS maintains data collection systems to help attribute USPS costs to various subclasses of mail, in part because USPS employees typically handle multiple subclasses of mail every workday. Such cost attribution is critical because USPS personnel costs represent more than three-quarters of USPS costs. In fiscal year 2004, USPS personnel costs included about $22 billion for clerks and mail handlers at mail processing and retail facilities, about $18 billion for carriers on city routes (predominantly in highly populated urban areas and their suburbs), about $5 billion for carriers on rural routes (predominantly in rural areas and suburbs not covered by city routes), and about $2 billion for postmasters, among other things. USPS also collects ratemaking data on the revenue, cost, and volume of each subclass of mail. About 900 USPS employees called data collectors gather ratemaking data on a full-time basis and about 2,000 USPS employees collect ratemaking data on a part-time basis in addition to their other duties. These personnel use laptop computers and digital scales to record ratemaking data at postal facilities located across the nation (see fig. 1). USPS has estimated that it budgeted about $73 million for the administration and collection of ratemaking data in fiscal year 2005. Although the quality of ratemaking data has long been recognized as critical, as the Study’s report noted, there are no definitive quality standards for postal ratemaking data. The Study concluded that the quality of data accepted by any given regulatory or antitrust entity is negotiated between the regulator and the company or companies subject to that regulation. 
According to the Study, data quality is a subjective issue that regulators judge in every rate review process, with the quality of data accepted by regulators depending on the availability of data, the cost/benefit of collecting additional data, and the seriousness of the issue under review. For the purpose of the Study, the criteria for the quality of ratemaking data were defined as having data that are “sufficiently complete” and “sufficiently accurate” for ratemaking, considering the costs involved in providing such data. Sufficiently complete data were defined as having enough of the necessary detail to enable the determination of each applicable rate. Sufficiently accurate data were defined as “free enough from error” to be used for this purpose. Error in this context referred to both “sampling error” (i.e., data precision associated with random error of data collected from randomly sampled employees or pieces of mail) and other sources of error (i.e., systematic error). The contractor that conducted the Study, A.T. Kearney, primarily focused on the five major data collection systems used for ratemaking, as well as some special studies used for this purpose. The Study found opportunities for improvement in three of the five data collection systems. The Study also reviewed the economic and statistical concepts that USPS uses for ratemaking and estimated the precision of key cost data for certain subclasses of mail, among other things. The Study specifically focused on data used to establish rates for subclasses of mail. The Study did not perform extensive field testing and data gathering, attempt to quantify the extent to which systematic error is present in ratemaking data, or review the ratemaking process. 
The Study’s report reached the following overall conclusion: “In general, within the scope of the Study, the quality of the data provided by the Postal Service for rate making has been sufficiently complete and accurate to calculate subclass costs, and thus, enable subclass rates to be based on reasonably reliable data, considering the costs to collect the data. This conclusion is based on the Study team’s assessment that the Postal Service asks the appropriate economic questions, uses the best available data, and applies an economically sound approach grounded in activity based concepts to calculate its subclass costs with reasonable statistical accuracy. This assessment is based on extensive economic, statistical and simulation analyses contained in the Study’s supporting Technical Reports.” At the same time, the Study’s report concluded that “improvements and enhancements can—and must—be made to ensure future data provided for rate making will be sufficiently complete and accurate.” The report stated that “The Study team has concerns regarding the quality of certain best available data used by the Postal Service to calculate its subclass costs. In some instances, these best available data were used regardless of their inherent level of error or their obsolescence.” Specifically, the report noted opportunities for improvement in three major data collection systems used for ratemaking as well as the need to replace ratemaking data from special studies that had been collected in the 1980s. USPS generally agreed with the Study’s findings. Over the past decade, Congress has debated comprehensive proposals to reform the nation’s postal laws that would, among other things, transform the ratemaking structure and mechanisms for oversight of ratemaking data quality. In the last session of Congress, proposed postal reform legislation was reported by USPS’s oversight committees (H.R. 4341 and S. 2468, 108th Cong., 2nd Sess., which were both entitled the Postal Accountability and Enhancement Act), but no further action was taken.
The legislation has been reintroduced in the current session (H.R. 22 and S. 662, which are both entitled the Postal Accountability and Enhancement Act) but has not yet been enacted. As we recently testified, comprehensive postal reform legislation continues to be needed in order to address the continuing financial, operational, governance, and human capital challenges that threaten USPS’s long-term ability to provide high-quality, universal postal service at affordable rates. USPS’s core business of First-Class Mail is declining; compensation and benefits costs are rising; and USPS is burdened with roughly $70 billion to $80 billion in financial liabilities and obligations, most of which are for unfunded retiree health benefits. We and the President’s Commission on the United States Postal Service (Presidential Commission)—which was established by President George W. Bush in 2002 to examine the future of USPS and develop recommendations to ensure the viability of postal services in the United States—have reported that comprehensive postal reform legislation is needed to minimize the risk of a significant taxpayer bailout or dramatic rate increases. Because comprehensive postal reform legislation has not been enacted and USPS continues to face formidable competition, cost, and other challenges, its transformation efforts and long-term outlook remain on our High-Risk List. In this regard, we have reported that USPS progress is hindered by limited flexibility and incentives for success, including limited flexibility to establish postal rates and poor incentives for providing quality ratemaking data. USPS took several key actions that it reported were responsive to the Study’s findings. USPS reported that these actions increased the accuracy and precision of ratemaking data. 
These USPS actions are summarized below: First, USPS made changes to IOCS and RPW to more accurately determine subclasses of mail in the postal system, including data on the revenue, volume, and weight of each subclass of mail, as well as to collect better information on the activities that postal employees are performing. Second, USPS conducted CCSTS to replace ratemaking data that had previously been collected in the 1980s, using a different data collection approach to collect more complete and consistent data on carrier delivery activities. Third, USPS substantially increased the quantity of data collected by RPW and TRACS to increase the precision of ratemaking data. Fourth, USPS revised and expanded its documentation of TRACS, which the Study had criticized as inadequate. USPS made changes to two major data collection systems used for ratemaking—IOCS and RPW—that USPS reported were responsive to the Study, in order to more accurately determine subclasses of mail in the postal system, including data on the revenue, volume, and weight of each subclass of mail, as well as to collect better information on the activities that postal employees are performing. According to USPS, the changes to the data collection methods for IOCS and RPW were among the most significant since these data systems were established more than 30 years ago. To implement the changes, USPS undertook detailed pilot testing over a multiyear period, which required substantial efforts on the part of both USPS staff and contractors. IOCS and RPW data are critical to postal ratemaking because these data are needed to estimate the costs for USPS to handle each subclass of mail. Although USPS timekeeping systems record the amount of employee time spent in each operation or work center, those systems do not track the subclasses of mail that employees handle, and also do not track the activities they are performing.
USPS employees typically handle multiple subclasses of mail each workday, such as letter carriers preparing their mail for delivery by manually sorting piles of mail into pigeonholes corresponding to each address on their route. USPS has reported that letter carriers spend 2 to 3 hours each workday in the office, with much of that time spent manually sorting mail (see fig. 2). For example, USPS has estimated that carriers manually sort about 44 billion flat-sized pieces of mail each year, including such mail as catalogs, magazines, and large envelopes. This activity incurs substantial costs because letter carriers represent about 4 in 10 USPS career employees. To understand how much time is required for letter carriers to manually sort each subclass of mail and perform other duties in the office, at randomly selected times throughout the year, IOCS records the characteristics of mail that randomly sampled carriers are handling and the activities these carriers are performing. IOCS uses similar procedures to collect data from postal employees working to sort and route mail at mail processing and other facilities (see fig. 3), as well as postal employees working to provide window service and perform other activities at post offices and other retail facilities (see fig. 4). Once IOCS produces data on the time employees spend handling each subclass of mail in various postal operations, these data are combined with other data, such as data on employee wages and benefits, to yield cost data (i.e., the in-office personnel costs attributable to each subclass of mail). USPS incurred $28 billion in personnel costs in fiscal year 2004 for employees working in postal facilities (i.e., mail processing, retail, delivery unit, and other facilities), which represented more than one-third of USPS costs for the fiscal year. In addition, IOCS provides data for the calculation of some indirect costs that are related to mail handling activities, such as mail processing equipment costs. 
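The cost attribution step described above, in which IOCS time-share estimates are combined with payroll data to yield the in-office personnel costs attributable to each subclass, can be sketched as simple proportional arithmetic. The shares and the cost pool below are hypothetical illustrations, not IOCS results:

```python
# A minimal sketch (hypothetical time shares and cost pool, not actual
# IOCS output) of attributing an in-office personnel cost pool to
# subclasses of mail in proportion to sampled employee time.

def attribute_costs(time_shares, cost_pool):
    """Distribute a personnel cost pool across subclasses by time share.

    time_shares need not sum to 1; time not tied to a specific subclass
    (or to any mail handling at all) is not attributed and remains part
    of institutional costs.
    """
    return {subclass: share * cost_pool for subclass, share in time_shares.items()}

# Hypothetical shares of sampled employee time, applied to the roughly
# $28 billion (fiscal year 2004) in-facility personnel cost pool.
shares = {"First-Class": 0.40, "Standard": 0.25, "Periodicals": 0.05}
attributed = attribute_costs(shares, 28.0)  # $ billions
unattributed = 28.0 * (1 - sum(shares.values()))  # remains institutional
```

The precision of the resulting subclass costs therefore depends directly on how accurately the random IOCS tallies estimate each subclass's true time share, which is why the Study focused on both the sampling design and the accuracy of subclass identification.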
USPS data collectors gather IOCS data in person at USPS facilities across the country. These data collectors gather information from sampled USPS employees about their activities and about the mail that they are handling (see figs. 5 and 6). Some IOCS data are gathered by data collectors via telephone interviews, generally from smaller facilities where it would not be cost-effective to collect data in person. IOCS data collection is a major effort, with more than 750,000 observations/interviews conducted annually. USPS has reported that it budgeted nearly $15 million to collect IOCS data in fiscal year 2005. The Study had concluded that opportunities existed to improve the quality of ratemaking data collected by IOCS, stating that such action should be a “first priority.” USPS reported that it was responsive to this finding by modifying the IOCS data collection instrument to more accurately record the subclasses of mail and to collect better information on the activities that postal employees are performing. In addition, according to USPS, the redesigned IOCS instrument better aligns clerk and mail handler activities with current postal operations, and thus improves the division of certain postal costs into cost pools. Formerly, the data collector determined and recorded the mail subclass on the basis of observations of certain characteristics of each sampled mail piece, such as its shape, weight, and markings (see fig. 7). Under the revised approach, the data collector instead records the detailed characteristics of the mail piece (its shape, weight, and markings) without making a subclass determination. After IOCS data are collected, these data are uploaded to a mainframe computer. Then, USPS uses a computer program to analyze the combined IOCS data on mail piece characteristics and determine the subclass for each mail piece.
Because IOCS obtains information on postal employee activities through both in-person observation and interviews and through telephone interviews, USPS redesigned the IOCS data collection instrument with a standard script to obtain information from postal employees in a more consistent manner. Previously, the IOCS data collection instrument listed the needed information but did not provide a script that asked questions in a standardized manner. Scripting questionnaires has long been considered a best practice and is the norm for surveys conducted by other organizations. On the basis of pilot tests, USPS officials told us that the new IOCS approach categorized mail pieces more accurately because it relies less on the data collector’s judgment and more on objective criteria built into the computer program that determines the mail subclass on the basis of the characteristics of each mail piece. USPS officials also said that pilot testing helped improve the script for IOCS data collection. These pilot tests are described below: IOCS verification studies: USPS pilot tested new versions of the IOCS data collection instrument, recording characteristics of actual mail pieces that were being handled by sampled USPS employees. These mail pieces were photocopied and sent to a USPS contractor who checked to see if the mail subclasses could be correctly categorized according to the information that was recorded. USPS staff double-checked this work. The results were used to test three versions of the instrument in an iterative manner, with each version being tested and the accuracy improving each time. IOCS comparison studies: USPS recorded mail piece characteristics from predeveloped examples (not actual mail) using different versions of the IOCS data collection instrument. USPS compared the results and reported that the final revised version of the instrument resulted in more accurate mail subclass determinations than the previous versions.
In addition to changing IOCS, USPS made some similar changes to the RPW data collection instrument to better estimate the revenue, volume, and weight of each subclass. Although USPS separately tracks postage revenues, postage stamps and postage meters can be used to send any subclass of mail. Therefore, data collectors observe sampled mail pieces at USPS facilities, and, for each mail piece, gather data on its characteristics, including the revenue (i.e., the amount of postage) and weight. RPW data are used to calculate the revenue, volume, and weight of each subclass of mail. As with IOCS, USPS modified RPW so that the subclass of mail could be determined more accurately through computerized analysis of detailed mail piece characteristics that are observed and recorded (see fig. 8). USPS pilot tested the new RPW approach, collecting RPW data in selected areas over a 1-year period using both the old and new data collection instruments. USPS compared the recorded data from these side-by-side tests and received feedback from field staff to refine the instrument, going through approximately 15 to 20 versions of the instrument. USPS has reported that this pilot testing method was the first of its kind for a major ratemaking data system. USPS conducted a new study called CCSTS to help attribute costs of city carriers—that is, letter carriers who deliver mail in highly populated urban and suburban areas where most deliveries are made to the door, curbside mailboxes, centrally located mailboxes, or cluster boxes. Data on city carrier delivery activities are needed for ratemaking because carriers typically deliver multiple subclasses of mail. USPS incurred about $13 billion in employee costs for the street activities of city carriers in fiscal year 2004, which represented about one-fifth of USPS costs (see fig. 9). CCSTS replaced four special studies on city carrier street activities that had been conducted in the 1980s. 
The Study had criticized these special studies as outdated and imprecise. PRC and others had also criticized the age of the data collected by these special studies and the methodology of the studies. Recognizing the need for better data in this area, USPS conducted CCSTS in 2002. USPS has reported that CCSTS provided both more current and precise data, as well as a better methodological framework for analyzing city carrier costs than the four special studies that CCSTS replaced. USPS also has reported that CCSTS will be less costly to update than the four special studies that CCSTS replaced, thereby facilitating further updating of CCSTS in the future. In developing CCSTS, USPS reported that it was mindful of several drawbacks of the four former special studies of city carrier street activities. First, USPS stated that the former special studies yielded inconsistent and incomplete data, explaining that they selectively reviewed different aspects of city carrier street activity, collected data at different times, and used different data collection methods. Therefore, USPS designed CCSTS as a single study to collect more complete and consistent data on all city carrier street activities. Second, the former special studies collected data that were not well suited for use with advanced data analysis techniques needed to produce ratemaking data. Therefore, USPS designed CCSTS to be compatible with advanced data analysis techniques. Third, the former special studies generated imprecise ratemaking data for the costs of certain mail subclasses, largely because the expense of those studies had limited the quantity of data that was collected. Therefore, USPS designed CCSTS to collect a larger quantity of data so that its data would be more precise. To develop CCSTS, USPS conducted a pilot study that tested CCSTS on a smaller scale. USPS used the pilot study results to refine CCSTS, which was conducted in May and June 2002. 
CCSTS randomly sampled over 160 ZIP Codes nationwide and recorded data during a 2-week period on the activities of more than 3,500 city carriers delivering mail to addresses in these ZIP Codes. USPS analyzed CCSTS data using advanced data analysis techniques involving econometric models and performing statistical tests to estimate how changes in mail volume affected city carrier street time and the associated costs. As a result of using CCSTS to replace the four former special studies, USPS reported that it attributed a somewhat higher percentage of city carrier street time costs to specific subclasses of mail (37 percent, up from 30 percent), thus diminishing the remaining institutional costs (63 percent, down from 70 percent). To understand why most carrier costs continue to be categorized as institutional, it is important to note that the universal service commitment to provide mail delivery requires carriers to traverse their routes each day, regardless of whether a particular subclass or volume of mail is being delivered. The Study had raised concerns about the precision of ratemaking data, which are affected by the quantity of data collected from randomly sampled postal employees and pieces of mail, as well as by the precision of data on city carrier delivery activities. USPS reported that it took responsive actions by increasing the quantity of ratemaking data collected by RPW and TRACS, which are two of the five major data collection systems used for ratemaking. TRACS randomly samples long-distance mail transportation segments, such as airplane flights, truck trips, and trips of freight trains that carry mail. Data collectors observe a random sample of mail for each segment and record its characteristics, including the subclass of mail. TRACS data are used to help attribute about $4 billion in USPS long-distance transportation costs (see fig. 10). 
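The econometric question at the heart of the CCSTS analysis described above, how city carrier street time changes as mail volume changes, can be illustrated with a toy single-variable regression. The route-day observations below are synthetic, not CCSTS data, and an actual volume-variability analysis uses far richer econometric models; the sketch only shows why time that does not vary with volume remains institutional:

```python
# A toy illustration (synthetic data, not CCSTS results) of estimating
# the volume-variable portion of carrier street time: regress street
# minutes on pieces delivered, and treat the intercept (time spent
# traversing the route regardless of volume) as institutional.

def ols_slope_intercept(x, y):
    """Ordinary least squares fit of y = a + b*x for a single regressor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical route-day observations: (pieces delivered, street minutes).
volume = [1000, 1200, 1400, 1600, 1800]
minutes = [300, 310, 320, 330, 340]

fixed_time, min_per_piece = ols_slope_intercept(volume, minutes)

# Volume-variable share of street time, evaluated at the mean volume.
mean_vol = sum(volume) / len(volume)
mean_min = sum(minutes) / len(minutes)
variable_share = min_per_piece * mean_vol / mean_min
```

In this synthetic example most street time is in the intercept, echoing the point in the report that the universal service commitment requires carriers to traverse their routes each day regardless of volume, so most street-time costs remain institutional.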
According to USPS, the large increase in the quantity of RPW and TRACS data has improved the precision of ratemaking data. Increasing data precision can be particularly beneficial to the quality of cost, revenue, and volume data for subclasses of mail with smaller volumes. First, USPS increased the number of RPW tests from about 56,000 in fiscal year 2003 to about 136,000 in fiscal year 2004—an increase of 142 percent. USPS also revised the RPW sampling methodology, which according to USPS, resolved some technical issues identified by the Study and further contributed to data precision. According to USPS, these changes improved the precision of all RPW data as well as the precision of key ratemaking data for each subclass of mail. Second, USPS increased the number of transportation segments randomly sampled by TRACS each fiscal year from about 10,000 in fiscal year 2000 to about 17,000 in fiscal year 2004—an increase of 65 percent. USPS also reallocated the quantity of data collected for each mode of transportation (i.e., air, highway, and rail) to further increase the precision of subclass cost data. According to USPS, this change was responsive to the Study, which had found that the limited quantity of TRACS data collected for the highway transportation mode resulted in less precise ratemaking cost data, particularly for some subclasses of mail, such as Regular Rate Periodicals (e.g., news magazines) and Parcel Post. In addition, as previously described, USPS designed CCSTS to yield more precise data by collecting a larger quantity of data than the data that CCSTS replaced. USPS noted that this change was responsive to the Study, which found that the four former special studies were highly imprecise. USPS revised, updated, and expanded the documentation for TRACS, which USPS reported was responsive to the Study and was an area that USPS recognized needed improvement. 
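The precision gains from the sample-size increases described above follow from basic sampling theory: for a simple random sample, the standard error of an estimate shrinks roughly in proportion to one over the square root of the sample size. The sketch below applies that rule of thumb to the reported RPW and TRACS increases; it is an approximation only, since the actual designs are complex multistage samples rather than simple random samples:

```python
import math

# Rule-of-thumb sketch of the precision effect of the reported sample-size
# increases. Assumes standard error scales as 1/sqrt(n), which holds for
# simple random sampling and is only an approximation for the actual
# multistage RPW and TRACS sample designs.

def relative_precision_gain(n_old, n_new):
    """Factor by which the standard error shrinks when n_old grows to n_new."""
    return math.sqrt(n_old / n_new)

# RPW tests rose from about 56,000 (FY2003) to about 136,000 (FY2004).
rpw_factor = relative_precision_gain(56_000, 136_000)

# TRACS sampled segments rose from about 10,000 (FY2000) to about 17,000 (FY2004).
tracs_factor = relative_precision_gain(10_000, 17_000)
```

Under this approximation the 142 percent increase in RPW tests would cut standard errors by roughly a third, with a smaller proportional gain for TRACS; the reallocation of TRACS samples across transportation modes could add further precision for specific subclasses beyond what sample size alone predicts.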
According to USPS, the revised TRACS documentation improved the transparency and administration of this data collection system. The Study’s report had found TRACS documentation to be deficient, particularly with respect to the documentation of TRACS sampling and estimation methodology. Consequently, the Study’s team reported that, within the Study’s time frame (June 1997 through April 1999), the team did not have the opportunity to understand some parts of the sampling design. The Study’s report observed that the availability of improved documentation of TRACS estimation procedures is important and noted the need for clear and complete documentation on the TRACS sample design. The report further noted that once the TRACS sample design is completed, USPS should evaluate and adjust the sample to improve the precision of TRACS data. USPS proceeded to expand TRACS sampling and estimation documentation and rewrote the handbook for TRACS data collection. The revised documentation was used in the 2000 rate proceedings, providing greater transparency of this data collection system, and was commended by PRC. USPS further revised TRACS documentation for the 2005 rate case. In addition, as previously described in this report, USPS also evaluated and adjusted the TRACS sample to improve the precision of TRACS data. The Presidential Commission and we have found that major changes are needed to the ratemaking process. In particular, the Presidential Commission found that the current ratemaking process is far too cumbersome and time consuming, with rate changes taking as long as 18 months.
The Presidential Commission concluded that the current ratemaking process creates “an impossible situation for an institution charged with the responsibility of acting in a businesslike manner.” Our past work also reached a similar conclusion that “major changes are needed in this area,” and that improvements in the postal ratemaking structure will be a “fundamental component of a comprehensive transformation.” Proposed postal reform legislation being considered in the 109th Congress (H.R. 22 and S. 662, 109th Cong., 1st Sess.) would create new oversight mechanisms and enhanced regulatory authority over the quality of ratemaking data. The postal regulator would be required to prescribe what ratemaking data USPS must annually report and review that data in order to determine whether USPS had complied with the requirements of the new ratemaking structure. The postal regulator would be provided with the authority to initiate proceedings to improve the quality of ratemaking data; the authority to subpoena USPS documents and officials; the authority to order USPS to take appropriate actions to comply with laws and its regulations; and the authority to impose sanctions for noncompliance, including fines for deliberate noncompliance. The postal regulator could obtain court orders to enforce its subpoenas, orders, and sanctions. The proposed legislative changes would address persistent problems under the existing statutory ratemaking structure, which, as we have reported, has enabled long-standing deficiencies in ratemaking data quality and unresolved methodological issues to persist. Thus, the proposed legislative changes would likely lead to improvements in the quality of ratemaking data. However, if postal reform legislation is enacted, the outcome would likely depend on how the postal regulator would use its discretion to define and implement the new ratemaking structure. 
Key implementation questions would remain, including what regulatory criteria and requirements would apply to ratemaking data. The Presidential Commission concluded that for USPS to operate in a more businesslike fashion, its managers must have greater flexibility to manage and innovate, including in the ratemaking area. However, the Presidential Commission also stated that with this latitude comes the need for enhanced oversight from an independent postal regulator endowed with broad authority. Thus, the Presidential Commission concluded that the current ratemaking process should be abolished and replaced with a more streamlined structure that continues to impose rigorous ratemaking standards through independent regulatory oversight that would ensure that the outcome cannot be unduly influenced through the selective provision of information to the regulator. The Presidential Commission stated that the postal regulator must have access to the most reliable and current information possible to ensure financial transparency and enable the postal regulator to make fully informed determinations. To this end, the Presidential Commission recommended that the postal regulator have the authority to request accurate and complete financial information from USPS, including through the use of subpoena powers, if necessary. We have also reported on how the statutory structure has led to persistent problems and issues regarding the quality of ratemaking data. Specifically, we found that the current ratemaking structure has poor incentives that impede progress in improving data quality, including the incentives described below: Poor incentives to provide quality data: Current law gives USPS opportunities to seek advantage in litigious rate cases by controlling what data are collected and how they are analyzed and reported. PRC cannot subpoena USPS or order USPS to collect or update data. 
For example, the Study found that key ratemaking data had not been updated for many years, but these data were used regardless of their obsolescence. Poor incentives for resolving recurring issues: Statutory due process rules have enabled parties to repeatedly litigate complex data quality and cost attribution issues that have previously been considered. In addition, as we have reported, the zero-sum nature of the break-even requirement provides powerful incentives for parties to repeatedly attempt to shift postal costs in ways that serve their self-interests. Specifically, we have reported that when USPS proposes changes to domestic postal rates and fees, USPS (1) projects its “revenue requirement” for the “test year” (a fiscal year representative of the period of time when the new rates will go into effect), based on the total estimated costs plus a provision for contingencies, and a provision, if applicable, for the recovery of prior years’ losses and (2) proposes rates and fees that are estimated to raise sufficient revenues to meet USPS’s revenue requirement. Thus, as the Institute of Public Administration reported more than a decade ago, “The current ratemaking structure is premised on the concept of a static pie, which represents the revenue requirement, and focuses on who is going to pay what share of the money (i.e., ratemaking is treated as a zero-sum game).” The institute further reported that various interest groups have been organized that represent certain classes of mail in rate cases. These groups typically advocate cost attribution methods that are in their immediate self-interest, such as alternative methods that would result in fewer costs attributed to the class of mail they represent. USPS and private delivery firms have taken opposing positions on cost attribution methods for subclasses of mail, such as Priority Mail and Parcel Post. As a result, the same cost attribution issues have been debated for many years. 
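The revenue-requirement arithmetic described above, test-year costs plus a contingency provision plus recovery of prior years' losses, can be sketched directly. The figures below are hypothetical, chosen only to show how the pieces combine into the revenue that proposed rates must be estimated to raise:

```python
# A simplified sketch (hypothetical figures) of the revenue requirement
# that drives a rate case: total estimated test-year costs, plus a
# contingency provision, plus recovery of prior years' losses, determine
# the revenue that proposed rates and fees must raise to break even.

def revenue_requirement(test_year_costs, contingency_rate, prior_year_losses):
    """Total revenue that proposed rates and fees must be estimated to raise."""
    return test_year_costs * (1 + contingency_rate) + prior_year_losses

# Hypothetical test year: $66B estimated costs, a 1 percent contingency
# provision, and $1B of prior-year losses to recover ($ billions).
required = revenue_requirement(66.0, 0.01, 1.0)
```

Because the requirement fixes the total to be raised, any change in how costs are attributed among classes shifts shares of that fixed total between mailer groups, which is the "static pie" dynamic the Institute of Public Administration described.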
Cost attribution issues are often a key reason why rate cases are so lengthy and litigious because these issues are complex and their disposition can directly affect postal rates. Although cost attribution issues are central to postal ratemaking, we have reported that the need to address such issues in every rate proceeding is inconsistent with providing USPS with greater flexibility to change rates under a streamlined ratemaking process.

Poor incentives for appropriate cost attribution: USPS has a disincentive to maximize the attribution of costs to specific subclasses of mail that must cover their costs because USPS loses pricing flexibility as more costs are attributed. Because ratemaking data and analyses of these data are necessary to attribute costs, the quality of ratemaking data can affect the degree of cost attribution. In this regard, the PRC Chairman recently testified that the proposed postal regulator should have the means to examine all of the costs currently treated as institutional to assure Congress, USPS, and the public that all costs that can be attributed, are attributed. He concluded that “I believe there is room for improvement and would welcome the responsibility and authority to achieve it.” The postal regulator would be required to issue regulations prescribing what ratemaking data USPS would be required to report (see table 1). Despite the quantity of material submitted in rate cases, PRC has reported that its ability to carry out its responsibilities has been hindered in some rate cases because of deficiencies in the completeness and accuracy of ratemaking data provided by USPS. For example, PRC reported that its ability to consider USPS’s proposed rates in the 1994 rate case was hindered because the supporting ratemaking data were deficient.
PRC said USPS omitted data that had previously been provided in rate cases, such as new or updated studies of the sort that were necessary to develop rates for worksharing discounts that mailers receive in exchange for performing activities that are estimated to reduce USPS costs. As a result, PRC reported it was unable to develop worksharing discounts that tracked the associated USPS cost savings, which PRC reported should be based on current data to set appropriate discounts. PRC said that the absence of these studies was particularly significant because USPS operations had been in a state of major transition since the past rate case, but the former worksharing cost studies—and the worksharing discounts that had resulted—reflected former mail processing methods. In this regard, the proposed legislation would specifically require USPS to provide worksharing data on an annual basis—a requirement not included in current law. Further, the proposed legislation would provide the postal regulator with enhanced authority to obtain these data if USPS does not initially provide them. Specifically, the postal regulator would be provided with subpoena power and the power to obtain court orders to compel USPS compliance with the reporting requirements—powers not provided to PRC by current law. The proposals for enhanced regulatory authority are discussed further later in this report. Another benefit of the proposed reporting requirements would likely be the end of a long-standing methodological dispute in which USPS prepares two sets of cost data for each regulatory proceeding—one according to its preferred methodology for analyzing mail processing costs, and one according to PRC’s preferred methodology for analyzing these costs. The different methods produce different estimates for USPS savings resulting from worksharing discounts that currently apply to three-quarters of total mail volume, and thus the choice of analysis method could affect these discounts. 
The current statutory ratemaking structure allows this dispute to continue because it provides due process by enabling all interested parties to raise whatever issues they wish, regardless of how many times the same issues may have been considered in the past. USPS can repeatedly raise issues by building them into its initial proposals for changes to postal rates. For example, USPS has repeatedly built its preferred analysis method for mail processing costs into its rate proposals, even though PRC has repeatedly rejected USPS’s method. In each rate proceeding, USPS also submitted parallel data using the PRC analysis method, and both sets of data were considered by PRC and other stakeholders participating in the rate cases. The proposed requirements could resolve similar situations by mandating that the postal regulator issue regulations for how USPS cost, revenue, and rate data are to be analyzed in order to demonstrate compliance with ratemaking requirements, including newly proposed statutory requirements for worksharing discounts (see app. III for a listing of proposed ratemaking requirements). In this regard, the House bill is the most specific in that it requires the postal regulator to prescribe methodologies for analyzing ratemaking data. Further, both bills would eliminate current statutory rules for due process and stakeholder involvement in rate proceedings; the postal regulator would be given the flexibility to establish new rules in this area under its regulatory authority. The proposed legislation would require the USPS Inspector General to audit the ratemaking data included in the USPS annual reports (see table 2). For example, under the House bill, the USPS Inspector General would be required to regularly audit USPS data collection systems and procedures used to prepare the annual reports.
In contrast to the proposed requirements for regular Inspector General oversight of these USPS data collection systems, the current ratemaking structure relies on ad hoc regulatory oversight conducted during rate cases that only USPS can initiate. Specifically, in the 34 years since the Postal Reorganization Act of 1970 was enacted, USPS has initiated 13 rate cases, including the 2005 rate case. For example, the 2005 rate case was preceded by the 2001 rate case, which ended in a negotiated settlement and thus involved limited regulatory review of USPS ratemaking data and its data collection systems. When USPS filed the 2005 rate case, it requested expedited review to consider a proposed settlement, which, if accepted, could again result in limited regulatory review of USPS ratemaking data and data collection systems. Thus, the case-by-case approach to reviewing ratemaking data quality under the current ratemaking structure, combined with the infrequency of these reviews, has limited oversight of USPS ratemaking data and its data collection systems that generate these data. When oversight has occurred, the 10-month statutory deadline for rate cases, combined with the time and expense of litigating data quality issues, has limited the scope and depth of the data quality issues reviewed in rate cases. In our view, such limited external oversight is one reason why problems with the quality of ratemaking data have persisted. For example, in the 1994 rate case, PRC called for an examination of USPS costing systems used for ratemaking, especially IOCS, citing methodological concerns, reductions in the quantity of ratemaking data that USPS collected, and major changes in USPS operations, among other things. In spring 1995, then PRC Chairman Edward Gleiman testified before the former House Subcommittee on the Postal Service about his concerns regarding the quality of ratemaking data. This led to the Chairman of that Subcommittee, Representative John M.
McHugh, requesting the Data Quality Study. The request for the Study, its progress, and USPS follow-up have been the subjects of continuing congressional oversight over the past decade. The Study validated the need for improving IOCS and other USPS data collection systems and special ratemaking studies. To USPS’s credit, USPS reports making major efforts that were responsive to the Study. However, the congressional oversight provided by the Study was not envisioned by the Postal Reorganization Act of 1970, which separated Congress from the ratemaking process. The Study was a unique event that required extraordinary involvement by Congress, USPS, PRC, and the contractor that conducted the Study. Under the proposed legislation, the postal regulator would be required to annually review USPS ratemaking data reports and determine whether USPS had complied with the requirements of the new ratemaking structure (see table 3). In cases of noncompliance, the postal regulator would be required to order USPS to take appropriate action. Regulatory compliance reviews would be a critical element of the new ratemaking structure, which is intended to balance increased USPS ratemaking flexibility with enhanced transparency, oversight, and accountability. Specifically, under the proposed legislation, the postal regulator would be charged with developing a new, streamlined ratemaking process that provides USPS with additional flexibility. The mandated compliance reviews would (1) verify that USPS rates are in compliance with applicable requirements on an annual basis and (2) require regulatory action to correct any instances of noncompliance. For example, the proposed legislation would require each USPS competitive product (e.g., Express Mail and Priority Mail) to cover its attributable costs. In order for the postal regulator to verify compliance with this cost-coverage requirement, data would be needed on the attributable costs and revenues of each USPS competitive product. 
The quality of this ratemaking data would be vital because the regulator would be required to address instances of noncompliance through certain actions, such as ordering USPS to adjust the rate of a competitive product that was not covering its costs or even to discontinue providing the loss-making product. In contrast with current law, which depends on having USPS initiate rate cases for regulatory action to occur, the proposed compliance process triggers annual regulatory action based on actual results for each fiscal year. For example, under current law, the requirement that each subclass of mail cover its costs is addressed in rate cases—which can be years apart from each other. Because postal revenues and costs change over time, a subclass of mail may not cover its costs in some years between rate cases. This situation may not be addressed until the next rate case. As previously described, the proposed legislation specifies that if a subclass of mail fails to cover its annual costs as required, the postal regulator would be required to order USPS to take appropriate action to come into compliance. The postal regulator would have the specific authority to order USPS to change the postal rates for that subclass of mail so that its revenues would begin to cover its costs. Proposed postal reform legislation would authorize the postal regulator to initiate proceedings to improve the quality of ratemaking data, including data on the attribution of costs and revenues to postal products (see table 4). This mechanism would be needed because the legislation would abolish the current statutory ratemaking process, which has been the primary mechanism for oversight of data quality issues. Authorizing the postal regulator to initiate data quality proceedings as needed would shift from reactive oversight in USPS-initiated rate proceedings to proactive oversight by the postal regulator. 
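The annual cost-coverage check described above can be sketched simply. The product names and dollar figures below are hypothetical, invented for illustration:

```python
# Hypothetical sketch of the proposed annual compliance review: flag any
# competitive product whose revenues fail to cover its attributable costs,
# making it a candidate for a regulatory order to adjust its rates or
# discontinue the product.

def noncompliant_products(results):
    """results maps product name -> (revenue, attributable_cost)."""
    return [name for name, (revenue, cost) in results.items() if revenue < cost]

fy_results = {
    "Express Mail": (900.0, 850.0),     # $ millions: covers its costs
    "Priority Mail": (4600.0, 4700.0),  # $ millions: falls short
}
flagged = noncompliant_products(fy_results)  # -> ["Priority Mail"]
```

The quality of the underlying attributable-cost data determines whether such a check is meaningful, which is why the proposed legislation pairs the compliance requirement with data-quality oversight.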
The proposed statutory mechanism to consider data quality and related cost attribution issues has a number of potential benefits, including the following: Providing a mechanism for considering data quality issues with adequate time and attention: In rate cases, PRC reviews comprehensive proposals, voluminous documents, and complex issues, leaving limited time to consider data quality and related cost attribution issues. As USPS’s General Counsel has acknowledged, it is difficult for rate case participants to handle cost attribution issues involving ratemaking data and other issues within the statutory 10-month time frame for rate cases. Revisiting data quality issues as needed: Data quality is a moving target as postal operations, data needs, and data collection technologies evolve over time. Thus, it is natural for data quality issues and opportunities for improvement to arise over time. Providing the postal regulator with enhanced authority and enforcement powers is consistent with the overall intent of the proposed postal reform legislation to balance providing USPS with greater pricing flexibility with enhancing transparency, oversight, and accountability to protect USPS customers and competitors. Under the proposed postal legislation, the postal regulator would be provided with enhanced authority and enforcement powers compared with those of PRC. Specifically, the postal regulator would be provided with the authority to order USPS to take appropriate actions to comply with laws and its regulations and could impose sanctions for noncompliance, including fines for deliberate noncompliance (see table 5). Regulatory orders could result from the required annual compliance reviews of ratemaking data conducted by the postal regulator, previously discussed, or from complaints that could be initiated by the regulator or any interested party. The federal courts would enforce the postal regulator’s orders and sanctions. 
The postal regulator would be provided with subpoena power. The regulator’s subpoenas would be enforced by federal courts, which could punish noncompliance as a contempt of court. In this regard, PRC has described the limits of its authority under current law:

“Past Postal Rate Commission decisions have frequently contained requests for additional data and analysis in specific areas. Sometimes these requests were honored but all too often they have been ignored. Under the existing statute the Commission does not have the authority to compel USPS to collect specific data or perform needed studies.”

The proposed legislative changes previously described would address persistent problems under the existing statutory structure, which, as we have reported, has enabled long-standing deficiencies in ratemaking data quality and unresolved methodological issues to persist. Under the current structure, regulatory oversight is conducted during rate cases that only USPS can initiate, which has limited the frequency, scope, and depth of oversight of USPS ratemaking data and its data collection systems that generate these data. The legislation would eliminate key disincentives for ratemaking data quality, including the litigious ratemaking process (which provides incentives for USPS and others to gain an advantage through the collection and analysis of ratemaking data), the break-even requirement that creates incentives to shift costs from one subclass of mail to another, and the lack of adequate oversight mechanisms to address data quality issues. The legislation also would enhance regulatory authority so that the necessary transparency, oversight, and accountability can take place regarding ratemaking data quality. Thus, the proposed legislative changes would likely lead to improvements in the quality of ratemaking data over time and at some cost. However, if postal reform legislation is enacted, the outcome would likely depend on how the postal regulator would use its discretion to define and implement the new ratemaking structure.
Key implementation questions would remain, including what regulatory criteria and requirements would apply to ratemaking data. Should the legislation be enacted, implementation will be critical to achieving the intended results because the legislation would provide the postal regulator with great flexibility to establish the new ratemaking structure and implement provisions relating to data quality. This flexibility would enable the postal regulator to deal with changing circumstances, but it also creates substantial uncertainty and risks. Key implementation questions might include the following:

What criteria would the new postal regulator use for evaluating the quality, completeness, and accuracy of ratemaking data?

To what extent can USPS costs be rationally attributed to postal products and services, in accordance with sound economic principles?

How would the postal regulator balance the need for high-quality ratemaking data with the time and expense involved in obtaining the data?

How would any proceedings to improve the quality of ratemaking data be structured by the postal regulator? How could USPS and other stakeholders participate in such proceedings?

Could the postal regulator use its enhanced authority over ratemaking data to require USPS to collect and/or update specific ratemaking data? If so, would that include regulatory authority over the quantity of data collected and the methods of data collection (e.g., in-person data collection v. telephone data collection)?

We received written comments on a draft of this report from the Chairman of the Postal Rate Commission in a letter dated July 18, 2005, and the Controller and Vice President of the U.S. Postal Service via e-mail on the same date. Their comments are summarized below, and the PRC Chairman’s comments are reproduced as appendix II. In addition, PRC and USPS officials provided technical and clarifying comments, which were incorporated where appropriate.
In its comments, USPS stated:

“In light of the record of success under the current system, the proposed legislation relating to the requirements for reporting ratemaking data in practice is not likely to lead to the breakthrough improvements in the quality of the ratemaking data systems without a significant increase in costs to the stakeholders. [We are] concerned, furthermore, that the proposed legislative changes may sacrifice the checks and balances and the effective process of data review and refinement that have evolved under the current system.”

We disagree with USPS’s first comment that the current ratemaking process has worked “remarkably well” since postal reorganization. We continue to believe that major changes are needed to the ratemaking structure. As described in our report, the current ratemaking structure has enabled long-standing deficiencies in ratemaking data quality to persist. Further, we have reported that the ratemaking process is a litigious, costly, and lengthy process that can delay needed new revenues. In this regard, USPS’s comments appear to be inconsistent with the April 14, 2005, testimony of the Postmaster General that “today’s ratemaking process is both costly and time consuming” and needs major change, as well as the numerous criticisms that USPS has made of the ratemaking process over the years. We continue to believe that comprehensive postal reform legislation is urgently needed, including improvements to the regulation and oversight of postal rates. Second, we believe the need for reform is not diminished by comparisons of ratemaking data quality with that of foreign postal administrations, which have different regulatory environments. Indeed, some foreign countries that are implementing postal reform are grappling with the need to improve ratemaking data quality. In our view, it is more appropriate to consider what level of ratemaking data quality is appropriate for the United States.
Third, regarding USPS’s views about achieving “breakthrough improvements” in ratemaking data quality, in our view, the proposed legislative changes would likely lead to improvements in the quality of ratemaking data over time and at some cost. The extent of such improvements, and what the associated costs may be, would depend on how the legislation is implemented. In our view, enhanced regulatory authority over ratemaking data would enable the necessary transparency, oversight, and accountability in this area. Ratemaking data are fundamental to setting postal rates that touch the lives of all Americans and affect the financial viability of USPS and the mailing industry. These data are essential to establishing fair and equitable rates. In comments on our draft report, the PRC Chairman said that the report had clearly presented USPS actions taken with respect to the Study recommendations. He commended USPS for taking steps to improve its ratemaking data systems and the data upon which postal rates are based. At the same time, he expressed concerns about ratemaking data quality and said that USPS can and should be taking more action to improve the quality of ratemaking data. He also said that the report aptly summarizes the potential of postal reform legislation to improve ratemaking data quality. The PRC Chairman said USPS had not addressed many significant potential sources of systematic error in its ratemaking data systems, including IOCS. He explained that USPS had not addressed issues of systematic error in IOCS data that have been a major concern in prior rate cases. He also said that IOCS data had become less precise due to reductions in the quantity of IOCS data implemented prior to the Study. He also expressed concerns regarding the precision of TRACS data, while complimenting USPS for improving the precision of RPW data. 
On another matter, he expressed concerns regarding the quality of mail processing data produced by the Management Operating Data System (MODS), a system that the Study did not assess. Regarding postal reform, he said that the proposed legislation reflects a consensus within the postal community that new tools are needed to increase USPS’s financial transparency. He concluded that PRC agreed that the proposed legislative changes would likely lead to improvement in the quality of ratemaking data. To put PRC’s comments about IOCS data precision into context, the Study found that the reductions in IOCS sample size resulted in a minimally lower level of precision in overall subclass cost estimates and made no recommendations on the quantity of IOCS data that should be collected in the future. However, the Study did raise concerns about the precision of some ratemaking data, and USPS’s responsive actions are described in our report. Looking ahead, we encourage USPS and PRC to work together—as they did during the Study—to better understand technical issues regarding data precision, using a statistical model that the Study developed to assess data precision and that USPS is working to refine. More generally, we encourage USPS and PRC to use every opportunity to maximize progress on improving the quality of ratemaking data, such as working to improve data quality outside the ratemaking process. As the Study concluded: “Providing sufficiently complete and accurate data for ratemaking is an evolutionary process that requires the Postal Service to continually improve the quality of its ratemaking data and related data systems.” Regarding PRC’s comments on MODS data issues, they were outside the scope of our report, which focused on ratemaking data systems, city carrier cost data, documentation, and data precision that were assessed by the Study. These issues are part of a broader set of mail processing cost issues that PRC and USPS have long disagreed over.
Current law allows this disagreement and others to continue by enabling all interested parties to raise whatever issues they wish in rate cases, regardless of how many times the same issues may have been considered in the past. However, as previously discussed, the legislation would likely lead to resolution of this long-standing methodological dispute. We are sending copies of this report to the Ranking Minority Member of the House Committee on Government Reform, the Chairman and Ranking Minority Member of the Senate Committee on Homeland Security and Governmental Affairs, Senator Thomas R. Carper, the Postmaster General, the Chairman of the Postal Rate Commission, and other interested parties. We will make copies available to others on request. This report will also be available on our Web site at no charge at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or at siggerudk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report included Gerald P. Barnes, Assistant Director; Kenneth E. John; Anna Bonelli; Kevin Bailey; Jay Cherlow; Karen O’Conor; Richard Rockburn; and Walter Vance. Our objectives were to (1) describe key U.S. Postal Service (USPS) actions that were responsive to the 1999 Data Quality Study (the Study) to improve the quality of ratemaking data and (2) discuss possible implications of postal reform legislation for ratemaking data quality. To address the first objective, we identified key USPS actions taken that USPS reported were responsive to the Study by reviewing the Study’s report that prioritized its findings; reviewing USPS and Postal Rate Commission (PRC) documents, including USPS progress reports that prioritized actions and PRC documents that summarized concerns about data quality; and interviewing USPS officials responsible for collecting ratemaking data. 
We focused our work primarily on USPS’s key actions to enhance three of its five major data collection systems used for ratemaking because the Study’s report noted that these systems had opportunities for improvement. The three systems include the In-Office Cost System (IOCS), which produces data on the time that postal employees spend handling each subclass of mail in postal facilities; the Revenue, Pieces, and Weight (RPW) system, which produces data on the revenue, volume, and weight of each subclass of mail; and the Transportation Cost System (TRACS), which produces data on long-distance transportation of mail subclasses using trucks, airplanes, and freight trains. To put IOCS into context, as previously noted, in fiscal year 2004, USPS incurred about $28 billion in personnel costs for employees working in postal facilities (mail processing, retail, delivery unit, and other facilities), which comprised more than one-third of USPS costs of about $66 billion for the fiscal year. To put TRACS into context, TRACS was used to help attribute about $4 billion in fiscal year 2004 costs for long-distance transportation of mail using trucks, airplanes, and freight trains. We also focused our work on another USPS key action to conduct a new special study to replace four USPS special studies because the Study’s report found that data collected by these studies needed improvement. USPS’s new special study is called the City Carrier Street Time Study (CCSTS), which produced data on the activities of city carriers—that is, letter carriers who deliver mail in highly populated urban and suburban areas where most deliveries are made to the door, curbside mailboxes, centrally located mailboxes, or cluster boxes. We did not assess the extent to which USPS’s actions affected the quality of these ratemaking data.
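To make the IOCS description above concrete, the following sketch shows how a sample of employee-activity "tallies" could be used to attribute a labor cost pool to subclasses of mail. The tallies, subclass names, and simple proportional method are illustrative assumptions, not USPS's actual estimation procedure:

```python
# Illustrative sketch of sample-based cost attribution: distribute a labor
# cost pool to subclasses in proportion to the share of sampled tallies in
# which employees were handling each subclass. All inputs are hypothetical.
from collections import Counter

def attribute_labor_cost(tallies, cost_pool):
    """Return each subclass's proportional share of the cost pool."""
    counts = Counter(tallies)
    total = sum(counts.values())
    return {subclass: cost_pool * n / total for subclass, n in counts.items()}

# 100 hypothetical tallies from an IOCS-style sample of in-office activity.
tallies = ["First-Class"] * 60 + ["Standard"] * 30 + ["Periodicals"] * 10
attributed = attribute_labor_cost(tallies, cost_pool=28e9)  # $28B pool
```

A larger sample narrows the sampling error in each subclass's tally share, which is why the quantity of IOCS data collected bears on the precision of subclass cost estimates.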
Aside from these ratemaking data, we included some USPS data in this report for background purposes, such as audited USPS accounting data on total USPS costs and revenues for fiscal year 2004, data on USPS personnel costs, and USPS estimates of budgeted costs for collecting ratemaking data in fiscal year 2005. We obtained information to describe the key actions taken by USPS by reviewing relevant documents, including USPS documents previously listed and additional USPS documents not submitted in rate cases, such as documentation of IOCS changes that were implemented in October 2004. We also requested and received USPS documentation of many of the reported improvements made to IOCS, RPW, and TRACS as well as interviewed USPS officials responsible for these systems. To gain an understanding of how the ratemaking data are collected, we visited some USPS mail processing facilities and post offices located in the Washington, D.C., area to observe the collection of ratemaking data, including IOCS, RPW, and TRACS data. These locations were judgmentally selected on the basis of the availability of USPS data collection personnel and their geographic proximity to our headquarters in Washington, D.C. Further, at post offices, we observed activities of letter carriers picking up their mail and organizing it for delivery, as well as delivering the mail on the street. To address the second objective, we reviewed proposed postal reform legislation, current postal laws and regulations, and other documents. Specifically, we drew from our past work in this area, reviewed the proposed legislation (H.R. 22 and S. 662, 109th Cong., 1st Sess., both entitled the Postal Accountability and Enhancement Act) and documents pertinent to legislative intent, such as records of hearings and past versions of the legislation with accompanying committee reports. 
We reviewed current federal postal laws and regulations, including USPS and PRC regulations pertinent to ratemaking data quality, and other relevant documents, including documents submitted in past rate cases and other PRC proceedings on data quality issues. We also reviewed the report of the President’s Commission on the United States Postal Service, past studies of the ratemaking process by other organizations, and books and articles on ratemaking data quality issues. We conducted our review at USPS headquarters, in Washington, D.C., and the Capital Metro area from June 2004 through July 2005.

Appendix III: Selected Ratemaking Requirements in Proposed Postal Reform Legislation

Competitive products: The Postal Regulatory Commission (the Commission) shall issue regulations for competitive products (such as Priority Mail and Express Mail) to prohibit the subsidization of competitive products by market-dominant products, ensure that each competitive product covers its attributable costs, and ensure that all competitive products collectively make a reasonable contribution to USPS institutional costs.

Market-dominant products: The Commission shall by regulation establish a modern system for regulating rates and classes for market-dominant products, such as First-Class Mail, Standard Mail, Periodicals, and Special Services (such as post office boxes, money orders, and delivery confirmation).

House version: In establishing this system, the Commission must take various factors into account, including the attributable costs for each class of mail or type of mail service, plus that portion of institutional costs reasonably assignable to such class or type.
The average rate of any subclass of mail cannot increase at an annual rate greater than the comparable increase in the Consumer Price Index for All Urban Consumers (CPI-U), unless the Commission has, after notice and opportunity for a public hearing and comment, determined that such increase is reasonable and equitable and necessary to enable USPS, under best practices of honest, efficient, and economical management, to maintain and continue the development of postal services of the kind and quality adapted to the needs of the United States.

Senate version: In establishing this system, the Commission must take various factors into account, including the requirement that each class of mail or type of mail service cover its attributable costs, plus that portion of institutional costs reasonably assignable to such class or type. The regulatory system for market-dominant products shall (1) require the Commission to set annual limitations on the percentage changes in rates based on the CPI-U unadjusted for seasonal variation over the 12-month period preceding the date USPS proposes to increase rates and (2) establish procedures whereby rates may be adjusted on an expedited basis due to unexpected and extraordinary circumstances.
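The CPI-U cap common to both bills can be sketched as follows. The index values and the 37-cent rate scenario are hypothetical, not drawn from the bills or actual CPI data:

```python
# Hypothetical sketch of the proposed CPI-U price cap: the allowed annual
# percentage change in rates is limited by the CPI-U change over the
# preceding 12 months, absent an approved exception.

def annual_cap(cpi_u_now, cpi_u_year_ago):
    """Percentage limitation derived from the 12-month CPI-U change."""
    return (cpi_u_now - cpi_u_year_ago) / cpi_u_year_ago

def capped_rate(current_rate, proposed_rate, cap):
    """Hold a proposed rate to the capped level absent an exception."""
    return min(proposed_rate, current_rate * (1 + cap))

cap = annual_cap(195.3, 190.3)           # roughly a 2.6 percent limit
new_rate = capped_rate(0.37, 0.39, cap)  # a 39-cent proposal is held down
```

Under such a cap, rate changes become a mechanical function of published price data rather than the outcome of a litigated revenue-requirement case.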
Market tests: USPS may conduct market tests of experimental products provided that the product is, from the viewpoint of the mail users, significantly different from all products offered by USPS within the 2-year period preceding the start of the test; the introduction or continued offering of the product will not create an unfair or otherwise inappropriate competitive advantage for USPS or any mailer, particularly in regard to small business concerns; total revenues anticipated, or in fact received, by the tested product do not exceed $10 million in any year, unless the Commission has increased the limit to $50 million; and the market test does not exceed 24 months, unless the Commission has approved an extension of its total duration up to 36 months. Worksharing discounts: The Commission shall establish rules for worksharing discounts as part of its regulations for regulating market-dominant products that shall ensure that discounts do not exceed the cost that USPS avoids as a result of worksharing activity, unless the discount is (1) associated with a new postal service, a change to an existing postal service, or a new workshare initiative related to an existing postal service and (2) necessary to induce mailer behavior that furthers the economically efficient operation of the Postal Service and the portion of the discount in excess of the cost that the Postal Service avoids as a result of the workshare activity will be phased out over a limited period of time; a reduction in the discount would (1) lead to a loss of volume in the affected category or subclass of mail and reduce the aggregate contribution to the institutional costs of the Postal Service from the category or subclass subject to the discount below what it otherwise would have been if the discount had not been reduced to costs avoided, (2) result in a further increase in the rates paid by mailers not able to take advantage of the discount, or (3) impede the efficient operation of the Postal Service; 
the amount of the discount above costs avoided (1) is necessary to mitigate rate shock and (2) will be phased out over time; or the discount is provided in connection with subclasses of mail consisting exclusively of mail matter of educational, cultural, scientific, or informational value.

In 1999, the congressionally requested Data Quality Study (the Study) found opportunities to improve ratemaking data quality. The U.S. Postal Service (USPS) agreed to make improvements, but concerns remained because it was still unclear, from an overall perspective, what actions USPS had taken to improve data quality. Ratemaking data quality has also factored into congressional deliberations to reform postal laws. Thus, questions remain about USPS's actions to improve ratemaking data quality and how proposed legislation would address long-standing issues in this area. GAO was asked to (1) describe key USPS actions that were responsive to the Study to improve the quality of ratemaking data and (2) discuss possible implications of postal reform legislation for ratemaking data quality. GAO did not assess the extent to which USPS's actions affected data quality. In its comments, USPS disagreed with GAO's finding on the need to reform the ratemaking structure. USPS also differed with GAO's finding that the legislation would likely improve ratemaking data quality, saying that "breakthrough improvements" would be unlikely without a significant increase in costs. GAO believes reform of the ratemaking structure is needed, but the outcome would depend on its implementation. Further, the legislative changes would likely lead to data quality improvements over time and at some cost. USPS took several key actions that it reported were responsive to the Study's findings. USPS reported that these actions increased the accuracy and precision of ratemaking data.
First, USPS changed the In-Office Cost System to improve the quality of data on mail handled by postal employees and the activities they are performing. Personnel costs represent more than three-quarters of USPS costs; therefore, information on postal employees' handling of mail is necessary for ratemaking purposes. USPS made similar changes to the Revenue, Pieces, and Weight System, which produces data on the revenue, volume, and weight of each type of mail. Second, replacing ratemaking data that had been collected in the 1980s, USPS conducted the City Carrier Street Time Study to gather more complete and consistent data on letter carrier activities. Third, to increase the precision of ratemaking data, USPS collected a larger quantity of data. Fourth, USPS revised documentation of the Transportation Cost System, which the Study had criticized as inadequate. Proposed postal reform legislation (H.R. 22 and S. 662) would create new oversight mechanisms and enhanced regulatory authority over the quality of ratemaking data. The legislation would transform the Postal Rate Commission into a new postal regulator that would prescribe what ratemaking data USPS must report annually, review these data, and determine whether USPS had complied with ratemaking requirements. The regulator could initiate proceedings to improve the quality of ratemaking data. To carry out its expanded duties, the regulator would have enhanced authority, including the authority to subpoena; the authority to order USPS to take actions to comply with laws and regulations; and the authority to impose sanctions for noncompliance. The legislation would address persistent problems under the existing ratemaking structure, which has enabled long-standing deficiencies in ratemaking data quality and unresolved methodological issues to persist. 
The legislation would eliminate key disincentives for ratemaking data quality, including the litigious ratemaking process, the break-even requirement that creates incentives to shift costs from one type of mail to another, and the lack of adequate oversight mechanisms to address data quality issues. Under the current structure, regulatory oversight is generally conducted during rate cases that only USPS can initiate. The legislation would provide mechanisms for regular oversight of ratemaking data and enhance the regulator's authority so that the necessary transparency, oversight, and accountability could take place. Thus, the legislation would likely lead to improvements in the quality of ratemaking data over time and at some cost. However, if the legislation is enacted, the outcome would likely depend on how the regulator would use its discretion to define and implement the new ratemaking structure. Key implementation questions would remain, including what regulatory criteria and requirements would apply to ratemaking data.
Since the former Soviet Union launched its first Sputnik satellite 40 years ago, the number of manmade space objects orbiting the earth—active and inactive satellites and debris generated from launch vehicle and satellite breakups—has increased dramatically. In 1995, a National Science and Technology Council report estimated the number of space objects to be over 35 million. Although nearly all of these objects are thought to be smaller than 1 centimeter, about 110,000 are estimated to be between 1 and 10 centimeters, and about 8,000 are larger than 10 centimeters. Only the approximately 8,000 largest objects are big enough, or reflect radar energy or light well enough, to be routinely observed by the Department of Defense’s (DOD) existing space surveillance sensors. About 80 percent of these 8,000 objects are in low-earth orbits, and the remainder are in geosynchronous and other orbits. The increasing amount of space debris creates a hazard to certain spacecraft, especially large ones like the planned multibillion dollar International Space Station, which will operate in low-earth orbits. The National Aeronautics and Space Administration (NASA) is interested in accurate and timely information on the locations and orbits of space objects to predict and prevent collisions with spacecraft designed for human space flight—the space station and space shuttles. DOD and intelligence agencies are interested in knowing the type, status, and location of space objects, particularly foreign satellites, as part of DOD’s space control mission and other national security functions. NASA and DOD rely on the U.S. Space Command’s Space Surveillance Network, which is operated and maintained by the Air Force, Naval, and Army Space Commands, to provide information on space objects. The surveillance network consists of radar and optical sensors, data processing capabilities, and supporting communication systems.
It detects objects in space; tracks them to determine their orbits; and characterizes them to determine their size, shape, motion, and type. The network routinely detects and tracks objects larger than about 30 centimeters (somewhat larger than a basketball). It can sometimes detect and track objects as small as 10 centimeters (about the size of a softball), but not routinely. The surveillance network also catalogs the approximately 8,000 space objects and includes information that describes the orbit, size, and type of object. The information is used for such purposes as (1) warning U.S. forces of foreign reconnaissance satellites passing overhead and (2) analyzing the space debris environment and the potential implications of planned space operations. All space sectors—defense, intelligence, civil, and commercial—use the catalog information. Subsequent to the launch of Sputnik in 1957, DOD established a space tracking mission and a network of radars and telescopes to monitor orbiting satellites. During the 1960s, DOD built radars to support two missions—space tracking and ballistic missile warning. The Naval Space Surveillance System (known as the Fence) is a chain of radar equipment extending from California to Georgia that was constructed to detect foreign reconnaissance satellites and provide warning to Navy ships of such satellite overflights. The system is still operational, and the Navy plans to modernize it beginning in 2003 to improve its maintainability. Also, Ballistic Missile Early Warning System radars were constructed in Alaska, Greenland, and England to detect and track intercontinental ballistic missiles that could be launched at North America. A secondary mission for these missile warning radars has always been space surveillance. Finally, a prototype phased-array radar was built in Florida to support the space surveillance mission. During the 1970s, the Air Force reactivated the Safeguard antiballistic missile phased-array radar in North Dakota. 
This radar provides space surveillance support as a secondary mission. Also, the Air Force began a program to build four phased-array radars (called PAVE PAWS) to detect and track submarine-launched and intercontinental ballistic missiles. The four radars—in Georgia, Texas, California, and Massachusetts—were completed in the 1980s, but the Georgia and Texas radar sites were closed in 1995. The radars in California and Massachusetts continue to operate and support space surveillance as a secondary mission. During the 1980s, DOD acquired four Ground-based Electro-Optical Deep Space Surveillance telescopes to detect and track objects in geosynchronous orbit because existing surveillance network sensors could not detect objects at such a distance. These telescopes provide nearly worldwide coverage but are limited to operating at night and in clear weather. Three sites, located in New Mexico, Hawaii, and Diego Garcia (in the Indian Ocean), are currently operational. A fourth site in Korea was closed in 1993 due to poor tracking conditions. The existing space surveillance network includes 31 radar and optical sensors at 16 worldwide locations, a communications network, and primary and alternate operations centers for data processing. Appendix I discusses the surveillance network’s composition and characteristics. The September 1996 National Space Policy includes civil, defense, and intersector guidelines related to space safety, space threats, and space debris. Specifically, the policy (1) requires NASA to ensure the safety of all space flight missions involving the space station and space shuttles; (2) requires DOD to maintain and modernize space surveillance and associated functions to effectively detect, track, categorize, monitor, and characterize threats to U.S. and friendly space systems and contribute to the protection of U.S. 
military activities; and (3) declares that the United States will seek to minimize the creation of space debris and will take a leadership role internationally in debris minimization. A key interconnection among these policy guidelines is that, although the increasing amount of space debris creates a hazard to human space flight, NASA has no surveillance capabilities of its own to locate space objects. Instead, it relies on DOD’s capabilities to perform this function. Despite this dependency, the policy makes no provision for an interagency mechanism—either organizational or funding—to ensure that DOD’s space surveillance capabilities meet NASA’s requirements. The surveillance of space objects is receiving increasing attention from both a civil and a national security perspective. The increased attention stems partly from (1) the planned assembly of the space station beginning in 1998 and (2) DOD’s recognition that its aging space surveillance network cannot adequately deal with future national security threats. In addition, DOD believes that the growing commercial space sector will result in increased requests for surveillance support. According to the National Research Council, the chance of debris colliding with a spacecraft relates directly to the size and orbital lifetime of the spacecraft. The space station will be the largest spacecraft ever built, with length and width dimensions somewhat larger than a football field. Its total exposed surface area will be almost 10 times greater than that of a space shuttle—about 11,500 square meters compared with about 1,200 square meters. Also, the space station’s orbital lifetime is expected to exceed that of a space shuttle. NASA plans to operate the space station continuously for at least 10 years. In contrast, in recent years, the space shuttle has flown an average of about 7 missions per year, with each mission lasting about 11 days.
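The Council's size-and-lifetime point can be made concrete with a back-of-envelope exposure comparison (exposed surface area multiplied by time on orbit) using the figures above. This is our illustration only, not a NASA or Council risk model:

```python
# Rough debris-exposure comparison (exposed surface area x time on orbit),
# using the figures cited in the text; illustrative only, not a risk model.

station_area_m2 = 11_500          # space station exposed surface area
shuttle_area_m2 = 1_200           # space shuttle exposed surface area

station_years = 10                # planned continuous operation
shuttle_years = 7 * 11 / 365.25   # ~7 missions/year, ~11 days per mission

station_exposure = station_area_m2 * station_years
shuttle_exposure = shuttle_area_m2 * shuttle_years

print(f"station: {station_exposure:,.0f} square-meter-years")
print(f"shuttle: {shuttle_exposure:,.0f} square-meter-years per year of missions")
print(f"ratio:   {station_exposure / shuttle_exposure:.0f} to 1")
```

On these numbers, the station accumulates several hundred times the shuttle program's annual area-time exposure, which is consistent with the Council's conclusion that the station faces a substantially greater debris risk.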
In future years, NASA is planning about eight shuttle missions per year. The Council concludes that the space station will face a significant risk of being struck by potentially damaging meteoroids or orbital debris. The space station is to operate at low-earth altitudes—between 330 and 500 kilometers. According to the National Science and Technology Council, debris orbiting at altitudes up to about 900 kilometers loses energy over time through friction with the atmosphere and falls to lower altitudes, eventually either disintegrating in the atmosphere or falling to the earth. New debris is periodically added, sometimes unexpectedly. For example, in June 1996, a Pegasus rocket broke up at an altitude of about 625 kilometers, creating 668 observable objects. Also, it is likely that an unknown number of other objects were created, but they are not observable because of their small size. Such debris, as it falls toward the earth, can be expected to pass through the space station’s operating altitudes. From a national security (defense and intelligence) perspective, space surveillance provides (1) warning to U.S. forces when a foreign satellite becomes a threat to military operations and (2) information to support responsive measures. According to DOD, as the importance of space services to U.S. forces increases and the size of satellites decreases, the need for timely information about space objects expands. DOD has acknowledged that its existing surveillance network is aging, requires replacement or upgrades in the next 10 to 15 years, and is currently limited in its ability to detect and track objects smaller than 30 centimeters. In January 1996, the Deputy Under Secretary of Defense for Space directed the DOD Space Architect to begin a study of DOD’s space control mission, including the space surveillance function. The purpose was to develop a range of architecture alternatives to satisfy national security needs to 2010 and beyond.
In May 1997, the team provided its results to the Joint Space Management Board. Regarding space surveillance, the team concluded that next-generation ground-based radars and potential space-based systems should be able to provide reliable near-earth tracking of space objects that are 5 to 10 centimeters in size. The team expected such capabilities to improve debris awareness and ensure that an emerging class of microsatellites as small as 10 centimeters could be tracked. The Board has yet to provide directions to DOD and intelligence organizations on how to proceed regarding the space surveillance function. In a separate action, NASA and the Air Force Space Command established a partnership council in February 1997 to study a variety of space areas of mutual interest. One area involves DOD’s space surveillance network. The impetus to address this subject arose from recognizing the potentially catastrophic consequences of collisions between manned spacecraft and orbiting debris. One of the tasks is to examine ways to enhance orbital debris data collection and processing on objects as small as 5 centimeters. The Chairman and Ranking Minority Member of the Subcommittee on Space and Aeronautics, House Committee on Science, expressed an interest in how NASA intends to ensure protection of the space station against space debris for which shielding will not be provided. As a result, they asked us to provide this report on NASA’s and DOD’s requirements and capabilities for detecting and tracking space objects and the existing relationships between the two agencies for carrying out their responsibilities in this area. We evaluated (1) how well DOD’s existing space surveillance capabilities support DOD’s and NASA’s current and future surveillance requirements and (2) the extent to which potential space surveillance capabilities and technologies are coordinated to provide opportunities for improvements. 
To accomplish these objectives, we reviewed surveillance network studies; DOD’s and NASA’s surveillance requirements documents and emerging needs; reports, plans, and budgets associated with surveillance network operations, maintenance, and enhancements; and program documentation on potential capabilities. We also reviewed national space policy and interviewed DOD and NASA representatives responsible for space surveillance. We performed this work primarily at the U.S. and Air Force Space Commands, Colorado Springs, Colorado, and NASA’s Johnson Space Center, Houston, Texas. In addition, we held discussions with and obtained documentation from representatives of the Office of the Deputy Under Secretary of Defense for Space; the Joint Staff; the Ballistic Missile Defense Organization; the Office of the DOD Space Architect; the Departments of the Air Force, the Navy, and the Army; the Naval Research Laboratory; and NASA Headquarters; all in Washington, D.C. We also acquired information from the Naval Space Command, Dahlgren, Virginia; the Air Force Space and Missile Systems Center, El Segundo, California; the Air Force Electronic Systems Center, Hanscom Air Force Base, Massachusetts; the Air Force’s Phillips Laboratory, Albuquerque, New Mexico; the Army Space and Strategic Defense Command, Huntsville, Alabama; the National Oceanic and Atmospheric Administration’s Office of Satellite Operations, Suitland, Maryland; the Massachusetts Institute of Technology’s Lincoln Laboratory, Lexington, Massachusetts; and the University of Colorado’s Aerospace Engineering Sciences, Boulder, Colorado. We visited the Air Force’s Ground-based Electro-Optical Deep Space Sensor, Socorro, New Mexico; the Massachusetts Institute of Technology’s Lincoln Space Surveillance Complex, Tyngsboro, Massachusetts; and NASA’s Liquid Mirror Telescope, Cloudcroft, New Mexico. We obtained written comments from DOD and NASA on a draft of this report. 
These comments are reprinted in their entirety in appendixes II and III, respectively. Both DOD and NASA also provided technical and editorial comments, which we have incorporated into the report where appropriate. We performed our work from September 1996 to August 1997 in accordance with generally accepted government auditing standards. NASA has established some stringent space surveillance requirements to protect the space station and other spacecraft from collisions with space debris. DOD’s space surveillance requirements are under review and are likely to become more stringent. Because DOD’s existing space surveillance network cannot satisfy its and NASA’s emerging requirements, changes in the network may be needed. NASA and DOD have held discussions over the years regarding NASA’s surveillance requirements, but there is no authoritative direction, formal agreement, or clear plan on how the two agencies could consolidate their requirements for a common capability. During the past several years, NASA and DOD periodically discussed space surveillance requirements for the space station, but many proposed requirements were left to be determined and not formally provided as firm requirements to DOD. In August 1997, however, NASA provided the U.S. Space Command with an updated set of requirements for surveillance support that are more specific, comprehensive, and complete than previous requirements. Two of these requirements—detecting and tracking relatively small space objects and more accurately determining the location of such objects—cannot be met by DOD’s existing surveillance network. In commenting on a draft of this report, NASA stated that a third requirement—notifying NASA within 1 hour of a space object breakup—also cannot be met. NASA has designed portions of the space station with shielding to provide protection against objects smaller than 1 centimeter. It has concluded that shielding against larger objects would be too costly. 
The National Science and Technology Council estimated that about 118,000 objects 1 centimeter and larger were orbiting the earth. However, DOD’s surveillance network cannot routinely detect and track 110,000 (93 percent) of the objects that are estimated to be between 1 and 10 centimeters in size. The National Research Council report stated that the risk of the space station colliding with untracked debris could be lowered if more objects were tracked. The report mentioned that debris from about 0.5 to 20 centimeters in diameter was of most concern to the space station because, within this range, the debris may be too large to shield against and too small to (currently) track and avoid. Because NASA has no location information about these relatively small sized objects, it is requiring DOD, in the near term, to routinely detect, track, and catalog all space objects that are 5 centimeters and larger and have a perigee of 600 kilometers or less. Beginning in the 2002-2003 time frame, when the space station is to be completed, NASA will require DOD to detect, track, and catalog objects as small as 1 centimeter. DOD agrees that achieving the ability to detect and track objects 5 centimeters in size would be an intermediate step to meeting NASA’s needs. However, DOD stated that achieving the capability to detect and track objects 1 centimeter in size would be technically challenging. The importance of the requirement to detect and track 1 centimeter space objects is linked to the effect of critical collisions between such objects and the space station. NASA estimates a 19-percent probability of critical collisions with objects larger than 1 centimeter during a 10-year period. Although not all collisions would be catastrophic, NASA estimates a 5-percent probability that such collisions would cause a catastrophic failure, resulting in the loss of a module or a crew member. 
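One way to interpret the 19-percent figure is as the cumulative effect of a small annual collision rate. The sketch below inverts it under a Poisson arrival assumption; the assumption is ours and is not NASA's published estimation method:

```python
import math

# Convert NASA's cited 10-year collision probability into an implied mean
# annual rate, assuming collisions arrive as a Poisson process (our
# simplifying assumption, not NASA's method).

p_critical_10yr = 0.19   # cited probability of a critical collision in 10 years

# P(at least one event in T years) = 1 - exp(-rate * T)  =>  solve for rate
rate_per_year = -math.log(1 - p_critical_10yr) / 10

# Implied probability of at least one critical collision in a single year
p_critical_1yr = 1 - math.exp(-rate_per_year)

print(f"implied rate:       {rate_per_year:.4f} critical collisions per year")
print(f"1-year probability: {p_critical_1yr:.4f}")
```

The implied rate is only about 0.02 critical collisions per year, which also shows why such estimates are sensitive to assumptions about the debris environment: a modest change in the assumed rate shifts the 10-year probability noticeably.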
The National Research Council emphasized that these calculations are far from exact because they are based on many assumptions such as the future debris environment, which could be higher or lower than estimated, and the effectiveness of shielding critical space station components. Also, the calculations exclude impacts on noncritical items that could potentially cause severe damage to the station. NASA plans to maneuver the space station to avoid collisions with those space objects that can be accurately located by DOD’s surveillance network. Currently, DOD assesses the proximity of the 8,000 cataloged objects relative to an orbiting space shuttle. NASA uses these assessments to determine whether a sufficient threat exists to require a collision avoidance maneuver. Although NASA has made such maneuvers with the space shuttle, the shuttle has not been maneuvered in some instances because of concern for interference with the primary mission objective. For safety reasons, knowing the accurate location of space objects is important in deciding when to make collision avoidance maneuvers. Also, such knowledge would help avoid making unnecessary maneuvers that would be disruptive to mission objectives, such as microgravity experiments performed on the space shuttle or space station. To ensure accurate information on objects that are 1 centimeter and larger, in low-earth orbit, and with perigees 600 kilometers or less, NASA’s requirements specifically call for sensor tracking to an orbital “semi-major axis” uncertainty of 5 meters or less. The purpose of this requirement is to better predict possible collisions and better decide on the need for collision avoidance maneuvers. However, DOD cannot meet this requirement because the network’s sensors and processing capability and capacity are insufficient, and because DOD does not have a program to measure the orbital location accuracy of the 8,000 cataloged objects. During the 1980s and early 1990s, the U.S. 
and Air Force Space Commands repeatedly studied different aspects of space surveillance needs and requirements, but not in a comprehensive manner. Command representatives told us that the lack of emphasis on space surveillance during this period was due to its lower priority compared with other missions, such as ballistic missile defense. In 1994, the U.S. Space Command assessed its surveillance requirements, which had last been validated in 1985. The results showed that the requirements were loosely stated or inferred, had little supporting rationale, and did not address future threats. This assessment led to another study, completed by the Air Force Space Command in 1995, that established new space surveillance requirements. However, these requirements were never validated by the Joint Requirements Oversight Council—DOD’s authoritative forum for assessing requirements for defense acquisition programs. In early 1997, the U.S. Space Command determined that the 1995 Air Force surveillance requirements contained insufficient detail and justification and, as a result, initiated another requirements review. In June 1997, the Command emphasized that space surveillance is the foundation for all functions that are performed in space and thus requested updated surveillance requirements from defense, intelligence, and civil space sector users, stating that the requirements must be quantitatively linked to the needs of the warfighter and the Command’s assigned civil support responsibilities. The final product is to be a space surveillance requirements annex to the Command’s space control capstone requirements document. This document, which is still in draft form, emphasizes the necessity of (1) timely space surveillance assessments relative to hostile actions in space, foreign reconnaissance satellite overflights, and operational capabilities of foreign satellites and (2) accurate information about space object size and orbital locations. 
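The draft document's emphasis on accurate orbital locations ties directly to the collision avoidance decisions described earlier: position uncertainty determines whether a predicted close approach must be treated as a threat. The screening rule and all numbers below are invented for illustration and do not represent NASA or DOD procedures:

```python
# Hypothetical conjunction-screening rule: maneuver if, given the position
# uncertainty, an object could plausibly come inside a keep-out distance.
# The keep-out distance and the sample values are invented for illustration.

def maneuver_needed(predicted_miss_km: float, uncertainty_km: float,
                    keep_out_km: float = 2.0) -> bool:
    """True if the worst-case miss distance falls inside the keep-out zone."""
    return predicted_miss_km - uncertainty_km < keep_out_km

# Kilometer-scale uncertainty forces a maneuver even for a 5 km predicted miss
print(maneuver_needed(5.0, uncertainty_km=4.0))    # True

# Meter-scale accuracy (about 5 m, as NASA's requirement calls for) does not
print(maneuver_needed(5.0, uncertainty_km=0.005))  # False
```

This is why coarse tracking data produces unnecessary, mission-disrupting maneuvers: the uncertainty, rather than the predicted miss distance itself, dominates the decision.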
Upon completion of this effort, the space surveillance requirements are to be reviewed and validated by the Joint Requirements Oversight Council. The DOD Space Architect used the U.S. Space Command’s draft capstone requirements as a basis for performing its space control architecture study. The study observed that U.S. forces expect timely characterization of space threats; that is, forces expect to be warned in a timely manner when a foreign satellite is a threat to their theater of operations. However, the study concluded that, with the trends in satellite growth indicating not only more satellites but also smaller and more compact satellites (known as microsatellites), the task of distinguishing the attributes and status of orbiting objects with both ground- and space-based sensors becomes more difficult. DOD has a well-defined process for establishing its own requirements. However, because NASA is not a participant in this process and depends on DOD to provide space surveillance capabilities, it is not clear how NASA can ensure satisfaction of its surveillance requirements. First, although the 1996 National Space Policy implies that DOD should provide such surveillance capabilities, and the U.S. Space Command acknowledges its civil space sector responsibility in this area, the policy does not provide directions to ensure that DOD satisfies NASA’s requirements. Second, although NASA has provided requirements to the U.S. Space Command, DOD and NASA have not reached agreement as to how or when these requirements might be satisfied. Third, the DOD Space Architect organization’s study of space surveillance, which included both the defense and intelligence space sectors, noted that detecting and tracking space debris down to 1 centimeter (NASA’s requirement) could be important to the safety of manned space systems, but that the requirement is not a high priority for DOD. 
Thus, there is no authoritative direction, formal agreement, or clear plan on how the two agencies could consolidate their requirements for a common capability. The civil and national security (defense and intelligence) space sectors have a common interest in space surveillance, and there may be an increasing interest by the commercial space sector. Better information is needed regarding the size, location, and characterization of space objects than the existing space surveillance network can provide. NASA’s space surveillance requirements are commensurate with its responsibilities to ensure the safety of human space flight, but these requirements have not been acted upon by DOD. DOD’s space surveillance requirements continue to be reviewed and will likely become more stringent. Unless DOD and NASA can establish a consolidated set of national security and civil space surveillance requirements, an opportunity may be missed to (1) better ensure the safety of the planned multibillion dollar space station and (2) help satisfy national security needs, including U.S. forces’ future needs for space asset information. We recommend that the Secretary of Defense and the Administrator of NASA, in consultation with the Director of Central Intelligence, establish a consolidated set of governmentwide space surveillance requirements for evaluating current capabilities and future architectures to support NASA’s, DOD’s, and other federal agencies’ space programs and surveillance information needs. DOD’s plans to modernize the existing Naval Space Surveillance System (known as the Fence) and develop three new ballistic missile warning systems do not adequately consider NASA’s or DOD’s emerging space surveillance requirements. The Fence modernization effort would not provide an enhanced capability, but instead would only install modern components while continuing to satisfy DOD’s current requirements. 
The development efforts for three missile warning systems do not adequately consider DOD’s or NASA’s emerging space surveillance requirements. Also, these four separate efforts are not sufficiently coordinated. Greater coordination could result in more informed decisions regarding the best combination of capabilities to satisfy a consolidated set of emerging national security and civil space surveillance requirements. Beginning in fiscal year 2003, the Navy tentatively plans to incrementally replace components of the Fence with modern components because of the system’s age and relatively high maintenance costs. However, this effort is not currently funded and will not enhance the system’s present capability to detect and track space objects smaller than about 30 centimeters. According to DOD and NASA, the Fence could be upgraded to detect most near-earth space objects larger than 1 centimeter by changing its operating radio frequency from the existing very high frequency band to the super-high frequency band and by locating it near the equator. Such an upgrade could aid in satisfying both NASA’s requirement related to small-sized space objects and DOD’s emerging requirement related to microsatellites. However, according to Naval Space Command officials, such an upgrade has not undergone comprehensive study. In addition, they stated that a radio frequency change (1) is not needed to satisfy existing DOD surveillance requirements and (2) would have a significant effect on the surveillance network’s data processing needs. In commenting on our draft report, DOD stated that the possibility of obtaining funds to upgrade the Fence to meet NASA’s 1 centimeter requirement is not high because DOD has no comparable requirement. Historically, DOD acquired various sensors to satisfy missions other than space surveillance and then capitalized on their inherent capabilities to satisfy the surveillance mission. 
This collateral mission concept enabled DOD to perform two missions with the same sensors. Examples included ballistic missile early warning radars to detect and track intercontinental ballistic missiles and submarine-launched ballistic missiles and other radars to track space launch vehicles. DOD’s Space-Based Infrared System (SBIRS), Ground-Based Radar (GBR), and Theater High Altitude Air Defense (THAAD) radar are future ballistic missile warning systems that could contribute to performing the space surveillance function as a secondary mission. DOD plans to develop a low-earth orbit satellite component within the SBIRS program, referred to as SBIRS-Low, to provide missile tracking support to both national and theater ballistic missile defense programs. In May 1997, the Under Secretary of Defense for Acquisition and Technology testified before a congressional panel that SBIRS-Low could also perform much of the space surveillance function, allowing some existing terrestrial surveillance sensor sites to be closed and eliminating some surveillance network gaps in space coverage, such as in the Southern Hemisphere. Although DOD believes that the planned SBIRS-Low design would provide an inherent space surveillance capability, its specific capabilities for this function have not been determined. The Air Force plans to initiate SBIRS-Low development in fiscal year 1999, launch the first satellite in fiscal year 2004, and ultimately procure up to 24 or more satellites to establish an operational constellation that would provide worldwide coverage. Although the SBIRS program office has begun to investigate the feasibility of space-based space surveillance, it currently does not plan to develop the SBIRS’ surveillance capabilities because the necessary operational requirements have not been established. Until these requirements are established, DOD can only point to the potential capabilities provided inherently by the ballistic missile warning design. 
The Army is developing two new phased-array radar systems—the GBR to support national missile defense and the THAAD radar to support theater missile defense. Army project officials stated that on the basis of limited analyses, GBR and THAAD radars each may have inherent space surveillance capabilities that could support NASA’s and DOD’s emerging requirements. They stated that GBR, for example, could (1) detect and track space objects that are approximately 1 centimeter or less and (2) maintain 1,000 simultaneous tracks of these objects compared with only several hundred tracks that phased-array radars in the existing surveillance network can maintain. Similarly, the officials stated that the THAAD radars could track, characterize, and discriminate objects while performing their autonomous search function. Finally, the officials stated that the GBR and THAAD radars could be used during peacetime for space surveillance while maintaining readiness for combat. As with SBIRS-Low, neither GBR nor THAAD is currently required or specifically designed to perform space surveillance functions. Army officials stated that, although the U.S. Space Command was briefed about GBR’s ability to perform collateral missions, including space surveillance, the Command had not established operational requirements for space surveillance applicable to either GBR or THAAD. By fiscal year 1998, the Army plans to have a GBR prototype in operation. A national missile defense deployment decision is expected in fiscal year 2000, which may include plans for GBR deployment in 2003. Regarding THAAD, the Army currently has two test radars and plans to award an engineering and manufacturing development contract in 1999 for two radars with more capability. It expects to deploy as many as 12 mobile THAAD radars worldwide. The Air Force Space Command’s 1995 space surveillance study observed that the surveillance network evolved without a master plan. 
The space surveillance mission did not have as high a priority as other missions, and DOD capitalized on the inherent capabilities of sensors that were designed for purposes other than surveillance. The lack of such a comprehensive plan creates difficulties in assessing operational capabilities to satisfy requirements, particularly when the need arises to evaluate emerging requirements that are increasingly stringent. The DOD Space Architect’s May 1997 space control study assessed a mix of space surveillance capabilities. The study observed, for example, that a modest radio frequency enhancement to the existing Naval Space Surveillance System, costing roughly $200 million, is feasible for tracking space debris as small as 2 to 5 centimeters. The study also observed that the timing is right to evaluate the presumed inherent space surveillance capabilities of SBIRS-Low to determine if those capabilities could actually be provided. Although GBR and THAAD were not specifically addressed in the study, it indicated that a system with similar generic capability would be stressed to achieve NASA’s 1 centimeter requirement. Finally, the study suggested that several technology efforts be continued to provide a hedge against an uncertain set of future space control threats and priorities. A significant point in the Space Architect’s study was that NASA’s 1 centimeter requirement would be both technically challenging and expensive. In its comments on our draft report, DOD stated that the requirement is not considered feasible within current budget constraints. Until the Joint Space Management Board provides directions regarding the study’s results, implementation plans will not be prepared. Even then, the plans may not sufficiently address NASA’s needs without agreement between DOD and NASA. NASA relies on DOD for space surveillance support, and both agencies need improved surveillance capabilities. 
However, four DOD systems that could provide such capabilities—the Naval Space Surveillance System, SBIRS-Low, GBR, and THAAD—lack sufficient coordination, both within DOD and between DOD and NASA. The three missile defense sensors (SBIRS-Low, GBR, and THAAD) could provide a collateral space surveillance capability, a concept DOD has successfully used over the years. In times of constrained budgets, capitalizing on ways to satisfy multiple missions with the same resources appears to be prudent. A coordinated plan between DOD and NASA that considers all existing and planned capabilities could be beneficial in making cost-effective decisions to satisfy a consolidated set of emerging national security and civil space surveillance requirements. Without a coordinated plan, DOD and NASA would not be taking advantage of potential efficiencies. The DOD Space Architect, along with NASA and the intelligence space sector, could provide a means for developing such a plan. We recommend that the Secretary of Defense and the Administrator of NASA, in consultation with the Director of Central Intelligence, develop a coordinated governmentwide space surveillance plan that (1) sets forth and evaluates all feasible alternative capabilities to support human space flight and emerging national security requirements and (2) ensures that any planned funding for space surveillance upgrades is directed toward satisfying consolidated governmentwide requirements.

GAO reviewed the Department of Defense's (DOD) and the National Aeronautics and Space Administration's (NASA) space surveillance requirements and DOD's space surveillance capabilities, focusing on: (1) how well DOD's existing surveillance capabilities support DOD's and NASA's current and future surveillance requirements; and (2) the extent to which potential surveillance capabilities and technologies are coordinated to provide opportunities for improvements.
GAO noted that: (1) DOD's existing space surveillance network is not capable of providing the information NASA needs to adequately predict collisions between space objects orbiting the earth and multibillion dollar space programs like the space station; (2) the existing network cannot satisfy DOD's emerging space surveillance requirements, which are currently under review; (3) DOD's plans--to modernize an existing surveillance network radar system and develop three new ballistic missile warning systems that could contribute to performing the surveillance function--do not adequately consider DOD's or NASA's surveillance requirements; (4) these four systems are separately managed by the Navy, the Air Force, and the Army; (5) an opportunity exists to consider these systems' potential capabilities to enhance the surveillance network to better satisfy requirements and achieve greater benefits from planned investment in space sensor technology; (6) despite NASA's dependency on DOD to provide space object information, the 1996 National Space Policy makes no provision for an interagency mechanism--either organizational or funding--to ensure that DOD's surveillance capabilities satisfy NASA's requirements; (7) overall, there is no authoritative direction, formal agreement, or clear plan on how DOD and NASA could consolidate their space surveillance requirements for a common capability; (8) a coordinated interagency plan that considers all existing and planned space surveillance capabilities could be beneficial in making cost-effective decisions to satisfy a consolidated set of national security and civil space surveillance requirements; (9) unless DOD and NASA can agree on such a plan, an opportunity may be missed to simultaneously: (a) achieve efficiencies; (b) better ensure the safety of the planned multibillion dollar space station; and (c) help satisfy national security needs, including the U.S. forces' future needs for space asset information.
In establishing guidance on solid waste management in 1978, DOD recognized that burning waste in open pits poses environmental and health hazards. However, burn pits—shallow excavations or surface features with berms used to conduct open-air burning—were often chosen as a method of waste disposal during recent contingency operations in the CENTCOM area of responsibility, which extends from the Middle East to Central Asia and includes Iraq and Afghanistan. In 2010 we reported that there were 251 active burn pits in Afghanistan and 22 in Iraq. Additionally, we reported that the military used burn pits to dispose of waste because of their expedience, and that waste management alternatives could decrease DOD’s reliance on the use of burn pits. We recommended that DOD implement relevant guidance related to burn pit operations, improve its adherence to relevant guidance on waste management, and analyze alternatives to its current practices. DOD generally concurred with the recommendations and took actions to address them. For example, the Army Materiel Command began focusing on solid waste management, including burn pits, and required contractors to segregate non-hazardous, hazardous, and recyclable materials; establish recycling systems; and maintain all solid waste operations in accordance with DOD guidance. According to DOD officials, while the Army considered alternative waste disposal methods such as deployable, ready-made incinerators as standard equipment to each Army unit, it ultimately decided against issuing them due to logistical issues. Further, the use of waste disposal alternatives other than burn pits is not always possible. For example, DOD officials have stated that there have been times when incinerators have not been used because of the challenges associated with obtaining permits and visas for the personnel who were contracted to operate and maintain them.
In addition, DOD officials stated that using landfills as an alternative waste disposal method has not always been possible because of security concerns surrounding local landfills off base or space constraints for landfills on base. When alternatives are not available, military base commanders have resorted to the use of burn pits. Although burn pits help base commanders to manage waste, they also produce smoke and harmful emissions that military and other health professionals believe may result in acute and chronic health effects for those exposed. We previously reported that some veterans returning from the Iraq and Afghanistan conflicts have reported pulmonary and respiratory ailments, among other health concerns, that they attribute to burn pit emissions. Numerous veterans have also filed lawsuits against a DOD contractor alleging that the contractor mismanaged burn pit operations at several installations in both Iraq and Afghanistan, resulting in exposure to harmful smoke that caused these adverse health effects. We also previously reported on the difficulty of establishing a correlation between occupational and environmental exposures and health issues. For example, in 2012 we found that establishing causation between an exposure and an adverse health condition can be difficult for several reasons, including that for many environmental exposures, there is a latency period—the time period between initial exposure to a contaminant and the date on which an adverse health condition is diagnosed. When there is a long latency period between an environmental exposure and an adverse health condition, choosing between multiple causes of exposure may be difficult. In addition, in 2015 we found that the Army had recently published a study that evaluated associations between deployment to Iraq and Kuwait and the development of respiratory conditions post-deployment.
However, the study was unable to identify a causal link between exposures to burn pits and respiratory conditions. Section 317 of the NDAA for Fiscal Year 2010 requires DOD to develop regulations prohibiting the disposal of covered waste in burn pits during contingency operations, except in circumstances where alternatives are not feasible. This provision also requires DOD to notify Congress of the decision to dispose of covered waste in burn pits, along with the circumstances, reasoning, and methodology leading to the decision. Additionally, for each subsequent 180-day period during which covered waste is disposed of in a burn pit, DOD must submit to Congress a justification for continuing to operate the burn pit. Further, even in the absence of a contingency operation, separate DOD overseas environmental guidance prohibits the use of burn pits, except in certain instances. In response to section 317, DOD issued department-wide guidance regarding the use of burn pits, environmental management, and occupational health. Specifically, in 2011 the Under Secretary of Defense for Acquisition, Technology, and Logistics issued DOD Instruction 4715.19, which establishes policy, assigns responsibilities, and provides guidance regarding the use of burn pits during contingency operations. In 2016 DOD issued Instruction 4715.22, which establishes policy, assigns responsibilities, and provides direction for environmental management at contingency locations. Additionally, the guidance directs the Assistant Secretary of Defense for Energy, Installations, and Environment to establish Contingency Location Environmental Standards by February 2017. These standards are to define environmental standards for implementation at contingency locations in order to protect force health, minimize environmental impact, and sustain mission effectiveness.
Two other DOD instructions, DOD Instruction 6055.05 and DOD Instruction 6055.01, provide guidance regarding risk management procedures associated with occupational or environmental factors. Specifically, DOD Instruction 6055.05 expands risk management procedures to anticipate, recognize, evaluate, and control health hazards associated with occupational and environmental exposures to chemical, physical, and biological hazards in DOD workplaces, including military operations and deployments. The guidance is aimed at protecting DOD personnel from accidental death, injury, and illness caused by hazardous occupational or environmental exposures. Further, it applies risk management strategies designed to achieve reductions in all mishaps, injuries, and illnesses, and compliance with DOD safety and health standards and policies. DOD Instruction 6055.01 provides guidance to protect DOD personnel from accidental death, injury, or occupational illness throughout all operations worldwide, with certain limitations. Specifically, this guidance reinforces the concept of applying risk management strategies to eliminate occupational injury or illness and loss of mission capability and resources, both on and off duty. Since 2008, CENTCOM has issued guidance relating to environmental assets within its area of responsibility. CENTCOM Regulation 200-1 requires bases within CENTCOM’s area of responsibility to have an environmental assessment program, structured to optimize environmental asset expertise, protection, enhancement, and security. This regulation requires CENTCOM environmental leads to develop policies and procedures for management and disposal of solid waste, hazardous waste, and medical waste, among others. CENTCOM Regulation 200-2 guides solid waste management practices throughout CENTCOM’s area of responsibility, including minimum requirements for operating and monitoring burn pits. 
This regulation also provides guidance for managing environmental concerns, such as hazardous materials and regulated medical and solid waste. It also includes requirements for base commanders to develop a solid waste management plan and strategy to transition to alternative waste disposal methods, such as an incinerator. On the basis of our assessment of DOD’s March 2016 burn pit report, we found that it generally addressed the requirements in section 313 of the NDAA for Fiscal Year 2015. According to DOD officials, to gather information for and develop its report, DOD tasked each of the military services, the Joint Staff, and the overseas combatant commands—U.S. Central Command, U.S. Africa Command, U.S. European Command, U.S. Pacific Command, and U.S. Southern Command—to provide information on the requirements in section 313, which included providing information on the policies and procedures related to the disposal of covered waste in burn pits during contingency operations. Section 313 contains seven specific reporting elements that DOD was required to address in its report. Based on our assessment, we determined that DOD’s report fully addressed four of the seven reporting requirements and partially addressed the remaining three. Table 1 summarizes our analysis of the extent to which DOD’s report addressed each of the specific requirements in section 313. We assessed reporting requirements 1, 3, and 5 as partially addressed because the responses to these requirements lacked required detail and instead only identified relevant policy. For example, we assessed reporting requirement 3 to be partially addressed because DOD’s report included guidance used to distinguish categories of waste but did not include information about whether the categories were appropriately and clearly distinguished in environmental surveys and assessments.
Similarly, we assessed reporting requirement 5 to be partially addressed because DOD reported on the notification process by which burning covered waste in a burn pit is requested and approved, and the process by which Congress is notified, but did not discuss how this notification process could be improved, if applicable. The DOD official responsible for compiling the report stated that because section 313 of the NDAA for Fiscal Year 2015 applies to the Office of the Secretary of Defense, which focuses primarily on strategic and policy-related issues, the requests for specific information contained in reporting requirements 1, 3, and 5 were outside its purview, and information with this degree of specificity could be provided only by lower-level commands. The Office of the Secretary of Defense tasked lower-level commands, including each of the military services, the Joint Staff, and the combatant commands, to provide information for the report. However, the tasking order sent to these lower-level commands contained the same wording as the mandate language in section 313 of the NDAA for Fiscal Year 2015, with little clarification of the level of detail that these entities should include in their responses. As a result, the level of detail in the information obtained and used to develop the report varied, and three of the seven requirements were only partially addressed. While DOD issued guidance on burn pit use, as required by law, it is not clear how the guidance will be implemented in future contingency operations because the overseas combatant commands, except for CENTCOM, have not issued related implementing policies and procedures. As previously stated, in response to section 317 of the NDAA for Fiscal Year 2010, in 2011 DOD issued DOD Instruction 4715.19.
This instruction defines covered waste as hazardous waste, medical waste, and other items including tires, treated wood, batteries, and plastics, among other things. The guidance also prohibits the disposal of covered waste in burn pits during contingency operations, except in circumstances where alternatives are not feasible, and provides high-level guidance on the notification process in such circumstances. Specifically, the instruction states that the Under Secretary of Defense for Acquisition, Technology, and Logistics must notify Congress within 30 days of the combatant commander’s determination that no alternative disposal method for covered waste, other than in a burn pit, is feasible. The instruction further states that a justification for continued burning of covered waste in a burn pit is also required to be submitted to Congress for each subsequent 180-day period. CENTCOM is the only overseas geographic combatant command that has established policies and procedures that govern waste management during contingency operations, including implementing the DOD burn pit guidance. Specifically, CENTCOM Regulations 200-1 and 200-2 provide policies, assign responsibilities for implementing DOD Instruction 4715.19, and set environmental standards. CENTCOM Regulation 200-2 applies to military personnel and civilian contractors who operate burn pits in the CENTCOM area of responsibility, and CENTCOM Regulation 200-1 applies to its components and to all other U.S. military forces operating in CENTCOM’s area of responsibility. These regulations provide, among other things, detailed guidance for submitting burn pit notifications. Additionally, CENTCOM Regulation 200-2 acknowledges that burn pits are typically used when bases are first established, but provides a specific threshold—when an installation exceeds 100 U.S. personnel for 90 days—after which burn pits must be replaced by alternative waste disposal methods.
If, however, there is no feasible alternative to the use of burn pits, base officials are to forward the rationale for the continued use of burn pits to the appropriate service component command. According to DOD officials familiar with CENTCOM’s procedures, this determination is first sent to the appropriate land component or joint task force command for review. These commands work in conjunction with the base commander to ensure that the burn pit justification criteria are met. The notification is then sent to the designated service component, and the service component decides whether the notification will be further processed. In doing so, the service component reviews the rationale, such as the basis for the lack of alternatives, an estimate of how long burn pits will continue to be used, or a preliminary health assessment, and then forwards the notification to CENTCOM along with a recommendation for approval or disapproval. Once approved by CENTCOM, according to officials, the determination is forwarded to the Under Secretary of Defense for Acquisition, Technology, and Logistics within 15 days. The use of burn pits in the CENTCOM area of responsibility has declined since our last report in 2010. As of June 2016, DOD officials told us that there were no military-operated burn pits in Afghanistan and only one in Iraq, of which Congress had been notified. According to DOD officials, the decline in the number of burn pits from 2010 to 2016 can be attributed to such factors as (1) using contractors for waste disposal and (2) increased use of waste management alternatives such as landfills and incinerators. DOD officials also acknowledged, however, that burn pits are being used to dispose of waste in other locations that are not military-operated and that no notifications have been made in these instances because the means of disposal are not within DOD’s control.
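The thresholds and timelines described above can be summarized in a short sketch. This is purely illustrative, not DOD software: the constants encode the requirements as summarized in this report (CENTCOM Regulation 200-2's 100-personnel/90-day threshold and DOD Instruction 4715.19's 30-day congressional notification and 180-day rejustification periods), and the function names are our own invention.

```python
from datetime import date, timedelta

# Thresholds as summarized in this report (illustrative constants only).
PERSONNEL_THRESHOLD = 100         # installation exceeds 100 U.S. personnel...
DURATION_THRESHOLD_DAYS = 90      # ...for 90 days -> alternatives required
CONGRESS_NOTIFICATION_DAYS = 30   # notify Congress within 30 days of determination
REJUSTIFICATION_PERIOD_DAYS = 180 # rejustify each subsequent 180-day period

def must_transition(personnel: int, days_above_threshold: int) -> bool:
    """Per CENTCOM Regulation 200-2 (as summarized), burn pits must be
    replaced by alternative disposal once an installation exceeds 100
    U.S. personnel for 90 days."""
    return (personnel > PERSONNEL_THRESHOLD
            and days_above_threshold >= DURATION_THRESHOLD_DAYS)

def notification_deadlines(determination: date, periods: int = 3):
    """Congress must be notified within 30 days of the commander's
    no-feasible-alternative determination; a justification is then due
    for each subsequent 180-day period of continued burning."""
    first = determination + timedelta(days=CONGRESS_NOTIFICATION_DAYS)
    rejustifications = [
        determination + timedelta(days=REJUSTIFICATION_PERIOD_DAYS * i)
        for i in range(1, periods + 1)
    ]
    return first, rejustifications
```

For example, for a determination made on January 1, 2016, the sketch yields a first notification deadline 30 days later, with rejustifications due at each subsequent 180-day mark.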
Specifically, these officials noted instances in which local contractors had been hired to haul away waste and subsequently disposed of the waste in a burn pit located in close proximity to the installation. In such instances, officials stated that they requested that the contractors relocate the burn pit. In contrast with CENTCOM, other geographic combatant commands have not established additional policies and procedures that govern waste management, including the disposal of waste in burn pits and the notification procedures to be followed in the event that no other alternatives are feasible. As of June 2016, according to an official from U.S. Africa Command, command-specific burn pit policies and procedures were being developed. According to officials from the other overseas geographic combatant commands, their commands have not developed similar policies and procedures because they do not utilize burn pits and there is an absence of current contingency operations in their respective areas of responsibility. In addition, a U.S. European Command official explained that the command’s operations are generally in countries that have specific guidance that discourages the use of burn pits as a method of waste disposal. Nonetheless, while most of the overseas geographic commands may not currently be involved in contingency operations within their areas of responsibility, waste disposal would likely be required if such operations arise in the future, and the use of burn pits would be one option for disposing of waste. Moreover, a U.S. Africa Command official stated that while there were no known DOD-operated burn pits in that command’s area of responsibility, burn pits were being operated by local nationals near an installation on which there were DOD personnel.
Moreover, DOD’s guidance on burn pits in DOD Instruction 4715.19 applies to all the combatant commands, including the requirements to make determinations in circumstances in which no alternative disposal method for covered waste, other than in an open-air burn pit, is feasible, and to forward those determinations to the Under Secretary of Defense for Acquisition, Technology and Logistics. However, the instruction does not specify how combatant commanders will ensure compliance with requirements in the instruction, including which organizations within the commands would be responsible for notifications, or how they would monitor and report on the use of any burn pits. In addition, other than requiring a solid waste management plan that addresses the use of burn pits, the instruction does not require combatant commanders to develop complementary policies and procedures concerning burn pits relevant to their respective areas of responsibility. Further, according to Joint Publication 3-0, Joint Operations, waste disposal is a consideration throughout planning and execution, and until forces redeploy. Moreover, according to Standards for Internal Control in the Federal Government, those in key roles may further define policies through day-to-day procedures, and these procedures may include the timing of when a control activity occurs and any follow-up corrective actions to be performed by competent personnel if deficiencies are identified. DOD officials acknowledged that limiting burn pit use for waste disposal is important, as the relevant guidance stipulates, especially when disposing of covered waste, because of the harmful toxins emitted in the air. 
Without policies and procedures governing waste management during contingency operations, including the use of burn pits, combatant commanders are not well positioned to implement the requirements of DOD Instruction 4715.19 if burn pits become necessary for disposing of waste in a future contingency operation, specifically with regard to notification procedures relevant to the conditions in their respective areas of responsibility. The impacts from exposing individuals to burn pit emissions are not well understood, and DOD has not fully assessed these health risks. Under DOD Instruction 6055.01, it is DOD policy to apply risk management strategies to eliminate occupational injury or illness and loss of mission capability or resources. DOD Instruction 6055.01 also instructs all DOD components to establish procedures to ensure that risk-acceptance decisions are documented, archived, and reevaluated on a recurring basis. Furthermore, DOD Instruction 6055.05 requires that hazards be identified and risk evaluated as early as possible, including the consideration of exposure patterns, duration, and rates. Notwithstanding this guidance, according to DOD officials, DOD has not fully assessed the health risks of burn pit use. According to DOD officials, DOD’s ability to assess these risks is limited by a lack of adequate information on (1) the levels of exposure to burn pit emissions and (2) the health impacts these exposures have on individuals. With respect to information on exposure levels, DOD has not collected data from emissions or monitored exposures from burn pits as required by its own guidance. DOD Instruction 4715.19 requires that plans for the use of open-air burn pits include ensuring the area is monitored by qualified force health protection personnel for unacceptable exposures, and CENTCOM Regulation 200-2 requires steps to be taken to sample or monitor burn pit emissions.
A DOD official stated that the department considers open-air burning and other airborne contaminant sources when determining its monitoring strategies for the potentially exposed population and evaluating health risks in accordance with DOD Instructions 6490.03 and 6055.05. However, the official also stated that DOD has not collected direct data from burn pit emissions because the data do not represent the overall air quality to which personnel are exposed. Additionally, CENTCOM Regulation 200-1 requires environmental surveying to be conducted at a base if the base is occupied or is expected to be occupied for 30 or more days, or at the base’s closure. According to DOD officials, environmental surveys are conducted, generally once a year, and these surveys assist in identifying both potential risks to servicemembers’ health and air pollutants. However, DOD officials stated that there are no processes in place to specifically monitor burn pit emissions for the purposes of correlating potential exposures. They attribute this to a lack of singular exposure to burn pit emissions, or emissions from any other individual item; instead, monitoring is done for the totality of air pollutants from all sources at the point of population exposure. An official from U.S. Africa Command, however, stated that efforts are underway in that command’s area of responsibility to conduct an air quality study specifically because of the use of burn pits by local nationals off the installation where DOD personnel are housed. Given the potential use of burn pits near installations and their potential use in future contingency operations, establishing processes to monitor burn pit emissions for unacceptable exposures would better position DOD and combatant commanders to collect data that could help assess exposure risks.
In the absence of data collected to examine the effects of burn pit exposure on servicemembers, the Department of Veterans Affairs in 2014 created the airborne hazards and open-air burn pit registry, which allows eligible individuals to self-report exposures to airborne hazards (such as smoke from burn pits, oil-well fires, or pollution during deployment), as well as other exposures and health concerns. The registry helps to monitor health conditions affecting veterans and servicemembers, and to collect data that will assist in improving programs to help those with deployment exposure concerns. With respect to the information on the health effects from exposure to burn pit emissions, DOD officials stated that there are short-term effects from being exposed to toxins from the burning of waste, such as eye irritation and burning, coughing and throat irritation, breathing difficulties, and skin itching and rashes. However, the officials also stated that DOD does not have enough data to confirm whether direct exposure to burn pits causes long-term health issues. Although DOD and the Department of Veterans Affairs have commissioned studies to enhance their understanding of airborne hazards, including burn pit emissions, the current lack of data on emissions specific to burn pits limits DOD’s ability to fully assess potential health impacts on servicemembers and other base personnel, such as contractors. For example, in a 2011 study contracted by the Department of Veterans Affairs, the Institute of Medicine stated that it was unable to determine whether long-term health effects are likely to result from burn pit exposure due to inadequate evidence of an association. While the study, because of the lack of data, did not establish a link to long-term health effects, it did not rule out such a relationship either. Rather, it outlined a methodology for collecting the necessary data to determine the effects of the exposure.
Specifically, the 2011 study outlined the feasibility and design issues for an epidemiologic study—that is, a study of the distribution and determinants of diseases and injuries in human populations—of veterans exposed to burn pit emissions. The elements of a well-designed epidemiologic study of the potential health effects of an environmental exposure include identification of a relevant study population of adequate size; comprehensive assessment of exposure; careful evaluation of health outcomes; adequate follow-up time; reasonable methods for controlling, confounding, and minimizing bias; and appropriate statistical analyses. Further, the 2011 study reported that there are a variety of methods for collecting exposure information, but the most desirable is to measure exposures quantitatively at the individual level. Individual exposure measurements can be obtained through personal monitoring data or biomonitoring. However, if individual monitoring data are not available, and they rarely are, individual exposure data may also be estimated from modeling of exposures, self-reported surveys, interviews, job exposure matrixes, and environmental monitoring. Further, to determine the incidence of chronic disease, the study states that servicemembers must be tracked from their time of deployment, over many years. The 2011 study goes into further detail on how data can be collected, with alternatives, and processed without biases. While the methodology of how to conduct an epidemiologic study is outlined, DOD has not taken steps to conduct this type of research study, specifically one that focuses on the direct, individual exposure to burn pit emissions and the possible long-term health effects of such exposure. Instead, some officials commented that there were no long-term health effects linked to the exposures of burn pits because the 2011 study did not acknowledge any. 
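The cohort arithmetic at the heart of such an epidemiologic study can be sketched briefly. The following is an illustrative example only, with invented numbers; it shows how, once individual-level exposure and follow-up data exist, the incidence of chronic disease in an exposed cohort would be compared against an unexposed cohort:

```python
# Illustrative sketch only -- not from the GAO report or the 2011
# Institute of Medicine study. All numbers are hypothetical.

def incidence_rate(cases, person_years):
    """Chronic-disease cases per 1,000 person-years of follow-up."""
    return 1000.0 * cases / person_years

# Hypothetical cohorts tracked from their time of deployment over many
# years, as the 2011 study recommends.
exposed_cases, exposed_py = 120, 50_000      # servicemembers near burn pits
unexposed_cases, unexposed_py = 80, 50_000   # comparable unexposed group

rate_exposed = incidence_rate(exposed_cases, exposed_py)        # 2.4
rate_unexposed = incidence_rate(unexposed_cases, unexposed_py)  # 1.6

# The rate ratio is the quantity such a study would test for a
# statistically significant elevation above 1.0.
rate_ratio = rate_exposed / rate_unexposed   # ~1.5
print(rate_exposed, rate_unexposed, rate_ratio)
```

The comparison is only as good as the exposure data behind it, which is why the 2011 study emphasizes individual-level measurement over extrapolation from ambient monitoring.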
Conversely, Veterans Affairs officials stated that a study aimed at establishing health effect linkages could be enabled by the data in its airborne hazards and open-air burn pit registry, which collects self-reported information on servicemembers' deployment location and exposure. Further, in response to a mandate contained in section 201 of Public Law 112-260, the National Academies of Sciences, Engineering, and Medicine will convene a committee to provide recommendations on collecting, maintaining, and monitoring information through the registry. The committee will assess the effectiveness of the Department of Veterans Affairs' information gathering efforts and provide recommendations for addressing the future medical needs of the affected groups. The study will be conducted in two phases. Phase 1 will be a review of the data collection methods and outcomes, as well as an analysis of the self-reported veteran experience data gathered in the registry. Phase 2 will focus on the assessment of the effectiveness of the actions taken by the Department of Veterans Affairs and DOD and will provide recommendations for improving the methods enacted. According to officials, the expected release date of the report is late fall of 2016. Considering the results of this review as well as the methodology of the 2011 Institute of Medicine study as part of an examination of the relationship between direct, individual exposure to burn pit emissions and long-term health effects could better position DOD to fully assess those health risks. For over three decades, DOD has understood that disposing of waste in burn pits poses health hazards. In light of its experience in Iraq and Afghanistan, CENTCOM has taken steps to reduce burn pit use in its area of responsibility through the use of alternative methods of waste disposal, such as incinerators. However, DOD likely cannot completely eliminate the need for burn pits in future contingency operations. 
Although CENTCOM has specific policies and procedures for burn pit operations in its area of responsibility, other geographic commands do not, potentially leaving them ill-prepared to plan for and to safely and effectively manage burn pits in the event of contingency operations in their respective geographic regions. Moreover, although DOD and the Department of Veterans Affairs have commissioned studies to enhance their understanding of airborne hazards during deployments, given that DOD may have to use burn pits in future contingency operations, as allowed under current policies, ensuring that research efforts specifically examine the relationship between direct, individual exposure to burn pit emissions and long-term health issues could help improve the understanding and potentially minimize risks related to such exposure. To better position combatant commanders to implement the requirements of DOD Instruction 4715.19 if burn pits become necessary and to assist in planning for waste disposal in future military operations, we recommend that the Secretary of Defense direct the combatant commanders of U.S. Africa Command, U.S. European Command, U.S. Pacific Command, and U.S. Southern Command to establish implementation policies and procedures for waste management. Such policies and procedures should include, as applicable, specific organizations within each combatant command with responsibility for ensuring compliance with relevant policies and procedures, including burn pit notification, and, when appropriate, monitoring and reporting on the use of burn pits. 
To better understand the long-term health effects of exposure to the disposal of covered waste in burn pits, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following two actions: Take steps to ensure CENTCOM and other geographic combatant commands, as appropriate, establish processes to consistently monitor burn pit emissions for unacceptable exposures. In coordination with the Secretary of Veterans Affairs, specifically examine the relationship between direct, individual burn pit exposure and potential long-term health-related issues. As part of that examination, consider the results of the National Academies of Sciences, Engineering, and Medicine's report on the Department of Veterans Affairs registry and the methodology outlined in the 2011 Institute of Medicine study that suggests the need to evaluate the health status of service members from their time of deployment over many years to determine their incidence of chronic disease, with particular attention to the collection of data at the individual level, including the means by which that data is obtained. We provided a draft of this report to DOD and the Department of Veterans Affairs for review and comment. In its written comments, DOD concurred with our first recommendation and partially concurred with our second and third recommendations. The Department of Veterans Affairs provided general comments on our draft report but did not comment specifically on the recommendations. DOD's and the Department of Veterans Affairs' comments are summarized below and reprinted in their entirety in appendixes I and II, respectively. Additionally, DOD and the Department of Veterans Affairs provided technical comments, which we incorporated into the report, as appropriate. DOD concurred with our first recommendation that the Secretary of Defense direct the combatant commanders of U.S. Africa Command, U.S. European Command, U.S. 
Pacific Command, and U.S. Southern Command to establish implementation policies and procedures for waste management, to include burn pit notification and, when appropriate, monitoring and reporting on the use of burn pits. DOD partially concurred with our second and third recommendations in the draft report, that it take steps to ensure that CENTCOM and other geographic combatant commands, as appropriate, establish processes to consistently monitor burn pit emissions and that the department, in coordination with the Secretary of Veterans Affairs, sponsor research to examine the relationship between burn pit exposure and potential health-related issues. In its response, DOD stated that it will ensure that geographic combatant commands establish and employ processes to consistently monitor burn pit emissions for unacceptable exposures at the point of exposure and, if necessary, at individual sources. However, DOD also stated that our recommendation in the draft report to sponsor research did not acknowledge the volume of research conducted and planned by the department and the Secretary of Veterans Affairs, in collaboration with other research entities. Specifically, DOD stated in its letter that research studies have already been completed, are ongoing, or are planned to improve the understanding of burn pit and other ambient exposures to long-term health outcomes, and that the studies, where applicable, consider and incorporate the methodology outlined in the 2011 Institute of Medicine study. Additionally, DOD stated that it has implemented an Airborne Hazards Joint Action Plan process collaboratively with the Department of Veterans Affairs as a primary means of identifying research needs within DOD and the Department of Veterans Affairs to address burn pit and other ambient air exposures during deployments. 
In our report, we acknowledge that DOD and the Department of Veterans Affairs have commissioned studies to enhance their understanding of airborne hazards during deployment, including burn pit emissions, many of which are listed in the department's response to our recommendation in the draft report. We also agree that the ongoing and planned research studies listed in DOD's response will continue to contribute to the general knowledge of health effects of airborne hazards during deployment. However, during the course of our review, DOD and other officials told us that they have not specifically examined the relationship between direct, individual burn pit exposure and potential long-term health-related issues. Further, the research studies presented in DOD's response to our draft recommendation do not directly make this linkage either. This current lack of data on direct, individual exposure to burn pit emissions limits DOD's ability to fully assess potential long-term health impacts on servicemembers. As we discussed in the report, the 2011 Institute of Medicine study outlines a methodology of how to collect the necessary data to determine the effects of exposure. Specifically, the 2011 study states that the most desirable method to measure exposures quantitatively is at the individual level. Individual exposure measurements can be obtained through personal monitoring data. The intent of the recommendation in the draft report was to address this linkage. Therefore, we have clarified our recommendation and our report and continue to believe that research that addresses individual exposure to burn pit emissions and the potential long-term health effects would help provide important information to fully understand and mitigate future health risks. 
In its general comments, the Department of Veterans Affairs noted that it coordinates with DOD in collecting data on veterans and servicemembers potentially exposed to burn pit emissions and other airborne exposures, to address relevant research needs. Additionally, the Department of Veterans Affairs stated that eligible servicemembers are urged to participate in the Airborne Hazards and Open Burn Pit Registry, and as of August 14, 2016, 84,958 individuals have completed an online questionnaire to elicit responses to multiple categories of health, among other things. We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretary of Veterans Affairs; the Under Secretary of Defense for Acquisition, Technology, and Logistics; the relevant combatant commanders; and the Chairman of the Joint Chiefs of Staff. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or russellc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. In addition to the contact named above, Guy LoFaro (Assistant Director), Lorraine Ettaro, J. Alfredo Gomez, Mike Hix, Shahrzad Nikoo, Leigh Ann Sheffield, Cheryl Weissman, Natasha Wilder, and Eugene Wisnoski made key contributions to this report.

Burn pits help base commanders manage waste generated by U.S. forces overseas, but they also produce harmful emissions that military and other health professionals believe may result in chronic health effects for those exposed. Section 313 of the NDAA for FY 2015 requires the Secretary of Defense to review DOD compliance with law and guidance regarding the disposal of covered waste in burn pits. DOD submitted a report on the results of its review in March 2016. 
Section 313 also includes a provision for GAO to assess DOD's report and its compliance with applicable DOD instruction and law. This report evaluates the extent to which (1) DOD's report addressed the elements required in section 313; (2) DOD, including combatant commands, issued guidance for burn pit use that addresses applicable legislative requirements; and (3) DOD has assessed any health risks of burn pit use. GAO compared DOD's report to elements required in section 313, reviewed policies and procedures and interviewed DOD officials. In assessing the Department of Defense's (DOD) March 2016 report to Congress on the use of burn pits, GAO found that it generally addressed the requirements in section 313 of the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2015. To complete this report, DOD tasked the military services, the Joint Staff, and the overseas combatant commands to provide information on the requirements in the mandate, including policies and procedures related to the disposal of covered waste (including certain types of hazardous waste, medical waste, and items such as tires, treated wood, and batteries) in burn pits during contingency operations. GAO found that DOD's report fully addressed four of the seven reporting requirements and partially addressed the remaining three. For example, the report addressed who is responsible for ensuring compliance with the legislative requirements, but partially addressed whether the waste categories are appropriately and clearly distinguished in surveys and assessments. Although DOD established guidance to meet applicable legislative requirements through the issuance of DOD Instruction 4715.19, U.S. Central Command is the only overseas geographic combatant command that has established complementary policies and procedures for implementing this guidance. 
The instruction applies to all the combatant commands, but it does not specify how combatant commanders will ensure compliance with requirements in the instruction. Officials from the other geographic combatant commands stated that their commands have not developed similar policies and procedures because they do not utilize burn pits and there is an absence of current contingency operations in their respective areas of responsibility. Nonetheless, while most of the overseas geographic commands may not currently be involved in contingency operations within their areas of responsibility, waste disposal would likely be required if such operations arise in the future, and the use of burn pits would be one option for disposing of waste. Establishing policies and procedures would better position these commands to implement DOD's instruction. The effects of exposing individuals to burn pit emissions are not well understood, and DOD has not fully assessed these health risks. DOD officials stated that there are short-term effects from being exposed to toxins from the burning of waste. However, the officials also stated that DOD does not have enough data to confirm whether direct exposure to burn pits causes long-term health issues. Although DOD and the Department of Veterans Affairs have commissioned studies to enhance their understanding of burn pit emissions, the current lack of data on emissions specific to burn pits and related individual exposures limits efforts to characterize potential long-term health impacts on servicemembers and other base personnel. A 2011 report by the Institute of Medicine outlined the data needed for assessing exposures and potential related health risks, and the Department of Veterans Affairs has established a registry to collect some information. However, DOD has not undertaken data-gathering and research efforts to specifically examine this relationship to fully understand any associated health risks. 
GAO made three recommendations, including establishing policies and procedures and ensuring that research specifically examines the relationship between direct burn pit exposure and long-term health issues. DOD concurred with the first recommendation and partially concurred with the second and third, citing research it has conducted or plans to conduct. GAO agrees this research contributes to general understanding but continues to believe more specific research is needed.
NRC is responsible for ensuring that the nation’s 103 operating commercial nuclear power plants pose no undue risk to public health and safety. According to one study, as many as 26 of the nation’s nuclear sites are vulnerable to shutdown because production costs are higher than the projected market prices of electricity. The analysis also estimates that 39 plants whose operating licenses are scheduled to expire by 2020 will seek to extend their licenses. Since the early 1980s, NRC has been increasing the use of risk information in the regulatory process. For example, in 1986, the agency issued safety goals that, according to NRC staff, supported the use of risk analyses in making regulatory decisions. In August 1995, NRC issued a policy statement advocating certain changes in the development and implementation of its regulations through a risk-informed approach. Under such an approach, NRC and the utilities would give more emphasis to those structures, systems, and components deemed more safety significant. The following example illustrates the difference between NRC’s traditional approach and a risk-informed approach: One nuclear utility identified about 635 valves and 33 pumps that must be operated, maintained, tested, and replaced at one plant, according to NRC’s traditional regulations. However, about 515 valves and 12 pumps present a low safety risk while 120 valves, 21 pumps, and 25 components present a high safety risk. Under a risk-informed approach, NRC has approved the utility’s concentrating on the elements presenting a high safety risk while continuing to comply with NRC’s traditional regulations for the remaining elements but at less frequent intervals. Early in calendar year 1998, the Nuclear Energy Institute (NEI) contracted with the Center for Strategic and International Studies to examine NRC’s regulatory processes. NRC, the Union of Concerned Scientists, and others are members of the steering committee for the study. 
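The risk-informed prioritization illustrated by the valve-and-pump example above can be tallied in a few lines. The sketch below is hypothetical: the component counts come from the utility example, but the grouping of the 25 other components and the inspection intervals are invented for illustration and are not NRC requirements.

```python
# Hypothetical sketch of risk-informed prioritization. Counts are taken
# from the utility example in the text; the inspection intervals (in
# months) are invented for illustration only.

components = {
    "high_risk": {"valves": 120, "pumps": 21, "other": 25},
    "low_risk":  {"valves": 515, "pumps": 12},
}

# Under the traditional approach, every item receives the same level of
# attention; under a risk-informed approach, oversight effort follows
# safety significance, so high-risk items are tested more frequently.
inspection_interval_months = {"high_risk": 3, "low_risk": 18}

total_valves = sum(g.get("valves", 0) for g in components.values())
total_pumps = sum(g.get("pumps", 0) for g in components.values())
print(total_valves, total_pumps)  # the ~635 valves and 33 pumps cited above
```

The point of the tally is that the low-risk group, although much larger, consumes far less regulatory and maintenance effort once items are ranked by safety significance.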
The Center’s review focuses on answering three questions: What is NRC’s safety expectation? Are NRC’s rules and regulations properly focused on safety? Are NRC’s processes focused on safety? According to the Director of NRC’s Office of Nuclear Regulatory Research, the steering committee for the study discussed whether the Center would define “an acceptable level of safety.” Recognizing that providing such a definition is a difficult and challenging task that NRC and others have attempted over the years, the study’s steering committee believed that the Center should focus instead on how safe NRC expects commercial nuclear power plants to be and how consistently NRC applies that expectation to the plants. The Center expects to issue its report in April 1999. Commercial nuclear power plants will continue to generate electricity for some time in the future. NRC issues a plant operating license for 40 years. After 20 years, a utility can apply to extend the license for an additional 20 years. Table 1 shows the time frames during which the existing plant licenses could expire. The Energy Policy Act of 1992 has resulted in the restructuring of the nation’s electric power industry and the emergence of competition in the business of electricity generation. As the electric utility industry is restructured, operating and maintenance costs will affect the competitiveness of nuclear power plants. Competition challenges NRC to ensure that utilities do not compromise safety through cost-cutting measures. As of February 1999, 18 states had implemented plans to restructure the electric utility industry by enacting legislation or adopting final orders. In the 13 states that have enacted legislation, utilities operate 34 nuclear power plants that produce between 20 percent and 59 percent of the states’ electricity. 
In the 5 states that have adopted final orders without enacting legislation, utilities operate 17 nuclear plants that supply between 15 percent and 74 percent of the states’ electricity. Competition will pose difficult issues for some nuclear utilities, and efforts to achieve economies of scale will spur the growth of nuclear operating companies as a means of minimizing overhead and maximizing institutional experience. Other cost reduction efforts being pursued by the industry include mergers, acquisitions, the use of contract operators, and spin-offs of generating assets. For example, Alabama Power Company and Georgia Power formed a subsidiary—Southern Nuclear Operating Company—to operate six plants for the utilities. In addition, in July 1998, AmerGen Energy Company, a joint venture formed in 1997 by PECO Energy Company and British Energy, announced plans to purchase Three Mile Island 1 from GPU Nuclear Corporation. Furthermore, Duke Power bought Pan Energy (a gas company) as a means of diversifying its operations. Given the added economic pressures competition is likely to bring, NRC will need to continue to be vigilant to ensure that the decisions utilities make primarily in response to economic pressures are not detrimental to public health and safety. NRC, NEI, and many utility executives believe that the key for nuclear plants to compete is efficient plant operations. To achieve such efficiency, NRC and NEI believe that fewer and fewer companies will operate more and more of the existing nuclear plants. Consolidation will allow companies to achieve economies of scale in, for example, their refueling and engineering staffs. Some experts believe that in the future only 5 to 10 companies will operate all nuclear power plants to ensure cost efficiency and survive in a competitive environment. 
NRC staff estimate that it could take 4 to 8 years to implement a risk-informed regulatory approach and are working to resolve many issues to ensure that the new approach does not endanger public health and safety. Although NRC has issued guidance for utilities to use risk assessments to meet regulatory requirements for specific activities and has undertaken many activities to implement a risk-informed approach, more is needed to (1) ensure that utilities have current and accurate documentation on the design of each plant and its structures, systems, and components, as well as safety analysis reports that reflect changes to the design and other analyses conducted after NRC issued the plant's operating license; (2) ensure that utilities make changes to their plants on the basis of complete and accurate design and safety analysis information; (3) determine whether and what aspects of NRC's regulations should be risk-informed; (4) develop standards on the scope and detail of the risk assessments needed for utilities to determine that changes to their plants' design will not negatively affect safety; and (5) determine the willingness of utilities to adopt a risk-informed approach. Whether NRC uses a traditional or a risk-informed regulatory approach, it must have current and accurate documentation to oversee nuclear plants. These documents include the (1) design of the plant and of the structures, systems, and components within it and (2) safety analysis reports that reflect changes to the design and other analyses conducted (including those related to the process that allows utilities to change their plants without obtaining NRC's approval) since NRC issued the operating license. To effectively implement a risk-informed approach, NRC must have confidence that each plant's design reflects current safety requirements and that accurate baseline information exists for each plant. Without such information, neither NRC nor the utility can determine the safety consequences of making changes to the plant. 
For more than 10 years, NRC has questioned whether utilities have accurate, available, and current information on the design of their plants. Inspections of 26 plants completed early in fiscal year 1999 confirmed that (1) some utilities had not maintained accurate design documentation; (2) with some exceptions, NRC had assurance that safety systems would perform as intended at all times; and (3) NRC needed to clarify what constitutes design information. NRC staff expect to recommend an approach to the Commission in June 1999 to clarify design information and seek approval to obtain public comments on the recommended approach. NRC staff could not estimate when the agency would complete this effort but said that the agency would oversee design information issues using such tools as safety system engineering inspections. In addition, in 1993, NRC found that Northeast Nuclear Energy Company for many years had taken actions at its Millstone Unit 1 plant that were not allowed under its updated safety analysis report. Since that time, NRC has not had confidence that some utilities update their safety analysis reports as required following analyses and changes that modify the existing descriptions or create new descriptions of facilities or their operating limits. Failure to update the reports results in poor documentation of the plants’ safety bases. As a result of the lessons learned from Millstone and other initiatives, NRC determined that additional guidance is needed to ensure that utilities update their safety analysis reports to reflect changes to the design of their plants, as well as the results of analyses performed since NRC issued the plants’ operating licenses. On June 30, 1998, the Commission directed the staff to work with NEI to finalize the industry’s guidelines on updating safety analysis reports, which NRC could then endorse in a regulatory guide. NRC expects to endorse the guidelines by the end of September 1999. 
Furthermore, for more than 30 years, NRC’s regulations have provided a set of criteria that utilities must use to determine whether they may change their facilities (as described in their safety analysis reports) or procedures or conduct tests and experiments without NRC’s prior approval. The finding in 1993 that Millstone Unit 1 had taken actions that were not allowed by its updated safety analysis report led NRC to question this regulatory framework. As a result, NRC staff initiated a review to identify the short- and long-term actions needed to improve the change process. For example, in October 1998, NRC published a proposed regulation on plant changes in the Federal Register for comment; the comment period ended on December 21, 1998. NRC requested comments on criteria for identifying changes that require an amendment to a plant’s license and on a range of options, several of which would allow utilities to make changes without NRC’s prior approval, despite a potential increase in the probability or consequences of an accident. NRC expects to issue a final rule in June 1999. In addition, in December 1998, NRC staff provided their views to the Commission on changing the scope of the regulation to consider risk information. NRC’s memorandum that tracks the various tasks related to a risk-informed approach did not show when NRC would resolve this issue. According to NRC staff, they will develop a plan to implement the Commission’s decision after it is received. Until recently, NRC did not consider whether and to what extent it should revise its regulations pertaining to commercial nuclear plants to make them risk informed. Revising the regulations will be a formidable task because, according to NRC staff, the regulations are inconsistent and a risk-informed approach would focus on the safety significance of structures, systems, or components, regardless of where they are located in a plant. 
NRC staff and NEI officials agree that the most critical issues in revising the regulations will be to define their scope (that is, whether the regulations will consider risk, as well as the meaning of such concepts as “important to safety” and “risk significant”) and to integrate the traditional and risk-informed approaches into a cohesive regulatory context. After defining the scope of the regulations, NRC can determine how to regulate within the revised context. In October 1998, NEI proposed a phased approach to revise the regulations. Under this proposal, by the end of 1999, NRC would define “important to safety” and “risk significant.” By the end of 2000, NRC would use the definitions in proposed rulemakings for such regulations as those on the definition of design information and the environmental qualification of electrical equipment. By the end of 2003, NEI proposes that NRC address other regulatory issues, such as the change process, the content of technical specifications, and license amendments. After 2003, NEI proposes that NRC address other regulations on a case-by-case basis. NRC staff agreed that the agency must take a phased approach when revising its regulations. The Director, Office of Nuclear Regulatory Research, said that if NRC attempted to revise all provisions of the regulations simultaneously, it might accomplish very little. The Director said that NRC needs to address one issue at a time while concurrently working on longer-term actions. He cautioned, however, that once NRC starts, it should commit itself to completing the process. In January 1999, NRC staff presented their proposal to the Commissioners. At that meeting, the Chairman suggested a more aggressive approach that would entail a risk-informed approach for all relevant regulations across the board. NRC’s memorandum tracking the various tasks involved in implementing a risk-informed approach did not show when the agency would resolve this issue. 
NRC and the industry view risk assessments as one of the main tools for identifying and focusing on those structures, systems, or components of nuclear plant operations having the greatest risk. Yet neither NRC nor the industry has a standard that defines the quality, scope, or adequacy of risk assessments. NRC staff are working with the American Society of Mechanical Engineers to develop such a standard. However, this issue is far from being resolved. The Society is developing the standard for risk assessments in two phases. The first phase would address assessments of the probability of accidents initiated by a certain set of events internal to the plant; the second phase would address accidents initiated by events external to the plant, such as earthquakes, or occurring while the plant is shut down. NRC staff estimate that the agency would have a final standard for the first phase by June 2000 but could not estimate when the second phase would be complete. To ensure consistency with other initiatives, in December 1998, NRC staff sought direction from the Commission on the quality of risk assessments needed to implement a risk-informed approach. In the meantime, the lack of a standard could affect NRC’s efforts to implement a risk-informed regulatory approach. According to NRC staff, they recognize that limitations exist with risk assessment technology and are working, and will continue to work, to enhance the technology. In addition, in the past, operational data needed to enhance the quality of risk assessments were not available for some critical structures, systems, or components. Utilities had to extrapolate the information from like systems in other industrial applications. Today, the reliability and availability of data for performing risk assessments are enhanced in many areas by almost 40 years of operational experience. 
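The kind of calculation such a risk assessment standard would govern can be sketched in a few lines. The following hypothetical example shows the point-estimate arithmetic of a single accident sequence; the event names and all numbers are invented and are not drawn from any NRC or industry standard.

```python
# Illustrative point-estimate arithmetic underlying a probabilistic risk
# assessment. Event names and numbers are hypothetical.

initiating_event_per_year = 1e-2   # e.g., loss of offsite power
p_emergency_power_fails = 1e-3     # on-site emergency power fails on demand
p_backup_cooling_fails = 5e-3      # backup cooling unavailable on demand

# One accident sequence: initiator AND emergency power fails AND backup
# cooling fails (failures assumed independent for simplicity).
sequence_frequency = (initiating_event_per_year
                      * p_emergency_power_fails
                      * p_backup_cooling_fails)   # ~5e-8 per reactor-year

# A full assessment sums many such sequences; ranking structures,
# systems, and components by their contribution to the total is what
# identifies the "risk-significant" items.
print(f"{sequence_frequency:.1e} per reactor-year")
```

The quality issues NRC and the American Society of Mechanical Engineers are working to standardize concern how defensibly the failure probabilities in such calculations are derived, which is why the operational data discussed below matter.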
Much of this information is disseminated to other utilities, partly because, in a regulated environment, the utilities do not compete with one another for market share. However, under the approaching deregulated environment, nuclear utilities will compete for market share—with each other as well as with other generators of electric power. As a result, the utilities may no longer want to share proprietary operational data previously available to upgrade the quality of risk assessments. NRC has already acted as a clearinghouse to disseminate the results of examinations undertaken at its direction to determine each plant’s vulnerabilities to severe accidents. For example, in December 1997, NRC reported on improvements made to individual plants as a result of the utilities’ examinations, the collective results of the examinations, plant-specific design and operational features, the modeling assumptions that significantly affected estimates of how frequently the reactor core is damaged and how well the plant contains radiation, and the strengths and weaknesses of the models and methods used by the utilities to perform the examinations. However, NRC does not plan to collect and disseminate this information on a regular basis. In December 1998, NRC staff recommended that compliance with revised risk-informed regulations be voluntary, noting that it would be very difficult to show that requiring compliance would increase public health and safety as required by the backfit rule. The staff also noted that requiring compliance could create the impression that current plants were less safe. The staff’s recommendation did not indicate the number of utilities that would be interested in a risk-informed approach. In commenting on a draft of this report, NRC said that the number of utilities likely to operate under risk-informed regulations would depend on economic judgments the utilities would make once the Commission clarifies the details of a risk-informed regime. 
In January 1999, the Commissioners expressed concern about a voluntary approach, believing that it would create two classes of plants operating under two different sets of regulations. Nevertheless, in commenting on a draft of this report, NRC said that compliance would be voluntary. Our discussions with officials from 10 utilities that operate 16 nuclear plants and NRC documents showed that utilities may be reluctant to shift to a risk-informed regulatory approach for various reasons. First, the number of years remaining on a plant’s operating license is likely to influence the utility’s views. NRC acknowledged that if a plant’s license is due to expire in 10 years or less, then the utility may not have anything to gain by changing from the traditional approach. Second, considering the investment that will be needed to develop risk-informed procedures and operations and to identify safety-significant structures, systems, or components, utilities may question whether a switch will be worth the reduction in regulatory burden and cost savings that may result. Third, design differences and age disparities among plants make it difficult for NRC and the industry to determine how, or to what extent, a standardized risk-informed approach can be implemented across the industry. Although utilities built one of two types of plants—boiling water or pressurized water reactors—each has design and operational differences. Thus, each plant is unique, and a risk-informed approach would require plant-specific tailoring. Utility officials with whom we spoke confirmed the issues discussed above and revealed the range of views they hold. The official of a small, single-unit utility said that because a limited number of years remained on the plant’s license, the utility would not be able to realize many benefits from a risk-informed approach. 
An official from another utility told us that the company has been focusing its attention on replacing steam generators and did not know if it could find the resources needed to comply with a risk-informed approach. Another official said that the utility has a risk assessment that works for that plant but is less detailed and costly than risk assessments prepared by some utilities for newer, larger plants. Several officials said that their utilities were planning to use risk assessments more in the future than in the past and that any changes to the plants or operating procedures would have to demonstrate benefits through a cost/benefit analysis. Another official said that the utility wants to move cautiously in applying risk assessments at its plants because it does not want to undo other aspects of its operations that could affect safety. Several officials noted that they are monitoring the actions that NRC eventually takes concerning a graded quality assurance pilot project implemented at the South Texas nuclear power plant. According to staff, NRC approved the pilot project, but the utility has not realized the expected benefits because of constraints imposed by other regulations. NRC staff said that they will address the constraints if the agency takes a risk-informed approach to its regulations. Other utility officials said they have a “living” risk assessment that is updated frequently. They said that their utilities have used the assessment to support applications for license amendments and to determine the impact of NRC’s inspection findings on the plants. Since the early 1980s, NRC has been increasing the use of risk information in its regulatory process. NRC staff estimate that it will be at least 4 to 8 years before the agency implements a risk-informed approach. However, NRC has not developed a strategy that includes objectives, time lines, and performance measures for such an approach. 
Rather, NRC has developed an implementation plan, in conjunction with its policy statement on considering risk, that is a catalog of about 150 separate tasks and milestones for their completion. It has also developed guidance for some activities, such as pilot projects in the four areas where the industry wanted to test the application of a risk-informed approach. Furthermore, in August 1998, the Executive Director for Operations identified high-priority areas—including risk-informed regulation, inspection, enforcement, and organizational structure—and provided short- and long-term actions and milestones to address each of the areas. NRC has revised the schedules for completing some of the identified actions several times since August 1998. Given the complexity and interdependence of NRC’s requirements as reflected in regulations, plant designs, safety documents, and the results of ongoing activities, it is critical that NRC clearly articulate how the various initiatives will help achieve the goals set out in its 1995 policy statement supporting risk-informed regulation. Although NRC’s implementation plan sets out tasks and expected completion dates, it is not a strategy with goals and objectives. Specifically, it does not ensure that short-term efforts are building toward NRC’s longer-term goals or link the various ongoing initiatives; help the agency identify the staffing levels, training, skills, and technology needed—or the timing of those activities—to implement a risk-informed approach; provide a link between the day-to-day activities of program managers and staff and the objectives set out in the policy statement; and address the manner in which NRC would establish baseline information about the plants to assess the impact on safety of a risk-informed approach. 
Establishing such a baseline may be particularly important because NRC, NEI, and the Union of Concerned Scientists do not believe that the agency can demonstrate the industrywide impact of implementing such an approach. Therefore, if NRC subsequently determines that it wants or needs to demonstrate the impact of a risk-informed approach on safety, the agency will have to do so on a plant-by-plant basis. A comprehensive strategy could also enhance NRC’s efforts to comply with the Government Performance and Results Act of 1993. The Results Act requires federal agencies to develop goals, objectives, strategies, and performance measures in the form of a 5-year strategic plan, an annual performance plan, and, beginning in fiscal year 2000, an annual program performance report assessing the agency’s success in achieving the goals set out in the prior year’s performance plan. The annual performance plan would give NRC the opportunity to clearly specify the actions it will take to achieve its risk-informed strategy and the resources, training, and other skills needed to do so. The annual assessment report would give the Congress and the public an opportunity to determine the extent to which NRC has achieved its goals. In a December 1998 memorandum, NRC staff said that once the Commission provides direction on whether and how to apply a risk-informed approach to the regulations and guidance on the quality of risk assessments, they would develop a plan to implement the direction provided. The staff did not estimate how long it would take to complete the plan. The nuclear industry and public interest groups have criticized NRC’s plant assessment and enforcement processes, saying that they lack objectivity, consistency, and predictability. As part of its risk-informed initiatives, in January 1999, NRC proposed a new process to assess overall plant safety using industrywide and plant-specific safety thresholds and performance indicators. 
NRC is also reviewing its enforcement process to ensure consistency with the direction recommended by the staff for the assessment process and other programs. In 1997 and 1998, we noted that NRC’s process to focus attention on plants with declining safety performance needed substantial revisions to achieve its purpose as an early warning tool and that NRC did not consistently apply the process across the industry. We also noted that this inconsistency has been attributed, in part, to a lack of specific criteria, the subjective nature of the process, and the confusion of some NRC managers about their role in the process. NRC acknowledged that it should do a better job of identifying plants deserving increased regulatory attention and said that it was developing a new process that would be predictable, nonredundant, efficient, and risk informed. In January 1999, NRC proposed a new safety assessment process that includes seven “cornerstones.” For each cornerstone, NRC will identify the desired result, important attributes that contribute to achieving the desired result, areas to be measured, and various options for measuring the identified areas. Three issues cut across the seven cornerstones: human performance, safety consciousness in the work environment, and problem identification and resolution. As proposed, NRC’s plant assessment process would use performance indicators; inspection results; utilities’ self-assessments; and clearly defined, objective thresholds for making decisions. The process is anchored in a number of principles, including the beliefs that (1) a certain level of safety performance could warrant decreased NRC oversight, (2) performance thresholds should be set high enough to permit NRC to arrest declining performance, (3) NRC must assess both performance indicators and inspection findings, and (4) NRC will establish a minimum level of inspections for all plants (regardless of performance). 
Although some performance indicators would apply to the industry as a whole, others would be plant specific and would depend, in part, on the results of utilities’ risk assessments. However, as stated earlier, the quality of risk assessments varies considerably among utilities. NRC expects to use a phased approach to implement the revised plant assessment process. Under this approach, it plans to begin pilot testing the use of risk-informed performance indicators at 13 plants in June 1999, fully implement the process by January 2000, and complete an evaluation and propose any adjustments or modifications needed by June 2001. Between January 1999 and January 2001, NRC expects to work with the industry and other stakeholders to develop a comprehensive set of performance indicators to more directly assess plants’ performance relative to the cornerstones. When it is impractical or impossible to develop performance indicators, NRC plans to use its inspections and utilities’ self-assessments to reach a conclusion about plants’ performance. NRC’s proposed process illustrates an effort by the current Chairman and other Commissioners to improve NRC’s ability to help ensure the safe operation of the nation’s nuclear plants, as well as address the industry’s concerns about excessive regulation. By ensuring consistent implementation of the process ultimately established, the Commissioners would further demonstrate their commitment to this process. NRC has revised its enforcement policy more than 30 times since implementing it in 1980. These revisions reflect changing requirements, regulatory policy, and enforcement philosophy. Although NRC has attempted to make the policy more equitable, the industry has had long-standing problems with it. Specifically, NEI believes that the policy is not safety related, timely, or objective. 
Among the more contentious issues are NRC’s practice of aggregating lesser violations for enforcement purposes and NRC inspectors’ use of the term “regulatory significance.” To facilitate a discussion of the enforcement program, including these two contentious issues, NRC asked NEI and the Union of Concerned Scientists to review 56 enforcement actions that it had taken during fiscal year 1998. For example, NEI reviewed the enforcement actions on the basis of specific criteria, such as whether the violation that resulted in an enforcement action could cause an off-site release of radiation, on-site or off-site exposures to radiation, or damage to the reactor core. Overall, the Union of Concerned Scientists concluded that NRC’s enforcement actions were neither consistent nor repeatable and that the enforcement actions did not always reflect the severity of the offenses. According to NRC staff, they met with various stakeholders in December 1998 and February 1999 to discuss issues related to the enforcement program. NRC inspectors’ use of the term “regulatory significance” is an issue, according to NEI and the Union of Concerned Scientists, because inspectors use the term when they cannot define the safety significance of a violation. Then, when a violation to which the term has been applied results in a financial penalty, the utility does not understand the reason for the financial penalty and cannot explain to the public whether the violation presented a safety concern. NEI has proposed a revised enforcement process. NRC is reviewing this proposal, as well as other changes to the enforcement process, to ensure consistency with the draft plant safety assessment process and other changes being proposed as NRC moves to risk-informed regulation. NRC staff expect to provide recommendations to the Commission in March 1999 on the use of the term “regulatory significance” and in May 1999 on the consideration of risk in the enforcement process. 
Effective regulation, whether traditional or risk informed, needs to be anchored in information that adequately describes the design and safety parameters of a plant, changes to the plant’s design and operations that affect safety, and assessments that define the structures, systems, or components that are safety significant. Yet NRC does not have assurance that this information is available and accurate. Although the Nuclear Energy Institute, speaking for the industry, has embraced the risk-informed approach as a solution to overregulation by NRC, some utilities do not see the benefits of a risk-informed approach because they consider it too costly or inappropriate for the size and age of their plants. Since NRC has stated that compliance will be voluntary, the agency will be regulating under two different systems—a situation that will compound challenges in an already complex regulatory environment. In addition, NRC has no comprehensive strategy to guide the process of moving to a risk-informed regulatory approach. A strategy would provide NRC and the industry with a framework for implementing a risk-informed approach. This framework would identify the interrelationships of the various components, establish time lines, and define goals and performance measures. Such a strategy would identify the costs and benefits of a risk-informed approach, indicate which utilities would be regulated in a risk-informed environment, and provide information on the cost and approach for NRC’s future regulation. The strategy could also provide a mechanism to foster continued information sharing so that the quality of risk assessments and NRC’s risk-informed initiative would not suffer in a competitive environment. NRC’s new approach to assessing nuclear plant safety performance should provide valuable lessons and insights as NRC changes more of its processes and regulations to consider risk information. 
But whatever processes NRC ultimately adopts must be consistent, visible, and clear. The need for clarity in NRC’s processes may be even more important today than it has been in the past. In a competitive environment, utilities will not always be able to pass the costs of regulatory compliance on to consumers. Yet because of concerns about the risks of catastrophic accidents, the public will continue to pressure NRC and the industry to explain their actions. A clearly defined strategy would help both NRC and the utilities address the public’s concerns. To help ensure the safe operation of plants and the continued protection of public health and safety in a competitive environment, we recommend that the Commissioners of NRC direct the staff to develop a comprehensive strategy that includes but is not limited to objectives, goals, activities, and time frames for the transition to risk-informed regulation; specifies how the Commission expects to define the scope and implementation of risk-informed regulation; and identifies the manner in which it expects to continue the free exchange of operational information necessary to improve the quality and reliability of risk assessments. We provided a copy of a draft of this report to the Nuclear Regulatory Commission for its review and comment. Although the Nuclear Regulatory Commission did not comment on our recommendation, the agency stated that its strategic plan and 1995 policy statement specify its goals and objectives to implement a risk-informed approach and that its efforts are supported by the planning, budgeting, and performance management process. The Nuclear Regulatory Commission also noted that it has issued regulatory guidance documents to implement the strategic plan, policy statement, and 1986 safety goals. The Nuclear Regulatory Commission said that it actively supports the development of risk assessment standards and will continue to develop methods and tools to improve the assessments. 
In addition, the Nuclear Regulatory Commission said that we did not sufficiently recognize its many ongoing risk-informed initiatives and progress. We did not change the report to recognize the agency’s concerns because we believe that we provided sufficient information on the status of its and/or the nuclear industry’s activities for each of the initiatives that we discussed. The Nuclear Regulatory Commission also commented that the report raises issues that it, the nuclear industry, and other stakeholders are addressing. We acknowledge that the agency has identified and is working to resolve the issues addressed in the report, as well as many other initiatives. However, given the complexity and interdependence of its efforts, we continue to believe that the Nuclear Regulatory Commission needs a comprehensive strategy that includes clearly defined goals and objectives; clear links between and among its various initiatives; identified staffing levels, training, skills, and technology needs; and a link between the day-to-day activities of program managers and staff. Without such information, the Nuclear Regulatory Commission does not have a mechanism to ensure that its short-term efforts are building toward its longer-term goals and to help staff understand when and if activities will affect them. In addition, such a strategy would flow from—and not duplicate—its strategic planning efforts and planning, budgeting, and performance management process to help ensure that the agency is moving in the right direction. The Nuclear Regulatory Commission provided several clarifying comments that we have incorporated, where appropriate. The agency’s letter and our response to its specific comments are provided in appendix I. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 14 days after the date of this letter. 
At that time, we will send copies to the Honorable Shirley Ann Jackson, Chairman, Nuclear Regulatory Commission; the Honorable Greta Joy Dicus, the Honorable Nils J. Diaz, the Honorable Edward McGaffigan, Jr., and the Honorable Jeffrey S. Merrifield, Commissioners, Nuclear Regulatory Commission; and the Honorable Jacob Lew, Director, Office of Management and Budget. We will make copies available to other interested parties on request. We conducted our work from May 1998 through February 1999 in accordance with generally accepted government auditing standards. Appendix II provides details on our scope and methodology. If you or your staff have any questions about this report, please call me on (202) 512-3841. Other major contributors to this report are listed in appendix III. The following are GAO’s comments on NRC’s letter dated March 5, 1999. 1. We have not included NRC’s suggested language in the report. NRC says that all utilities have sufficiently current and accurate information to support a risk-informed, but not a risk-based, approach. Yet NRC found as late as several months ago that some utilities did not have complete and accurate design information. Until NRC resolves this issue, we do not believe that a foundation exists upon which to move forward with a risk-informed approach. 2. We did not state that regulations do not provide reasonable assurance of adequate protection to the health and safety of the public. Our conclusion is based on the fact that NRC has not resolved many fundamental issues needed to implement a risk-informed approach. Therefore, we have not changed our report. Senators Joseph R. Biden, Jr., and Joseph I. Lieberman asked us to examine various issues related to the safe operation of commercial nuclear power plants. On the basis of discussions with their offices, we agreed to answer the following questions: What challenges will the Nuclear Regulatory Commission (NRC) and the nuclear industry experience in a competitive environment? 
What issues does NRC need to resolve to move forward with risk-informed regulation? What is the status of NRC’s efforts to apply a risk-informed regulatory approach to two of its oversight programs—plant safety assessments and enforcement? We reviewed prior General Accounting Office reports; relevant sections of the Atomic Energy Act of 1954, as amended; and NRC regulations, staff requirement memorandums, and various analyses provided by the Executive Director for Operations or other offices for the Commission’s consideration. We also reviewed NRC’s responses to questions resulting from the July 1998 hearing before the Subcommittee on Clean Air, Wetlands, Private Property, and Nuclear Safety, Senate Committee on Environment and Public Works. To determine the pressures that the nuclear industry will experience in a competitive environment, we reviewed Standard and Poor’s World Energy Service: U.S. Outlook (Apr. 1998) and the Nuclear Energy Institute’s (NEI) Nuclear Energy: 2000 and Beyond—A Strategic Document for Nuclear Energy in the 21st Century (May 1998). We also examined NRC’s Office of Inspector General’s June 1998 report on the results of the safety culture and climate survey conducted in the fall of 1997. In addition, we obtained the Energy Information Administration’s Status of Electric Industry Restructuring by State and Electric Power Monthly (Jan. 1999). We also met with officials from Energy Resources International, Inc., and reviewed an October 1998 report, Impacts of the Kyoto Protocol on U.S. Energy Markets and Economic Activity, to obtain views on the future of nuclear power. To determine the issues that NRC needs to resolve to move forward with a risk-informed approach, we reviewed comments that NRC received on its May 1997 proposed regulatory guidance on the process that allows utilities to change their plants without NRC’s prior approval and on its October 1998 proposed regulations for implementing the change process. 
We also reviewed various analyses prepared by NEI, including guidelines for the conduct of safety evaluations required by the change process. We contacted 10 utilities that operate 16 nuclear plants to obtain their views on a risk-informed regulatory approach. We selected the utilities on the basis of information provided by NRC on the quality of their risk assessments, as well as discussions with NRC staff. We attended meetings held by the Advisory Committee on Reactor Safeguards on risk assessment and the change process, a public workshop held by NRC on its risk-informed regulation (July 22, 1998), and meetings held by the Commission in July 1998 and November 1998 with various stakeholders, including NEI, the Union of Concerned Scientists, the World Association of Nuclear Operators, and utility officials. We also attended the January 1999 briefing by NRC staff to the Commissioners on their proposed approach to making the regulations that apply to nuclear power plants risk informed. We met with staff responsible for NRC’s initiatives related to design information, safety analysis reports, the change process, and risk-informed regulation, as well as with knowledgeable representatives of NEI, the Union of Concerned Scientists, and Public Citizen’s Critical Mass Energy Project. To determine the status of NRC’s efforts to make its plant safety assessments and enforcement programs risk informed, we attended a public workshop held by NRC on its proposed process (from Sept. 28, 1998, through Oct. 1, 1998) and meetings held by the Commission in July 1998 and November 1998 with various stakeholders, including NEI, the Union of Concerned Scientists, the Institute for Nuclear Power, and utility officials. In addition, we reviewed NRC’s January 1999 proposed plant safety assessment process, as well as an Assessment of the NRC Enforcement Program (NUREG-1525, Apr. 1995), the NRC Enforcement Policy Review: July 1995 - July 1997 (NUREG-1622, Apr. 
1998), and the General Statement of Policy and Procedures for NRC Enforcement Actions (NUREG-1600, Rev. 1, May 1998). We also reviewed NEI’s proposal related to a risk-informed, performance-based assessment, inspection, and enforcement process. We met with staff responsible for NRC’s initiatives related to plant safety assessments and enforcement, as well as with knowledgeable representatives of NEI, the Union of Concerned Scientists, and Public Citizen’s Critical Mass Energy Project. Major contributors to this report were Vondalee Hunt, Gary Jones, Mary Ann Kruslicky, and Michael Rahl. 
| Pursuant to a congressional request, GAO examined various issues related to the safe operation of commercial nuclear power plants, focusing on: (1) some of the challenges that the Nuclear Regulatory Commission (NRC) and the nuclear power industry could experience in a competitive environment; (2) issues that NRC needs to resolve to implement a risk-informed regulatory approach; and (3) the status of NRC's efforts to apply a risk-informed regulatory approach to two of its oversight programs--plant safety assessments and enforcement. GAO noted that: (1) Congress and the public need confidence in NRC's ability to ensure that the nuclear industry performs to the highest safety standards; (2) as the electric utility industry is restructured, operating and maintenance costs will affect the competitiveness of nuclear power plants; (3) competition challenges NRC to ensure that safety margins are not compromised by utilities' cost-cutting measures and that the decisions utilities make in response to economic considerations are not detrimental to public health and safety; (4) NRC has not developed a comprehensive strategy that could move its regulation of the safety of nuclear plants from its traditional approach to an approach that considers risk information; (5) in addition, NRC has not resolved certain basic issues; (6) some utilities do not have current and accurate design information for their nuclear power plants, which is needed for a risk-informed approach; (7) neither NRC nor the nuclear utility industry has standards that define the quality or adequacy of the risk assessments that utilities use to identify and measure risks to public health and the environment; (8) furthermore, NRC has not determined the willingness of utilities to adopt a risk-informed approach; (9) according to NRC staff, they are aware of these and other issues and have undertaken activities to resolve them; (10) in January 1999, NRC released for comment a proposed risk-informed process to 
assess the overall safety of nuclear power plants; (11) this process would establish industrywide and plant-specific safety thresholds and indicators to help NRC assess plant safety; (12) NRC expects to phase in the new process over the next 2 years and evaluate it by June 2001, at which time NRC plans to propose any adjustments or modifications needed; (13) in addition, NRC has been examining its enforcement program to make it consistent with, among other things, the proposed process for assessing plant safety; (14) the nuclear industry and public interest groups have criticized the enforcement program as subjective; and (15) in the spring of 1999, NRC staff expect to provide the Commission with recommendations for revising the enforcement program. |
Even after several years of defense downsizing, DOD operates hundreds of major military bases and many smaller facilities in the United States. These bases can range in size from less than 10 acres to several hundred thousand acres. Some bases are adjacent to each other, such as Fort Bragg and Pope Air Force Base in North Carolina, and others, while not adjacent, are located within a relatively short distance of each other. Base supporting services vary and can include property maintenance, logistics, transportation and equipment maintenance, personnel and professional support, and services to individuals, such as food, housing, recreation, or education. Appendix I provides a more detailed list of common base support functions. Our analysis of the services’ operations and maintenance (O&M) budgets indicates that a significant portion of these budgets is spent on maintaining facilities and delivering services to installations. DOD has long been concerned about the cost of military base support and has sought ways to reduce it, and DOD believes that greater economies and savings could be achieved by consolidating and eliminating duplicate support services for military bases located close to one another, or where similar functions are performed at multiple locations. Over the years, DOD’s concerns have led to some large consolidation efforts, such as in the areas of logistics and commissary services, as well as more recent consolidations involving printing and finance. DOD has also supported efforts to foster greater cooperation and interservicing among the services on regional and local levels. However, two of the most notable interservicing type efforts initiated in the 1970s and 1980s were not successful, for reasons that appeared to have more to do with how they were implemented than with the merits of the concept. 
They involved consolidating real property maintenance and contracting activities at Air Force and Army bases in the San Antonio, Texas, area and consolidating management of housing for each of the services in Oahu, Hawaii. (See app. II for additional information regarding these two consolidation efforts and circumstances contributing to their lack of success.) Meanwhile, on an installation and regional basis, the services have continued varying, more limited efforts to develop interservice support agreements where possible. Downsizing and reduced defense budgets in recent years are now causing the services to take a renewed interest in trying to achieve greater economies, efficiencies, and cost savings in base operations. This includes efforts to more vigorously examine the potential for greater inter- and intraservicing involving base support, as well as partnership arrangements between military bases and local governments and communities. At the same time, DOD is advocating greater reliance on outsourcing (contracting out) base support functions. Numerous studies completed by DOD and the services have supported the potential to save money in personnel, facilities, and operating costs by consolidating various base support functions through interservicing. However, the amount of these savings is not clear because some consolidations on which projected savings were based were either not implemented, not implemented as planned, or terminated. These include the San Antonio Real Property Maintenance Agency and Contracting Center and the Oahu, Hawaii, housing programs. Both programs, operational for several years, were disestablished after encountering various problems and concerns on the part of affected military commanders about their effectiveness. 
DOD and the services have found it manpower-intensive and often difficult to track savings from interservicing agreements and difficult to differentiate savings from cost avoidances; consequently, DOD does not devote significant efforts to tracking savings from projects that are implemented. However, DOD officials provided us with some ad hoc examples of multimillion-dollar savings spread over varying periods of years involving such support functions as contracting, printing, training, recycling, teleconferencing, personnel services, and others. For example, service officials in Charleston, South Carolina, reported a 1-year cost avoidance of over $1 million in travel and per diem costs through shared use of video teleconferencing capabilities. In another example, service officials in Colorado Springs, Colorado, reported that a consolidated regional natural gas contract resulted in cost savings of $9.5 million over a 3-year period. In addition, potential savings today are clouded because the services are increasingly looking for ways to consolidate and streamline operations because of budget reductions. Service officials stated that they were reluctant to identify further savings as part of new studies, fearing additional reductions would be taken on top of the cuts that had already been made. In 1972, DOD established the Defense Regional Interservice Support program as its principal program to help identify and eliminate duplicative base support services for activities in close proximity to each other. The regulation governing the program, DOD Instruction 4000.19, required DOD activities to first consider using other DOD and federal activity capabilities unless using a commercial source or developing an in-house capability constituted a better value. DOD reinforced its emphasis on interservicing and support consolidations in 1978 by establishing Joint Interservice Resource Study Groups (JIRSG). 
These regional groups were expected to evaluate the feasibility of savings through support service consolidations in geographic areas where there were several relatively large military installations within a 50-mile radius. DOD and service officials told us that between 1978 and 1992, JIRSGs conducted a variety of studies that identified potential savings and efficiencies through interservicing. However, we were told that many of these studies were ignored because no one, including local base commanders, really wanted to implement them. In April 1992, DOD revised the JIRSG program so that its focus shifted from conducting regional studies to providing interservice support. The JIRSG program ceased being mandatory and was no longer required to review defined support service categories as before. As a result of these changes, JIRSGs are now tasked with facilitating communications among DOD and other federal activities, sharing innovative ideas, and seeking opportunities for improving mission quality, efficiency, and effectiveness through the use of support agreements and other cooperative efforts. Although participation in the JIRSG program remains voluntary, OSD officials continue to emphasize the program through such means as conducting national workshops for JIRSG representatives and disseminating JIRSG newsletters, including information on successful agreements and partnerships. As of November 1995, there were 55 JIRSG regions throughout the United States, Europe, the Pacific, and Panama. We contacted JIRSG officials from 21 of these regions and found that about 29 percent of these regions had been inactive for the past several years, and many program offices did not have personnel in key positions. We were told that the existence, effectiveness, and success of a JIRSG program were often dependent on the interest of both the local commander and the JIRSG program manager, if there was one. 
Further, we were told that command interest in the programs ebbed and flowed with changes in commanders and their differing perspectives on the desirability of the program. Despite fluctuations in program emphasis, we found a range of interservicing agreements in place at the seven bases we visited. They included agreements pertaining to such support areas as morale, welfare, and recreation activities; laundry services; and utilities. Most could be characterized as limited arrangements, pertaining to portions of functions, such as a consolidated contract, rather than large-scale reliance of one military base on another for support, such as for overall contract administration. While many agreements were in place, OSD and service officials stated that many interservicing opportunities remain. We saw this at various collocated bases we visited. For example, both Fort Lewis and McChord Air Force Base maintained separate airfield operations facilities, and airfield operations was an area cited by Fort Lewis personnel as having the potential for one facility to serve both bases. Likewise, both Fort Bragg and Pope Air Force Base maintained separate contract administration, supply and engineering, and other support areas that an Army official suggested one service could provide to both bases. Appendix III provides a more detailed list of base support functions being performed at bases we visited where base personnel cited at least portions of those functions as having the potential for consolidation and interservicing. Defense downsizing and resource constraints in recent years have reinforced the need to look for greater efficiencies and savings in base support operations. A major initiative now being spearheaded by OSD involves examining the potential for contracting out base support services. 
Also under way, under service auspices and initiated prior to OSD's current emphasis on contracting out, are a variety of initiatives ranging from a greater emphasis on interservicing, to broad regionalization of selected support functions, to privatization of some functions, to contracting out. These service initiatives extend beyond traditional interservicing with other DOD and federal agencies to include forming partnerships with local governments and communities. The relationship of these efforts to OSD's current contracting out initiative raises questions as to whether DOD's strategy and approaches to reducing costs in these areas are likely to achieve the maximum possible savings. Although DOD has historically placed some emphasis on contracting out, that emphasis today is greater than ever before due to the administration's Reinventing Government Initiative, otherwise known as the National Performance Review, and to the recommendations of two recent DOD study groups: the May 1995 Roles and Missions study and the October 1995 Defense Science Board study on Quality of Life. Further, a provision in the fiscal year 1996 Defense authorization legislation encourages DOD to look to the private sector to meet its support needs. The 1993 report of the National Performance Review noted that every federal agency needs support services. The report advocated greater consideration of options in obtaining those services and said that no agency should provide support services in-house unless those services could compete with those of other agencies and private companies. That report has resulted in a greater emphasis being given to contracting out and privatizing support services. The 1995 report of the Chairman of the Joint Chiefs of Staff dealing with Roles and Missions recommended that essentially all commercial activities in DOD be outsourced and that all new needs be channeled to the private sector from the beginning. 
That recommendation followed the study group's review of the full spectrum of central support activities, including installations and facilities. Activities not dependent on specialized, defense-unique equipment, such as base security, facilities maintenance, and installation management services, were designated as prime candidates for early outsourcing. According to the Roles and Missions report, most of these nonspecialized, non-defense-unique services have little direct association with combat forces and can be moved to private-sector markets where competition ensures adequate cost control. The Roles and Missions report also stated that the many routine, nonmilitary infrastructure functions associated with managing a military base were better left to the private sector to manage. The 1995 Defense Science Board Task Force on the Quality of Life was tasked with examining quality-of-life issues as they apply to all military personnel, their families, and civilian employees and with recommending improvements that could be implemented quickly. The task force addressed housing, personnel tempo, and community and family services. In the area of housing, the task force recommended that DOD achieve an effective housing delivery system over a 3-year period by (1) using private venture capital initiatives to construct new and revitalize existing housing; (2) reviewing and revising housing policies, laws, standards, criteria, and regulations and finding ways to improve ineffective and inefficient funding practices; and (3) creating a nonprofit government corporation that could act as an umbrella organization with the actual maintenance and operations being executed through local private industry contracts. DOD is currently examining how it can implement these recommendations and is working with the services to identify obstacles to their implementation. Section 357 of the National Defense Authorization Act for Fiscal Year 1996 (P.L. 
104-106) encourages reliance on private-sector sources for commercial products or services. It requires that not later than April 15, 1996, “. . . the Secretary shall transmit to the congressional defense committees a report on opportunities for increased use of private-sector sources to provide commercial products and services to the Department.” In August 1995, DOD established a working group to determine which military and civilian positions associated with base support operations should be studied by each service for possible outsourcing. The services are studying the potential to outsource work related to 60,000 full-time equivalent positions, most involving DOD civilian personnel. The services exempted another 323,000 positions from outsourcing consideration at this time because they were considered to be directly supporting the services' warfighting missions. However, one working group official told us that probably half of these exempt positions could be studied for outsourcing. Even so, this official acknowledged that with the 60,000 positions selected for initial study, the military services have more than enough work to review at the present time. According to this official, no completion date for this effort has been determined. The individual service initiatives that are outside the current OSD outsourcing initiative are described below. Among the military services, the Army appears to have been the most aggressive in pursuing interservicing, partnering, and other efforts to reduce base operating costs. In fiscal year 1994, the Army created a departmental-level installation management office to provide central oversight of installation support operations. The Army's installation management office serves as a focal point for the many initiatives occurring within the Army and also writes policy and integrates doctrine pertaining to the planning, programming, execution, and operation of Army installations. 
The installation management office is encouraging Army commands to undertake a broad range of initiatives that work toward operating more efficiently. For example, one initiative suggests that installations should become less self-sufficient by encouraging more regionalizing, consolidating, and contracting out of base support services and facilities. The Army's major commands that operate bases in the United States have the lead in examining options for achieving greater efficiencies in base operations. For example, Forces Command (FORSCOM) has been examining a number of initiatives in the base operations area. One of these initiatives is known as Installation XXI. Under this initiative, FORSCOM has tasked its three corps garrison commanders at I Corps, III Corps, and XVIII Airborne Corps, and the U.S. Army Reserve Command (USARC), with exploring options for more efficient base operations in the future. The commander of I Corps was tasked with reviewing the possibility of multiservice base operations; the commander of III Corps was tasked with exploring development of “centers of excellence” for various base functions so that one base would become expert in and assume responsibility for certain functions, such as contract management, for multiple bases; the commander of XVIII Airborne Corps was tasked with examining community partnerships; and the commander of USARC was tasked with examining options for reserve component support apart from reliance on active-duty bases. FORSCOM's goal is to test and evaluate these various initiatives and implement them beginning October 1, 1996. However, some initiatives are being tested and implemented at the same time throughout FORSCOM. For example, an initiative to test the effect of consolidating warehouses at one location was found to be successful and is now being expanded. Another effort being implemented involves having a regional contract administration office for contracts over $500,000. 
In this and other similar situations, we found service officials reluctant to identify specific cost savings from these projects. They indicated that many of these efforts were necessitated by previous budget reductions, and they were concerned that if savings were identified, their budgets would be further reduced. We recognize this as a real concern to the extent budget reductions have been made in anticipation of future savings that are not achieved—a concern that was recently publicly acknowledged by the Secretary of Defense regarding some previous Defense Management Review studies within DOD. However, our work indicates that DOD has not been effective in tracking savings for initiatives such as the Defense Management Reviews. Consequently, case-by-case analyses would be needed to determine the validity of these concerns. The following statement reflects the Air Force's general position on base support: “Air Force philosophy has always been that our Commanders must have the tools both to accomplish their mission and take care of their people. Every time in the past that we have deviated from this principle, especially in our rush to find efficiencies in base support operations, the results have been less than satisfactory. That said, if cost savings or service improvements can be realized without infringing on these two basic Command responsibilities, then these opportunities should be explored.” As of February 1996, FORSCOM officials told us that Fort Lewis and McChord officials had not reached a consensus on support issues and that discussions had been discontinued at the installation level. In another case, we were told by Army officials that further discussions beyond identifying potential base support operations at Fort Dix, McGuire Air Force Base, and Lakehurst Naval Air Station had not occurred. An Air Force official at Pope Air Force Base told us that the base is examining cooperative support efforts with Fort Bragg in the areas of recycling, medical training, and parachute rigging. 
FORSCOM officials told us that interservicing will be explored more broadly under another Army-wide initiative being developed by FORSCOM. The Navy, with support from the Chief of Naval Operations, is also emphasizing the need to reduce support costs. In our review of Navy activities, we found that the Navy is currently emphasizing regionalization and consolidation of support functions involving its own facilities more than interservicing. Officials stated this is because their installations, for the most part, are not located close to other services' installations. However, Navy officials told us that in places where Navy activities are located close to other service installations, they will cooperate with the other services where it makes sense. The Navy's regionalization efforts are being conducted by its headquarters-level shore installation management office, with the support of the Chief of Naval Operations. The Navy, like the Army, created this office in fiscal year 1994 to oversee the operations of its installations. This office is conducting two pilot studies to reduce Navy-wide infrastructure by regionalizing base support functions under a single-commander concept in place of the multiple commanders that now exist. These two pilot studies are being conducted in Jacksonville, Florida, and San Diego, California. The Jacksonville study began in September 1995, and the results are expected to be reported to the Chief of Naval Operations in April 1996. The San Diego study began in February 1996, and preliminary results are expected by mid-May 1996. The Navy estimates that $30 million a year could be saved through regionalization of support functions of Navy bases in the Jacksonville area. 
Preliminary study results from Jacksonville suggest the potential for partial to full regionalization involving security, fire prevention, fuel services, procurement, supply and data processing, resource management, education, personnel services, environmental management, and meteorology functions. Navy officials told us that because the Jacksonville effort is the first to be completed, its results and lessons learned will benefit the Navy's San Diego effort, as well as additional efforts the Navy plans to pursue at Pearl Harbor, Hawaii; Puget Sound, Washington; and Norfolk, Virginia. As previously mentioned, we found, during our visits to selected bases, that interservicing arrangements did exist between the Air Force and the other services. At the same time, we found that, at the headquarters and major command levels, the Air Force placed less emphasis than the other services on regionalization or interservicing of base support functions. However, we did identify some recent efforts at the Air Force headquarters level that could strengthen program emphasis in the future. For example, in December 1995, all Air Force major commands were asked to gather information regarding the level of savings that had been achieved through interservicing over the past 2 years; this information is expected to be accumulated by April 1996. A headquarters official expressed hope that such information could be used as a catalyst to expand interservicing efforts. Also, the Air Force has a computerized support agreement system that previously was used to create interservice agreements but has recently been upgraded to provide more of a management information capability. This system is being made available to the other military services. 
While interservicing of some common base support functions has occurred, our discussions with DOD and service officials at all levels pointed to a variety of problems and impediments that they believed historically have limited base support consolidation and interservicing efforts and can serve as impediments to such efforts today. These views cover a wide range of issues, each requiring individual analysis to confirm its validity. Such an analysis was beyond the scope of this review. However, where we have prior work relating to an issue, it is presented along with the views of DOD and service officials. Many service officials questioned the effectiveness of large-scale DOD consolidation efforts in recent years in such areas as finance and accounting and printing. Many personnel voiced concern that these functions, after consolidation, appeared to be less responsive, less timely, and perhaps more costly than when each of the services was separately responsible for them. These views, regardless of their validity, affect consideration of related initiatives. Also, many personnel were familiar with the failed San Antonio real property maintenance and contracting programs and the consolidated housing program in Hawaii and saw these as additional reasons for caution and skepticism. Another broad concern that frequently surfaced in our discussions involving base support functions was resistance to change and commanders' concerns about losing direct control over their support assets and being unable to influence servicing priorities that they deemed important to supporting their military missions. We were told that if commanders perceive a problem, they want to have direct control over the activity rather than have to go through another service or activity. 
Having one service provide large-scale base support to another service also raised concern about the receiving base losing its identity and appearing to be subsumed by the base providing support. We believe this suggests the need for stronger OSD leadership to overcome such concerns where they relate to parochial interests rather than valid mission concerns. Differences in traditions, cultures, practices, and standards among the services also were often cited as inhibiting greater emphasis on interservicing arrangements. For example, various Air Force personnel pointed out that their base support personnel are organizationally aligned with an installation's combat forces and are considered mission deployable. On the other hand, Army base support personnel are typically not organizationally aligned with their combat forces and are not expected to deploy with them. Also, the Air Force, in contrast with the Army, depends more on military than civilian personnel in meeting its base support requirements. Another example involves differences in the services' accounting systems, including a lack of standards in unit costing, which can make it difficult to reach agreement on costing of services. While these do represent real differences among the services, we do not believe that they are insurmountable barriers to increased cooperation and interservicing. Base housing was often cited as an area having the potential for interservicing. However, within the services, we found widely held views about differences among the services in the quality of on-base housing provided to service personnel, with the Air Force being known for providing a higher standard of housing than the other services. More generally, the perception often existed that the Air Force had a higher quality-of-life standard and was willing or able to devote more resources to maintaining that standard than the other services. 
These differences were seen as having significant implications for interservicing arrangements and were factors in the failed Hawaii housing consolidation effort. While there are numerous examples of one service's housing being used by another, there are other examples of one service not wanting to use another service's housing because of its condition. Further, for one service to be fully dependent on another service for housing in a given area could raise the specter of one service having to devote more money to housing maintenance than it otherwise would or another service perceiving itself as having to settle for a lesser standard of housing than it would otherwise expect. Some service officials suggested that overcoming these impediments may require OSD operational control and funding. As already indicated, DOD is currently examining alternatives for providing military housing. Resource constraints in today's downsizing environment were also cited as making commanders reluctant to pursue interservicing arrangements, particularly where they would be assuming additional responsibilities to provide services to another activity or service. Growing budget constraints were seen as complicating efforts to reduce the backlog of real property maintenance in the base operations area and also adversely affecting the potential for interservicing. Many service officials believed that deep reductions in their funding and authorized personnel, reductions that they perceive as being greater than reductions in their workload requirements, have already constrained their ability to do existing work. Reduced funding and personnel levels make it even more difficult to assume additional work, knowing that additional personnel resources likely would not be forthcoming. 
Some service officials believe that, as an inducement to pursue greater interservicing, commanders need financial incentives allowing them to retain some portion of achieved savings to apply to other areas with unmet requirements. A number of service officials said that the relatively short tours of duty of base commanders limit institutional knowledge and often result in their focusing on short-term projects and not major changes in base operations involving long-term planning and implementation. We were also told that differences in philosophy from one commander to another can sometimes lead to a reversal of previously initiated interservicing efforts. Some service officials suggested that these impediments could be overcome through greater reliance on civilian management of base operations, through basing a portion of an installation commander's proficiency assessment on his or her efforts to foster greater efficiencies in base operations, or both. Our general management review work has shown that continuity of management is a key factor in ensuring the ultimate success of major initiatives in other federal agencies. Finally, interservicing agreements reached in advance of outsourcing could enhance the potential for greater efficiencies and cost savings; however, a proposed change in the Office of Management and Budget's (OMB) guidance for contracting out could reduce the potential for interservicing. At the same time, some service officials stated that with outsourcing and privatization appearing to be such high priorities within DOD, the current efforts to economize base operations through inter- and intraservicing efforts may receive less emphasis. At the time we completed our review, OMB was considering a change to its Circular A-76 policy guidance supplement on contracting out. That change would require that agencies not “. . . 
retain, create or expand capacity for the purpose of providing new or expanded levels of interservice support services, unless justified by the cost comparison requirements of this Supplement.” Some DOD officials were concerned that the change could serve as a significant disincentive for base commanders and smaller activities, which may be unwilling or lack the capability to conduct the private-sector cost studies that would be required as a prerequisite to interservicing type arrangements. Such cost comparisons previously were not required as a prerequisite to interservicing. Given the potential for significant savings in base support costs through interservicing type arrangements, we recommend that the Secretary of Defense (1) identify options and take steps to minimize the impediments to interservicing and (2) emphasize interservicing as part of contracting out deliberations to maximize potential savings and efficiencies. DOD concurred with our report and its recommendations. In written comments to our draft report, DOD stated that it had prevailed with a request to OMB to remove from the draft Circular A-76 supplement a requirement to conduct A-76 cost comparisons prior to initiating interservice support agreements. DOD also said that it was implementing a policy directive to encourage first looking to interservice support for needed base operations, unless a better value is available from commercial sources. DOD further indicated that it would take other steps to minimize impediments to interservicing. However, DOD expressed concern that our report did not adequately recognize Air Force and Defense Logistics Agency efforts to achieve major savings through interservice support. It cited a couple of initiatives recently undertaken by the Air Force and a variety of interservicing agreements administered by the Defense Logistics Agency. 
Although our review focused primarily on the Army, the Navy, and the Air Force, we recognize that Defense Logistics Agency activities are active participants in interservicing. We also recognize that the efforts cited on behalf of the Air Force were recently undertaken. Those recent actions notwithstanding, we believe our report adequately captures the extent of Air Force activities regarding interservicing relative to the other services. Our scope and methodology are discussed in appendix IV. See appendix V for the complete text of DOD's comments. Unless you announce its contents earlier, we plan no further distribution of this report until 15 days after its issue date. At that time, we will send copies to the Chairmen of the Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on National Security; and the Subcommittee on National Security, House Committee on Appropriations; the Director, Office of Management and Budget; and the Secretaries of Defense, the Air Force, the Army, and the Navy. Please contact me at (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report were Barry W. Holman, Assistant Director; Kevin B. Perkins, Evaluator-in-Charge; and Robert R. Poetta, Evaluator. Two of the most notable interservicing type efforts initiated in the 1970s and 1980s proved unsuccessful. They involved consolidated management of real property maintenance and contracting activities in the San Antonio, Texas, area, and consolidated family housing for military personnel in Oahu, Hawaii. In the mid-to-late 1970s, Air Force and Army installation real property maintenance and contracting services in the San Antonio, Texas, area were consolidated, creating the San Antonio Real Property Maintenance Agency (SARPMA) and the San Antonio Contracting Center (SACC). 
Both efforts, to be managed by the Air Force, were expected to save $2.2 million annually in personnel, supplies, and equipment, or $24 million over the 11-year life of the program. The Department of Defense (DOD) agreed to disestablish both efforts in 1989 at the Air Force's request. By fall 1989, both efforts had ceased operating and their functions were returned to the control of individual base commanders. In a 1989 report, we stated that DOD approved the request to dissolve the consolidation based on studies performed by it and the Air Force that cited installation commanders' concern over lack of command and control of their engineering support functions. In its justification, the Air Force cited a September 1986 DOD directive that gave installation commanders broad authority to decide how to accomplish their engineering functions and made them accountable for those resources; the Air Force stated that mandating SARPMA was at variance with this authority. One Air Force study questioned SARPMA's customer responsiveness and productivity, yet concluded that it provided services at about the same level as before the consolidation. However, it also noted that customers resented the loss of direct control of the civil engineering work, resulting in a negative perception of SARPMA's performance. In retrospect, various service officials have suggested that this had been a situation in which DOD had pushed the services toward a consolidation that the services had not really bought into. A December 1990 Defense Management Report Decision concluded that comparisons of SARPMA savings were not possible due to the dramatic differences in program funding, environmental issues, hiring freezes, and other factors that affected DOD during the period the consolidation existed. Also, the original concepts of organization, supply, personnel, procurement support, automated data processing, and the client base SARPMA was to serve never materialized. 
The report went on to say that, considering the range of fundamental management problems and mistakes, such as severe understaffing, an inadequate computer system, and a failure to promptly reimburse vendors (which caused those vendors to refuse to deal with SARPMA), blaming its failure on consolidation alone was unwarranted. In July 1982, DOD directed the four services to consolidate family housing operations and maintenance on Oahu, Hawaii, by October 1, 1983, under U.S. Army, Pacific. DOD based the decision on a feasibility study performed by a contractor that concluded that a consolidation would reduce personnel costs by about $737,000 annually. However, on September 30, 1994, after operating for about 11 years, the Oahu Consolidated Family Housing Office closed and control of this function was returned to each individual service. We were unable to determine the extent of savings realized from this consolidation. According to DOD officials and the Army Audit Agency, the consolidated family housing program failed because of funding uncertainties and shortfalls, the services' preference for retaining control over their own housing, their reluctance from the outset to participate fully, and various problems associated with the Army's management of the program. Reluctance to participate was illustrated by the fact that the other services continued to maintain their own housing organizations to some extent while the Army was officially responsible for managing the program and paying the bills. The quality of housing on Oahu at the time of the consolidation also affected future operations. Various officials pointed to significant differences in the condition of each service's housing, with Navy housing in the worst condition and requiring the highest maintenance priority. 
Also, several officials cited differences in housing quality standards as a factor impeding the efforts of the consolidated office because customers expected the services provided to meet their own unique criteria. Further, the fact that the most senior military officials on Oahu outranked the most senior Army officer raised questions about the degree of real control the Army could exert in managing the program. A 1992 Army Audit Agency report was critical of DOD for not providing the Army any guidance on how to implement a consolidated operation, a shortcoming that it concluded led to some of the problems encountered throughout the life of the effort. Subsequently, the Army manager of the consolidated housing office at the time the program was terminated told us that a $33-million funding reduction in fiscal year 1994 (from $176 million to $143 million) and the absence of funding for military construction were the primary reasons for dissolving the office. The manager said that these shortfalls prevented his office from making any housing repairs during that time. He also said that although the other services were aware of the funding problems, they were unable to help because budgetary controls precluded any transfer of funds to the Army. Military personnel at the collocated military bases we visited cited a range of base support functions being performed there for which at least one of the services had identified at least portions as having the potential for consolidation and interservicing. To obtain a historical perspective on interservicing, we held discussions with cognizant Office of the Secretary of Defense (OSD), Army, Navy, Air Force, and Defense Logistics Agency officials and obtained and reviewed available reports completed by various audit and DOD agencies dealing with prior consolidation efforts. 
Likewise, we held discussions with OSD and service officials regarding the status of existing interservice efforts and the impediments to such efforts. A discussion was also held with an Office of Management and Budget (OMB) official regarding OMB Circular A-76 in relation to interservicing. We made a limited telephone inquiry to a judgmental sample of Joint Interservice Regional Support Group (JIRSG) regions to gauge the level of ongoing activity regarding interservice support agreements and efforts to foster additional interservicing. We also had discussions with installation officials at seven installations located in close proximity to one another to determine the existing level of interservicing-type arrangements, the potential for additional ones, and any impediments to such efforts. Locations visited included Fort Dix, McGuire Air Force Base, and Lakehurst Naval Air Station in New Jersey; Fort Bragg and Pope Air Force Base in North Carolina; and Fort Lewis and McChord Air Force Base in Washington. Additional discussions were held with Army officials at Headquarters Forces Command, Atlanta, Georgia, and Training and Doctrine Command, Fort Monroe, Virginia. We also contacted officials of the Navy's Commander in Chief, Atlantic and Pacific Fleets, and the Naval Air and Sea Systems Commands, and the Air Force's Air Combat and Air Mobility Commands to discuss efforts underway to foster inter- and intraservicing of base support operations. Additionally, we observed a meeting of the Navy's Fleet Support and Quality Management Board that discussed various base support issues and attended a national JIRSG training workshop. We conducted our review between July 1995 and February 1996 in accordance with generally accepted government auditing standards. 
Pursuant to a congressional request, GAO reviewed: (1) the Department of Defense's (DOD) efforts to promote interservicing, which includes one service's reliance on another for base support and greater reliance on intraservice consolidated support; and (2) opportunities that exist for military bases to save on installation support costs. 
GAO noted that: (1) although DOD is aware of the potential for reducing base support costs through interservicing, it is difficult to determine the amount of the potential savings, since interservicing has never been fully or correctly implemented; (2) the services have not taken sufficient advantage of the potential savings in base support costs from interservicing; (3) the services have been considering a broad array of initiatives, including regionalizing and privatizing some base support functions; (4) consolidating these functions through advanced interservicing agreements could enhance the potential for greater efficiencies and cost savings; (5) many commanders resist interservicing because they fear losing control of their assets and service standards; and (6) an even greater impediment to interservicing on a large scale is the possibility of one base commander having to provide base support for several services.
In our 2015 annual report, we identify 12 new areas in which we found evidence of fragmentation, overlap, or duplication, and we present 20 actions to executive branch agencies and Congress to address these issues. As described in table 1, these areas span a wide range of federal functions or missions. We consider programs or activities to be fragmented when more than one federal agency (or more than one organization within an agency) is involved in the same broad area of national need, which may result in inefficiencies in how the government delivers services. We identified fragmentation in multiple programs we reviewed. For example, in our 2015 annual report, we reported that oversight of consumer product safety involves at least 20 federal agencies, including the Consumer Product Safety Commission (CPSC), resulting in fragmented oversight across agencies. Although agencies reported that the involvement of multiple agencies with various expertise can help ensure more comprehensive oversight by addressing a range of safety concerns, they also noted that fragmentation can result in unclear roles and potential regulatory gaps. Although a number of agencies have a role, no single entity has the expertise or authority to address the full scope of product safety activities. We suggested that Congress consider establishing a formal comprehensive oversight mechanism for consumer product safety agencies to address crosscutting issues as well as inefficiencies related to fragmentation and overlap, such as communication and coordination challenges and jurisdictional questions between agencies. Mechanisms could include, for example, formalizing relationships and agreements among consumer product safety agencies or establishing an interagency work group. 
CPSC, the Department of Homeland Security (DHS), the Department of Housing and Urban Development, and the Department of Commerce's National Institute of Standards and Technology agreed with GAO's matter for congressional consideration, while the remaining agencies neither agreed nor disagreed. Fragmentation can also be a harbinger for overlap or duplication. Overlap occurs when multiple agencies or programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. We found overlap among federal programs or initiatives in a variety of areas, including nonemergency medical transportation (NEMT). Forty-two programs across six different federal departments provide NEMT to individuals who cannot provide their own transportation due to age, disability, or income constraints. For example, NEMT programs at both Medicaid, within the Department of Health and Human Services (HHS), and the Department of Veterans Affairs (VA) have similar goals (to help their respective beneficiaries access medical services), serve potentially similar beneficiaries (those individuals who have disabilities, are low income, or are elderly), and engage in similar activities (providing NEMT transportation directly or indirectly). We found a number of challenges to coordination for these NEMT programs. For example, Medicaid and VA largely do not participate in NEMT coordination activities in the states we visited, in part because both programs are designed to serve their own populations of eligible beneficiaries and the agencies are concerned that without proper controls payments could be made for services to ineligible individuals. However, because Medicaid and VA are important to NEMT, as they provide services to potentially over 90 million individuals, greater interagency cooperation—with appropriate controls and safeguards to prevent improper payments—could enhance services to transportation-disadvantaged individuals and save money. 
An interagency coordinating council was developed to enhance federal, state, and local coordination activities, and it has taken some actions to address human service transportation program coordination. However, the council has not convened since 2008 and has provided only limited leadership. For example, the council has not issued key guidance documents that could promote coordination, including an updated strategic plan. To improve efficiency, we recommended that the Department of Transportation (DOT), which chairs the interagency coordinating council, take steps to enhance coordination among the programs that provide NEMT. In response, DOT agreed that more work is needed to increase coordination activities with all HHS agencies, especially the Centers for Medicare & Medicaid Services (CMS). DOT also said the Federal Transit Administration is asking its technical assistance centers to assist in developing responses to NEMT challenges. In other aspects of our work, we found evidence of duplication, which occurs when two or more agencies or programs are engaged in the same activities or provide the same services to the same beneficiaries. An example of duplicative federal efforts is the US Family Health Plan (USFHP)—a statutorily required component of the Department of Defense's (DOD) Military Health System—and TRICARE Prime, which offers the same benefits to military beneficiaries. The USFHP was initially incorporated into the Military Health System in 1982 when Congress enacted legislation transferring ownership of certain U.S. Public Health Service hospitals to specific health care providers, referred to as designated providers under the program. During the implementation of the TRICARE program in the 1990s, Congress required the designated providers to offer the TRICARE Prime benefit to their enrollees in accordance with the National Defense Authorization Act for Fiscal Year 1997. 
Today, the USFHP remains a health care option required by statute to be available to eligible beneficiaries in certain locations, despite TRICARE’s national presence through the managed care support contractors. However, the USFHP has largely remained unchanged, and its role has not since been reassessed within the Military Health System. DOD contracts with managed care support contractors to administer TRICARE Prime—TRICARE’s managed care option—in three regions in the United States (North, South, and West). Separately, TRICARE Prime is offered through the USFHP by designated providers in certain locations within the same three TRICARE regions that are served by a managed care support contractor. Thus, the USFHP offers military beneficiaries the same TRICARE Prime benefit that is offered by the managed care support contractors across much of the same geographic service areas and through many of the same providers. As a result, DOD has incurred added costs by paying the USFHP designated providers to simultaneously administer the same TRICARE Prime benefit to the same population of eligible beneficiaries in many of the same locations as the managed care support contractors. To eliminate this duplication within DOD’s health system and potentially save millions of dollars, we suggested that Congress terminate the statutorily required USFHP. In addition to areas of fragmentation, overlap, and duplication, our 2015 report identified 46 actions that the executive branch and Congress can take to reduce the cost of government operations and enhance revenue collections for the U.S. Treasury in 12 areas. These opportunities for executive branch or congressional action exist in a wide range of federal government missions (see table 2). 
Examples of opportunities to reduce costs or enhance revenue collections from our 2015 annual report include updating the way Medicare pays certain cancer hospitals, rescinding unobligated funds, and re-examining the appropriate size of the Strategic Petroleum Reserve. Updating the way Medicare pays certain cancer hospitals: To better control Medicare spending and generate cost savings of almost $500 million per year, Congress should consider changing Medicare's cost-based payment methods for certain cancer hospitals. Medicare pays the majority of hospitals using an approach known as the inpatient and outpatient prospective payment systems (PPS). Under a PPS, hospitals are paid a predetermined amount based on the clinical classification of each service they provide to beneficiaries. Beginning in 1983, in response to concern that certain cancer hospitals would experience payment reductions under such a system, Congress required the establishment of criteria under which 11 cancer hospitals are exempted from the inpatient PPS and receive payment adjustments under the outpatient PPS. Since these cancer hospitals were first designated in the early 1980s, cancer care and Medicare's payment system have changed significantly. Advances in techniques and drugs have increased treatment options and allowed for more localized delivery of care. Along with these developments, the primary setting for cancer care has shifted from the inpatient setting to the outpatient setting. In addition, Medicare's current payment system better recognizes the resource intensity of hospital care than the system put in place in 1983. While most hospitals are paid a predetermined amount based on the clinical classification of each service they provide to beneficiaries, Medicare generally pays these 11 cancer hospitals based on their reported costs, providing little incentive for efficiency. 
We found that if beneficiaries who received care at the 11 cancer hospitals had received inpatient and outpatient services at nearby PPS teaching hospitals, Medicare might have realized substantial savings in 2012. Specifically, we estimated inpatient savings of about $166 million; we calculated outpatient savings of about $303 million if forgone payment adjustments were returned to the Medicare Trust Fund. Until Medicare pays these cancer hospitals in a way that encourages greater efficiency, Medicare remains at risk for overspending. Rescinding unobligated funds: Congress may wish to consider permanently rescinding the entire $1.6 billion balance of the U.S. Enrichment Corporation (USEC) Fund, a revolving fund in the U.S. Treasury. As part of a 2001 GAO legal opinion, we determined that the USEC Fund was available for two purposes, both of which have been fulfilled: (1) environmental clean-up expenses associated with the disposition of depleted uranium at two specific facilities and (2) expenses of USEC privatization. Regarding the first authorized purpose, the construction of intended facilities associated with the disposition of depleted uranium has been completed. Regarding the second authorized purpose, USEC privatization was completed in 1998 when ownership of USEC was transferred to private investors. In an April 2014 report to Congress, the Department of Energy’s (DOE) National Nuclear Security Administration stated that the USEC Fund was one of two sources of funding that it was exploring to finance research, development, and demonstration of national nuclear security-related enrichment technologies. However, this is not one of the authorized purposes of the USEC Fund. Transparency in budget materials is important for informing congressional decisions, and DOE’s efforts to utilize USEC Fund monies instead of general fund appropriations diminish that transparency. The House of Representatives included language to permanently rescind the USEC Fund in H.R. 
4923, Energy and Water Development and Related Agencies Appropriations Act, which passed the House on July 10, 2014. However, the rescission was not included in Public Law 113-235, Consolidated and Further Continuing Appropriations Act, 2015. As of March 2015, legislation containing a similar rescission had not been introduced in the 114th Congress. Re-examining the appropriate size of the Strategic Petroleum Reserve: DOE should assess the appropriate size of the Strategic Petroleum Reserve (SPR) to determine whether excess crude oil could be sold to fund other national priorities. The United States holds the SPR so that it can release oil to the market during supply disruptions to protect the U.S. economy from damage. After decades of generally falling U.S. crude oil production, technological advances have contributed to increasing U.S. production. Monthly crude oil production has increased by almost 68 percent from 2008 through April 2014, and increases in production in 2012 and 2013 were the largest annual increases since the beginning of U.S. commercial crude oil production in 1859, according to the Energy Information Administration (EIA). As of September 2014, the reserve had 106 days of imports, which DOE estimated was valued at about $45 billion as of December 2014. In addition, as of September 2014, private industry held reserves of 141 days. As a member of the International Energy Agency, the United States is required to maintain public and private reserves of at least 90 days of net imports and to release these reserves and reduce demand during oil supply disruptions. We found in September 2014 that DOE had taken steps to assess aspects of the SPR but had not recently reexamined its size. Without such a reexamination, DOE cannot be assured that the SPR is holding an appropriate amount of crude oil. 
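As a rough consistency check on the reserve figures just cited, the value of crude oil held beyond the 90-day requirement can be estimated by assuming the reserve's dollar value scales linearly with days of imports (a simplification we introduce; DOE's own estimate would reflect actual barrel counts and prices):

```python
# Back-of-envelope check of the SPR figures cited in the text.
# All inputs are the report's own numbers; linear scaling is our assumption.
DAYS_HELD = 106          # days of net imports in the SPR, September 2014
DAYS_REQUIRED = 90       # International Energy Agency minimum
VALUE_HELD = 45e9        # DOE's estimated SPR value, December 2014 (dollars)

# Value of the oil held beyond the 90-day requirement, assuming the
# reserve's value scales linearly with days of imports held.
excess_days = DAYS_HELD - DAYS_REQUIRED
excess_value = VALUE_HELD * excess_days / DAYS_HELD

print(f"Oil held beyond {DAYS_REQUIRED} days: ~${excess_value / 1e9:.1f} billion")
```

This yields roughly $6.8 billion, broadly consistent with the figure DOE reported.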
If, for example, DOE found that 90 days of imports was an appropriate size for the SPR, it could sell crude oil worth $6.7 billion and use the proceeds to fund other national priorities. In addition, by reducing the SPR to 90 days, DOE may be able to reduce its operating costs by about $25 million per year. DOE concurred with our recommendation, stating that a broad, long-range review of the SPR is needed and that it has initiated a process for conducting a comprehensive re-examination of the appropriate size of the SPR. In addition to the 66 new actions identified for this year's annual report, we have continued to monitor the progress that executive branch agencies or Congress have made in addressing the issues we identified in our 2011-2014 annual reports. The executive branch and Congress have made progress in addressing a number of the approximately 440 actions we previously identified (fig. 1). In total, as of March 6, 2015, the date we completed our audit work, we found that overall 169 (37 percent) were addressed, 179 (39 percent) were partially addressed, and 90 (20 percent) were not addressed. An additional 46 actions have been assessed as addressed over the past year; these include 13 actions identified in 2011, 14 actions identified in 2012, 11 actions identified in 2013, and 8 identified in 2014. Executive branch and congressional efforts from fiscal years 2011 through 2014 have resulted in over $20 billion in realized cost savings to date, with another approximately $80 billion in additional benefits projected to be accrued through 2023. The following examples highlight the progress that has been made over the last 4 years. Combat Uniforms: In our 2013 annual report, we found that DOD's fragmented approach to developing and acquiring combat uniforms could lead to increased risk on the battlefield for military personnel and increased development and acquisition costs. 
In response, DOD developed and issued guidance on joint criteria to help ensure that future service-specific uniforms will provide equivalent levels of performance and protection. In addition, a provision in the National Defense Authorization Act for Fiscal Year 2014 established as policy that the Secretary of Defense shall eliminate the development and fielding of service-specific combat and camouflage utility uniforms in order to adopt and field common uniforms for specific environments to be used by all members of the armed forces. Most recently, the Army chose not to introduce a new family of camouflage uniforms into its inventory, in part because of this legislation, resulting in a cost avoidance of about $4.2 billion over 5 years. Employment and Training: Congress and executive branch agencies have taken actions to help address the proliferation of certain employment programs and improve the delivery of benefits. Specifically, in June 2012, we reported on 45 programs administered by nine federal agencies that supported employment for people with disabilities and found these programs were fragmented and often provided similar services to similar populations. The Workforce Innovation and Opportunity Act, enacted in July 2014, eliminated three programs that supported employment for people with disabilities, including the Veterans’ Workforce Investment Program, administered by the Department of Labor, and the Migrant and Seasonal Farmworker Program and Projects with Industry, administered by the Department of Education. In addition, the Office of Management and Budget (OMB) worked with executive agencies to propose consolidating or eliminating two other programs, although Congress did not take action and both programs continued to receive funding. 
The Workforce Innovation and Opportunity Act also helped to promote efficiencies for some of the 47 employment and training programs that support a broader population (including people with and without disabilities), which we reported on in 2011. In particular, this law requires states to develop a unified state plan that covers all designated core programs in order to receive certain funding. As a result, states' implementation of the requirement may enable them to increase administrative efficiencies in employment and training programs—a key objective of our prior recommendations. In addition, the House Budget Resolution for fiscal year 2016 supports streamlining and consolidating federal job training programs and empowering states with the flexibility to tailor funding and programs to specific needs of their workforce, consistent with our recommendations in this area. Farm Program Payments: We reported in our 2011 annual report that Congress could save up to $5 billion annually by reducing or eliminating direct payments to farmers. These are fixed annual payments based on a farm's history of crop production. Farmers received them regardless of whether they grew crops and even in years of record income. Direct payments were expected to be transitional when first authorized in 1996, but subsequent farm bills continued these payments. Congress passed the Agricultural Act of 2014, which eliminated direct payments to farmers and should save approximately $4.9 billion annually from fiscal year 2015 through fiscal year 2023, according to the Congressional Budget Office. Although Congress and executive branch agencies have made progress toward addressing the actions we have identified, further steps are needed to fully address the remaining actions, as shown in table 3. More specifically, 57 percent of the actions addressed to executive branch agencies and 66 percent of the actions addressed to Congress identified in our 2011-2014 reports remain partially or not addressed. 
As our work has shown, committed leadership is needed to overcome the many barriers to working across agency boundaries, such as agencies' concerns about protecting jurisdiction over missions and control over resources or incompatible procedures, processes, data, and computer systems. Without increased or renewed leadership focus, opportunities will be missed to improve the efficiency and effectiveness of programs and save taxpayers' dollars. In our 2013 annual report, we reported that federal agencies could achieve significant cost savings annually by expanding and improving their use of strategic sourcing—a contracting process that moves away from numerous individual procurement actions to a broader aggregated approach. In particular, DOD, DHS, DOE, and VA accounted for 80 percent of the $537 billion in federal procurement spending in fiscal year 2011, but reported managing about 5 percent, or $25.8 billion, through strategic sourcing efforts. In contrast, leading commercial firms leverage buying power by strategically managing 90 percent of their spending—achieving savings of 10 percent or more of total procurement costs. While strategic sourcing may not be suitable for all procurement spending, we reported that a reduction of 1 percent from procurement spending at these agencies would equate to over $4 billion in savings annually—an opportunity also noted in the House Budget Resolution for fiscal year 2016. However, a lack of clear guidance on metrics for measuring success has hindered the management of ongoing strategic sourcing efforts across the federal government. Since our 2013 report, OMB has made progress by issuing guidance on calculating savings for government-wide strategic sourcing contracts, and in December 2014 it issued a memorandum on category management that, among other things, identifies federal spending categories suitable for strategic sourcing. 
These categories cover some of the government’s largest spending categories, including information technology and professional services. According to OMB, these categories accounted for $277 billion in fiscal year 2013 federal procurements. This level of spending suggests that by using smarter buying practices the government could realize billions of dollars in savings. In addition, the administration has identified expanded use of high-quality, high-value strategic sourcing solutions as one of its cross-agency priority goals, which are a limited set of outcome-oriented, federal priority goals. However, until OMB sets government-wide goals and establishes metrics, the government may miss opportunities for billions in cost savings through strategic sourcing. Our work on defense has highlighted opportunities to improve efficiencies, reduce costs, and address overlapping and potentially duplicative services that result from multiple entities providing the same service, including the following examples. Combatant Command Headquarters Costs: Our body of work has raised questions about whether DOD’s efforts to reduce headquarters overhead will result in meaningful savings. In 2013, the Secretary of Defense directed a 20 percent cut in management headquarters spending throughout DOD, to include the combatant commands and service component commands. In June 2014, we found that mission and headquarters-support costs for the five geographic combatant commands and their service component commands we reviewed more than doubled from fiscal years 2007 through 2012, to about $1.7 billion. We recommended that DOD more systematically evaluate the sizing and resourcing of its combatant commands. If the department applied the 20 percent reduction in management headquarters spending to the entire $1.7 billion DOD used to operate and support the five geographic combatant commands in fiscal year 2012, we reported that DOD could achieve up to an estimated $340 million in annual savings. 
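The headquarters and strategic sourcing savings estimates above are simple proportional calculations; the sketch below reproduces them from the report's own inputs (the helper function and variable names are ours):

```python
def proportional_savings(base_spending, reduction_rate):
    """Estimated annual savings from a flat percentage reduction."""
    return base_spending * reduction_rate

# Combatant command headquarters: the 20 percent management headquarters
# cut applied to the $1.7 billion used to operate and support the five
# geographic combatant commands in fiscal year 2012.
hq_savings = proportional_savings(1.7e9, 0.20)

# Strategic sourcing: a 1 percent reduction in the four agencies' share
# of procurement spending (80 percent of $537 billion in fiscal year 2011).
sourcing_savings = proportional_savings(0.80 * 537e9, 0.01)

print(f"Headquarters savings:       ~${hq_savings / 1e6:.0f} million")
print(f"Strategic sourcing savings: ~${sourcing_savings / 1e9:.1f} billion")
```

These reproduce the $340 million and "over $4 billion" figures cited in the text.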
Electronic Warfare: We reported in 2011 that all four military services in DOD had been separately developing and acquiring new airborne electronic attack systems and that spending on new and updated systems was projected to total more than $17.6 billion during fiscal years 2007-2016. While the department has taken steps to better inform its investments in airborne electronic attack capabilities, it has yet to assess its plans for developing and acquiring two new expendable jamming decoys to determine if these initiatives should be merged. More broadly, we identified multiple weaknesses in the way DOD acquires weapon systems and the actions that are needed to address these issues, which we recently highlighted in our high-risk series update in February 2015. For example, further progress must be made in tackling the incentives that drive the acquisition process and its behaviors, applying best practices, attracting and empowering acquisition personnel, reinforcing desirable principles at the beginning of programs, and improving the budget process to allow better alignment of programs and their risks and needs. The House Budget Resolution for fiscal year 2016 encourages a continued review to improve the affordability of defense acquisitions. Addressing the issues that we have identified could help DOD improve the returns on its $1.4 trillion investment in major weapon systems and find ways to deliver capabilities for less than it has in the past. The federal government annually invests more than $80 billion on information technology (IT). The magnitude of these expenditures highlights the importance of avoiding duplicative investments to better ensure the most efficient use of resources. Opportunities remain to reduce or better manage duplication and the cost of government operations in critical IT areas, many of which require agencies to work together to improve systems, including the following examples. 
Information Technology Investment Portfolio Management: To better manage existing IT systems, in March 2012 OMB launched the PortfolioStat initiative. PortfolioStat requires agencies to conduct an annual, agency-wide review of their IT portfolios to reduce commodity IT spending and demonstrate how their IT investments align with their missions and business functions, among other things. In 2014, we found that while the 26 federal agencies required to participate in PortfolioStat had made progress in implementing OMB’s initiative, weaknesses existed in agencies’ implementation of the initiative, such as limitations in the Chief Information Officer’s authority. In the President’s Fiscal Year 2016 Budget submission, the administration proposes to use PortfolioStat to drive efficiencies in agencies’ IT programs. As noted in our recent high-risk series update, we have made more than 60 recommendations to improve OMB and agencies’ implementation of PortfolioStat and provide greater assurance that agencies will realize the nearly $6 billion in savings they estimated they would achieve through fiscal year 2015. Federal Data Centers: In September 2014, we found that consolidating federal data centers would provide an opportunity to improve government efficiency and achieve cost savings and avoidances of about $5.3 billion by fiscal year 2017. Although OMB has taken steps to identify data center consolidation opportunities across agencies, weaknesses exist in the execution and oversight of the consolidation efforts. Specifically, we reported many agencies are not fully reporting their planned savings to OMB as required; GAO estimates that the savings have been underreported to OMB by approximately $2.2 billion. It will continue to be important for agencies to complete their inventories and implement their plans for consolidation to better ensure continued progress toward OMB’s planned consolidation, optimization, and cost-savings goals. 
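The data center figures above imply how much agencies actually reported to OMB. The subtraction below is a sketch under the assumption that the $2.2 billion in underreported savings is part of the ~$5.3 billion total; only the two input figures come from the report.

```python
# Illustrative arithmetic for the data center consolidation figures cited above.
total_savings_estimate = 5.3e9   # estimated savings and avoidances by FY 2017
underreported = 2.2e9            # GAO's estimate of savings underreported to OMB

# Assumption (not stated in the report): the underreported amount is part of
# the total estimate, so the difference approximates what agencies reported.
implied_reported = total_savings_estimate - underreported
print(f"Implied savings reported to OMB: ${implied_reported / 1e9:.1f} billion")
```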
Information Technology Operations and Maintenance: Twenty-seven federal agencies plan to spend about $58 billion—almost three-quarters of the overall $79 billion budgeted for federal IT in fiscal year 2015—on the operations and maintenance of legacy investments. Given the magnitude of these investments, it is important that agencies effectively manage them to better ensure the investments (1) continue to meet agency needs, (2) deliver value, and (3) do not unnecessarily duplicate or overlap with other investments. Accordingly, OMB developed guidance that calls for agencies to analyze (via operational analysis) whether such investments are continuing to meet business and customer needs and are contributing to meeting the agency's strategic goals. In our 2013 annual report, we reported that agencies did not conduct such an analysis on 52 of the 75 major existing information technology investments we reviewed. As a result, there was increased potential for these information technology investments in operations and maintenance—totaling $37 billion in fiscal year 2011—to result in waste and duplication. To avoid wasteful or duplicative investments in operations and maintenance, we recommended that agencies analyze all information technology investments annually and report the results of their analyses to OMB. Agencies have made progress in performing some operational analyses; however, until the agencies fully implement their policies and ensure complete and thorough operational analyses are being performed on their multibillion-dollar operational investments, there is increased risk that these agencies will not know whether these investments fully meet their intended objectives, therefore increasing the potential for waste and duplication.
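The "almost three-quarters" characterization above can be verified directly from the two budget figures cited; the check below is purely illustrative, using only numbers from the text.

```python
# Consistency check of the operations-and-maintenance figures cited above:
# $58 billion of the $79 billion FY 2015 federal IT budget.
om_spending = 58e9       # planned operations-and-maintenance spending
total_it_budget = 79e9   # overall federal IT budget, fiscal year 2015

om_share = om_spending / total_it_budget
print(f"O&M share of the IT budget: {om_share:.1%}")
```

The share works out to roughly 73 percent, consistent with "almost three-quarters."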
Geospatial Investments: In a 2013 report, we found that 31 federal departments and agencies invested billions of dollars to collect, maintain, and use geospatial information—information linked to specific geographic locations that supports many government functions, such as maintaining roads and responding to natural disasters. We found that federal agencies had not effectively implemented policies and procedures that would help them identify and coordinate geospatial data acquisitions across the government, resulting in duplicative investments. In a 2015 report, we reported that federal agencies had made progress in implementing geospatial data-related policies and procedures. However, critical items remained incomplete, such as coordinating activities with state governments, which also use a variety of geospatial datasets—including address data and aerial imagery—to support their missions. We found that a new initiative to create a national address database could potentially result in significant savings for federal, state, and local governments. To foster progress in developing such a national database, we suggested that Congress consider assessing existing statutory limitations on address data. We also recommended that the interagency coordinating body for geospatial information (1) establish subcommittees and working groups to assist in furthering a national address database and (2) identify discrete steps to further a national imagery program benefitting governments at all levels. Finally, we recommended that the Director of OMB require agencies to report on their efforts to implement policies and procedures before making new investments in geospatial data. OMB generally agreed with this recommendation. In addition, in March 2015, the Geospatial Data Act of 2015 was introduced and includes provisions to improve oversight and help reduce duplication in the management of geospatial data, consistent with our recommended actions. 
Fully addressing the actions in our two reports could help reduce duplicative investments and the risk of missing opportunities to jointly acquire data, potentially saving millions of dollars. The federal IT acquisition reforms enacted in December 2014 reinforced a number of the actions that we have recommended to address IT management issues. The law established that the Chief Information Officer in each agency has a significant role in the decision processes for planning, programming, management, governance, and oversight related to information technology, as well as approval for IT budget requests. In addition, the law codifies both the federal data center consolidation initiative, emphasizing annual reporting on cost savings and detailed metrics, and OMB's PortfolioStat process, focusing on reducing duplication, consolidation, and cost savings. If effectively implemented, this legislation should improve the transparency and management of IT acquisitions and operations across the government. Over the years, we have identified a number of actions that have the potential for sizable cost savings through improved fiscal oversight in the Medicare and Medicaid programs. For example, CMS could save billions of dollars by improving the accuracy of its payments to Medicare Advantage programs, such as through methodology adjustments to account for diagnostic coding differences between Medicare Advantage and traditional Medicare. In addition, we found that federal spending on Medicaid demonstrations could be reduced by billions of dollars if HHS were required to improve the process for reviewing, approving, and making transparent the basis for spending limits approved for Medicaid demonstrations. In particular, our work between 2002 and 2014 has shown that HHS approved several demonstrations without ensuring that they would be budget neutral to the federal government.
To address this issue, we suggested that Congress could require the Secretary of Health and Human Services to improve the Medicaid demonstration review process, through steps such as improving the review criteria, better ensuring that valid methods are used to demonstrate budget neutrality, and documenting and making clear the basis for the approved limits. We concluded in August 2014 that HHS's approval of $778 million of hypothetical costs (i.e., expenditures the state could have made but did not) in the Arkansas demonstration spending limit and the department's waiver of its cost-effectiveness requirement are further evidence of our long-standing concerns that HHS is approving demonstrations that may not be budget-neutral. HHS's approval of the Arkansas demonstration suggests that the Secretary may continue to approve section 1115 Medicaid demonstrations that raise federal costs, inconsistent with the department's policy of budget neutrality. We maintain that enhancing the process HHS uses to demonstrate budget neutrality of its demonstrations could save billions in federal expenditures. In our February 2015 high-risk series update, we reported that while CMS had taken positive steps to improve Medicare and Medicaid oversight in recent years, it still had issues and recommendations to address in several areas, and improper payment rates have remained unacceptably high. We reported that to achieve and demonstrate reductions in the estimated $60 billion in Medicare improper payments in 2014, CMS should fully exercise its authority related to strengthening its provider and supplier enrollment provisions and address our open recommendations related to prepayment and postpayment claims review activities.
Similarly, in the area of Medicaid, for which the federal share of estimated improper payments was $17.5 billion in 2014, we have made recommendations targeted at (1) improving the completeness and reliability of key data needed for ensuring effective oversight, (2) implementing effective program integrity processes for managed care, (3) ensuring clear reporting of overpayment recoveries, and (4) refocusing efforts on program integrity approaches that are cost-effective. These recommendations, if effectively implemented, could improve program management, help reduce improper payments in these programs, and achieve cost savings. Over the last 4 years, our work identified multiple opportunities for the government to increase revenue collections. For example, in 2014, we identified three actions that Congress could authorize that could increase tax revenue collections from delinquent taxpayers by hundreds of millions of dollars over a 5-year period: limiting issuance of passports to applicants, levying payments to Medicaid providers, and identifying security clearance applicants. For example, Congress could consider requiring the Secretary of State to prevent individuals who owe federal taxes from receiving passports. We found that in fiscal year 2008, passports were issued to about 16 million individuals; about 1 percent of these collectively owed more than $5.8 billion in unpaid federal taxes as of September 30, 2008. According to a 2012 Congressional Budget Office estimate, the federal government could save about $500 million over a 5-year period by revoking or denying passports to those with certain federal tax delinquencies. We have also identified opportunities to implement program benefit offsets, in which certain program benefits for individuals are reduced in recognition of other benefits received.
Examples include the following: Social Security Offsets: In our 2011 annual report, we reported that the Social Security Administration (SSA) needs data from state and local governments on retirees who receive pensions from employment not covered under Social Security to better enforce offsets and ensure benefit fairness. In particular, SSA needs this information to fairly and accurately apply the Government Pension Offset, which generally applies to spouse and survivor benefits, and the Windfall Elimination Provision, which applies to retired worker benefits. The Government Pension Offset and Windfall Elimination Provision take noncovered employment into account when calculating Social Security benefits. While information on receipt of pensions from noncovered employment is available for federal pension benefits from the federal Office of Personnel Management, it is not available to SSA for many state and local pension benefits. The President's Fiscal Year 2016 Budget submission re-proposed legislation that would require state and local governments to provide information on their noncovered pension payments to SSA so that the agency can apply the Government Pension Offset and Windfall Elimination Provision. The proposal includes funds for administrative expenses, with a portion available to states to develop a mechanism to provide this information. Also, we continue to suggest that Congress consider giving the Internal Revenue Service the authority to collect the information that SSA needs to administer these offsets. Providing information on the receipt of state and local noncovered pension benefits to SSA could help the agency more accurately and fairly administer the Government Pension Offset and Windfall Elimination Provision and could result in an estimated $2.4 billion to $6.5 billion in savings over 10 years if enforced both retrospectively and prospectively.
If Social Security enforced the offsets only prospectively, the overall savings still would be significant. Disability and Unemployment Benefits: In our 2014 annual report, we found that 117,000 individuals received concurrent cash benefit payments in fiscal year 2010 from the Disability Insurance and Unemployment Insurance programs totaling more than $850 million because current law does not preclude the receipt of overlapping benefits. Individuals may be eligible for benefit payments from both Disability Insurance and Unemployment Insurance due to differences in the eligibility requirements; however, in such cases, the federal government is replacing a portion of lost earnings not once, but twice. The President's Fiscal Year 2016 Budget submission proposes to eliminate these overlapping benefits, and during the 113th Congress, bills had been introduced in both the U.S. House of Representatives and the Senate containing language to reduce Disability Insurance payments to individuals for the months they collect Unemployment Insurance benefits. According to CBO, this action could save $1.2 billion over 10 years in the Social Security Disability Insurance program. Congress should consider passing legislation to offset Disability Insurance benefit payments for any Unemployment Insurance benefit payments received in the same period. Table 4 highlights some of our suggested actions within these and other areas that could result in tens of billions of dollars in cost-savings or revenue-enhancement opportunities, according to estimates from GAO, executive branch agencies, the Congressional Budget Office, or the Joint Committee on Taxation. (For GAO's most recent work on GPRAMA, see GAO, Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories, GAO-15-83 (Washington, D.C.: Oct. 31, 2014); Managing for Results: Selected Agencies Need to Take Additional Efforts to Improve Customer Service, GAO-15-84 (Washington, D.C.: Oct. 24, 2014); and Managing for Results: Agencies' Trends in the Use of Performance Information to Make Decisions, GAO-14-747 (Washington, D.C.: Sept. 26, 2014). In addition, information on GAO's work on GPRAMA can be found at http://www.gao.gov/key_issues/managing_for_results_in_government/issue_summary.) Reliable performance information and a greater focus on expenditures and outcomes are essential to improving the efficiency and effectiveness of federal efforts. To help analysts and decision makers better assess the extent of fragmentation, overlap, and duplication, GAO has developed an evaluation and management guide (GAO-15-49SP), which is being released concurrently with our 2015 annual report. The guide includes two parts. Part one provides four steps for analysts—including federal, state, and local auditors; congressional staff; and researchers—to identify and evaluate instances of fragmentation, overlap, or duplication. Each step includes examples that illustrate how to implement suggested actions or consider different types of information. Part two provides guidance to help policymakers reduce or better manage fragmentation, overlap, and duplication. In recognition that the pervasiveness of fragmentation, overlap, and duplication may require attention beyond the program level, the guide also includes information on a number of options Congress and the executive branch may consider to address these issues government-wide. Some of these options are executive branch reorganization, special temporary commissions, interagency groups, automatic sunset provisions, and portfolio or performance-based budgeting. These options can be used independently or together to assist policymakers in evaluating and addressing fragmentation, overlap, and duplication beyond the programmatic level.
Congress can also use its power of the purse and oversight powers to incentivize executive branch agencies to act on our suggested actions and monitor their progress. In particular, the Senate Budget Resolution for fiscal year 2016 directs committees to review programs and tax expenditures within their jurisdiction for waste, fraud, abuse, or duplication and to consider the findings from our past annual reports. Also, the accompanying report for the House Budget Resolution for fiscal year 2016 proposes that the Department of Justice (DOJ) streamline grants into three categories—first responder, law enforcement, and victims—which is consistent with our prior work recommending that DOJ better target its grant resources. The resolution also highlights a number of the issues presented in our annual reports—including the multiple programs that support Science, Technology, Engineering, and Mathematics education, housing assistance, homeland security preparedness grants, and green building initiatives—notes the number of programs that will need to be reauthorized in fiscal year 2016, and states that our findings should result in programmatic changes in both authorizing statutes and program funding levels. Congressional use of our findings in its decision making for the identified areas of fragmentation, overlap, and duplication will send an unmistakable message to agencies that Congress considers these issues a priority. Through its budget, appropriations, and oversight processes, Congress can also shift the burden to the agencies to demonstrate the effectiveness of their programs to justify continued funding. We will continue to conduct further analysis to look for additional or emerging instances of fragmentation, overlap, and duplication and opportunities for cost savings or revenue enhancement. Likewise, we will continue to monitor developments in the areas we have already identified in this series.
We stand ready to assist this and other committees in further analyzing the issues we have identified and evaluating potential solutions. Chairman Chaffetz, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer questions. For further information on this testimony or our April 14, 2015, reports, please contact Orice Williams Brown, Managing Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or williamso@gao.gov, and A. Nicole Clowers, Director, Financial Markets and Community Investment, who may be reached at (202) 512-8678 or clowersa@gao.gov. Contact points for the individual areas listed in our 2015 annual report can be found at the end of each area at GAO-15-404SP. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

As the fiscal pressures facing the government continue, so too does the need for executive branch agencies and Congress to improve the efficiency and effectiveness of government programs and activities. Such opportunities exist throughout government. To bring these opportunities to light, Congress included a provision in statute for GAO to annually identify federal programs, agencies, offices, and initiatives (both within departments and government-wide) that are fragmented, overlapping, or duplicative. As part of this work, GAO also identifies additional opportunities to achieve cost savings or enhanced revenue collection.
GAO's 2015 annual report is its fifth in this series (GAO-15-404SP). This statement discusses (1) new opportunities GAO identifies in its 2015 report; (2) the status of actions taken to address the opportunities GAO identified in its 2011-2014 reports; and (3) existing and new tools available to help executive branch agencies and Congress reduce or better manage fragmentation, overlap, and duplication. To identify what actions exist to address these issues and take advantage of opportunities for cost savings and enhanced revenues, GAO reviewed and updated prior work, including recommendations for executive action and matters for congressional consideration. GAO's 2015 annual report identifies 66 new actions that executive branch agencies and Congress could take to improve the efficiency and effectiveness of government in 24 areas. GAO identifies 12 new areas in which there is evidence of fragmentation, overlap, or duplication. For example, GAO suggests that Congress repeal the statutorily required US Family Health Plan—a decades-old component of the Department of Defense's (DOD) Military Health System—because it duplicates the efforts of DOD's managed care support contractors by providing the same benefit to military beneficiaries. GAO also identifies 12 areas where opportunities exist either to reduce the cost of government operations or enhance revenue collections. For example, GAO suggests that Congress update the way Medicare has paid certain cancer hospitals since 1983, which could save about $500 million per year. The executive branch and Congress have made progress in addressing the approximately 440 actions government-wide that GAO identified in its past annual reports. Overall, as of March 6, 2015, 37 percent of these actions were addressed, 39 percent were partially addressed, and 20 percent were not addressed.
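The percentages above can be translated into rough action counts, assuming the "approximately 440" total from the report. The counts below are derived purely for illustration and rounded to whole actions; the percentages sum to 96 percent, so some actions fall outside these three categories.

```python
# Rough translation of the progress percentages cited above into action counts,
# assuming the "approximately 440" total identified in past annual reports.
total_actions = 440
addressed = round(total_actions * 0.37)            # 37 percent addressed
partially_addressed = round(total_actions * 0.39)  # 39 percent partially addressed
not_addressed = round(total_actions * 0.20)        # 20 percent not addressed

print(addressed, partially_addressed, not_addressed)
```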
Executive branch and congressional efforts to address these actions over the past 4 years have resulted in over $20 billion in financial benefits, with about $80 billion more in financial benefits anticipated in future years from these actions. Although progress has been made, fully addressing all the remaining actions identified in GAO's annual reports could lead to tens of billions of dollars of additional savings. Addressing fragmentation, overlap, and duplication within the federal government is challenging due to, among other things, the lack of reliable budget and performance information. If fully and effectively implemented, the GPRA Modernization Act of 2010 and the Digital Accountability and Transparency Act of 2014 could help to improve performance and financial information. In addition, GAO has developed an evaluation and management guide (GAO-15-49SP), which is being released concurrently with the 2015 annual report. This guide provides a framework for analysts and decision makers to identify and evaluate instances of fragmentation, overlap, and duplication and consider options for addressing or managing such instances.
Information security is an important consideration for any organization that depends on information systems to carry out its mission. The dramatic expansion in computer interconnectivity and the exponential increase in the use of the Internet are changing the way our government, the nation, and much of the world communicate and conduct business. However, risks are significant, and they are growing. The number of computer security incidents reported to the CERT Coordination Center (CERT/CC) rose from 9,859 in 1999 to 21,756 in 2000. For the first 6 months of 2001, the number reported was 15,476. As the number of individuals with computer skills has increased, more intrusion or “hacking” tools have become readily available and relatively easy to use. A potential hacker can literally download tools from the Internet and "point and click" to start a hack. According to a recent National Institute of Standards and Technology (NIST) publication, hackers post 30 to 40 new tools to hacking sites on the Internet every month. The successful cyber attacks against such well-known U.S. e- commerce Internet sites as eBay, Amazon.com, and CNN.com by a 15-year old "script kiddie" in February 2000 illustrate the risks. Without proper safeguards, these developments make it easier for individuals and groups with malicious intentions to gain unauthorized access to systems and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other organizations’ sites. incidents reported in 2000, which occurred at 32 agencies, resulted in what is known as a “root compromise.” For at least five of the root compromises, government officials were able to verify that access to sensitive information had been obtained. How well federal agencies are addressing these risks is a topic of increasing interest in the executive and legislative branches. 
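The incident counts above imply a sharp growth trend. The sketch below derives the year-over-year growth rate and a straight-line annualization of the first-half 2001 figure; both derived quantities are illustrative assumptions, not figures from the report, while the three raw counts come from the text.

```python
# Sketch of the growth trend in the CERT/CC incident counts cited above.
incidents_1999 = 9_859
incidents_2000 = 21_756
incidents_h1_2001 = 15_476   # first 6 months of 2001

growth_2000 = incidents_2000 / incidents_1999 - 1
# Naive annualization: assume the second half of 2001 matches the first half.
annualized_2001 = incidents_h1_2001 * 2

print(f"1999->2000 growth: {growth_2000:.0%}")
print(f"Annualized 2001 estimate: {annualized_2001:,}")
```

Incidents more than doubled from 1999 to 2000, and the simple annualization suggests 2001 was on pace to exceed 2000 substantially.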
In January 2000, President Clinton issued a National Plan for Information Systems Protection and designated computer security and critical infrastructure protection a priority management objective in his fiscal year 2001 budget. The new administration, federal agencies, and private industry have collaboratively begun to prepare a new version of the national plan that will outline an integrated approach to computer security and critical infrastructure protection. The Congress, too, is increasingly interested in computer security, as evidenced by important hearings held during 1999, 2000, and 2001 on ways to strengthen information security practices throughout the federal government and on progress at specific agencies in addressing known vulnerabilities. Furthermore, in October 2000, the Congress included government information security reform provisions in the fiscal year 2001 National Defense Authorization Act. These provisions seek to ensure proper management and security for federal information systems by calling for agencies to adopt risk management practices that are consistent with those summarized in our 1998 Executive Guide. The provisions also require annual agency program reviews and Inspector General (IG) evaluations that must be reported to the Office of Management and Budget (OMB) as part of the budget process. The federal CIO Council and others have also initiated several projects that are intended to promote and support security improvements to federal information systems. Over the past year, the CIO Council, working with NIST, OMB, and us, developed the Federal Information Technology Security Assessment Framework. The framework provides agencies with a self-assessment methodology to determine the current status of their security programs and to establish targets for improvement. OMB has instructed agencies to use the framework to fulfill their annual assessment and reporting obligations. 
Since 1996, our analyses of information security at major federal agencies have shown that systems are not being adequately protected. Our previous reports, and those of agency IGs, describe persistent computer security weaknesses that place a variety of critical federal operations at risk of inappropriate disclosures, fraud, and disruption. This body of audit evidence has led us, since 1997, to designate computer security as a governmentwide high-risk area. Our most recent summary analysis of federal information systems found that significant computer security weaknesses had been identified in 24 of the largest federal agencies, including Commerce. During December 2000 and January 2001, Commerce's IG also reported significant computer security weaknesses in several of the department's bureaus and, in February 2001, reported information security as a material weakness affecting the department's ability to produce accurate data for financial statements. The report stated that there were weaknesses in several areas, including entitywide security management, access controls, software change controls, segregation of duties, and service continuity planning. Moreover, a recent IG assessment of the department's information security program found fundamental weaknesses in the areas of policy and oversight. Also, the IG designated information security as one of the top ten management challenges for the department. Commerce's missions are among the most diverse of the federal government's cabinet departments, covering a wide range of responsibilities that include observing and managing natural resources and the environment; promoting commerce, regional development, and scientific research; and collecting, analyzing, and disseminating statistical information. Commerce employs about 40,000 people in fourteen operating bureaus with numerous offices in the U.S. and overseas, each pursuing disparate programs and activities. 
IT is a critical tool for Commerce to support these missions. The department spends significant resources—reportedly over $1.5 billion in fiscal year 2000—on IT systems and services. Measured as a percentage of total agency expenditures, Commerce's IT spending ranks among the highest in the federal government, with 17 percent of its $9-billion fiscal year 2000 budget reported as spent on IT. A primary mission of Commerce is to promote job creation and improved living standards for all Americans by furthering U.S. economic growth, and the seven bureaus we reviewed support this mission through a wide array of programs and services. Commerce uses IT to generate and disseminate some of the nation's most important economic information. The International Trade Administration (ITA) promotes the export of U.S. goods and services—which amounted to approximately $1.1 trillion in fiscal year 2000. Millions of American jobs depend on exports, and with 96 percent of the world's consumers living outside U.S. borders, international trade is increasingly important to supporting this mission. The Economics and Statistics Administration (ESA) develops, prepares, analyzes, and disseminates important indicators of the U.S. economy that present basic information on such key issues as economic growth, regional development, and the U.S. role in the world economy. This information is of paramount interest to researchers, business, and policymakers. The Bureau of Export Administration (BXA), whose efforts supported sales of approximately $4.2 billion in fiscal year 1999, assists in stimulating the growth of U.S. exports while protecting national security interests by helping to stop the proliferation of weapons of mass destruction. Sensitive data such as that relating to national security, nuclear proliferation, missile technology, and chemical and biological warfare reside in this bureau's systems.
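The two IT spending figures above can be cross-checked against each other; the calculation below is an illustrative consistency check using only numbers from the text.

```python
# Consistency check for the Commerce IT spending figures cited above:
# 17 percent of the department's $9 billion FY 2000 budget should line up
# with the "over $1.5 billion" reportedly spent on IT systems and services.
total_budget = 9e9   # Commerce's fiscal year 2000 budget
it_share = 0.17      # reported share spent on IT

implied_it_spending = total_budget * it_share
print(f"Implied IT spending: ${implied_it_spending / 1e9:.2f} billion")
```

The implied figure of about $1.53 billion is consistent with the "over $1.5 billion" stated in the report.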
Commerce's ability to fulfill its mission depends on the confidentiality, integrity, and availability of this sensitive information. For example, export data residing in the BXA systems reflect technologies that have both civil and military applications; the misuse, modification, or deletion of these data could threaten our national security or public safety and affect foreign policy. Much of these data are also business proprietary. If these data were compromised, a business could not only lose its market share, but dangerous technologies might also end up in the hands of renegade nations that threaten our national security or that of other nations. Commerce's IT infrastructure is decentralized. Although the Commerce IT Review Board approves major acquisitions, most bureaus have their own IT budgets and act independently to acquire, develop, operate, and maintain their own infrastructure. For example, Commerce has 14 different data centers, diverse hardware platforms and software environments, and 20 independently managed e-mail systems. The bureaus also develop and control their own individual networks to serve their specific needs. These networks vary greatly in size and complexity. For example, one bureau has as many as 155 local area networks and 3,000 users spread over 50 states and 80 countries. Some of these networks are owned, operated, and managed by individual programs within the same bureau. Because Commerce does not have a single, departmentwide common network infrastructure to facilitate data communications across the department, the bureaus have established their own access paths to the Internet, which they rely on to communicate with one another. In April 2001, the department awarded a contract for a $4 million project to consolidate the individual bureaus' local area networks within its headquarters building onto a common network infrastructure.
However, until this project is completed, each of the bureaus is expected to continue to configure, operate, and maintain its own unique networks. Recognizing the importance of its data and operations, in September 1993 Commerce established departmentwide information security policies that defined and assigned a full set of security responsibilities, ranging from the department level down to individual system owners and users within the bureaus. Since 1998, the Commerce CIO position has been responsible for developing and implementing the department’s information security program. An information security manager, under the direction of the CIO's Office of Information Policy, Planning, and Review, is tasked with carrying out the responsibilities of the program. The CIO's responsibilities for the security of classified systems have been delegated to the Office of Security. In the last 2 years, the CIO introduced several initiatives that are essential to improving the security posture of the department. After a 1999 contracted evaluation of the bureaus' security plans determined that 43 percent of Commerce's most critical assets did not have current information system security plans, the CIO issued a memorandum calling for the bureaus to prepare security plans that comply with federal regulations. Also, in May 2000, the Office of the CIO performed a summary evaluation of the status of all the bureaus' information security based on the bureaus' own self-assessments. The results determined that overall information security program compliance was minimal, that no formal information security awareness and training programs were provided by the bureaus, and that incident response capabilities were either absent or informal. The Commerce IG indicated that subsequent meetings between the Office of the CIO and the bureaus led to improvements. 
The Office of the CIO plans to conduct another evaluation this year and, based on a comparison with last year's results, measure the bureaus’ success in strengthening their security postures. Finally, for the past year, the CIO attempted to restructure the department's IT management to increase his span of control over information security within the bureaus by enforcing his oversight authority and involvement in budgeting for IT resources. However, this initiative was not approved before the CIO’s resignation in 2001. In June 2001, after our fieldwork was completed, the Secretary of Commerce approved a high-level Commerce IT restructuring plan. The acting CIO stated that a task force is developing a more detailed implementation plan. A basic management objective for any organization is the protection of its information systems and critical data from unauthorized access. Organizations accomplish this objective by establishing controls that limit access to only authorized users, effectively configuring their operating systems, and securely implementing networks. However, our tests identified weaknesses in each of these control areas in all of the Commerce bureaus we reviewed. We demonstrated that individuals, both external and internal to Commerce, could compromise security controls to gain extensive unauthorized access to Commerce networks and systems. These weaknesses place the bureaus’ information systems at risk of unauthorized access, which could lead to the improper disclosure, modification, or deletion of sensitive information and the disruption of critical operations. As previously noted, because of the sensitivity of specific weaknesses, we plan to issue a report designated for "Limited Official Use," which describes in more detail each of the computer security weaknesses identified and offers specific recommendations for correcting them. 
Effective system access controls provide mechanisms that require users to identify themselves and authenticate their identity, limit the use of system administrator capabilities to authorized individuals, and protect sensitive system and data files. As with many organizations, passwords are Commerce’s primary means of authenticating user identity. Because system administrator capabilities provide the ability to read, modify, or delete any data or files on the system and modify the operating system to create access paths into the system, such capabilities should be limited to the minimum access levels necessary for systems personnel to perform their duties. Also, information can be protected by using controls that limit an individual’s ability to read, modify, or delete information stored in sensitive system files. One of the primary methods to prevent unauthorized access to information system resources is through effective management of user IDs and passwords. To accomplish this objective, organizations should establish controls that include requirements to ensure that well-chosen passwords are required for user authentication, passwords are changed periodically, the number of invalid password attempts is limited to preclude password guessing, and the confidentiality of passwords is maintained and protected. None of the Commerce bureaus reviewed was effectively managing user IDs and passwords to sufficiently reduce the risk that intruders could gain unauthorized access to their information systems and (1) change system access and other rules, (2) read, modify, delete, or redirect network traffic, and (3) read, modify, and delete sensitive information. Specifically, systems were either not configured to require passwords or, if passwords were required, they were relatively easy to guess. 
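The password-management controls listed above can be sketched as a few simple checks; the thresholds and word list below are illustrative assumptions, not Commerce policy.

```python
# Illustrative password-management checks; thresholds and the word list
# are assumptions chosen for the sketch, not actual Commerce settings.
COMMON_WORDS = {"password", "admin", "welcome"}
MAX_PASSWORD_AGE_DAYS = 90   # require periodic password changes
MAX_FAILED_ATTEMPTS = 3      # limit invalid attempts to preclude guessing

def password_acceptable(user_id: str, password: str) -> bool:
    """Reject empty, easily guessed, or ID-matching passwords."""
    if len(password) < 8:
        return False
    if password.lower() in COMMON_WORDS:
        return False
    if password.lower() == user_id.lower():
        return False
    return True

def should_lock_account(failed_attempts: int) -> bool:
    """Lock the account once the invalid-attempt limit is reached."""
    return failed_attempts >= MAX_FAILED_ATTEMPTS

def password_expired(age_days: int) -> bool:
    """Force a change once a password exceeds the maximum age."""
    return age_days > MAX_PASSWORD_AGE_DAYS
```

Each check corresponds to one of the control requirements above: well-chosen passwords, periodic changes, and a cap on invalid logon attempts.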
For example: powerful system administrator accounts did not require passwords, allowing anyone who could connect to certain systems through the network to log on as a system administrator without having to use a password; systems allowed users to change their passwords to a blank password, completely circumventing the password control function; passwords were easily guessed words, such as "password"; passwords were the same as the user's ID; and commonly known default passwords set by vendors when systems were originally shipped had never been changed. Although frequent password changes reduce the risk of continued unauthorized use of a compromised password, systems in four of the bureaus reviewed had a significant number of passwords that never required changing or did not have to be changed for 273 years. Also, systems in six of the seven bureaus did not limit the number of times an individual could try to log on to a user ID. Unlimited attempts allow intruders to keep trying passwords until a correct password is discovered. Further, none of the Commerce bureaus reviewed adequately protected the passwords of their system users through measures such as encryption, as illustrated by the following examples: User passwords were stored in readable text files that could be viewed by all users on one bureau’s systems. Files that store user passwords were not protected from being copied by intruders, who could then take the copied password files and decrypt user passwords. The decrypted passwords could then be used to gain unauthorized access to systems by intruders masquerading as legitimate users. Over 150 users of one system could read the unencrypted password of a powerful system administrator's account. System administrators perform important functions in support of the operations of computer systems. These functions include defining security controls, granting users access privileges, changing operating system configurations, and monitoring system activity. 
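Rather than keeping passwords in readable text files, systems should store only salted one-way hashes, so that a copied password file does not yield usable passwords. A minimal sketch using a standard key-derivation function:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple:
    """Store a random salt and a salted one-way hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash from the candidate password and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Because the stored value is a one-way hash, even users who can read the password file cannot recover the password directly, unlike the readable-text storage found on one bureau's systems.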
In order to perform these functions, system administrators have powerful privileges that enable them to manipulate operating system and security controls. Privileges to perform these system administration functions should be granted only to employees who require such privileges to perform their responsibilities and who are specifically trained to understand and exercise those privileges. Moreover, the level of privilege granted to employees should not exceed the level required for them to perform their assigned duties. Finally, systems should provide accountability for the actions of system administrators on the systems. However, Commerce bureaus granted the use of excessive system administration privileges to employees who did not require such privileges to perform their responsibilities and who were not trained to exercise them. For example, a very powerful system administration privilege that should be used only in exceptional circumstances, such as recovery from a power failure, was granted to 20 individuals. These 20 individuals had the ability to access all of the information stored on the system, change important system configurations that could affect the system’s reliability, and run any program on the computer. Further, Commerce management also acknowledged that not all staff with access to this administrative privilege had been adequately trained. On other important systems in all seven bureaus, system administrators were sharing user IDs and passwords so that systems could not provide an audit trail of access by system administrators, thereby limiting accountability. By not effectively controlling the number of staff who exercise system administrator privileges, restricting the level of such privileges granted to those required to perform assigned duties, or ensuring that only well-trained staff have these privileges, Commerce is increasing the risk that unauthorized activity could occur and the security of sensitive information be compromised. 
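The least-privilege principle described above can be sketched as a simple check that the privileges granted to an employee never exceed those the assigned role requires; the role names and privilege sets below are illustrative assumptions.

```python
# Illustrative least-privilege check; role names and privilege sets are
# assumptions for the sketch, not an actual Commerce privilege model.
ROLE_PRIVILEGES = {
    "operator": {"read_logs"},
    "system_administrator": {"read_logs", "grant_access", "change_config"},
}

def grant_allowed(role: str, requested: set) -> bool:
    """Permit a grant only if it is a subset of what the role requires."""
    return requested <= ROLE_PRIVILEGES.get(role, set())
```

A grant request for privileges outside the role's defined set would be rejected, which is the restriction the bureaus failed to enforce when excessive administrative privileges were handed out.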
Access privileges to individual critical systems and sensitive data files should be restricted to authorized users. Not only does this restriction protect files that may contain sensitive information from unauthorized access, but it also provides another layer of protection against intruders who may have successfully penetrated one system from significantly extending their unauthorized access and activities to other systems. Examples of access privileges are the capabilities to read, modify, or delete a file. Privileges can be granted to individual users, to groups of users, or to everyone who accesses the system. Six of the seven bureaus' systems were not configured to appropriately restrict access to sensitive system and/or data files. For example, critical system files could be modified by all users to allow them to bypass security controls. Also, excessive access privileges to sensitive data files such as export license applications were granted. Systems configured with excessive file access privileges are extremely vulnerable to compromise because such configurations could enable an intruder to read, modify, or delete sensitive system and data files, or to disrupt the availability and integrity of the system. Operating system controls are essential to ensure that the computer systems and security controls function as intended. Operating systems are relied on by all the software and hardware in a computer system. Additionally, all users depend on the proper operation of the operating system to provide a consistent and reliable processing environment, which is essential to the availability and reliability of the information stored and processed by the system. Operating system controls should limit the extent of information that systems provide to facilitate system interconnectivity. Operating systems should be configured to help ensure that systems are available and that information stored and processed is not corrupted. 
Controls should also limit the functions of the computer system to prevent insecure system configurations or the existence of functions not needed to support the operations of the system. If functions are not properly controlled, they can be used by intruders to circumvent security controls. To facilitate interconnectivity between computer systems, operating systems are configured to provide descriptive and technical information, such as version numbers and system names, to other computer systems and individuals when connections are being established. At the same time, however, systems should be configured to limit the amount of information that is made available to other systems and unidentified individuals because this information can be misused by potential intruders to learn the characteristics and vulnerabilities of that system to assist in intrusions. Systems in all the bureaus reviewed were not configured to limit the system information exposed to potential attackers. The configuration of Commerce systems provided excessive amounts of information to anyone, including external users, without the need for authentication. Our testing demonstrated that potential attackers could collect information about systems, such as computer names, types of operating systems, functions, version numbers, user information, and other information that could be useful to circumvent security controls and gain unauthorized access. The proper configuration of operating systems is important to ensuring the reliable operation of computers and the continuous availability and integrity of critical information. Operating systems should be configured so that the security controls throughout the system function effectively and the system can be depended on to support the organization’s mission. 
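The kind of version and system-name disclosure described above can be screened for with a simple check of the banners a service presents at connection time; the banner strings and pattern below are illustrative assumptions.

```python
import re

def discloses_version(banner: str) -> bool:
    """Flag banners that pair a product name with a version number,
    information an attacker can match against known vulnerabilities.
    The pattern is a rough heuristic for this sketch."""
    return bool(re.search(r"\b\w+[/ ]\d+(\.\d+)+", banner))
```

A banner such as "ExampleFTPd 2.3.4" (a hypothetical product name) would be flagged, while a generic notice that reveals nothing about the system would not.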
Commerce bureaus did not properly configure operating systems to ensure that systems would be available to support bureau missions or prevent the corruption of the information relied on by management and the public. For example, in a large computer system affecting several bureaus, there were thousands of important programs that had not been assigned unique names. In some instances, as many as six different programs all shared the same name, many of which were different versions of the same program. Although the complexity of such a system may typically require installing some identically named programs, and although authorized programs must be able to bypass security in order to operate, an excessive number of such programs were installed on this system, many of them capable of bypassing security controls. Because these different programs are identically named, unintended programs could be inadvertently run, potentially resulting in the corruption of data or disruption of system operations. Also, because these powerful programs are duplicated, there is an increased likelihood that they could be misused to bypass security controls. In this same system, critical parts of the operating system were shared by the test and production systems used to process U.S. export information. Because critical parts were shared, as changes are made in the test system, these changes could also affect the production system. Consequently, changes could be made in the test system that would cause the production system to stop operating normally and shut down. Changes in the test system could also cause important Commerce data in the production system to become corrupted. Commerce management acknowledged that the isolation between these two systems needed to be strengthened to mitigate these risks. Operating system functions should be limited to support only the capabilities needed by each specific computer system. 
Moreover, these functions should be appropriately configured. Unnecessary operating system functions can be used to gain unauthorized access to a system and target that system for a denial-of-service attack. Poorly configured operating system functions can allow individuals to bypass security controls and access sensitive information without requiring proper identification and authentication. Unnecessary and poorly configured system functions existed on important computer systems in all the bureaus we reviewed. For example, unnecessary functions allowed us to gain access to a system from the Internet. Through such access and other identified weaknesses, we were able to gain system administration privileges on that system and subsequently gain access to other systems within other Commerce bureaus. Also, poorly configured functions would have allowed users to bypass security controls and gain unrestricted access to all programs and data. Networks are a series of interconnected information technology devices and software that allow groups of individuals to share data, printers, communications systems, electronic mail, and other resources. They provide the entry point for access to electronic information assets and provide users with access to the information technologies they need to satisfy the organization’s mission. Controls should restrict access to networks from sources external to the network. Controls should also limit the use of systems from sources internal to the network to authorized users for authorized purposes. External threats include individuals outside an organization attempting to gain unauthorized access to an organization’s networks using the Internet, other networks, or dial-up modems. Another form of external threat is flooding a network with large volumes of access requests so that the network is unable to respond to legitimate requests, one type of denial-of-service attack. 
External threats can be countered by implementing security controls on the perimeters of the network, such as firewalls, that limit user access and data interchange between systems and users within the organization’s network and systems and users outside the network, especially on the Internet. An example of perimeter defenses is only allowing pre-approved computer systems from outside the network to exchange certain types of data with computer systems inside the network. External network controls should guard the perimeter of the network from connections with other systems and access by individuals who are not authorized to connect with and use the network. Internal threats come from sources that are within an organization’s networks, such as a disgruntled employee with access privileges who attempts to perform unauthorized activities. Also, an intruder who has successfully penetrated a network’s perimeter defenses becomes an internal threat when the intruder attempts to compromise other parts of an organization’s network security as a result of gaining access to one system within the network. For example, at Commerce, users of one bureau who have no business need to access export license information on another bureau’s network should not have had network connections to that system. External network security controls should prevent unauthorized access from outside threats, but if those controls fail, internal network security controls should also prevent the intruder from gaining unauthorized access to other computer systems within the network. None of the Commerce bureaus reviewed had effective external and internal network security controls. Individuals, both within and outside Commerce, could compromise external and internal security controls to gain extensive unauthorized access to Commerce networks and systems. Bureaus employed a series of external control devices, such as firewalls, in some, but not all, of the access paths to their networks. 
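The perimeter-defense example above, in which only pre-approved outside systems may exchange certain types of data with systems inside the network, can be sketched as an allowlist check; the addresses and ports shown are illustrative assumptions.

```python
import ipaddress

# Hypothetical perimeter rule set: only pre-approved outside networks may
# reach specific services. Addresses are documentation-range examples.
ALLOWED_PEERS = {
    ipaddress.ip_network("192.0.2.0/24"): {443},  # partner systems, HTTPS only
}

def permit(src_ip: str, dst_port: int) -> bool:
    """Allow a connection only if the source network and destination
    port appear together in the rule set; deny everything else."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net and dst_port in ports
               for net, ports in ALLOWED_PEERS.items())
```

The default-deny posture is the key design choice: anything not explicitly pre-approved is rejected, which is the behavior the bureaus' inconsistently configured firewalls failed to provide.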
However, these controls did not effectively prevent unauthorized access to Commerce networks from the Internet or through poorly controlled dial-up modems that bypass external controls. For example, four bureaus had not configured their firewalls to adequately protect their information systems from intruders on the Internet. Also, six dial-up modems were installed so that anyone could connect to their network without having to use a password, thereby circumventing the security controls provided by existing firewalls. Our testing demonstrated that, once access was gained by an unauthorized user on the Internet or through a dial-up modem to one bureau’s networks, that intruder could circumvent ineffective internal network controls to gain unauthorized access to other networks within Commerce. Such weak internal network controls could allow an unauthorized intruder or authorized user on one bureau’s network to change the configuration of other bureaus’ network controls so that the user could observe network traffic, including passwords and sensitive information that Commerce transmits in readable clear text, and disrupt network operations. The external and internal security controls of the different Commerce bureau networks did not provide a consistent level of security in part because bureaus independently configured and operated their networks as their own individual networks. For example, four of the bureaus we reviewed had their own independently controlled access points to the Internet. Because the different bureaus' networks are actually logically interconnected and perform as one large interconnected network, the ineffective network security controls of one bureau jeopardize the security of other bureaus’ networks. 
Weaknesses in the external and internal network controls of the individual bureaus heighten the risk that outside intruders with no prior knowledge of bureau user IDs or passwords, as well as Commerce employees with malicious intent, could exploit the other security weaknesses in access and operating system controls discussed above to misuse, improperly disclose, or destroy sensitive information. In addition to logical access controls, other important controls should be in place to ensure the confidentiality, integrity, and reliability of an organization's data. These information system controls include policies, procedures, and techniques to provide appropriate segregation of duties among computer personnel, prevent unauthorized changes to application programs, and ensure the continuation of computer processing operations in case of unexpected interruption. The Commerce bureaus had weaknesses in each of these areas that heightened the risks already created by their lack of effective access controls. A fundamental technique for safeguarding programs and data is to segregate the duties and responsibilities of computer personnel to reduce the risk that errors or fraud will occur and go undetected. OMB Circular A-130, Appendix III, requires that roles and responsibilities be divided so that a single individual cannot subvert a critical process. Once policies and job descriptions that support the principles of segregation of duties have been established, access controls can then be implemented to ensure that employees perform only compatible functions. None of the seven bureaus in our review had specific policies documented to identify and segregate incompatible duties, and bureaus had assigned incompatible duties to staff. For example, staff were performing incompatible computer operations and security duties. In another instance, a bureau's security officer had the dual role of also being the bureau's network administrator. 
These two functions are not compatible since the individual's familiarity with system security could then allow him or her to bypass security controls either to facilitate performing administrative duties or for malicious purposes. Furthermore, none of the bureaus reviewed had implemented processes and procedures to mitigate the increased risks of personnel with incompatible duties. Specifically, none of the bureaus had a monitoring process to ensure appropriate segregation of duties, and management did not review access activity. Until Commerce restricts individuals from performing incompatible duties and implements compensating access controls, such as supervision and review, Commerce’s sensitive information will face increased risks of improper disclosure, inadvertent or deliberate misuse, and deletion, all of which could occur without detection. Also important for an organization's information security is ensuring that only authorized and fully tested software is placed in operation. To make certain that software changes are needed, work as intended, and do not result in the loss of data and program integrity, such changes should be documented, authorized, tested, and independently reviewed. Federal guidelines emphasize the importance of establishing controls to monitor the installation of and changes to software to ensure that software functions as expected and that a historical record is maintained of all changes. We have previously reported on Commerce's lack of policies on software change controls. Specific key controls not addressed were (1) operating system software changes, monitoring, and access and (2) controls over application software libraries including access to code, movement of software programs, and inventories of software. Moreover, implementation was delegated to the individual bureaus, which had not established written policies or procedures for managing software changes. 
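The change-control requirements described above, that software changes be documented, authorized, tested, and independently reviewed, can be sketched as a gate that blocks installation until every step is recorded as complete; the field names below are assumptions for the sketch.

```python
# Illustrative software change-control gate; the step names mirror the
# requirements in the text but the record format is an assumption.
REQUIRED_STEPS = ("documented", "authorized", "tested", "reviewed")

def change_approved(change_record: dict) -> bool:
    """Permit installation only when every control step is complete,
    leaving a historical record of the change in the process."""
    return all(change_record.get(step) for step in REQUIRED_STEPS)
```

A change missing any one step, such as independent review, would be held back, which is the discipline the bureaus' undocumented procedures could not guarantee.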
Only three of the seven bureaus we reviewed mentioned software change controls in their system security plans, while none of the bureaus had policies or procedures for controlling the installation of software. Such policies are important in order to ensure that software changes do not adversely affect operations or the integrity of the data on the system. Without proper software change controls, there are risks that security features could be inadvertently or deliberately omitted or rendered inoperable, processing irregularities could occur, or malicious code could be introduced. Organizations must take steps to ensure that they are adequately prepared to cope with a loss of operational capability due to earthquakes, fires, sabotage, or other disruptions. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested recovery plan that covers all key computer operations. Such a plan is critical for helping to ensure that information system operations and data can be promptly restored in the event of a service disruption. OMB Circular A-130, Appendix III, requires that agency security plans assure that there is an ability to restore service sufficient to meet the minimal needs of users. Commerce policy also requires a backup or alternate operations strategy. The Commerce bureaus we reviewed had not developed comprehensive plans to ensure the continuity of service in the event of a service disruption. Described below are examples of service continuity weaknesses we identified at the seven Commerce bureaus. None of the seven bureaus had completed recovery plans for all of their sensitive systems. Although one bureau had developed two recovery plans, one for its data center and another for its software development installation center, the bureau did not have plans to cover disruptions to the rest of its critical systems, including its local area network. Systems at six of the seven bureaus did not have documented backup procedures. 
One bureau stated that it had an agreement with another Commerce bureau to back it up in case of disruptions; however, this agreement had not been documented. One bureau stated in its backup strategy that tapes used for system recovery are neither stored off-site nor protected from destruction. For example, backup for its network file servers is kept in a file cabinet in a bureau official's supply room, and backup tapes for a database and web server are kept on the shelf above the server. In case of a destructive event, the backups could be subject to the same damage as the primary files. Two bureaus had no backup facilities for key network devices such as firewalls. Until each of the Commerce bureaus develops and fully tests comprehensive recovery plans for all of its sensitive systems, there is little assurance that in the event of service interruptions, many functions of the organization will not effectively cease and critical data will be lost. As our government becomes increasingly dependent on information systems to support sensitive data and mission critical operations, it is essential that agencies protect these resources from misuse and disruption. An important component of such protective efforts is the capability to promptly identify and respond to incidents of attempted system intrusions. Agencies can better protect their information systems from intruders by developing formalized mechanisms that integrate incident handling functions with the rest of the organizational security infrastructure. Through such mechanisms, agencies can address how to (1) prevent intrusions before they occur, (2) detect intrusions as they occur, (3) respond to successful intrusions, and (4) report intrusions to staff and management. Although essential to protecting resources, Commerce bureau incident handling capabilities are inadequate in preventing, detecting, responding to, and reporting incidents. 
Because the bureaus have not implemented comprehensive and consistent incident handling capabilities, decision-making may be haphazard when a suspected incident is detected, thereby impairing responses and reporting. Thus, there is little assurance that unauthorized attempts to access sensitive information will be identified and appropriate actions taken in time to prevent or minimize damage. Until adequate incident detection and response capabilities are established, there is a greater risk that intruders could be successful in copying, modifying, or deleting sensitive data and disrupting essential operations. Accounting for and analyzing computer security incidents are effective ways for organizations to better understand threats to their information systems. Such analyses can also pinpoint vulnerabilities that need to be addressed so that they will not be exploited again. OMB Circular A-130, Appendix III, requires agencies to establish formal incident response mechanisms dedicated to evaluating and responding to security incidents in a manner that protects their own information and helps to protect the information of others who might be affected by the incident. These formal incident response mechanisms should also share information concerning common vulnerabilities and threats within the organization as well as with other organizations. By establishing such mechanisms, agencies help to ensure that they can more effectively coordinate their activities when incidents occur. Although the Commerce CIO issued a July 1999 memorandum to all bureau CIOs outlining how to prevent, detect, respond to, and report incidents, the guidance has been inconsistently implemented. Six of the seven bureaus we reviewed have only ad hoc processes and procedures for handling incidents. None have established and implemented all of the requirements of the memo. Furthermore, Commerce does not have a centralized function to coordinate the handling of incidents on a departmentwide basis. 
Two preventive measures for deterring system intrusions are to install (1) software updates to correct known vulnerabilities and (2) messages warning intruders that their activities are punishable by law. First, federal guidance, industry advisories, and best practices all stress the importance of installing updated versions of operating systems and of the software that supports system operations to protect against vulnerabilities that have been discovered in previously released versions. If new versions have not yet been released, “patches” that fix known flaws are often readily available and should be installed in the interim. Updating operating systems and other software to correct these vulnerabilities is important because once vulnerabilities are discovered, technically sophisticated hackers write scripts to exploit them and often post these scripts to the Internet for the widespread use of lesser skilled hackers. Since these scripts are easy to use, many security breaches happen when intruders take advantage of vulnerabilities for which patches are available but system administrators have not applied the patches. Second, Public Law 99-474 requires that a warning message be displayed upon access to all federal computer systems notifying users that unauthorized use is punishable by fines and imprisonment. Not only does the absence of a warning message fail to deter potential intruders, but, according to the law, pursuing and prosecuting intruders is more difficult if they have not been previously made fully aware of the consequences of their actions. Commerce has not fully instituted these two key measures to prevent incidents. First, many bureau systems do not have system software that has been updated to address known security exposures. For example, during our review, we discovered 20 systems with known vulnerabilities for which patches were available but not installed. 
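The underlying patch-level check, comparing an installed version against the version in which a flaw was fixed, can be sketched as follows; the version strings are hypothetical.

```python
def version_tuple(version: str) -> tuple:
    """Convert a dotted version string into comparable integers."""
    return tuple(int(part) for part in version.split("."))

def needs_patch(installed: str, fixed_in: str) -> bool:
    """A system is exposed if it runs a version older than the one
    in which the vulnerability was corrected."""
    return version_tuple(installed) < version_tuple(fixed_in)
```

Routinely running such a comparison against vendor advisories would have flagged the 20 systems found running vulnerable versions for which patches were already available.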
Moreover, all the bureaus we reviewed were still running older versions of software used on critical control devices that manage network connections. Newer versions of software are available that correct the known security flaws of the versions that were installed. Second, in performing our testing of network security, we observed that warning messages had not been installed for several network paths into Commerce systems that we tested. Even though strong controls may not block all intrusions, organizations can reduce the risks associated with such events if they take steps to detect intrusions and the consequent misuse before significant damage can be done. Federal guidance emphasizes the importance of using detection systems to protect systems from the threats associated with increasing network connectivity and reliance on information systems. Additionally, federally funded activities, such as CERT/CC, the Department of Energy's Computer Incident Advisory Capability, and FedCIRC, are available to assist organizations in detecting and responding to incidents. Although the CIO’s July memo directs Commerce bureaus to monitor their information systems to detect unusual or suspicious activities, all the bureaus we reviewed were either not using monitoring programs or had only partially implemented their capabilities. For example, only two of the bureaus had installed intrusion detection systems. Also, system and network logs frequently had not been activated or were not reviewed to detect possible unauthorized activity. Moreover, modifications to critical operating system components were not logged, and security reports detailing access to sensitive data and resources were not sent to data owners for their review. 
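Much of the routine log review described above can be automated once logs are activated. The sketch below is illustrative only: the log-line format, field names, and failure threshold are assumptions for the example, not Commerce's actual conventions.

```python
from collections import defaultdict

# Hypothetical log line: "2001-03-05 14:02:11 FAIL user=jsmith src=10.1.2.3"
# The format, field names, and threshold here are illustrative assumptions.
def flag_repeated_failures(log_lines, threshold=5):
    """Tally failed logon attempts per user ID and flag IDs meeting a threshold."""
    failures = defaultdict(int)
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 4 and parts[2] == "FAIL":
            user_id = parts[3].split("=", 1)[1]
            failures[user_id] += 1
    return {uid: count for uid, count in failures.items() if count >= threshold}
```

Run periodically against activated system and network logs, even a simple check like this can surface repeated attempts to bypass security that would otherwise go unnoticed.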
The fact that bureaus we reviewed detected our activities only four times during the 2 months that we performed extensive external testing of Commerce networks, which included probing over 1,000 system devices, indicates that, for the most part, they are unaware of intrusions. For example, although we spent several weeks probing one bureau's networks and obtained access to many of its systems, our activities were never detected. Moreover, during testing we identified evidence of hacker activity that Commerce had not previously detected. Without monitoring their information systems, the bureaus cannot know how, when, and by whom specific computer activities are performed; be aware of repeated attempts to bypass security; or detect suspicious patterns of behavior, such as two users with the same ID and password logged on simultaneously or users with system administrator privileges logged on at an unexpected time of the day or night. As a result, the bureaus have little assurance that potential intrusions will be detected in time to prevent or, at least, minimize damage. The CIO's July memo also outlines how the bureaus are to respond to detected incidents. Instructions include responses such as notifying appropriate officials, deploying an on-site team to survey the situation, and isolating the attack to learn how it was executed. Only one of the seven bureaus reviewed has documented response procedures. Consequently, we experienced inconsistent responses when our testing was detected. For example, one bureau responded to our scanning of its systems by scanning ours in return. In another bureau, a Commerce employee who detected our testing responded by launching a software attack against our systems. In neither case was bureau management previously consulted or informed of these responses. The lack of documented incident response procedures increases the risk of inappropriate responses. 
For example, employees could take no action, take insufficient actions that fail to limit potential damage, take overzealous actions that unnecessarily disrupt critical operations, or take actions, such as launching a retaliatory attack, that could be considered improper. The CIO's July memo specifically requires bureau employees who suspect an incident or violation to contact their supervisor and the bureau security officer, who should report the incident to the department's information security manager. Reporting detected incidents is important because this information provides valuable input for risk assessments, helps in prioritizing security improvement efforts, and demonstrates trends of threats to an organization as a whole. The bureaus we reviewed have not been reporting all detected incidents. During our 2-month testing period, the seven bureaus collectively reported 16 incidents, 10 of which involved computer viruses. Four of the other six reported incidents related to our testing activities, one of which was reported after our discovery of evidence of a successful intrusion that Commerce had not previously detected and reported. However, we observed instances of detected incidents that were not reported to bureau security officers or the department's information security manager. For example, the Commerce employees who responded to our testing by targeting our systems in the two instances discussed above did not report either of the two incidents to their own bureau's security officer. By not reporting incidents, the bureaus lack assurance that identified security problems have been tracked and eliminated and the targeted systems restored and validated. Furthermore, information about incidents could be valuable to other bureaus and assist the department as a whole to recognize and secure systems against general patterns of intrusion. 
The underlying cause for the numerous weaknesses we identified in bureau information system controls is that Commerce does not have an effective departmentwide information security management program in place to ensure that sensitive data and critical operations receive adequate attention and that the appropriate security controls are implemented to protect them. Our study of security management best practices, as summarized in our 1998 Executive Guide, found that leading organizations manage their information security risks through an ongoing cycle of risk management. This management process involves (1) establishing a centralized management function to coordinate the continuous cycle of activities while providing guidance and oversight for the security of the organization as a whole, (2) identifying and assessing risks to determine what security measures are needed, (3) establishing and implementing policies and procedures that meet those needs, (4) promoting security awareness so that users understand the risks and the related policies and procedures in place to mitigate those risks, and (5) instituting an ongoing monitoring program of tests and evaluations to ensure that policies and procedures are appropriate and effective. However, Commerce's information security management program is not effective in any of these key elements. Establishing a central management function is the starting point of the information security management cycle mentioned above. This function provides knowledge and expertise on information security and coordinates organizationwide security-related activities associated with the other four segments of the risk management cycle. 
For example, the function researches potential threats and vulnerabilities, develops and adjusts organizationwide policies and guidance, educates users about current information security risks and the policies in place to mitigate those risks, and provides oversight to review compliance with policies and to test the effectiveness of controls. This central management function is especially important to managing the increased risks associated with a highly connected computing environment. By providing coordination and oversight of information security activities organizationwide, such a function can help ensure that weaknesses in one unit's systems do not place the entire organization's information assets at undue risk. According to Commerce policy, broad program responsibility for information security throughout the department is assigned to the CIO. Department of Commerce Organization Order 15-23 of July 5, 2000, specifically tasks the CIO with developing and implementing the department's information security program to assure the confidentiality, integrity, and availability of information and IT resources. These responsibilities include developing policies, procedures, and directives for information security; providing mandatory periodic training in computer security awareness and accepted practice; and identifying and developing security plans for Commerce systems that contain sensitive information. Furthermore, the CIO is also formally charged with carrying out the Secretary's responsibilities for computer security under OMB Circular A-130, Appendix III, for all Commerce bureaus and the Office of the Secretary. An information security manager under the direction of the Office of the CIO is tasked with carrying out the responsibilities of the security program. 
These responsibilities, which are clearly defined in department policy, include developing security policies, procedures, and guidance and assuring security oversight through reviews, which include tracking the implementation of required security controls. Commerce lacks an effective centralized function to facilitate the integrated management of the security of its information system infrastructure. At the time of our review, the CIO, who had no specific budget to fulfill security responsibilities and exercised no direct control over the IT budgets of the Commerce bureaus, stated that he believed he did not have sufficient resources or the authority to implement the department's information security program. Until February 2000, when additional staff positions were established to support the information security manager’s responsibilities, the information security manager had no staff to discharge these tasks. As of April 2001, the information security program was supported by a staff of three. Commerce policy also requires each of its bureaus to implement an information security program that includes a full range of security responsibilities. These include appointing a bureauwide information security officer as well as security officers for each of the bureau's systems. However, the Commerce bureaus we reviewed also lack their own centralized functions to coordinate bureau security programs with departmental policies and procedures and to implement effective programs for the security of the bureaus' information systems infrastructure. For example, four bureaus had staff assigned to security roles on a part-time basis, with security responsibilities treated as collateral duties. 
In view of the widespread interconnectivity of Commerce's systems, the lack of a centralized approach to the management of security is particularly risky since there is no coordinated effort to ensure that minimal security controls are implemented and effective across the department. As demonstrated by our testing, intruders who succeeded in gaining access to a system in a bureau with weak network security could then circumvent the stronger network security of other bureaus. It is, therefore, unlikely that the security posture of the department as a whole will significantly improve until a more integrated security management approach is adopted and sufficient resources allotted to implement and enforce essential security measures departmentwide. As outlined in our 1998 Executive Guide, understanding the risks associated with information security is the second key element of the information security management cycle. Identifying and assessing information security risks helps to determine what controls are needed and what level of resources should be expended on controls. Federal guidance requires all federal agencies to develop comprehensive information security programs based on assessing and managing risks. Commerce policy regarding information security requires (1) all bureaus to establish and implement a risk management process for all IT resources and (2) system owners to conduct a periodic risk analysis for all sensitive systems within each bureau. Commerce bureaus we reviewed are not conducting risk assessments for their sensitive systems as required. Only 3 of the bureaus' 94 systems we reviewed had documented risk assessments, one of which was still in draft. Consequently, most of the bureaus' systems are being operated without consideration of the risks associated with their immediate environment. 
Moreover, these bureaus are not considering risks outside their immediate environment that affect the security of their systems, such as network interconnections with other systems. Although OMB Circular A-130, Appendix III, specifically requires that the risks of connecting to other systems be considered prior to doing so, several bureau officials acknowledged that they had not considered how vulnerabilities in systems that interconnected with theirs could undermine the security of their own systems. Rather, the initial decision to interconnect should have been made by management based on an assessment of the risk involved, the controls in place to mitigate the risk, and the predetermined acceptable level of risk. The widespread lack of risk assessments, as evidenced by the serious access control weaknesses revealed during our testing, indicates that Commerce is doing little to understand and manage risks to its systems. Once risks have been assessed, OMB Circular A-130, Appendix III, requires agencies to document plans to mitigate these risks through system security plans. These plans should contain an overview of a system's security requirements; describe the technical controls planned or in place for meeting those requirements; include rules that delineate the responsibilities of managers and individuals who access the system; and outline training needs, personnel controls, and continuity plans. Security plans should also be updated regularly to reflect significant changes to the system as well as the rapidly changing technical environment, and they should document that all aspects of security for a system, including management, technical, and operational controls, have been fully considered. None of the bureaus we reviewed had security plans for all of their sensitive systems. Of the 94 sensitive systems we reviewed, 87 had no security plans. Of the seven systems that did have security plans, none had been approved by management. 
Moreover, five of these seven plans did not include all the elements required by OMB Circular A-130, Appendix III. Without comprehensive security plans, the bureaus have no assurance that all aspects of security have been considered in determining the security requirements of the system and that adequate protection has been provided to meet those requirements. OMB also requires management officials to formally authorize the use of a system before it becomes operational, when a significant change occurs, and at least every 3 years thereafter. Authorization provides quality control in that it forces managers and technical staff to find the best fit for security, given technical constraints, operational constraints, and mission requirements. By formally authorizing a system for operational use, a manager accepts responsibility for the risks associated with it. Since the security plan establishes the system protection requirements and documents the security controls in place, it should form the basis for management's decision to authorize processing. As of March 2001, Commerce management had not authorized any of the 94 sensitive systems that we identified. According to the more comprehensive data collected by the Office of the CIO in March 2000, 92 percent of all the department's sensitive systems had not been formally authorized. The lack of authorization indicates that systems' managers had not reviewed and accepted responsibility for the adequacy of the security controls implemented on their systems. As a result, Commerce has no assurance that these systems are being adequately protected. The third key element of computer security management, as identified during our study of information security management practices at leading organizations, is establishing and implementing policies. Security policies are important because they are the primary mechanism by which management communicates its goals and requirements. 
Federal guidelines require agencies to frequently update their information security policies in order to assess and counter rapidly evolving threats and vulnerabilities. Commerce's information security policies are significantly outdated and incomplete. Developed in 1993 and partially revised in 1995, the department's information security policies and procedures manual, Information Technology Management Handbook, Chapter 10, “Information Technology Security,” and its attachment, “Information Technology Security,” does not comply with OMB’s February 1996 revision to Circular A-130, Appendix III, and does not incorporate more recent NIST guidelines. For example, Commerce’s information security policy does not reflect current federal requirements for managing computer security risk on a continuing basis, authorizing processing, providing security awareness training, or performing system reviews. Moreover, because the policy was written before the explosive growth of the Internet and Commerce’s extensive use of it, policies related to the risks of current Internet usage are omitted. For example, Commerce has no departmentwide security policies on World Wide Web sites, e-mail, or networking. Further, Commerce has no departmental policies establishing baseline security requirements for all systems. For example, there is no departmental policy specifying required attributes for passwords, such as minimum length and the inclusion of special characters. Consequently, security settings differ both among bureaus and from system to system within the same bureau. Furthermore, Commerce lacks consistent policies establishing a standard minimum set of access controls. Having these baseline agencywide policies could eliminate many of the vulnerabilities discovered by our testing, such as configurations that provide users with excessive access to critical system files and sensitive data and that expose excessive system information, all of which facilitate intrusions. 
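A departmentwide baseline of required password attributes, once specified, can be enforced mechanically. The values in the sketch below (an 8-character minimum and at least one special character) are illustrative assumptions, not an actual Commerce requirement:

```python
import string

# Illustrative baseline values; the actual attributes would be set by policy.
MIN_LENGTH = 8

def meets_password_baseline(password):
    """Check a candidate password against a hypothetical departmentwide baseline."""
    has_special = any(ch in string.punctuation for ch in password)
    return len(password) >= MIN_LENGTH and has_special
```

Applying one such check uniformly across bureaus would remove the system-to-system variation in security settings that the lack of a baseline policy produces.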
The Director of the Office of Information Policy, Planning, and Review and the Information Security Manager stated that Commerce management recognizes the need to update the department's information security policy and will begin updating the security sections of the Information Technology Management Handbook in the immediate future. The fourth key element of the security management cycle involves promoting awareness and conducting required training so that users understand the risks and the related policies and controls in place to mitigate them. Computer intrusions and security breakdowns often occur because computer users fail to take appropriate security measures. For this reason, it is vital that employees who use computer systems in their day-to-day operations are aware of the importance and sensitivity of the information they handle, as well as the business and legal reasons for maintaining its confidentiality, integrity, and availability. OMB Circular A-130, Appendix III, requires that employees be trained on how to fulfill their security responsibilities before being allowed access to sensitive systems. The Computer Security Act mandates that all federal employees and contractors who are involved with the management, use, or operation of federal computer systems be provided periodic training in information security awareness and accepted information security practice. Specific training requirements are outlined in NIST guidelines, which establish a mandatory baseline of training in security concepts and procedures and define additional structured training requirements for personnel with security-sensitive responsibilities. Overall, none of the seven bureaus had documented computer security training procedures, and only one of the bureaus had documented its policy for such training. This bureau also used a network user responsibility agreement, which requires that all network users read and sign a one-page agreement describing the network rules. 
Officials at another bureau stated that they were developing a security awareness policy document. Although each of the seven bureaus had informal programs in place, such as a brief overview as part of the one-time general security orientation for new employees, these programs do not meet the requirements of OMB, the Computer Security Act, or NIST Special Publication 800-16. Such brief overviews do not ensure that security risks and responsibilities are understood by all managers, users, and system administrators and operators. Shortcomings in the bureaus' security awareness and training activities are illustrated by the following examples. Officials at one bureau told us that they did not see training as an integral part of the bureau's security program and provided an instructional handbook only to users of a specific bureau application. Another bureau used a generic computer-based training course distributed by the Department of Defense that described general computer security concepts but was not specific to Commerce's computing environment. Also, this bureau did not maintain records to document who had participated. Another bureau had limited awareness practices in place, such as distribution of a newsletter to staff, but had no regular training program. Officials at this bureau told us that they were in the process of assessing the bureau's training requirements. Only one Commerce bureau that we reviewed provided periodic refresher training. In addition, staff directly responsible for information security do not receive training beyond such overviews, since security is not considered to be a full-time function requiring special skills and knowledge. Several of the computer security weaknesses we discuss in this testimony indicate that Commerce employees are either unaware of or insensitive to the need for important information system controls. 
The final key element of the security management cycle is an ongoing program of tests and evaluations to ensure that systems are in compliance with policies and that policies and controls are both appropriate and effective. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and corrects areas of noncompliance and ineffectiveness. For these reasons, OMB Circular A-130, Appendix III, directs that the security controls of major information systems be independently reviewed or audited at least every 3 years. Commerce policy also requires information security program oversight and tasks the program manager with performing compliance reviews of the bureaus as well as verification reviews of individual systems. The government information security reform provisions of the fiscal year 2001 National Defense Authorization Act require annual independent reviews of IT security in fiscal years 2001 and 2002. No oversight reviews of the Commerce bureaus' systems have been performed by the staff of Commerce's departmentwide information security program. The information security manager stated that he was not given the resources to perform these functions. Furthermore, the bureaus we reviewed do not monitor the effectiveness of their information security. Only one of the bureaus has performed isolated tests of its systems. In lieu of independent reviews, in May 2000, the Office of the CIO, using a draft of the CIO Council's Security Assessment Framework, requested that all Commerce bureaus submit a self-assessment of the security of their systems based on the existence of risk assessments, security plans, system authorizations, awareness and training programs, service continuity plans, and incident response capabilities. 
This self-assessment did not require testing or evaluating whether systems were in compliance with policies or whether implemented controls were effective. Nevertheless, the Office of the CIO’s analysis of the self-assessments showed that 92 percent of Commerce's sensitive systems did not comply with federal security requirements. Specifically, 63 percent of Commerce's systems did not have security plans that comply with federal guidelines, 73 percent had no risk assessments, 64 percent did not have recovery plans, and 92 percent had not been authorized for operational use. The information security manager further stated that, because of the continued lack of resources, the Office of the CIO would not be able to test and evaluate the effectiveness of Commerce's information security controls to comply with the government information security reform provisions of the fiscal year 2001 National Defense Authorization Act. Instead, the information security manager stated that he would again ask the bureaus to do another self-assessment and would analyze the results. In future years, the information security manager intends to perform hands-on reviews as resources permit. In conclusion, Mr. Chairman, the significant and pervasive weaknesses that we discovered in the seven Commerce bureaus we tested place the data and operations of these bureaus at serious risk. Sensitive economic, personnel, financial, and business confidential information is exposed, allowing potential intruders to read, copy, modify, or delete these data. Moreover, critical operations could effectively cease in the event of accidental or malicious service disruptions. Poor detection and response capabilities exacerbate the bureaus' vulnerability to intrusions. As demonstrated during our own testing, the bureaus' general inability to notice our activities increases the likelihood that intrusions will not be detected in time to prevent or minimize damage. 
These weaknesses are attributable to the lack of an effective information security program, that is, lack of centralized management, a risk-based approach, up-to-date security policies, security awareness and training, and continuous monitoring of the bureaus' compliance with established policies and the effectiveness of implemented controls. These weaknesses are exacerbated by Commerce's highly interconnected computing environment in which the vulnerabilities of individual systems affect the security of systems in the entire department, since a compromise in a single poorly secured system can undermine the security of the multiple systems that connect to it. | This testimony discusses information security controls over computer systems at the Department of Commerce. Dramatic increases in computer interconnectivity, especially in the use of the Internet, are revolutionizing the way the government, the nation, and much of the world communicate and conduct business. However, this widespread interconnectivity also poses significant risks to the nation's computer systems and to the critical operations and infrastructures they support. This testimony provides information on the effectiveness of Commerce's (1) logical access controls and other information system controls over its computerized data, (2) incident detection and response capabilities, and (3) information security management program and related procedures. |
DOD hires contractors to provide a wide range of services that may include basic services (custodial and landscaping); administrative services (travel and management support); and complex professional and management (i.e., advisory and assistance) services that closely support inherently governmental functions, decisions, and spending (e.g., acquisition support, budget preparation, developing or interpreting regulations, engineering and technical services, and policy development). Contractor employees often work inside DOD facilities, alongside DOD employees, to provide these services. DOD’s increased spending on services in recent years indicates there are a large number of contractor employees working side-by-side with federal employees. In fiscal year 2006, DOD obligated more than $151 billion on services contracts, a 78 percent real increase since fiscal year 1996. Overall, according to DOD, the amount obligated on services contracts in fiscal year 2005 exceeded the amount the department spent on supplies and equipment, including major weapon systems. Some categories of spending on services have grown significantly in recent years. For example, obligations for professional, management, and administrative support grew about 161 percent from 1996 to 2005; obligations for medical services grew by more than 400 percent during the same time period. Several reasons are behind DOD’s increased reliance on contractors for services. In addition to the belief that it is more cost-effective to hire contractor employees instead of government employees, reasons include the need for skills and expertise not currently found in DOD; the flexibility and the relative ease of obtaining necessary support from contractor employees instead of hiring more government employees; and ceilings on the authorized number of government employees. DOD has not compiled departmentwide data on the numbers of contractor employees working at its facilities. 
Indications are, however, that significant numbers of contractor employees are working side-by-side with government employees in certain segments of the department, based in part on information we obtained from 21 DOD offices we reviewed. As shown in table 1, at 15 of these offices, contractor employees outnumber DOD employees, and the percentage of contractor employees in the remaining offices ranges from 19 to 46 percent. For the offices we reviewed, contractors are supporting key mission-critical tasks that have the potential to influence DOD decisions. Some of these tasks are similar to functions performed by federal employees. For example, at the front end of the acquisition process, contractor employees who work in various DOD program offices study alternative ways to acquire desired capabilities, help develop requirements, help design and evaluate requests for proposals as well as the responses to those proposals, and provide advice on the past performance of external contractors competing for that work. In the course of an acquisition, contractor employees recommend actions to program offices to correct other contractors’ performance problems, analyze other contractors’ cost, schedule, and performance data, and assist in award-fee determinations for other contractors. In addition, contractor employees help DOD program offices develop long-range financial plans as well as yearly budgets. They also assist in administrative tasks to support program offices by tracking travel budgets and researching and reconciling payment discrepancies. Although we view the types of roles being played by contractor employees as closely supporting inherently governmental functions, some DOD officials had a different perspective. 
That is, when discussing contractor employees’ roles in the decision-making process, program managers we spoke with characterize contractor employee involvement as “technical” input into the decision-making process rather than direct involvement in the decisions themselves. It should be noted that the Federal Acquisition Regulation (FAR) identifies contractor participation in the evaluation of contract proposals as one of those functions that may approach being inherently governmental. Appendix IV provides more details on the key services contractor employees are performing in the DOD offices we reviewed. Contractor employees are not subject to the same laws and regulations that are designed to prevent conflicts of interest among federal employees. While the DFARS and the FAR require that the companies that provide contractor employees to DOD have written ethics policies, no departmentwide or FAR policy obliges DOD offices using contractor employees to require that they be free from personal conflicts of interest. On the other hand, some program managers, realizing the risk from potential personal conflicts of interest, have established their own safeguards for particularly sensitive areas where contractor employees provide support to decision processes. For example, all DOD offices we reviewed that used contractor employees in the source selection process use additional safeguards, such as contract clauses designed to prevent personal conflicts of interest. The same offices, however, assessed risk differently when it came to other types of activities that contractors perform, with only 6 of 21 offices using similar conflict of interest contract clauses for activities such as requirements development, cost estimating, and test and evaluation. 
Most of the firms we visited have ethics policies that address personal conflicts of interest, but only three directly require their employees to identify potential personal conflicts of interest with respect to their work at DOD so that the firms can screen and mitigate them. Lastly, because of recent public scrutiny of a conflict of interest involving a high-level FFRDC official, in January 2007 DOD revised its policy for employees of FFRDCs. Several laws and regulations address situations where individuals performing service to the government are, or might appear to be, unduly influenced by a personal financial interest. These include prohibitions on bribery and kickbacks; bans on participating in matters affecting a personal financial interest; and requirements governing private employment contacts between certain procurement officials and potential bidders on government contracts. For federal employees, misconduct in some of these areas also violates criminal statutes, with potentially serious consequences including dismissal, prosecution, fines, and incarceration. As shown in table 2, only very limited prohibitions, those relating to public corruption involving criminal bribery activity, apply to both DOD and defense contractor employees. The type of public corruption addressed by laws covering both federal and contractor employees concerns bribes, kickbacks, or other forms of graft. The anti-bribery law seeks to prevent the type of “quid pro quo” in which an official action is taken in return for money, favors, travel, gifts, or other things of value. Examples of this law being applied to bribery cases involving contractor employees working on government contracting matters follow: A Navy contractor employee at the Space and Naval Warfare Systems Center pled guilty in 2006 to accepting bribes from a freight forwarding company.
In exchange for awarding freight transportation contracts to the company, this contractor employee received items valued at more than $10,000, including extravagant dinners, concert and NASCAR tickets, weekends at a bed-and-breakfast inn, jewelry, and “spa days” at a department store. Investigators discovered that, not coincidentally, the freight company’s business was virtually nonexistent before this contractor employee began awarding the company contracts that eventually totaled over $700,000. The contractor employee was sentenced to a year in prison and ordered to help repay the government $84,000. An Army contractor employee working for the Coalition Provisional Authority in Iraq was put in charge of over $82 million in funding for an area south of Baghdad. The contractor employee quickly began accepting bribes in the form of cash, cars, jewelry, and sexual favors from a U.S. citizen who owned and operated several companies in Iraq and Romania, in exchange for steering lucrative contracts in the business owner’s direction. Investigators recovered incriminating e-mail traffic, including one e-mail from the contractor employee to the business owner exclaiming, “I love to give you money!” The contractor employee pled guilty in 2006 to bribery, conspiracy, and money laundering and was sentenced to 9 years in prison and 3 years of supervised release and ordered to forfeit $3.6 million. DOD lacks a departmentwide policy requiring safeguards against personal conflicts of interest for contractor employees. For example, although DOD contracting policy in the DFARS encourages companies providing contractor employees to DOD to have written ethics policies, it does not require that contractor employees be free from conflicts of interest or deploy other safeguards to help assure that the advice and assistance received from contractor employees is not tainted by personal conflicts of interest.
This policy also fails to address procurement integrity issues involving contractor employees contacting prospective bidders on DOD contracts about future employment. Since December 2007, the FAR has required certain contractors to adopt and follow written codes of business ethics and conduct. However, as shown in table 3, this new FAR requirement for contractors’ ethics programs, which was modeled on some of the DFARS requirements, will not remedy DOD’s lack of a departmentwide policy requiring safeguards against personal conflicts of interest among contractor employees. Like DOD’s policy, the new FAR requirements lack specific provisions to prohibit conflicts of interest or to employ other safeguards to assure that the advice and assistance received from contractor employees is not tainted by personal conflicts of interest. While DOD does not have a policy regarding contractor employee conflicts of interest, many DOD offices believe there is a risk of personal conflicts of interest when contractor employees participate in source selection activities. All 19 of the DOD offices we reviewed that involved contractor employees in source selection had established safeguard procedures, such as contract clauses or self-certifications, to prevent conflicts of interest in the source selection process, as shown in table 4. By contrast, program offices assessed risk differently when it came to other types of contractor employee participation in decision making. Only 6 of the 21 offices had personal conflict of interest safeguards, such as contract clauses, for other types of contractor employee services involving advice and assistance on governmental decisions, such as services related to requirements development, test and evaluation, and cost estimating.
The Air Force’s Electronic Systems Center uses a contract clause as a safeguard to prevent conflicts of interest for contractors involved in source selection and other activities critical to mission support and government decision making. According to an Air Force contracting official, this contract clause, highlighted in greater detail in table 5, affects 38 prime contractor companies, with an estimated value of $280 million in task orders in 2007. An Air Force official told us that this clause has been used for at least 10 years in recognition of the close relationship between decision makers and federal employee advisors, who are both required to identify and avoid financial conflicts, and the contractor employees directly advising them in these roles. Thus, the clause provides a mechanism to address potential and actual contractor financial conflicts that could affect the integrity of the procurement system. Also, the Army’s Communications Electronics Lifecycle Management Command developed a personal conflict of interest policy and contractual procedural safeguards for its contractor employees after our June 2007 visit to three of its offices, where Army officials told us they had not previously considered the need to do so. According to a policy alert sent in August 2007 to the command’s contracting activities, an underlying principle behind the policy is preventing conflicting roles that might bias a contractor employee’s judgment. According to the new policy, conflicts of interest are more likely to occur in support services contracts, such as management support, consulting, preparation of statements of work, and performance of technical evaluations. Table 5 also highlights the command’s new safeguards in greater detail. Appendix V includes the full text of both contract clauses.
We obtained information from one defense contractor, a large business, about how the company has implemented the clause required under its subcontract with an Electronic Systems Center prime contractor. According to the contractor’s senior vice president, the company developed a policy and procedures for annual financial conflict of interest certifications by employees. According to the senior vice president, this conflict of interest safeguard applies to every employee working on the subcontract. The company’s safeguard is similar to the financial disclosure process used for DOD employees covered by federal conflict of interest safeguards. For example, the company’s instructions to employees state that the annual financial disclosure and certification process is done to assure that each employee is “free from any actual, potential, or apparent financial conflicts of interest with work he or she may perform on this contract.” In 2006, a conflict of interest for one of the company’s employees was disclosed on the annual certification. According to the company’s senior vice president, after the employee disclosed that his wife had taken a job with one of the center’s prime contractors, the company removed him from performing service under the subcontract. The annual review had revealed not only that the employee might have a financial conflict of interest that could not reasonably be mitigated given the subcontracted work he was performing at the Electronic Systems Center, but also that he had not complied with the company’s ongoing requirement for employees to avoid prohibited financial interests and to immediately notify the company when financial interests change from what was certified in the last disclosure. We analyzed the ethics program documents available for 22 of the 23 contractors we reviewed and found that 18 have written policies and procedures that address avoidance of personal conflicts of interest by their employees.
However, the policies require employees to avoid a range of interests, such as owning substantial stock in competitors or suppliers, that conflict with the firms’ interests. Except in three cases, the policies did not require written disclosure forms identifying potential conflicts of interest with the employees’ work at DOD. More specifically, our review of the documents showed that: policies for 4 of the contractor firms did not address avoidance of personal conflicts of interest at all; policies for 18 of the firms did address avoidance of personal conflicts of interest, but just 3 specifically required written disclosures identifying potential conflicts of interest with respect to work for customers, including DOD, so that the firms could screen and mitigate them; and 16 of the firms extended their conflicts policies to employees’ family members. Our analysis of contractors’ ethics documents found variation in how the contractors’ policy and procedural safeguards address employees’ financial interests that could conflict, or create the appearance of a conflict, with the work they do for clients such as DOD. For example, several companies have conflict of interest policies, addressing business ethics and standards of conduct, that require all employees to avoid having a range of financial or personal interests that would interfere in any way with their work for the company, could make others question the company’s integrity, or could give the appearance of impropriety. These contractors’ conflicts of interest policies generally describe a range of activities to be avoided, including employees’ financial or other interests, arrangements, outside business interests, and personal relationships that could pose an actual or apparent conflict. In three cases, however, the policies required written disclosure forms identifying potential conflicts of interest related to work carried out for DOD.
That is, three contractor firms we reviewed require their employees to disclose potential conflicts related to their work at DOD. These firms have employees working for the Air Force, Navy, and Army who advise and assist on engineering development and operation of aircraft and missile programs and on acquisition management support for a communications program. For example, as described below, two firms have measures for ensuring that their employees do not have personal interests that would conflict with their work at DOD. According to the firms’ ethics documentation, these measures are part of their corporate mission and values statements, so that employees at all levels are aware that their services to clients, as well as individual and company decisions, are based on core business values such as honesty and the highest standards of ethics and integrity. For example: One small business contractor has employees who work on a range of aeronautical systems programs of the Air Force Materiel Command. Their responsibilities for one of the offices we visited, in the area of acquisition management, include tasks in various phases of the acquisition cycle, such as development, award, management, and contract closeout. The company has a 3-page Financial Conflict of Interest Reporting Form, modeled, according to company officials, on the federal financial disclosure form, that each of its professional employees must submit when initially hired and annually thereafter. The contractor’s reporting form asks each employee whether he or she has any personal or household financial interests in the matters dealt with under the Air Force contract, such as stock ownership in any of the contractors involved in the aeronautical systems programs that the employee works on.
The company’s vice president stated that, as the employee’s supervisor, he evaluates interests reported on the financial conflict of interest form and reviews the circumstances in light of the individual’s present and prospective duties to ensure that both actual and apparent conflicts of interest are avoided. According to the vice president, he also decides how any conflict or apparent conflict will be resolved, such as through reassignment, divestiture, or disqualification. A large business defense contractor has employees who work on missile programs under the Naval Sea Systems Command. Their responsibilities for the Navy include systems engineering and program office support, including contract management input for award-fee deliberations and contract modifications. The company has a 1-page Certificate on Conflict of Interest, Relationships with Suppliers, and Standards of Business Conduct that, according to contractor officials, employees are required to submit annually by e-mail, fax, or online. The certification form requires yes or no answers to seven questions that prompt each employee to disclose certain interests in the company’s suppliers or prospective suppliers, such as whether the employee or a family member has a substantial financial interest. The form also asks each employee whether he or she or any family member has any other interest or agreement that may violate the Standards of Business Conduct or may otherwise result in an actual or perceived conflict of interest. According to the company’s ethics and Navy contracting managers, the annual conflict of interest certification process receives a fair amount of supervisory review and screening by corporate business ethics offices in order to prevent or mitigate actual conflicts of interest, or even the appearance of an employee being in such a position.
DOD’s FFRDCs are private nonprofit organizations established to meet specialized or long-term research or development needs that cannot be met by existing government or contractor resources. For example, employees of FFRDCs may provide design and systems engineering expertise to major space or weapon acquisition programs and even work alongside DOD employees. They may conduct independent assessments of technical risk, management, cost, and schedule for particular programs or engage in broader research on international security and defense strategy, acquisition and technology policy, force management, and logistics. In 2006, prompted by public and congressional scrutiny of a conflict of interest involving the president of one DOD-sponsored FFRDC, DOD’s Deputy General Counsel (Acquisition and Logistics) reviewed the conflict of interest policies and procedures in place at each of DOD’s 10 FFRDCs. DOD’s review addressed FFRDC sponsoring agreements, contracts, and internal policies and procedures. DOD concluded that some of these documents failed to meet minimum FAR requirements and that others needed revision to better protect DOD from conflicts of interest by FFRDC employees. As a result, in January 2007, the Undersecretary of Defense (Acquisition, Technology, and Logistics) revised DOD’s policy, adding stricter contracting safeguards that require FFRDC contractors to have procedures addressing personal conflicts of interest for FFRDC employees. DOD revised the policy to ensure that FFRDC employees operate in the public interest with objectivity and independence.
DOD’s revised policy requires, in part, that each administrator of its FFRDCs do the following:

- maintain written, corporatewide conflict of interest policies for their employees;
- report any personal conflicts of interest to contracting officers or their representatives;
- provide annual compilations of personal conflicts of interest and their dispositions;
- maintain audit programs to verify compliance;
- establish policies for their employees that address all major areas of personal conflicts of interest, including but not necessarily limited to gifts, outside activities, and financial interests;
- set procedures to screen for potential conflicts of interest for all employees in a position to make or materially influence research findings and/or recommendations to DOD;
- provide initial and annual training addressing ethics and conflicts of interest for affected employees; and
- designate an office responsible for ethics compliance and training.

All four FFRDC administrators that we contacted for this report had written corporatewide ethics policies and training for their employees before DOD’s new policy. According to FFRDC administrator officials, three of the FFRDCs have updated their ethics compliance programs and policies, including their training programs, and are in compliance with the new requirements. As of October 2007, the fourth FFRDC we contacted had yet to reach agreement with its Air Force sponsor organization on whether additional safeguards are necessary. Among the three FFRDCs that have already changed practices to implement the revised DOD-wide policy, there were some differences in how they changed their procedures to screen for potential conflicts of interest for all employees in a position to make or materially influence research findings and/or recommendations to DOD.
For example, according to FFRDC administrator officials: Two of the FFRDCs are implementing new requirements for their employees to complete an online personal conflict of interest screening questionnaire as part of their initial assignment to a DOD-sponsored task. The online screening tool prompts these FFRDC employees, on a task-by-task basis, to disclose and list any financial interests they, their spouses, or family members have in the specific DOD prime contractors and subcontractors preloaded in the FFRDC database for each defense weapon system or DOD program being researched or advised on under the FFRDC project. According to ethics officials for these two FFRDCs, any financial interests disclosed through the online tool are reviewed by the employee’s project manager or supervisor as well as the ethics office to identify actual or potential conflicts of interest, which would then be mitigated in ways similar to the practices for federal employees. Instead of task-by-task screening, a third FFRDC’s procedures require all employees (except project directors) working on DOD tasks to submit annual disclosures identifying personal and family financial interests for review by supervisors and ethics offices to screen for actual or potential conflicts of interest in the employees’ tasks for DOD. Project directors are now required to submit financial disclosures task by task. A majority of government officials we spoke with indicated support for changes in contracting policy to address risks from contractor employees having personal conflicts of interest when participating in matters affecting DOD’s decisions. Those closest to the situation, DOD program managers, all agreed that safeguards are needed for contractor employees participating in the source selection process. Moreover, some of these managers had also put in place safeguards for contractor employees involved in other types of advisory and assistance tasks.
However, a number of program managers as well as defense contractor company officials expressed concern that adding new safeguards would increase costs for the government and argued that such safeguards are unnecessary because government officials, not contractors, ultimately make the decisions. DOD oversight officials as well as OGE officials, however, believed additional safeguards are necessary to maintain public confidence, particularly since contractors are increasingly involved in spending decisions, though in their view this could be achieved through changes in policy, practice, and regulations rather than changes in the law. A congressionally mandated Acquisition Advisory Panel recently concluded that there is a need to assure that the increase in contractor employees’ involvement in agency activities does not undermine the integrity of the government’s decision-making process and that changes in the FAR should be considered to establish additional conflict of interest safeguards across agencies through contract clauses. All 19 of the offices we reviewed that involved contractor employees in the source selection process had established safeguard procedures, such as contract clauses or self-certifications, to prevent conflicts of interest. At the same time, six offices had safeguards for contractors performing other types of advisory and assistance tasks. For example, the Army’s Communications Electronics Lifecycle Management Command and the Air Force’s Electronic Systems Center have recognized the need to prevent conflicting interests that might bias a contractor employee’s judgment and have developed contract clauses for other types of contractor employees who directly advise and assist federal decision makers in those organizations.
In addition, some DOD program managers said they should require certain contractor employees to file financial disclosures with their companies so that the companies can screen for potential personal conflicts of interest in the work the employees do at DOD. However, when it comes to using contractor employees to perform tasks other than source selection, some program managers believed that additional safeguards are unnecessary; in fact, some believed such safeguards could create a cost and oversight burden. These managers also stressed that government officials, not contractor employees, are ultimately responsible for decision making. When we asked DOD officials to tell us about cases of improper conduct involving contractor employees, some officials pointed out that very few cases of actual conflicts of interest or other ethics problems involving contractor employees have been publicly identified and that in most of these cases the situations were handled informally. They were also concerned that requiring contractor employees to abide by certain safeguards, such as submitting financial disclosure forms for government or contractor ethics review processes, could chase qualified contractors away from federal work. We spoke to various company representatives responsible for managing their companies’ contracting business and/or employee ethics matters at the 21 DOD offices where we conducted our review. Some of the contractor company officials told us that they believed additional safeguards are not needed because their employees are aware that personal conflicts of interest are prohibited under their corporate ethics programs. Moreover, they pointed out that their employees would know to advise their supervisors of any potential conflicts, consistent with the companies’ ethics program procedures.
Contractor officials also contended that creating new safeguards could drive up costs for the government because of the administrative expense of collecting and maintaining employee financial conflict of interest paperwork. Company officials cited other reasons for not establishing additional conflict of interest safeguards for contractor employees, but we found evidence that contradicted these positions. One stated reason was that defense contractor companies’ business ethics and standards of conduct for their employees are already consistent with the government’s ethics requirements for federal employees. Our review of 22 contractors’ ethics program documents, however, found that they did not address the same issues that government ethics programs are required to address; in some cases, they were designed to protect the contractor’s interests, not the government’s. Another stated reason was that the risk of conflicts of interest for contractor employees was low because it would be obvious if these employees tried to steer decision making to favor a personal interest or bias. Some contractor officials also stressed that the role their employees play in decisions is minimal. Some government officials we spoke with, however, indicated that these types of inputs into decisions are not trivial and that it may not always be obvious when employees are providing biased information. On the other hand, many company officials told us that if the federal government were to require contractor employees to submit financial conflict of interest certifications or disclosures, the companies would comply in the interest of maintaining the public’s confidence in the integrity of government operations supported by contractor employees.
In addition, a manager of one small business defense contractor said that the company’s personal conflict of interest and ethical conduct policy already requires all employees to submit annual financial conflict of interest reporting forms when assigned to perform government work. He added that most support contractor employees are retired military and have been accustomed to abiding by government rules for 30 years. Senior officials within DOD responsible for ensuring integrity in employee conduct and in the contracting function, as well as OGE officials, told us that they believed there are risks associated with personal conflicts of interest not just in program offices that involve contractors in source selection, but also in those that use contractors in other ways to support spending decisions. In fact, during our review, DOD undertook steps to begin assessing the need for departmentwide policies to prevent personal conflicts of interest for its contractor employees. OGE: OGE, which promulgates ethics guidance for the executive branch, has expressed concern that current federal requirements and policies are inadequate to prevent certain kinds of ethical violations on the part of contractor employees. The office is specifically concerned with potential financial conflicts of interest, impaired impartiality, and misuse of information and authority. In OGE’s view, additional conflict of interest safeguards should be targeted at contractor employees engaged in the types of services that influence governmental spending, contracting, and mission delivery decisions and that concern the kinds of processes and operations on which considerations of management and delegation must turn.
OGE has also observed that federal and contractor employees work so closely on a day-to-day basis that it is difficult to distinguish government employees from contractor employees, and it sees greater risks to the integrity of decisions given the growing influence that contractors appear to have on government operations and the expenditure of funds. OGE has advocated policy changes to apply conflict of interest requirements to contractors. In considering additional contract-based safeguards to ensure that the government’s interests are not compromised by contractor employees’ conflicts of interest, OGE’s acting director has expressed concerns in several areas, such as: advisory and assistance services, especially those in which contractor personnel regularly perform in the government workplace and participate in deliberative and decision-making processes along with government employees; management and operations contracts involving large research facilities and laboratories, military bases, and other major programs; and large indefinite-delivery or umbrella contracts that involve decentralized ordering and delivery of services at multiple agencies or offices. DOD ethics and general counsel officials: Defense ethics and other general counsel officials we spoke to from several DOD offices responsible for DOD-wide standards of conduct and ethics compliance generally shared the concerns raised by OGE. For example, according to the director of DOD’s Standards of Conduct Office in 2006, the more DOD integrates contractor employees into the actual administration of its organizations and offices, the larger the gap within its blended workforce, because conflict of interest requirements apply only to federal employees, and the more difficult it becomes to ensure the integrity of government decision making.
DOD ethics and general counsel officials also expressed concerns about the risks associated with reliance on contractor employees, particularly when they perform many of the same advisory and management functions as federal employees. An Army general counsel official observed that contractor employees are exerting greater influence over Army operations because the Army has lost expertise and leadership over the years. Further, DOD ethics and general counsel officials stated that contractor employees participating in and supporting the government’s decision-making processes should be subject to stricter conflict of interest rules so that agencies can better judge the objectivity of their advice to the government. An Air Force ethics official said his office has come across situations in which contractor employees would have been in violation of the government ethics rules had they been government employees. An Army general counsel official told us that not requiring financial disclosure statements from contractor employees poses the greatest risk to the integrity and impartiality of the work they perform under contract for the government. Whereas federal employees are prohibited under conflict of interest law from participating in a particular matter involving specific parties in which they have a financial interest, there is no way to know whether contractor employees are doing so. In 2006, the director of DOD’s Standards of Conduct Office, offering his personal views at a public policy discussion on contractor ethics, identified one approach for applying conflict of interest rules to contractor employees: the FAR Council could consider model language, or an instruction to government agencies, for including conflict of interest provisions in contracts.
DOD’s Directorate of Defense Procurement and Acquisition Policy (DPAP): The DPAP director concurred with the views of ethics officials across the department and recently directed a DOD panel examining various aspects of contracting integrity to specifically examine the need for departmentwide policies to prevent conflicts of interest among contractor employees. The Panel on Contracting Integrity is composed of senior leaders representing a cross section of DOD. The director told us that existing policy may be inadequate given the growing reliance on contractor employees across DOD program offices; he was specifically concerned with contractors involved in source selection and contract management. Acquisition Advisory Panel: This panel, composed of recognized experts in government acquisition law and policy, was established by congressional mandate to examine and report on ways to improve federal acquisition practices. According to the panel, the trend toward greater reliance on contractors in the federal workplace raises the possibility that the government’s decision making could be undermined as a result of personal conflicts of interest on the part of contractor employees. The panel concluded that, in view of the tremendous amount of federal contracting for services, and particularly in the context of the blended workforce, additional safeguards to protect against personal conflicts of interest by contractor employees are needed. The panel believed that conflict of interest safeguards are more critical for certain types of contracts (primarily services contracts) and that further study was needed to identify the types of contracts for which the potential for contractor employee conflicts of interest raises a concern.
The panel believed that achieving greater governmentwide consistency in safeguarding against contractor employees' conflicts of interest would be beneficial: it would allow agencies to implement best practices, and it would also help to assure that all bidders on federal contracts—whether successful or not—are aware of their responsibilities and structure their operations knowing what is expected of them. The panel concluded that it was not necessary to adopt any new federal statutes to impose additional conflict of interest safeguards on contractors or their employees. Rather, the additional safeguard requirements should be imposed—where appropriate—through contract clauses. As a result, the panel recommended to the Office of Federal Procurement Policy (OFPP) that the FAR Council should determine when contractor employee personal conflicts of interest need to be addressed, and whether greater disclosures, specific prohibitions, or reliance on specified principles are needed to maintain public confidence in the integrity of government operations reliant on contractors. The panel recommended that the FAR Council consider whether a standard ethics clause, or a set of standard clauses establishing the contractor's responsibility to perform the contract with a high level of integrity, should be included in solicitations and contracts. According to OFPP officials, the FAR Council was asked to initiate a case review process to consider changes to the FAR to include new conflict of interest safeguards for contractor employees. This anticipated action would be separate from the November 2007 amendment to the FAR requiring certain contractors to have written codes of ethics and business conduct, employee ethics and compliance training programs, and internal control systems to guard against violation of these codes.
The final rule does not speak to development of a standard ethics clause concerning when contractor employee personal conflicts of interest need to be addressed.

The environment in which DOD makes its most significant spending decisions is changing. As programs grow more complex and costly, DOD has become increasingly reliant on technical, business, and procurement expertise supplied by contractors—sometimes to a point where the foundation on which decisions are based may be largely crafted by individuals who are not employed by the government, who are not bound by the same rules governing their conduct, and who are not required to disclose whether they have financial or other personal interests that conflict with the responsibilities they perform under contract for DOD. To its credit, DOD has recognized that this condition and its risks need to be studied and addressed, adding personal conflicts of interest among contractor employees as a tasking for its Panel on Contracting Integrity and adopting stricter safeguards for FFRDC employees early in 2007. Such attention is important. While few cases of improper conduct have been publicly identified, there are also few safeguards in place to identify whether personal conflicts of interest even exist. The new FAR requirements making it mandatory for certain contractors to set and follow written codes of business ethics and conduct will not assure that the advice and assistance received from contractor employees is untainted by personal conflicts of interest. The officials in most offices we reviewed that operate within this environment believe that the risk to the government is considerable enough to warrant safeguards when contractors are involved in source selection; at least some believe that risk extends to contractors involved in other activities that feed into spending decisions. Arguments that no change is needed focus on costs, which may be calculable.
Yet, the costs of contractor employees constructing options for their personal gain—an outcome increasingly likely based on sheer numbers—would likely never be known, let alone calculable, as long as there is no transparency. Changes to current policy and practices that are targeted, tailored, and implemented at the lowest practicable level are a way to minimize the cost of addressing personal conflicts of interest among contractor employees and to maximize the value of any additional safeguards. Several program offices have already demonstrated this is possible through the use of contract clauses and processes to identify potential conflicts, and at least one small company has adopted similar safeguards on its own.

We recommend that the Secretary of Defense direct the Under Secretary of Defense (Acquisition, Technology, and Logistics) to develop and implement policy that requires personal conflict of interest contract clause safeguards for defense contractor employees similar to those required of DOD's federal employees. In developing its policy, DOD should include requirements for contractor companies to identify and prevent personal conflicts of interest for those of their contractor employees who are performing contracted services that provide inputs to DOD's decision-making in such mission-critical areas as the development, award, and administration of government contracts and other advisory and assistance functions.
In developing its policy, DOD should include the following requirements for defense contractor companies: require a written code of business ethics and conduct, applicable to contractor personnel working on certain DOD mission-critical advisory and assistance type services, that would:
- prohibit contractor personnel from participating in a government contract in which they have a personal conflict of interest;
- require contractor personnel to avoid the appearance of loss of impartiality in performing contracted duties for DOD;
- require contractor personnel to disclose personal conflicts of interest to their employer prior to beginning work on these contracts;
- require the contractor to review and address any personal conflicts of interest its employees might have before assigning them to deliver contracted services;
- prohibit contractor personnel from using nonpublic government information obtained while performing work under the contract for personal gain;
- prohibit contractor employees providing procurement support services from having future employment contact involving a bidder in an ongoing procurement;
- impose limits on the ability of contractors and their employees to accept gifts (defined as almost anything of monetary value, such as cash, meals, trips, or services) in connection with contracted duties; and
- prohibit misuse of DOD contract duties to provide preferential treatment to a private interest.

In developing its policy, DOD should also include requirements for contractor companies to: Report any contractor personnel conflict of interest violations to the applicable contracting officer or contracting officer's representative as soon as they are identified.
Maintain effective oversight to verify compliance with personal conflict of interest safeguards, and have procedures in place to screen for potential conflicts of interest for all employees in a position to make or materially influence findings, recommendations, and decisions regarding DOD contracts and other advisory and assistance functions. This screening can be done on a task-by-task basis or annually, such as through a financial disclosure statement.

We provided a draft of this report to DOD and OGE for comment. The DPAP director wrote that DOD partially concurred with the recommendations. Specifically, he wrote that he agrees with their intent and that each of our recommendations will be carefully reviewed by the Panel on Contracting Integrity's subcommittee on Contractor Employee Conflicts of Interest. According to the DPAP director, this subcommittee was established to respond to the concerns and recommendations voiced in the exit conference for our work. DOD's comments are reproduced in appendix II. In providing comments, OGE's Director commended the draft report for breaking important new ground by providing data regarding the ethical implications of contractors in the federal workplace. OGE offered a few comments on our recommendations that should help DOD as it begins its efforts to address how best to implement them. Also, OGE offered its expertise to assist DOD in developing its policy in response to our recommendations regarding the scope of personal conflicts of interest and other ethics requirements that would be appropriate for contractor employees in comparison to federal employees. OGE's comments are reproduced in appendix III.

We are sending copies of this report to the Secretary of Defense, the Director of the Office of Management and Budget, the Director of OGE, and other interested parties. We will make this report available to the public on the GAO Web site at http://www.gao.gov.
If you have any questions about this report or need additional information, please contact me at (202) 512-4841 or chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Carolyn Kirby, Assistant Director; Russ Reiter; Martene Bryan; Lily Chin; John Krump; Meredith Moore; Lillian Slodkowski; and Suzanne Sterling.

Our overall objective was to review existing safeguards to prevent contractor employees from having personal conflicts of interest that could affect the integrity of their service while performing tasks under their employer's contracts with DOD. Because DOD does not maintain departmentwide data on the numbers of contractor employees working side-by-side with federal employees, our first specific objective was to (1) assess the roles being played by certain contractor employees by identifying how many of them were working at the DOD offices included in this review and what responsibilities they were undertaking. Other specific objectives were to assess (2) what safeguards there are to prevent conflicts of interest for contractor employees and (3) whether government and contractor officials believe additional safeguards are necessary. We reviewed federal statutes and government ethics and federal acquisition regulations concerning personal conflicts of interest to assess their scope and applicability, focusing our analysis on conflict of interest laws and regulations that safeguard or promote the integrity of the government's decisions, approvals, disapprovals, and recommendations.
In addition, we reviewed information on personal conflict of interest requirements for federal employees versus contractor employees and interviewed officials from the Office of Government Ethics (OGE) and from several offices that administer the defense ethics program, including the Standards of Conduct Office in DOD's General Counsel (Office of the Secretary of Defense) and the Army, Navy, and Air Force general counsels for ethics. To determine what conflict of interest safeguards of its own DOD has for contractor employees, we reviewed DOD offices that used contractor employees to perform the types of tasks closely associated with inherently governmental functions and that influence government decision making. To obtain an understanding of the scope of DOD-wide safeguards, we reviewed the Defense Federal Acquisition Regulation Supplement (DFARS) to identify relevant contracting policies or contract clauses restricting contractor employees' participation in DOD matters involving personal conflicts of interest. To gain an understanding of the extent to which DOD offices use any DFARS policies or have augmented DFARS to establish local conflict of interest safeguards for contractor employees supporting their mission and operations, we visited and/or obtained information from 21 DOD offices in the Air Force, Army, Navy, Missile Defense Agency, and Tricare Management Activity. We judgmentally selected these DOD organizations and offices for review because they were cited by various DOD officials as having large contractor workforces and as representing a cross section of DOD organizations with a growing reliance on support contractors. Table 6 lists the specific DOD offices selected for our review.
Among the offices reviewed were:
- Terminal High-Altitude Area Defense Program Office
- Contracting Center of Excellence
- Agency Operations Office
- Aegis Ballistic Missile Defense Program
- Deterrence and Strike Division (A5MF)
- Surveillance, Reconnaissance and Spacelift Division (A5F)
- Joint Medical Information Systems Office
- Aeronautical Systems Center (303rd Air Wing)
- Aeronautical Systems Center (516th Air Wing)

Within these DOD organizations and 21 offices, we obtained information from and interviewed contracting officials, program managers, and other management officials. We also met with officials from DOD sponsoring organizations for federally funded research and development centers (FFRDC) within the office of the Undersecretary of Defense (Acquisition, Technology, and Logistics), Army, Air Force, and Navy. To obtain similar information from each of the 21 DOD offices, we interviewed these officials to obtain their views and supporting documentation using a structured set of questions covering several topics, including (1) the types of services being provided by contractor employees, (2) their concerns about the integrity of the information and advice being provided by the contractor employees with regard to personal conflicts of interest, and (3) what safeguards, such as contract clauses, the offices are using to ensure that the assistance and advice provided is not impaired by contractor employees' conflicts of interest. (See table 7, which summarizes the structured topics discussed with the DOD offices selected for this review.) In that regard, we also obtained and reviewed available contract clauses or other documented safeguards to prevent, identify, and mitigate contractor employees' personal conflict of interest problems.
We also interviewed defense procurement and acquisition policy, general counsel, and contractor oversight officials from the Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), Army, Navy, Air Force, Missile Defense Agency, DOD Procurement Fraud Working Group, Defense Contract Management Agency, and Defense Contract Audit Agency to obtain information and views on their oversight and monitoring of defense contractor ethics programs and contractor employees' conflict of interest issues. To identify safeguards that DOD contractors have implemented for their employees to avoid conflicts of interest, we met with and obtained documentation on ethics programs from 23 defense contractors and four FFRDC administrator organizations. We judgmentally selected these contractors and FFRDCs for review because they were cited by program managers in the 21 DOD offices we reviewed as having contractor employees who are performing contracted services for them. According to fiscal year 2006 contract award data, DOD obligated $8.0 billion for professional, administrative, and management support services contracts with the 23 contractors, which accounted for 25 percent of total dollars obligated that year for this category of services contracts. The contractors and FFRDCs we selected for review are listed in table 8. We reviewed examples of the contractors' statements of work or other documentation to identify the types of advice and assistance services being provided by their employees to the 21 DOD offices reviewed.
To obtain similar information for each of the contractors and FFRDCs, we interviewed their officials to obtain their views and supporting documentation using a structured set of questions covering several topics, including (1) DOD's reliance on contractor employees in terms of their numbers and responsibilities, (2) steps the contractors take to identify and mitigate their employees' conflicts of interest, and (3) views or concerns about the need for additional safeguards to ensure that the assistance and advice provided is not impaired by contractor employees' conflicts of interest. (See table 9, which summarizes the structured topics discussed with the contractors and FFRDCs selected for this review.) We also discussed and obtained documentation from the FFRDC administrators on changes in their ethics policies and procedures to address safeguards for employee conflict of interest problems in response to DOD's January 2007 revised policy for FFRDCs.

To determine if government and contractor officials believe additional safeguards are necessary for contractor employees, we used the results of the above discussions with officials from DOD organizations, including program managers and ethics and contracting officials. In addition, we used the results from the above discussions with officials at the 23 defense contractors and four FFRDCs included in our review. We also reviewed the report of the Acquisition Advisory Panel and met with OGE and OFPP officials to obtain information on actions being considered in response to the panel's recommendations related to personal conflict of interest safeguards for contractor employees. We also met with representatives of industry and other groups, including the Defense Industry Initiative on Business Ethics and Conduct, Professional Services Council, Ethics Resource Center, and members of the American Bar Association's Public Contract Law Section on Professional Responsibility and Contracting Ethics.
For the purposes of understanding how key defense contractor employees are used to perform mission-critical tasks that could influence DOD decisions similar to functions carried out by federal employees, we obtained information from 21 offices across the five DOD organizations we reviewed. We also reviewed examples of contracting documents, including statements of work and task orders. The contract documents described a range of services that closely support inherently governmental functions, such as developing briefings, preparing contracts, proposing award fee amounts for contractors, conducting systems engineering studies, analyzing technical issues, and providing financial management support. Most of the documents we reviewed described services requiring contractor employees to perform program management oversight duties and entailed providing these contractors with classified, business proprietary, and otherwise nonpublic information to perform duties closely associated with inherently governmental functions. Table 10 lists a range of professional and management services and support that contractor employees provided to the different DOD organizations we reviewed.

Under Air Force Electronic Systems Center and Army Communications Electronics Lifecycle Management Command policies affecting 4 of the 21 DOD offices we reviewed, we identified two examples of local contract clauses establishing conflict-of-interest safeguards for contractor employees performing advisory and assistance services tasks and other support services. To illustrate the scope and breadth of these local contract clauses for addressing contractor employees' personal conflicts of interest, the clauses are reproduced in their entirety.
With its annual budget of about $3 billion, the mission of the Electronic Systems Center is to develop, acquire, modernize, and integrate command and control, intelligence, surveillance and reconnaissance capabilities, as well as combat support information technology systems. According to the center, advisory and assistance services contractor employees comprise a substantial portion of its workforce helping to execute this mission. And, according to the center’s law division, although these contractor employees cannot perform inherently governmental functions, they do provide essential technical and business advice and expertise that may be highly influential in decision making by government employees. Given the close relationship between Air Force decision-makers and federal employee advisors at the center—who are both required to identify and avoid financial conflicts—and contractor employees directly advising them in these roles, the clause (as shown in table 11), which has been used for advisory and assistance services contracts for at least 10 years, provides a mechanism to address potential and actual contractor financial conflicts that could affect the integrity of the procurement system. According to the center, the clause places an obligation on the part of the contractor to monitor for personal financial conflicts of interest and maintain its own disclosure records. The center does not routinely monitor or review these records, but relies on a self-certification model, consistent with its treatment of similar requirements in such contracts. In August 2007, the chief of the command’s Acquisition Process Change Group distributed a policy applicable to all of the command’s contracting activities to establish personal conflict of interest safeguards to be addressed for contractor employees as part of current contracting procedures for identifying, evaluating, and resolving organizational conflicts of interest. 
The underlying principles behind the revised policy are preventing the existence of conflicting roles that might bias a contractor's judgment and preventing unfair competitive advantage. According to the command's policy, conflicts of interest are more likely to occur in support services contracts involving: management support services; consultant or other professional services; contractor performance of or assistance in technical evaluations; preparing specifications or work statements; and systems engineering and technical direction work performed by a contractor that does not have overall contractual responsibility for development or production. In the acquisition planning process for all support services, the contracting officer is required to use local clause HS6001, Organizational Conflict of Interest, in the solicitation and contract (see table 12). As a condition of award, the contractor is required to have its employees and subcontractors who will perform work on the task execute the Contractor-Employee Personal Financial Interest/Protection of Sensitive Information Agreement, to maintain copies of those agreements, and to provide them to the contracting officer upon request.

Many defense contractor employees work side-by-side with federal employees in Department of Defense (DOD) facilities performing substantially the same tasks affecting billions in DOD spending. Given concerns with protecting the integrity of DOD operations, GAO was asked to assess (1) how many contractor employees work in DOD offices and what type of mission-critical contracted services they perform, (2) what safeguards there are to prevent personal conflicts of interest for contractor employees when performing DOD's tasks, and (3) whether government and defense contractor officials believe additional safeguards are necessary. GAO reviewed conflicts of interest laws and policies and interviewed ethics officials and senior leaders regarding applicability to DOD federal and contractor employees.
GAO judgmentally selected and interviewed officials at 21 DOD offices with large contractor workforces, and 23 of their contractors. Indications are that significant numbers of defense contractor employees work alongside DOD employees in the 21 DOD offices GAO reviewed. At 15 offices, contractor employees outnumbered DOD employees and comprised up to 88 percent of the workforce. Contractor employees perform key tasks, including developing contract requirements and advising on award fees for other contractors. In contrast to federal employees, few government ethics laws and DOD-wide policies are in place to prevent personal conflicts of interest for defense contractor employees. Several laws and regulations address personal conflicts of interest, but just one applies to both federal and contractor employees. Some DOD offices and defense contractor companies are voluntarily adopting safeguards. For example, realizing the risk from personal conflicts of interest for particularly sensitive areas, the 19 DOD offices GAO reviewed that used contractor employees in the source selection process all use safeguards such as contract clauses that prohibit contractor employees' participation in a DOD procurement affecting a personal financial interest. In certain other tasks, only 3 of the 23 defense contractors GAO reviewed had safeguards requiring employees to identify potential conflicts of interest so they can be mitigated. In general, government officials believed that current requirements are inadequate to prevent conflicts from arising for certain contractor employees influencing DOD decisions, especially financial conflicts of interest and impaired impartiality. Some program managers and defense contractor officials expressed concern that adding new safeguards will increase costs. 
But ethics officials and senior leaders countered that, given the risk associated with personal conflicts of interest and the expanding roles that contractor employees play, such safeguards are necessary. |
Real-estate taxes in the United States are levied by a number of different taxing authorities, including state and local governments, but mostly by local governments. Local governments, such as counties, can levy and collect taxes on behalf of smaller jurisdictions within their boundaries. For example, a county could collect real-estate taxes on behalf of a city within the county. In 2006, local-government property tax revenue was about $347 billion, compared to about $12 billion for state-government property tax revenue. Local governments can use property tax revenues to fund local services, such as road maintenance and law enforcement. In 2006, property taxes made up an average of 45 percent of general own-source revenue for local governments nationwide. According to the Congressional Research Service, the real-estate tax deduction was the most frequently itemized federal income tax deduction claimed by individual taxpayers from 1998 through 2006; the deduction was claimed on approximately 31 percent of all individual tax returns, and on about 87 percent of all returns with itemized deductions. The real-estate tax deduction provides a benefit to homeowners and an indirect federal subsidy to local governments that levy this and other deductible taxes, since it decreases the net cost of the tax to taxpayers. Deductible real-estate taxes also may encourage local governments to impose higher taxes, which may allow them to provide more services than they otherwise would without the deduction. In 2006, individual taxpayers claimed about $156 billion in real-estate taxes as an itemized deduction. By allowing taxpayers to deduct qualified real-estate taxes, the federal government forfeits tax revenues that it could otherwise collect. Taxpayers can claim paid real-estate taxes as an itemized deduction on Schedule A of the federal income tax return for individuals.
In addition, the Housing and Economic Recovery Act of 2008, signed July 30, 2008, included a provision that allowed non-itemizers to deduct up to $500 ($1,000 for joint filers) in real-estate taxes paid for tax year 2008. Taxpayers can also deduct paid real-estate taxes on other parts of the tax return, including as part of a deduction for a home office or in calculating net income from rental properties. For purposes of this report, references to the real-estate tax deduction mean the itemized deduction on Schedule A. Taxpayers may deduct state, local, and foreign real-property taxes from their federal tax returns if certain conditions are met. Taxpayers may only deduct real-estate property taxes paid or accrued in the taxable year. To be deductible, real-estate taxes must be imposed on an interest in real property. Taxes based on the value of property are known as ad valorem taxes. Further, real-estate taxes are only deductible when they are levied for the general public welfare by the proper taxing authority at a like rate against all property in the jurisdiction. Real-estate-related charges for services are not deductible. Examples of such charges for services include unit fees for water usage or trash collection. In addition, taxpayers may not deduct taxes assessed against local benefits of a kind tending to increase the value of their property. Such local benefit taxes include assessments for streets, sidewalks, and similar improvements. However, local benefit taxes can be deductible if they are for the purpose of maintenance and repair of such benefits or related interest charges. IRS estimates that on income tax returns for 2001, all overstated deductions taken together resulted in $14 billion in tax loss. IRS estimated the amount of misreporting of deductions, but did not estimate the resulting tax loss for each deduction.
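The deductibility conditions described above amount, in effect, to a line-item classification of a real-estate tax bill. As a minimal illustrative sketch only — the category names, amounts, and the `split_bill` helper are hypothetical assumptions for this example, not drawn from any actual bill or from IRS guidance — a taxpayer's task can be modeled as:

```python
# Hypothetical sketch of the deductibility rules described above.
# Category names and example charges are illustrative, not IRS guidance.

# Generally deductible: ad valorem taxes levied for general public welfare,
# and local benefit assessments used only for maintenance and repair.
DEDUCTIBLE = {"ad_valorem_tax", "local_benefit_maintenance"}
# Generally nondeductible: itemized charges for services and local benefit
# assessments that tend to increase the property's value.
NONDEDUCTIBLE = {"service_fee", "local_benefit_improvement"}

def split_bill(line_items):
    """Partition a bill's line items into (deductible, nondeductible) totals."""
    deductible = sum(amt for cat, amt in line_items if cat in DEDUCTIBLE)
    nondeductible = sum(amt for cat, amt in line_items if cat in NONDEDUCTIBLE)
    return deductible, nondeductible

bill = [
    ("ad_valorem_tax", 1800.00),           # general levy on assessed value
    ("service_fee", 120.00),               # trash collection: charge for service
    ("local_benefit_improvement", 300.00), # sidewalk assessment: raises value
]
print(split_bill(bill))  # (1800.0, 420.0)
```

A taxpayer who simply claims the bill's grand total (here, $2,220) rather than the deductible portion ($1,800) overstates the deduction by the nondeductible $420 — the kind of error this report examines.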
However, according to data from IRS's National Research Program, which is designed to measure individual taxpayer reporting compliance, in 2001 about 5.5 million taxpayers overstated their real-estate tax deductions, which resulted in a total overstatement of about $5.0 billion. The median overstatement was $436, or about 23 percent of the median claimed deduction amount of $1,915. We estimate that 38.8 million taxpayers claimed this deduction in 2001. While about 5.5 million taxpayers overstated their deductions, about 3.3 million understated their deductions. Taken as a whole, about 8.8 million taxpayers on average overstated their deductions by about $85 each, which resulted in a net total overstatement of about $2.5 billion.

Taxpayers can overstate or understate their real-estate tax deductions in a number of ways. For example, they can overstate their deduction by not meeting such eligibility requirements as property ownership and payment during the tax year, or by inappropriately deducting the same taxes on multiple parts of the income tax return. Taxpayers can also overstate by claiming real-estate tax-related amounts, such as local benefit taxes and itemized charges for services, which, as noted earlier, are not deductible. Taxpayers can also understate their real-estate tax deduction. For example, first-time homeowners may understate this deduction because they are not aware that they are entitled to claim it. Similarly, taxpayers who buy and sell a home in the same year could understate this deduction out of confusion over how much in taxes they can deduct for the old and new homes.

Our 1993 report found that a majority of the local real-estate tax bills that we reviewed included nondeductible items, such as service charges, in addition to deductible real-estate taxes.
Our report also indicated that local governments had increased their use of service charges in reaction to events that had reduced their revenues, such as local laws that restricted growth in real-estate taxes. By increasing user fees to finance services, local governments could keep their tax rates lower. We also reported that some local jurisdictions did not clearly indicate nondeductible items on real-estate tax bills and combined all types of payments (e.g., deductible and nondeductible real-estate taxes) into a total amount, which may lead taxpayers to claim this total amount on the bill as deductible and thereby overstate their deduction.

Most taxpayers rely upon either paid preparers or tax software to file their tax returns. Recent estimates indicate that about 75 percent of taxpayers used either a paid preparer (59 percent) or tax software (16 percent) to file their 2007 taxes. Any evaluation of the factors that contribute to taxpayers overstating the real-estate tax deduction would need to take paid preparers and tax software into consideration.

To describe factors that contribute to the inclusion of nondeductible charges in real-estate tax deductions, we conducted a number of analyses and spoke with various external stakeholders, as follows. To determine what information local governments report on real-estate tax bills relating to federal deductibility, we surveyed a generalizable sample of over 1,700 local governments. We also reviewed about 500 local-government real-estate tax bills provided to us by survey respondents. We also interviewed officials with organizations representing local governments, including the National Association of Counties; the National Association of County Collectors, Treasurers, and Financial Officers; and the Government Finance Officers Association.
To determine what mortgage servicers report on mortgage documents, we interviewed representatives from the mortgage industry from the Consumer Mortgage Coalition, the Mortgage Bankers Association, and the three largest mortgage servicing companies in 2007. We reviewed three IRS publications for tax year 2007 that provide guidance to individual taxpayers claiming the real-estate tax deduction as an itemized deduction on their federal income tax returns: the instructions for IRS Form 1040, Schedule A, the form and schedule where taxpayers can deduct real-estate taxes and other items from their taxable income; IRS Publication 17, which provides information for individuals on general rules for filing a federal income tax return; and IRS Publication 530, a guide for homeowners. We checked whether each of these publications explained the factors that taxpayers need to consider in determining deductibility and guided taxpayers on where they could obtain additional information necessary for determining deductibility. To determine the extent that tax-preparation software and paid professional tax preparers assisted taxpayers in only claiming deductible real-estate taxes, we reviewed online software versions of the three largest tax-preparation software programs in 2008—TaxAct, TaxCut, and TurboTax—and interviewed representatives from those three companies and representatives from the National Association of Enrolled Agents. We used the results of our survey of over 1,700 governments to determine the extent to which local governments send real-estate tax bills with certain generally nondeductible charges. To get an indication of the extent to which taxpayers may be overstating their real-estate tax deductions by including such nondeductible charges, we conducted case studies on five large local governments, collecting and analyzing tax data from them and IRS. 
Specifically, we worked with IRS to determine which charges on the five local governments’ tax bills were likely deductible. While conducting these five case studies of taxpayer noncompliance in claiming the real-estate tax deduction, we identified challenges in determining what charges qualify as deductible real-estate taxes. Then, to the extent possible, for two jurisdictions we compared the amounts that were likely deductible to the amounts the taxpayers claimed as deductions on Schedule A of their 2006 federal tax returns. Appendix III provides details on the methodology for this case study, including jurisdiction selection. To describe the extent that IRS examinations of the real-estate tax deduction focus on potential overstatements due to taxpayer inclusion of nondeductible charges, we reviewed IRS guidance for examiners related to the real-estate tax deduction, and interviewed IRS examiners about the standard procedures and methods they use for auditing this deduction. We reviewed guidance in the Internal Revenue Manual, which serves as the handbook for IRS examiners, to determine how clearly it instructs examiners to verify the deductibility of charges on real-estate bills when auditing this deduction. Our interviews with IRS examiners focused on the extent to which examiners determine the deductibility of charges on real-estate bills when auditing this deduction, challenges faced by examiners auditing this deduction, and whether examiners have information about local jurisdictions with large nondeductible charges on their real-estate tax bills. The examiners we interviewed included examiners and managers based in IRS offices across the United States. To assess possible options for improving voluntary taxpayer compliance with the real-estate tax deduction, we interviewed members of organizations representing local governments and IRS officials about potential options.
We also identified potential options along with their benefits and trade-offs based on our other work for this report. We conducted this performance audit from October 2007 through May 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Local governments generally do not inform taxpayers what charges on real-estate tax bills qualify as deductible real-estate taxes, which creates a challenge for taxpayers attempting to determine what they can deduct. Groups representing local governments told us that local governments do not identify on real-estate tax bills which charges are deductible, and our review of almost 500 real-estate tax bills supplied by local governments supports this. In our review, we found no instances where the local government indicated on the bill what amounts were deductible for federal real-estate tax purposes. Furthermore, while IRS requires various entities to provide information about relevant federal tax items to both taxpayers and IRS on statements known as information returns, local governments are not required to provide information returns on real-estate taxes paid. Local government groups told us that local governments do not identify what taxes are deductible because they cannot easily determine whether their charges meet federal deductibility requirements. They said that local government tax collectors do not have the background or expertise to determine what items are deductible according to federal income-tax law and may lack information necessary for making such determinations for charges billed on behalf of another taxing jurisdiction. 
As a result, local governments did not want to make such determinations. Taxpayers with mortgages may also receive information about real-estate tax bill charges paid on their behalf by mortgage servicers, but this information generally does not identify what taxes can be deducted. To protect a mortgage holder’s interest in a mortgaged property, mortgage servicers often collect funds from property owners whose mortgages they service (borrowers) and hold them in escrow accounts. They then draw from the funds to pay real-estate taxes and related charges on the properties as they are due. Mortgage servicers provide borrowers with annual statements summarizing these and other deposits and withdrawals of escrow account funds. In addition, mortgage servicers have the option of reporting such escrow payments on information returns relating to paid mortgage interest, but can choose to report other information instead. Mortgage industry representatives we spoke with stated that when reporting escrow payments, mortgage servicers usually report the total amount paid at any given time to local governments from escrow accounts and do not itemize the specific types of charges paid for, regardless of the statement used. As a result, any nondeductible charges paid for would be embedded in this payment total and reported as “property taxes” or “real-estate taxes” on mortgage servicer documents, including IRS forms. According to mortgage industry representatives, mortgage servicers only report a total because most only track and receive information on the total payment amount due. Mortgage servicers are interested in total amounts because local governments can place a lien on a mortgaged property if all billed charges are not paid. In addition, not all mortgage servicers receive detailed information about charges.
Our survey of local governments on real-estate tax billing practices showed that an estimated 43 percent of local governments provide mortgage companies with only total amounts owed for requested properties. That annual mortgage statements report only totals is significant because not all property owners receive tax bills. Based on responses to our local government survey, an estimated 25 percent of local governments do not send property owners a copy of their tax bill if the taxpayer escrows their taxes through a mortgage company. Even though real-estate tax bills do not indicate what charges are deductible, tax bills can contain information on the types of charges assessed on a property, which is a starting point for taxpayers in determining what they can deduct. In the absence of information identifying deductible real-estate taxes, determining whether certain amounts on the tax bills are deductible can be complex and require significant effort. Taxpayers generally cannot be assured that their real-estate tax bill has enough information to determine which of the charges listed are deductible for federal purposes. Deductible real-estate taxes are any state, local, or foreign taxes on real property levied for the general public welfare by the proper taxing authority at a like rate against all property in the jurisdiction. Charges for services and charges for improvements tending to increase the value of one’s property are generally not deductible. However, even if a real-estate tax bill labels a charge as a “tax” or “for services,” the designation given by a local government does not determine whether a charge on a real-estate tax bill is deductible. 
For example, a charge that is labeled a tax on a local real-estate tax bill, but is not used for public or governmental purposes such as police or fire protection, likely would not be deductible; whereas a charge that is labeled a fee could be considered a deductible tax if the charge is imposed at a uniform rate based on the value of the real estate and is used for the general public welfare. Complicating the matter is that local benefit taxes, which are generally not deductible, can be deductible if the revenue raised is used to maintain or repair existing improvements. Figure 1 depicts some of the questions that taxpayers need to be able to answer for each real-estate-tax-related charge they wish to deduct. Taxpayers who are unsure how to answer these questions (as well as others) with respect to a given charge cannot be assured of the charge’s deductibility. Because determining what qualifies as deductible can be complex, we asked IRS’s Office of Chief Counsel to help us determine the deductibility of amounts on tax bills in five large local governments as part of case studies on taxpayer compliance with the real-estate tax deduction. We asked attorneys in IRS’s Office of Chief Counsel what information they would need to determine whether charges that appear on real-estate tax bills in the jurisdictions were deductible. IRS’s Office of Chief Counsel indicated that it would need information on the questions indicated in figure 2. To provide IRS with this information, we searched local government Web sites for information on each charge that appeared on tax bills. We also interviewed local government officials, collected and analyzed additional documentation related to the charges, and identified sections of local statutes that provided the authority to impose the charges on the local tax bills. We compiled this information into summary documents that totaled over 120 pages across the five selected local governments.
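The questions taxpayers must answer amount to a small decision procedure. The sketch below is our own simplification of those criteria, with hypothetical field names; it is not IRS's actual determination process, which turns on facts that a simple rule chain cannot fully capture.

```python
def likely_deductible(charge):
    """Rough sketch of the deductibility questions for one charge on a
    real-estate tax bill. A simplification for illustration only."""
    # The label on the bill ("tax" vs. "fee") does not control the outcome.
    if charge.get("is_local_benefit_tax"):
        # Local benefit taxes are deductible only to the extent the revenue
        # maintains or repairs existing improvements.
        return charge.get("funds_maintenance_of_existing_improvements", False)
    if charge.get("is_charge_for_service"):
        # Itemized charges for services (e.g., trash pickup) are not deductible.
        return False
    # A deductible real-estate tax is levied at a like (uniform) rate on all
    # property in the jurisdiction and used for the general public welfare.
    return (charge.get("uniform_rate_on_all_property", False)
            and charge.get("for_general_public_welfare", False))

# Hypothetical example charges:
ad_valorem_tax = {"uniform_rate_on_all_property": True,
                  "for_general_public_welfare": True}
trash_fee = {"is_charge_for_service": True}
sidewalk_assessment = {"is_local_benefit_tax": True,
                       "funds_maintenance_of_existing_improvements": False}
```

Even this reduced form shows why taxpayers struggle: answering each question requires information about how a charge is computed and spent, which rarely appears on the bill itself.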
Despite this level of effort, the information was not sufficient to allow IRS to make a judgment as to the deductibility of all of the charges in the five selected jurisdictions. While local government officials we spoke with provided us with significant support in our research, some of the information we asked for was either unavailable or impractical to obtain due to format or volume. The main challenge we faced was that each of the five local governments had over 100 taxing districts—cities, townships, school districts, special districts, etc.—and gathering detailed information for each district, such as how each district calculates the rate it charges, was difficult and time-consuming. As a result, IRS attorneys were not able to make determinations on some charges in three of the five jurisdictions. Because individual real-estate tax bills in these jurisdictions would likely include only a subset of the amounts we researched, taxpayers in these jurisdictions would not necessarily need to apply the same total level of effort that we did. However, they would still face similar challenges in determining whether the amounts on their tax bills qualified as deductible. For example, one county official told us that not all charges are itemized on their tax bills and, as a result, it is nearly impossible for taxpayers in her county to find out the nature and purpose of the charges they are assessed. IRS instructions and guidance for taxpayers on claiming the real-estate tax deduction explain generally what taxpayers can deduct, but lack more specific information on how to comply. IRS instructions for claiming the real-estate tax deduction on the federal income-tax return for individuals explain that real-estate taxes are deductible if they are based on the value of property, they are assessed uniformly on property throughout the jurisdiction, and proceeds are used for general governmental purposes.
The instructions also indicate that per-unit charges for services and charges for improvements that tend to increase the value of one’s property are generally not deductible. The IRS general guide for individuals filing an income tax return and the IRS guide for first-time homeowners similarly explain what taxpayers can deduct, and also provide examples of nondeductible charges for services and local benefit taxes. However, these three IRS publications do not inform taxpayers that they should check both real-estate tax bills and available local government resources with information about the nature and purpose of specific charges. While the two IRS guides alert taxpayers that they should check real-estate tax bills, IRS’s instructions for deducting real-estate taxes are silent on what taxpayers need to check. None of the publications inform taxpayers that they may also need to consult local government Web sites, pamphlets, or other available documents with information about the nature and purpose of specific charges to determine what amounts qualify as deductible real-estate taxes. Without specific instruction to do otherwise, taxpayers could believe that they are getting deductible amounts from their mortgage servicer. Searching for more information may not be conclusive for all charges, but may be sufficient for determining the deductibility of many charges, as we found while examining charges in five local governments with IRS. Similarly, even though some taxpayers may be unable to determine the deductibility of a few charges on their tax bills after consulting available local government resources, they likely need such information on other charges to comply with requirements of the real-estate tax deduction. Taxpayers need to know that they may need to consult available local government resources because more information may be required before they can determine which charges they can deduct from their tax bill.
Tax-preparation software and assistance provided by paid tax preparers may not be sufficient to help ensure that taxpayers only deduct qualified real-estate taxes. At the time of our review, two of the three most frequently used tax-preparation software programs for 2008—TaxAct, TaxCut, and TurboTax—did not alert taxpayers to the fact that not all charges on real-estate tax bills may qualify as deductible real-estate taxes. The sections of these two programs where users entered real-estate taxes paid contained no alert to this effect. While all three of the programs contained information about what qualified as deductible real-estate taxes in various screens, users had to proactively click on buttons to access these sections to learn that some charges on their tax bill may not have been deductible. One software-program representative indicated that alerts need to be carefully tailored to have the intended effect. He cautioned that too much information can actually have undesirable effects that do not lead to improved compliance. Specifically, to the extent that they are not relevant to taxpayers whose bills do not contain nondeductible items, overly broad or irrelevant alerts can result in taxpayers reading less, thereby creating confusion, causing errors to be made, and unnecessarily increasing taxpayer burden by increasing the time and complexity involved in taxpayers preparing their returns. Nevertheless, software-program representatives we spoke with were receptive to potential improvements that could be made to their software programs. Prior to our review, none of the three largest software programs contained an alert informing users that not all items on real-estate tax bills may be deductible.
In addition, one of the three programs did not discuss the fact that deductible real-estate taxes are based on the assessed value of property and that charges for services and local benefit taxes are generally not deductible. In response to our discussions with them on these issues, all three tax software programs made changes to their programs. One program added an alert to users indicating that not all charges on real-estate tax bills may be deductible, and the other two programs added information about what qualifies as real-estate taxes or made such information more prominent in the guidance accessible from their sections on real-estate taxes. Paid preparers we spoke with indicated that they invested only limited time and energy making sure that taxpayers included only qualified real-estate taxes in their deductions. Most taxpayers do not understand that some charges assessed against a property may not be deductible, and often only provide preparers with mortgage interest statements or cancelled checks to local governments that contain only total payment amounts, making it difficult for the preparers to identify potentially nondeductible charges. Some preparers indicated that from their experience such charges are relatively small, and may have negligible impacts on a taxpayer’s tax liability, especially after other parts of the tax return are considered. As a result, even if they thought that clients may be claiming nondeductible charges, they often did not consider identifying such charges to be worth the effort. The paid preparers that we spoke with also indicated that more information from local governments or IRS on what taxes are deductible would be helpful in improving taxpayer compliance with the deduction. As mentioned earlier, deductible real-estate taxes are generally ad valorem or based on the assessed value of property.
We used the ad-valorem/non-ad-valorem distinction as a rough proxy to indicate potential deductibility in our survey of local governments’ real-estate billing practices. The ad-valorem/non-ad-valorem distinction is not a perfect indicator of deductibility, since, under certain circumstances, some ad-valorem charges could be nondeductible and some non-ad-valorem charges could be deductible. However, based on the information we provided, IRS’s Office of Chief Counsel determined that all non-ad-valorem charges in our case study jurisdictions were not deductible. We estimate that almost half of local governments nationwide included charges on their real-estate tax bills that were generally not deductible, based on responses to our survey. We surveyed a sample of over 1,700 local governments identified as collecting real-estate taxes and asked them whether their real-estate tax bills included non-ad-valorem charges, that is, charges that are not based on the value of property and therefore generally not deductible. Examples of such charges include fees for trash and garbage pickup. Based on responses, we estimate that 45 percent of local governments nationwide included such charges on their tax bills. The property taxes collected by local governments with non-ad-valorem charges on their bills represented an estimated 72 percent of the property taxes collected by local governments nationwide. Of the local governments surveyed that included non-ad-valorem charges on their bills, only 22 percent reported that they label such charges as non-ad valorem. As a result, even if taxpayers owning real estate in the other 78 percent of these locations review their tax bills, they may not be able to identify which charges, if any, are non-ad valorem and likely nondeductible.
In identifying how much taxpayers may have overstated their real-estate tax deductions by claiming nondeductible charges, we encountered data limitations that constrained our analysis and made it impossible to develop nationwide estimates of these overstatements. Some of the main limitations follow: The jurisdictions we selected did not maintain their tax data in a way that allowed us to itemize all of the charges on individuals’ tax bills. They also did not always maintain information on those charges necessary for IRS and us to determine deductibility. As a result, we were not able to account for all potentially nondeductible ad-valorem charges. As in our survey of local governments, we categorized all ad-valorem charges as deductible and all non-ad-valorem charges as nondeductible in identifying how much taxpayers overstated their real-estate tax deductions. The selected jurisdictions also did not track the real-estate tax liabilities and payments by individuals’ Social Security number (SSN), which is the unique identifier used in the IRS tax return data for each taxpayer. Consequently, we used available information—name, address, and zip code—to calculate for each taxpayer the total amount billed by the local government and compare the amount billed to the amount claimed as a real-estate tax deduction on Schedule A of the taxpayer’s return. This process was very time- and resource-intensive. We could not explicitly account for other income tax deductions or adjustments to income that could influence the amount taxpayers are eligible to claim on the Schedule A, such as the home-office deduction and rental real-estate income.
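In the absence of a shared identifier such as an SSN, the match between billing records and tax returns can be sketched as a join on normalized name and address. The record layout, field names, and normalization below are hypothetical simplifications for illustration; they are not the actual procedure we used.

```python
def normalize(s):
    """Crude text normalization for matching (uppercase, collapse spaces).
    A sketch only; real record linkage requires far more care."""
    return " ".join(s.upper().split())

def match_records(county_bills, irs_returns):
    """Join county billing records to IRS return records on
    (name, address, zip), summing multiple bills per taxpayer."""
    billed = {}
    for bill in county_bills:
        key = (normalize(bill["name"]), normalize(bill["address"]), bill["zip"])
        billed[key] = billed.get(key, 0.0) + bill["amount_billed"]
    matched = []
    for ret in irs_returns:
        key = (normalize(ret["name"]), normalize(ret["address"]), ret["zip"])
        if key in billed:
            matched.append({"claimed": ret["re_tax_claimed"],
                            "billed": billed[key]})
    return matched

# Hypothetical records: two bills for one property owner, one tax return.
bills = [{"name": "Jane Doe", "address": "1 Main St", "zip": "94501",
          "amount_billed": 3000.0},
         {"name": "Jane  Doe", "address": "1 Main St", "zip": "94501",
          "amount_billed": 500.0}]
returns = [{"name": "JANE DOE", "address": "1 MAIN ST", "zip": "94501",
            "re_tax_claimed": 3600.0}]
```

Joins of this kind fail whenever names or addresses are recorded differently in the two data sets, which is one reason only 42 percent of itemizing taxpayers could be matched.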
IRS did not have information readily available on how much in real-estate taxes taxpayers in our case-study jurisdictions claimed as a home-office deduction, nor did it have information on the locations of other rental real-estate properties owned by a taxpayer, which could have been in multiple jurisdictions. We aimed to mitigate these issues by only analyzing records where (1) the amount claimed in the IRS data was roughly equivalent to the total amount billed to the taxpayer in the local government data, or (2) the amount claimed was less than 15 percent greater than the total billed amount. Because of these limitations, we were able to match only 42 percent of the individuals (195,423 of 463,066) who itemized their real-estate tax deductions on their tax returns to the data we received from two counties, as table 1 shows (see app. III for a more detailed discussion of our methodology). The counties—Alameda County, California, and Hennepin County, Minnesota—were among the largest taxing jurisdictions in the United States that had non-ad-valorem charges, such as fees for services, special assessments, and special district charges, on their real-estate tax bills in 2006. Table 2 shows that of the 195,423 matched taxpayer records in the two counties, 56 percent, or 109,040 individuals, had non-ad-valorem charges on their local bills. However, over 99 percent of the Alameda County bills had non-ad-valorem charges compared to only about 10 percent of the Hennepin County bills. Our analysis of the 109,040 individuals in the two counties who had non-ad-valorem charges on their bills that could be matched to IRS data indicates that almost 42,000 (38.3 percent) collectively overstated their real-estate tax deductions by at least $22.5 million (i.e., “very likely overstated”) for tax year 2006.
When one includes over 37,000 taxpayers who had non-ad-valorem charges and overstated their deductions by up to 15 percent more than their total amounts billed in 2006 (i.e., “likely overstated”), the amount of potential overstatement increases to $46.2 million. Table 3 summarizes the results on overstated deductions from claiming nondeductible charges for the two counties. While 72.4 percent of taxpayers (78,916 of 109,040) with non-ad-valorem charges that we could match to tax returns overstated their real-estate tax deduction, these overstatements typically involved only a few hundred dollars. According to our analysis, the median “very likely” overstatement was $414 in Alameda County and $241 in Hennepin County. The median “likely” overstatement was $493 for Alameda County and $179 for Hennepin County. It is important to recognize that these overstated deduction amounts are not the tax revenue loss. The tax revenue loss would be much less and depend, in part, on the marginal tax rates of the individuals who overstated their deductions as well as other factors that we did not have the data or resources to model appropriately. Those factors would include the amount of real-estate taxes and local-benefit taxes that should be allocated to other schedules on the tax return and other attributes such as the amount of refundable and nonrefundable credits. As a result, while many taxpayers are erring in claiming nondeductible charges, the small tax consequences of such overstatements may not justify the cost of IRS enforcement efforts to pursue just these claims. IRS’s guidance to examiners does not require them to check documentation to verify that the entire real-estate tax deduction amount claimed on Schedule A of Form 1040 is deductible. Such documentation would indicate whether taxpayers claim nondeductible charges.
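The two-category classification, and the point that the revenue loss is far smaller than the overstatement itself, can be illustrated with a small sketch. The 15 percent band comes from the categories described in the text; the dollar amounts and the 25 percent marginal rate are assumed values for illustration, not figures from the analysis.

```python
def overstatement(claimed, billed_total, billed_deductible, band=0.15):
    """Classify one matched record, as a rough sketch of the report's
    categories. 'very likely': the claim is at most the full billed total,
    yet exceeds the deductible (ad-valorem) portion. 'likely': the claim
    exceeds the billed total by up to `band` (15 percent).
    Returns (category, overstated_amount)."""
    excess = claimed - billed_deductible
    if excess <= 0:
        return (None, 0.0)            # no overstatement
    if claimed <= billed_total:
        return ("very likely", excess)
    if claimed <= billed_total * (1 + band):
        return ("likely", excess)
    return (None, 0.0)                # outside the band: excluded from analysis

# Hypothetical taxpayer: claimed the full billed amount, $350 of which
# was non-ad-valorem and therefore treated as nondeductible.
cat, over = overstatement(claimed=2000.0, billed_total=2000.0,
                          billed_deductible=1650.0)

# Revenue loss is roughly the overstatement times the marginal rate
# (ignoring credits, phase-outs, and allocation to other schedules,
# as the report cautions). At an assumed 25 percent rate:
loss_at_25pct = over * 0.25
```

Under these assumptions the taxpayer is "very likely overstated" by $350, but the federal revenue at stake is only about $87.50, which is why the report questions whether enforcement aimed solely at these claims would pay for itself.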
Rather, IRS’s guidance gives examiners discretion on which documentation to request from taxpayers to verify the real-estate tax deduction. Examiners are authorized to request copies of real-estate tax bills, verification of legal property ownership, copies of cancelled checks or receipts, copies of settlement statements, and verification and an explanation for any special assessments deducted. Because of the discretion in the guidance, examiners are not required to request or examine each form of documentation. The guidance also does not direct examiners to look for all potentially nondeductible charges in real-estate tax bills. Some IRS examiners we interviewed considered Form 1098 for mortgage interest paid to be appropriate documentation if the taxpayer failed to provide a real-estate tax bill because this form could demonstrate that the taxpayer paid the taxes through an escrow account set up with the mortgage company. However, as noted earlier, Form 1098 shows payments to local governments for all real-estate tax-related charges billed, including any nondeductible charges. In other words, Form 1098 does not conclusively demonstrate deductibility. Rather than focusing on the nature of charges claimed, IRS examinations of real-estate tax deductions focus on other issues, such as evidence that the taxpayer actually owned the property and paid the real-estate taxes claimed during the year in question. IRS examiners told us that they focus on proof of ownership and payment because, in their experience, taxpayer noncompliance with these requirements could result in larger overstatements. For example, a taxpayer residing in the home owned by his or her parent(s) could incorrectly claim the real-estate tax deduction for the property. It is also common for first-time homebuyers to improperly claim the full amount of real-estate taxes paid for the tax year, even though the seller had paid a portion of these taxes. 
Examinations of the real-estate tax deduction usually take place as part of a broader examination of inconsistent claims across the individual tax return. In examining deductions on the Schedule A, IRS examiners have found cases in which some taxpayers incorrectly include real-estate taxes as personal-property taxes on Schedule A, sometimes deducting the same tax charges as both personal-property taxes and real-estate taxes. Furthermore, IRS examiners might find claims on other parts of the return that prompt them to check the real-estate tax claimed on Schedule A, or find overstated real-estate tax deductions on Schedule A that indicate noncompliance elsewhere on the return. For instance, a taxpayer might claim the real-estate tax deduction for multiple properties on Schedule A, but fail to report any rental income earned from these properties on the Schedule E form, which is used to report income or loss from rental real estate. Also, a taxpayer might claim the total amount of real-estate taxes paid on Schedule A, but improperly claim these taxes again as part of the business expense deductions on the Schedule E or Schedule C forms, or both. IRS guidance instructs taxpayers to deduct only real-estate taxes paid for their private residences on Schedule A, and to deduct any real-estate taxes paid on rental properties on Schedule E. If taxpayers use a part of their private residence as the principal place for conducting business, they should divide the total real-estate taxes paid on the property accordingly, with the portion of real-estate taxes paid for the business deducted on Schedule C. As noted earlier, the format and the level of detail about charges on local real-estate bills vary greatly across local governments. IRS examiners told us that they do not focus on the deductibility of most real-estate charges when auditing real-estate tax deductions because determining deductibility from looking at such bills can take significant time and effort.
They also said that when they detect apparent nondeductible charges claimed in the real-estate tax deduction, the amounts are usually small. As a result, the examiners we interviewed generally contended that determining the deductibility of every charge on a bill could be an inefficient use of IRS resources. Examiners reasoned that the amount of nondeductible charges on a real-estate tax bill would have to be quite high to justify an examination and adjustment of tax liability. IRS does not have information about which local governments are likely to have large nondeductible charges on their real-estate tax bills. IRS examiners also told us that if they had this information, they could use it to target any examination of the real-estate tax deduction toward large deductions claimed by taxpayers in these specific jurisdictions. Several examiners told us that they look for large nondeductible charges that are commonly claimed as real-estate taxes, but they only know about these nondeductible items from personal experience. For example, IRS examiners located in Florida and California indicated that some taxpayers attempt to improperly deduct large homeowners’ association fees as part of the real-estate tax deduction. Absent information about potentially nondeductible charges, some examiners told us that when they are examining a real-estate tax deduction, they might research taxpayer information accessible from the respective county assessor’s Web site, such as information about real-estate bill charges, or from other databases, such as how many properties a taxpayer owns and the amount of taxes paid for each property. Various options could help address one or more of the identified problems that make it hard for individual taxpayers to comply by only claiming deductible charges when computing their real-estate tax deduction, and improve IRS’s ability to check compliance. 
Given the general difficulty in determining deductibility, one option would be to change the tax code. Changing the tax code could affect both taxpayers who overstate and those who understate their deductions. Depending on the public policy goals envisioned for the real-estate tax deduction, policymakers may wish to consider changes that balance achieving those goals with making it simpler for individuals to determine how much of their total amount for local charges can be deducted. Changing the law to help taxpayers correctly claim the deduction could be done in different ways. However, assessing such changes to the law and their effects was beyond the scope of this review. Thus, we do not discuss options for changing the tax code further in this report. Assuming no statutory changes are made to clarify how much of local charges on real-estate tax bills can be deducted, table 4 lists some broad options under three areas involving improved information, guidance, and enforcement to address the problems. The options we discuss are concepts rather than proposals with details on implementation and likely effects. These options would likely affect both those taxpayers who overstate and those who understate their real-estate tax deductions. A combination of these options would be needed to address the four main problems. In considering the options, it is important to know how many individual taxpayers claim nondeductible charges from real-estate tax bills and how much federal revenue is lost. Such knowledge could signal how urgently solutions are needed. However, the extent of taxpayer noncompliance and related federal revenue loss is not known, and we could not estimate this with the resources available for our review. If many taxpayers overstate the deduction and the aggregate revenue loss is high enough, pursuing options to reduce noncompliance would be more important.
Conversely, fewer taxpayers making errors and lower revenue losses might lead to a decision not to pursue any options, or to pursue only options that have minimal costs and burdens. Ultimately, policymakers in concert with tax administrators will have to judge whether concerns about noncompliance justify the extent to which options, including those on which we make recommendations, should be pursued to help taxpayers comply. Compliance could be measured in different ways, which could yield better information at increasing cost. For example, IRS has research programs that are designed to measure compliance. One option is to modify IRS's National Research Program (NRP) studies that IRS planned to launch in October 2007, which were designed to examine compliance annually on about 13,000 individual tax returns. NRP staff could begin to collect information through this annual study to compute how much of the overall noncompliance in claiming the real-estate tax deduction is caused by taxpayers claiming nondeductible charges. If pursued, IRS would need to consider how much additional time and money to invest in its annual research to measure taxpayer compliance in claiming only deductible charges in the real-estate tax deduction. IRS also could consider focusing its compliance efforts on local governments that put large nondeductible charges on real-estate tax bills. Lacking information on the potential compliance gains compared to potential costs and burdens makes it difficult to assess whether most options are justified. Even so, some of these options could improve compliance with the real-estate tax deduction while generating lower costs and burdens for IRS and third parties. Although we did not measure the benefits and costs, the following discussion describes key trade-offs to be considered for each option, such as burdens on IRS, local governments, and other third parties, as well as implementation barriers.
Taxpayers are responsible for determining which charges are deductible. The burden to be fully compliant can be significant, depending on how many charges are on the real-estate tax bill, how quickly information can be accessed on how the charge is computed and used, and how long it takes taxpayers to use that information to determine deductibility. In the absence of data, a simple illustration can provide context, recognizing that taxpayer experiences would vary widely. To illustrate, if we use an IRS estimate that roughly 43 million taxpayers claimed the real-estate tax deduction in 2006, and assume that each taxpayer spent only 1 hour to access and use information about charges on the bill to make determinations about deductibility, then a total of 43 million taxpayer hours would be used to calculate this deduction. If we further assume that the value of a taxpayer’s time averaged $30 per hour, which is the figure used by the Office of Management and Budget, the value of this compliance burden on taxpayers for the real-estate tax deduction would total $1.29 billion. The options for providing information about the local charges generally would lessen the burden on individual taxpayers while likely increasing compliance levels. However, depending on the option, the burden would shift to local governments. Although the local-government representatives we interviewed did not have data on the costs for any option and said that the costs and burdens could vary widely across local governments, they had views on the relative burdens for each option. Figure 3 provides a rough depiction of this burden shifting. Given the complexity of determining the federal deductibility of local charges, a problem we found was that taxpayers are not told how much of the total amount of charges on the local bill can be deducted. Two options for reporting information on deductible charges are (1) information reporting, or (2) changing the local real-estate tax bills. 
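The burden illustration above is straightforward arithmetic; the sketch below simply reproduces it. The figures (roughly 43 million claimants, 1 hour each, the $30-per-hour value of time used by the Office of Management and Budget) come directly from the report's illustration and are assumptions, not measured data.

```python
def compliance_burden(claimants: int, hours_each: float, value_per_hour: float) -> float:
    """Aggregate dollar value of taxpayer time spent determining deductibility."""
    return claimants * hours_each * value_per_hour

# Figures from the report's illustration: ~43 million claimants in 2006,
# 1 hour each, valued at OMB's $30-per-hour figure.
total = compliance_burden(43_000_000, 1.0, 30.0)
print(f"${total / 1e9:.2f} billion")  # $1.29 billion
```

Any of the three inputs can be varied to see how sensitive the $1.29 billion figure is; for example, halving the assumed hour per taxpayer halves the total.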
Information reporting on deductible amounts

Requiring information reporting in which local governments determine in their opinion which charges are federally deductible and report the deductible amount to their taxpayers and to IRS would provide very helpful information related to deductibility. A barrier to any information reporting is that 19 of the 20 local-government tax collectors that we interviewed did not maintain records by a unique taxpayer identifier, such as the SSN. For IRS to check compliance in claiming only deductible charges, IRS would need an unambiguous way of matching the local data to the federal data, which traditionally relies on the SSN. Local-government representatives said significant challenges could arise in collecting and providing SSNs to IRS, given concerns about privacy and possible needed changes to state laws. Local-government representatives that we interviewed viewed information reporting as having the highest costs and burdens of the options that we discussed for providing additional information to taxpayers. One example of a potentially high cost that local governments would incur is the cost associated with computer reprogramming to enable them to report the information. One way to reduce the costs for many local governments would be to require information reporting only for larger local governments or only for those that have nondeductible charges on their real-estate bills. Requiring information reporting only selectively would eliminate the cost for some local governments, but would not reduce the costs for those that still have to report to IRS and would not eliminate concerns about providing the SSN.

Reporting deductible amounts on local real-estate tax bills

Another option for providing taxpayers with information about deductibility would be to report the deductible amounts only on the local government bills provided to taxpayers.
This would eliminate the concerns about collecting and providing SSNs as well as the costs of reporting to IRS. Local-government representatives we interviewed said that their costs still could be high if major changes are required to local computer systems and bills. For example, they might have to regroup and subtotal charges based on deductibility. Furthermore, not all local governments provide a copy of their bills to taxpayers who pay their real-estate taxes through mortgage escrow accounts. These taxpayers would need to receive an informational copy of their bills or be alerted to the nondeductible charges in some other manner. Whether providing information on deductibility through information reporting or changing local bills, a major concern for local governments was determining deductibility. Local-government representatives expressed concerns about local governments protecting themselves from legal challenges over what is deductible, given the judgment necessary to determine deductibility. Local-government representatives and officials told us that local governments do not want to become experts in the federal tax code and would oppose making any determination of deductibility without assistance. Given this concern, local governments could provide information to IRS about the types of charges on their bills, and IRS could use that information to help local governments determine deductibility, reducing their burden and concern somewhat while increasing costs to IRS. Even if IRS took on the responsibility of determining the federal deductibility of local-government real-estate charges, local governments probably would still need to be involved. The IRS officials that we spoke with for this review did not have extensive knowledge about charges on local tax bills. Local-government representatives indicated that local governments' willingness to work with IRS would greatly depend on IRS's approach.
After determining deductibility, IRS and local governments could pursue cost-effective strategies for making information on deductibility available to taxpayers, such as posting this information on their respective Web sites. IRS's processing costs could be large if tens of thousands of local governments reported on many types of specific charges. Even if IRS had some uniform format for local governments to use in reporting, the amount of information to be processed likely would be voluminous and diverse, given the variation in local charges. IRS also would incur costs to analyze the information and work with local governments that appear to have nondeductible charges. These IRS costs would vary with the breadth and depth of involvement with the selected local governments. IRS could mitigate costs if it could identify jurisdictions with significant dollar amounts of nondeductible charges and work only with those jurisdictions. In addition to not being given information on which local charges were deductible, another problem we found was that taxpayers do not receive enough information about the charges on real-estate tax bills to help them determine how much to deduct. Knowing about the basis for the charges, how the charges were used, and whether they applied across the locality are key pieces of information that could help taxpayers determine deductibility. We found that some local governments provided some of this information on their real-estate tax bills but many did not. An alternative for informing taxpayers about local charges would be for local governments to identify which charges on their tax bills are ad valorem and which are non-ad valorem. Our work with IRS attorneys on the charges on tax bills in five large counties indicated to us that non-ad-valorem charges usually would be nondeductible because they generally are not applied at a uniform rate across a locality.
Similarly, many ad-valorem charges would be deductible but with exceptions, such as when charges were not applied at a uniform rate across the locality or when they generated “local benefits” for the taxpayer. Because not all ad-valorem charges are deductible and not all non-ad-valorem charges are nondeductible, taxpayers still would be required to make the determinations. If taxpayers claimed only the ad-valorem charges listed on their bills, compliance would likely improve for those who otherwise would deduct the full bill amount that includes nondeductible charges. Local governments that do not currently differentiate ad valorem from non–ad valorem would incur costs that would vary with how much the bill needs to change and the space available to report the information. However, representatives of local governments with whom we spoke saw this option as less burdensome than determining and reporting the deductible amounts. A final option involving information on local tax bills could generate the lowest costs but would provide less information for taxpayers than other options related to changing local tax bills. That option is for local governments to place disclaimers on real-estate tax bills to alert taxpayers that some charges may not be deductible for federal income tax purposes. Local-government representatives said that the direct costs would be minimal to the extent that the disclaimer was brief and that space was available on the bill. Adding pages or inserts to the bill would increase printing, handling, and mailing costs. Because the disclaimers would not provide any information to taxpayers to help them determine deductibility, some taxpayers would likely seek that information by calling the local governments. Handling a large volume of calls could be costly for local governments. 
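To make the ad-valorem distinction discussed above concrete, here is a toy screen in Python. The charge names, amounts, and flags are invented for illustration, and the rule below captures only the rough presumption described in the text (ad valorem, uniform rate, no local benefit); it is a screen, not a substitute for the case-by-case determination taxpayers would still have to make.

```python
from dataclasses import dataclass

@dataclass
class Charge:
    description: str
    amount: float
    ad_valorem: bool             # assessed as a percentage of property value?
    uniform_rate: bool = True    # applied at one rate across the locality?
    local_benefit: bool = False  # funds a benefit to the specific property?

def presumptively_deductible(charge: Charge) -> bool:
    """Rough screen only: ad-valorem charges applied uniformly and not tied
    to a local benefit are presumptively deductible; all else needs review."""
    return charge.ad_valorem and charge.uniform_rate and not charge.local_benefit

# Hypothetical bill with one charge of each kind discussed in the report.
bill = [
    Charge("County general levy", 2400.00, ad_valorem=True),
    Charge("Mosquito abatement fee", 35.00, ad_valorem=False),
    Charge("Street-paving assessment", 150.00, ad_valorem=True, local_benefit=True),
]
print(sum(c.amount for c in bill if presumptively_deductible(c)))  # 2400.0
```

In this toy bill, only the uniformly applied ad-valorem levy survives the screen; the non-ad-valorem fee and the local-benefit assessment are set aside for individual review, mirroring the report's discussion.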
Even if taxpayers were to receive more information about the local charges on their real-estate bills, we found that taxpayers may not receive enough guidance from IRS and third parties to help them determine how much to deduct and to alert them to the presence of nondeductible charges. For example, although IRS's guidance to taxpayers discusses what qualifies as deductible real-estate taxes, we found a few areas in which it was incomplete, given that determining deductibility can be complex. Furthermore, third parties in the mortgage and tax-preparation industries did not regularly alert taxpayers through disclaimers and other information that not all charges may be deductible. Options for helping taxpayers to apply information in order to determine which local charges are deductible include (a) enhancing IRS's existing guidance to individual taxpayers and (b) having IRS engage in outreach to the mortgage-servicer and tax-preparation industries about nondeductible charges and about any enhanced IRS guidance. Although IRS's guidance publications provided basic information to taxpayers about what could be deducted as a real-estate tax and the types of charges that could not be deducted, we found areas that, if improved, might help some taxpayers to comply.
Those improvements include the following:

- Placing a stronger disclaimer early in the guidance to alert taxpayers about the need to check whether all charges on their real-estate tax bill are deductible. Across the IRS publications we reviewed, such an explicit disclaimer either was made near the end of the guidance or not at all.
- Clarifying that a real-estate tax bill may not be sufficient evidence of deductibility if the bill includes nondeductible charges that are not clearly stated. Our work showed that some bills could not be relied upon to prove deductibility, but we found nothing that explicitly told taxpayers that they could not always rely on the bills as such evidence.
- Providing information or a worksheet on possible steps to take to obtain information about whether bills include nondeductible charges and what those charges are. To the extent that taxpayers may not know where to find the information necessary to determine whether any charges on their local bills are nondeductible, the guidance could suggest steps to help taxpayers start to get the necessary information.

The cost of IRS enhancing its guidance would vary based on the extent to which IRS made changes in its written publications and electronic media, but these changes would not necessarily be costly to make. Taxpayer compliance could improve for those who have nondeductible charges on their local bills but who are not aware of the nondeductible charges and how to find them. Taxpayers also could spend some time and effort to discover whether any of the local charges are nondeductible, but that time and effort would largely be a onetime investment unless the local government changes the charges on the real-estate bills from year to year. IRS could conduct outreach to two types of third parties that provide information or offer assistance to individual taxpayers about the real-estate tax deduction.
First, IRS could engage mortgage servicers in how they might alert taxpayers that real-estate payments made through escrow accounts could include nondeductible charges, including those reported on IRS forms. The trade-offs discussed for putting disclaimers on local real-estate tax bills would apply here as well. Mortgage servicers would likely use a generic disclaimer on all escrow statements because currently the servicers do not receive information about nondeductible local charges that appear on the bills and usually receive only total amounts to be paid. However, if mortgage servicers happen to receive itemized information about local charges from local governments, they could report these details on escrow statements to inform taxpayers who may not receive a copy of their local real-estate bill because their local charges are paid through the escrow. Doing so would generate some computing costs for the servicers. Second, IRS could reach out to the tax-preparation industry, that is, to those who develop tax-preparation software as well as to those who help individuals prepare their tax returns. The goals would be to ensure that those who provide guidance to taxpayers are alerted to the potential presence of nondeductible charges on real-estate tax bills and to ensure that they understand IRS's guidance, particularly if it is enhanced. IRS also could solicit ideas on ways to improve guidance to help individual taxpayers. The tax-preparation software companies could incur some costs if conversations with IRS result in revisions to their software. Other types of tax preparers, such as enrolled agents, would likely not incur many monetary costs but may experience resistance from individual taxpayers who do not wish to comply.
If the implementation barriers to information reporting on this deduction were resolved and local governments were required to report information on real-estate taxes to IRS, IRS could expand its existing computer-matching system to include the real-estate tax deduction. If this option were chosen, IRS would incur the costs of processing and checking the adequacy of the local data, developing matching criteria, generating notices to taxpayers when significant matching discrepancies arise, and providing resources to interact with taxpayers who respond to the notices. However, such matching programs have proven to be effective tools for addressing compliance. IRS already conducts tens of thousands of examinations annually that check compliance in claiming the real-estate tax deduction. IRS could do more examinations of this deduction. However, the costs involved may not be justified given the current lack of information about the extent of noncompliance caused by claiming nondeductible charges and the associated tax loss. Given that IRS is already doing so many examinations that audit the real-estate tax deduction, an option that could be less burdensome for IRS would be to ensure that its examiners know about this issue of nondeductible local charges whenever they are assigned to audit the deduction. Specifically, IRS could require its examiners to verify the deductibility of real-estate charges claimed whenever the examiners are examining a real-estate tax deduction with potentially large, unusual, or questionable nondeductible items. Currently, examiners have the discretion to request evidence on the deductibility of real-estate charges, but are not required to request it. Furthermore, the guidance to examiners lists cancelled checks, mortgage escrow statements, Forms 1098 on mortgage interest amounts, and local-government real-estate tax bills as acceptable types of evidence of deductibility.
However, none of these documents necessarily confirm whether all local charges can be deducted. Since IRS is already examining the deduction, the marginal cost to IRS would stem from the fact that some examinations might take slightly longer if examiners take the time to ask taxpayers to provide the correct type of evidence to substantiate their real-estate tax deduction. However, this cost could be justified to ensure compliance with the existing law. IRS also may incur some costs to expand its existing training if examiners are not adequately informed about the deduction. We identified one option that cuts across the problems facing both taxpayers and IRS and targets actions in the three areas of improving information, guidance, and enforcement. As discussed earlier, local governments could provide IRS a list of the types of charges on local real-estate tax bills that IRS could then use to help local governments determine deductibility if some charges appear to be nondeductible. However, that would impose reporting costs on all local governments and could inundate IRS with a lot of information to process, analyze, and use. In this crosscutting option, IRS would limit its data collection to larger local governments that have apparently larger nondeductible charges on their real-estate tax bills. Our work initially focused on 41 of the largest local governments because they were most likely to have large property tax revenue and because smaller local governments would have a harder time compiling the information. IRS could choose from a number of ways to identify larger local governments that appear to have larger nondeductible charges on their bills. A starting point could be the Census data we used to identify those local governments that collect the most property tax (see app. III of this report). Using these data, IRS could identify the larger local governments on which IRS could focus its data-collection efforts.
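The Census-based starting point just described, identifying the local governments that collect the most property tax, amounts to a simple ranking. The sketch below uses fabricated jurisdiction names and revenue figures purely for illustration; it is not drawn from actual Census of Governments data.

```python
# Hypothetical (jurisdiction, annual property-tax revenue) records of the
# kind that could be derived from Census of Governments data.
jurisdictions = [
    ("County X", 1_200_000_000),
    ("Township Y", 45_000_000),
    ("City Z", 310_000_000),
    ("Village W", 8_000_000),
]

def largest_collectors(records, top_n=2):
    """Rank jurisdictions by property-tax revenue, largest first, so that
    data-collection effort can be focused on the top of the list."""
    return sorted(records, key=lambda r: r[1], reverse=True)[:top_n]

print([name for name, _ in largest_collectors(jurisdictions)])
# ['County X', 'City Z']
```

In practice, the cutoff (here, the top two) would be a policy choice balancing coverage against the cost of working with each additional jurisdiction.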
For example, as an alternative to, or in addition to, requiring local governments to report the types of charges listed on their local bills, as discussed earlier, IRS could

- send a survey to selected local governments;
- collect the data through its annual NRP research on individual tax compliance for a sample of tax returns;
- choose to do a separate research project;
- collect data from annual operational examinations that touch on the real-estate tax deduction; or
- query its employees on the types of charges on their own local tax bills.

Having received information from local governments, IRS could identify local governments whose bills have nondeductible charges that are large and unusual enough to make noncompliance and larger tax revenue losses likely to occur. Knowing which local governments have large nondeductible charges, IRS could also consider whether and how to use the data in a targeted fashion. IRS's costs would vary with the uses pursued and the number of local governments involved. IRS could use these data to

- design compliance-measurement studies for those localities;
- begin outreach to these local governments to help determine deductible charges and help affected taxpayers correctly compute the deduction;
- target guidance, such as mailings or public service announcements, to direct taxpayers to a list of nondeductible charges, or create a tool to help taxpayers determine a deductible amount for a locality;
- reach out to other third parties, such as tax preparers and mortgage servicers, to help them better inform and guide taxpayers; and
- check the real-estate tax deduction for individual tax returns that have been selected for examination from taxpayers in those localities or, at a minimum, use the information when considering whether to examine one of these returns.
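One of the targeted uses above, checking claimed deductions, resembles the computer-matching option discussed earlier. The sketch below shows the core of such a check under strong assumptions: the SSN-keyed records and amounts are fabricated, and the $100 tolerance is an invented parameter, since the report does not specify how IRS would set matching criteria.

```python
# Fabricated data: deductible amounts a local government might report per
# taxpayer (keyed by SSN) vs. the amounts those taxpayers claimed on returns.
reported = {"123-45-6789": 2400.00, "987-65-4321": 1800.00}
claimed = {"123-45-6789": 2435.00, "987-65-4321": 2600.00}

def discrepancies(reported, claimed, tolerance=100.00):
    """Flag returns whose claimed deduction exceeds the locally reported
    deductible amount by more than the tolerance, meriting a notice."""
    flags = []
    for ssn, amt_claimed in claimed.items():
        amt_reported = reported.get(ssn)
        if amt_reported is not None and amt_claimed - amt_reported > tolerance:
            flags.append((ssn, round(amt_claimed - amt_reported, 2)))
    return flags

print(discrepancies(reported, claimed))  # [('987-65-4321', 800.0)]
```

The first taxpayer's $35 overstatement falls inside the tolerance and is ignored; only the $800 discrepancy is flagged. Setting the tolerance is exactly the kind of matching-criteria cost the report attributes to this option.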
To fully comply with the current federal law on deducting local real-estate taxes, many individual taxpayers would need to apply significant effort to determine whether all charges on a real-estate tax bill are federally deductible. However, it is likely that some taxpayers do not invest sufficient time or energy in trying to comply with federal law for determining deductibility, or may not understand how to comply, or both. Nevertheless, the total compliance burden taxpayers would bear to properly comply is one useful reference point for judging the merits of alternative means of increasing compliance. Taxpayers are responsible for determining which charges are deductible, and the burden to be fully compliant can be significant. This burden to properly comply with current federal law could be shifted from taxpayers to local governments, IRS, or third parties, or some combination of each. Along a continuum, this burden shifting could be major, such as through information reporting, or fairly minor, such as through providing taxpayers with better information or guidance to help them determine deductibility. In either case, taxpayer compliance is likely to improve and the overall compliance burden to society could possibly be lower to the extent that IRS, local governments, and other third parties can reduce the costs of overall compliance through economies of scale. Because the extent of the compliance problem is not known and some of the options we identified could significantly increase local-government or IRS burdens in order to achieve significant compliance gains, a sensible starting point is options that impose less burden shifting. Providing taxpayers better guidance on how to comply, including the information sources they need to consider, is among the least burdensome and costly means to address noncompliance with the real-estate tax deduction. 
Because taxpayers still would have to exercise considerable effort to comply fully, improved guidance may not materially reduce noncompliance. Providing taxpayers somewhat better information, such as real-estate bills that clearly identify ad-valorem and non-ad-valorem charges, would shift more burden to local governments but likely would have a larger effect on reducing noncompliance. Providing taxpayers traditional information reports, that is, documents that clearly identify federally deductible charges, would shift considerable burden to local governments and possibly IRS, but also would considerably reduce taxpayers' compliance burden and likely result in significant compliance gains. If local governments, possibly with IRS assistance, could determine deductibility for less cost than the sum of each taxpayer's costs in doing so, the net compliance burden for society may go down even as compliance increases. Significant reductions in noncompliance might also be achieved with minimal shifting of burdens through targeted use of the identified options for addressing noncompliance. Targeting, however, requires information about localities where there are significant risks of taxpayers claiming large nondeductible charges. If IRS learned which jurisdictions have the largest dollar amounts of nondeductible charges on their bills, it could take a number of targeted actions, such as outreach to the local governments to help them determine deductible charges, targeted outreach to taxpayers in those jurisdictions to help them correctly compute the deduction, targeted outreach to the tax-preparation and mortgage-servicer industries, and targeted examinations of the real-estate tax deduction in these localities. Low-cost options are available to obtain this information, such as collecting tax bills as part of examinations of the real-estate tax deduction that already occur annually.
In terms of IRS's examinations, IRS could send a more useful signal to taxpayers of the importance of ensuring that only deductible real-estate taxes are claimed if IRS examinations more frequently covered which charges are deductible. At a minimum, IRS can take steps to ensure that its examiners know about the problems with nondeductible charges and how to address the noncompliance. We are making 10 recommendations to the Commissioner of Internal Revenue. To enhance IRS's guidance to help individual taxpayers comply in claiming the correct real-estate tax deduction, we recommend that the Commissioner of Internal Revenue

- place a stronger disclaimer early in the guidance to alert taxpayers to the need to check whether all charges on their real-estate tax bill are deductible;
- clarify that real-estate tax bills may be insufficient evidence of deductibility when bills include nondeductible charges that are not clearly stated; and
- provide information or a worksheet on steps to take to get information about whether bills include nondeductible charges and about what those charges are.

To help ensure that individual taxpayers are getting the best information and assistance possible from third parties on how to comply with the real-estate tax deduction, we recommend that the Commissioner of Internal Revenue reach out to

- local governments to explore options for clarifying charges on the local tax bills or adding disclaimers to these bills that some charges may not be deductible;
- mortgage servicers to discuss adding disclaimers to their annual statements that some charges may not be deductible; and
- tax-preparation software firms and other tax preparers to ensure that they are alerting taxpayers that some local charges are not deductible and that they are aware of any enhancements to IRS's guidance.
To improve IRS's guidance to its examiners auditing the real-estate tax deduction, we recommend that the Commissioner of Internal Revenue

- revise the guidance to indicate that evidence of deductibility should not rely on mortgage escrow statements, Forms 1098, and cancelled checks (which can be evidence of payment), and may require more than reliance on a real-estate tax bill; and
- require examiners to ask taxpayers to substantiate the deductibility of the amounts claimed whenever they are examining the real-estate tax deduction and they have reason to believe that taxpayers have claimed nondeductible charges that are large, unusual, or questionable.

To learn more about where tax noncompliance is most likely, we recommend that the Commissioner of Internal Revenue

- identify a cost-effective means of obtaining information about charges that appear on real-estate tax bills in order to identify local governments with potentially large nondeductible charges on their bills; and
- if such local governments are identified, obtain and use the information, including uses such as compliance research focused on nondeductible charges; outreach to such local governments to help them determine which charges are deductible and to help affected taxpayers correctly compute the deduction; targeted outreach to the tax-preparation and mortgage-servicer industries; and targeted examinations of the real-estate tax deduction in the localities.

On April 22, 2009, IRS provided written comments on a draft of this report (see app. IV). IRS noted that the report accurately reflects the difficulty that many taxpayers face when local jurisdictions include nondeductible charges on real-estate tax bills, particularly when these charges can vary and are not described in detail. IRS also noted that determining deductibility can be complex and that neither the local real-estate tax bills nor mortgage service documents tell taxpayers what amounts are deductible.
IRS agreed with 7 of our 10 recommendations and identified actions to implement them. Specifically, IRS agreed with two recommendations on enhancing guidance to taxpayers, saying it would change various publications to (1) highlight an alert to taxpayers to check for nondeductible charges on their real-estate tax bills and (2) caution that the bills may be insufficient evidence of deductibility. IRS also agreed with three recommendations on outreach to third parties to ensure that taxpayers are getting the best information possible to comply in claiming the real-estate tax deduction. IRS agreed to contact local governments, mortgage servicers, and tax software firms to explore options to alert taxpayers that some charges might not be deductible. IRS also said it would work with local governments to clarify charges on their real-estate tax bills. Further, IRS agreed with two recommendations on learning more about where noncompliance in claiming nondeductible charges is most likely and then taking action to improve compliance. IRS agreed to identify a cost-effective way to identify local governments that have potentially large nondeductible charges on their real-estate tax bills. After identifying these local governments, IRS also agreed to reach out to them to help determine the deductibility of their charges and help the affected taxpayers correctly claim the deduction. As part of this set of actions, IRS agreed to reach out to the tax-preparation and mortgage-servicing industries with customers in these localities. IRS disagreed with three recommendations. However, for one of the recommendations, IRS did agree to take action consistent with the intent of the recommendation. We recommended that IRS enhance its guidance to taxpayers by providing information or a worksheet on steps taxpayers could take to find out if any charges on a real-estate tax bill are nondeductible.
IRS said its Publication 17 already had a chart providing guidance on which real-estate taxes can be deducted but agreed to add a caution advising taxpayers that they must contact the taxing authority if more information is needed on any charge. We believe such an action will enhance IRS’s current education efforts related to this issue and may help improve taxpayer compliance, especially if the addition provides guidance on situations in which a taxpayer may need to contact the taxing authority. The other two recommendations IRS disagreed with related to improving IRS’s guidance to its staff who audit the real-estate tax deduction. IRS did not agree to revise the guidance to clarify that mortgage escrow statements, cancelled checks, Forms 1098, and real-estate tax bills may not be sufficient evidence of deductibility. IRS also did not agree that examiners should ask taxpayers for evidence of deductibility whenever they are auditing the deduction and believe that the taxpayers have claimed nondeductible charges that are large, unusual, or questionable. IRS said that the guidance for examiners is sufficient and that examiners are to use their judgment and consider all available evidence in coming to a determination. We appreciate that examiners must exercise judgment about the scope of an audit. However, in reviewing over 100 examination files and in talking with examiners, we found that not all examiners focus on the deductibility of the real-estate charges or ask the taxpayer for adequate evidence of deductibility, even in situations where deductibility may be in question. Therefore, when examiners have reason to believe that taxpayers claimed nondeductible charges that are large, unusual, or questionable, we continue to believe they should ask taxpayers for adequate support. 
We also continue to believe that the guidance to examiners should clearly state that real-estate tax bills should be examined and that other information on the nature and purpose of tax bill charges may also be needed. This improved guidance may be especially pertinent once IRS has implemented our recommendations to identify local governments with large nondeductible charges on their bills and to take related actions to help taxpayers comply. If IRS does targeted examinations of taxpayers in those localities, the IRS examiners will need to clearly understand what evidence is required to determine the deductibility of the various charges on the real-estate tax bills to ensure that taxpayers are correctly claiming the real-estate tax deduction. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, Senate Committee on Finance; Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or brostekm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are found in app. V. To learn about real-estate tax billing practices and the proportion of local government entities with potentially nondeductible charges on their real-estate tax bills, we conducted a mail-based sample survey of 1,732 local governments primarily responsible for collecting real-estate taxes due on residential properties. In designing the sample for our survey, we used the survey population of the U.S. 
Census Bureau's Quarterly Property Tax Survey (QPTS) as our sample frame. The QPTS is a mail survey the Governments Division of the U.S. Census Bureau conducts quarterly to obtain information on property taxes collected at the local governmental level. The QPTS is part of a larger data-collection effort that the Census Bureau conducts in order to make estimates of state and local tax revenue. According to QPTS data, 14,314 local governments bill for property taxes. The QPTS itself uses a stratified, one-stage cluster sample of local governments in 606 county areas with 16 strata. In designing a sample based on the QPTS for our survey, we also used a stratified, one-stage cluster design. Specifically, of the 606 county areas included in the QPTS sample, we selected 192 county areas representing 18 strata. Our subsample consisted of a random selection of approximately 30 percent of the county areas in the 18 GAO strata, with a minimum of 5 county areas selected in each stratum. All of the local governments within the selected county areas were included in the sample. The total number of local governments included in the sample was 1,732. Before constructing our sample, we checked to make sure that QPTS sample data provided to us by the Census Bureau were internally consistent and reliable for our purposes. In our survey, we asked the local governments whether they included non-ad-valorem charges on their real-estate tax bills, how they differentiated non-ad-valorem charges from ad-valorem charges, and whether and how they alerted taxpayers to the presence of non-ad-valorem charges on the bills. We also asked the local governments for a sample residential real-estate tax bill that included information about all possible charges for which property owners in that jurisdiction could be billed. 
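The sub-sampling rule described above (roughly 30 percent of the county areas in each stratum, with a minimum of 5 per stratum, and every local government in a selected county area entering the sample) can be sketched as follows. The stratum labels, county-area counts, and random seed are hypothetical; GAO's actual selection was drawn from the Census Bureau's QPTS frame.

```python
import random

def select_county_areas(county_areas_by_stratum, frac=0.30, minimum=5, seed=1):
    """Stratified one-stage cluster sub-sample: pick about 30 percent of
    county areas per stratum, with at least 5 per stratum. Because the
    design is one-stage cluster, all local governments in a selected
    county area would then be included in the sample."""
    rng = random.Random(seed)
    selected = {}
    for stratum, areas in county_areas_by_stratum.items():
        n = max(minimum, round(frac * len(areas)))
        n = min(n, len(areas))  # cannot select more areas than exist
        selected[stratum] = rng.sample(areas, n)
    return selected

# Hypothetical strata: one with 40 county areas, one with only 6
strata = {"A": [f"A{i}" for i in range(40)], "B": [f"B{i}" for i in range(6)]}
picked = select_county_areas(strata)
```

In the small stratum the 5-area minimum binds, while in the large stratum the 30 percent fraction governs, mirroring the design described above.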
We conducted two pretests of our draft survey instrument with officials from Alexandria, Virginia, and Montgomery County, Maryland, to ensure that (1) the survey did not place an undue burden on the respondent’s time, (2) the questions and terminology were clear and unambiguous, (3) the respondents were able to obtain data necessary to answer the survey questions, and (4) our method for requesting sample bills matched any preferences offered by the respondents. In late April 2008, we mailed questionnaires to our survey sample population using addresses of the local government entities provided to us from the Census Bureau’s Governments Division. At the end of May, we sent a reminder letter with an additional copy of the questionnaire to all governments in our survey from which we had not yet received a response. If a survey respondent’s answers required clarification (e.g., if a respondent did not follow the directions given in the survey), a follow-up call was conducted. Survey answers were then edited to reflect the additional information obtained in the calls. Of the 1,732 surveys sent, we received 1,450 responses for an unweighted response rate of 84 percent. Response rates for the jurisdictions in each of our 18 strata ranged from 67 percent to 100 percent. All percentage estimates from our survey are surrounded by 95 percent confidence intervals. In addition to sampling error, the practical difficulties of conducting any survey may introduce errors commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed, can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. 
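As an illustration of the 95 percent confidence intervals that accompany the survey's percentage estimates, a simple-random-sample approximation can be computed as follows. GAO's actual intervals were design-based, reflecting the stratified cluster sample, so they would be wider than this sketch suggests.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Simple (simple-random-sample) 95 percent confidence interval for a
    proportion. Illustration only: the survey's stratified cluster design
    requires design-based variance estimation, not this formula."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# The unweighted response rate: 1,450 responses out of 1,732 surveys sent
lo, hi = proportion_ci(1450, 1732)
```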
For example, a social science survey specialist helped us design the questionnaire. Then, as stated earlier, the draft questionnaire was pretested with two local jurisdictions. Data entry was conducted by a data entry contractor, and a sample of the entered data was verified. Finally, when the data were analyzed, independent analysts checked all computer programs. One of the objectives of this report was to describe factors that contribute to the inclusion of nondeductible items in real-estate tax deductions. In our 1993 report, we determined that one cause of taxpayers overstating their deductions was confusing real-estate tax bills that do not clearly distinguish taxes from user fees. To update our previous work and to determine the extent to which real-estate tax bills currently distinguish between taxes on real property and user fees, we reviewed a sample of real-estate tax bills from local governments across the United States. This appendix outlines the methodology that we used to review these bills. The sample of real-estate tax bills that we reviewed was a subset of the responses to our mailed survey of local governments, which was a stratified, random sample of 1,732 localities (see app. I). A question in our survey asked whether each local government included non-ad-valorem items on its bills, which are generally nondeductible. In another part of our survey, we asked respondents to attach a sample of a real-estate tax bill to their completed survey. We received a total of 1,450 responses to our survey. We did not generalize the results of this bill review because not all survey respondents provided bills as requested, and because we did not know how the bills that were submitted had been selected by the respective responding governments. We received 643 bills from governments that included nondeductible charges on their bills. Of these bills, we deemed 486 to be usable. We performed two reviews of the usable bills. 
First, we used three criteria to determine if a real-estate tax bill clearly distinguished taxes from user fees:

1. Does the bill differentiate ad-valorem from non-ad-valorem charges?
2. Are all the charges in the bill clearly identified and explained?
3. Does the bill contain a disclaimer warning that some of the charges included in the real-estate tax bill may not be deductible for federal tax purposes?

A bill met our first criterion if it labeled each item as ad valorem or non-ad-valorem or if it provided millage rates for items. A bill met our second criterion if all of the line items were individually broken out and either the line-item descriptions were spelled out and clearly identified or additional information or explanations regarding line items were available in paper form or electronically. A bill met our third criterion if it contained a disclaimer stating either that all items appearing on the bill may not be deductible or that taxpayers should consult the IRS code and publications or their tax advisor for assistance in determining deductibility. Through our review, we found that about 60 percent of the bills satisfied our first criterion, with almost all of these using millage rates to differentiate ad-valorem from non-ad-valorem charges. Only about 30 percent of the bills satisfied our second criterion. The main reason bills did not meet our second criterion was that line-item descriptions were not easily identifiable (e.g., a taxpayer could not determine a charge's use based solely on the information on the bill). None of the bills satisfied our third criterion. In our second bill review, we determined whether the real-estate tax bills provided taxpayers with either (1) a total for the charges that are deductible for federal income tax purposes or (2) a warning that some of the charges on the bill may be nondeductible for federal income tax purposes. Of the 486 usable bills we reviewed, none satisfied either of these two criteria. Although our sample of real-estate tax bills is not representative of local governments nationally, the results of our review illustrate that many taxpayers would face challenges in determining what is deductible if they were to rely solely on the information provided on their real-estate tax bills. This appendix describes the methodology, including sample selection, we used to (1) determine the deductibility of charges on tax bills in five counties (Alameda County, California; Franklin County, Ohio; Hennepin County, Minnesota; Hillsborough County, Florida; and King County, Washington) and (2) calculate the extent of overstated deductions in two of those counties (Alameda County, California, and Hennepin County, Minnesota) for tax year 2006. We derived our list of local governments that collect property taxes from the survey population of the U.S. Census Bureau's Quarterly Property Tax Survey (QPTS). The QPTS sample consists of local governments in 606 county areas, with 312 of those counties selected with certainty. The 312 counties had a population of at least 200,000 people and annual property taxes of at least $100 million in 1997. We decided that large counties would be best for this study because they were more likely to have large property tax revenue and to maintain property tax data in electronic formats that we could more easily obtain and manipulate than paper records. We started with the 41 largest counties based on property tax revenue. 
We randomly sorted these 41 large collectors and picked the first 5 from the sorted list that fit our inclusion criteria: (1) presence of user fees, special assessments, special district taxes, or other non-ad-valorem items on real-estate tax bills for most or all residential property owners; (2) willingness of the local government to participate; and (3) usability and reliability of the data. Using these criteria, we selected Alameda County, California; Franklin County, Ohio; Hennepin County, Minnesota; Hillsborough County, Florida; and King County, Washington, for our initial analyses. We collaborated with officials from the Internal Revenue Service's (IRS) Office of Chief Counsel to determine the deductibility of charges on the five counties' real-estate tax bills. IRS agreed to review information we provided about the charges on these tax bills in order to provide an opinion on the deductibility of the charges. IRS did not seek additional information from the counties regarding the charges, and IRS based its determinations solely on the materials we submitted. Additional information could have resulted in conclusions different from those IRS reached based on the data we provided. Prior to assembling information for IRS's review, we interviewed officials from IRS's Office of Chief Counsel to gain a better understanding of what information IRS needed to make the determinations. IRS officials provided a list of the types of information they would need to determine whether a particular assessment levied by a taxing jurisdiction was a deductible real-property tax. Specifically, IRS asked us to provide information related to the following for each charge: (1) Is the tax imposed by a State, possession, or political subdivision thereof, against interests in real property located in the jurisdiction for the general public welfare? (2) Is the assessment an enforced contribution, exacted pursuant to legislative authority in the exercise of the taxing power? 
Is payment optional or avoidable? (3) The purpose of the charge. Is it collected for the purpose of raising revenue to be used for public or governmental purposes? (4) Is the tax assessed against all property within the jurisdiction? (5) Is the tax assessed at a uniform rate? (6) Whether the payer of the assessment is entitled to any privilege or service as a result of the payment. Is the assessment imposed as a payment for some special privilege granted or service rendered? Is there any relationship between the assessment and any services provided or special privilege granted? (7) Is use of the funds by the tax authority restricted in any way? Are the funds earmarked for any specific purpose? (8) Is the assessment for local benefits of a kind tending to increase the value of the property assessed? Does the assessment fund improvements to or benefiting certain properties or certain types of property? If so, is a portion of the assessment allocable to separately stated interest or maintenance charges? IRS officials also indicated that the following materials would be helpful in making their determinations: (1) A copy of the statute imposing the tax. (2) Materials published by the local government or tax-collecting authority describing the levy, including taxpayer guides, publications, or manuals describing the tax. (3) The forms and instructions relating to the tax. (4) A printed copy of the Web pages maintained by the jurisdictions related to the tax. To collect this information, we interviewed county officials and reviewed documentation either provided by county officials or found on county Web sites. Most of the selected counties’ Web sites provided tax rate tables or a list of the taxing authorities for the ad-valorem charges found on the tax bills; some also had information for the non-ad-valorem charges. For each of the year 2006 tax bill charges, we searched the counties’ Web sites and used online search engines to collect supporting documentation. 
We also searched state constitutions and statutes to identify the legal authority for each charge on real-estate tax bills; to a varying degree, county officials provided citations to the specific statutes that provided the legislative authorities for the charges. In addition to the real-estate tax information found online, we interviewed local tax officials in each of the five counties to gather the requested information. Based on the materials we submitted, IRS concluded that some charges were deductible, some were nondeductible, and others required additional information for IRS to determine their deductibility. Table 1 below summarizes the results of IRS's determinations. Using IRS data on real-estate tax deductions claimed by taxpayers in the selected counties and county data on real-estate taxes billed to property owners, we identified how much taxpayers likely overstated their real-estate tax deductions by claiming nondeductible charges in two counties (Alameda County, California, and Hennepin County, Minnesota) for tax year 2006. We restricted our analysis to these two counties due to limitations in resources. While taxpayers can claim deductions for real-estate taxes paid on multiple IRS schedules, we limited our analysis to the amount claimed on IRS Form 1040, Schedule A, which generally does not include deductions for real estate used for business purposes. We used the SAS SQL procedure (PROC SQL) to merge the IRS data with the tax-roll data we received from our two selected counties. To conduct the match, we parsed the last name, first name, street address, city, state, and zip code from the IRS data and the local data. We conditioned the PROC SQL merge to include in the output data set only those records in which the parsed first names, last names, and zip codes matched. 
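The logic of the conditioned merge can be sketched outside of SAS as well. The following Python sketch mirrors the join on parsed first name, last name, and zip code; the field names are hypothetical, and GAO's actual work used PROC SQL on fields parsed from the IRS and county extracts.

```python
def parse_key(record):
    """Normalize the fields the merge was conditioned on: parsed first
    name, last name, and 5-digit zip code. Field names are hypothetical."""
    return (record["first"].strip().upper(),
            record["last"].strip().upper(),
            record["zip"][:5])

def match_records(irs_rows, county_rows):
    """Inner join of IRS Schedule A rows to county tax-roll rows on the
    (first name, last name, zip) key, analogous to the conditioned
    PROC SQL merge described above."""
    county_by_key = {}
    for row in county_rows:
        county_by_key.setdefault(parse_key(row), []).append(row)
    return [(i, c) for i in irs_rows
            for c in county_by_key.get(parse_key(i), [])]

# Hypothetical records: case, trailing spaces, and zip+4 still match
irs = [{"first": "Ann", "last": "Lee", "zip": "55401", "claimed": 3100.0}]
roll = [{"first": "ANN", "last": "Lee ", "zip": "55401-2345", "ad_valorem": 3000.0}]
pairs = match_records(irs, roll)
```

Normalizing case, whitespace, and zip+4 suffixes before joining is one way to reduce the parsing inconsistencies noted below, though it cannot eliminate them.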
Prior to the match, we controlled for taxpayers who own multiple properties within each of our selected jurisdictions by using a unique identifier for each taxpayer and subtotaling the taxpayers' ad-valorem and non-ad-valorem charges by that identifier. To the extent we were able, we used existing numerical identifiers in the data, such as property and account numbers, to produce a subtotal for each taxpayer. When numeric identifiers were not available in the data, we used the parsed name and address fields to create a unique identifier. After the PROC SQL merge, we controlled for duplicate records by keeping only those records in which the last name, first name, street address, city, state, and zip code matched. It is still possible that some duplicates exist in the data, since the name and address fields were recorded in disparate ways in the data we received from the counties. We used programming logic to parse the names; due to the inconsistencies in the name and address fields in the data, the name and address information may not have parsed the same way for all taxpayers. For each taxpayer that we were able to match to the county data, we compared the amount the taxpayer claimed as a real-estate tax deduction on the Schedule A return to the total ad-valorem amount each taxpayer was billed by the county and which was due in 2006. We then calculated the difference between the amount claimed on Schedule A and the ad-valorem portion of the amount billed by the county for each taxpayer. As indicated above, we worked with IRS to determine which charges billed by the county were deductible under federal tax law. The counties we selected for analysis did not maintain their tax data in a way that would allow us to itemize all of the charges, particularly the ad-valorem charges, on individuals' tax bills. 
As a result, we were not able to take into account ad-valorem charges that may not be deductible in our lower-bound computation of overstated real-estate tax deductions. Instead, we used the ad-valorem portion of the amount billed as a proxy for the deductible amount. While the proxy is imperfect, it is our understanding that the non-ad-valorem charges in our selected counties were not imposed at a uniform rate and thus did not appear to be deductible as taxes under Section 164 of the Internal Revenue Code. Given the limitations of the data, this approach allowed us to take into account those charges that are least likely to be deductible. Also, the approach produced a lower-bound computation of potential noncompliance in our two counties. We could produce only a lower-bound computation because noncompliance is uncertain for taxpayers whose IRS and local records we could not match. To develop the lower-bound computations of potential noncompliance, we excluded those taxpayers whose claimed deduction was greater than 1.15 times the total amount billed; this cutoff point accounts for taxpayers who may own multiple properties and therefore deduct on their federal tax returns a higher amount than is shown on the local tax bills. We also excluded taxpayers whose claimed deduction was less than the ad-valorem portion of the amount billed by the county (within a small margin of error), since we did not have conclusive data to determine whether such taxpayers held only a partial ownership interest in the real estate covered by the local bill. We then summed the difference between the claimed Schedule A deduction and the ad-valorem portion of the amount billed by the county to develop a lower-bound computation of noncompliance for the population of taxpayers in each county that we were able to match to the county data. 
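The exclusion rules and lower-bound summation described above can be sketched as follows. The field names, and the size of the small margin of error, are illustrative assumptions rather than GAO's exact parameters.

```python
def lower_bound_overstatement(taxpayers, margin=5.0, cap=1.15):
    """Sum of (claimed deduction - ad-valorem portion billed) over matched
    taxpayers, excluding (1) claims above 1.15 times the total billed,
    which may reflect multiple properties, and (2) claims below the
    ad-valorem portion within a small margin, which may reflect partial
    ownership. The $5 margin is a hypothetical stand-in for the report's
    unspecified 'small margin of error'."""
    total = 0.0
    for t in taxpayers:
        if t["claimed"] > cap * t["total_billed"]:
            continue  # possible multiple-property owner; excluded
        if t["claimed"] < t["ad_valorem"] - margin:
            continue  # possible partial ownership; excluded
        total += max(0.0, t["claimed"] - t["ad_valorem"])
    return total

# Hypothetical matched taxpayers
matched = [
    {"claimed": 3100.0, "ad_valorem": 3000.0, "total_billed": 3200.0},  # counted
    {"claimed": 5000.0, "ad_valorem": 3000.0, "total_billed": 3200.0},  # above cap
    {"claimed": 2000.0, "ad_valorem": 3000.0, "total_billed": 3200.0},  # below floor
]
overstatement = lower_bound_overstatement(matched)
```

Because both exclusions and the unmatched population can only remove potential overstatement from the sum, the result is a lower bound, as the text notes.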
For the purposes of our analysis, we created two separate categories for taxpayers who claimed a deduction approximately equal to the amount billed, up to 1.15 times the total amount billed. We defined taxpayers who claimed a deduction within $2 of the full amount billed, when the bill contained non-ad-valorem amounts, as "very likely overstated." We defined taxpayers who claimed a deduction that was no more than $1 below the total ad-valorem amount billed but less than 1.15 times the total amount billed as "likely overstated." In addition to the contact named above, Tom Short (Assistant Director), Paula Braun, Jessica Bryant-Bertail, Tara Carter, Hayley Crabb, Sara Daleski, Melanie Helser, Mollie Lemon, and Albert Sim made contributions to this report. Stuart Kauffman, John Mingus, Karen O'Conor, and Andrew Stephens also provided key assistance. 
Taxpayers who itemize federal income-tax deductions and whose local real-estate tax bills include nondeductible charges face challenges determining what real-estate taxes they can deduct on their federal income tax returns. Neither local-government tax bills nor mortgage-servicer documents identify what taxpayers can properly deduct. Without such information, determining deductibility can be complex and involve significant effort. While IRS guidance for taxpayers discusses what qualifies as deductible, it does not indicate that taxpayers may need to check both tax bills and other information sources to make the determination. In addition, tax software and paid preparers may not ensure that taxpayers only deduct qualified amounts. There are no reliable estimates for the extent of noncompliance caused by taxpayers claiming nondeductible charges, or the associated federal tax loss. However, GAO estimates that almost half of local governments nationwide included generally nondeductible charges on their bills. While the full extent of overstatement is unknown due to data limitations, GAO estimates that taxpayers in two counties collectively overstated their deductions by at least $23 million (or $46 million using broader matching criteria). IRS examinations of real-estate tax deductions focus more on whether the taxpayer owned the property and paid the taxes than whether the taxpayer claimed only deductible amounts, primarily because nondeductible charges are generally small. IRS guidance does not require examiners to request proof of deductibility or direct them to look for nondeductible charges on tax bills. Various options could improve compliance with the real-estate tax deduction, such as providing taxpayers with better guidance and more information, and increasing IRS enforcement. However, the lack of information regarding the extent of noncompliance and the associated tax loss makes it difficult to evaluate these options. 
If IRS obtained information on real-estate tax bill charges, it could find areas with potentially significant noncompliance and use targeted methods to reduce noncompliance in those areas. |
Appropriations for VA’s health care services are made through three separate appropriations accounts: Medical Services, Medical Support and Compliance, and Medical Facilities. VA allocates resources from these appropriations to its networks and medical centers for general purposes and specific purposes at the beginning of each fiscal year. Seventy-eight percent, or approximately $37.8 billion, of the nearly $48.2 billion in VA’s advance appropriations for health care services for fiscal year 2011 were allocated to VA’s 21 networks for general purpose patient care. VA also allocates resources to networks and medical centers for specific purposes, such as prosthetics, transplant care, and preventive and primary care initiatives. For fiscal year 2011, 22 percent, or approximately $10.4 billion, of VA’s advance appropriations for health care services were provided to networks and medical centers for specific purposes. Of its total health care resources, VA sets aside approximately $500 million at the beginning of each fiscal year to allocate to networks and medical centers as needed throughout the year to respond to contingencies and emergencies. VA uses the VERA system to allocate general purpose resources to its networks each fiscal year. Introduced in fiscal year 1997, VERA uses a national formula-driven approach that considers the number and type of veterans served (patient workload), the complexity of care provided (case- mix), and certain geographic factors, such as local labor costs, in determining how much each VA network should receive. VERA determines how much each network will receive according to each network’s activities and needs in the following areas: patient care, equipment, nonrecurring maintenance, education support, and research support. We have previously reported that VERA is a reasonably sound methodology for VA to allocate resources to networks, although we have made recommendations to improve the methodology, some of which VA has incorporated. 
VA assesses its VERA model annually to determine any needed changes to the model, which may include incorporating new components. Once VA applies VERA to determine how much networks will receive, networks determine how these resources will be allocated to their individual medical centers. (See fig. 1.) Networks do not provide resources directly to medical centers; rather VA headquarters retains responsibility for providing these resources based on network allocation decisions. Prior to fiscal year 2011, VA permitted networks to develop and use their own methodologies for determining how to allocate general purpose resources to medical centers. VA headquarters provided general guidance to networks on the principles they should use when determining their allocation methodologies. For fiscal year 2010, for example, VA’s guidance stated that networks were expected to allocate resources to medical centers in a manner that must, among other things, be readily understandable and result in predictable allocations, and support the goal of improving equitable access to care and ensure appropriate allocation of resources to facilities to meet that goal. Given the relative autonomy that the 21 networks have under VA’s decentralized health care system, they developed varying allocation methodologies. For example, networks varied in the factors they considered in determining medical center allocations. These factors included prior year funding, patient workload, performance, and facility square footage. Nonetheless, VA headquarters required networks to report descriptions of their allocation methodologies, including a description of how the methodology met VA’s guiding principles for network allocation. 
Each network was also required to report the total amount of resources it retained at the network level—the portion of network general purpose resources set aside before allocations were made to medical centers at the beginning of the fiscal year—such as resources for network operations, network initiatives, and emergencies. In fiscal year 2011, VA implemented a new resource allocation process that includes a standardized model for networks to use in allocating general purpose resources to their medical centers. The model was designed to provide consistency in the allocation process across networks and still allow networks the flexibility to make adjustments to medical center allocations. However, VA headquarters did not require networks to report reasons for all of the adjustments networks made to their medical centers’ allocations, which limits the transparency of networks’ allocation decisions. The new process involves three steps—first, VA headquarters proposes medical center allocation amounts to networks using a standardized resource allocation model; second, network officials review these amounts and can adjust them based on the needs of their medical centers that are not reflected in the initial allocation amounts proposed by headquarters; and third, after making any adjustments, networks report final medical center allocation amounts to VA headquarters in a consistent format. (See fig. 2.) Step One: VA Proposes Allocation Amounts. VA headquarters provides networks with a spreadsheet that includes a standardized model that proposes allocation amounts for each medical center. The model includes four main components covering different aspects of the resources needed for network and medical center operations, which combined determine the amount of resources allocated to each medical center. Resources Retained for Network Initiatives. 
The new model requires networks to report the amounts and purposes of all resources they do not allocate to medical centers at the beginning of the fiscal year. Networks retain and manage resources for network-level initiatives that are allocated to medical centers throughout the fiscal year, such as to offset start-up costs for new medical centers or clinics or for the network’s consolidation of services shared across medical centers including contracting services, accounting, and laundry. Additionally, networks retain resources for the administrative costs associated with operating the network, such as salaries for network employees. Historically, VA had asked networks to identify the amount of resources retained at the network level, but they did not ask networks to report the purposes of these resources. Resources Retained for the Network’s Emergency Reserve. Networks may retain resources in an emergency reserve to respond to medical center emergencies throughout the year. The new model limits the amount of resources retained by each network to respond to medical center emergencies to 1.5 percent of the total allocation amount. Networks may reduce the amount retained for emergencies, but they cannot exceed the 1.5 percent limit. Networks have used these resources to help cover unanticipated medical center costs, such as those associated with natural disasters, which required resources beyond what a medical center had been initially allocated. In our review of the fiscal year 2011 allocation models, the 21 networks’ reserve amounts ranged from about 0.1 percent to the 1.5 percent limit, with an average reserve amount of 1.2 percent. While networks were asked in prior years to report their emergency reserve amounts, VA had not required that this amount be reported separately from other resources retained at the network level, making it challenging in the past for VA headquarters to know how much the network retained in reserve specifically for emergencies. 
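The 1.5 percent reserve cap described above is a simple proportional test; the following sketch illustrates it with hypothetical dollar amounts.

```python
def reserve_within_cap(total_allocation, reserve, cap=0.015):
    """Check a network's emergency reserve against the fiscal year 2011
    rule: the reserve may not exceed 1.5 percent of the network's total
    allocation. Networks may retain less, but not more."""
    return reserve <= cap * total_allocation

# A hypothetical network with a $2 billion total allocation
ok = reserve_within_cap(2_000_000_000, 24_000_000)        # 1.2 percent
too_high = reserve_within_cap(2_000_000_000, 40_000_000)  # 2.0 percent
```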
In fiscal year 2010, 1 network did not put any resources in reserve, and the remaining networks’ reserves ranged from 0.2 percent to 3.7 percent of their total allocation. Furthermore, VA had not established a cap on emergency reserve amounts prior to fiscal year 2011.

Resources for Research Support, Education, and High Cost Patients. Under the new model, medical centers’ resources for research support, education, and high cost patients—patients whose treatment costs exceed a VA established threshold—are determined solely by VERA. VERA calculates these amounts based on specific medical center characteristics. For example, the amount of resources allocated to medical centers for education is based on the number of residents at each medical center in the current academic year. Although these amounts were also calculated using VERA in prior years, networks had the ability to adjust them. Under the new model, networks are no longer involved in determining how to allocate these resources, which allows VA headquarters to ensure that these resources are allocated consistently across all networks.

Resources for Patient Care Determined by a Standardized Measure of Workload. The new model uses a standardized measure of patient workload—which VA refers to as patient weighted work. Prior to fiscal year 2011, each network was allowed to use its own preferred workload measure, and the measures used ranged from a simple count of individual patients to a more complex statistical regression model. In fiscal year 2010, 9 of the 21 networks used a workload measure similar to patient weighted work. VA officials told us they chose patient weighted work because it establishes an equitable measure of workload among medical centers that vary significantly in their geographic location, and types and costs of services provided.
According to VA officials, the patient weighted work measure lessens the impact of cost differences between medical centers by recognizing the varying costs and levels of resource intensity associated with providing care for each patient at each VA medical center. For example, patient weighted work would result in more resources being allocated to a medical center that provides more complex care, such as open heart surgery, than a workload measure based solely on a count of each individual patient, which would not account for the additional costs associated with more complex care. Furthermore, officials told us that the patient weighted work measure is easily understandable by networks, medical centers, and stakeholders, such as veterans or VA employees.

Step Two: Networks Review and Can Adjust Proposed Medical Center Allocation Amounts. After receiving the spreadsheet from headquarters, network officials determine the allocation amounts for network initiatives and reserves, which affect the total amount of resources available for allocation to medical centers. Network officials then review and can make adjustments to the model’s proposed allocation amounts for medical centers, as needed. According to VA headquarters officials, these adjustments allow each network the flexibility to change the allocation amounts if they believe that certain medical centers’ resource needs are not appropriately accounted for in the model.

We reviewed the 21 networks’ allocation spreadsheets, including the adjustments networks made to the medical center allocation amounts proposed by the new model. In our review, we found that, for fiscal year 2011, of the 140 medical center allocations, 122 were adjusted from amounts proposed by the model—77 medical center allocations received an upward adjustment and 45 received a downward adjustment. The remaining 18 medical center allocations were not adjusted. (See fig. 3 for networks’ percentage adjustments to proposed allocations for medical centers.)

Officials from the six networks we interviewed told us that they adjusted the allocation amounts when they anticipated that one or more of their medical centers’ resource needs would not be met by the amounts proposed in the model. Without these adjustments, network officials believed that some medical centers may not have been able to maintain the level of medical services for veterans in their service areas as they did previously. For example, officials from one network told us they made adjustments that resulted in redistributing resources from other medical centers in the network to a rural medical center because the new model’s measure of workload would not have appropriately determined the resources that the rural medical center needed to operate. According to network officials, this medical center has several community-based outpatient clinics that have not been cost effective to operate but nonetheless provide critical access to care for rural veterans. Therefore, the network made adjustments to the amounts proposed in the model to ensure the medical center had sufficient resources to continue to provide access to veterans in these areas. Officials from another network told us that the fiscal year 2011 amount proposed in the model for one of its medical centers was 11 percent lower than the medical center’s fiscal year 2010 allocation, and for another medical center, the fiscal year 2011 allocation amount was 18 percent higher than the amount in fiscal year 2010. Network officials told us that the former medical center would not be able to absorb such a cut in resources without negatively impacting services offered, and the latter medical center would not be able to spend the additional resources it would have received under the model within the fiscal year.
Network officials told us that the model’s proposed decreases to allocation amounts for some of its medical centers may have been due to inefficiencies in medical center operations, but adjustments were necessary to ensure these medical centers got the resources they needed to continue to operate. Network officials stated that they will likely continue to make adjustments in the future, but they plan to work with their medical centers to increase their efficiency and ensure that their resource needs are more in line with what the model provides.

More generally, VA headquarters officials stated that some medical centers have resource needs that set them up to be recurring outliers to the model. For example, officials said that the outpatient clinic in Anchorage, Alaska, has significantly higher costs of care and therefore its resource needs likely will continue to exceed the amounts generated by the model. VA headquarters officials explained that the Anchorage clinic and other outlier medical centers exist to ensure equitable access to care for veterans in all areas and that VA therefore expects that some will incur unavoidable high costs. Network officials described certain factors contributing to particularly high costs at this clinic, including a heavy reliance on expensive community-based care and relatively high transportation costs associated with transporting patients from their homes to the medical center or between medical centers for more complex care or available services.

VA headquarters officials told us they expected networks to make adjustments to the amounts proposed in the model for fiscal year 2011, but they also expected networks’ allocations to medical centers to come closer to the amounts provided by the model over time. If certain medical centers continue to require significant adjustments to the amounts proposed in the model, this could be an indicator that the medical centers warrant further review or attention.
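The kind of outlier review described here could, in principle, be sketched as follows. The function names, the 10 percent threshold, and the data are assumptions for illustration, not VA's actual criteria; the example figures mirror the 11 percent and 18 percent adjustments mentioned above.

```python
# Illustrative sketch of flagging large adjustments to model-proposed
# allocations; names, threshold, and data are assumptions, not VA's process.
def pct_adjustment(proposed: float, final: float) -> float:
    """Percentage change from the model's proposed amount to the final amount."""
    return (final - proposed) / proposed * 100

def flag_outliers(allocations: dict, threshold: float = 10.0) -> list:
    """Return medical centers whose absolute adjustment exceeds the threshold."""
    return [center for center, (proposed, final) in allocations.items()
            if abs(pct_adjustment(proposed, final)) > threshold]

# One center 11 percent below the proposed amount, another 18 percent above,
# and a third within the threshold (amounts in millions, hypothetical).
allocations = {
    "Center A": (100.0, 89.0),
    "Center B": (100.0, 118.0),
    "Center C": (100.0, 102.0),
}
print(flag_outliers(allocations))  # flags Center A and Center B
```

Centers flagged repeatedly across years would correspond to the "recurring outliers" that, per VA officials, may warrant further review.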
VA officials said adjustment information could be used together with information from VA’s managerial accounting systems (designed to help identify areas for management improvement or redesign) to identify areas for improving the medical centers’ efficiency.

Step Three: Networks Report Final Allocation Amounts to VA Headquarters. Lastly, networks report to VA headquarters their final medical center allocation amounts, including the amounts and purposes of resources retained for network initiatives and the amount retained for network emergency reserves, using the original allocation spreadsheet that VA provided. Additionally, networks report any adjustments to the medical center allocation amounts proposed by the model. VA headquarters officials told us that the spreadsheets submitted by the networks provide headquarters with consistent information on all 21 networks’ medical center allocations to more easily track network allocation decisions.

However, the spreadsheet did not collect the reasons for each adjustment networks made to proposed allocation amounts. Although VA provided networks a list of acceptable rationales for adjustments for fiscal year 2011, VA did not require networks to report these rationales for their adjustments in the spreadsheet. As such, networks may have reported rationales for some adjustments, but networks also made adjustments for which they may not have reported a rationale. For example, one network reported detailed rationales for all adjustments to its medical center allocations (ranging from -13 percent to 34 percent), while another network did not report rationales for any of the adjustments it made to its medical center allocations (ranging from -14 percent to 17 percent).
Officials said that the new network resource allocation process was not intended to be used to question networks’ decision making, but rather to increase the transparency of networks’ allocation decisions to VA headquarters while maintaining network flexibility for allocation decisions. However, absent rationales from networks for each adjustment made to medical center allocation amounts, transparency for decisions made through the new allocation process is limited.

VA officials told us they have begun to review the fiscal year 2011 allocation process and intend to conduct annual assessments of the network allocation process. VA officials said that this review will help them identify potential ways to improve the model for fiscal year 2012. VA officials told us they plan to complete their assessment by the end of fiscal year 2011. Additionally, VA officials told us that they plan to conduct an assessment of the network allocation process each subsequent year, including a review of adjustments to the model, to identify areas for improvement. VA officials stated that understanding these adjustments is key to identifying potential areas where the model could be modified and to respond to changing health care needs.

VA monitors the general purpose resources networks have allocated to medical centers to ensure spending does not exceed allocations, but does not have written policies documenting these practices for monitoring resources. VA’s lack of written policies documenting its monitoring is inconsistent with internal control standards applicable to all federal agencies and could put the agency’s stewardship of federal dollars at risk. VA centrally monitors the resources networks have allocated to medical centers through two primary practices—(1) automated controls in its financial management system, and (2) regular reviews of network spending.
These practices help VA headquarters officials to manage VA resources to prevent network and medical center spending from exceeding their allocations and help to ensure that the agency does not violate the Antideficiency Act. By monitoring network and medical center resources throughout the fiscal year, VA is able to recognize additional or changing needs that might not have been apparent when resources were initially allocated, and to work with networks to realign resources as appropriate, within the limits of the respective appropriations—Medical Services, Medical Support and Compliance, and Medical Facilities.

Specifically, VA headquarters officials told us that the agency maintains a financial management system that has automated controls in place that prevent networks and medical centers from spending more than their available resources. VA’s financial management system electronically tracks the amount of resources that networks and medical centers have available—that is, the resources they were allocated, less the resources already spent. When medical centers want to spend some of their resources, they enter requests for the obligation of funds into the system. If the amount entered exceeds what is available to them, the request is rejected by the system and cannot be processed.

VA headquarters officials also told us they monitor resources by regularly reviewing network spending—which includes the total spending of all medical centers within a network. On a monthly basis, they monitor resources by comparing each network’s spending with its operating plan, which shows the network’s plan for its medical centers’ spending of resources for each month of the fiscal year, summarized at the network level and broken down by spending category—such as travel, personnel, and equipment costs—and appropriations account. Each network submits an operating plan to VA headquarters at the beginning of the fiscal year and revises the plan throughout the year as needed.
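The two monitoring practices can be illustrated with a short sketch: an automated control that rejects obligation requests exceeding available resources, and a review that flags spending categories straying from an operating plan. The class, function names, dollar figures, and the 10 percent significance threshold are all assumptions for illustration, not VA's actual financial management system.

```python
# Hypothetical sketch of the two monitoring practices described above; all
# names, data, and the threshold are illustrative assumptions.
class MedicalCenterAccount:
    """Tracks allocated vs. obligated resources for one medical center."""

    def __init__(self, allocated: float):
        self.allocated = allocated
        self.obligated = 0.0

    @property
    def available(self) -> float:
        # the resources allocated, less the resources already spent
        return self.allocated - self.obligated

    def request_obligation(self, amount: float) -> bool:
        """Automated control: reject any request exceeding available resources."""
        if amount > self.available:
            return False  # rejected by the system; cannot be processed
        self.obligated += amount
        return True

def significant_differences(plan: dict, actual: dict, threshold: float = 0.10):
    """Monthly review: flag categories where spending differs from the
    operating plan by more than the (assumed) threshold fraction."""
    return [(cat, planned, actual.get(cat, 0.0))
            for cat, planned in plan.items()
            if planned and abs(actual.get(cat, 0.0) - planned) / planned > threshold]

account = MedicalCenterAccount(allocated=500_000)
assert account.request_obligation(300_000)      # accepted
assert not account.request_obligation(250_000)  # exceeds the 200,000 available

plan = {"travel": 50_000, "personnel": 900_000, "equipment": 120_000}
actual = {"travel": 52_000, "personnel": 905_000, "equipment": 80_000}
print(significant_differences(plan, actual))  # equipment spending lags the plan
```

In practice, VA officials said they rely on experience and judgment rather than a fixed threshold like the one assumed here.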
VA headquarters officials told us they determine whether spending is on target with the operating plan or not, and whether any differences from the plan are significant. VA headquarters officials told us that if they find differences that are significant, they contact network officials to discuss the differences, such as if the network appears to be behind on its spending in a particular category, based on what the network planned to spend in its operating plan. For example, a network may mention that one of its medical centers has a large contract pending that will be awarded later in the year. VA headquarters officials told us that they do not have specific criteria for determining which differences between actual and planned spending warrant further investigation; rather, they rely on their experience and judgment to know when a network may be in danger financially, based on their review of the network’s spending and their regular communication with network officials. Officials also told us that they have biweekly teleconferences with network financial officers and meet with them in person on a quarterly basis to discuss any financial concerns.

However, VA does not have written policies documenting the agency’s practices for monitoring the resources networks allocate to medical centers. For example, VA does not have a written policy documenting that one of its primary practices for monitoring is the automated controls in its financial management system. In addition, VA does not have a written policy that states the overall purpose and specific objectives of its monthly reviews of network spending compared with each network’s operating plan. VA’s lack of written policies related to its monitoring of network and medical center resources is inconsistent with federal internal control standards and could put the agency’s stewardship of federal dollars at risk.
Internal control standards state that internal controls should be documented and all documentation should be properly managed and maintained, and readily available for examination. Such policies are an integral part of a federal agency’s stewardship of government resources. Without written policies that clearly define VA’s objectives for monitoring resources and document existing practices, there is an increased risk that these internal control activities may not be performed, may be performed inconsistently, or may not be continued when knowledgeable employees leave, which can lead to unreliable monitoring of VA network and medical center resources.

Although networks make decisions about how resources are allocated to medical centers, VA headquarters retains overall responsibility for oversight and management of VA’s resources, including the process networks use to allocate resources. To its credit, VA has taken steps to increase the transparency for how networks allocate resources to medical centers, while maintaining network flexibility for allocation decisions. However, to make network decisions more transparent to VA headquarters, and to achieve its goal of having networks’ allocations to medical centers come closer to the amounts proposed by VA’s resource allocation model over time, VA headquarters must understand the specific reasons for any adjustments that networks make to the model. Understanding why networks made adjustments is key in determining if any modifications to the model are needed for subsequent years. Further, evaluations of the model are important to determine the viability of the allocation model each year and serve as a platform for making annual modifications to it, where warranted.
VA’s plan to conduct annual assessments of the allocation process will provide it the opportunity to identify and implement any modifications to the model—as medical centers’ resource needs change over time—to ensure the process and its various components continue to be viable each year. In addition, VA’s current practices for monitoring help to ensure that network and medical center spending does not exceed allocations. However, without written policies to document its objectives for monitoring resources—including its existing practices—VA cannot ensure that monitoring will be performed consistently and reliably. For example, if current employees left the agency and new employees were asked to take on these monitoring activities, VA would not have policies to guide them. These new employees might be unable to perform these activities, or might perform them in a manner inconsistent with how the agency has performed them in the past, resulting in unreliable monitoring. Such possibilities could place VA’s stewardship of federal dollars at risk. By documenting this information in a manner consistent with federal internal control standards, VA would have greater assurance that the practices developed by the current leadership will be maintained during management changes over time.

To increase the transparency of the new network allocation process, and to ensure that internal control activities are performed and that the resources networks allocate to medical centers are monitored consistently and reliably, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions: require networks to provide rationales for all adjustments made to the allocation amounts proposed by the model in VA’s resource allocation process; and develop written policies, consistent with federal internal control standards, to formalize existing practices for monitoring resources networks have allocated to medical centers.
We provided a draft of this report to VA for comment. In its written comments, reproduced in appendix I, VA generally agreed with our conclusions, and concurred with our recommendations. VA stated that beginning in fiscal year 2012 the agency will require networks to provide rationales for all adjustments made to medical centers’ allocation amounts proposed by the new resource allocation model. VA also stated that beginning in fiscal year 2012 it will provide written guidance consistent with federal internal control standards to formalize its existing practices for monitoring resources networks allocate to medical centers.

We are sending copies of this report to the Secretary of Veterans Affairs and interested congressional committees. The report also will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made major contributions to this report are listed in appendix II. In addition to the contact named above, Janina Austin, Assistant Director; Jennie F. Apter; Jessica Morris; Lisa Motley; and Julie T. Stewart made key contributions to this report.

Through fiscal year 2010, the Department of Veterans Affairs (VA) permitted its 21 health care networks to develop their own methodologies for allocating resources to medical centers. These methodologies varied considerably. Concerned that network methodologies were not fully transparent to VA headquarters, in fiscal year 2011, VA implemented a new single process for all networks to use to determine allocations to medical centers. VA headquarters retains overall responsibility for oversight of VA’s resources, including ensuring networks do not spend more than the resources available.
GAO was asked to review how VA networks allocate resources and how VA oversees these resources once they are allocated. In this report, GAO describes (1) VA’s new process for networks to use in determining allocations to medical centers, and (2) how VA centrally monitors these resources. To do this work, GAO reviewed VA documents describing the new process and VA’s monitoring efforts, in light of federal internal control standards, and interviewed VA officials.

VA’s new resource allocation process uses a standardized model, but the transparency of networks’ decisions for allocating resources to medical centers is limited. The new process involves three steps. First, VA headquarters proposes medical center allocation amounts to networks using a standardized resource allocation model. The model includes a standardized measure of workload that recognizes the varying costs and levels of resource intensity associated with providing care for each patient at each medical center. Second, network officials review the proposed amounts and have the flexibility to adjust them if they believe that certain medical centers’ resource needs are not appropriately accounted for in the model. Third, networks report final medical center allocation amounts to VA headquarters and any adjustments made to the allocation amounts proposed by the model. VA headquarters did not ask networks to report reasons for each adjustment made to allocation amounts; networks reported reasons for some adjustments, but not for others. VA officials said that the new network resource allocation process was not intended to be used to question networks’ decision making, but to increase the transparency of networks’ allocation decisions to VA headquarters while maintaining network flexibility. However, absent rationales from networks on all adjustments made to medical center allocation amounts, transparency for decisions made through the allocation process is limited.
Furthermore, understanding why networks make adjustments is key in determining if any modifications to the model are needed for subsequent years. VA officials told GAO that they intend to conduct annual assessments of the new resource allocation process, including a review of adjustments to the model, to identify areas for improvement.

VA centrally monitors the resources networks have allocated to medical centers to ensure spending does not exceed allocations, but does not have written policies documenting these practices for monitoring resources. VA monitors resources through two primary practices--automated controls in its financial management system and regular reviews of network spending. Specifically, VA’s financial management system electronically tracks the amount of resources that networks and medical centers have available--the resources allocated, less the resources already spent--and prevents medical centers from spending more than what they have available by rejecting spending requests in excess of available resources. In addition, each month VA headquarters officials compare each network’s spending with what the network planned to spend and determine whether spending is on target, and whether any differences from the plan are significant. However, VA headquarters does not have written policies documenting the agency’s practices for monitoring resources, which is not consistent with federal internal control standards. These standards state that internal controls should be documented, and all documentation should be properly managed, maintained, and readily available for examination. Without written policies, there is an increased risk of inconsistent monitoring of VA network and medical center resources.

GAO recommends that VA (1) require networks to provide rationales for all adjustments made to allocations proposed by VA’s resource allocation model, and (2) develop written policies to document practices for monitoring resources.
VA concurred with these recommendations.
Under CERCLA, the parties responsible for releasing hazardous substances into the environment are liable for their cleanup. The cleanup of hazardous waste sites is administered by the Environmental Protection Agency (EPA) under its Superfund program, which is financed mainly by taxes on corporate income, crude oil, and certain chemicals. EPA places the most dangerous sites on the Superfund National Priorities List (NPL) for cleanup actions. As of September 1995, there were 1,290 sites on the NPL.

In addition to imposing cleanup obligations, CERCLA makes responsible parties liable for the costs of restoring injuries to natural resources resulting from a hazardous substance release. These resources are defined broadly under the law to include land, fish, wildlife, groundwater, and other resources belonging to, managed by, or otherwise controlled by federal or other governmental entities.

Only natural resource trustees can file suits under CERCLA against parties responsible for injuring natural resources. The law and its implementing regulations designate federal, state, and tribal authorities as trustees for natural resources. The Department of the Interior (Interior) and the National Oceanic and Atmospheric Administration (NOAA) are the two principal federal trustees for natural resources. Other federal agencies, such as the departments of Agriculture, Defense, and Energy, are the trustees for natural resources on the lands that they manage. States have traditionally acted as trustees for groundwater; the lands they own (e.g., state parks and forests); and fish, game, and other wildlife. Under CERCLA and implementing regulations, Indian tribes have certain responsibilities as natural resource trustees. Although trustees’ responsibilities for natural resources are not always exclusive and can overlap, damages cannot be recovered by more than one trustee for injuries to the same resource by the same release.
Thus, federal, state, and tribal trustees often coordinate their natural resource damage claims. Superfund money may not be used to restore injuries to natural resources or to conduct natural resource damage assessments. Instead, the trustees may recover monetary compensation (damages) from responsible parties to restore natural resources and to pay for the reasonable costs of assessing any damage to natural resources.

Several factors limit recoveries for natural resource injuries, according to Interior officials. First, injuries must be traced to particular releases of hazardous substances; second, a viable and solvent responsible party must be found; third, the claim must be filed within the statute of limitations; and fourth, a federal agency must have the financial resources available to assess the damage and develop the information necessary to support a claim. Furthermore, Department of Justice (Justice) officials state that the level of appropriations to fund federal natural resource damage programs is the single most important factor in determining how many sites can be assessed for damages.

For a site being cleaned up under CERCLA, the trustees can seek damages only for injuries that remain after the cleanup has been completed, according to Justice officials. Residual injuries occur when (1) a cleanup leaves significant contamination in the environment or (2) animal populations have been reduced or wildlife habitat has been destroyed and cannot recover quickly without human intervention. The federal trustees estimate that as of May 1995, the total compensation for residual natural resource injuries at all Superfund sites on the National Priorities List has been less than 1 percent of the total cost to clean up the sites.
A natural resource damage claim has three basic components: the necessary and reasonable costs of performing the damage assessment; the costs of restoring the resource to the condition that would have existed at the time of the injury (restoration costs), taking into consideration the effects over time of natural and human activities unrelated to the release of contamination; and the costs associated with the loss of resources or of the benefits/services derived from such resources (e.g., a wetland’s provision of habitat for animals and birds or a body of water’s provision of commercial or recreational fishing opportunities) from the date of the injury until the full restoration of the resources and/or services (referred to as interim lost values).

According to Interior and NOAA officials, the majority of natural resource damage cases involving federal trustees are settled as part of the cleanup agreement negotiated by EPA. Almost half of the settlements require the responsible party to make no separate payment for natural resource damages either because the negotiated cleanup will correct the injury to the natural resource or because no such injuries were found. Justice reports that through the end of April 1995, federal trustees had settled 98 natural resource damage cases for a total of $106 million. Of these settlements, 48 required no payment and the remaining 50 involved monetary recoveries ranging from about $4,000 to $24 million.

At our request, Interior and NOAA officials developed preliminary estimates of the number of sites where natural resource damage claims involving federal trustees may ultimately reach $5 million or more. The agencies estimate that 60 sites may eventually have claims for damages to natural resources that will equal or exceed $5 million and that up to 20 of these sites may have claims exceeding $50 million. Sixty sites represent less than 5 percent of the current number of Superfund sites.
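The three basic claim components described above simply sum to the total claim. A minimal arithmetic sketch, with purely illustrative dollar figures:

```python
# Hedged sketch of the three components of a natural resource damage claim;
# the function name and dollar figures are illustrative, not from any case.
def total_claim(assessment_costs: int,
                restoration_costs: int,
                interim_lost_values: int) -> int:
    """Sum the three basic components of a damage claim."""
    return assessment_costs + restoration_costs + interim_lost_values

claim = total_claim(assessment_costs=2_000_000,
                    restoration_costs=15_000_000,
                    interim_lost_values=5_000_000)
assert claim == 22_000_000
```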
Interior and NOAA officials cautioned that their projections are very preliminary and could change for a variety of reasons. Most importantly, as table 1 shows, detailed studies to assess the injuries to natural resources have not even begun at more than half (31) of the 60 sites estimated to have claims of $5 million or more. Furthermore, most of the sites have not been evaluated to determine whether natural resource losses can be traced to specific releases of hazardous substances and whether the parties responsible for these releases are capable of paying damages—prerequisites to pursuing natural resource damage claims. Another factor affecting agencies’ ability to make projections is that many sites will be cleaned up under the Superfund program, so that until EPA determines the scope of its cleanup efforts, the agencies do not know what, if any, residual resource damage will remain to be addressed through a claim. Finally, the value of these claims may ultimately differ from the initial estimates because the claims may be settled through negotiations with responsible parties. To date, almost all natural resource damage claims have been settled without litigation.

Together, the five largest natural resource damage settlements—Elliott Bay in Seattle, Washington; Commencement Bay in Tacoma, Washington; New Bedford Harbor on the Acushnet River in Massachusetts; Montrose, located offshore of Los Angeles County, California; and the Cantara Loop Train Derailment outside of Dunsmuir, California—totaled $83.8 million, about four-fifths of the total dollar value of all 98 settlements reached as of April 1995. Through July 1995, about 40 percent of the moneys for the five settlements had been collected from the responsible parties. Of these collections, about 11 percent had been disbursed either to reimburse trustees for completed damage assessments or to pay for planning natural resource restorations.
However, no other restoration actions had been taken with the moneys collected. Collections and disbursements are governed by settlement agreements. Although some of the funds collected from responsible parties are paid directly to the trustees to reimburse them for the costs they incurred in performing damage assessments, most of the funds usually reside in court-administered registry accounts until the trustees are ready to use them. Frequently, settlements are structured so that payments may take place over a period of years. Additionally, CERCLA requires that all participating parties agree to a restoration plan requiring extensive public review before the restoration can begin. For each of the five cases, restoration planning was taking place at the time of our review. Settlement dates ranged from December 1991 to March 1994. The reasons that restoration had not yet begun included the need at all sites to develop and obtain public comments on a restoration plan; unexpected cleanup problems at New Bedford, which hampered the planning process; and intervening lawsuits at Cantara Loop, which postponed the disbursement of collected funds. Table 2 summarizes the amounts collected and disbursed for the five largest settlements as of July 1995. The settlements are arranged by age, from the oldest to the most recent. (App. I describes the status of restoration activities for each settlement.) CERCLA does not require the trustees to use a particular standard or method for assessing natural resource damages. It did, however, direct Interior to develop standardized procedures for all trustees to consider in assessing and valuing injuries to natural resources. Accordingly, the regulations include two procedures for valuing natural resource injuries, but the trustees are not required to use these procedures. Because one procedure is limited in scope and the other procedure can be costly and time-consuming to implement, the trustees seldom fully implement either one. 
Instead, according to Interior and NOAA officials, the trustees most often use an abbreviated procedure that employs readily available site-specific information and scientific literature to quantify damages. CERCLA directs that the assessment process identify the best available procedures to determine damages, including both direct and indirect injuries, and take into consideration the ability of the ecosystem to recover on its own. CERCLA further states that the measure of injuries need not be limited by the sums required to restore or replace such resources. For example, the value of a particular service or benefit that was lost to the public while the resource was injured may also be calculated and collected. In response to CERCLA’s requirements, Interior developed two valuation procedures: a simplified assessment process that requires the use of minimal data (“type A”) and a detailed process that requires extensive site-specific data (“type B”). The use of these damage assessment procedures is optional. If the trustees elect to implement these procedures fully, they are granted a legal presumption of correctness in a court of law that shifts the burden to the defendants to prove otherwise. NOAA officials said that this rebuttable presumption is of limited value, since the trustees still must prove their case. Furthermore, since all but a few cases had been settled without litigation as of December 1995, the trustees have not had to take the time and incur the expense needed to implement these procedures fully. According to NOAA, Interior, and Justice officials, full implementation of the type B procedure is most often not necessary because settlements can be reached without it, or it is impractical because of the cost and time involved. According to Interior officials, the trustees use elements of the procedures to the extent necessary to reach a settlement in a cost-effective manner. 
The type A procedure provides standard methods for conducting simplified natural resource damage assessments through computer modeling. As of December 1995, only one computer model had been developed for the type A procedure. This model can be used only for small incidents of limited duration (e.g., one-time spills) that occur in coastal and marine environments. The model consists of programs to perform mathematical computations and databases containing chemical, biological, and economic information. Although the model requires minimal use of actual field data because it is based on general assumptions, it can be used to assess the injuries to natural resources, quantify these injuries (e.g., the number of fish killed or acres of wetlands contaminated), and determine the damages from many types of discharges or releases. Interior has proposed adding a model for the Great Lakes region to the type A regulations. This model will also be appropriate only for small, one-time incidents. Federal trustees said they rarely use the type A approach for CERCLA claims because it applies to few CERCLA damage cases. It has greater application for oil spills, which are addressed under a separate law—the Oil Pollution Act. As of July 1995, NOAA, the primary federal trustee for resources in coastal waters, had used this model to quantify damages at only one site. For a detailed description of this case, see appendix II. The type B procedure provides a set of detailed guidelines for conducting extensive site-specific studies to assess the extent of the injury and to value the damages. This procedure can involve the use of various evaluation methods and techniques. For example, the regulations specify various methods for quantifying interim values for lost use. One such technique is the travel cost analysis, which estimates the costs of the travel and extra time required to go to an alternative site rather than the injured site for a purpose such as fishing. 
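As a rough illustration of the travel cost approach described above, the following Python sketch computes a hypothetical interim lost-use figure. Every rate and count here is an assumption for illustration, not drawn from any actual case or from the regulations:

```python
# Hypothetical travel-cost estimate of interim lost use for a closed
# fishing site. All figures are illustrative assumptions.

def travel_cost_loss(trips_per_year, extra_miles, cost_per_mile,
                     extra_hours, value_per_hour, years_closed):
    """Extra cost users incur to reach a substitute site while the
    injured site is unusable."""
    per_trip = extra_miles * cost_per_mile + extra_hours * value_per_hour
    return trips_per_year * per_trip * years_closed

# 10,000 trips per year diverted 30 extra miles (at $0.30/mile) and
# 1 extra hour (valued at $10/hour) over 3 years of closure.
loss = travel_cost_loss(10_000, 30, 0.30, 1, 10.0, 3)
print(f"${loss:,.0f}")  # $570,000
```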
Trustees can also use a technique referred to as the contingent valuation method. This method, which is not often used by federal trustees, employs public opinion surveys to establish a dollar value for natural resources that do not have an established market value. For example, if contamination from past mining had contributed to reducing or destroying the salmon population in a stream, members of the public would be asked what price they would be willing to pay to have that stream restored to a condition that would allow the return of salmon. Interior and NOAA officials said they seldom use the type B procedure fully because of the expense and time—usually several years—required to perform such studies. Federal officials said that they did not believe that a full type B assessment had ever been performed, but they identified five sites where the procedure had been most fully pursued. An illustration of the type B procedure appears in appendix III. Federal trustees most often use an abbreviated type B procedure to quantify damages. Under this process, they follow the basic steps of the type B procedure—determining the injuries, quantifying their value, and determining the damages. However, instead of employing the time-consuming and costly site-specific surveys and analyses required by the type B procedure, they use readily available off-the-shelf literature and other information to value damages using various evaluation techniques. The abbreviated approach is commonly used when, during a negotiation with EPA, a private party wants to settle its liability for both cleanup costs and natural resource damages at the same time. In such situations, EPA or Justice notifies the trustees of the party’s request. The trustees then typically have about 2 to 3 months to assess any injury to natural resources at the site, quantify the government’s claim, and, if possible, obtain a mutually satisfactory settlement agreement with the responsible party. 
To meet this time frame, the trustees use an abbreviated approach that draws on readily available site-specific and other information to quantify the damages. A 1991 settlement illustrates the use of the abbreviated process in the context of settling a party’s liability for natural resource damages as part of the cleanup settlement. In this case, a solvent recovery firm was a responsible party at two different sites, both of which are included on the NPL. The natural resource damage settlement came about after the responsible party asked to resolve its liability for natural resource damages at the same time as it settled its liability for cleanup costs. After being notified of the responsible party’s request, a Fish and Wildlife Service field biologist began to review available information about the potential injuries to resources at the sites. The field biologist identified data that had been gathered from the sites as part of the investigation to identify the appropriate cleanup remedies. These data were sufficient to show that injuries had occurred to federal and state trust resources. The biologist combined the data with other readily available information to quantify the damage using a relatively new technique, the habitat equivalency analysis. This analysis calculates the acreage needed to replace the services that were lost when the habitat was injured rather than calculating the dollar value of the loss, as is usually done. Using this method, the field biologist calculated that 17.5 acres of rare dune and swale lands and 31 acres of wetlands were needed to replace the injured resources. We transmitted copies of a draft of this report to the Secretary of the Interior, the Secretary of Commerce, and the Attorney General for their review and comment. Although the agencies did not disagree with the facts presented in the draft report, they wanted to emphasize information associated with three issues. Their general comments appear in appendixes V through VII. 
In addition, the three agencies provided technical and editorial comments, which we incorporated into the report as appropriate. We did not reproduce these comments in the appendixes. The first issue involves the potential for future natural resource damage claims. Interior stressed in its comments that the projected number of sites having natural resource damage claims in excess of $5 million represents a maximum number and that the actual number would likely be smaller. We have qualified our description of the estimate to indicate that it represents an upper bound. The second issue involves the use of the funds collected from natural resource damage settlements. All three agencies said that there are site-specific and legal reasons, beyond the control of the trustees, why restoration has not started at the five largest settlement sites. The agencies pointed out that a small experimental restoration project had begun at Commencement Bay. Interior stated that “restoration planning” is an essential part of the restoration process and, as such, should be reported as a restoration action. We believe it is useful, when describing the status of the program, to distinguish between restoration planning and restoration action. Interior also stated that it is misleading to compare the total collections for the five largest settlements with these settlements’ total value because most of the collections resulted from one settlement. We believe that it is appropriate to present summary figures to indicate the overall status of the five cases, and we have also shown the collections and value for each settlement so that the summary figures can be properly interpreted. The third issue involves the procedures used by the trustees to develop natural resource damage claims. Both Interior and NOAA said that the settlement process is based on selecting appropriate elements of the assessment procedures provided in the regulations. 
Evaluating whether the agencies were making “appropriate” selections from the regulations was beyond the scope of our review. Interior said that for relatively minor cases, the type B procedure is not necessarily costly and time-consuming. We have added this qualification to our discussion of the type B procedure. We conducted our review from July 1995 through February 1996 in accordance with generally accepted government auditing standards. See appendix IV for further discussion of our scope and methodology. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies to the Secretary of the Interior, the Secretary of Commerce, and the Attorney General. We will make copies available to others upon request. Please call me at (202) 512-6112 if you or your staff have any questions. Major contributors to this report are listed in appendix VIII. Elliott Bay is a 21-square-kilometer area in central Puget Sound encompassing the commercial waterfront district of Seattle. (See fig. I.1.) Over the past 150 years, Elliott Bay and the adjoining Duwamish Waterway estuary have been contaminated by many hazardous substances, including chromium, cadmium, copper, lead, zinc, and several toxic and/or carcinogenic organic compounds, such as polychlorinated biphenyls (PCB). These pollutants have extensively contaminated nearshore sediments, reducing the value of the area as a habitat for fish and wildlife. In 1991, the natural resource trustees—including the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA), the Department of the Interior (Interior), the state of Washington, and area Indian tribes—reached a $24.3 million legal settlement with the city of Seattle and the municipality of Metropolitan Seattle, both of which had contributed to the contamination. 
The settlement allocated $12 million for remediating sediments, $10 million for developing habitat, $2 million for controlling pollution sources, and $250,000 for reimbursing NOAA for damage assessment costs. As of July 1995, $3 million of the $24.3 million settlement had been collected. Of this amount, $0.7 million had been disbursed. The Panel of Managers—which, in this case, included both the trustees and the responsible parties—developed a restoration plan that was completed in June 1994. This plan requires cleaning up the bay’s contaminated sediments and also studying sediment recontamination patterns to ensure the success of planned habitat development projects. In July 1995, the Elliott Bay Waterfront Recontamination Study was completed. This study will form the basis of an effort to remediate the contaminated sediments. In addition, the panel had screened all possible habitat restoration sites and was acquiring the properties. As of December 1995, the panel was investigating sites for sediment remediation efforts. Approximately 2,000 metric tons of DDT and PCBs were discharged into the southern California marine environment by various industrial companies through the local county sewer system. (See fig. I.2.) The state of California issued a health advisory against the consumption of fish from the area because of dangerous concentrations of DDT and PCBs, and a commercial fishery was closed. In June 1990, the Department of Justice (Justice) filed a claim, collectively referred to as “Montrose,” on behalf of NOAA and Interior against the 10 responsible parties, for injuries to natural resources caused by discharges of DDT and PCBs into the marine environment. In May 1992, the federal and state trustees settled one case with some responsible parties for $12 million. 
In March 1995, a federal court of appeals overturned a second $42.2 million settlement between the trustees and the Los Angeles County sanitation district and municipalities and sent the settlement back to the federal district court for reconsideration. As of December 1995, this decision was still under litigation. In the meantime, according to Interior officials, the trustees are proceeding with the preliminary restoration plan. They anticipate modifying the plan as remediation actions are completed or more settlements are obtained. According to Justice officials, these future settlements may be substantial. For the case that has been settled for $12 million, $8.1 million has been collected, $1.4 million of which has been disbursed. The money was used to reimburse some of the trustees’ past damage assessment costs. The New Bedford Harbor case was one of the first natural resource damage cases filed under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA). Located on the Acushnet River, near Buzzards Bay, Massachusetts, the harbor has long been used by the fishing, shipping, and manufacturing industries. (See fig. I.3.) After studies during the 1970s found high levels of PCBs and heavy metals in the harbor’s fish and shellfish, several fishing areas were closed. By the end of 1992, the federal and state trustees had reached a $20.2 million settlement with five companies to cover the costs of the natural resource damage assessment and restoration. The companies had also agreed to an $88 million Superfund cleanup settlement with the Environmental Protection Agency (EPA) and the state. The nature of the natural resource restoration work is contingent upon the scope of the cleanup remedy that EPA selects for the outer harbor. Restoration projects under consideration by the trustees include, but are not limited to, improving anadromous fish runs, reestablishing seagrass beds, creating wetlands, and constructing artificial reefs. 
As of July 1995, all of the $20.2 million settlement had been collected and $0.5 million had been disbursed for restoration planning. According to NOAA officials, restoration planning has been delayed because of the uncertainty over EPA’s cleanup plans. EPA’s record of decision for the cleanup and disposal of the most contaminated sediments had to be renegotiated when the community opposed the incineration of contaminated sediments. The community’s challenge led to a delay in planning and cleaning the remaining contaminated sediments. Nevertheless, the trustees are going forward with the restoration plan, which they say can be modified if EPA’s actions interfere with the trustees’ restoration activities. As of December 1995, the trustees had asked the public to suggest ideas for restoration. These ideas are expected to be rank-ordered and included as alternatives in the restoration plan, which the trustees expect to release for public comment by the summer of 1996. Commencement Bay is an estuarine bay located in the southern part of Puget Sound in Tacoma, Washington. (See fig. I.4.) Industrialization and urban development have severely degraded natural habitats in the bay by introducing a variety of hazardous substances into the surface water and groundwater and the sediments of the bay area. Much of the bay’s nearshore area is a federal Superfund site. Federal, state, and tribal trustees negotiated a natural resource damage settlement with the Port of Tacoma (Oct. 1993) and the Simpson Tacoma Kraft Company (Dec. 1991)—both of which contributed to natural resource losses—for a total of about $13 million. Moneys from the settlement will be used to restore, replace, or acquire equivalent components of the historical ecosystem, including vegetated shallows, mudflats, tidal marshes and creeks, off-channel sloughs and lagoons, naturalized stream channels, and adjacent upland buffer areas. 
Of the $13.3 million settlement, $2.6 million had been collected and about $1.0 million had been disbursed as of July 1995. The disbursements have been primarily to the trustees to reimburse their expenditures for past damage assessment activities and to develop the baywide restoration plan. In addition, as part of the settlement, one of the responsible parties agreed to conduct a pilot restoration project to convert upland industrial property into wildlife habitat. The results of the pilot project will be used to develop the baywide restoration plan. Although this project was only 1.5 months old at the time of our visit in July 1995, local Interior officials had already noted a 10-percent increase in wildlife populations. The Commencement Bay trustees are attempting to assess the natural resource damage and plan the restoration while EPA is still cleaning the site. In addition, not all parties have settled. For example, according to a NOAA official, one of the largest potential sources of pollution is a smelting plant that is currently negotiating its responsibility for Superfund cleanup activities with EPA. The cleanup may not be completed for another 5 years. The trustees are continuing to discuss settlements with other responsible parties and reported in December 1995 that they were actively negotiating settlements with three different sets of parties. Justice officials believe that future settlements may be substantial. Because other natural resource damage settlements are not expected for several more years, the trustees are developing a baywide restoration plan that can be implemented as sediments are remediated and/or funds become available. As of December 1995, this plan was in draft, and the trustees expected to circulate it for public comment in the spring of 1996. In July 1991, a train derailed on a stretch of track known as the “Cantara Loop” near Dunsmuir, California. (See fig. I.5.) 
The derailment spilled approximately 19,000 gallons of the herbicide metam sodium into the upper Sacramento River. The spill destroyed all aquatic life along a 42-mile stretch of the river and caused extensive injuries to a native trout fishery as well as to the river’s ecosystem. A claim for natural resource damages was filed by the state of California and Justice. The responsible parties settled with California and Justice—on behalf of Interior, the U.S. Department of Agriculture, and EPA—for $38 million in 1994, using CERCLA and other federal and state laws. According to a senior attorney at Justice overseeing the settlement, the $38 million included $14 million under CERCLA’s natural resource damage provisions, $5 million under CERCLA’s emergency restoration provisions, and $19 million under the Clean Water Act, other parts of CERCLA, and various California state laws. The settlement created the Cantara Trustee Council consisting of five voting members—four from California state agencies and one from the Fish and Wildlife Service representing Interior. According to Justice officials, as of July 1995, none of the $14 million recovered under CERCLA’s natural resource damage provisions had been deposited into the trustee account, and therefore none had been disbursed. Although, according to the official in charge of the restoration in California’s Department of Fish and Game, $16 million of the total $38 million Cantara Loop settlement had been collected by July 1995, these funds were frozen by the court pending the resolution of an additional lawsuit filed by environmental organizations seeking a greater role in the restoration. In November 1995, the plaintiffs in the suit settled their complaints, and the funds will be made available to the trustees early in 1996. The Cantara Trustee Council met for the first time in November 1995. 
According to the Cantara program supervisor with the California Department of Fish and Game, as of December 1995, most elements of the Sacramento River ecosystem are recovering without any further special restoration efforts. In November 1995, the Council announced that it would use the $14 million to fund grants for restoration projects rather than develop an in-house restoration program. According to terms agreed upon by the Council, projects that directly affect the upper Sacramento River ecosystem will receive a higher weighted score. However, the trustees may use the money to develop natural resource restoration projects in other areas of the state. The Council plans to choose the project(s) in March 1996 and begin implementation in April 1996. As of July 1995, NOAA, the primary federal trustee for natural resources in coastal waters, had used the type A procedure once in settling a natural resource damage claim under CERCLA. This case involved a ship’s loss of 21 shipping containers, four of which held 25-gallon drums of arsenic trioxide, a highly poisonous metal oxide that is used as an insecticide, herbicide, and wood preservative. A single dose, the size of an aspirin, is lethal to humans. The incident occurred in January 1992 off the coast of New Jersey in an area that is used for commercial and recreational fishing. Although sampling ultimately showed only background levels of arsenic in the water and sediment, a 16-square-mile area was closed to all fishing activities for 180 days because of the potential for seafood contamination. NOAA, as the federal trustee, concluded that the evidence of injury to its trust resources was not sufficient to warrant a claim for biological injuries. However, the agency determined that it did have a claim for the fishery’s closure. To value this claim, NOAA entered data into the type A model about the extent and duration of the fishery’s closure. 
The result was a claim of approximately $280,000 for the lost harvest of fish and shellfish from this area. NOAA and the responsible party settled the case for $205,000, which included reimbursement of the assessment’s cost. The complexity of the type B damage assessment procedure is illustrated by the state of Idaho’s actions in 1983 at the inactive Blackbird Mine site, located on national forest lands within the state. The federal claims were filed by Justice in 1993 on behalf of NOAA, the Forest Service, and EPA. Copper, cobalt, and other heavy metals from mining activities at this site have extensively contaminated groundwater and surface water, including 26 miles of the Panther Creek, a tributary of the Salmon River. To perform the assessment, the trustees conducted a series of technical and economic studies to determine the extent of the injury to natural resources, quantify the damages, and develop a plan to restore the injured resources. For example, NOAA commissioned an expert study to identify the effects of the mine’s contamination on the sediments and small animals in the streambeds of the Panther Creek watershed. Part of this study involved taking samples at 16 sites to show the conditions both upstream and downstream of the contamination. The agency also paid consultants to study injuries to fish. These studies found toxic responses (including death) when salmon were exposed to water quality conditions similar to those found at the site. The trustees settled the case in September 1995. Although this settlement is valued at more than $60 million, the only cash payment required from the potentially responsible party (PRP) is approximately $8 million for restoration and reimbursement of past damage assessment costs. The remainder of the settlement is the value of the PRP’s in-kind cleanup and restoration work. 
The largest portion of the in-kind work is the agreement that the PRP will restore the water quality to support all life stages of the salmon by the year 2002—valued at about $57 million by the trustees. To determine the number of future federal natural resource damage claims, we interviewed officials at Interior and NOAA. After we discovered that this information was not readily available, Interior offered to survey the agency’s regional offices in order to estimate this number. From the survey, Interior developed a list of sites that it believes may have claims ranging from $5 million to $50 million and over $50 million. NOAA and Justice then reviewed this list for possible overlaps and/or omissions. In addition, we interviewed the Chief of the Mining Section in EPA’s Office of Solid Waste and representatives of the Western Governors Association, the National Association of Attorneys General, and the Mineral Policy Center. To obtain information on how settlement dollars are being collected and spent, we focused on the top five CERCLA settlements involving federal agencies, since they accounted for nearly 80 percent of the settlement dollars that Justice had identified as of April 1995. This approach emphasizes larger and possibly more complicated and time-consuming restorations. However, since the information on the smaller settlements resides predominantly with Interior, whose operations are decentralized over numerous field offices, we decided to concentrate our efforts more cost-efficiently on the largest settlements. NOAA, as the lead trustee for four of the five settlements, provided the financial backup records, disbursement request forms, consent decrees, and memorandums of agreement for these settlements. For the fifth settlement, Cantara Loop, which was led by the state of California, the California Department of Fish and Game and the California Attorney General’s Office provided information on the status of the settlement and restoration activities. 
We interviewed both headquarters and field office trustees for the five sites. We visited Elliott Bay in Seattle, and Commencement Bay in Tacoma, Washington. To obtain the most up-to-date information, we contacted the lead trustees in the field at the five sites as late in the data collection phase of this study as possible. Therefore, all restoration activities are reported as of December 1995. In identifying the approaches the trustees used to develop their natural resource damage claims, we reviewed the regulations for implementing CERCLA as well as other documents for developing damage claims. Interior and NOAA briefed us on their methods and explained how they had developed the claims for four sites. We also reviewed the documents related to these cases. Stewart O. Seman, Senior Evaluator 
Pursuant to a congressional request, GAO reviewed natural resource damage provisions of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), focusing on: (1) the prospect for future claims by the federal government against polluters for natural resources damage; (2) the amount and use of settlements that federal agencies have already collected from polluters; and (3) the guidelines used by federal agencies to determine an appropriate amount of compensation for natural resources damage. GAO found that: (1) the Department of the Interior and the National Oceanic and Atmospheric Administration estimate that 60 sites may eventually have claims for damages to natural resources that will equal or exceed $5 million and that up to 20 of these sites may have claims exceeding $50 million; (2) of the $83.8 million owed to the federal government for the five largest natural resource damage settlements, only about 40 percent has been collected, and 11 percent of these funds has been spent; (3) the collected money has been used to reimburse trustees for completed damage assessments and to pay for natural resource restoration plans; (4) while CERCLA did not require standards or methods to determine damages, it directed Interior to develop consistent procedures that trustees should consider when assessing damages to natural resources; (5) trustees rarely use either of the procedures, since one is limited in scope and the other is costly and time-consuming; and (6) the 95 settlements reached as of April 1995 used abbreviated procedures that use readily available site-specific information and scientific literature to quantify damages.
Prior to the early 1970s, the federal government provided affordable multifamily housing to low- and moderate-income households by subsidizing the production of either privately owned housing or government-owned public housing. Under the production programs, the subsidy is tied to the unit (project-based), and tenants benefit from reduced rents while living in the subsidized unit. These programs include Section 202, Section 221(d)(3) BMIR, and Section 236. A portion of the units in properties developed under these production programs received rental assistance under programs such as Rent Supplement, Rental Assistance Payments (RAP), and project-based Section 8 in order to reach lower-income tenants. In the early 1970s, questions were raised about the production programs’ effectiveness: many moderate-income tenants benefited from federal assistance, while lower-income families did not; federal costs of producing housing exceeded the private-sector costs to produce the same services; and allegations of waste surfaced. Interest in a more cost-effective approach led Congress to explore options for using existing housing to shelter low-income tenants. The Housing and Community Development Act of 1974, a major overhaul of housing laws, included both approaches—a project-based new construction and substantial rehabilitation program and a tenant-based rent certificate program for use in existing housing (currently named the Housing Choice Voucher program)—both referred to as Section 8 housing. Project-based and tenant-based Section 8 assistance is targeted to tenants with incomes no greater than 80 percent of area median income, and tenants generally pay rent equal to 30 percent of adjusted household income. Beginning in the late 1980s, owners of some subsidized properties began to be eligible to leave HUD programs by prepaying their mortgages or opting out of their project-based Section 8 rental assistance contracts.
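The income-targeting and rent rules just described can be expressed as a simple calculation. This is a minimal sketch only; actual HUD rules involve income deductions, utility allowances, and minimum rents that are not modeled here.

```python
def tenant_rent(adjusted_monthly_income: float) -> float:
    # Tenants generally pay rent equal to 30 percent of adjusted household income.
    return 0.30 * adjusted_monthly_income

def section8_eligible(household_income: float, area_median_income: float) -> bool:
    # Section 8 assistance targets incomes no greater than 80 percent of area median.
    return household_income <= 0.80 * area_median_income

# A household with $1,500 in adjusted monthly income pays $450 in rent.
print(tenant_rent(1500))
print(section8_eligible(40_000, 60_000))  # within the 80-percent-of-AMI limit
```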
Once these owners removed their properties from HUD programs, they were no longer obligated to maintain low rents or accept rental assistance payments. In response, in 1996, among other things, Congress created a special type of voucher, known as an enhanced voucher, to protect tenants from rent increases in these properties. Enhanced vouchers differ from regular tenant-based housing vouchers in two ways. Enhanced vouchers may provide a greater subsidy (that is, be used to rent more expensive units) and give tenants a right to remain in their unit after conversion to market rent, thus creating an obligation for the owner to accept the voucher. So long as the rent remains reasonable, the tenant’s portion of the rent should not increase. If the tenant elects to move, the voucher becomes a “regular” housing voucher and is subject to the program’s standard rent limits. Not all property owners repay mortgages as originally scheduled. For example, an owner may refinance the mortgage to pay for improvements to the property. Other owners may experience financial difficulties and default on their mortgages. From January 1993 through December 2002, for example, HUD data show that the agency terminated the insurance on 231 mortgages. About 14 percent were due to mortgages that matured; other reasons included owners’ prepayment of the mortgage (37 percent) and foreclosure (22 percent). Funds provided by other federal programs can be used by states and localities to subsidize housing for low-income tenants. The CDBG program, authorized by the Housing and Community Development Act of 1974, distributes grants to local and state governments for community development activities. Rehabilitation and other housing activities now consistently represent the largest single use of CDBG funds. 
Other funds for housing production have been made available through the HOME program, authorized by the Cranston-Gonzalez National Affordable Housing Act of 1990, which awards block grants to state and local governments primarily for the development of affordable housing. Under the Low-Income Housing Tax Credit Program, authorized by the Tax Reform Act of 1986, state housing finance agencies provide tax incentives to private investors to develop housing affordable to low-income tenants. In addition to using their HOME and CDBG allocations as well as tax credits, some states and localities have established housing trust funds and other financial mechanisms, which have helped organizations acquire subsidized properties that may leave HUD’s programs. Further, the states and localities may use other tools and incentives, such as offering property tax relief, to encourage owners to keep serving low-income tenants. Nationwide, 21 percent (2,328) of the 11,267 subsidized properties with HUD mortgages are scheduled to mature through 2013. The percentage varies significantly by state: from 7 percent in Alabama, to 53 percent in South Dakota. Nearly all of these 2,328 properties were financed under the Section 236, Section 221(d)(3) BMIR, and Section 221(d)(3) programs, and about three-quarters of these mortgages are scheduled to mature in the last 3 years of the 10-year period. The remaining 79 percent of HUD’s outstanding mortgages in subsidized properties are scheduled to mature after 2013. Of the 11,267 subsidized properties (containing 914,441 units) with HUD mortgages, 21 percent (2,328 properties) have mortgages that are scheduled to mature through 2013. The remaining 79 percent of these mortgages are scheduled to reach maturity outside of the 10-year period. Additionally, the bulk of these mortgages (about 75 percent) are scheduled to mature in the latter 3 years of the 10-year period (see fig. 2). 
This concentration in the latter part of the 10-year period is attributable to the 40-year Section 221(d)(3) BMIR and Section 236 mortgages that HUD helped finance in the late 1960s and 1970s, respectively. As table 1 shows, about 57 percent of the properties with mortgages scheduled to mature in the 10-year period were financed under Section 236, 22 percent under Section 221(d)(3) BMIR, and 19 percent under Section 221(d)(3). Section 202, Section 221(d)(4), and Section 231 accounted for only 3 percent of these properties. The number of mortgages scheduled to mature through 2013 varies greatly by state (see fig. 3). Although the average is 46 per state (including the District of Columbia), the number ranges from a high of 273 maturing mortgages in California to 3 in Vermont. Further, while 21 percent of HUD mortgages on subsidized properties nationwide are scheduled to mature through 2013, individual states have significantly different shares of these mortgages. Figure 4 shows the proportion of each state’s inventory of properties with HUD mortgages scheduled to mature in the 10-year period. The percentage varies significantly by state: from 7 percent in Alabama, to 53 percent in South Dakota. The CD-ROM that accompanies this report provides detailed property-level data that allows users to perform similar analyses to track mortgage maturity by state or other location (congressional district or metropolitan area), as well as by other variables such as property category or rental assistance program. Over 8,900 properties, containing almost 680,000 units, have outstanding HUD mortgages scheduled to mature after 2013. Most of these mortgages were financed under the Section 202, Section 221(d)(4), and Section 236 programs. About 85 percent of the 680,000 units receive rental assistance. Many of these rental assistance contracts will be expiring through 2013.
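The state-by-state shares described above amount to grouping properties by state and comparing each maturity date against the end-of-2013 cutoff. The sketch below illustrates the tally over a few hypothetical records; the field names are illustrative, not HUD's actual database schema.

```python
from collections import Counter
from datetime import date

# Hypothetical property records; a real analysis would read HUD's property-level data.
properties = [
    {"state": "CA", "maturity": date(2010, 6, 1)},
    {"state": "CA", "maturity": date(2015, 1, 1)},
    {"state": "VT", "maturity": date(2012, 3, 1)},
]

cutoff = date(2013, 12, 31)
maturing = Counter(p["state"] for p in properties if p["maturity"] <= cutoff)
total = Counter(p["state"] for p in properties)

for state in sorted(total):
    share = 100 * maturing[state] / total[state]
    print(f"{state}: {maturing[state]} of {total[state]} maturing through 2013 ({share:.0f}%)")
```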
Specifically, 8,166 properties with HUD mortgages have rental assistance contracts expiring through 2013, affecting about 530,000 assisted units. Thus, while mortgages are not scheduled to mature during the period, these properties have tenants who could potentially face rent increases. According to HUD data, in the next 10 years, rental assistance contract expiration will affect a total of 18,048 properties—10,382 with HUD mortgages and another 7,666 without HUD mortgages—containing almost 1.1 million assisted units. Most of these long-term contracts are set to expire in the near future—before the end of 2007 (see fig. 5). When long-term rental assistance contracts expire, HUD may renew them. Currently, HUD generally renews expiring long-term contracts on an annual basis but may go as long as 5 years, and in some cases, 20 years. According to HUD, during the late 1990s, about 90 percent of the property owners renewed their contracts, thereby continuing to provide affordable housing. A 2001 publication by AARP reported that if past trends continue, 85 to 90 percent of contracts will be renewed. The extent to which the trend continues will depend on the availability of program funding and housing market conditions. As shown in figure 6, mortgage maturity and rental assistance contract expiration will affect a total of 18,553 properties through 2013: 505 properties will be affected by maturing mortgages only (480 of these are not covered by rental assistance contracts, and the remaining 25 have rental assistance contracts that expire outside of our 10-year window). 1,823 properties will be affected by both events (because they have rental assistance contracts set to expire and HUD mortgages scheduled to mature by 2013). 16,225 properties will be affected by expiring rental assistance contracts only (8,166 of these have HUD mortgages, but the mortgages are not scheduled to mature until after 2013). 
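The property counts above reconcile as simple sums, which also tie back to the 2,328 maturing mortgages and 18,048 expiring contracts reported earlier:

```python
maturing_only = 505     # maturing mortgage, no rental assistance contract expiring through 2013
both_events = 1_823     # maturing mortgage and expiring rental assistance contract
expiring_only = 16_225  # expiring contract only (mortgage matures after 2013, or no HUD mortgage)

total_affected = maturing_only + both_events + expiring_only
maturing_mortgages = maturing_only + both_events   # properties with mortgages maturing through 2013
expiring_contracts = both_events + expiring_only   # properties with contracts expiring through 2013

print(total_affected, maturing_mortgages, expiring_contracts)
```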
There are about 1.1 million assisted units in those properties with mortgages maturing or rental assistance expiring in the 10-year period. These units make up nearly 81 percent of all assisted units in HUD’s inventory. As figure 7 shows, about 48,000 units are in properties with maturing mortgages only, about 951,400 assisted units are in properties that have expiring rental assistance only, and about 132,600 assisted units (out of the approximately 188,600 total units) are in properties with both mortgages maturing and rental assistance expiring in the 10-year period. Over the next 10 years, low-income tenants in over 101,000 units may have to pay higher rents or move to more affordable housing when HUD-subsidized mortgages reach maturity. This is because no statutory requirement exists to protect tenants from increases in rent when HUD mortgages mature and rent restrictions are lifted. Over the next 10 years, 480 subsidized properties that do not have rental assistance contracts are scheduled to reach mortgage maturity. Unassisted tenants in some of these properties are at risk of not being able to afford their units if rents are raised. The remaining 1,848 subsidized properties with HUD mortgages scheduled to mature through 2013 have rental assistance contracts, and the protections against rent increases offered under the rental assistance programs will apply. However, not all units in these properties are covered by the rental assistance contracts, thus limiting the number of tenants protected. A number of factors may affect owners’ decisions regarding the continued affordability of their properties after mortgages mature, including neighborhood incomes, physical condition of the property, and owners’ missions. While experience with mortgage maturity has been limited, 16 of the 32 subsidized properties that reached mortgage maturity in the past 10 years are still serving low-income tenants through project-based Section 8 rental assistance contracts.
Additionally, at least 10 of the remaining properties that reached mortgage maturity over the past 10 years are still serving low-income tenants. There is no statutory requirement for HUD to offer tenants special protections, such as enhanced vouchers, when a HUD mortgage matures. However, tenants who receive rental assistance in properties with maturing mortgages would be eligible for enhanced vouchers under rental assistance programs such as project-based Section 8. Of the 2,328 subsidized properties with mortgages scheduled to mature through 2013, 480—containing 45,011 units—do not have rental assistance contracts (see table 2). While the remaining 1,848 properties are subsidized with rental assistance, not all units within the properties are covered. According to HUD data, about 30 percent of the units in these properties are not covered—a total of 57,552 units with tenants who do not receive rental assistance. Altogether then, the tenants in a total of 102,563 units are not protected under the rental assistance programs. Of these, 101,730 units under Section 202, Section 221(d)(3) BMIR, and Section 236 could face higher rents after mortgage maturity when the rent restrictions under these programs are lifted. These unassisted tenants are mostly housed in properties financed under Section 221(d)(3) BMIR and Section 236 (see fig. 8). According to a HUD study, tenants in properties with mortgages under these programs have an average household income somewhat greater than that for tenants who receive rental assistance; thus, they may be somewhat more able to afford higher rents. Properties financed under the Section 221(d)(3) BMIR program allow tenants with incomes of up to 95 percent of area median income; in comparison, project-based Section 8 does not serve tenants earning more than 80 percent of area median income. 
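The count of unprotected units described above reduces to simple arithmetic:

```python
units_in_unassisted_properties = 45_011  # units in the 480 properties with no rental assistance contract
uncovered_units_elsewhere = 57_552       # uncovered units in the 1,848 properties that do have contracts

unprotected_units = units_in_unassisted_properties + uncovered_units_elsewhere
print(unprotected_units)  # total units whose tenants are not protected by rental assistance
```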
Tenants in units covered by a rental assistance program—there are 134,087 units in the properties with HUD mortgages scheduled to mature through 2013—will continue to benefit from affordable rents, regardless of when the mortgage matures, as long as the rental assistance contract is in force. If a rental assistance contract expires prior to mortgage maturity and the owner opts not to renew it, assisted tenants would be eligible for enhanced vouchers. Tenants could potentially be affected by the length of time given to them to adjust to rent increases as well as by the amount of the increase. Property owners are not required to notify tenants when they pay off their mortgage at mortgage maturity. In contrast, property owners electing to prepay their mortgage or opt out of their Section 8 contract are required to notify tenants. For example, when owners opt out of the Section 8 project-based program, they are required to notify tenants 1 year in advance of the contract expiration. In cases where owners prepay their mortgages under the Section 236 or Section 221(d)(3) BMIR programs, tenants must be given notice at least 150, but not more than 270, days prior to prepayment. Some locations have established even more stringent notification requirements. Many factors can influence an owner’s decision to keep a property in the affordable inventory or convert to market rate rents upon mortgage maturity. For a profit-motivated owner, the decision may be influenced by the condition of the property and the income levels in the surrounding neighborhood. If the surrounding neighborhood has gentrified and if the property can be upgraded at a reasonable cost, it may be more profitable to turn the building into condominiums or rental units for higher income tenants. If repair costs are substantial or if high-income residents are not present in the surrounding area, it may be more profitable to leave the property in the affordable inventory. 
Tools and incentives offered by state and local agencies may also influence this decision. In addition, because most of these owners have had the right to prepay their mortgages and opt out of their Section 8 contracts for a number of years, the economic factors that drive a decision to convert to market rate are not unique to mortgage maturity. HUD data show that nonprofit organizations own about 38 percent of the properties with mortgages scheduled to mature in the next 10 years. For a nonprofit owner, the decision would likely be motivated by cash flow considerations since, in theory, these owners are not primarily motivated by economic returns. Since mortgage maturity results in an improvement in property cash flow, reaching mortgage maturity by itself would not necessarily trigger removal from the affordable inventory. For example, at 1 of the 16 properties (nonprofit ownership) whose mortgages matured in the past 10 years and that do not currently have project-based Section 8 assistance, the property manager told us that no longer having to pay the mortgage left money for repairs needed to keep the units affordable for their low-income senior tenants. Additionally, a nonprofit organization would be more likely to keep the property affordable to low-income tenants because to do otherwise could conflict with its basic mission of providing affordable housing. Another factor is the loss of the interest rate subsidy that occurs when the mortgage matures. When interest rate subsidies were first paid to properties built in the 1960s and 1970s, they represented substantial assistance to property owners. Over time, inflation has substantially reduced the value of this subsidy. For example, the average interest rate subsidy payment for a Section 236 property with a mortgage maturing in the next 10 years is $66 per unit per month. The level of prices has roughly quadrupled since 1970, so to have the same purchasing power would require about $260 in today’s dollars. 
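The erosion of the interest rate subsidy cited above can be checked directly, using the report's approximation that the price level has roughly quadrupled since 1970:

```python
subsidy_1970_dollars = 66  # average monthly Section 236 subsidy per unit, in 1970-era dollars
price_level_ratio = 4      # prices have roughly quadrupled since 1970 (approximation from the report)

equivalent_today = subsidy_1970_dollars * price_level_ratio
print(equivalent_today)  # roughly the $260 in today's dollars cited above
```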
Section 8 and similar project-based rental assistance now provide the bulk of the assistance to these subsidized properties—75 percent of the assistance versus about 25 percent that derives from the Section 236 interest-rate subsidy. Furthermore, inflation will continue to erode the value of the interest-rate subsidy until mortgage maturity, while the rental assistance subsidy is adjusted annually to account for increases in operating costs. Our review of HUD’s data showed that HUD-insured mortgages at 32 properties matured between January 1, 1993, and December 31, 2002. Sixteen of the 32 properties are still serving low-income tenants through project-based Section 8 rental assistance contracts. For 13 of these 16 properties, the rental assistance covers 100 percent of the units (799 assisted units), and for the remaining 3 properties, it covers 54 percent of the units (174 assisted units). Using HUD’s archived data for inactive properties, we attempted to contact the property managers of the remaining 16 properties (consisting of 1,997 units) to determine if the properties currently offer rents affordable to low-income tenants. We were able to obtain rent information for 10 properties. We found that all 10 (none of which have project-based rental assistance contracts) are offering rents that are affordable to tenants with incomes below 50 percent of area median income. According to HUD’s database, only 2 of these properties ever had Section 8 project-based contracts, and both expired in early 2000. We could not obtain actual tenant incomes since property managers told us that they are not required to maintain such information for properties without federal use restrictions. Using the reported average rent for a 2-bedroom unit, we estimated the income needed to afford the reported rent (that is, the income needed if no more than 30 percent of gross income would be used for rent).
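The affordability estimate works as follows: annualize the rent, divide by the 30-percent-of-income standard to get the income needed, then express that income as a share of area median income. A sketch with illustrative numbers:

```python
def affordability_pct(monthly_rent: float, area_median_income: float) -> float:
    # Income needed so that rent is no more than 30 percent of gross income,
    # expressed as a percentage of area median income.
    income_needed = (monthly_rent * 12) / 0.30
    return 100 * income_needed / area_median_income

# A $600/month unit where area median income is $60,000 requires $24,000 a year,
# or 40 percent of AMI, below the 50-percent threshold used in the report's table 3.
print(round(affordability_pct(600, 60_000)))
```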
We then compared this estimated income to the area’s median household income for 2003. The rent affordability percentages in table 3 express the estimated income needed as a percentage of the area median income. Thus, numbers less than 50 indicate that the unit is affordable to households with incomes 50 percent or less of the area median income. The available data for the 16 properties is summarized in table 3. Because of the variety of factors that can influence owners’ decisions, however, these properties are not necessarily indicative of what will happen to other properties as their HUD mortgages mature. Various property managers we contacted also provided information about their efforts to keep their properties affordable. For example, a senior complex (nonprofit ownership) continues to generally charge residents about 30 percent of their income for rent as they did when they were in HUD’s subsidized portfolio. According to the property manager of two of the properties (for-profit ownership), he unsuccessfully sought incentives from HUD in 2002 to keep the properties in the inventory when the mortgages reached maturity, and both properties left HUD’s multifamily portfolio. However, both properties are accepting tenant-based vouchers, and the rents in both properties are affordable to very low-income tenants. HUD does not offer any tools or incentives to keep properties affordable after HUD mortgages mature, although it does offer incentives to maintain affordability for properties that also have expiring rental assistance contracts. According to officials from the four national housing and community development organizations we contacted, because few HUD mortgages have matured to date, their member state and local agencies have not experienced the need to develop programs to deal with mortgage maturity. 
They noted that their member agencies can offer tools and incentives, such as loans and grants, that might be used by owners to keep properties affordable after mortgage maturity. However, about three-quarters of the state and local agencies that responded to our survey reported that they do not track the maturity dates on HUD mortgages, and none provided examples of tools or incentives used to keep units affordable after mortgage maturity. The agencies indicated that funds available through HUD’s HOME and CDBG programs and the Low-Income Housing Tax Credit program are effective means for preserving the affordability of HUD-subsidized housing. They also identified financial assistance to nonprofit organizations to aid them in acquiring HUD-subsidized properties as an effective tool. However, over 50 percent of the agencies reported that they have no tracking system in place to systematically identify properties that could potentially leave HUD’s affordable housing programs and thus might be candidates for affordability preservation assistance. HUD does not offer property owners any specific incentive to keep properties affordable to low-income tenants after maturity of their HUD mortgages. During the 1990s, HUD established incentive programs to deal with the loss of affordable units because owners were prepaying their mortgages and opting out of their Section 8 contracts. These incentives are as follows: Mark-up-to-Market allows owners to increase the rents for units subsidized under the project-based Section 8 rental assistance program up to market levels in exchange for keeping the units in the Section 8 inventory for a minimum of 5 years. Section 236 Decoupling can be activated when the owner prepays a Section 236 mortgage and obtains conventional financing.
By agreeing to keep the property affordable for at least another 5 years beyond the original term of the mortgage, owners can keep the interest rate reduction payments that they were receiving when they had a HUD-financed mortgage. Section 202 Prepayments allow owners to prepay their HUD loans and obtain other financing, but they must keep the affordability use restriction until the maturity date of the original loan. These incentives do not directly address the termination of the affordability requirements resulting from mortgage maturity. Rather, they can extend, under certain circumstances, the affordability period beyond the original term of the mortgage, as in the Section 236 Decoupling incentive, or allow property owners to be better positioned financially to continue providing affordable housing, as in the case of Section 202 Prepayments and Mark-up-to-Market. The 226 state and local agencies that responded to our survey commented on the effectiveness of 18 tools and incentives as a means to preserve HUD’s affordable rental housing. Of the 18, 6 were funded directly by the federal government, while 12 were administered by state and local governments and were not directly federally funded. However, there was no evidence that they have been used to protect properties when HUD mortgages mature. This may be because relatively few mortgages have matured to date. State and local tools and incentives include housing trust funds used to make loans and grants, financial assistance to nonprofit organizations to aid them in acquiring HUD-subsidized properties, and property tax relief to owners of HUD-subsidized properties. These state and local agencies identified several incentives that they believe are the most effective in preserving the affordability of housing for low-income tenants.
For example, over 60 percent of the 62 state agencies that responded identified the 4 percent tax credit and HOME programs as effective means for preserving the affordability of HUD-subsidized properties. Of the 76 local agencies that responded, over 70 percent identified HOME as effective, and over 60 percent identified CDBG as effective. Over 50 percent of the survey respondents reported that they have no system in place to identify and track properties in their states or localities that could leave HUD’s subsidized housing programs. Further, about three-quarters reported that they do not track the maturity dates of HUD mortgages. Awareness of the potential for a HUD mortgage to mature or rental assistance to end does not guarantee that state or local agencies will take action to preserve the assisted units’ affordability to low-income tenants. However, knowing when properties will be eligible to leave HUD’s programs could better position state and local agencies to use available tools and incentives at mortgage maturity. Several respondents to our survey noted that it would be helpful to them if HUD could provide information about properties that might leave HUD’s programs. Their comments included the following: “It would be helpful if HUD would provide local governments periodic reports on the status of HUD properties in the locality.” “I believe a lot of CDBG entitlement agencies would be willing to track properties that could leave HUD’s affordable housing programs if HUD would provide them with a listing of the properties.” “Communication between project-based property owners, HUD, and local public housing authorities is not very effective.” Of the 102 agencies that indicated they identified and tracked properties, 56 (55 percent) said that they monitored the scheduled maturity dates of HUD mortgages on local properties (see fig. 9).
More agencies (82, or 80 percent) reported that they identified and tracked properties that might opt out of HUD project-based rental assistance contracts. HUD officials noted that they make property-level information available to the public on HUD’s multifamily housing Web site. This Web site contains detailed property-level data on active HUD-insured mortgages and expiring rental assistance contracts. However, according to our survey, some state and local agencies perceive that the information is not readily available. One problem may be that these data are in a format that may not be sufficiently “user-friendly” for these agencies. The data must be accessed using database software, which requires users to be proficient in these types of software. HUD officials agreed that the agency could provide more “user-friendly” information because the data are not as accessible to state and local agencies as they could be. They also noted that these agencies could benefit from a “watch list” that identifies properties that may leave HUD subsidy programs in their jurisdictions, such as upon mortgage maturity, especially if such data were updated annually and readily available online so that agencies would have the information needed to prioritize and fund efforts to preserve low-income housing in their jurisdictions. HUD’s rental housing programs have developed subsidized properties for low- and moderate-income tenants that carry rent affordability requirements for a fixed period. As a result, HUD has focused on keeping these properties affordable for at least that period, and its tools and incentives have mainly addressed mortgage prepayments and rental assistance contract expiration, not mortgage maturity.
While a share of the properties with HUD mortgages are scheduled to reach maturity over the next 10 years, it is uncertain how many of these properties will attempt to convert to market-rate housing and raise rents, making the units in these properties unaffordable for many tenants. While state and local agencies might be able to play an important role in maintaining the affordability of properties eligible to leave HUD programs because of mortgage maturity or other reasons, these agencies need to know in advance which properties are eligible to leave HUD’s programs, and when, in order to use tools and incentives that can help keep the properties affordable. Even though HUD makes property-level data available to the public on its Web site, state and local agency responses to our survey suggest that HUD data may not be as readily accessible, and therefore as useful, as they could be. HUD officials responsible for maintaining the data on the subsidized properties agreed. To help state and local housing agencies track HUD-subsidized properties that may leave HUD’s programs upon mortgage maturity or for other reasons, we recommend that the Secretary of HUD solicit the views of state and local agencies to determine (1) the specific information concerning HUD-subsidized properties that would be most useful to their affordability preservation efforts and (2) the most effective format for making this information available, and then use the results to modify the current means of conveying the data on these properties to make the data more widely available and useful. We provided a draft of this report to HUD for its review and comment. In a letter from the Assistant Secretary for Housing (see app. III), HUD agreed with the report’s findings, conclusions, and recommendations. 
HUD also noted that while it believes that a wide array of public and private entities concerned about preserving the affordable housing stock are using the databases currently available through HUD’s Web site, it could improve the format and modify the current means of conveying the data on these properties to make the data more readily available. In its letter, HUD also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested Members of Congress and congressional committees. We also will send copies to the Secretary of the Department of Housing and Urban Development and the Director of the Office of Management and Budget and make copies available to others upon request. The report will be available at no charge on the GAO Web site at http://www.gao.gov. A CD-ROM (GAO-04-210SP), which includes property-level data for subsidized properties with mortgages scheduled to mature or expiring rental assistance contracts, will accompany this report and can be ordered at www.gao.gov/cgi-bin/ordtab.pl. The results of our survey of state and local agencies (GAO-04-211SP) will also be available on the GAO Web site at www.gao.gov/cgi-bin/getrpt?GAO-04-211SP. Please contact me at (202) 512-8678, or Andy Finkel at (202) 512-6765, if you or your staff have any questions concerning this report. Key contributors to this report are listed in appendix IV. To develop a state-by-state inventory of multifamily properties with HUD mortgages scheduled to mature and to identify the properties’ characteristics, we analyzed and combined information from several HUD databases.
We used data from HUD’s Real Estate Management System (REMS), which contains information on active properties in Datamart, as well as from the Tenant Rental Assistance Certification System (TRACS), which contains information on Section 8 contracts. We also incorporated data from HUD’s Real Estate Assessment Center (REAC) and—through the Office of Multifamily Housing and Restructuring (OMHAR)—data from the Annual Financial Statements submitted to HUD by property owners. To ensure the HUD data were reliable, we performed internal checks to determine (1) the extent to which the data were complete and accurate, (2) the reasonableness of the values contained in the data fields, and (3) whether any data limitations existed in the data we relied upon to do our work. Based on our reliability assessment, we concluded that the data were reliable for purposes of this report. The data obtained from HUD are as of April 15, 2003. To have 10 full years of data, our analysis covered the period from April 15, 2003, through December 31, 2013. For the properties with existing HUD mortgages, we identified those that also have project-based rental assistance contracts. We then separately identified properties that do not have HUD mortgages, but have project-based rental assistance contracts that are also due to expire through 2013. To obtain occupancy data relating to the individual properties, we used the system containing the financial statements that are prepared and submitted annually to HUD by property owners. For each property, we obtained the following information: metropolitan area, total number of units, total number of assisted units, name of HUD financing program, name of rental assistance program, rental assistance expiration date, number of rental assistance contracts, rental assistance contract status, type of client (tenant) served, type of property ownership, subsidy utilization rate, and property inspection score (REAC score).
We also used HUD’s database to identify properties whose mortgages have matured over the last 10 years. To determine how many properties are still serving low-income tenants, we first identified those that are covered by rental assistance contracts. For 14 of the 16 properties without current rental assistance contracts, we obtained contact information from HUD’s archived database (the database did not have complete information on the other 2). We then contacted these properties via telephone to determine whether management was still serving low-income tenants. We reviewed HUD regulations to determine the potential impact on tenants when HUD mortgages mature. In particular, we reviewed the eligibility of tenants to receive enhanced vouchers and other protections against increases in rents when properties leave HUD’s programs. We discussed these regulations with appropriate HUD officials and also requested that HUD identify protections available to tenants under the various housing programs. To identify the incentives that HUD, the states, and localities could offer owners under existing laws and regulations, we interviewed HUD, state, and local officials and reviewed available literature. Because there are no nationwide data available on the utilization of tools and incentives at the state and local level and no single agency is responsible for administering the various incentives for any state, we surveyed state and local housing and community development agencies via the Internet. We identified the survey participants through lists provided by four national housing industry organizations.
Specifically, we surveyed members of the National Council of State Housing Agencies (NCSHA), which represents state housing finance agencies; the Council of State Community Development Agencies (COSCDA), which represents state housing and community development agencies; the National Community Development Association (NCDA), which represents local communities that administer federally supported programs such as CDBG and HOME; and the National Association of Local Housing Finance Agencies (NALHFA), which represents local housing finance agencies. The survey covered (1) their experiences in preserving affordable housing, (2) the incentives used and their effectiveness, and (3) the extent to which they identify and track properties that could leave HUD’s programs. In developing the survey, we met with officials at the four national organizations to gain a better understanding of the issues and modified our questions based on their comments. We then pretested the survey with several state and local agencies throughout the country, such as the Department of Community Development in Amarillo, Texas; the Department of Neighborhood Development in Boston, Massachusetts; and the Ohio Housing Finance Agency. During these pretests, we observed the officials as they filled out the survey over the Internet. After completing the pretest survey, we interviewed the respondents to ensure that (1) the questions were clear and unambiguous, (2) the terms we used were precise, (3) the survey did not place an undue burden on the agency officials completing it, and (4) the survey was independent and unbiased. On the basis of the feedback from the pretests, we modified the questions as appropriate. Information about accessing the survey was provided to a contact person at each of 327 state and local housing and community development agencies in 50 states, the District of Columbia, and Puerto Rico. The survey was activated on May 12, 2003; it was available until September 5, 2003.
To ensure security and data integrity, we provided each agency with a password to access and complete the survey. We originally included 373 potential respondents in our survey, but eliminated 46 for various reasons, including those agencies having no authority over affordable housing and those with no HUD properties in their jurisdictions. As a result, 327 potential respondents remained—46 from NCSHA, 65 from COSCDA, 130 from NCDA, and 86 from NALHFA. From the 327, we obtained 226 usable responses—38 from NCSHA, 47 from COSCDA, 83 from NCDA, and 58 from NALHFA—for an overall response rate of 69 percent. We would like the following questions answered in this General Accounting Office report: 1. This letter includes a list of the privately owned, publicly assisted multifamily housing mortgage programs:

221(d)(3) market rate with rent supplement
Below Market Interest Rate (BMIR) with rent supplement or Rental Assistance Projects (RAP)
221(d)(4) with all or partial Section 8
202s with rent supplement or Section 8
Section 8 moderate rehabilitation (not funded through HUD, maybe PHA)
Noninsured rent supplement projects (12 projects only in NY and Minnesota)

Please update the list if there are other programs that should have been included and include any omitted programs in your answers to the other questions requested in this report. We did not identify any programs to add to the list. The report encompasses all of these programs with the following exceptions: (1) HUD does not collect mortgage information on noninsured rent supplement properties because the properties do not use HUD financing. HUD does have data on the rent supplement contracts alone, which we included in the CD-ROM; (2) Section 8 Moderate Rehabilitation properties are excluded because HUD does not track these properties in its multifamily database and maintains no aggregate data on properties in the program. 2.
What is the potential impact on the renewal of those Section 8 contracts in projects where FHA mortgages mature, the principal is paid off entirely, and the affordability restrictions attendant to the mortgages expire? The impact of a matured HUD mortgage, by itself, on an owner’s decision to renew a Section 8 contract is uncertain because there are a number of other factors that can affect the decision. For a profit-motivated owner, the decision to renew would likely be influenced by the condition of the property and the income levels in the surrounding neighborhood. If the surrounding neighborhood has gentrified and if the property can be upgraded at a reasonable cost, it may be more profitable to turn the building into condominiums or rental units for higher-income tenants. If repair costs are substantial or if high-income residents are not living in the surrounding area, it may be more profitable to keep the property in the affordable inventory by renewing the Section 8 contract. Tools and incentives offered by HUD, state, and local agencies may also influence these decisions. For a nonprofit owner, the decision would likely be motivated largely by cash flow considerations since, in theory, these owners are not primarily motivated by economic returns. HUD data show that nonprofit organizations own about 36 percent of the properties with mortgages scheduled to mature in the next 10 years. Since mortgage maturity improves a property’s cash flow, maturity by itself would not necessarily trigger removal from the affordable inventory. Additionally, a nonprofit organization would be more likely to keep the property affordable to low-income tenants because to do otherwise would conflict with its basic mission of providing affordable housing. Thus, nonprofit owners would likely continue to renew Section 8 contracts. 3. We request an inventory, in chart form, of all the units that will reach maturity in the next 10 years.
The inventory should include:

Property name, city, and state
Property MSA (metropolitan statistical area)
Month and year of mortgage maturity
Type of multifamily program for each development
Number of units for each development
Expiration date of Section 8 contract for each development (if any)
Contract status of Section 8 contract for each development (if any)
Number of Section 8 units for each development (if any)
Total number of units covered under each of the programs and their
Total number of units for all developments
Total number of units that are occupied
Total number of Section 8 units
Type of families housed, i.e., families, elderly, etc.
Whether the unit is owned by a profit or nonprofit organization

All the data elements cited above are included in the CD-ROM that accompanies this report. Data on property inspection scores, subsidy utilization rates, street addresses, and the congressional district in which the property is located are also included. 4. What will happen to the units and hence the families occupying the units once the mortgages expire? What rights, if any, do these tenants have regarding their rent costs subsequent to the expiration of the mortgage term and pay off of the entire mortgage principal? Provided there is no other subsidy, owners of properties whose HUD-subsidized mortgages have matured are generally no longer required to charge reduced rents to tenants who meet HUD’s income limits, and the tenants do not have any rights or protections. Depending on the owner’s decision, tenants could face higher rents and, if they were unable to afford them, would have to move. However, if the units are covered by a rental assistance contract, the tenants would not be affected by the mortgage maturity. As long as the rental assistance is in force, these tenants would continue to benefit from subsidized rents. 5.
Under existing laws and regulations, are there Federal government incentives that HUD could offer the owners of the multifamily housing developments to keep properties affordable upon maturity of the FHA mortgage and pay off the principal? Under existing law and regulations, what types of incentives are available for each state and the District of Columbia that could be made available to the owners of the multifamily housing developments? Have they been successful? HUD does not offer property owners any specific incentive to keep properties affordable to low-income tenants after maturity of their HUD mortgage. During the 1990s, HUD established incentive programs to deal with the loss of affordable units because owners were prepaying their mortgages and opting out of their Section 8 contracts. These incentives include the Mark-up-to-Market program, Section 236 Decoupling, and Section 202 Prepayments. These incentives do not directly address the termination of the affordability requirements resulting from mortgage maturity. Rather, they can extend, under certain circumstances, the affordability period beyond the original term of the mortgage, as in the Section 236 Decoupling incentive, or allow property owners to be better positioned financially to continue providing affordable housing, as in the case of Section 202 Prepayments and Mark-up-to-Market. State and local agencies identified tools and incentives to preserve affordable housing, but not specifically for addressing maturing HUD mortgages. The 226 state and local agencies that responded to our survey commented on the effectiveness of 18 tools and incentives as a means to preserve HUD’s affordable rental housing. Of the 18, 6 were funded directly by the federal government, while 12 were administered by state and local governments and were not directly federally funded. However, there was no evidence that they have been used to protect properties when HUD mortgages mature.
This may be because relatively few mortgages have matured to date. 6. What are the possible effects if the Section 8 contract maturity date is shorter than the FHA mortgage maturity date? The effects depend largely on the owner’s decision about the future use of the property. As noted in our response to question 2, an owner’s decision to renew a Section 8 contract can be influenced by a number of factors, such as neighborhood incomes, the condition of the property, and the owner’s mission. Consideration of these factors would likely also apply to properties where the Section 8 contract expiration date is earlier than the scheduled maturity date on the HUD mortgage. When mortgage maturity is imminent, an owner may also consider how losing the interest rate subsidy and paying off the HUD mortgage will affect the property’s cash flow. When interest rate subsidies were first paid to properties built in the 1960s and 1970s, they represented substantial assistance to property owners. Over time, inflation has substantially reduced the value of this subsidy relative to the rental assistance subsidy, which is adjusted annually to account for increases in operating costs. Project-based rental assistance now provides the bulk of the assistance to these subsidized properties. Therefore, it is possible that, under certain circumstances, such as where a surrounding neighborhood has gentrified and the property can be upgraded at a reasonable cost, a for-profit owner may decide to forgo the remaining interest rate subsidy payments and prepay the mortgage at the time the project-based contract expires. However, because most owners have had the right to prepay mortgages and opt out of their Section 8 contracts for a number of years, the economic factors that drive the decision to convert to market rate when mortgages mature are no different from those in the past.
From the tenant’s perspective, if the owner elects to enter into a new Section 8 contract, the tenants in assisted units will be protected for the duration of the contract. If the owner elects not to enter into a new Section 8 contract, whether or not the mortgage is prepaid, the tenants in the units that previously received rental assistance would receive enhanced vouchers. Enhanced vouchers give the tenants the right to stay in their units and generally protect them from rent increases in the properties after the Section 8 contract expires, regardless of the maturity date of the HUD mortgage. 7. For those mortgages that have reached mortgage maturity or are soon to do so, what actions, if any, have been taken by state, local, or other bodies to ensure that affordability has been maintained after the FHA mortgages are extinguished or are about to be paid off in their entirety? Have the efforts been successful? According to officials from the four national housing and community development organizations we contacted, because relatively few HUD mortgages have matured to date, their member state and local agencies have not experienced the need to deal with mortgage maturity. They noted that their member agencies can offer tools and incentives, such as loans and grants, to owners to keep properties affordable after mortgage maturity. However, about three-quarters of the state and local agencies that responded to our survey reported that they do not track the maturity dates on HUD mortgages, and none provided examples of tools or incentives used specifically to keep units affordable after mortgage maturity. 8. Please provide data on how many units/developments have already reached mortgage maturity, the current status of those units/developments, and whether those units/developments are still serving low-income families. Our review of HUD’s data showed that HUD-insured mortgages at 32 properties matured between January 1, 1993, and December 31, 2002.
Sixteen of the 32 properties are still serving low-income tenants through project-based Section 8 rental assistance contracts. For 13 of these 16 properties, the rental assistance covers 100 percent of the units (799 assisted units), and for the remaining 3 properties, it covers 54 percent of the units (174 assisted units). Using HUD’s archived data for inactive properties, we attempted to contact the property managers of the remaining 16 properties (consisting of 1,997 units) to determine if the properties currently serve low-income tenants. We were able to obtain rent information for 10 properties. We found that all 10 (none of which have project-based rental assistance contracts) are still primarily serving low-income tenants and that the current rents are affordable to tenants with incomes below 50 percent of area median income. According to HUD’s database, only 2 of these properties ever had Section 8 project-based contracts, and both expired in early 2000. We could not obtain actual tenant incomes since property managers told us that they are not required to maintain such information for properties without federal use restrictions. 9. The provision of enhanced vouchers does not currently apply to Section 236 or Section 221(d)(3) mortgages that mature. What is the impact on the current tenant population upon mortgage maturity? There is no statutory requirement for HUD to offer tenants special protections, such as enhanced vouchers, when a HUD mortgage matures. However, tenants who receive rental assistance in properties with maturing Section 236 or Section 221(d)(3) mortgages would be eligible for enhanced vouchers under rental assistance programs, such as project-based Section 8. Depending on property owners’ decisions, tenants in these properties who do not receive rental assistance could face higher, possibly unaffordable, rents. 10.
What recommendations does GAO propose to address or alleviate the potential loss of affordable housing arising from FHA mortgage maturations? Awareness of the potential for a HUD mortgage to mature, while not a guarantee of action, could improve state and local agencies’ ability to use available tools or incentives for preserving properties’ affordability to low-income tenants. Therefore, to help state and local housing agencies track HUD-subsidized properties that may leave HUD’s programs upon mortgage maturity or for other reasons, we are recommending that the Secretary of HUD solicit the views of state and local agencies to determine (1) the specific information concerning HUD-subsidized properties that would be most useful to their affordability preservation efforts and (2) the most effective format for making this information available, and then use the results to modify the current means of conveying the data on these properties to make the data more readily available. In addition to those named above, Mark Egger, Daniel Garcia-Diaz, Nadine Garrick, Curtis Groves, Austin Kelly, John McDonough, John McGrail, Luann Moy, Barbara Roesmann, William Sparling, Thomas Taydus, and James Vitarello made key contributions to this report. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet.
GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to e-mail alerts” under the “Order GAO Products” heading.

The Department of Housing and Urban Development (HUD) has subsidized the development of over 23,000 properties by offering owners favorable long-term mortgage financing or rental assistance payments in exchange for owners' commitment to house low-income tenants. When owners pay off mortgages--the mortgages "mature"--the subsidized financing ends, raising the possibility of rent increases. GAO was asked to determine the number of HUD mortgages that are scheduled to mature in the next 10 years, the potential impact on tenants, and what HUD and others can do to keep these properties affordable. Nationwide, the HUD mortgages on 2,328 properties--21 percent of the 11,267 subsidized properties with HUD mortgages--are scheduled to mature in the next 10 years, but among states this percentage varies significantly: from 7 percent in Alabama to 53 percent in South Dakota. About three-quarters of these mortgages are scheduled to mature in the last 3 years of the 10-year period. A CD-ROM (GAO-04-210SP) that accompanies this report provides property-level data for subsidized properties with mortgages scheduled to mature. Impacts on tenants depend on tenant protections available under program statutes and regulations, as well as on property owners' decisions about their properties.
While about 134,000, or 57 percent, of the rental units in the 2,328 properties are protected by rental assistance contracts, tenants in over 101,000 units without rental assistance are at risk of paying higher rents after mortgage maturity because no requirement exists to protect tenants when HUD mortgages mature. Absent specific requirements, property owners' decisions on whether to continue serving low-income tenants after their HUD mortgages mature depend on many factors, including neighborhood incomes, property conditions, and owners' missions. Of the 32 properties with HUD mortgages that matured during the past 10 years, 16 have rental assistance contracts that continue to subsidize at least some units, and 10 of the remaining 16 that GAO was able to contact offer rents that are affordable to tenants with incomes below 50 percent of area median income. HUD does not offer incentives to owners to keep properties affordable upon mortgage maturity. While many state and local agencies GAO surveyed offer incentives to preserve affordable housing, they have not directed them specifically at properties where HUD mortgages mature. Most of the agencies do not track HUD mortgage maturity dates for subsidized properties. In addition, although HUD's Web site contains detailed property-level data, some state and local agencies perceive that the information is not readily available. Refer to GAO-04-211SP for survey details.
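The percentages quoted in the summary above can be checked arithmetically. A minimal sketch (the total-unit count is an approximation backed out of the reported 57 percent share; it is not a figure taken from the report):

```python
# Check the share of subsidized properties with maturing mortgages.
maturing_properties = 2_328
subsidized_properties = 11_267
maturing_share = round(100 * maturing_properties / subsidized_properties)
print(maturing_share)  # 21 percent, as reported

# Derive an approximate total-unit count from the reported 57 percent
# share (134,000 assisted units), then back out the unassisted units.
# NOTE: the total is an approximation, not a number from the report.
assisted_units = 134_000
total_units = assisted_units / 0.57          # roughly 235,000 units
at_risk_units = total_units - assisted_units
print(round(at_risk_units))  # roughly 101,000 units without rental assistance
```

Both derived values are consistent with the reported figures of 21 percent and over 101,000 at-risk units.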
The electricity industry is based on four distinct functions: generation, transmission, distribution, and system operations. (See fig. 1.) Once electricity is generated—whether by burning fossil fuels; through nuclear fission; or by harnessing wind, solar, geothermal, or hydro energy—it is sent through high-voltage, high-capacity transmission lines to electricity distributors in local regions. Once there, electricity is transformed into a lower voltage and sent through local distribution wires for end-use by industrial plants, commercial businesses, and residential consumers. A unique feature of the electricity industry is that electricity is consumed at almost the very instant that it is produced. As electricity is produced, it leaves the generating plant and travels at the speed of light through transmission and distribution wires to the point of use, where it is immediately consumed. In addition, electricity cannot be easily or inexpensively stored and, as a result, must be produced in near-exact quantities to those being consumed. Because electric energy is generated and consumed almost instantaneously, the operation of an electric power system requires that a system operator balance the generation and consumption of power. The system operator monitors generation and consumption from a centralized location using computerized systems and sends minute-by-minute signals to generators reflecting changes in the demand for electricity. The generators then make the necessary changes in generation in order to maintain the transmission system safely and reliably. Absent such continuous balancing, electrical systems would be highly unreliable, with frequent and severe outages. Historically, the electric industry developed initially as a loosely connected structure of individual monopoly utility companies, each building power plants and transmission and distribution lines to serve the exclusive needs of all the consumers in their local areas. 
Such monopoly utility companies were typically owned by shareholders and were referred to as investor-owned utilities. In addition to these investor-owned utilities, several types of publicly owned utilities, including rural cooperatives, municipal authorities, state authorities, public power districts, and irrigation districts, also began to sell electricity. About one-third of these publicly owned utilities are owned collectively by their customers and generally operate as not-for-profit entities. Further, nine federally owned entities, including the Tennessee Valley Authority and the Bonneville Power Administration, also generate and sell electricity—primarily to cooperatives, municipalities, and other companies that resell it to retail consumers. Because the utilities operated as monopolies, wholesale and retail electricity pricing was regulated by the federal government and the states. The Public Utility Holding Company Act of 1935 (PUHCA) and the Federal Power Act of 1935 established the basic framework for electric utility regulation. PUHCA, which required federal regulation of these companies, was enacted to eliminate unfair practices by large holding companies that owned electricity and natural gas companies in several states. The Federal Power Act created the Federal Power Commission—a predecessor to FERC—and charged it with overseeing the rates, terms, and conditions of wholesale sales and transmission of electric energy in interstate commerce. FERC, established in 1977, approved interstate wholesale rates based on the utilities’ costs of production plus a fair rate of return on the utilities’ investment. States retained regulatory authority over retail sales of electricity, electricity generation, construction of transmission lines within their boundaries, and intrastate transmission and distribution. Generally, states set retail rates based on the utility’s cost of production plus a rate of return.
The goal of federal efforts to restructure the electricity industry is to increase competition in order to provide benefits to consumers, such as lower prices and access to a wider range of services, while maintaining reliability. Over the past 13 years, the federal government has taken a series of steps to encourage this restructuring that generally fall into four key categories: (1) market structure, (2) supply, (3) demand, and (4) oversight. Regarding market structure, federal restructuring efforts have changed how electricity prices are determined, replacing cost-based regulated rates with market-based pricing in many wholesale electricity markets. In this regard, efforts undertaken predominantly by FERC have helped to encourage a shift from a market structure that is based on monopoly utilities providing electricity to all customers at regulated rates to one in which prices are determined largely by the interaction of supply and demand. In prior work, we reported that increasing competition required that at least three key steps be taken: increasing the number of buyers and sellers, providing adequate market information, and allowing potential market participants the freedom to enter and exit the industry. In terms of supply, federal restructuring efforts have generally focused on allowing new companies to sell electricity, requiring the owners of the transmission systems to allow these new companies to use their lines, and approving the creation of new entities to fairly administer these markets. The Energy Policy Act of 1992 made it easier for new companies, referred to as nonutilities, to enter the wholesale electricity market, which expanded the number of companies that can sell electricity. For example, we reported that from 1992 through 2002, FERC had authorized 850 companies to sell electricity at market-based rates. 
To allow these companies to buy and sell electricity, FERC also required that transmission owners under its jurisdiction, generally large utilities, allow all other entities to use their transmission lines under the same prices, terms, and conditions as those that they apply to themselves. To do this, FERC issued orders that required the regulated monopoly utilities—which had historically owned the power plants, transmission systems, and distribution lines—to separate their generation and transmission businesses. In addition, in response to concerns that some of these new companies received unfair access to transmission lines, which were mostly still owned and operated by the former utilities, FERC encouraged the utilities that it regulated to form new entities to impartially manage the regional network of transmission lines and provide equal access to all market participants, including nonutilities. These entities, including independent system operators (ISOs) and regional transmission organizations (RTOs), operate transmission systems covering significant parts of the country. One of these, the California ISO, currently oversees the electricity network spanning most of the state of California. Another important effort to facilitate the interaction of buyers and sellers was FERC’s approval of the creation of several wholesale markets for electricity. These markets created centralized venues for market participants to buy and sell electricity. Finally, FERC has undertaken efforts to improve the availability and accuracy of price information used by suppliers, such as daily market prices reported to news services, and has established guidelines for the conduct of sellers of wholesale electricity, requiring these entities to, among other things, accurately report prices and other data to news services. 
Federal efforts to affect demand at the wholesale level have focused on encouraging prices in wholesale markets to be established by the direct interaction between buyers and sellers in these markets. We previously reported that there were several centralized markets in which suppliers and buyers submitted bids to buy and sell electricity and that other types of market-based trading were also emerging, such as Internet-based trading systems. However, there have been few federal efforts to directly affect prices at the retail level, where most electricity that is consumed is purchased, because states, and not the federal government, have regulatory authority for overseeing retail electricity markets. As part of its efforts to have prices set by the direct interaction of supply and demand, FERC has approved proposals to incorporate so-called “demand-response” programs into the markets that it oversees. These programs, among other things, allow electricity buyers to see electricity prices as they change throughout the day and provide the choice to sell back electricity that they otherwise would have used. For example, we reported that FERC had approved one such program in New York State that allows consumers to offer to sell back specific amounts of electricity that they are willing to forgo at prices that they determine. More recently, the Energy Policy Act of 2005 requires FERC to study issues such as demand-response and report on its findings to the Congress. Finally, restructuring has fundamentally changed how electricity markets are overseen and regulated. Historically, FERC had ensured that prices in wholesale electricity markets were “just and reasonable” by approving rates that allowed for the recovery of justifiable costs and providing for a regulated rate of return, or profit. 
To ensure that prices are just and reasonable in today’s restructured electricity markets, FERC has shifted its regulatory role to approving rules and market designs, proactively monitoring electricity market performance to ensure that markets are working rather than waiting for problems to develop before acting, and enforcing market rules. As part of its decision to approve the creation of market designs that include ISOs and RTOs, FERC approved the creation of market monitoring units within these entities. These market monitors are designed to routinely collect information on the activities in these markets, including prices; to perform up-to-the-minute market monitoring activities, such as examining whether prices appear to be the result of fair competition or market manipulation; and to impose penalties, such as fines, when they identify that rules have been violated. More recently, the Energy Policy Act of 2005 granted FERC authority to impose greater civil penalties on companies that are found to have manipulated the market. Federal restructuring efforts, combined with efforts undertaken by states, have created a patchwork of electricity markets, broadened electricity supplies, disconnected wholesale and retail markets, and shifted how the electricity industry is overseen. Taken together, these developments have produced some positive and some negative outcomes for consumers. In terms of market structure, we previously reported that the combined effects of the federal efforts and those of some states have created a patchwork of wholesale and retail electricity markets. In the wholesale markets, there is a combination of restructured and traditional markets because FERC’s regulatory authority is limited. As a result, some entities—including municipal utilities and cooperatively owned utilities—have not been required to make the changes FERC has required others to make.
As shown in figure 2, collectively the areas not generally subject to FERC jurisdiction span a significant portion of the country. In addition, even where FERC has clear jurisdiction, it has historically approved a variety of different rules that govern how each of the transmission networks is controlled and what types of wholesale markets may exist. In the retail electricity markets, state utility commissions or local entities historically have controlled how prices were set, as well as approved power plants, transmission lines, and other capital investments. Because each state performed these functions slightly differently, these rules vary. In addition, many states also have shifted the retail markets that they oversee toward competition. As we reported in 2002, 24 states and the District of Columbia had enacted legislation or issued regulations that planned to open their retail markets to competition. As of 2004, 17 states had actually opened their retail markets to competition, according to the Energy Information Administration. One of these states, California, opened its retail markets to competition but has taken steps to limit the extent of competition. In terms of supply, efforts to restructure the electricity industry by the federal government and some states have broadened electricity markets overall—shifting the focus from state and/or local supply to multistate or regional supply. In particular, efforts at wholesale restructuring have led to a significant change in the way electricity is supplied in those markets. The introduction of ISOs and RTOs in many areas has provided open access to transmission lines, allowing more market participants to compete and sell electricity across wide geographic regions and multiple states. 
In addition, in some parts of the country, overall supply has grown as a result of the large increase in new generating capacity that has been built by nonutility companies, while other regions have witnessed smaller increases in supply. For example, we reported that, by 2002, Texas had added substantial amounts of generating capacity—more than double the forecasted amount needed through 2004. In contrast, in California only about 25 percent of the forecasted need had been built over the same period, and the region witnessed a historic market disruption costing consumers billions of dollars. Similarly, the opening of retail markets has also widened the scope of electricity markets by allowing new and different entities to sell electricity, which works to further broaden markets because these retail sellers must either build or buy a power plant or rely on wholesale markets. Finally, FERC has improved the transparency of wholesale markets, a key requirement of competitive markets, by increasing the availability and accuracy of price and other market information. In terms of demand, while federal efforts have encouraged price setting by the interaction of supply and demand, this approach has not been widely adopted in retail markets. Even though FERC and other electricity experts have determined that it is important for demand to be responsive to prices and other factors for competitive markets to operate efficiently, as we reported in 2004, the use of these programs remains limited. In many retail markets, including some states where retail markets have been opened to competition, prices are still set so that rates are either flat or have been frozen. In either case, prices are not reflective of the hourly costs of providing electricity. In some cases, demand-response programs are in place but are aimed at only certain types of customers, such as some commercial and industrial customers. Overall, these customers account for only a small share of total demand. 
As a result, in this hybrid system, wholesale and retail markets remain disconnected, with competition setting wholesale prices in many areas, and state regulation setting retail prices in many states. Regulatory oversight of the electricity industry remains divided among federal, regional, and state entities. As we have previously reported, FERC initially did not adequately revise its regulatory and oversight approach to respond to the transition to competitive energy markets. However, it has made progress in recent years in defining its role, developing a framework for overseeing the markets, and beginning to use an array of data and analytical tools to oversee the market. In particular, FERC established the Office of Market Oversight and Investigations in 2002, which oversees the markets by monitoring its enforcement hotline for tips on misconduct; conducting investigations and audits; and reviewing large amounts of data—including wholesale spot and futures prices, plant outage information, fuel storage level data, and supply and demand statistics—for anomalies that could lead to potential market problems. In addition to FERC’s own efforts, substantial oversight also now occurs at the regional level, through ISO and RTO market monitoring units. These units monitor their region’s market to identify design flaws, market power abuses, and opportunities for efficiency improvements and report back to FERC periodically. Finally, states’ oversight roles vary. Those states that have not restructured their markets retain key roles in overseeing and regulating electricity markets directly and indirectly through such activities as setting rates to recover costs and the siting of power plants, transmission lines, and other capital investments needed to supply electricity. The ability of states that have restructured their retail markets to oversee their markets is more limited, according to experts. The effects of restructuring on consumers have been mixed.
While most studies evaluating wholesale electricity markets, including our own assessment, have determined that progress has been made in introducing competition in wholesale electricity markets, results at the retail level have been difficult to measure. For example, in 2002, we reported that prices generally fell after restructuring and fell in particular in many areas that had implemented retail restructuring. However, we were unable to attribute these price decreases solely to restructuring, since several other factors, such as lower prices for natural gas and other fuels used in the production of electricity, could have contributed to the price decreases. Furthermore, while some consumers had benefited by paying lower prices, others have experienced high prices and market manipulation. For example, in 2002, we reported that nationally, consumers benefited from price declines of as much as 15 percent since federal restructuring efforts began. However, as consumers in California and across other parts of the West will attest, there have been many negative effects, including higher prices and market manipulation. More recently, electricity prices have risen, potentially the result of higher prices for fuels such as natural gas and petroleum, and other factors. We have identified four key challenges that, if addressed, could benefit consumers and the restructured electricity markets that serve them. With several fundamentally different electricity market structures in place simultaneously in various parts of the country, it is important that these markets work together better in order to meet regional needs. As we previously reported, two aspects of the current electricity markets serve to limit the benefits expected from restructuring. 
First, FERC’s limited authority has meant that significant parts of the market and significant amounts of transmission lines have not been subject to FERC’s effort to restructure wholesale markets—creating “holes” in the national restructured wholesale market. These gaps, where efforts to open wholesale markets have not been undertaken, may limit the number of potential participants and the types of transactions that can occur, thereby limiting the benefits expected from competition. Second, where FERC has clear authority, it has historically approved a range of rules for how the different transmission systems and centralized wholesale markets operate—creating “seams” where these different jurisdictions meet and the rules change. We have previously noted that the lack of consistent rules among restructured wholesale markets limits the extent of competition across wholesale markets and, in turn, limits the benefits expected from competition. California experienced this firsthand, as it tried to “cap” wholesale electricity prices in its state market—establishing rules different from those in the markets surrounding California. The lower price cap in California, coupled with an exemption for electricity imports, created incentives to sell electricity to areas outside the state (where prices were higher) and later import it (because imports were exempt from the price cap). FERC has acknowledged that the lack of consistent rules can lead to discrimination in access, raise costs, and lead to reliability problems. As a result, FERC made an effort to standardize the various wholesale market designs under its jurisdiction. However, these efforts met with sharp criticism from some industry stakeholders. FERC ended its effort to require a single market design in all regions and has, instead, promoted voluntary participation in RTOs and having the RTOs work together to reconcile their differences. 
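The incentive created by California's price cap, as described above, reduces to simple arithmetic: with in-state sales capped and imports exempt, every megawatt-hour was worth more sold outside the state and later sold back in. The prices below are hypothetical, chosen only to illustrate the direction of the incentive, not to reproduce actual market prices from the period.

```python
# Illustrative arithmetic only; all prices are hypothetical.

def arbitrage_margin(in_state_cap, out_of_state_price, import_price):
    """Per-MWh gain from selling out of state at the uncapped price
    rather than in state at the capped price, and the premium paid
    when that power is later imported (imports were cap-exempt)."""
    export_gain = out_of_state_price - in_state_cap
    import_premium = import_price - in_state_cap
    return export_gain, import_premium

gain, premium = arbitrage_margin(in_state_cap=250.0,
                                 out_of_state_price=750.0,
                                 import_price=700.0)
print(gain, premium)  # 500.0 450.0: every MWh is worth more exported
```

Whenever the out-of-state price exceeds the cap, the export gain is positive, which is the seam problem the text describes: inconsistent rules at a jurisdictional boundary redirect supply rather than restrain prices.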
In the end, today’s patchwork of wholesale market structures, with holes and seams, is at odds with the physics of the interdependent electricity industry, where electrons travel at the speed of light and do not stop neatly at jurisdictional boundaries. Successfully developing markets will require the alignment of market structures and rules in order to reconcile them with these physical certainties. Broadening of restructured electricity markets has made the federal government, the states, and localities more dependent on each other in order to ensure a sufficient supply of electricity. We previously concluded that, as federal and state restructuring efforts broaden electricity markets to span multiple states, states will become increasingly dependent on one another for a reliable electricity supply. Consequently, one state’s problems acquiring and maintaining an adequate supply can now affect its neighbors. For example, in the lead-up to the western electricity crisis in 2000-2001, few power plants were built to meet the rising demand in California, which became dependent on power plants located outside the state. However, when prices began to rise, consumers both inside and outside California were affected. We previously reported that these higher prices meant larger electricity bills for California consumers and billions of dollars in additional costs for consumers outside the state. Because of these negative outcomes, some have questioned whether restructuring will eventually benefit consumers. More broadly, rising interdependence has significant implications for many industry stakeholders, especially in light of the shift in how plants are financed and built. In the past, monopoly utilities proposed, and regulators approved, the construction of new power plants and other infrastructure.
Today, policymakers at all levels of government must recognize that providing consumers with reliable electricity in competitive markets requires private investors to make reasoned investments. We have reported that these private investors make decisions on investing by balancing their perceptions of potential risk and profitability. Further, we concluded that the reliability of the electricity system and, more generally, the success of restructuring, now hinges on whether these developers choose to enter a market and how quickly they are able to respond to the need for new power plants. The implications of this broadening of electricity markets are important, since it has occurred while most of the primary authorities associated with building new power plants, such as state energy siting or local land use planning, still rest with states and localities. As we have reported, there is sometimes considerable variation across states and localities in how long these processes take and how much they cost, and building new power plants can take a year or more once all the approvals are obtained. Because of the broader electricity markets, one state’s or locality’s processes and decisions provide signals affecting private investors’ perceptions of the risk or profitability of making investments in local areas and can have long-lasting implications for the entire region. In this context of growing interdependence for adequate electricity supplies, our work shows that it is important for federal, state, and local entities to provide timely, clear, and consistent signals that allow private developers to make the kinds of reasonable and long-term investments that are needed. As we have previously reported, for competitive wholesale electricity markets to provide the full benefits expected of them, it is essential that they be connected to the retail markets, where most electricity is sold and consumed. 
Otherwise, hybrid electricity markets—wholesale prices set by competition and retail prices set by regulation—will be difficult to manage because consumers at the retail level can unknowingly drive up wholesale prices during periods when electricity supplies are limited. This occurs when consumers do not see prices at the retail level that accurately reflect the higher wholesale market prices. Seeing only these lower electricity prices, consumers use larger quantities of electricity than they would if they saw higher prices, which raises costs and can risk reliability. We have noted that, in this environment (consumers seeing low retail prices during periods of high wholesale prices), consumers have little incentive to reduce their consumption during periods when prices are high or reliability is at risk. The appeal of seeming to insulate retail consumers from wholesale market fluctuations may be compelling, but most experts agree that the lack of significant demand response can actually lead to higher and more volatile prices. In 2004, we concluded that this system makes it difficult for FERC to ensure that prices in wholesale markets are just and reasonable. We further concluded that connecting wholesale and retail markets through demand-response programs such as real-time pricing or reliability-based programs would help competitive electricity markets function better, enhance the reliability of the electricity system, and provide important signals encouraging consumers to consider investments in energy-efficient equipment. Such signals would work to reduce overall demand in a more permanent way. While FERC has been supportive of increasing the role of demand-response programs in the wholesale markets that it oversees, there have been limited efforts to do so in retail markets—these markets are outside FERC’s jurisdiction and overseen by the states.
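The disconnect described above can be made concrete with a stylized demand model: a consumer whose retail rate never changes has no price reason to curtail during a wholesale spike, while one exposed to real-time pricing does. The constant-elasticity form and the elasticity value below are illustrative assumptions, not estimates from this report.

```python
# Hedged sketch of the flat-rate vs. real-time-pricing disconnect.
# The elasticity of -0.1 and all prices are invented for illustration.

def demand(base_mwh, seen_price, reference_price, elasticity=-0.1):
    """Constant-elasticity demand: consumption changes only if the
    price the consumer actually sees changes."""
    return base_mwh * (seen_price / reference_price) ** elasticity

wholesale_spike = 500.0   # $/MWh during a hypothetical shortage
flat_rate = 100.0         # what a flat-rate consumer sees, spike or not

flat_use = demand(10.0, flat_rate, reference_price=100.0)
rtp_use = demand(10.0, wholesale_spike, reference_price=100.0)
print(round(flat_use, 2), round(rtp_use, 2))  # 10.0 8.51
```

The flat-rate consumer's use is unchanged during the spike, which is exactly the behavior the text identifies as driving up wholesale prices and risking reliability; the real-time-pricing consumer cuts back.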
Some states, such as California, have a long history with demand-response programs and have more recently experimented with using them more widely. Sharing and building upon these and other examples could help develop efficient ways to bring the consumers who flip the light switches into the markets responsible for ensuring that their lights go on. Since electricity travels at the speed of light, retail markets where electricity is consumed are tightly connected to the wholesale markets that supply these retail markets. As a result, much of the success of federal restructuring of the wholesale markets relies on actions taken at the state level to bring consumers into the market. Significant changes in how oversight is carried out in competitive markets, combined with the divided regulatory authority over the electricity industry, have made effective oversight difficult. We previously reported that FERC, the states, and other market monitors were neither fully monitoring the overall performance of all wholesale and retail markets nor collecting sufficient data to do so, thus limiting the opportunity to meaningfully compare performance. At the federal level, FERC protects customers primarily through ensuring that prices in the wholesale markets are just and reasonable. In prior work, we found that FERC did not initially revise its oversight approach adequately in response to restructured markets, resulting in markets that were not adequately overseen. However, more recently, we reported that FERC has made significant efforts to revise its oversight strategy to better align with its new role overseeing restructured markets, has taken a more proactive approach to monitoring the performance of markets, and has better aligned its workforce to fit its needs in these new markets. Recent actions will require further changes to FERC’s role.
The Energy Policy Act of 2005 provided FERC additional authority to establish reliability rules for all “users, owners, and operators” of the transmission system. We had previously reported that this change would be desirable, but it is too early to judge its success. At the state level, oversight varies widely. States that have retained traditionally regulated retail markets continue to require substantial amounts of information to help them set the regulated prices that consumers see. The states that now feature restructured retail markets face a sharply different oversight role of policing their state-level retail markets for misbehavior and signs of market malfunction. The introduction of the market monitoring units within ISOs and RTOs adds a new layer of regional oversight to the existing federal and state roles. While authority over the electricity industry is divided, restructuring has served to make the success of each of the oversight efforts more interdependent, and FERC and the states will have to rely on each other, as well as on new entities, to a greater degree than before to be successful. It is becoming increasingly clear that many of the challenges facing the electricity industry are rooted in the interdependence of actions taken by federal, state, local, and private entities, as well as consumers. Accordingly, the individual challenges we have discussed follow a central theme—the need to integrate the various ongoing activities and efforts and harmonize them in a way that improves the functioning of the marketplace while providing adequate oversight to protect electricity consumers. This will not be easy because it requires what is, at times, most difficult: collaboration and cooperation among entities with a history of independence.
Successfully restructuring the electricity industry is an ongoing process that will require rethinking old issues, such as jurisdictional responsibilities, and applying new and creative ideas to help bridge the current gap between wholesale and retail markets. Only if interdependent parties work together will electricity restructuring succeed in delivering benefits to U.S. consumers by way of healthy, viable, and competitive markets. Not adequately addressing these issues could result in an electricity industry that does not provide consumers with sufficient quantities of the reliable, reasonably priced electricity that has been a mainstay of our nation’s economic and social progress. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 15 days after the report date. At that time, we will send copies of this report to appropriate congressional committees. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or wellsj@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are listed in the appendix. In addition to the contact named above, Dan Haas, Jon Ludwigson, and Kris Massey made key contributions to this report. Barbara Timmerman, Susan Iott, and Nancy Crothers also made important contributions.
Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005.
Electricity Markets: Consumers Could Benefit from Demand Programs, but Challenges Remain. GAO-04-844. Washington, D.C.: August 13, 2004.
Energy Markets: Additional Actions Would Help Ensure That FERC’s Oversight and Enforcement Capability Is Comprehensive and Systematic. GAO-03-845. Washington, D.C.: August 15, 2003.
Electricity Markets: FERC’s Role in Protecting Consumers. GAO-03-726R. Washington, D.C.: June 6, 2003.
Energy Markets: Concerted Actions Needed by FERC to Confront Challenges That Impede Effective Oversight. GAO-02-656. Washington, D.C.: June 14, 2002.
Electricity Restructuring: 2003 Blackout Identifies Crisis and Opportunity for the Electricity Sector. GAO-04-204. Washington, D.C.: November 18, 2003.
Electricity Restructuring: Action Needed to Address Emerging Gaps in Federal Information Collection. GAO-03-586. Washington, D.C.: June 30, 2003.
Lessons Learned from Electricity Restructuring: Transition to Competitive Markets Underway, but Full Benefits Will Take Time and Effort to Achieve. GAO-03-271. Washington, D.C.: December 17, 2002.
Restructured Electricity Markets: California Market Design Enabled Exercise of Market Power. GAO-02-828. Washington, D.C.: June 21, 2002.
Restructured Electricity Markets: Three States' Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002.
Electric Utility Restructuring: Implications for Electricity R&D. T-RCED-98-144. Washington, D.C.: March 31, 1998.
California Electricity Market: Outlook for Summer 2001. GAO-01-870R. Washington, D.C.: June 29, 2001.
California Electricity Market Options for 2001: Military Generation and Private Backup Possibilities. GAO-01-865R. Washington, D.C.: June 29, 2001.
Energy Markets: Results of Studies Assessing High Electricity Prices in California. GAO-01-857. Washington, D.C.: June 29, 2001.
Bonneville Power Administration: Better Management of BPA’s Obligation to Provide Power Is Needed to Control Future Costs. GAO-04-694. Washington, D.C.: July 9, 2004.
Bonneville Power Administration: Long-Term Fiscal Challenges. GAO-03-918R. Washington, D.C.: June 27, 2003.
Federal Power: The Evolution of Preference in Marketing Federal Power. GAO-01-373. Washington, D.C.: February 8, 2001.
The electricity industry is in the midst of many changes, collectively referred to as restructuring, evolving from a highly regulated environment to one that places greater reliance on competition. This restructuring is occurring against a backdrop of constraints and challenges, including a shared responsibility for implementing and enforcing local, state, and federal laws affecting the electricity industry and an expected substantial increase in electricity demanded by consumers by 2025, requiring significant investment in new power plants and transmission lines. Furthermore, several recent incidents, including the largest blackout in U.S. history along the East Coast in 2003 and the energy crisis in California and other parts of the West in 2000 and 2001, have drawn attention to the need to examine the operation and direction of the industry. At Congress's request, this report summarizes results of previous GAO work on electricity restructuring, which was conducted in accordance with generally accepted government auditing standards. In particular, this report provides information on (1) what the federal government has done to restructure the electricity industry and the wholesale markets that it oversees, (2) how electricity markets have changed since restructuring began, and (3) GAO's views on key challenges that remain in restructuring the electricity industry. Over the past 13 years, the federal government has taken a variety of steps to restructure the electricity industry with the goal of increasing competition in wholesale markets and thereby increasing benefits to consumers, including lower electricity prices and access to a wider array of retail services.
In particular, the federal government has changed (1) how electricity is priced--shifting from prices set by regulators to prices determined by markets; (2) how electricity is supplied--including the addition of new entities that sell electricity; (3) the role of electricity demand--through programs that allow consumers to participate in markets; and (4) how the electricity industry is overseen--in order to ensure consumer protection. Federal restructuring efforts, combined with efforts undertaken by states, have created a patchwork of wholesale and retail electricity markets; broadened electricity supplies; disconnected wholesale markets from retail markets, where most demand occurs; and shifted how the electricity industry is overseen. Taken together, these developments have produced some positive outcomes, such as progress in introducing competition in wholesale electricity markets, as well as some negative outcomes, such as periods of higher prices. We have identified four key challenges to the effective operation of the restructured electricity industry: making wholesale markets work better together so that restructuring can deliver the benefits to consumers that were expected; providing clear and consistent signals to private investors when new plants are needed so that there are adequate supplies to meet regional needs; connecting wholesale markets to retail markets through consumer demand programs to keep prices lower and less volatile; and resolving divided regulatory authority to ensure that these markets are adequately overseen. The theme cutting across each of these challenges is the need to better integrate the various market structures, factors affecting supply and demand, and various efforts at market oversight.
The NSSN program is intended to address the Joint Chiefs of Staff requirement for 10 to 12 new attack submarines with Seawolf level quieting by the year 2012 and to maintain future force structure goals. In funding the NSSN program, Congress expected the Navy to deliver a less costly submarine than its predecessor, the Seawolf, without compromising military utility. The NSSN is expected to be a highly effective multimission platform capable of performing antisubmarine and antisurface ship missions and land attack strikes as well as mine missions, special operations, battle group support, and surveillance. The NSSN is also expected to be as quiet as the Seawolf, include a vertical launch system, and have improved surveillance as well as special operations characteristics to enhance littoral warfare capability. While the NSSN is expected to perform effectively against the most capable, open ocean, nuclear attack submarine threat, it will be slower and less capable in diving depth and arctic operations and will carry fewer weapons than the Seawolf. The Navy’s fiscal year 1999 budget request contained about $1.5 billion for procurement of the second NSSN and $504.7 million for advanced procurement of the third authorized NSSN. The Navy also requested about $219 million for continued research and development activities. Public Law 105-56 appropriated funds and Public Law 105-85 provided authorization for the contractor teaming arrangement to build the first four new attack submarines. The Navy has established performance levels to ensure that the NSSN will have the capabilities to successfully conduct its missions. Operational requirements documents are required for the ship and its major subsystems. These documents establish the optimal (objective) and minimal (threshold) requirements related to the submarine’s performance. 
For the most part, according to the NSSN program manager, the NSSN is being designed to strike a cost-effective balance at a performance level that meets or exceeds minimum requirements. The Navy is also establishing detailed technical specifications for the design of individual subsystems. To gain assurance that the designs of the submarine and its subsystems will result in the submarine successfully performing its various missions, the Navy requires that the Program Manager use computer simulations as a principal tool to model the NSSN’s capabilities against existing and potential threats. An example is the modeling performed for the June 1995 NSSN milestone II cost and operational effectiveness analysis. Based on the results, both the Department of Defense (DOD) and the Navy believe the baseline NSSN design satisfies military requirements. The Navy also seeks assurance by requiring that weapon systems be tested and evaluated in their anticipated operational environment and against the anticipated threat. This mission is performed by the Operational Test and Evaluation Force, which was established by the Secretary of the Navy to be the Navy’s sole independent agency for these activities. Since the Navy modeled the NSSN in 1995, a number of subsystems in development have encountered financial constraints and developmental problems. These financial constraints resulted in modifying the design requirements for some of the subsystems to reduce the performance capabilities. Significant development risks are also present in other subsystems that could further affect planned performance. The Navy’s tester noted that many of the potential risks are the result of program restructuring to mitigate the effects of internally directed funding cuts. He expressed concern that the combined effects of the reductions in performance and developmental risks may affect the NSSN’s operational effectiveness.
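The simulation-based assurance described above can be illustrated, very loosely, with a toy Monte Carlo model: repeated simulated engagements yield an estimated success rate that is then compared against the threshold (minimum) and objective (optimal) levels from the operational requirements documents. Everything here, including the probabilities, the levels, and the model itself, is invented for illustration and bears no relation to the Navy's actual modeling.

```python
# Purely illustrative toy, not the Navy's model: estimate a success
# rate by simulation and compare it to hypothetical requirement levels.
import random

def simulated_success_rate(p_success, engagements=10_000, seed=1):
    """Run repeated simulated engagements, each succeeding with
    probability p_success, and return the observed success rate."""
    rng = random.Random(seed)
    wins = sum(rng.random() < p_success for _ in range(engagements))
    return wins / engagements

THRESHOLD, OBJECTIVE = 0.70, 0.85   # hypothetical required success rates

rate = simulated_success_rate(0.78)
print(rate >= THRESHOLD, rate >= OBJECTIVE)  # meets threshold, not objective
```

The point of the sketch is the decision structure, not the numbers: a design whose simulated performance clears the threshold but not the objective is, in the terms used above, minimally acceptable rather than optimal.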
The Navy has restructured two key NSSN subsystems—electronics warfare and acoustic intercept. The Navy has also reduced or will reduce some operational performance requirements to the minimum acceptable levels for the NSSN to successfully complete assigned missions. The electronics warfare system enables the NSSN to covertly monitor intelligence targets and record electronics data. Because of internally directed fiscal year 1998 funding cuts, some system capability was removed. The reduced capability system will not meet the optimal performance levels modeled in the 1995 assessment, but it is projected to meet minimum levels. The Navy has established the detailed technical specifications that will be important to meeting those projections but has not approved all of the operational requirements documents. Public Law 105-56 provided increased funding to restore some of the critical elements of the electronics warfare subsystem—such as specific emitter identification, full implementation of precision radar band direction finding, and interception of frequency-hopping communications. Public Law 105-85 authorized the increase. The acoustic intercept system provides defensive capability for the submarine and, according to a Navy official, is critical to its survival. Like the electronics warfare system, the acoustic intercept system was restructured because of fiscal year 1998 internal funding cuts. Although the restructured system will have fewer capabilities than the original one, limited computer modeling indicates that if the restructured system performs as expected, there is no statistical difference in performance. The question is whether the restructured system will perform as expected. In the June 1997 operational assessment of this system, the Navy tester noted several deficiencies in achieving required performance. (Detailed information on these deficiencies is classified.)
As a result, the Navy tester recommended approval for only a single unit for backfit testing on 688I class submarines and only one unit for release to support the first NSSN, contingent upon resolution of these issues. The submarine’s propulsor and external communications systems are experiencing development problems. These problems, although not unusual at this stage in a weapon program, present significant risks in meeting performance requirements. Also, the design for the lower cost alternative to the present towed array has not been approved, nor has a contractor been selected. The propulsor provides thrust to move the submarine through the water. Cavitation noise from the propulsor is critical to the ability of enemy submarines or surface ships to detect the submarine and, consequently, has a major impact on a submarine’s survivability and operational effectiveness. Currently, there is no cavitation performance requirement in the NSSN operational requirements document, but there are program office cavitation design goals. The Navy, through large-scale vehicle testing, determined that an interim propulsor design did not meet the program office’s cavitation design goals. As a result, it has developed two alternative designs that it began to test in March 1998. To meet the lead ship NSSN production schedule, the Navy must select from these alternatives during the one remaining large-scale vehicle test before a propulsor for the lead ship is produced. If the alternative designs do not meet cavitation goals, the Navy plans to backfit another redesigned propulsor on the lead ship. The external communications system was restructured in August 1996 to provide a cost-effective means of introducing commercial hardware and software technologies in order to meet the NSSN development schedule and operational requirements.
This system consists of several components, such as the submarine high data rate antenna system, various radio frequency receivers, imagery and teleconference video capability, and internal data distribution systems. Improvements in the data rate capability of the external communications system depend on the high data rate antenna system and the amount of satellite resources allocated to submarine platforms. The Navy tester noted that, as currently designed with a 17-inch antenna, the submarine's system will be able to process the required amounts of data only if all of the Navy's current satellite resources are allocated to support submarine communications. The Navy is attempting to establish a concept of operations among satellite scheduling units that will allocate appropriate resources to the deployed submarine. Program office officials said the Navy has alternative ways to provide the required satellite resources, such as using different frequencies on satellites or leasing commercial satellites. In addition, the Navy has not completed an overall operational requirements document for submarine external communication systems. Consequently, the NSSN external communications system design has not been finalized. These documents are required to ensure that the system configuration is properly designed to meet minimum performance requirements. The TB-29 towed array and its processing system are critical to NSSN operations in detecting, tracking, and, if required, attacking a threat submarine. This system enables the NSSN to hear acoustic noises made by threat submarines. However, the Navy has determined that the current TB-29 system is too expensive. Also, the contract for the current TB-29 expired in fiscal year 1997. The Navy is looking for a comparable system at a lower cost than the TB-29 array. Navy officials told us that required technology is available and that it is a matter of selecting a design and a contractor to produce the system. 
They believe there is sufficient time to develop and procure a new system to meet the delivery of the first NSSN. However, there is no approved design for the new system. Some developmental funding has been specifically identified. Navy officials said the Chief of Naval Operations has fully supported completing the TB-29 follow-on development and procurement in future years' budget submissions. According to the program manager, a request for proposal for the design of a new array will be issued early in fiscal year 1998. The Navy expects to award a contract for the development and production of the new array in the third quarter of fiscal year 1998. In April 1996, the Office of Naval Intelligence revised its classified undersea threat assessment and noted several technological advances in the open-ocean antisubmarine warfare threat. Several improvements resulting in a more capable threat were noted over the previous threat of record, which the Navy used to model the survivability of the NSSN design in the 1995 assessment. (Details of these improvements are classified.) Facing a more capable threat, and without an increase in submarine capability, the risk to the NSSN's survivability is likely to increase. The Commander, Operational Test and Evaluation Force, conducted NSSN operational assessments in April 1995 and again in January 1997. (Detailed results of these assessments are classified.) The 1995 assessment was conducted using computer-simulated modeling of the baseline NSSN design against the threat projected at that time. As a result of the 1995 assessment, the Navy tester expressed concern that if the NSSN were just to meet minimum requirements for survivability, the NSSN may not be operationally effective against the most capable threat that the Navy was projecting at that time. 
The 1997 assessment was based on a more limited amount of information, such as changes outlined in budgetary documents, and did not include the in-depth survivability modeling that was done for the 1995 assessment. The Navy tester's report noted reduced performance of several subsystems and developmental problems in others that also will result in reductions in planned performance. The report pointed out that many of the affected subsystems, such as the acoustic intercept system and the propulsor, are necessary to support the NSSN's operational effectiveness and survivability. The Navy tester concluded that the NSSN could potentially be operationally effective and suitable. However, he recommended that a new NSSN modeling baseline be established to reflect more current information, because the performance of some subsystems had been reduced below the performance modeled in the 1995 NSSN milestone II cost and operational effectiveness analysis and the April 1995 early operational assessment. The tester also recommended that this new design baseline be evaluated against the currently projected threat. Navy program officials are cognizant of the Navy tester's report but have indicated that there are no plans to perform updated survivability modeling of the total system against the new threat. Navy program officials told us that they have modeled, or plan to model, the performance of individual subsystems instead. Program officials also stated that even at the current reduced performance levels, the subsystems discussed will still meet NSSN minimum requirements. However, the submarine's survivability has only been assessed using performance levels above the minimum requirements. The combined effects of a more capable threat, the reduction of some system performance requirements, and the risks inherent in new development could affect the NSSN's operational effectiveness. 
Without an evaluation that reflects current conditions, DOD and Navy program officials appear to have little basis for their confidence in how the submarine, with its design changes, will perform. Given the complexities and uncertainties in weapon system acquisitions, encountering performance problems during the development phase is not unusual. At this point in the NSSN program, using modeling tools to identify and correct problems that could affect the system’s survivability, such as those described in this report, would allow changes to be made in development schedules and funding profiles at a much lower cost than if problems were identified later. To avoid spending funds on construction from a design that may require costly modifications to meet requirements, we recommend that the Secretary of Defense require the Secretary of the Navy to conduct survivability modeling to assess the impact that reduced capabilities of various subsystems have on ship survivability when integrated into the overall NSSN design. Available research and development funding could be used for this modeling. Further, we recommend that the Secretary of Defense take steps to ensure that the results are used in making fiscal year spending decisions on the program. DOD provided written comments on a draft of this report, which are reprinted in appendix I. DOD stated that it agreed with the recommendation in our draft report to conduct sufficient survivability modeling to assess the extent to which the NSSN will be fully capable of countering the threat and meeting all its mission requirements. In its comments, DOD acknowledged that the performance of some subsystems was reduced below that used to model the survivability of the NSSN during the milestone II cost and operational assessment and the 1995 early operational assessment. DOD laid out the process by which it makes decisions on what testing is needed and how the test results are used. 
DOD offered, as an example, that design changes to the Acoustic Intercept Receiver and to the Electronic Warfare Support Measures suites were assessed and determined to have reduced performance. The program's management concluded that the reduced performance of these subsystems would not compromise ship survivability and, therefore, no higher-level modeling was required. DOD also stated that operational assessments, already scheduled for fiscal year 2000 on an interim basis and fiscal year 2002 for a final report, will assess the impact on overall NSSN performance of changes to the design, validated threat projection, and demonstrated subsystem performance. The intent of our recommendation, however, was to have DOD conduct survivability modeling. As we point out in the report, until the cumulative effect of subsystem changes, including reduced performance, on overall ship survivability is modeled, it will not be known if the NSSN will perform as intended. For example, while performance modeling indicates that the restructured acoustic intercept system may perform as expected, this does not answer the question of what impact the system's reduced capabilities have on ship survivability when integrated into the overall NSSN design. Therefore, although important, individual assessments of subsystem performance, such as those conducted in the January 1997 operational assessment, do not provide information on overall survivability when the subsystems are integrated into the overall submarine design. Likewise, the second phase of operational testing discussed in our report and scheduled to be reported on in fiscal year 2000 will not include an assessment of the overall survivability of the NSSN at reduced levels of subsystem performance, unless explicitly requested and paid for by the program sponsor. Program officials have no plans to do so. 
As we note, the Navy rejected the recommendation in the January 1997 operational assessment that a new NSSN baseline be established to reflect more current information and be evaluated against the currently projected threat. Based on our discussions with Navy officials, there is no indication that tests scheduled for fiscal year 2000 will include an assessment of overall survivability or that the results of the tests will be used to make modifications to the program. If the combined reduction of subsystem performance is subsequently found to affect overall ship survivability, the NSSN program could face expensive modifications or reduced capability. Therefore, we have modified our draft report recommendation to clarify what we meant by sufficient survivability modeling. DOD concurred with our recommendation that the Secretary of Defense take steps to ensure that the modeling results are used in making fiscal year spending decisions on the program. DOD officials have stated that the department now plans to conduct comprehensive annual reviews of the NSSN program. We analyzed Navy and DOD documents and studies, such as the NSSN cost and operational effectiveness analysis, and discussed the status of the NSSN's acquisition with Navy program officials in Washington, D.C.; at the Naval Undersea Warfare Center, Newport, Rhode Island; and the Naval Surface Warfare Center, Carderock Division. We held additional discussions with officials from the offices of the Chief of Naval Operations; the Assistant Secretary of the Navy for Research, Development, and Acquisition; the Secretary of Defense; and the Program Executive Office for Submarines. We also discussed program acquisition status with (1) representatives from Electric Boat Corporation, Groton, Connecticut, and Newport News Shipbuilding and Drydock Company, Newport News, Virginia; (2) the Supervisors of Shipbuilding at these respective shipyards; and (3) representatives from Lockheed Martin Federal Systems, Manassas, Virginia. 
In addition, we analyzed the threat modeling and other testing results contained in the NSSN's operational assessments and discussed the results with representatives of the Commander, Operational Test and Evaluation Force, Norfolk, Virginia. Discussions on the capabilities of the projected submarine threat were held with representatives of the Office of Naval Intelligence, the Defense Intelligence Agency, and the Central Intelligence Agency. We conducted our review between December 1996 and March 1998 in accordance with generally accepted government auditing standards. We are sending copies of this report to the four congressional Defense committees, the Secretary of the Navy, and the Assistant Secretary of the Navy for Research, Development, and Acquisition. Upon request, we will make copies available to other interested parties. Please contact me on (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are in appendix II. William T. Woods, Assistant General Counsel 
GAO reviewed: (1) the status of the new attack submarine (NSSN) development program; (2) current information on the antisubmarine warfare threat; and (3) the Navy's plans to model the NSSN's survivability. GAO noted that: (1) since modeling the NSSN's survivability in 1995, the Navy, because of technical and funding limitations, has modified the design for some subsystems in ways that reduce performance below the optimal levels used to model the 1995 baseline design; (2) other systems also have developmental problems; (3) at the same time, Navy threat assessments have reported that the open ocean antisubmarine warfare threat has improved, resulting in a more capable threat than previously projected; (4) the Navy tester's 1997 assessment report concluded that the NSSN could potentially be operationally effective and suitable, but noted a number of significant changes and risks in the development program; (5) the report also noted several technological advances in the open ocean antisubmarine warfare threat; (6) in addition, the report stated that budgetary pressures resulted in tradeoffs in some of the performance modeled in the NSSN milestone II cost and operational effectiveness analysis and the tester's 1995 early operational assessment; (7) as of November 1997, the Navy program manager planned no additional survivability modeling to test the NSSN with its potential for reduced performance against the improved threat; (8) however, as a result of its 1997 assessment, the Navy tester recommended that the Navy develop a new modeling baseline that reflects the reduced performance of some subsystems and that this new design baseline be evaluated against the increased threat; and (9) without such modeling, the Department of Defense and Navy program officials appear to have little basis for their confidence that the currently designed submarine will perform as expected. 
Strategic human capital management is receiving increased attention across the federal government. In January 2001, we designated strategic human capital management as a governmentwide high-risk area and continued this designation with the release of High-Risk Series: An Update in January 2003. Despite the considerable progress over the past 2 years, it remains clear that today's federal human capital strategies are not appropriately constituted to meet current and emerging challenges or drive the needed transformation across the federal government. One key area in which federal agencies continue to face challenges is creating results-oriented organizational cultures. Agencies lack organizational cultures that promote high performance and accountability and that empower and include employees in setting and accomplishing programmatic goals, characteristics that are critical to successful organizations. To help agency leaders effectively lead and manage their people and integrate human capital considerations into daily decision making and the program results they seek to achieve, we developed a strategic human capital model. The model highlights the kind of thinking that agencies should apply, as well as some of the steps they can take, to make progress in managing human capital strategically. Since we designated strategic human capital management as a high-risk area in January 2001, the President's Management Agenda, released in August 2001, placed the strategic management of human capital at the top of the administration's management agenda. In October 2002, OMB and OPM updated the standards for success in the human capital area of the President's Management Agenda, reflecting language that was developed in collaboration with GAO. To assist agencies in responding to the revised standards and addressing the human capital challenges, OPM released the Human Capital Assessment and Accountability Framework. 
One of the standards of success in the framework is a results-oriented performance culture, specifically a performance management system that effectively differentiates between high and low performance. On September 24, 2002, we convened a forum to discuss useful practices from major private and public sector organizational mergers, acquisitions, and transformations that federal agencies could learn from to successfully transform their cultures and that the then proposed Department of Homeland Security could use to merge its various originating agencies or their components into a unified department. The participants identified the use of performance management systems as a tool to help manage and direct the transformation process. Specifically, performance management systems must create a line of sight showing how team, unit, and individual performance can contribute to overall organizational results. The system serves as the basis for setting expectations for employees' roles in the transformation process and for evaluating individual performance and contributions to the success of the transformation process and, ultimately, to the achievement of organizational results. An effective performance management system can be a strategic tool to drive internal change and achieve desired results. We found that public sector organizations in the United States and abroad have implemented a selected, generally consistent set of key practices as part of their performance management systems. Federal agencies can implement these practices to develop effective performance management systems that help create the line of sight between individual performance and organizational success and transform their cultures to be more results-oriented, customer-focused, and collaborative in nature. An explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations. 
These organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals and encouraging individuals to focus on their roles and responsibilities to help achieve these goals. Such organizations continuously review and revise their performance management systems to support their strategic and performance goals, as well as their core values and transformational objectives. High-performing organizations can show how the products and services they deliver contribute to results by aligning performance expectations of top leadership with organizational goals and then cascading those expectations down to lower levels. To this end, we reported that in fiscal year 2000 the Federal Aviation Administration (FAA) was able to show how the Department of Transportation's (DOT) strategic goal to promote public health and safety was cascaded through the FAA Administrator's performance expectation to reduce the commercial air carrier fatal accident rate to a program director's performance expectation to develop software to help aircraft maintain safe altitudes in their approach paths, as shown in figure 2. The FAA Administrator's performance agreement for fiscal year 2000 included a performance expectation to reduce the commercial air carrier fatal accident rate by implementing the Safer Skies Agenda. As part of implementing the Safer Skies Agenda, the Flight Standards Service Director had a performance expectation to meet milestones for reducing a type of crash called controlled flight into terrain, which occurs when pilots lose their sense of the plane's relation to the surface below. These milestones included validating Minimum Safe Altitude Warning software, which had to be developed by the Aviation Systems Standards Program Director. 
This software system is designed to aid air traffic controllers through both visual and aural alarms by alerting them when a tracked aircraft is below, or predicted by the computer to go below, a predetermined minimum altitude. Similarly, we recently reported that as a first step in establishing a permanent performance management system, the Transportation Security Administration (TSA) has implemented standardized performance agreements for groups of employees, including transportation security screeners, supervisory transportation security screeners, supervisors, and executives. These performance agreements include both organizational and individual goals and standards for satisfactory performance that can help TSA establish a line of sight showing how individual performance contributes to organizational goals. For example, each executive performance agreement includes organizational goals, such as to maintain the nation's air security and ensure an emphasis on customer satisfaction, as well as individual goals, such as to demonstrate through actions, words, and leadership, a commitment to civil rights. To strengthen its current executive performance agreement and foster the culture of a high-performing organization, we recommended that TSA add performance expectations that establish explicit targets directly linked to organizational goals, foster the necessary collaboration within and across organizational boundaries to achieve results, and demonstrate commitment to lead and facilitate change. TSA agreed with this recommendation. We reported in September 2002 that some agencies set targets for individual performance that were linked to organizational goals. For example, the Veterans Benefits Administration (VBA) identified targets with specific levels of performance for senior executives that were explicitly linked to VBA's priorities for fiscal year 2001 and the Department of Veterans Affairs' (VA) strategic goals for fiscal years 2001 to 2006. 
For example, to contribute to VA’s strategic goal to “provide ‘One VA’ world class service to veterans and their families through the effective management of people, technology, processes and financial resources” and to address its priority of speed and timeliness, VBA set a national target for property holding time—the average number of months from date of acquisition to date of sale of properties acquired due to defaults on VA guaranteed loans—of 10 months for fiscal year 2001. To contribute to the national target, the senior executive in the Nashville regional office had a performance expectation for his office to meet a target of 8.6 months. As public sector organizations shift their focus of accountability from outputs to results, they have recognized that the activities needed to achieve those results often transcend specific organizational boundaries. Consequently, organizations that are flatter and focused on collaboration, interaction, and teamwork across organizational boundaries are increasingly critical to achieve results. High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on fostering the necessary collaboration both within and across organizational boundaries to achieve results. For example, in August 2002, we reported that Canada’s agricultural department, Agriculture and Agri-Food Canada, used individual performance agreements to specify the internal or external organizations whose collaboration is needed to help individuals contribute to the departmental crosscutting goals or areas. 
Specifically, the head of the department’s Market and Industry Services Branch had in his 2001-02 performance agreement the expectation to “lead efforts to develop the department’s ability to deal with emerging technical trade issues” that aligned with the crosscutting area of “international issues.” The agreement also listed two internal units whose collaboration was needed to meet the expectation—the department’s Research Branch and its Strategic Policy Branch—as well as two external organizations—the Canadian Food Inspection Agency and Health Canada. While the performance agreement provides a vehicle for identifying and communicating with the various organizations associated with each crosscutting performance expectation, the department leaves it up to individuals to determine how to collaborate with their organizations when working to fulfill their performance agreements. Similarly, we reported in October 2000 that the Veterans Health Administration’s (VHA) Veterans Integrated Service Network (VISN) headquartered in Cincinnati implemented performance agreements that focused on patient services for the entire VISN and were designed to encourage the VISN’s medical centers to work collaboratively. In 2000, the VISN Director had a performance agreement with “care line” directors for patient services, such as primary care, medical and surgical care, and mental health care. In particular, the mental health care line director’s performance agreement included improvement goals related to mental health for the entire VISN. To make progress towards these goals, this care line director had to work across each of the VISN’s four medical centers with the corresponding care line managers at each medical center. 
As part of this collaboration, the care line director needed to establish consensus among VISN officials and external stakeholders on the strategic direction for the services provided by the mental health care line across the VISN; develop, implement, and revise integrated clinical programs to reflect that strategic direction for the VISN; and allocate resources among the centers for mental health programs to implement these programs. High-performing organizations provide objective performance information to individuals to show progress in achieving organizational results and other priorities, such as customer satisfaction and employee perspectives, and help individuals manage during the year, identify performance gaps, and pinpoint improvement opportunities. Having this performance information in a useful format also helps individuals track their performance against organizational goals and compare their performance to that of other individuals. To this end, in September 2002 we described the Bureau of Land Management's (BLM) Web-based data system, called the Director's Tracking System, which collects and makes available on a real-time basis data on each senior executive's progress in his or her state office towards BLM's organizational priorities and the resources expended on each priority. In particular, a BLM senior executive in headquarters responsible for the wild horse and burro adoptions program can use the tracking system to identify at any time during the year where the senior executives in the state offices responsible for this program stand against their targets and what the program costs have been by state. To address progress towards its performance goals, we reported in October 2000 that VHA produced quarterly Network Performance Reports that presented both VHA-wide and VISN-specific progress on each of the goals in the then 22 VISN directors' performance agreements. 
VHA’s then Chief Network Officer and each of the VISN directors used these performance reports to inform quarterly meetings they had and to discuss each VISN’s progress towards the goals in the director’s performance agreement. Specifically, the Network Performance Report issued in May 2000 showed that 90 percent of the patients in VISN 5 located in Baltimore received follow-up care after hospitalization for mental illness in the third quarter of fiscal year 2000. Further, that VISN produced biweekly performance reports that allowed it to monitor its three medical centers’ progress on the VHA-wide performance goals in the VISN director’s performance agreements. For example, the VISN’s biweekly performance report for August 2000 showed that the VISN-wide rate for follow-up care after hospitalization for mental illness remained at 90 percent, while its three medical centers ranged from 89 to 91 percent for follow-up care. In addition to showing progress in achieving organizational results, high- performing organizations also provide performance information on other priorities, such as customer satisfaction and employee perspectives. We reported in September 2002 that to emphasize a balanced set of performance expectations, some agencies disaggregated customer and employee satisfaction survey data so that the results were applicable to an executive’s customers and employees. For example, from its Use Authorization Survey administered to various customers in fiscal year 2000, BLM disaggregated the survey data to provide the applicable results to individuals who head the state offices. Specifically, the executive in the Montana state office received data for his state showing that 81 percent of the grazing permit customers surveyed gave favorable ratings for the timeliness of permit processing and for service quality. The executive addressed the results of the customer survey in his self-assessment for the 2001 performance appraisal cycle. 
We also reported that to help senior executives address employee perspectives, the Internal Revenue Service (IRS) disaggregated data to the workgroup level from its IRS/National Treasury Employees Union Employee Satisfaction Survey, which measures general satisfaction with IRS, the workplace, and the union. The Gallup Organization administered this survey to all IRS employees. The survey comprised Gallup's 12 questions (Q12); additional questions unique to IRS, such as views on local union chapters and employee organizations; and questions on issues IRS has been tracking over time. Gallup provided the results for each workgroup. For example, an executive could compare the performance of his or her workgroup to that of other operating divisions and to that of IRS as a whole. Specifically, for the 2001 survey, an executive's workgroup scored 3.68 out of a possible 5 for the question "I have the materials and equipment I need to do my work right" compared to the IRS-wide score of 3.58. To allow individuals to benchmark externally, Gallup compared each workgroup's results to the 50th (median) and 75th (best practices) percentile scores from Gallup's Q12 database. To benchmark internally, IRS provided the servicewide results from the previous year's survey in each workgroup report. High-performing organizations require individuals to take follow-up actions based on the performance information available to them. By requiring and tracking such follow-up actions on performance gaps, these organizations underscore the importance of holding individuals accountable for making progress on their priorities. To help address employee perspectives in their senior executive performance management system, we reported in September 2002 that the Federal Highway Administration required senior executives to use 360-degree feedback instruments to solicit employees' views on their leadership skills. 
Based on the 360-degree feedback, senior executives were to identify action items and incorporate them into their individual performance plans for the next fiscal year. While the 360-degree feedback instrument was intended for developmental purposes to help senior executives identify areas for improvement and was not included in the executive’s performance evaluation, executives were held accountable for taking some action with the 360-degree feedback results and responding to the concerns of their peers, customers, and subordinates. For example, based on 360-degree feedback, a senior executive for field services identified better communications with subordinates and increased collaboration among colleagues as areas for improvement, and as required, he then incorporated action items into his individual performance plan. In fiscal year 2001, he set a performance expectation to develop a leadership self-improvement action plan and identify appropriate improvement goals. In his self-assessment for fiscal year 2001, he reported that he improved his personal contact and attention to the division offices as evidenced by a 30 percent increase in visits to the divisions that year. Also, he stated that he encouraged his subordinates to assess their leadership skills. Consequently, 9 of his 11 subordinates used 360-degree feedback instruments to improve their personal leadership competencies. We also reported that to address employee perspectives based on the performance information obtained through its employee survey, IRS required senior executives to hold workgroup meetings with their employees to discuss the workgroups’ survey results and develop action plans to address these results. According to a senior executive in IRS’s criminal investigation unit, the workgroup meetings were beneficial because they increased communication with employees and identified improvements in the quality of worklife. 
For example, through this executive’s workgroup meetings on the 2001 employee survey results, employees identified the need for recruiting supervisory special agents to even out some of the workload. Subsequently, the senior executive set a performance expectation in his fiscal year 2002 individual performance plan to ensure that the field office had a strong recruitment program to attract viable candidates. Similarly, for its customer satisfaction survey, the former Commissioner of Internal Revenue set an expectation that the senior executives who head the business units develop action plans based on the performance information from IRS’s customer survey that are relevant to the needs of their particular customers. For example, an IRS senior executive who is the area director for compliance in Laguna Niguel, California, developed a consolidated action plan based on the plans he required from each of his territory managers that identified ways to improve low scores from the customer survey. Specifically, the senior executive had an expectation in his action plan to improve how customers were treated during collection and examination activities by ensuring that examiners explain to customers their taxpayer rights, as well as why they were selected for examination and what they could expect. Further, the senior executive planned to ensure that territory managers solicited feedback from customers on their treatment during these activities and identified specific reasons for any customer dissatisfaction. In his midyear self-assessment for fiscal year 2002, the senior executive stated that substantial progress was being made in achieving the collection and examination customer satisfaction goals. High-performing organizations use competencies to examine individual contributions to organizational results. 
Competencies, which define the skills and supporting behaviors that individuals are expected to exhibit to carry out their work effectively, can provide a fuller picture of an individual’s performance. To help reinforce employee behaviors and actions that support the agency’s mission, we reported that in fiscal year 2000, IRS implemented a performance management system that requires executives and managers to include critical job responsibilities with supporting behaviors in their performance agreements, which serve as the basis for their annual performance appraisals. The critical job responsibilities, which represent IRS’s core values, include leadership, employee satisfaction, customer satisfaction, business results, and equal employment opportunity and are further defined by supporting behaviors—broad actions and competencies that IRS expects its executives and managers to demonstrate during the year. The critical job responsibilities and supporting behaviors are intended to provide executives and managers with a consistent message about how their daily activities are to reflect the organization’s core values. Three of the five critical job responsibilities—customer satisfaction, business results, and employee satisfaction—align with IRS’s strategic goals as shown in figure 3. For example, by establishing a critical job responsibility and supporting behavior in customer satisfaction, IRS aligns managers’ performance to its strategic goal of “top-quality service to each taxpayer in every interaction.” The other two critical job responsibilities, leadership and equal employment opportunity, reinforce behaviors that IRS considers necessary for organizational change and an open and fair work environment. We described in August 2002 how the United Kingdom considers competencies in evaluating executives. 
The executives in the Senior Civil Service have performance agreements that include both business objectives and certain core competencies that executives should develop in order to effectively achieve these objectives. For example, an executive and his supervisor select one or two competencies, such as “thinking strategically,” “getting the best from people,” or “focusing on delivery.” Each competency is further described by several specific behaviors. For example, the competency of “getting the best from people” includes behaviors such as “developing people to achieve high performance;” “adopting a leadership style to suit different people, cultures, and situations;” “coaching individuals so they achieve their best;” and “praising achievements and celebrating success.” The supervisor evaluated the executive’s demonstration of these selected competencies and the achievement of business objectives when determining the size of the annual pay award. Similarly, we described in August 2002 how New Zealand’s Inland Revenue Department evaluated the performance of its employees against results and core and technical competencies and weighted these results and competencies differently in each employee evaluation depending on the position. All employees were evaluated on their commitments to deliver results, which account for 40 to 55 percent of their overall performance evaluations. In addition, all employees were evaluated against core organizational competencies such as customer focus, strategic leadership, analysis and decision making, and communication, which make up 20 to 50 percent of their evaluations. Some employees who have special knowledge and expertise in areas such as tax policy, information technology, and human capital were also evaluated against technical competencies that may account for 20 to 35 percent of their overall performance evaluations. 
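A minimal sketch of how such a position-weighted evaluation might combine component scores follows. The 45/25/30 split and the 1–5 ratings below are hypothetical values, chosen only to fall within the ranges the department used for a technical specialist; they are not actual departmental figures.

```python
def overall_rating(scores, weights):
    """Weighted combination of component ratings on a 1-5 scale."""
    # The per-position weights must account for 100 percent of the evaluation.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(scores[part] * weight for part, weight in weights.items())

# A hypothetical technical specialist: results weighted 45%, core
# competencies 25%, technical competencies 30% (within the stated ranges).
weights = {"results": 0.45, "core": 0.25, "technical": 0.30}
scores = {"results": 4.0, "core": 3.0, "technical": 5.0}
rating = overall_rating(scores, weights)
```

Because the weights vary by position, the same set of component scores can yield different overall ratings for, say, a frontline employee and a tax-policy specialist.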
An employee who was considered fully successful in achieving his or her performance commitments, but did not demonstrate the expected competencies, might not be assessed as fully successful in his or her particular position. Conversely, if an employee demonstrated the expected competencies, but did not achieve the agreed-to performance commitments, he or she could also be considered less than fully successful. As part of the department’s review of the program conducted in 2000, both managers and staff cited the department’s policy of evaluating individual performance based on both results and competencies as a better way to measure staff performance than focusing on results or competencies alone. High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. At the same time, these organizations recognize that valid, reliable, and transparent performance management systems with adequate safeguards for employees are the precondition to such an approach. For example, we reported in August 2002 how Canada links pay to the performance of its senior executives through its Performance Management Program. Under the Performance Management Program, introduced in 1999, a significant portion of the total cash compensation package that top and senior executives can receive takes the form of “at-risk” pay. This annual lump-sum payment ranges from 10 to 15 percent of base pay for senior executives, and as high as 25 percent for deputy ministers. Another central feature of Canada’s Performance Management Program is that both increases in base salary and at-risk pay are only awarded to executives who successfully achieve commitments agreed to in their annual performance agreements. 
These commitments are of two types: “ongoing commitments,” which include continuing responsibilities associated with the position, and “key commitments,” which identify priority areas for the current performance cycle. Departments award increases in base pay to executives who successfully carry out their ongoing commitments and award at-risk pay to individuals who, in addition to meeting all ongoing commitments, also successfully deliver on key commitments. Executives who do not meet at least one key commitment are not eligible for this lump-sum performance award. Under the Performance Management Program, there are no automatic salary increases connected with length of service. The Ontario Public Service (OPS) links executive performance pay to the performance of the provincial government as a whole, the performance of the executive’s home ministry, the contribution of that ministry to overall governmentwide results, as well as the individual’s own performance. The amount of the award an individual executive can receive ranges from no payment to a maximum of 20 percent of base salary. To determine the amount of performance pay for any given fiscal year, the Premier and Cabinet, the top political leadership of the Ontario government, first determine whether and to what extent the government as a whole has achieved the key provincial goals it established at the beginning of the fiscal year. If they determine that the government has met a threshold of satisfactory performance, these officials designate a certain percentage as the governmentwide “incentive envelope,” which represents the percentage that will be the basis for subsequent calculations used to determine performance awards. The Secretary of Cabinet, in consultation with the Premier, then assesses each ministry’s performance based on the ministry’s relative contribution to enabling Ontario to achieve its key provincial goals and the ministry’s performance against its own approved business plan. 
As a result of this assessment, each ministry receives an amount equivalent to a specific percentage of the ministry’s total executive payroll for performance awards. Finally, each ministry determines the actual amount of an executive’s performance award by assessing both the individual’s actual performance against his or her prior performance commitments as well as the individual’s level of responsibility. For example, in the 1999–2000 performance cycle, the Premier and Cabinet determined that the government as a whole had met a threshold of satisfactory performance and set an incentive envelope of 10 percent. The Secretary of Cabinet and the Premier then assessed the performance of a particular ministry, deciding that it had a “critical impact” on the government’s ability to deliver on its results that year, including the rollout of its quality service and e-government initiatives. They also found that this ministry “exceeded” the key commitments established in its business plan. In this case, the ministry received an amount equivalent to 12.5 percent of its executive payroll towards performance payments. Individual awards, depending upon the performance and position of the executive, ranged from no payment to 15 percent, and could have reached as high as 20 percent under the program’s regulations. In contrast, during the same performance cycle, the Secretary of Cabinet and the Premier found that another ministry had only “contributed” to governmentwide goals while having “met” its business commitments. Accordingly, this ministry received only 5 percent of its executive payroll towards performance payments. Individual awards in this case ranged from no payment to 7.5 percent. (See fig. 4.) An executive who performed the job of a “manager,” the least senior executive position, and had “met” some commitments contained in his or her performance agreement received a performance award of 2.5% of base pay. 
An executive who performed the job of an “assistant deputy minister,” the second most senior executive position, and had “exceeded” commitments contained in his or her performance agreement received a performance award of 15% of base pay. Effective performance management requires the organization’s leadership to make meaningful distinctions between acceptable and outstanding performance of individuals and to appropriately reward those who perform at the highest level. In doing so, performance management systems in high-performing organizations typically seek to achieve three key objectives: (1) they strive to provide candid and constructive feedback to help individuals maximize their contribution and potential in understanding and realizing the goals and objectives of the organization, (2) they seek to provide management with the objective and fact-based information it needs to reward top performers, and (3) they provide the necessary information and documentation to deal with poor performers. We reported that IRS recognizes that it is still working at implementing an effective performance management system that makes meaningful distinctions in senior executive performance. For example, IRS established an executive compensation plan for determining base salary, performance bonuses, and other awards for its senior executives that is intended to explicitly link individual performance to organizational performance and is designed to emphasize performance. IRS piloted the compensation plan in fiscal year 2000 with the top senior executives who report to the Commissioner of Internal Revenue and used it for all senior executives in fiscal year 2001. To recognize performance across different levels of responsibilities and commitments, IRS assigned senior executives to one of three bonus levels at the beginning of the performance appraisal cycle. 
Assignments depend on the senior executives’ responsibilities and commitments in their individual performance plans for the year, as well as the scope of their work and its impact on IRS’s overall mission and goals. For example, the Commissioner of Internal Revenue or the Deputy Commissioner assigns senior executives to bonus level three—considered to be the level with the highest responsibilities and commitments—only if they are part of the Senior Leadership Team. IRS restricts the number of senior executives assigned to each bonus level for each business unit. In addition, for each bonus level, IRS establishes set bonus ranges by individual summary evaluation rating, which is intended to reinforce the link between performance and rewards. The bonus levels and corresponding bonus amounts of base salary by summary rating are shown in table 1. To help ensure realistic and consistent performance ratings, each IRS business unit had a “point budget” for assigning performance ratings that was the total of four points for each senior executive in the unit. After the initial summary evaluation ratings were assigned, the senior executives’ ratings were converted into points—an “outstanding” rating converted to six points; an “exceeded” rating to four points, which was the baseline; a “met” rating to two points; and a “not met” rating to zero points. If the business unit exceeded its point budget, it had the opportunity to request additional points from the Deputy Commissioner. IRS officials indicated that none of the business units requested additional points for the fiscal year 2001 ratings. For fiscal year 2001, 31 percent of the senior executives received a rating of outstanding compared to 42 percent for fiscal year 2000, 49 percent received a rating of exceeded expectations compared to 55 percent, and 20 percent received a rating of met expectations compared to 3 percent. 
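The point-budget arithmetic described above can be sketched as follows. The rating-to-point conversions and the four-points-per-executive budget are from the report; the sample business unit’s ratings are hypothetical.

```python
# Point values for each summary rating, as described in the report.
POINTS = {"outstanding": 6, "exceeded": 4, "met": 2, "not met": 0}

def check_point_budget(ratings):
    """Return (points_used, budget, within_budget) for a unit's ratings."""
    budget = 4 * len(ratings)               # four points per senior executive
    used = sum(POINTS[r] for r in ratings)  # convert each rating to points
    return used, budget, used <= budget

# A hypothetical four-executive business unit.
used, budget, ok = check_point_budget(
    ["outstanding", "exceeded", "met", "exceeded"])
```

Because “exceeded” is the baseline at four points, every “outstanding” rating a unit assigns must be offset by a “met” or “not met” elsewhere, unless the unit requests additional points from the Deputy Commissioner.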
In fiscal year 2001, 52 percent of senior executives received a bonus, compared to 56 percent in fiscal year 2000. IRS officials indicated that they are still gaining experience using the new compensation plan and will wait to establish trend data before they evaluate the link between performance and bonus decisions. To stress making performance results the basis for pay, awards, and other personnel decisions for senior executives, OPM implemented amended regulations for senior executive performance management requiring agencies to establish performance management systems for the rating cycles beginning in 2001. These systems are to hold senior executives accountable for their individual and organizational performance by linking performance management with results-oriented organizational goals and evaluating senior executive performance using measures that balance organizational results with customer satisfaction, employee perspectives, and other measures agencies decide are appropriate. According to OPM, these regulations require agency leadership to expect excellence and take action to reward outstanding performers and deal appropriately with those who do not measure up. High-performing organizations have found that actively involving employees and stakeholders, such as unions or other employee associations, when developing results-oriented performance management systems helps improve employees’ confidence and belief in the fairness of the system and increase their understanding and ownership of organizational goals and objectives. Effective performance management systems depend on individuals’, their supervisors’, and management’s common understanding, support, and use of these systems to reinforce the connection between performance management and organizational results. These organizations recognize that they must conduct frequent training for staff members at all levels of the organization to maximize the effectiveness of the performance management systems. 
Overall, employees and supervisors share the responsibility for individual performance management. Both are actively involved in identifying how they can contribute to organizational results and are held accountable for their contributions. We described in August 2002 that, when reforming their performance management systems, public sector organizations in other countries consulted a wide range of employees and stakeholders early in the process, obtained direct feedback from them, and engaged employee unions or associations. Consult a Wide Range of Stakeholders Early in the Process. An important step to ensure the success of a new performance management system is to consult a wide range of stakeholders and to do so early in the process. For its new Senior Civil Service performance management and pay system, the United Kingdom’s Cabinet Office recognized the importance of meeting with and including employees and stakeholders in the formation of the new system. The Cabinet Office obtained feedback from various employee associations, a civil servant advisory group, a project board composed of personnel directors, and permanent secretaries. As part of Canada’s effort to consult stakeholders concerning its new performance management system, the government convened an interdepartmental committee to explore and discuss possible approaches, consulted networks of human capital professionals and executives across the country, and engaged top executives through the Committee of Senior Officials, consisting of the Clerk of the Privy Council and heads of major departments and other top officials. Obtain Feedback Directly from Employees. Directly asking employees to provide feedback on proposed changes in their performance management systems encourages a sense of involvement and ownership, allows employees to express their views, and helps validate the system to ensure that the performance measures are appropriate. 
Asking employees to provide feedback should not be a one-time process, but an ongoing process through the training of employees to ensure common understanding of the evaluation, implementation, and results of the systems. For example, the United Kingdom’s Cabinet Office provided a packet detailing proposed reforms of the existing performance management system to approximately 3,000 members of the Senior Civil Service in a large-scale effort to obtain their feedback on the proposed changes. In addition, each department also held consultations where individuals listened to proposed reforms. More than 1,200 executives (approximately 40 percent of the Senior Civil Service) participated in the process. The Cabinet Office then collected and incorporated these views into the final proposal, which was adopted by the government and implemented in April 2001. Engage Employee Unions or Associations. We have previously reported that in the United States obtaining union cooperation and support can help to achieve consensus on planned changes, avoid misunderstandings, and assist in the expeditious resolution of problems. Agencies in New Zealand and Canada actively engaged unions or employee associations when making changes to performance management systems. In New Zealand, an agreement between government and the primary public service union created a “Partnership for Quality” framework that provides for ongoing, mutual consultation on issues such as performance management. Specifically, the Department of Child, Youth, and Family Services and the Public Service Association entered into a joint partnership agreement that emphasizes the importance of mutual consideration of each other’s organizational needs and constraints. 
For example, two of the objectives stated in the 2001–02 partnership agreement were to (1) develop the parties’ understanding of each other’s business and (2) equip managers, delegates, and members with the knowledge and skills required to build a partnership for a quality relationship in the workplace. Department and union officials told us that this framework had considerably improved how both parties approach potentially contentious issues, such as employee performance management. Also included in the partnership agreement were measures to evaluate the success of the relationship such as (1) sharing ownership of issues, plans, and outcomes and (2) quickly resolving issues in a solution-focused way, with a reduction in grievances. The government of Canada repeatedly consulted with the Association of Professional Executives of the Public Service of Canada (Association) about its proposed reforms to the executive performance management system and accompanying pay-at-risk provisions. This dialogue began prior to the system’s rollout and continued through initial implementation during which the Association was actively involved in collecting feedback from executives as well as making recommendations. For example, as part of an assessment of Canada’s Performance Management Program, based on consultations the Association had with its membership after the first year of the program, the Association identified several issues needing further attention, including the need to provide executives with additional guidance on how to develop their individual performance agreements, particularly with regard to identifying and selecting different types of performance commitments. This recommendation and others were shared with the government, and the official Performance Management Program guidance issued the following year incorporated these concerns. 
The experience of successful cultural transformations and change management initiatives in large public and private organizations suggests that it can often take 5 to 7 years until such initiatives are fully implemented and cultures are transformed in a substantial manner. Because this time frame can easily outlast the tenures of top political appointees, high-performing organizations recognize that they need to reinforce accountability for organizational goals during times of leadership transitions through the use of performance agreements as part of their performance management systems. At a recent GAO-sponsored roundtable, we reported on the necessity to elevate attention, integrate various efforts, and institutionalize accountability for addressing management issues and leading transformational change. The average tenure of political leadership and the long-term nature of the change management initiatives that are needed can have critical implications for the success of those initiatives. Specifically, in the federal government, the frequent turnover of the political leadership has often made it difficult to obtain the sustained and inspired attention required to make needed changes. The average tenure of political appointees governmentwide for the period 1990-2001 was just under 3 years. In addition, career executives can help provide the long-term commitment and focus needed to transform an agency, but the retirement eligibility of executives is increasing. For example, 71 percent of career senior executive service members will reach retirement eligibility by the end of fiscal year 2005—an historically high rate of eligibility. Without careful planning, the retirement eligibility rate suggests an eventual loss in institutional knowledge, expertise, and leadership continuity. 
High-performing organizations use their performance management systems to help provide continuity during these times of transition by maintaining a consistent focus on a set of broad programmatic priorities. Performance agreements can be used to clearly and concisely outline top leadership priorities during a given year and thereby serve as a convenient vehicle for new leadership to identify and maintain focus on the most pressing issues confronting the organization as it transforms. We have observed that a specific performance expectation in the leadership’s performance agreement to lead and facilitate change during this transition could be a critical element as organizations transform themselves to succeed in an environment that is more results-oriented, less hierarchical, and more integrated. More generally, the existence of an established process for developing and using performance agreements provides new leadership with a tested tool that it can use to communicate its priorities and instill those priorities throughout the organization. We described in August 2002 how OPS and Canada’s Performance Management Program institutionalized the use of performance agreements in their performance management systems to withstand organizational changes and cascaded the performance agreements from top leadership to lower levels of the organizations. Since 1996, OPS has used performance agreements to align and cascade performance goals down to all organizational levels and all employees and has required senior executives to have annual performance agreements that link their performance commitments to key provincial priorities and approved ministry business plans. In 2000, OPS extended this requirement so that agreements are now required of all employees, from senior executives to frontline employees. Specifically, all employees develop individual performance commitments that link to their supervisors’ performance agreements and their ministries’ business plans. 
Senior executives and some middle-level managers and specialists also link commitments contained in their individual performance plans to the government of Ontario’s key provincial priorities in areas such as fiscal control and management, human capital leadership, and fostering a culture of innovation. Similarly, Canada’s Performance Management Program cascades goals down through all levels of senior executives. It requires that each department’s deputy minister—the senior career public service official responsible for leading Canadian government departments—has a written performance agreement that links his or her individual commitments to the organization’s business plan, strategies, and priorities. From the deputy minister, commitments cascade down through assistant deputy ministers, directors general, and directors. At every level, the performance agreement between each executive and his or her manager is intended to document a mutual understanding about the performance that is expected and how it will be assessed. Some agencies, such as Industry Canada and the Public Service Commission, have established their own programs to cascade commitments below the director level and require the use of performance agreements for some middle managers or supervisors within their organizations. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report for 30 days from its date. At that time, we will provide copies of this report to interested congressional committees and the Director of OPM. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact me or Lisa Shames on (202) 512-6806 or at mihmj@gao.gov. Anne Kidd and Janice Lichty were key contributors to this report. 
To meet our objective to identify key practices for effective performance management, we summarized our most recent reports on performance management for public sector organizations both in the United States and abroad. We reviewed and synthesized the information contained in the reports to identify key practices for modern, effective, and credible performance management systems. We included the agency examples supporting the key practices primarily from the previous three reports and added examples from other GAO reports where appropriate. The specific objectives, scope, and methodology of each of these reports are included in the reports. We discussed the set of key practices with agency officials at the Office of Personnel Management (OPM) responsible for performance management of the general workforce. We also spoke with the President of the Senior Executives Association and the Director of the Center for Human Resources Management at the National Academy of Public Administration to obtain any observations or general comments on the key practices we identified. Likewise, we provided the key practices, for their general comments, to the Presidents of the National Treasury Employees Union and the American Federation of Government Employees; the Director of the Office of Policy and Evaluation, U.S. Merit Systems Protection Board; and the Vice President for Policy and Research, Partnership for Public Service. We did not seek official comments on the draft report from agency officials because the practices and examples were drawn from previously issued GAO reports. We provided the draft report to the Director of OPM for her information. We also did not update the examples, and as a result, the information in the examples may, or may not, have changed since the issuance of the report. We performed our work in Washington, D.C., from December 2002 through February 2003 in accordance with generally accepted government auditing standards. 
Major Management Challenges and Program Risks: A Governmentwide Perspective. GAO-03-95. Washington, D.C.: January 2003.
High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003.
Transportation Security Administration: Actions and Plans to Build a Results-Oriented Culture. GAO-03-190. Washington, D.C.: January 17, 2003.
Human Capital: Effective Use of Flexibilities Can Assist Agencies in Managing Their Workforces. GAO-03-2. Washington, D.C.: December 6, 2002.
Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002.
Highlights of a GAO Roundtable: The Chief Operating Officer Concept: A Potential Strategy to Address Federal Governance Challenges. GAO-03-192SP. Washington, D.C.: October 4, 2002.
Results-Oriented Cultures: Using Balanced Expectations to Manage Senior Executive Performance. GAO-02-966. Washington, D.C.: September 27, 2002.
Results-Oriented Cultures: Insights for U.S. Agencies from Other Countries' Performance Management Initiatives. GAO-02-862. Washington, D.C.: August 2, 2002.
Managing for Results: Using Strategic Human Capital Management to Drive Transformational Change. GAO-02-940T. Washington, D.C.: July 15, 2002.
Managing for Results: Building on the Momentum for Strategic Human Capital Reform. GAO-02-528T. Washington, D.C.: March 18, 2002.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
Human Capital: Practices That Empowered and Involved Employees. GAO-01-1070. Washington, D.C.: September 14, 2001.
Human Capital: Taking Steps to Meet Current and Emerging Human Capital Challenges. GAO-01-965T. Washington, D.C.: July 17, 2001.
Managing for Results: Federal Managers' Views on Key Management Issues Vary Widely Across Agencies. GAO-01-592. Washington, D.C.: May 25, 2001.
Managing for Results: Emerging Benefits From Selected Agencies' Use of Performance Agreements. GAO-01-115. Washington, D.C.: October 30, 2000.
Human Capital: Using Incentives to Motivate and Reward High Performance. GAO/T-GGD-00-118. Washington, D.C.: May 2, 2000.
The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO's Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as "Today's Reports," on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select "Subscribe to GAO Mailing Lists" under the "Order GAO Products" heading. | The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. 
High-performing organizations have found that to successfully transform themselves, they must often fundamentally change their cultures so that they are more results-oriented, customer-focused, and collaborative in nature. To foster such cultures, these organizations recognize that an effective performance management system can be a strategic tool to drive internal change and achieve desired results. Based on previously issued reports on public sector organizations' approaches to reinforce individual accountability for results, GAO identified key practices that federal agencies can consider as they develop modern, effective, and credible performance management systems. Public sector organizations both in the United States and abroad have implemented a selected, generally consistent set of key practices for effective performance management that collectively create a clear linkage--"line of sight"--between individual performance and organizational success. These key practices include the following. (1) Align individual performance expectations with organizational goals: An explicit alignment helps individuals see the connection between their daily activities and organizational goals; (2) Connect performance expectations to cross-cutting goals: Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results; (3) Provide and routinely use performance information to track organizational priorities:
Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities; (4) Require follow-up actions to address organizational priorities: By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities; (5) Use competencies to provide a fuller assessment of performance: Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results; (6) Link pay to individual and organizational performance: Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards; (7) Make meaningful distinctions in performance: Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers; (8) Involve employees and stakeholders to gain ownership of performance management systems: Early and direct involvement helps increase employees' and stakeholders' understanding and ownership of the system and belief in its fairness; and (9) Maintain continuity during transitions: Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals. |
SBIRS is intended to be a more capable successor to DSP and provide initial warning of a ballistic missile attack on the United States, its deployed forces, or its allies. Once complete, the nominal SBIRS constellation is to consist of two hosted HEO sensors and four GEO satellites. The GEO satellite constellation provides midlatitude coverage and the hosted HEO sensors provide polar coverage for missile warning and defense and other missions. Figure 1 shows the field of view of a single GEO satellite. Large, complex satellite systems like SBIRS can take a long time to develop and construct. As a result, they can contain technologies that have become obsolete by the time they are launched. Although two GEO satellites were launched in recent years—the first in May 2011 and the second in March 2013—they had been designed in the late 1990s and primarily use technology from that period. The third and fourth GEO satellites, which have some updates to address parts obsolescence issues, are in production and expected to be initially available for launch in May 2016 for GEO satellite 4, and September 2017 for GEO satellite 3, which will first be stored. Figure 2 depicts a nominal constellation of SBIRS GEO satellites and HEO sensors once SBIRS GEO satellites 3 and 4 are launched and operational, augmented by DSP satellites. SBIRS GEO satellites 5 and 6 are needed in 2020 and 2021, respectively, to replenish the first two SBIRS GEO satellites and maintain the SBIRS constellation. In February 2013, the Air Force awarded a fixed-price incentive (firm target) contract for nonrecurring engineering activities and procurement of long lead spacecraft parts for GEO satellites 5 and 6. The Air Force procured the production of GEO satellites 5 and 6 in June 2014, 1 month after the Air Force's assessment on inserting newer technologies. 
In accordance with the acquisition strategy and to reduce risk in meeting need dates, GEO satellites 5 and 6 are to be derivatives of GEO satellite 4, with limited design changes to capitalize on the use of previously procured engineering and parts. According to the Air Force, it plans limited technology refresh improvements for GEO satellites 5 and 6, including some on the sensors, to address parts obsolescence and essential technology updates. They will also include updates that were incorporated into GEO satellites 3 and 4—approximately 30 percent of these satellites' parts were updated, according to the Air Force's report. Figure 3 depicts the key components of the SBIRS GEO satellite. DOD's definition of technology refresh is the periodic replacement of both custom-built and commercial-off-the-shelf system components, within a larger DOD weapon system, to ensure continued supportability throughout the weapon system's life cycle. The Air Force assessed the feasibility and cost of incorporating a newer infrared focal plane into SBIRS GEO satellites 5 and 6 and found that inserting a new focal plane would incur significant cost and schedule increases. The assessment came too late to be useful to GEO satellites 5 and 6, but that might not have been the case if the Air Force had invested in technology development and insertion planning earlier in the program to provide more options for consideration. As directed in the Senate report, the Air Force assessed the feasibility and costs of inserting newer infrared focal plane technologies—sensors that can detect heat from missile launches, for example—into GEO satellites 5 and 6. The Air Force considered one digital focal plane, a staring sensor, in lieu of the current analog focal plane. 
It identified two plausible options for insertion, and though technically feasible, neither was deemed affordable or deliverable within the replenishment need dates of 2020 and 2021. According to the Air Force report: The first option would develop and replace the current analog focal plane assembly with a more modern digital focal plane while minimizing changes to the electronic interfaces. This would not increase system performance; however, it would cost about $424 million and incur a schedule delay of 23 to 32 months. The second option would also include replacement of the analog focal plane with a digital focal plane; however, the most significant difference between this option and the first option is the redesign of the signal processor assembly. According to the Air Force, this redesign could maximize the capability of the new digital focal plane by at least 20 percent beyond the current system's requirements by increasing, among other items, target resolution. However, this option—at $859 million—would more than double the cost of the first option, and bring with it a 35- to 44-month schedule delay. The Air Force's assessment occurred after the Air Force had already approved the GEO satellites 5 and 6 acquisition strategy and awarded the advance procurement contract to complete nonrecurring engineering activities and procurement of critical parts with long lead times—on February 26, 2012, and February 19, 2013, respectively. In its assessment, the Air Force reported that to implement changes to the infrared focal planes at this stage, the current advanced procurement GEO satellites 5 and 6 contract would have to be modified, which would require renegotiations. In addition, the Air Force noted that at the time of the assessment, the fixed-price production modification had not yet been executed and changes could also have affected the related negotiations. 
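The relationship between the two options' reported costs is simple arithmetic. The minimal sketch below (variable names are ours, not the report's; figures are taken from the assessment described above) checks the "more than double the cost" claim:

```python
# Illustrative check of the Air Force's reported figures for the two focal
# plane insertion options. Costs are in millions of dollars and delays in
# months, as stated in the assessment; variable names are hypothetical.
option1_cost, option1_delay = 424, (23, 32)  # digital focal plane, minimal interface changes
option2_cost, option2_delay = 859, (35, 44)  # adds signal processor assembly redesign

# "More than double the cost": 859 > 2 * 424 = 848, so the claim holds.
assert option2_cost > 2 * option1_cost

print(f"Option 2 / Option 1 cost ratio: {option2_cost / option1_cost:.2f}")
# → Option 2 / Option 1 cost ratio: 2.03
```

The margin is narrow: option 2 exceeds twice option 1's cost by only $11 million, though its schedule delay is roughly a year longer at both ends of the range.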
Furthermore, any changes to the design of the satellites at this juncture would most likely have incurred additional cost with resulting schedule slips. For example, Air Force officials stated additional nonrecurring engineering would likely be required to design, build, test, and qualify a new focal plane design and to mitigate impacts to other subsystems on the satellite. Because of limited prior investment in research and development and technology insertion planning leading up to the acquisition of GEO satellites 5 and 6, there was only one viable alternative focal plane to be considered. As a result, the Air Force was limited in the number of feasible options for adding new technology to GEO satellites 5 and 6. Effectively planning and managing technology development—including specifying when, how, and why to insert technologies into a deployed system—can help to increase readiness and improve the potential for reduced costs. We have found that leading commercial companies plan for technology insertion prior to the start of a program, which provides managers time to gain additional knowledge about a technology. DOD policy and guidance indicate that planning for technology insertion and refresh is also important throughout a system’s life cycle. Specifically, DOD Instruction 5000.02, January 7, 2015, requires program managers to prepare a Life Cycle Sustainment Plan, and notes that technology advances and plans for follow-on systems may warrant revisions to the plan. In addition, DOD’s Defense Acquisition Guidebook advises the use of trade studies to inform system modifications, such as technology insertion or refresh, and the development and implementation of technology refresh schedules. Very little technology insertion or refresh planning was completed early on in the SBIRS program to address potential obsolescence and find opportunities to insert newer technologies in later stages of the program’s life cycle. 
The SBIRS program was unable to plan for technology upgrades and refresh, according to program officials, because of other issues with the satellites being built. Officials said it was difficult to obtain funding for exploring future technologies at a time when the program was experiencing satellite development problems. As we have reported, the SBIRS program has experienced significant cost growth and schedule delays since its inception, in part because of development challenges, test failures, and technical issues. For example, in 2014 we reported a total cost growth of $14.1 billion over the original program cost estimate, and a delay of roughly 9 years for the first satellite launch. Hence, funding that could have been used for technology development and planning for parts obsolescence or technology insertion to reduce risk was, instead, used to address significant cost and schedule breaches as they arose. Though the SBIRS program started in 1996, efforts to begin studying options for transitioning to the next system did not start until 2007. The program also began to invest in technology development in 2007 with the Third Generation Infrared Surveillance program, which was intended to reduce risk for the development of new sensor technology. The Air Force later incorporated the technology into the Commercially Hosted Infrared Payload (CHIRP), which received funding for an on-orbit demonstration beginning in fiscal year 2011, though it was not used operationally for SBIRS missions. Funding for the Space Modernization Initiative (SMI) started in fiscal year 2013. Figure 4 depicts a timeline of key SBIRS program events and efforts to study options for the next system, including technology development investments. 
Beyond assessing the two options—of replacing the current analog focal plane with a more modern digital focal plane, either with or without changes to the electronic interfaces—the Air Force was not in a position to incorporate changes and still maintain the efficiencies planned by buying GEO satellites 5 and 6 together. The current approach to technology insertion for SBIRS is not consistent with the best practice of establishing a plan prior to the start of a program that identifies specific technologies to be developed and inserted to achieve a desired end state. The efforts that are under way are limited by lack of direction and time constraints in informing an acquisition decision and technology insertion plan for the follow-on to the current SBIRS program. While the Air Force is working to develop a technology road map for the next system, the effort is still hampered by the lack of a clear vision for the path forward, requiring the Air Force to plan for multiple potential systems. Further, it is too soon to tell whether the road map will be sufficiently developed in time to address future technology insertion needs. Technology insertion decisions for SBIRS do not systematically follow an established plan. Instead, efforts are more near-term oriented to solve known problems or to take advantage of isolated technologies. A technology insertion plan ideally envisions desired capabilities for a system and then directs investments to develop those capabilities. In its Systems Engineering Guide, the MITRE Corporation—a not-for-profit research and development company—highlights the importance of technology planning to provide guidance for evolving and maturing technologies to address future mission needs. As mentioned above, we have also found that leading commercial companies conduct strategic planning before technology development and plan for technology insertion before a program begins. 
Such practices enable managers to identify needs and technologies, prioritize resources, and validate that a technology can be integrated. Currently, technology insertion for SBIRS is largely driven by the need to replace obsolescent parts, that is, parts that are no longer available and need to be rebuilt or redesigned and qualified for the space environment. For example, when a contractor was having difficulty delivering an encoder and decoder system—which assists with pointing control of the sensor—on time, the program office sought another source for the system. In place of a technology insertion plan, Air Force officials have cited SMI as a means for demonstrating developed technologies that could be inserted into future systems. One of the areas under the SMI plan, Evolved SBIRS, focuses on reducing cost and technical risk for replenishments of the current SBIRS satellites and future SBIRS systems, including addressing obsolescence. By simplifying designs and studying ways to reduce the risk of obsolescence, the effort aims to significantly reduce costs if the decision is made to procure a seventh and eighth GEO satellite. Beyond replacing obsolescent parts, technology insertion efforts for SBIRS are generally ad hoc and focus on isolated technologies. Although Air Force Space Command's (AFSPC) annual integrated planning process identifies technology concepts that could be a part of a future system, it is the program's responsibility to decide which concepts to pursue further, according to officials. Program managers generally initiate technology development ideas and propose them to AFSPC as they arise, at which point they develop into science and technology projects. Air Force officials noted that ongoing technology development efforts are relatively narrow in scope because of resource constraints. 
For example, another SMI effort, Wide Field of View Testbeds, is focused on demonstrating a prototype wide field of view staring payload that could be inserted either into an evolved program of record or an alternate system, such as a host satellite. Officials said this effort has been limited to testing one focal plane in a relevant space environment, although it would have been beneficial to test others that were available. The Data Exploitation effort, another SMI effort, is focused on ways to further exploit data collected from existing sensors on orbit by advancing on-orbit data collection and analysis and developing algorithms to process data. Given that these efforts aim for varying goals, they are not together intended to plan for a single end system and are not set up to identify the specific technologies required for such a system. Officials acknowledge that the SMI efforts cover different directions to keep options open for the various potential approaches to a future system but anticipate that efforts will become more focused once the SBIRS Follow-on analysis of alternatives (AOA) is completed and a decision is made on the way forward. SMI efforts are also hampered by time constraints that could limit their usefulness in informing technology insertion decisions for the follow-on system. Air Force officials have stated that an acquisition decision for the follow-on to SBIRS—whether a continuation of the program with next- generation satellites or a different system—will need to be made within the fiscal year 2017-2018 time frame. To inform that decision, any new technologies required for the follow-on will need to be developed enough that the Air Force can be certain they will be ready to transition in time. For example, if the follow-on uses a wide field of view sensor, the Air Force will need to complete significant work—including data exploitation, testing, and demonstrations—to ensure that the sensor is capable of performing the necessary function. 
Officials said the relevant Wide Field of View Testbeds effort, expected to be active by fiscal year 2017, could potentially meet the decision time frame if it stays on track, though a delay in the AOA or funding decisions could affect the program's ability to keep the effort on schedule. Given the short history of SMI, which started in fiscal year 2013, the SBIRS program has had limited time to develop and demonstrate new technologies that could be inserted into a follow-on. Going forward, program officials said they are developing a technology road map for each of the different options being considered in the AOA. Because the results of the AOA are still pending, officials must develop plans for multiple potential paths forward, including some that currently involve less mature technology. This road map will be modified based on the option selected from the AOA to identify the technologies available and determine when they may be inserted into the follow-on, officials said. Though specific timelines for the final road map are not yet determined, once finalized, the program plans to use it to guide SMI investment plans and to work with the science and technology community on development efforts. It is too early to determine how successful the road map will be in providing a timely plan for inserting technology into the next system. Delays in previous efforts to analyze alternatives and plan for a follow-on suggest similar delays could occur for the ongoing SBIRS Follow-on AOA. Such delays would make it difficult to develop a thorough road map for technology insertion if the program does not know the system for which to plan. 
In addition, some officials have cited concerns that all segments of the system—particularly the ground system, which provides command and control of the satellites and is already delayed behind the satellites currently on orbit—may not be fully assessed in ongoing analyses and that potential risks could be marginalized or overlooked in a technology insertion plan. Large and complex satellites like SBIRS take a long time to develop and build, which can make the technology aboard outdated compared to what might be available when the satellites are launched and operated. The Air Force has been focused on building the satellites versus developing new capabilities and, in doing so, has missed opportunities to pursue viable technology options. Establishing a plan for when, how, and why technology improvements should be inserted into a system can be essential to providing capabilities when needed and reducing life cycle costs. Without an early technology insertion plan for SBIRS and the associated technology development, the Air Force was limited to assessing few new technologies, which were too late to be incorporated into GEO satellites 5 and 6 without significant cost and schedule increases. Given the time it took to develop, produce, and launch the SBIRS satellites, spanning over 18 years, a forward-looking approach that develops and inserts technologies within planned schedule windows could be more effective in satisfying mission needs and anticipating future requirements. Going forward, the Air Force is at risk of being in the same position for the next system that follows the current SBIRS program. Plans to establish more specific technology insertion strategies for potential alternatives could encourage earlier technology development, though these cannot yet be assessed because they are still in development. 
Without a clear vision of the path forward and a corresponding plan that lays out specific points for addressing potential obsolescence issues, assessing technology readiness, and determining when it is appropriate to insert technology for all segments of the program, the Air Force could be limited in its ability to mitigate technology insertion risk. Further, as the deadline approaches for deciding on a follow-on to SBIRS, the Air Force continues to lose valuable time to develop, demonstrate, and assess new technologies. As a result, it may be forced to continue with the current design for subsequent satellites, potentially requiring more attention to obsolete components and continuing the cycle of limited technology insertion. To improve technology planning and ensure planning efforts are clearly aligned with the SBIRS follow-on, we recommend that the Secretary of the Air Force establish a technology insertion plan as part of the SBIRS follow-on acquisition strategy that identifies obsolescence needs as well as specific potential technologies and insertion points. We provided a draft of this report to DOD for comment. In its written comments, which are reprinted in appendix I, DOD concurred with our recommendation. DOD also provided technical comments which were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or at chaplainc@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II. 
In addition to the contact named above, key contributors to this report were Art Gallegos (Assistant Director), Maricela Cherveny, Brenna Guarneros, Bob Swierczek, Hai Tran, Oziel Trevino, and Alyssa Weir. | SBIRS is a key part of DOD's missile warning and defense systems. To replace the first two satellites currently on orbit, the Air Force plans to build two more with the same design as previous satellites. The basic SBIRS design is years old and some of its technology has become obsolete. To address obsolescence issues in the next satellites, the program must replace old technologies with new ones, a process that may be referred to as technology insertion or refresh. A Senate Armed Services Committee report included a provision for GAO to review an Air Force assessment of the feasibility of inserting newer technologies into the planned replacement satellites, SBIRS GEO satellites 5 and 6, and how it intends to address technology insertion issues for future satellite systems. This report examines (1) the extent to which the Air Force assessed the feasibility of inserting newer technologies into SBIRS GEO satellites 5 and 6 and (2) plans to address obsolescence issues and risk associated with technology insertion for future satellites or systems. GAO identified technology insertion planning guidance and practices, reviewed the Air Force's assessment and plans, and met with DOD and contractor offices. The Air Force assessed options for replacing older technologies with newer ones—called technology insertion—in the Space Based Infrared System (SBIRS) geosynchronous earth orbit (GEO) satellites 5 and 6. However, the assessment was limited in the number of options it could practically consider because of timing and minimal early investment in technology planning. 
The Air Force assessed the feasibility and cost of inserting new digital infrared focal plane technology—used to provide surveillance, tracking, and targeting information for national missile defense and other missions—in place of the current analog focal plane, either with or without changing the related electronics. While technically feasible, neither option was deemed affordable or deliverable when needed. The Air Force estimated that inserting new focal plane technology would result in cost increases and schedule delays ranging from $424 million and 23 months to $859 million and 44 months. The assessment came too late to be useful for SBIRS GEO satellites 5 and 6. It occurred after the Air Force had approved the acquisition strategy and while negotiations were ongoing to procure production of the two satellites. According to the Air Force, implementing changes at that stage would require contract modifications and renegotiations and incur additional cost and schedule growth. Limited prior investment in technology development and planning for insertion also limited the number of feasible options for adding new technology into SBIRS GEO satellites 5 and 6. Department of Defense (DOD) acquisition policy and guidance indicate that such planning is important throughout a system's life cycle, and GAO has reported on leading commercial companies' practice of planning for technology insertion prior to the start of a program. Air Force officials said early technology insertion planning was hampered in part by development challenges, test failures, and technical issues with the satellites, which took priority over research and development efforts. The current approach to technology insertion for the system or satellites after SBIRS GEO satellites 5 and 6 could leave the program with similar challenges in the future. 
GAO's work on best practices has found that leading companies conduct strategic planning before technology development begins to help identify needs and technologies. Similarly, the MITRE Corporation—a not-for-profit research and development organization—has highlighted the importance of technology planning to provide guidance for evolving and maturing technologies to address future mission needs. Technology insertion decisions for the future system or satellites are not guided by such planning. Instead, decisions are largely driven by the need to replace obsolete parts as issues arise. Current efforts—such as individual science and technology projects, including those in the Space Modernization Initiative—are limited by lack of direction, focusing on isolated technologies, and therefore are not set up to identify specific insertion points for a desired future system. In addition, the SBIRS program has had little time to develop and demonstrate new technologies that could be inserted into a SBIRS follow-on system. The Air Force is working to develop a technology road map for the next system, according to officials. Given the lack of a clear vision for the path forward and the road map's early development status, it is too soon to determine whether it will be able to identify specific technology and obsolescence needs and insertion points in time for the next system. To improve technology planning, GAO recommends that the Secretary of the Air Force establish a plan as part of the SBIRS follow-on acquisition strategy that identifies obsolescence needs, specific potential technologies, and insertion points. DOD concurred with the recommendation. |
According to forum participants, nanomanufacturing is an emerging megatrend that will bring diverse societal benefits and new opportunities—potentially creating jobs through disruptive innovation. Further, nanomanufacturing has characteristics of a general purpose technology (GPT)—such as electricity or computers, or historically, innovations such as the smelting of ore and the internal combustion engine. As one participant said: “Everything will become nano.” Figure 1, below, provides examples of nanomanufacturing products that illustrate four diverse areas being affected by nanomanufacturing. Different manufacturing activities occur at different stages of the value chain. According to experts, the United States likely leads in nanotechnology R&D today but faces global-scale competition—which one forum participant described as a “moon race.” Two indicators of how the U.S. compares with other countries are R&D funding levels and scientific publications. With respect to R&D funding, there is some uncertainty about international comparisons because relevant definitions may vary across nations—and some countries may not adequately or effectively track R&D investments or not share such information externally. However, forum participants viewed the United States as currently appearing to lead in terms of overall (that is, combined public and private) funding of nanotechnology R&D. When public funding alone was considered, a participant in the July 2013 forum presented projections showing the United States as likely being surpassed by some other nations. With respect to scientific publications, the United States appears to dominate in numbers of nanotechnology publications in three highly cited journals—which is an apparent indication of U.S. competitiveness in quality research. However, China overtook the United States in 2010 through 2012 (the most recent year reported) in terms of the quantity of nano-science articles published annually. 
A semiconductor is the generic term for the various devices and integrated circuits that regulate and provide a path for electrical signals. As such, semiconductors are the foundation of the electronics industry. Experts said that the United States leads in the design of semiconductors. However, they also said that U.S. manufacturing in this area has declined (although some plants are located here) and that the United States does not have a strategy to assure U.S. leadership in the semiconductor industry. Nano-based concrete: Concrete is the most heavily used construction material in the world—with about 5 billion cubic yards produced annually worldwide—and demand for it is expected to increase to meet the infrastructure needs of a growing global population. Nanomaterials can enhance the performance of the concrete used to construct this infrastructure. These materials might potentially result in roads, bridges, buildings, and structures that are more easily built, longer-lasting, and better-functioning than those that currently exist. Experts offered differing views on U.S. global competitiveness in the commercialization and use of nanomaterials in concrete. A key forum participant said that while cement for domestic use is produced in the United States, today’s dominant companies—which are spearheading development of new technologies—are headquartered elsewhere (although this industry was previously dominated by the United States). Additionally, some experts said that other countries are spending more resources than the United States to promote commercialization; for example, one expert said that China established a national technology center to improve its competitiveness and domestic production of high-value, nano-based construction products. On the positive side, chemical admixtures are one means to introduce nanomaterials into concrete—and the United States has a 15% market share of chemical sales worldwide. According to forum participants and experts interviewed, challenges to U.S.
competitiveness in nanomanufacturing include U.S. funding gaps, significant global competition, and lack of a U.S. vision for nanomanufacturing, among others. Participants said that in the United States, government often funds research or the initial stages of development, whereas industry typically invests in the final stages. As a result, U.S. innovators may find it difficult to obtain either public funding or private investment during the middle stages of innovation. For nano-innovators, this support gap can characterize the middle stages of both (1) efforts to develop a new technology or product, and/or (2) efforts to develop a new manufacturing process. Thus, U.S. innovators may encounter two support gaps, which participants termed: the Valley of Death (the lack of funding or investment for the middle stages of developing a technology or product), and the Missing Middle (a similar lack of adequate support for the middle stages of developing a process or an approach to manufacture the new product at scale). The Valley of Death begins after a new technology or product has been validated in a laboratory environment and continues through testing and demonstration as a prototype in a non-laboratory environment (but before industry acquires it as a commercial technology or product). The Missing Middle occurs during analogous stages of the manufacturing-innovation process, as illustrated below (fig.2). Participants further said that substantial amounts of funding/investment are needed to bridge the Valley of Death and the Missing Middle—and that high costs can be a barrier to commercialization, especially for small and medium-sized U.S. enterprises. Additionally, some said that recently, venture capital (VC) funding has been diverted from physical science areas like nanotechnology to fund new ventures in Internet services that may provide larger and faster returns on investment. 
Varied forum participants and experts interviewed made statements to the effect that other nations do more than the United States in terms of government investment in technology beyond the research stage. According to participants, the funding and investment gaps that hamper U.S. nano-innovation (such as the Missing Middle) do not apply to the same extent in some other countries—for example, China and Russia—or are being addressed. Multiple participants referred to the European Commission’s upcoming Horizon 2020 program, specifically mentioning a key program within Horizon 2020: the European Institute of Innovation and Technology, or EIT, which emphasizes the nexus of business, research, and higher education. The 2014-2020 budget for the EIT portion of this European Commission initiative is €2.7 billion (or close to $3.7 billion in U.S. dollars as of January 2014). Multiple forum participants said that the United States lacks a vision or strategy for a nanomanufacturing capability. However, one explained that such a strategy could be designed by (1) proceeding from a vision or goal to the examination of the social, technological, economic, environmental, and political elements of the relevant systems and their interactions with one another; (2) understanding the basic science, engineering, and manufacturing involved; and (3) consulting the full range of stakeholders. This participant said that although systems thinking and the design of a grand strategy, based on a vision, are often employed following a crisis that motivates a nation, such an effort could be usefully pursued in advance of a crisis, using foresight. Such an effort would reflect the statements of another participant who said, in effect, that the future of nanomanufacturing for the United States is limited only by our ability to envision what we want to see realized. This approach would likely draw upon the U.S.
federal government to develop and articulate the strategy—in coordination with industry, academia, nonprofits, and state and local governments. Additionally, some federal effort is implied for implementation, but the level of funding and the mix of funding sources (not specifically discussed at the forum) would likely be specified as part of developing a vision and strategy for nanomanufacturing. Forum participants described further challenges to U.S. competitiveness in nanomanufacturing, including the earlier loss of an industry, as discussed above for lithium-ion batteries—or even extensive prior offshoring in some industries, which can be important, in part because, as one participant said, “when we design here, ship abroad, we lose this shop-floor-innovation kind of mentality”; and threats to U.S. intellectual property on the part of some other countries or entities within those countries, which occur with respect to both university research and private R&D on, for example, manufacturing processes. Forum participants suggested the need to address policy issues in U.S. research funding, challenges to U.S. competitiveness in nanomanufacturing, and other areas, including environmental, health, and safety (EHS) issues. U.S. research funding. Forum participants said it is essential for the United States to maintain a high level of investment in fundamental nanotechnology research. This is because (1) some other countries are now making significant investments in R&D and (2) ongoing research breakthroughs will drive the future of nanomanufacturing. One participant emphasized that as nanotechnology increasingly moves into manufacturing, it may be important to consider not only continuing funding for fundamental nanotechnology research, but also targeting some funding to early-stage research on nanomanufacturing processes. Challenges to U.S. competitiveness in nanomanufacturing. Forum participants said the United States could improve U.S.
competitiveness in nanomanufacturing by pursuing one or more of three approaches, which might be viewed either as alternatives or as complementary approaches. These three approaches are described in table 1, below. Two examples of U.S. public-private partnerships that are designed to promote innovation in nanomanufacturing are housed in universities. A related example with similar goals is a user facility that is located within a federal laboratory. The Center for Nanomanufacturing Systems for Mobile Computing and Mobile Energy Technologies (NASCENT) was founded at the University of Texas at Austin in 2012, with funding from NSF. Two key objectives are: to create processes and tools for manufacturing nano-enabled components for mobile computing, energy, healthcare, and security—as well as simulations for testing potential nanomanufacturing approaches; and to provide an ecosystem with computational and manufacturing facilities—for example, large-area wafer-scale and roll-to-roll nanomanufacturing—as well as the university’s resources, including faculty, staff, and students. The Center’s overall goal is to facilitate the rapid creation and deployment of new products and to mitigate the risks associated with the Valley of Death and the Missing Middle. A co-director of NASCENT told us that another goal is to use “10 years of NSF funding to develop the center infrastructure so it will . . . self-supported from industrial partnerships and other funding sources.” Center partners include industrial partners—such as toolmakers, materials suppliers, and device makers—that will provide both technical and financial support; companies ranging from start-ups to well-established firms that will implement or adopt technology created by the center; and “translational research partners” such as technology incubators and technology funds.
The College of Nanoscale Science and Engineering (CNSE), established in 2004, is part of the State University of New York and is located in Albany—within the existing regional (Hudson Valley) ecosystem centered on the semiconductor industry. CNSE is designed as a unique research, development, prototyping, and educational public-private partnership for advancing nanotechnology. A chief CNSE partner is SEMATECH—a global consortium of major computer chip manufacturers that coordinates cutting-edge R&D projects on semiconductors and is headquartered at CNSE. CNSE has more than 300 members and strategic partners that include large U.S.- and non-U.S.-headquartered private companies such as IBM, Intel, Samsung, and Global Foundries; small and medium-sized companies; universities from across the United States; and regional community colleges and economic development organizations, as well as government-agency sponsors. CNSE facilities allow the development of semiconductors just short of mass production—which is relevant for companies attempting to transition from an innovative concept to a prototype and to prepare for large-scale production. CNSE has developed models of pre-competitive collaboration among its partners, which use high-tech CNSE equipment that would be too costly for many individual companies to purchase. The Center for Nanoscale Science and Technology (CNST) is hosted by a federal laboratory at the National Institute of Standards and Technology (NIST). CNST is a user facility with baseline sponsorship through the Department of Commerce, which is augmented by external commercial funds in the form of user fees paid by industry, academia, government labs, and states. CNST supports the U.S. nanotechnology enterprise from discovery to production by providing industry, academia, NIST, and other government agencies access to world-class nanoscale measurement and fabrication methods and technology. 
The CNST’s shared-use nanotechnology-fabrication capability (called NanoFab) gives researchers economical access to and training on a commercial state-of-the-art tool set required for cutting-edge nanotechnology development. The simple application process is designed to get projects started in a few weeks. Looking beyond the current commercial state of the art, the CNST’s nanotechnology-metrology capability offers opportunities for researchers to collaborate on creating and using the next generation of nanoscale measurement instruments and methods. Based on the views of a wide range of experts, nanoscale control and fabrication are creating important new opportunities for our nation—as well as the need not only to recognize challenges, but also, where challenges exist, to act in response to them. The United States leads in some areas of nanomanufacturing, but faces increasing international competition. Challenges specific to U.S. competitiveness include, among others: the U.S. funding gap known as the Missing Middle, possible weaknesses associated with prior extensive offshoring in some U.S. industries, and the lack of a national vision and strategy for the United States to lead or sustain a high level of competitiveness in global nanomanufacturing markets in the years ahead. Experts outlined three main approaches for responding to these challenges: (1) reviewing and renewing policies that undergird U.S. innovation; (2) supporting public-private partnerships that address U.S. funding gaps—especially as these apply to nanomanufacturing; and (3) defining a vision and strategy for achieving and sustaining a high level of U.S. competitiveness in nanomanufacturing. The potential benefit that experts see in pursuing forward-looking approaches such as these is to help chart a favorable course for the global economic position of the United States as we move further into the twenty-first century.
Chairman Bucshon, Ranking Member Lipinski, and Members of the Committee, this concludes my statement. I would be happy to answer any questions you may have. If you or your staff have any questions about this testimony, please contact me at (202) 512-5648 or personst@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. GAO staff members who made key contributions to this testimony include Judith Droitcour, Assistant Director, and Eric M. Larson, Analyst-in-Charge. Gene L. Dodaro (Host), Comptroller General of the United States; George Allen, Former U.S. Senator and former Governor of Virginia; Tina Bahadori, Environmental Protection Agency; Sarbajit Banerjee, University at Buffalo, State University of New York; Lynn L. Bergeson, Bergeson & Campbell PC; Bjorn Birgisson, KTH Royal Institute of Technology; Bill Canis, Congressional Research Service; Vicki L. Colvin, Rice University; Joseph DeSimone, University of North Carolina; Bart Gordon, Former Chairman, Committee on Science and Technology, U.S. House of Representatives, and Partner at K&L Gates LLP; John Ho, QD Vision, Inc. Appendix II: Examples of General Purpose Technologies. “Nanotechnology has yet to make its presence felt as a general purpose technology, but its potential is so obvious and developing so quickly that we are willing to accept that it is on its way to being one of the most pervasive general purpose technologies of the 21st century” (Lipsey et al. 2005, 132). Bradley, Jurron. 2010. “The Recession’s Impact on Nanotechnology.” The Lux Research Analyst Blog, February 4. (Based on the Lux Research report, The Recession’s Ripple Effect on Nanotech: State of the Market Report. Boston, Massachusetts: Lux Research, Inc., June 9, 2009.) Christensen, Clayton M., and Michael E. Raynor. 2003. The Innovator’s Solution: Creating and Sustaining Successful Growth. Boston, Massachusetts: Harvard Business School Publishing Corporation.
Council on Competitiveness. 2007. Competitiveness Index: Where America Stands. Washington, D.C.: Council on Competitiveness, January. Executive Office of the President. 2012. Report to the President on Capturing Domestic Competitive Advantage in Advanced Manufacturing. Washington, D.C.: Executive Office of the President, July. GAO (U.S. Government Accountability Office). 2012. Nanotechnology: Improved Performance Information Needed for Environmental, Health, and Safety Research. GAO-12-427. Washington, D.C.: GAO, May 21. GAO (U.S. Government Accountability Office). 2014. Nanomanufacturing: Emergence and Implications for U.S. Competitiveness, the Environment, and Human Health. GAO-14-181SP. Washington, DC: January. Holman, Michael. 2007. “Nanotechnology’s Impact on Consumer Products.” (Slide presentation at a meeting organized by the Directorate- General for Health and Consumers, European Commission, October 25, 2007.) New York, New York: Lux Research, Inc. Lipsey, Richard G., Kenneth Carlaw, and Clifford Bekar. 2005. Economic Transformations: General Purpose Technologies and Long-term Economic Growth. New York: Oxford University Press, Inc. Morse, Jeffrey D. (ed.). 2011. Nanofabrication Processes for Roll-to-Roll Processing: Report from the NIST-NNN Workshop. Workshop on Nanofabrication Technologies for Roll-to-Roll Processing. Seaport Convention Center, Seaport Boston Hotel, Boston, Massachusetts, September 27-28. Persons, Timothy M. 2013. “Comptroller General Forum on Nanomanufacturing: Overview.” Slide presentation at the Comptroller General Forum on Nanomanufacturing. Washington, D.C.: U.S. Government Accountability Office, July 23–24. Roco, Mihail C. 2013. “Global Investment Profile in Nanotechnology— Comparing U.S. to Selected Economies.” Slide presentation at the Comptroller General Forum on Nanomanufacturing. U.S. Government Accountability Office, Washington, D.C., July 23. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Nanotechnology has been defined as the control or restructuring of matter at the atomic and molecular levels in the size range of about 1–100 nanometers (nm); 100 nm is about 1/1000th the width of a hair. The U.S. National Nanotechnology Initiative (NNI), begun in 2001 and focusing primarily on R&D, represents a cumulative investment of almost $20 billion, including the request for fiscal year 2014. As research continues and other nations increasingly invest in R&D, nanotechnology is moving from the laboratory to commercial markets, mass manufacturing, and the global marketplace. Today, burgeoning markets and nanomanufacturing activities are increasingly competitive in a global context—and the potential EHS effects of nanomanufacturing remain largely unknown. GAO was asked to testify on challenges to U.S. competitiveness in nanomanufacturing and related issues. Our statement is based on GAO's earlier report on the Forum on Nanomanufacturing, which was convened by the Comptroller General of the United States in July 2013 (GAO 2014; also referred to as GAO-14-181SP). That report reflects forum discussions as well as four expert-based profiles of nano-industry areas, which GAO prepared prior to the forum and which are appended to the earlier report.
Forum participants described nanomanufacturing as an emerging set of developments that will become a global megatrend: a technological revolution that is now in its formative phases but that many knowledgeable persons—in science, business, and government—expect to burgeon in the years ahead, bringing new opportunities, “disruptive innovation,” job creation, and diverse societal benefits. They said that the United States likely leads in sponsorship and overall quality of nanotechnology R&D today as well as some areas of nanomanufacturing—for example, nanotherapeutic drug development and the design of semiconductor devices. But they cautioned that the United States faces global-scale competition and is struggling to compete in some industry areas (notably, advanced batteries). Challenges facing U.S. nanomanufacturing include (1) a key U.S. funding gap in the middle stages of the manufacturing-innovation process, as illustrated below; (2) lack of commercial or environmental, health, and safety (EHS) standards; (3) lack of a U.S. vision for nanomanufacturing; (4) extensive prior offshoring in some industries, which may have had unintended consequences; and (5) threats to U.S. intellectual property. Key actions identified by our experts to enhance U.S. nanomanufacturing competitiveness include one or more of the following: (1) strengthen U.S. innovation by updating current innovation-related policies and programs, (2) promote U.S. innovation in manufacturing through public-private partnerships, and (3) design a strategy for attaining a holistic vision for U.S. nanomanufacturing. Key policy issues identified by our experts include the development of international commercial nanomanufacturing standards, the need to maintain support for basic research and development in nanotechnology, and the development of a revitalized, integrative, and collaborative approach to EHS issues.
The Homeland Security Act of 2002 established the Department of Homeland Security (DHS) and gave it responsibility for visa policy. Section 428 of the act also authorized DHS to immediately assign personnel to Saudi Arabia to review all visa applications prior to final adjudication, as well as the future assignment of officers to other locations overseas to review visa applications. In August 2003, DHS created the Office of International Enforcement within the Border and Transportation Security Directorate, to implement these requirements. In the same month, four temporary DHS officers were deployed to Saudi Arabia to begin reviewing all visa applications. In September 2003, DHS and State signed a Memorandum of Understanding to govern the implementation of section 428. This agreement broadly defines the DHS officers’ responsibilities in reviewing visa applications, indicating, in particular, that they will provide expert advice to consular officers regarding specific security threats relating to visa adjudication, specifically by gathering and reviewing intelligence relevant to visa adjudication and providing training to consular officers on terrorist threats and detecting applicant fraud; review applications on their own initiative or at the request of consular officers, and provide input on or recommend security advisory opinion requests; and conduct investigations on consular matters under the jurisdiction of the Secretary of Homeland Security. Several other agencies stationed overseas have roles in the visa adjudication process. For example, the State Department Diplomatic Security Bureau’s regional security officers assist the consular section by investigating passport and visa fraud detected through the consular officers’ reviews of visa applications and supporting documents. 
In addition, officials from the Federal Bureau of Investigation overseas can assist consular officers when questions about an applicant’s potential criminal history arise during adjudication. DHS’s Bureaus of Citizenship and Immigration Services and Customs and Border Protection have responsibility for some immigration and border security programs overseas. For example, consular officers may seek advice from these officials on issues such as DHS procedures at U.S. ports of entry. In October 2003, DHS designated its Bureau of Immigration and Customs Enforcement (ICE) to handle the operational and policy-making responsibilities outlined in section 428 (e) and (i). Subsequently, ICE created an office to oversee the Visa Security Program. Since the assignment of VSOs to Saudi Arabia in 2003 until May 2005, DHS has spent about $4 million for Visa Security Program operations at headquarters and overseas, of which approximately $2 million was spent on operations in Saudi Arabia. Figure 1 provides a timeline for the establishment and implementation of the visa security program. In August 2004, the DHS Office of Inspector General reported on the planning and implementation of the VSOs’ activities in Saudi Arabia. The report was based on observations beginning in July 2003, at which time DHS was in the early stages of designing the Visa Security Program. (DHS officers did not arrive in Saudi Arabia until August 31, 2003.) According to the Inspector General, DHS operations at the time of the review were not as efficient or effective as they could be due to the use of temporary officers in Saudi Arabia, a lack of specialized training and foreign language proficiency, and the lack of a clear plan for the VSOs. The Inspector General recommended that DHS hire permanent officers, develop a visa security training program, and establish criteria for selecting VSOs. 
According to the Inspector General’s office, DHS has taken steps to implement these recommendations, but as of July 8, 2005, six remain open. According to embassy officials in Saudi Arabia and DHS officials, the VSOs enhance homeland security through their review of visa applications at posts in Saudi Arabia. However, several factors have hindered the program, including a lack of comprehensive data on the VSOs’ activities and results in Riyadh and Jeddah to demonstrate the program’s overall impact at these posts. VSOs in Saudi Arabia provide an additional law enforcement capability to the visa adjudication process. VSOs have access to and experience using important law enforcement information not readily available to consular officers. Moreover, VSOs’ border security and immigration experience can assist consular officers during the visa process. According to State Department consular officers, the deputy chief of mission, and DHS officials, VSOs in Saudi Arabia enhance the security of the visa adjudication process at these consular posts. In particular, the consular sections in Riyadh and Jeddah have incorporated the VSOs’ review of all visa applications into the adjudication process (see fig. 2). After consular officers interview an applicant and review the relevant supporting documentation, they make a preliminary determination about whether to issue or refuse the visa or refer the case to Washington for additional security clearances. Consular officers may consult with VSOs during this initial determination. According to the VSOs, within 24 hours of this initial determination by consular officers, they review the application and inspect the applicant’s documentation for evidence of fraud or misrepresentation, indicators of potential national security risks, criminal activity, and potential illegal immigration risks. 
VSOs may also query the applicant’s information against a number of law enforcement, immigration, and other databases, which may contain more detail than the consular officers’ name check results. Based on these reviews, the VSOs will either affirm or oppose the consular officer’s original decision, and the consular officer then decides to issue or deny the visa. If the consular section chief and the VSOs disagree on a case, it is sent to DHS, where the Secretary of Homeland Security, in consultation with State officials, will render a final determination. According to a consular official in Saudi Arabia at the time of our visit in February 2005, no case has ever been sent back to Washington for such a decision. In addition to reviewing applications, the VSOs may conduct secondary interviews with some visa applicants based either on findings from their application reviews or a consular officer’s request. For example, DHS officials in Riyadh reported that the VSOs, in cooperation with intelligence officials at post, interviewed an applicant who had ties to an organization of national security concern to the U.S. government. This individual was denied a visa after the interview based upon the VSO’s determination of the potential threat the individual posed to the United States. We also observed the VSOs conduct a secondary interview with an applicant they had identified as a concern through their physical review of the visa application. VSOs have access to and experience using immigration and law enforcement databases not readily available to consular officers, who are not classified as criminal justice, or law enforcement, personnel. Consular officers rely on information contained in the Consular Lookout and Support System (CLASS) to adjudicate a visa. As law enforcement agents, the VSOs can access detailed criminal history records and immigration information not included in CLASS. 
For example, the VSOs have access to criminal history records contained in the National Crime Information Center’s Interstate Identification Index, which cannot be directly accessed by consular officers. The VSOs also use databases containing information on employers and businesses, hotel reservation information, and sponsors of applicants seeking temporary work visas. They can use these databases to verify, for instance, an applicant’s claim to be working for a particular business. Consular officials at headquarters and in the field believe this data would be useful to them in the adjudication process, particularly at the other posts worldwide that do not have VSOs. Indeed, consular officials in Washington indicated that they are working with DHS to gain access to these databases. In Riyadh, we observed a VSO assist a consular officer in reviewing a potential “hit” in CLASS. The applicant claimed that, during a trip to the United States, border inspectors refused him entry to the country even though he had a valid visa. At the consular officer’s request, we observed the VSO search a database and inform the consular officer that the applicant at the window had been placed on the “No-Fly” list since the issuance of the initial visa—information that was not specified in CLASS—and was therefore ineligible for another visa. In addition, the VSOs in Riyadh conduct searches on applicants’ names prior to their interviews with consular officers and provide more detailed information on potential matches obtained from these searches of law enforcement databases. Consular officers indicated that this practice helps them tailor their questioning of applicants. Furthermore, the VSOs in Saudi Arabia interact with consular officers on a real-time basis. We observed consular officers ask the VSOs for assistance during interviews, for example, to clarify questions pertaining to potential criminal hits in CLASS.
By contrast, in other embassies, consular officers must request additional information from other DHS overseas offices or from Washington. According to DHS, the VSOs' law enforcement experience, training, and knowledge of immigration law enable them to more effectively identify applicants who are potential threats to U.S. national security, as well as identify potentially fraudulent documents submitted by applicants. Since the Inspector General's report in 2004, DHS has developed criteria for selecting VSOs, which include certain levels of law enforcement and counterterrorism experience, as well as knowledge of immigration law and experience working overseas. In addition, VSOs have experience and training in detecting fraudulent documents. The Memorandum of Understanding between State and DHS states that VSOs at consular posts will provide antifraud training to consular officers, among other things. This training is particularly useful given that State does not have full-time fraud prevention officers at all of its consular posts overseas, with antifraud duties often performed by junior officers on a part-time basis. Indeed, at all but one of the posts that have or plan to have VSOs, consular officers served as part-time fraud prevention officers in addition to their other duties in the consular section. Therefore, the VSOs' experience in this area can be valuable to consular sections. The deputy chief of mission, consular officers, and VSOs in Saudi Arabia indicated that the VSOs have positively impacted visa operations; however, several issues raise concerns about the role and impact of these officers.
These include (1) the use of temporary duty employees, which can limit the impact of the VSOs in Saudi Arabia; (2) the lack of officers proficient in Arabic; (3) the requirement that the officers review all visa applications, which limits their time to perform other valuable tasks; and (4) the lack of measurable data on the VSOs' activities, which would demonstrate their impact on the visa process. From August 31, 2003, through June 2005, DHS assigned temporary duty VSOs to Saudi Arabia for tours that ranged from about 2 to 15 months, for an average assignment of about 7 months. According to the deputy chief of mission in Saudi Arabia, the use of temporary VSOs led to a lack of continuity in visa security operations, and, as a result, the VSOs initially were not able to significantly impact the visa process at post. The constant turnover of officers can hinder the development of institutional knowledge and overall visa security efforts. However, the deputy chief of mission indicated that each subsequent temporary officer improved operations in Saudi Arabia and enhanced security of the visa adjudication process. DHS acknowledged that the reliance on temporary detailed staff is not ideal for the continuity of operations and the ongoing development of the Visa Security Program. DHS officials believe they have addressed the situation: DHS hired and trained four permanent employees, who were deployed to Saudi Arabia in June 2005 for a 12-month tour. Most of the VSOs stationed in Saudi Arabia since 2003 have not been proficient Arabic speakers; according to DHS, two of the four new permanent staff assigned to Saudi Arabia speak Arabic. Additionally, consuls general at three of the locations chosen for expansion told us language proficiency would be beneficial at their posts, particularly for interviewing applicants and reviewing applications and documents.
The ability to speak the host country language is a qualification for VSOs, as agreed to in the Memorandum of Understanding with State. DHS acknowledged the utility of language capability but noted that law enforcement skills and expertise outweigh the limitations imposed by a lack of language proficiency. According to DHS, if language training is deemed necessary, such courses would be offered in addition to the standard VSO training, which includes courses on interviewing, detection of deception, and national security law, as well as regional and country briefings. The Memorandum of Understanding between State and DHS states that VSOs would provide training to consular officers on detecting fraudulent documents and applicants who pose a threat to homeland security; however, the requirement that VSOs review all visa applications in Saudi Arabia limits the amount of time that they can spend on training and other valuable services. We observed that VSOs in Riyadh and Jeddah must spend a significant amount of time reviewing all visa applications, including those of low-risk applicants or individuals who do not pose a threat to national security, as well as those that have preliminarily been refused by consular officers. For example, according to DHS officials, lower priority applications may include those from elderly applicants and very young children. Furthermore, the requirement has resulted in extremely long work hours for the VSOs. For example, to return applications to consular officers within 24 hours of the initial decision, the three VSOs in Riyadh and one VSO in Jeddah were each working 7 days per week at the time of our visit. Moreover, the VSOs spend considerable time—as much as 2 hours each day, according to one officer in Jeddah—reviewing applications that are preliminarily refused by consular officers or from low-risk applicants.
A Visa Security Program official noted that this mandate applies only to visa security operations in Saudi Arabia and not to other posts to which DHS plans to assign VSOs. At posts outside of Saudi Arabia, DHS proposed the use of site-specific criteria to focus the review of applications based on several factors, including the number of applications at the post and post-specific threat assessments. VSOs, DHS and State officials, and the deputy chief of mission all agreed that the mandate to review all applications was forcing the VSOs to spend time on lower priority tasks, limiting their ability to perform other activities, such as providing training or conducting additional secondary interviews of applicants. Consular officers also agreed that they would benefit from additional training and other interaction with the VSOs. According to DHS, if its VSOs were granted more flexibility to determine the extent of their review and were not required to review all applications, they could prioritize visa application reviews—a process which they plan to implement at other posts. DHS acknowledged that assigning additional officers to the posts in Saudi Arabia could allow VSOs time to perform other tasks, but DHS would still need to prioritize these resources to address training and other activities in Saudi Arabia. However, security concerns at the U.S. embassy and consulate have limited the number of personnel DHS, as well as other U.S. agencies, can assign to these posts. DHS has not maintained measurable data to fully demonstrate the impact of VSOs on the visa process. The VSOs stationed in Riyadh during our visit estimated that, between October 2004 and February 2005, they had recommended refusal in about 15 cases after consular officers had made a preliminary decision to issue a visa.
In addition, DHS officials in Saudi Arabia and in Washington, D.C., were able to provide anecdotal examples of assistance provided to the consular officers. However, DHS has not developed a system to fully track the results of visa security activities in Saudi Arabia. For example, DHS could not provide data to demonstrate the number of cases for which VSOs had recommended refusal. DHS plans to expand the Visa Security Program to five additional posts in fiscal year 2005; however, the assignments of VSOs were delayed at four of the five selected expansion posts. DHS attributed the delay to resistance by State, as well as funding problems. State and chiefs of mission attributed the delays to various questions about the program, including the criteria used by DHS to select expansion posts and the reasoning for the number of VSOs requested for the posts. A strategic plan to guide operations and expansion of the Visa Security Program could have answered some of these questions and potentially prevented some delays in expanding the program to additional posts, but DHS has not prepared such a plan. The Homeland Security Act of 2002 authorized the assignment of DHS officers to each diplomatic post where visas are issued to provide expert advice and training to consular officers and review visa applications. In 2003, a DHS working group established criteria for ranking potential posts for the program's expansion. The site selection criteria considered the following primary factors: risk of terrorism in a country, based on State's threat assessments and intelligence of terrorist activity; visa denial rates; and issuance of visas to multiple nationalities at a post. In addition, a Visa Security Program official indicated that DHS also considered intelligence reports and host nation circumstances, including government cooperation, corruption, immigration controls, and identification document controls, when selecting potential expansion posts.
DHS conducted site assessments, in coordination with State, at six consular posts in October and November 2003 and April 2004 to further evaluate the potential for establishing the Visa Security Program at these posts. According to DHS, delays in expanding the program were due, in part, to the fact that funding was not reprogrammed for visa security operations until December 2004. DHS selected five posts to expand the Visa Security Program and in June 2004 submitted requests for the assignment of 21 VSO positions to those five posts. One post approved the NSDD-38 request in July 2004. Another post approved the assignment of VSOs in March 2005, and two posts approved the requests in June 2005. As of June 2005, one post had still not approved the NSDD-38 request. Four posts have approved the assignment of VSOs at their respective posts, but DHS had not yet assigned VSOs to any of the expansion posts. Embassy officials raised questions and concerns regarding the plans to expand the Visa Security Program, which contributed to the delays in the approval of the NSDD-38 requests. State's Office of Rightsizing the U.S. Overseas Presence supported the posts' questioning of DHS's plans for expansion of the Visa Security Program. Embassy officials at the expansion posts expressed concerns with the site selection process and the criteria DHS used to select the posts. Based on DHS's quantitative evaluation criteria, visa-issuing posts were ranked to identify priority posts for the deployment of VSOs. However, of the 5 posts selected for expansion of the Visa Security Program, 2 ranked outside of the top 10 posts identified by DHS's evaluation. Moreover, embassy officials at one of these expansion posts that did not rank in the initial top 10 believe that DHS's selection criteria do not justify the assignment of VSOs to their post.
In particular, the consular chief stated that the post had a relatively low application volume and a low refusal rate—two criteria that DHS used to select the fiscal year 2005 expansion posts. DHS stated that this particular post was chosen based on other qualitative data, consultation with law enforcement and intelligence officials, and practical considerations for expansion of the program. These additional factors were not included in the methodology DHS developed to identify priority posts for expansion of the Visa Security Program. Embassy officials at 2 posts chosen for expansion were unaware of the criteria used to select the expansion posts; however, DHS stated that it had explained its criteria. Embassy officials also questioned the reasoning behind the number of VSOs that DHS requested for assignment to the selected expansion posts. In June 2004, DHS originally requested the assignment of 21 VSO positions to 5 posts. According to DHS, the number of VSOs requested for each post was based on an assessment of several factors, including the workload at post. However, chiefs of mission and consular officials also told us that they were unclear about the number of VSOs required for visa security operations and requested for assignment. DHS officials stated that they had explained their rationale fully. As of June 2005, four posts had approved the assignment of 13 VSO positions. Table 1 shows the number of VSO positions requested compared to the number of VSO positions approved by chiefs of mission. DHS received approval for 8 fewer VSO positions than it requested and received the full complement of staff requested at one expansion post. This gap between requested and approved positions indicates that DHS either overestimated the staff it needed to conduct activities at each post or will not have enough staff at each post to effectively impact the visa adjudication process at these locations.
DHS negotiated the final number of positions with chiefs of mission at several posts to help expedite the NSDD-38 requests. For example, DHS and embassy officials at one post agreed to reduce the number of positions requested from 5 to 3; subsequently, the NSDD-38 request was approved in March 2005. The deputy chief of mission and consul general at another embassy noted that DHS's request for four VSOs appeared excessive, considering the low volume of visas processed at that post, which conducts about 30 to 40 applicant interviews daily, and the fact that only four consular officers are stationed at the post. Therefore, the embassy approved two VSOs in June 2005. The post that had not approved DHS's request as of June 2005 proposed that DHS assign one VSO, rather than four, for a 6-month assignment. According to the chief of mission, during this time, the VSO could demonstrate how the program would benefit the post, as well as the need for the additional positions DHS requested. DHS officials, however, believe that one officer would not be sufficient to meet the threat to visa security at the post. As we have previously reported, questions related to (1) security of facilities and employees, (2) mission priorities and requirements, and (3) cost of operations should be addressed when determining the appropriate number of staff that should be assigned to a U.S. embassy. In August 2004, State's Office of Rightsizing the U.S. Overseas Presence, which manages the NSDD-38 process for the U.S. government, issued interim guidance to chiefs of mission regarding factors to consider when approving DHS's requests for VSOs. A Rightsizing Office official stated that this guidance is consistent with guidance that is applicable to all agencies that submit NSDD-38 requests.
Specifically, the cable advised the five chiefs of mission at posts selected for VSO expansion to delay approving the DHS positions until State or the post had received sufficient responses to several outstanding issues, including criteria for selecting the expansion posts; agreement on administrative support services, such as building maintenance, utilities, supplies, and equipment, among others; the extent to which the VSOs will have regional responsibilities at other posts; the roles and responsibilities of the VSOs in relation to State's consular fraud investigators and regional security officers at post, as well as any other agencies at post; and the criteria that will be used to measure the effectiveness of the visa security operations. In 2004 and 2005, DHS provided responses, through State's Bureau of Consular Affairs, to the questions raised by the chiefs of mission at four of the expansion posts. According to DHS, the responses were sufficient to answer the concerns raised by the chiefs of mission. We reviewed the responses to the posts and identified a number of issues that had not been fully addressed. For example, the documentation did not specify the criteria that DHS will use to demonstrate the effectiveness of its officers. Nevertheless, the chiefs of mission at three posts approved NSDD-38 requests in March and June 2005. In 2003, DHS and State agreed in a Memorandum of Understanding that DHS would identify those diplomatic and consular posts where DHS considered the presence of its personnel necessary to perform visa security functions and would subsequently assign VSOs to those posts. DHS plans to expand the Visa Security Program to five additional consular posts throughout fiscal year 2005.
Furthermore, DHS plans to expand the Visa Security Program beyond the posts initially selected for expansion; it conducted a site assessment in May 2005 for a sixth expansion location and plans to continue deploying VSOs to attain worldwide coverage of the program. According to DHS, the Secretary of Homeland Security has suggested a pace of five new posts per year. Although DHS plans to expand the Visa Security Program in fiscal year 2005 and beyond, it does not have a strategic plan that defines mission priorities and long-term goals and identifies the outcomes expected at each post to guide operations of the program. We have identified the development of a strategic plan as an essential component of measuring progress and holding agencies accountable for achieving results. The development of an overall strategic plan for the Visa Security Program prior to the expansion of the program may have addressed the questions initially raised by State and embassy officials that led to the delay of the assignment of VSOs. Moreover, a strategic plan would provide a framework for DHS to address broader questions regarding the selection criteria for expansion, the roles and responsibilities of VSOs, and the cost of establishing the program at posts. In addition, a strategic plan would guide rightsizing analyses to determine the appropriate number of VSOs at each post. Officials from DHS and State, as well as consular officials we contacted overseas, all agreed that the development of such a plan would be useful to guide visa security operations in Saudi Arabia and other posts. It would also be useful to inform the Congress, as well as State and other agencies that participate in the visa process at consular posts overseas. Furthermore, as a key stakeholder in the Visa Security Program, State should be consulted in the strategic planning process and, therefore, the concerns and questions raised by State's Office of Rightsizing the U.S.
Overseas Presence and chiefs of mission should be addressed by DHS. Moreover, without a strategic plan that serves as a roadmap for expansion, DHS may continue to experience delays in the approval of NSDD-38 requests at future expansion posts. The placement of VSOs overseas has the potential to improve the security of the visa process at U.S. embassies and consulates. However, the congressional mandate requiring the VSOs in Saudi Arabia to review all applications prior to adjudication prevents them from engaging in other counterterrorism activities, such as providing additional training to consular officers on fraud prevention and interview techniques. Moreover, DHS has not incorporated into its oversight of the Visa Security Program key features of strong program management that are essential to measuring program results and holding staff accountable for achieving them. Before DHS expands this program to other consular posts, it needs a plan outlining its goals and objectives to allow the department to measure program performance and determine the overall value of its visa security operations worldwide. Such a plan needs to address questions from the chiefs of mission who must approve the assignment of VSOs to U.S. embassies or consulates. Addressing these questions would help facilitate negotiations over the expansion of the Visa Security Program. To help ensure that the Visa Security Program, and its expansion to other locations worldwide, is managed effectively, we recommend that the Secretary of Homeland Security: develop a strategic plan, in consultation with the Secretary of State, to guide visa security operations in Saudi Arabia and in other embassies and consulates overseas.
This plan should incorporate the key elements of strategic planning, including a mission statement, program goals and objectives, approaches to achieving those goals, a connection between the long-term and short-term goals, and a description of how the effectiveness of the program will be evaluated. In addition, DHS should include or develop supporting documents that provide more specific information on the criteria used to select the locations for expansion, justification for the number of VSOs at each post, the roles and responsibilities of the VSOs in relation to other agencies located at post, and the resources needed to establish the Visa Security Program overseas. We also recommend that the Secretary of Homeland Security develop performance data that can be used to assess the results of the Visa Security Program at each post. Congress may wish to consider amending current legislation, which requires that VSOs in Saudi Arabia review all visa applications prior to adjudication, to provide DHS the flexibility to determine the extent to which VSOs review applications, based upon the development of a risk-assessment tool. This flexibility would allow VSOs to engage in other activities that would provide additional benefit to consular officers and the visa process. DHS and State provided written comments on a draft of this report (see apps. II and III). DHS stated it was taking actions to implement performance measurements and a strategic plan for the Visa Security Program, as described in our recommendations. DHS indicated that it is expanding the tracking and measurement of performance data to better reflect program results. In addition, DHS stated it is developing a strategic plan that will integrate the key elements described in our recommendation; however, DHS stated that it was unlikely that such a plan would have aided in the approval of the NSDD-38 requests at the five expansion posts.
We believe that a strategic plan would allow DHS to better address questions about the program and would be useful in guiding visa security operations in Saudi Arabia and other consular posts. Regarding the matter for congressional consideration to provide DHS with the flexibility to determine the extent of its review of visa applications in Saudi Arabia, DHS agreed that it needed to expand some of the VSOs' activities in Saudi Arabia, such as providing additional training, which we found was not being provided because of the volume of work that resulted from fulfilling the legislative requirement. DHS noted that a legislative change should maintain DHS's authority and discretion in determining the scope of the VSOs' review. DHS also provided additional details on the Visa Security Program, its plans to improve operations, and its interaction with State regarding program expansion. These comments are reprinted in appendix II, along with our analysis. DHS also provided technical comments, which we incorporated into the report as appropriate. State agreed with our conclusions, stating that the report is an accurate description of the implementation of the Visa Security Program. In addition, State agreed with our matter for congressional consideration. State noted that the ability of the VSOs in Saudi Arabia to access law enforcement and other databases not available to consular officers highlights the importance of shared, interoperable databases worldwide. With regard to the program's expansion outside Saudi Arabia, State also noted that chiefs of mission and its Rightsizing Office are obligated to ensure that staffing overseas for all agencies is at the proper level and consistent with available space and resources. State's comments are reprinted in appendix III. We are sending copies of this report to the Secretaries of State and Homeland Security, and to other interested Members of Congress. We will also make copies available to others upon request.
In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To assess the Visa Security Officers' activities in Saudi Arabia, we reviewed the Homeland Security Act of 2002, which authorized DHS to create the Visa Security Program. In addition, we reviewed the subsequent September 2003 Memorandum of Understanding between State and DHS regarding the implementation of the requirements set forth in section 428 of the Homeland Security Act. We also reviewed an August 2004 report by the DHS Office of Inspector General on DHS's implementation of section 428 requirements and spoke with the Inspector General officials who conducted that review. We interviewed officials from DHS who manage the Visa Security Program in Washington, D.C., as well as officials from State's Bureau of Consular Affairs and the Office of Rightsizing the U.S. Overseas Presence. Moreover, we observed the VSOs' activities in Riyadh and Jeddah, Saudi Arabia, and interviewed the VSOs, as well as consular officials and the chief of mission, regarding the impact of the Visa Security Program at these posts. To assess DHS's plans to expand the Visa Security Program to consular posts outside Saudi Arabia, we reviewed documentation on the department's requests to establish new positions at 5 additional posts and spoke with DHS officials regarding the planned expansion. In addition, we reviewed DHS's criteria for selecting VSOs and the criteria and methodology for selecting expansion posts.
We also compared DHS’s management strategy for the Visa Security Program and its expansion with criteria from the Government Performance and Results Act and associated GAO reports on performance-based, strategic planning. In addition, we visited two of the five posts to which DHS plans to expand the Visa Security Program and interviewed consular and embassy officials, including the chiefs and deputy chiefs of mission, at these locations to discuss these posts’ plans for the VSOs. We also spoke with officials from other law enforcement agencies at post who work with the consular section. Further, we spoke with the consuls general from the other three posts initially chosen for expansion in fiscal year 2005 to discuss the status of DHS plans to expand to these locations. We conducted our evaluation from August 2004 to June 2005 in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Department of Homeland Security’s letter dated July 15, 2005. 1. We revised the highlights page to reflect that no comprehensive data exists to demonstrate the impact of the VSOs in Saudi Arabia. 2. We requested documentation from DHS on the visa application reviews conducted by VSOs in Saudi Arabia. DHS provided weekly operational reports that contained descriptive examples of the reviews of visa applications and the outcomes of those reviews. DHS did not provide systematic data on the operations of the VSOs, and VSOs in Saudi Arabia stated that they did not have a system in place to track the activities of the program. The steps DHS describes appear to be positive steps to incorporate performance measurement into the Visa Security Program, and to implement a workload tracking database. We believe these actions should allow DHS to better demonstrate program results and are consistent with our recommendation. 3. 
We revised the report to clarify that VSOs may recommend a refusal after a preliminary determination to issue the visa by a consular officer. We agree that there might be additional cases where VSOs may influence the decision of consular officers. We believe it is important to measure other outcomes that demonstrate the impact of the Visa Security Program. Furthermore, we believe that it is not difficult to track additional data, and such performance measures should be incorporated into the tracking system for VSO activities. 4. We do not agree that the statement was an error in syntax. We believe that performance measurement is an integral part of effective program management, and the lack of comprehensive data on program impact has hindered the Visa Security Program. Performance data could be used to demonstrate the effectiveness of operations in Saudi Arabia, as well as to illustrate the program's benefits to interested parties, including chiefs of mission at future expansion posts and the Congress. 5. In August 2004, the DHS Office of Inspector General found that the continued use of temporary officers to fill VSO positions was not conducive to developing an effective or efficient long-term visa security operation. In addition, in February 2005, the deputy chief of mission in Saudi Arabia told us that the use of temporary VSOs led to a lack of continuity in operations, and that the VSOs initially were not able to significantly impact the visa process at post. Our report recognized that DHS assigned permanent officers to Saudi Arabia in June 2005. 6. We revised the figure to reflect that VSOs also conduct investigative research on visa applicants in addition to conducting name checks. 7. Our report noted that, in addition to the quantitative data used as preliminary selection criteria, DHS stated it used qualitative data and other practical considerations in choosing the posts.
DHS provided GAO with neither this qualitative data nor the additional considerations used to select expansion posts, and thus we were unable to assess the additional criteria. We made an assessment based on the information and data provided by DHS. 8. We believe that the development of a strategic plan would assist DHS by providing stakeholders, such as State and chiefs of mission, with information regarding the mission, goals, and operations of the Visa Security Program. A strategic plan may have helped to address the questions raised by State and embassy officials that led to the delays in the approvals of the NSDD-38 requests. In addition, we believe that a strategic plan would expedite the approval of future NSDD-38 requests for assignment of VSOs to consular posts. State officials support this view. DHS is taking positive steps by working toward the development of a strategic plan as we recommend. In addition, John Brummet, Daniel Chen, Katie Hartsburg, Jeff Miller, Mary Moutsos, Joseph Carney, and Etana Finkler made key contributions to this report.

The Homeland Security Act of 2002 required that the Department of Homeland Security's on-site personnel in Saudi Arabia review all visa applications. The act also authorized the expansion of the Visa Security Program to other embassies and consulates to provide expert advice and training to consular officers, among other things. Given the congressional interest in effective implementation of the Visa Security Program, we assessed (1) the Visa Security Officers' activities in Saudi Arabia, and (2) DHS's plans to expand its Visa Security Program to other consular posts overseas. Visa Security Officers (VSO) assigned to Saudi Arabia review all visa applications prior to final adjudication by consular officers, and assist consular officers with interviews and fraud prevention; however, no comprehensive data exists to demonstrate the VSOs' impact.
According to State Department consular officers, the deputy chief of mission, and Department of Homeland Security (DHS) officials in Saudi Arabia, the VSOs in Riyadh and Jeddah strengthen visa security because of their law enforcement and immigration experience, as well as their ability to access and use information from law enforcement databases not immediately available, by law, to consular officers. Furthermore, the requirement to review all visa applications in Saudi Arabia limits the VSOs' ability to provide additional training and other services to consular officers, such as assisting with interviews. Moreover, security concerns in Saudi Arabia limit staffing levels at these posts. DHS has not developed a strategic plan outlining the Visa Security Program's mission, activities, program goals, and intended results for operations in Saudi Arabia or the planned expansion posts. Chiefs of mission at the five posts chosen for expansion in fiscal year 2005 delayed approving DHS's requests for the assignment of VSOs until DHS answered specific questions regarding the program's goals and objectives, staffing requirements, and plans to coordinate with existing staff and law enforcement and border security programs at post. DHS's development of a strategic plan may address outstanding questions from chiefs of mission and other embassy officials and help DHS expand the program. |
During fiscal year 2006, DOD reported obligations of over $685 billion, the second largest amount reported by an executive branch entity. Of this, travel obligations were $8.46 billion for fiscal year 2006. Travel includes expenses such as air fare, lodging, per diem, and local transportation. Travel conducted by DOD represents an estimated 60 percent of total travel obligations for the entire federal government. Travel is one of six programs for which IPIA information is reported in DOD’s PAR. IPIA was enacted in November 2002 with the major objective of enhancing the accuracy and integrity of federal payments. Guidance for reporting under IPIA is provided in Appendix C of OMB Circular No. A-123 and requires agencies to:

- Review all programs and activities and identify those that are susceptible to significant improper payments.
- Obtain a statistically valid estimate of the annual amount of improper payments in those programs and activities.
- Report estimates of the annual amount of improper payments in programs and activities and, for estimates exceeding $10 million, implement a plan to reduce improper payments.

In addition, this guidance instructs agencies to institute a systematic method of reviewing all programs and identifying those which they believe to be susceptible to significant improper payments. The guidance defines “significant erroneous payments” as annual improper payments exceeding both 2.5 percent of program payments and $10 million. It further explains that agencies must then estimate the gross total of both over- and underpayments for those programs identified as susceptible. These estimates shall be based on a statistically random sample of sufficient size to yield an estimate with a 90 percent confidence interval of plus or minus 2.5 percentage points. The guidance also requires agencies to consult a statistician to ensure the validity of their sample design, sample size, and measurement methodology.
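The sample-size implication of OMB's confidence-interval requirement can be illustrated with the standard formula for estimating a proportion, n = z²·p(1−p)/d². The sketch below is our own illustration, not OMB's; it assumes the conservative worst case of p = 0.5:

```python
import math

def required_sample_size(z: float, half_width: float, p: float = 0.5) -> int:
    """Smallest n giving a proportion estimate whose confidence interval
    has the stated half-width; p = 0.5 is the conservative worst case."""
    return math.ceil(z ** 2 * p * (1 - p) / half_width ** 2)

# 90 percent confidence, plus or minus 2.5 percentage points
n_90 = required_sample_size(z=1.645, half_width=0.025)  # 1083 vouchers
# OMB's allowed alternative: 95 percent confidence, plus or minus 3 points
n_95 = required_sample_size(z=1.960, half_width=0.03)   # 1068 vouchers
```

In practice a statistician would adjust these figures for the finite population of monthly vouchers and for estimating dollar amounts rather than simple error rates, which is why OMB requires that one be consulted.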
If an agency cannot determine whether or not a payment was proper because of insufficient documentation, OMB Circular No. A-123 requires that the payment be considered an error. According to its guidance, OMB may also determine, on a case-by-case basis, whether certain programs should be reported even if those programs do not meet established thresholds. In February 2007, OMB notified DOD that it was requiring that an improper payment error measurement be reported for travel pay in the fiscal year 2007 PAR under IPIA due to congressional interest and concern regarding this program. For all programs and activities susceptible to significant improper payments, agencies are to determine an annual estimated amount of improper payments made in those programs and activities. If the estimate of improper payments exceeds $10 million, the agency must implement a plan to reduce the amount of such improper payments. If the improper payment estimate is less than $10 million, agencies are still required to report the total in their annual PAR. Although there are over 70 types or circumstances of travel at DOD, DOD travel is generally segregated into two broad types: temporary duty travel (TDY) and permanent change of station (PCS) travel. TDY is travel to one or more places away from a permanent duty station to perform duties for a period of time and, upon completion of assignment, return or proceed to a permanent duty station. PCS travel is the assignment, detail, or transfer of a member or unit to a different permanent duty station under a competent order that does not specify the duty as temporary, provide for further assignment to a new permanent duty station, or direct return to the old permanent duty station. DOD reported that in a typical year over 3 million DOD personnel perform TDY travel and generate over 5 million travel vouchers. For fiscal year 2006, DOD reported $8.5 billion was obligated for travel. 
The Institute for Defense Analyses estimates that $7.3 billion of this amount is for TDY and the remaining $1.2 billion is for PCS travel. DOD has been working to upgrade its TDY travel system since 1993, when the National Performance Review recommended an overhaul of DOD’s TDY travel system. Long-standing concerns about the efficiency and effectiveness of the existing travel systems resulted in the development of DTS to be a centralized, integrated system used to process TDY travel. DTS is envisioned as being DOD’s standard end-to-end travel system. The Defense Finance and Accounting Service (DFAS) reported that about $1.2 billion was processed through DTS in fiscal year 2006. In January 2006 we reported on DOD’s difficulties implementing DTS. DTS was originally intended to be fully implemented by April 2002, but this date was changed to September 2006—a slippage of over 4 years. The report specified two key challenges facing DTS in becoming DOD’s standard travel system: (1) developing needed interfaces and (2) underutilization of DTS at sites where it has been deployed. Extensive travel is still processed through legacy systems. One such system is the Integrated Automated Travel System (IATS), which is used by the Army and several other DOD components. IATS is a manual travel system where the traveler submits paper travel documents (e.g., travel orders, travel voucher, and receipts) for entry into IATS. Once the information is entered into IATS, it is processed and a travel reimbursement is made to the traveler. Under current implementation plans, not all legacy travel systems will be eliminated due to current DTS functionality limitations. Despite difficulties implementing DTS, the Institute for Defense Analyses recently issued a report stating that DTS is the only end-to-end system today with the capability to support all DOD policy and business rules. 
Responsibility for assessing and reporting DOD’s improper payments, including travel, under IPIA rests with the Office of the Comptroller. In its fiscal year 2006 PAR, DOD reported that its current IPIA review did not identify any programs or activities at risk of “significant improper payments” in accordance with OMB criteria. However, the department also reported that civilian, commercial, and travel pay potentially were susceptible to improper payments in excess of $10 million and reported estimated improper payment information for these programs. Further, the department again reported on its sampling and corrective actions concerning its military retirement, military health benefits, and military pay programs. Table 1 shows the information DOD reported for estimated improper payments for six programs, including travel pay, in its fiscal year 2006 PAR. Further, in its 2006 PAR, DOD described the risk assessment process for each of the programs or activities that addressed the strength of the internal controls in place to prevent improper payments and reported on the results in its disclosure. DOD also described the statistical sampling and corrective action plans for these six programs. Additionally, DOD summarized the improper payment reduction outlook for the military retirement, military health benefits, and military pay programs. Finally, DOD described its improper payments auditing, accountability information, information system usage, and statutory and regulatory barriers limiting the department’s corrective actions. Excerpts from DOD’s fiscal year 2006 PAR related to improper payments are reprinted in appendix III of this report. In its fiscal year 2006 PAR, DOD estimated approximately $8 million in travel program improper payments, reported as reflecting about 1 percent of reported program payments.
While this estimate would indicate the program was not at risk of significant improper payments under OMB guidance, we found that DOD’s travel improper payments disclosures for fiscal year 2006 did not reflect the full extent of travel payments made by DOD. The estimate information reported by DOD, which DOD used to assess the travel program’s risk of significant improper payments, included payments from only one system, DTS, which processed an estimated 10 percent of DOD’s travel. Nonetheless, DOD’s 2006 PAR describes a travel postpayment review process that may mislead the reader to believe that the reported travel improper payment estimate represents more than DTS-processed travel. Further, the travel improper payment estimate excluded the largest user of DTS, the Army, which would likely have increased DOD’s estimate by over $4 million. Finally, the statistical sampling methodology and process used by DOD to estimate DTS improper payments as reported for fiscal year 2006 had several weaknesses and did not result in statistically valid estimates of travel improper payments. In its fiscal year 2006 IPIA disclosure for travel, DOD estimated $8 million in improper payments for travel pay, which it reported as reflecting about 1 percent of DOD reported travel payments. Based on our review, we determined that DOD’s estimate of travel improper payments was derived from approximately 10 percent of the $8.5 billion of DOD travel obligations reported by DOD for the fiscal year—excluding a significant portion of travel payments from the PAR disclosure. Further, the DTS improper payments disclosure did not include data on the largest user of DTS, the Army. The reporting of only DTS travel pay, together with the exclusion of Army travel pay processed through DTS, made the disclosure incomplete.
DOD’s fiscal year 2006 reporting of travel improper payments based on travel processed by DTS (excluding the Army) also excluded travel processed in other systems used by several DOD components, including the following:

- In fiscal year 2006, the Army Corps of Engineers used IATS to process all travel. According to information provided by DOD, the Army Corps of Engineers processed over $239 million in travel payments during fiscal year 2006.
- Air Force officials reported using the Reserve Travel System to process $1.5 billion in travel pay in fiscal year 2006.
- The Army utilized IATS to process TDY, PCS, and other types of travel payments. The postpayment review of IATS-processed travel, completed by DFAS for the Army, indicated approximately $1.4 million in improper payments for fiscal year 2006, none of which were reported in the DOD fiscal year 2006 PAR disclosure.

DOD also did not include Army travel processed using DTS in its fiscal year 2006 PAR. The Army is the largest user of DTS—processing a reported $437 million of travel through DTS. As shown in figure 1, the Army represented about 35 percent of the $1.2 billion of total DTS- processed travel in fiscal year 2006. The exclusion of Army improper payment information resulted in further incomplete reporting of travel improper payments in DOD’s fiscal year 2006 PAR. Based on the information provided by DOD, the addition of Army travel payments processed through DTS would have increased estimated improper payments from $7.97 million to $12.6 million. DOD officials told us that the results from Army DTS postpayment reviews were not included in the PAR because the results were not available in time for the reporting deadline. DOD acknowledged that the PAR disclosures regarding this exclusion could have been improved. Moreover, the descriptive information included in DOD’s PAR did not disclose the limitation to its reported estimates.
Within the statistical sampling section of the IPIA reporting in DOD’s fiscal year 2006 PAR, DOD describes reviews of vouchers from IATS and a review of travel conducted by the Army Corps of Engineers, but the results of these reviews were not actually included in the fiscal year 2006 estimate of improper payments. Thus, the descriptive information may mislead readers to believe that the travel improper payment estimates are based on a larger population than is actually reported. When we discussed the exclusion of non-DTS travel improper payments with Office of the Comptroller staff, they explained that they believed the August 2006 release of updated guidance by OMB (namely Appendix C of OMB Circular No. A-123) modified which programs must be reported. In fiscal year 2006, DOD reported three new programs—one of which was travel pay. DOD officials explained that because only DTS data were readily available for reporting by the November 15 deadline, they decided that DTS data would be the only PAR input. DOD acknowledged that the disclosure of this reporting limitation could have been improved. We also found weaknesses in the postpayment review process used to estimate improper payments for DTS-processed travel. Under OMB guidance, agencies are required to obtain a statistically valid estimate of the annual amount of improper payments. However, we found that DOD did not have documented sampling plans that detailed how the samples were planned, executed, and evaluated to derive a statistically valid improper payments estimate for DTS-processed travel. We also found that the methodology used to estimate sampling results for nine DOD agencies was not statistically valid. Appendix C of OMB Circular No. A-123 provides guidance on using statistical sampling to estimate improper payments.
According to the guidance, improper payment estimates shall be based on a statistically valid random sample of sufficient size to yield an estimate with a 90 percent confidence interval of plus or minus 2.5 percentage points (agencies may alternatively use a 95 percent confidence interval of plus or minus 3 percentage points around the estimate of the percentage of improper payments). The guidance also requires agencies to consult a statistician to ensure the validity of their sample design, sample size, and measurement methodology. DFAS was responsible for estimating improper payment information for over $1.2 billion in fiscal year 2006 DTS payments. DFAS was unable to provide us with its DTS postpayment review sampling plan, and according to our discussions with DFAS and Office of the Comptroller officials, one was not prepared. At the end of our fieldwork, DOD provided us a retrospective document describing the fiscal year 2006 sampling plan. The plan described information on the sampling method, payment and account selection, treatment of missing records and errors, and summary reporting. While OMB’s guidance does not require a sampling plan, our Standards for Internal Control in the Federal Government identify control activities such as policies, procedures, and mechanisms that enforce management’s directives and are an integral part of an entity’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. The lack of a documented sampling plan, before and during sampling, is an internal control weakness in the process used by DFAS and could result in testing activities not being completed as anticipated by management. For example, our review of testing for fiscal year 2006 DTS payments raised questions as to the completeness of the testing prior to the projections being made that were included in the PAR. 
Based on our review of the DTS postpayment review results database, as of March 2007, about 41 percent of the vouchers selected for sampling of fiscal year 2006 payments did not include an annotation that the review was completed. When we discussed this with DFAS and Office of the Comptroller staff, they responded by explaining that the population of DTS trips subject to postpayment review for any given month will not represent the actual DTS trip records settled or paid for that month due to the lag between payment and postpayment review. They also stated that statistics or population projections will not be reported for any incomplete monthly sample. DFAS staff further clarified that the fiscal year 2006 reporting was not necessarily based solely on fiscal year 2006 transactions. A component’s fiscal year 2006 projection could be based on activity from both fiscal years 2005 and 2006 due to the timing of postpayment review. For example, a component’s fiscal year 2006 estimate could be based on postpayment review of activity from April 1, 2005, through March 31, 2006—a 12-month period overlapping 2 fiscal years. In order to assess this explanation, we requested additional information from DFAS that would detail what audit months of data were used to project and report fiscal year 2006 travel improper payments. Office of the Comptroller officials told us that they were unable to provide further support because the database did not have the needed information. If a written sampling plan, with appropriate detail, had been developed for fiscal year 2006 DTS postpayment review, it is more likely that DFAS would have performed procedures to assure that sampling was completed prior to projection and that appropriate documentation was maintained. In order to determine the extent of improper payments for travel processed through DTS, DFAS officials explained that the DTS postpayment review was conducted using a monthly random sample for each component and agency. 
In fiscal year 2006, this methodology resulted in the selection of 168 unique samples from 168 distinct populations, with each sample varying in size from 20 test items for a defense agency to nearly 500 for a large military component. DOD reported that it randomly selected vouchers from the monthly population of vouchers based on a 95 percent confidence interval with a precision of 2.5 percentage points; we did not verify the completeness of the data from which the samples were selected or the accuracy of the samples taken. Once a sample item was selected, DFAS reviewed the selected vouchers and recorded the results of its findings in a database. The review process included a recalculation of the travel entitlement based on information submitted on the travel authorization, DTS data, and supporting documents (e.g., travel receipts, credit card information, and DOD and federal travel regulations). The reviewer considered the overall validity of a payment as well as specific items such as appropriate use of organization codes, travel dates, per diem rates, airfare rates, and correct mathematical calculation on the voucher. If an error was found during the postpayment review process, staff recorded it in the database. Each error was classified as one of four error types (lodging, per diem, reimbursement paid incorrectly, or nonmonetary errors). Errors involving lodging, per diem, and reimbursement paid incorrectly are all monetary errors, and each error type had between 13 and 46 subclassification types reviewers used to more accurately describe the error. For example, “reimbursement paid incorrectly” errors could be classified as 1 of 46 more specific error types, such as airfare paid incorrectly, mileage paid incorrectly, and mileage over- or underpaid. DFAS used the review results and information in the database to estimate monthly improper payment amounts for each component and agency.
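The per-component monthly estimation process described above can be sketched roughly as follows. The function and data layout are our own illustration of the general technique, not DFAS code:

```python
import random

def monthly_postpayment_estimate(vouchers, sample_size, review_fn, seed=0):
    """Draw a simple random sample of one component's monthly vouchers,
    apply the postpayment review, and project the monetary error rate
    back onto the month's full population.

    vouchers  -- list of (voucher_id, paid_amount) tuples
    review_fn -- returns the improper-payment amount found on a voucher
    """
    rng = random.Random(seed)
    sample = rng.sample(vouchers, min(sample_size, len(vouchers)))
    sampled_paid = sum(paid for _, paid in sample)
    sampled_improper = sum(review_fn(v) for v in sample)
    error_rate = sampled_improper / sampled_paid if sampled_paid else 0.0
    population_paid = sum(paid for _, paid in vouchers)
    # (error rate, projected improper payments for the month)
    return error_rate, error_rate * population_paid
```

Under this design, each of the 168 component-month populations would be estimated independently, which is why the later aggregation step matters.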
During our review, we noted what could be an incorrect categorization of “receipts not received” as a nonmonetary error. OMB guidance states that “when an agency’s review is unable to discern whether a payment was proper as a result of insufficient or lack of documentation, this payment must also be considered an error.” While DFAS categorized this as a nonmonetary error, this type of error could potentially be a monetary error, but was not included in the estimate of improper payments. We noted nearly 200 instances where a payment was categorized as “receipts not received”—a nonmonetary error. However, because the monetary value of the error was not provided, we were unable to determine the effect of this incorrect classification on travel improper payments reported in DOD’s fiscal year 2006 PAR. When we discussed this categorization error with DOD officials, they explained that during the review process, DFAS allows a traveler a maximum of 30 working days to submit receipts that were not available for review. During this time, the sample item is considered open and the error is categorized as nonmonetary. If after the allotted time period the receipts are not provided, the amount is considered a monetary error. DOD believes the approximately 200 cases are likely those where the examiner was awaiting receipts for final determination of their propriety. However, on the basis of our review, we noted that all of these items had a completed date annotated in the database, suggesting they were completed audits—not audits awaiting additional documentation. We requested additional documentation from DOD that would support its assessment of these vouchers. DOD did not provide any documentation but did note in a written response to us that “all items are reviewed and settled with a determination of whether or not they are improper payment errors, and improper payments are reported as such. 
Incorrect or incomplete documentation may relate to nonmonetary errors that are also not improper payments (such as the wrong form being used or missing elements that are DOD internal procedural requirements, but are not required by law to support the payment).” DOD also utilized a flawed methodology to estimate DTS improper payments at nine DOD agencies. Information reported for the defense agencies in the fiscal year 2006 PAR was prepared by totaling the monthly sample results from the nine defense organizations and then estimating an improper payment amount from this aggregate data, instead of deriving an estimate for each organization and then aggregating those estimates and their related confidence intervals. As described above, monthly samples were taken by component and agency for postpayment review. Despite the segmentation of the population during the testing process, the information reported by DFAS to the Office of the Comptroller groups the nine defense organizations as “other” and uses the sum of the nine organizations’ sample results to estimate an error amount. Because samples were selected for each organization separately, summing the raw sample results and applying a combined error rate incorrectly projected the organizations’ estimates to the population. DFAS reports that for fiscal year 2007 it will ensure that sample statistics and population estimates for defense agencies are computed at the agency level and then summarized. As discussed in the previous section, DOD’s process for estimating and reporting improper payments for its travel program for inclusion in its fiscal year 2006 PAR was significantly flawed. Going forward, DOD plans to use the results from its annual Improper Payments Survey, conducted by the Office of the Comptroller, to determine the extent of improper payments in several programs, including travel.
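The difference between pooling raw sample results and estimating per agency can be shown with hypothetical figures (the numbers and field names below are invented for illustration; when agencies have very different sampling fractions or error rates, the two methods diverge):

```python
def pooled_estimate(strata):
    """Flawed approach: sum raw sample results across agencies and apply
    the combined sample error rate to the combined population."""
    errors = sum(s["sample_errors"] for s in strata)
    paid = sum(s["sample_paid"] for s in strata)
    population = sum(s["population_paid"] for s in strata)
    return errors / paid * population

def stratified_estimate(strata):
    """Corrected approach: project each agency's own sample error rate
    onto its own population, then sum the per-agency estimates."""
    return sum(s["sample_errors"] / s["sample_paid"] * s["population_paid"]
               for s in strata)

# Two hypothetical agencies with very different sampling fractions:
agencies = [
    {"sample_paid": 10_000, "sample_errors": 1_000, "population_paid": 1_000_000},
    {"sample_paid": 10_000, "sample_errors": 0,     "population_paid": 10_000},
]
# pooled_estimate(agencies) yields 50,500, roughly half of the
# stratified_estimate of 100,000, because the small agency's clean
# sample dilutes the large agency's 10 percent error rate.
```

A full correction would also compute the confidence interval for each agency and combine the variances, which the per-agency sample sizes must support.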
The survey of fiscal year 2006 payments was not prepared in time for inclusion in the fiscal year 2006 PAR, in November 2006, but has since been completed. DOD plans to use these results for its fiscal year 2007 PAR reporting. We reviewed this survey as a component of the department’s risk assessment for improper payments in the travel program. Our review identified several weaknesses in the survey and reported results that, if uncorrected, will limit the department’s ability to fully assess improper payments in the travel program. We identified weaknesses in DOD’s guidance regarding the estimation of travel improper payments and a lack of oversight and review by the Office of the Comptroller over implementation of the survey and its results. The department is also taking other steps to improve its reporting under IPIA. To address reporting issues identified in its fiscal year 2006 auditor’s report, DOD has established a Program Officer for Improper Payment and Recovery Auditing. Further, the department is establishing an improper payment working group and held a “Department of Defense Improper Payments Information Act Conference.” DOD assesses its programs, including travel, for improper payments based on its departmentwide annual Improper Payments Survey. The survey, distributed annually by the Office of the Comptroller, queries DOD components to determine the extent of improper payments in several programs, including travel, across the department. We reviewed this survey as a component of the department’s risk assessment for improper payments in the travel program. Our review indicated several weaknesses in the survey and reported results, including weaknesses in the guidance regarding the estimation of travel improper payments and a lack of oversight and review of the survey and its results. These weaknesses, if uncorrected, will limit the department’s ability to fully assess improper payments in the travel program.
In order to more fully assess its travel program for improper payments, the Office of the Comptroller issues its annual Improper Payments Survey to DOD components. The survey requests that each component report to the Office of the Comptroller the amount of improper payments for several programs throughout the department and specify additional programs or activities as needed. For fiscal year 2006, the Office of the Comptroller issued guidance on completion of the IPIA survey to DOD officials. The guidance included a cover memorandum that requested that all services and agencies review and report on any program or activity payment for which the component computed the entitlement. Accompanying the memorandum were Appendix C of OMB Circular No. A-123, results of the previous year’s Improper Payments Survey, and a survey template for the component to use to submit survey results. The survey for fiscal year 2006 was sent to the components in January 2007, with survey results due to the Office of the Comptroller by January 26, 2007. The completed survey was provided to us in April 2007. DOD also used the survey to report a more complete travel population to OMB. This report detailed improper payment information for $3.4 billion in travel payments rather than the $824 million reported in the PAR. The survey also identified $20 million in travel improper payments, a $12 million increase from the $8 million reported in DOD’s fiscal year 2006 PAR. Eight entities, other than DFAS, reported information in the fiscal year 2006 IPIA survey for travel pay. A summary of the improper payment survey results for travel is shown in table 2.
Going forward, considering the complexity of DOD, the extent of travel throughout the department, and the information reported in the survey of fiscal year 2006 payments, the guidance issued by the Office of the Comptroller does not provide components with the information needed to properly report improper payments for a useful assessment. Our internal control standards identify information and communications as one of the five standards for internal control. This standard states that information should be communicated to those within the entity in a form that enables them to carry out their responsibilities. The guidance issued by the Office of the Comptroller does not provide adequate direction specific to DOD to allow components to prepare reliable estimates of improper payments. For example, while OMB guidance requires that agencies obtain a statistically valid estimate of the annual amount of improper payments in a program, Office of the Comptroller guidance does not adequately address sampling methodologies to employ, or provide contact information on how to seek assistance with this matter. Further, the guidance does not offer detailed information on the steps needed to adequately implement IPIA at DOD or examples of improper payments relevant to DOD. In addition, the guidance does not provide sufficient procedures on how to identify or assess risk factors to assist DOD components in identifying programs and activities vulnerable to improper payments, such as assessments of internal control, audit report findings, and human capital risks related to staff turnover, training, or experience. Assessing the effect of risk conditions identified during the risk assessment plays a major role in effectively determining the overall risk level of an agency’s operations. Some risk conditions may affect a program or activity to a greater or lesser degree. Likewise, not all risk conditions may be relevant to each program or activity.
This type of risk identification and assessment is consistent with our previous recommendation that OMB establish risk factors in its guidance for agencies to consider, and is also consistent with our standards of internal control and executive guide on strategies to manage improper payments, which provides a framework for conducting a comprehensive risk assessment. The process each DOD component uses to estimate its travel improper payments and report to the Office of the Comptroller varies throughout the department and is largely decentralized. Further complicating the assessment for travel improper payments are the numerous systems used to process travel throughout the department. For survey reporting, DFAS (Indianapolis) is responsible for reporting all travel processed through DTS and certain payments processed through IATS for Army and some other defense agencies. The determination of all other travel pay and associated improper payments is the responsibility of the component that computed the entitlement. The Office of the Comptroller relies on each component to determine and report this information. In our review of the fiscal year 2006 Improper Payments Survey, we noted that the survey results were not always statistically valid and in some cases appear unreasonable. Improved guidance by the Office of the Comptroller will be necessary to assure that survey information is reliable and complete for IPIA reporting. DFAS is responsible for estimating and reporting travel improper payments for travel processed for the Army by the IATS system. In fiscal year 2006, DFAS (Indianapolis) did not conduct a statistically valid sample and review of travel payments processed through IATS. Instead, officials from DFAS performed limited reviews of IATS vouchers paid to determine if any such payments were improper. 
For example, DFAS reviewed payments to determine if payments for the same travel activity had been paid in both DTS and IATS—essentially a duplicate payment review. This review found $1.5 million in improper payments in fiscal year 2006, which was reported in the survey, as shown previously in table 2. Such DFAS IATS reviews cannot be used to estimate the value of improper payments to the entire IATS population. The Air Force reports improper payment information on travel processed through the Reserve Travel System. For fiscal year 2005, the Air Force sought the guidance of the Air Force Audit Agency to determine if the Air Force had developed and used an effective methodology to estimate and report the dollar amount of improper travel payments processed through the Reserve Travel System. The Air Force Audit Agency reported that the methodology used by the Air Force to estimate Reserve Travel System improper payments did not meet IPIA requirements. As part of the audit, the Air Force Audit Agency developed and provided the Air Force with a statistically valid sampling methodology for centralized reviews that it reported would meet IPIA reporting requirements. The Air Force told us that it now follows the sampling methodology developed by the Air Force Audit Agency. As shown in table 2, utilizing this methodology, the Air Force estimated nearly $4.6 million in travel improper payments were processed in fiscal year 2006 by the Reserve Travel System—an improper payment estimate of approximately 4.5 percent of the $101 million in payments processed by the Reserve Travel System during this period. However, after reporting its estimated results in the survey for fiscal year 2006 payments, the Air Force revised the results of its IPIA review. In a memo dated August 8, 2007, the Air Force disclosed an underestimation of total Reserve Travel System payments, revising the reported amount to $1.5 billion, instead of the $101 million originally reported. 
Based on a centralized review, the Air Force projected its improper payments to be $13.6 million—an error rate of 0.9 percent. The Army also has a decentralized process for reviewing non-DTS travel reimbursements for improper payments. As described above, DFAS (Indianapolis) is responsible for identifying and reporting Army IATS payments calculated and disbursed by DFAS. However, the Army also reported travel improper payments for three other programs or activities in the fiscal year 2006 improper payments survey: Army-Korea; Army-Europe; and the Army Corps of Engineers. Staff responsible for the improper payment review for the Korea command explained the process they follow to estimate and report improper payments, which is completed as part of the internal control process and includes an annual inspection. In fiscal year 2006, this review included a reinspection of every voucher for a 1-month period. This review discovered few improper payments. Future plans for improper payment reviews were unknown when we spoke to the Army-Korea staff because of the ongoing DTS implementation there. In the fiscal year 2006 survey, Army-Korea reported no improper payments and over $12.6 million in travel payments. The improper payment review for Army-Europe is even more decentralized, with finance officers throughout the region preparing improper payment information independently. Based on our discussions with Army-Europe staff, there is no formal process for reviewing and reporting improper payment information throughout the region beyond the IPIA guidance provided by the Office of the Comptroller. In the fiscal year 2006 survey, Army-Europe reported no improper payments and over $3.9 million in payments. During fiscal year 2006, the Army Corps of Engineers processed all of its travel using IATS. The Army Corps of Engineers finance center is responsible for compiling and reporting travel improper payments. 
Officials from the Army Corps of Engineers finance center reported that all TDY and PCS vouchers greater than or equal to $2,500 were subject to postaudit review, and a sample of every 366th TDY voucher less than $2,500 was also reviewed by finance center staff. The sampling plan was designed to have a 95 percent confidence level plus or minus 2 percent. A DFAS statistician attested to the validity of the sampling methodology used by the Army Corps of Engineers. In the fiscal year 2006 survey, the Army Corps of Engineers reported $57,279 in travel improper payments. Our review indicated that the weaknesses in the survey and its reported travel results were caused, in part, by the Office of the Comptroller’s limited oversight and review of the survey and its results. These weaknesses include a survey that does not consider the entire population of travel payments for fiscal year 2006 and information reported that appears unreliable. Without improved oversight by the Office of the Comptroller, the department’s future reporting under IPIA could be compromised. Our internal control standards include monitoring as one of the five standards for internal control. The standards provide that internal control should generally be designed to assure that ongoing monitoring occurs in the course of normal operations and includes regular management and supervisory activities, comparisons, and reconciliations. During our review of the fiscal year 2006 improper payments survey, we noted several weaknesses that indicate a lack of appropriate monitoring or oversight by the Office of the Comptroller. The most notable weakness we found in the survey is the population of travel payments from which improper payments were estimated. As shown in figure 2, in fiscal year 2006, $8.5 billion was obligated for travel by DOD as reported in the DOD budget for fiscal year 2008. 
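As a brief aside on the Army Corps of Engineers plan described above: its stated parameters (a 95 percent confidence level, plus or minus 2 percent) can be translated into a minimum sample size using the standard formula for estimating a proportion. The sketch below is illustrative only; the voucher population figure is hypothetical and is not drawn from the report.

```python
import math

def min_sample_size(z=1.96, margin=0.02, p=0.5):
    """Minimum simple-random-sample size for estimating a proportion
    within +/- margin at the confidence level implied by z.
    p = 0.5 is the most conservative (largest-sample) assumption."""
    raw = z**2 * p * (1 - p) / margin**2
    return math.ceil(round(raw, 6))

# 95 percent confidence (z = 1.96), +/- 2 percent margin
n = min_sample_size()
print(n)  # 2401 vouchers

# A systematic selection of every 366th voucher draws roughly
# population / 366 items; a hypothetical population of one million
# sub-$2,500 TDY vouchers would yield about 2,732 selections.
hypothetical_population = 1_000_000  # hypothetical, for illustration
print(hypothetical_population // 366)  # 2732
```

Note that under the Corps' approach the systematic interval, not this formula, determines the actual number of vouchers drawn; the formula simply shows what the stated confidence level and margin would require under simple random sampling.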
The improper payments survey for this same time period reported a total travel expenditure population of $3.4 billion—a difference of $5.1 billion. The survey results were more complete than the PAR reporting, which reported on approximately $824 million in travel payments. The exclusion of such a significant portion of travel payments in the survey decreases its effectiveness as an improper payments assessment tool and indicates inadequate monitoring of the survey process by the Office of the Comptroller. A strong internal control environment, particularly monitoring, should have included regular reconciliations and comparisons that would have brought this discrepancy to management’s attention in a timely manner. The Office of the Comptroller did not clearly define DOD’s travel population before collecting the improper payment information for its fiscal year 2006 improper payment survey. Because DOD did not clearly define the full population of travel payments, the full extent of travel improper payments at the department is unknown. Further, until DOD establishes guidance with sufficiently detailed procedures on how to define its population for travel IPIA reporting, future annual reporting is unlikely to be comparable across fiscal years, which could prevent users of IPIA information from determining progress made in reducing improper payments. When we met with Office of the Comptroller officials in March 2007, we discussed the importance of a complete travel payment population. At that time, Office of the Comptroller officials said they were not aware that the travel payment population was a potential problem. However, after discussion they agreed to reconcile the difference between the budget obligation amounts ($8.5 billion) and the payment amounts reported in the fiscal year 2006 improper payments survey ($3.4 billion). 
In September 2007, the Office of the Comptroller provided reconciliation information for approximately $1.9 billion in travel payments and described the following factors that may have contributed to the remaining variance: classified program expenditures that are not included in IPIA reporting, timing differences between obligations and expenditures, and reviews and reporting based on audit month rather than actual reporting period (e.g., audit and reporting year may run from August through July while the fiscal year is October through September). As reported in the PAR, DOD travel improper payments appear immaterial, and the fiscal year 2006 reported travel payment population was substantially smaller than that of other programs reported on by DOD. While this might decrease the focus given to the travel program, we do not consider the information reliable. If, for example, the total population of $8.5 billion were reported on with an improper payment rate of 1 percent, as estimated by DOD, travel improper payments would be approximately $85 million. This is a substantial amount of improper payments and would exceed the improper payment estimates for all but two programs as reported in fiscal year 2006—military health benefits and commercial pay. In addition to the incomplete travel payment population, we noted weaknesses in the oversight and review of some data reported in the improper payments survey. Four programs/activities (Marine Corps Travel Pay (IATS), Army-Korea, Army-Europe, and the Defense Security Service PCS travel) reported no improper payments on $17 million in associated travel payments. We did not review vouchers to determine if any improper payments existed in this population, but based on the description of the improper payments review we obtained from Army-Korea and Army-Europe, we do not believe the review process used provided a reliable basis for IPIA reporting for those components. 
For example, Army-Korea’s review covered only payments made during a 1-month period, instead of a statistically valid sample of payments for the fiscal year. Additionally, one entity reporting under Army-Europe reported that it has very few improper payments because it preaudits its travel vouchers. Another official told us that Army-Europe does little to identify and report travel improper payments. Army-Europe staff told us they were not trained in the proper method for reporting improper payment information. While it is possible there may not be any improper payments in a population, the review process was inadequate to provide a basis for reporting no improper payments on $17 million in travel payments. When we discussed these concerns, Office of the Comptroller staff informed us that they are doing further analysis to determine the accuracy and completeness of the information. Further, the staff told us that they use variance analysis to determine if the information submitted is reasonable based on previously reported information. While there are serious challenges facing the Office of the Comptroller in the assessment and reporting of travel improper payments, the office is taking steps to improve its oversight of the IPIA estimating and reporting process. Recently, the Office of the Comptroller established a Project Officer for Improper Payments and Recovery Auditing. Additionally, the DOD Project Officer has established a working group, composed of representatives from numerous components, intended to further improve DOD’s compliance with IPIA reporting. 
To introduce DOD component participants to improper payment issues, including identifying and reporting improper payments, establishing and achieving reduction targets, and recovery auditing, the Office of the Comptroller held the “Department of Defense Improper Payments Information Act Conference.” The conference, held in May 2007, included presentations by officials from OMB, DFAS, Navy, DOD OIG, and the Office of the Comptroller. Additionally, there was a 3-hour session dedicated to statistical analysis, with presentations by OMB and DFAS on statistical methodologies. Such conferences or other training activities, if held on a regular basis, could serve to better train DOD staff responsible for improper payment reporting and help assure that information provided to the Office of the Comptroller for reporting is reliable and prepared in accordance with OMB and DOD guidance. Further, the Office of the Comptroller provided us with a draft of the “Recommended Post Payment Sampling Plan for Defense Travel System, WinIATS & PCS Travel Claims,” for fiscal year 2008. This sampling plan details the sampling method, selection process, treatment of missing records, information on completing the target sample, reporting errors, and summary reporting. If implemented effectively, this methodology should result in simple random sampling of DTS payments by component and the sampling of DOD agencies in aggregate. The plan is estimated to reduce the number of DTS sample items from approximately 43,600 in fiscal year 2007 to 17,600 in fiscal year 2008—a decrease of 26,000 tested sample items for the year, largely from a reduction in sampling of DOD agencies. This decrease should allow DFAS to perform more timely postpayment reviews because the number of sample items to be reviewed would be smaller. 
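The structure of the draft plan described above (a simple random sample of DTS payments drawn separately for each military component, with the smaller DOD agencies pooled and sampled once in aggregate) can be sketched as follows. All component names, population sizes, and sample sizes here are hypothetical; the sketch only illustrates the stratification, not DOD's actual plan parameters.

```python
import random

def draw_dts_samples(populations, n_component, n_agencies, seed=2008):
    """Sketch of the draft plan's approach: a simple random sample per
    military component, with DOD agency payments pooled and sampled
    once in aggregate rather than agency by agency."""
    rng = random.Random(seed)
    samples = {}
    agency_pool = []
    for name, vouchers in populations.items():
        if name.startswith("Agency:"):
            agency_pool.extend(vouchers)  # pool the small agencies
        else:
            samples[name] = rng.sample(vouchers, n_component)
    samples["DOD agencies (aggregate)"] = rng.sample(agency_pool, n_agencies)
    return samples

# Hypothetical voucher populations, identified here only by ID
populations = {
    "Army": list(range(0, 5000)),
    "Navy": list(range(5000, 9000)),
    "Air Force": list(range(9000, 12000)),
    "Agency: A": list(range(12000, 12500)),
    "Agency: B": list(range(12500, 12800)),
}
samples = draw_dts_samples(populations, n_component=300, n_agencies=200)
print(sum(len(s) for s in samples.values()))  # 3 * 300 + 200 = 1100
```

Sampling the agencies in aggregate rather than individually is what drives the plan's reduction in total sample items, since each separately sampled stratum would otherwise need its own statistically valid sample.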
Although DOD has made some progress in implementing the requirements of IPIA for its travel program, challenges remain in ensuring that the complete population of all appropriate travel payments has been identified and reviewed to reliably determine its susceptibility to significant improper payments. As DOD continues to improve its IPIA efforts in the travel program, the agency should be better able to identify and report improper payments. This is not a simple task and will not be easily accomplished, particularly in light of the decentralized nature of DOD’s travel program. For example, although DTS is intended to centralize travel processes at DOD, that goal has not yet been achieved, and an estimated 90 percent of DOD’s travel payments for fiscal year 2006 were computed outside of DTS. Improved guidance and oversight by the Office of the Comptroller will be key to ensuring that complete and reliable estimates of improper payments are reported. With the ongoing imbalance between revenues and outlays across the federal government, and the Congress’s and the American public’s increasing demands for accountability over taxpayer funds, improving DOD’s ability to identify, reduce, and recover travel improper payments is even more critical. In order to improve the usefulness and completeness of IPIA reporting for DOD’s travel program, we recommend that the Secretary of Defense direct the DOD Comptroller to take the following four actions:

1. Establish and implement policies and procedures to reliably identify the complete population of DOD travel payments.

2. Establish and implement policies and procedures to report a valid improper payment estimate for the population.

3. Develop and implement guidance for the preparation of improper payment estimates, to include (1) how to compute a statistically valid estimate of improper payments, and (2) the consideration of risk factors associated with vulnerability to improper payments.

4. Establish and implement policies and procedures specifying actions to oversee the data collection process for travel improper payments to be included in the annual PAR, including, at a minimum: (1) periodic reviews of processes used by components to prepare improper payment estimates, and (2) reviews of information reported in the improper payment survey to assure that the population being reported is complete, and the improper payment estimate data reported are reliable and complete.

DOD provided written comments on a draft of this report, which are reprinted in appendix II. In its written response, DOD concurred with three of the recommendations and partially concurred with the fourth. DOD partially agreed with our recommendation that the department develop and implement guidance for the preparation of improper payment estimates, including how to compute a statistically valid estimate of improper payments and the consideration of risk factors associated with vulnerability to improper payments. Although it concurred with the intent of our recommendation, DOD stated that OMB’s Circular No. A-123, Appendix C, provided guidance for statistical sampling and for identifying risk factors and that the decentralized nature of the DOD components and the varying systems used for travel pay computations make a detailed universal approach impractical. We agree that DOD is a decentralized organization, with a wide breadth of activities and components. Indeed, this is the primary reason for our recommendation that DOD develop and implement additional guidance for use by its components. OMB’s guidance provides a broad framework for use by agencies across the federal government. However, individual agencies are responsible for implementing OMB’s guidance with policies and procedures that meet the specific needs of their operations. 
Therefore, we continue to recommend that DOD issue guidance to provide potential sampling methodologies, contact information on how to seek assistance with this matter, information on the steps needed to adequately implement IPIA at the component level, and examples of improper payments relevant to DOD. This guidance should be developed in a form that enables DOD staff across the broad range of DOD activities and components to carry out their responsibilities. Further, DOD commented that it had completed action on all recommendations. Specifically, DOD stated that a policy memorandum issued by the Deputy Chief Financial Officer, dated November 27, 2007, addressed all needed actions. This document consists of a cover memorandum, excerpts from Appendix C of OMB Circular No. A-123, and DOD improper payment component contact information. As part of our standard recommendation follow-up process, we will consider this policy memorandum as well as DOD’s progress in implementing it throughout the department. It is important that DOD take action to ensure that such policy guidance is fully and effectively implemented if DOD is to improve the usefulness and completeness of its IPIA report for the travel program. DOD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to other interested congressional committees and to affected federal agencies. Copies of this report will be made available to others upon request. In addition, this report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please contact me at (202) 512-9095 or at williamsm1@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
To assess the completeness and accuracy of the Department of Defense’s (DOD) fiscal year 2006 Improper Payments Information Act of 2002 (IPIA) disclosure for travel improper payments, we reviewed the IPIA disclosures in its performance and accountability reports (PAR) for fiscal years 2003, 2004, 2005, and 2006. We also contacted representatives from the DOD Office of Inspector General to discuss their assessment of DOD’s compliance with IPIA for fiscal year 2006. We met with representatives from the Office of the Under Secretary of Defense (Comptroller) (Office of the Comptroller) to discuss their preparation of IPIA disclosure information included in the DOD PAR. We met with and obtained supporting documentation from Defense Finance and Accounting Service (DFAS) officials responsible for estimating and reporting improper payment information for travel processed by the Defense Travel System (DTS). We observed the processing of Army travel through the Integrated Automated Travel System (IATS) and the postpayment review process for both Army IATS-processed travel and DTS-processed travel. We analyzed information provided by DFAS officials related to travel postpayment review to determine if it was complete and reliable for reporting purposes. We also reviewed associated improper payment reporting information. Additionally, we reviewed applicable laws, Office of Management and Budget guidance, DOD procedural directives, DOD memos, and other guidance used by DOD to guide IPIA reporting to determine what legislation and guidance were in place. To assess DOD’s planned efforts to improve and refine its processes for estimating and reporting on travel improper payments, we met with staff from the Office of the Comptroller. 
We obtained information from representatives from Army-Korea, Army-Europe, the Army Corps of Engineers, Air Force, and Navy to determine how each component reviewed travel payments processed through its respective legacy systems for improper payments and the reporting of that information to the Office of the Comptroller. We reviewed the Improper Payments Survey for fiscal year 2006 payments and met with officials from the Defense Travel Management Office and the Project Management Office for the Defense Travel System. We attended the “Department of Defense Improper Payments Information Act Conference.” We did not independently verify the reliability of all information provided. However, we did compare it with other supporting documents, when available, to determine data consistency and reasonableness. Based on our analysis, we believe the information we obtained is sufficiently reliable for its use in this report. We conducted this performance audit from November 2006 through September 2007 in accordance with generally accepted government auditing standards. We provided a draft of this report to DOD for comment. DOD provided written comments, which are presented in the Agency Comments and Our Evaluation section of this report and are reprinted in appendix II. DOD also provided technical comments, and we made revisions as appropriate. In addition to the contact named above, the following individuals also made significant contributions to this report: Dianne Guensberg, Assistant Director; Sharon Byrd; Francine DelVecchio; Heather Dunahoo; and Bradley Klingsporn.

Fiscal year 2006 was the first year that the Department of Defense (DOD) reported improper payment information for its travel program under the Improper Payments Information Act of 2002 (IPIA). For fiscal year 2006, DOD reported obligations of approximately $8.5 billion for travel. 
Congress mandated that GAO assess the reasons why DOD is not fully in compliance with IPIA related to travel expenditures. In May 2007, GAO issued an initial report in response to this mandate. To further respond, GAO assessed (1) the completeness and accuracy of DOD's fiscal year 2006 IPIA travel disclosure in its performance and accountability report (PAR), and (2) DOD's planned efforts to improve and refine its processes for estimating and reporting on travel improper payments. To complete this work, GAO reviewed DOD's IPIA reporting, IPIA, and the Office of Management and Budget's (OMB) IPIA implementing guidance, and met with cognizant DOD officials. In its fiscal year 2006 PAR, DOD reported an estimate of approximately $8 million in travel improper payments, reflecting about 1 percent of reported travel payments. While this estimate would indicate the program was not at risk of significant erroneous payments under OMB guidance, DOD's improper payment travel disclosure for fiscal year 2006 was incomplete. The DOD travel payment data used to assess the program's risk of significant improper payments only included payments processed by the Defense Travel System (DTS)—approximately 10 percent of the $8.5 billion of DOD travel obligations reported for fiscal year 2006. Further, DOD's 2006 PAR described a travel postpayment review process that may mislead readers into believing that the reported travel improper payment estimate represents more than DTS-processed travel. The travel improper payment estimate also excluded the largest user of DTS, the Army, which would likely have increased DOD's estimate by over $4 million. Finally, the statistical sampling methodology and process used by DOD to estimate DTS improper payments as reported for fiscal year 2006 had several weaknesses and did not result in statistically valid estimates of travel improper payments. 
DOD is taking steps to more fully assess and report on its travel program for improper payments for future IPIA reporting. DOD's planned assessment is to be based on an annual Improper Payments Survey conducted by the Office of the Under Secretary of Defense (Comptroller). However, GAO's review identified several weaknesses with the survey and reported results, including limited guidance on how to estimate travel improper payments and a lack of oversight and review over implementation of the survey and its results. As shown in the figure below, there were substantial discrepancies among the travel populations reported in the PAR, improper payment survey, and fiscal year 2006 travel obligations. The exclusion of such a significant portion of travel expenditures in the survey decreases its effectiveness as an improper payments assessment tool. DOD has also established a Project Officer for Improper Payments and Recovery Auditing and an improper payment working group, and it held a "Department of Defense Improper Payments Information Act Conference." 
DOD’s current space network is comprised of constellations of satellites, ground-based systems, and associated terminals and receivers. Among other things, these assets are used to perform intelligence, surveillance, and reconnaissance functions; perform missile warning; provide communication services to DOD and other government users; provide weather and environmental data; and provide positioning and precise timing data to U.S. forces as well as national security, civil, and commercial users. All of these systems are playing an increasingly important role in military operations. According to DOD officials, for example, in Operation Iraqi Freedom, approximately 70 percent of weapons were precision-guided, most of those using Global Positioning System (GPS) capabilities. Weather satellites enabled war fighters to not only prepare for, but also take advantage of blinding sandstorms. Communication and intelligence satellites were also heavily used to plan and carry out attacks and to assess post-strike damage. Some of DOD’s satellite systems—such as GPS—have also grown into international use for civil and military applications and commercial and personal uses. Moreover, the demand for space-based capabilities is outpacing DOD’s current capacity. For example, even though DOD has augmented its own satellite communications capacity with commercial satellites, in each major conflict of this past decade, senior military commanders reported shortfalls in capacity, particularly for rapid transmission of large data files, such as those created by imagery sensors. DOD is looking to space to play an even more pivotal role in future military operations. As such, it is developing several families of new, expensive, and technically challenging satellites, which are expected to require dramatically increased investments over the next decade. 
For example, DOD is building new satellites that will use laser optics to transport information over long distances in much larger quantities than radio waves. The system, known as the Transformational Satellite, or TSAT, is to be the cornerstone of DOD’s future communications architecture. Many space, air, land, and sea-based systems will depend on TSAT to receive and transmit large amounts of data to each other as DOD moves toward a more “network centric” war-fighting approach. DOD is also building a new space-based radar (SBR) system, which is to employ synthetic aperture radar and other advanced technologies to enable DOD to have 24-hour coverage over a large portion of the Earth on a continuous basis and allow military forces a “deep-look” into denied areas of interest, on a non-intrusive basis without risk to personnel or resources. SBR itself is expected to generate large amounts of imagery data, and it will rely on TSAT to deliver this data to war fighters. As figure 1 shows, the costs of these and other new efforts will increase DOD’s annual space investment significantly. For example, based on the 2003 President’s budget, acquisition costs for new satellite programs and launch services in the next 4 years are expected to grow by 115 percent—from $3.5 billion to about $7.5 billion. Costs beyond that period are as yet unknown. While DOD’s budget documents show a decrease in 2009 for these systems to $6.4 billion, they do not include procurement costs for some of the largest programs, including TSAT, GPS III, SBR, Space Tracking and Surveillance System (STSS), and Space-Based Surveillance System (SBSS), which DOD will begin fielding beginning in 2011. Nor do these numbers reflect the totality of DOD’s investment in space. For example, ground stations and user equipment all require significant investment, and that investment will likely increase as the new programs mature. Table 1 identifies specific programs factored into our analysis of upcoming investments. 
It also shows that DOD will be fielding many of the new programs within just a few years of each other. For the past 6 years, we have been examining ways DOD can get better outcomes from its investment in weapon systems, drawing on lessons learned from the best, mostly commercial, product development efforts. Our work has shown that leading commercial firms expect that their managers will deliver high quality products on time and within budgets. Doing otherwise could result in losing a customer in the short term and losing the company in the long term. Thus, these firms have adopted practices that put their individual programs in a good position to succeed in meeting these expectations on individual products. Collectively, these practices ensure that a high level of knowledge exists about critical facets of the product at key junctures and is used to make decisions to deliver capability as promised. We have assessed DOD’s space acquisition policy as well as its revised acquisition policy for other weapon systems against these practices. Our reviews have shown that there are three critical junctures at which firms must have knowledge to make large investment decisions. First, before a product development is started, a match must be made between the customers’ needs and the available resources—technical and engineering knowledge, time, and funding. Second, a product’s design must demonstrate its ability to meet performance requirements and be stable about midway through development. Third, the developer must show that the product can be manufactured within cost, schedule, and quality targets and is demonstrated to be reliable before production begins. If the knowledge attained at each juncture does not confirm the business case on which the acquisition was originally justified, the program does not go forward. These precepts hold for technically complex, high volume programs as well as low volume programs such as satellites. 
In applying the knowledge-based approach, the most-leveraged investment point is the first: matching the customer’s needs with the developer’s resources. The timing of this match sets the stage for the eventual outcome—desirable or problematic. The match is ultimately achieved in every development program, but in successful development programs, it occurs before product development begins. When the needs and resources match is not made before product development, realistic cost and schedule projections become extremely difficult to make. Moreover, technical problems can disrupt design and production efforts. Thus, leading firms make an important distinction between technology development and product development. Technologies that are not ready continue to be developed in the technology base—they are not included in a product development. With technologically achievable requirements and commitment of sufficient resources to complete the development, programs are better able to deliver products at cost and on schedule. When knowledge lags, risks are introduced into the acquisition process that can result in cost overruns, schedule delays, and inconsistent product performance. As we recently testified, such problems, in turn, can reduce the buying power of the defense dollar, delay capabilities for the war fighter, force unplanned—and possibly unnecessary—trade-offs in desired acquisition quantities, and cause an adverse ripple effect among other weapon programs or defense needs. Moreover, as DOD moves more toward a system-of-systems approach—where systems are being designed to be highly interdependent and interoperable—it is exceedingly important that each individual program stay on track. Our past work has shown that space programs have not typically achieved a match between needs and resources before starting product development. Instead, product development was often started based on a rigid set of requirements and a hope that technology would develop on a schedule. 
At times, even more requirements were added after the program began. When technology did not perform as planned, adding resources in terms of time and money became the primary option for solving problems, since customer expectations about the products’ performance had already hardened. For example, after starting its Advanced Extremely High Frequency (AEHF) communications satellite program, DOD substantially and frequently changed requirements. In addition, after the launch failure of one of DOD’s legacy communications satellites, DOD decided to accelerate its plans to build AEHF satellites. The contractors proposed, and DOD accepted, a high-risk schedule that turned out to be overly optimistic and highly compressed, leaving little room for error and depending on a precise chain of events taking place at certain times. Moreover, at the time DOD decided to accelerate the program, it did not have the funding needed to support the activities and manpower needed to design and build the satellites more quickly. The effects of DOD’s inability to match needs to resources were significant. Total program cost estimates produced by the Air Force reflected an increase from $4.4 billion in January 1999 to $5.6 billion in June 2001—a difference of 26 percent. Although considered necessary, many changes to requirements were substantial, leading to cost increases of hundreds of millions of dollars because they required major design modifications. Also, schedule delays occurred when some events did not occur on time, and additional delays occurred when the program faced funding gaps. Scheduling delays eventually culminated in a 2-year delay in the launch of the first satellite. We also reported that there were still technical and production risks that needed to be overcome in the AEHF program, such as a less-than-mature satellite antenna system and complications associated with the production of the system’s information security system. 
Another example can be found with DOD's Space-Based Infrared System (SBIRS)-High program, which is focused on building high-orbiting satellites that can detect ballistic missile launches. Over time, costs have more than doubled for this program. Originally, total development costs for SBIRS-High were estimated at $1.8 billion. In the fall of 2001, DOD identified potential cost growth of $2 billion or more, triggering a mandatory review and recertification under 10 U.S.C. section 2433. Currently, the Air Force estimates research and development costs for SBIRS-High to be $4.4 billion. We reported that when DOD's SBIRS-High satellite program began in 1994, none of its critical technologies were mature. Moreover, according to a DOD-chartered independent review team, the complexity, schedule, and resources needed to develop SBIRS-High, in hindsight, were misunderstood when the program began. This led to an immature understanding of how requirements translated into detailed engineering solutions. We recently reported to this subcommittee that while the SBIRS restructuring implemented a number of needed management changes, the program continues to experience problems and risks related to changing requirements, design instability, and software development concerns. We concluded that if the Air Force continues to add new requirements and program content while prolonging efforts to resolve requirements that cannot be met, the program will remain at risk of not achieving, within schedule, its intended purposes—to provide an early warning and tracking system superior to that of its current ballistic missile detection system. DOD has also initiated several programs and spent several billion dollars over the past 2 decades to develop low-orbiting satellites that can track ballistic missiles throughout their flight. However, it has not launched a single satellite to provide this capability.
We have reported that a primary problem affecting these particular programs was that DOD and the Air Force did not relax rigid requirements to more closely match technical capabilities that were achievable. Program baselines were based on artificial time and/or money constraints. Over time, it became apparent that the lack of knowledge of program challenges had led to overly optimistic schedules and budgets that were funded at less than what was needed. Attempts to stay on schedule by approving critical milestones without meeting program criteria resulted in higher costs and more slips in technology development efforts. For example, our 1997 and 2001 reviews of DOD’s $1.7 billion SBIRS-Low program (which was originally a part of the SBIRS-High program) showed that the program would enter into the product development phase with critical technologies that were immature and with optimistic deployment schedules. Some of these technologies were so critical that SBIRS-Low would not be able to perform its mission if they were not available when needed. DOD eventually restructured the SBIRS-Low program because of the cost and scheduling problems, and it put the equipment it had partially built into storage. In view of the program’s mismatch between expectations and what it could achieve, the Congress directed DOD to restructure the program (now under the responsibility of the Missile Defense Agency) as a research and development effort. DOD’s new space acquisition policy may help increase insight into gaps between needs and resources, but it does not require programs to close this gap before starting product development. In other words, the new policy does not alter DOD’s practice of committing major investments before knowing what resources will be required to deliver promised capability. There are tools being adopted under the new policy that can enable DOD to better predict risks and estimate costs. Similar tools are also being adopted by other weapon system programs. 
For example:

- DOD is requiring that all space programs conduct technology maturity assessments before key oversight decisions.
- DOD is requiring space programs to more rigorously assess alternatives, consider how their systems will operate in the context of larger families of systems, and think through operational, technical, and system requirements before programs are started.
- The new policy seeks to improve the accuracy of cost estimates by establishing an independent cost estimating process in partnership with DOD's Cost Analysis Improvement Group (CAIG) and by adopting methodologies and tools used by the National Reconnaissance Office. To ensure timely cost analyses, the CAIG will augment its own staff with cost estimating personnel drawn from across the entire national security space cost estimating community.

Moreover, to facilitate faster decision-making on programs, the policy also calls for independent program assessments to be performed on space programs nearing key decision points. The teams performing these assessments are to be drawn from experts who are not directly affiliated with the program, and they are to spend about 8 weeks studying the program, particularly the acquisition strategy, contracting information, cost analyses, system engineering, and requirements. After this study, the team is to conclude its work with recommendations to the Under Secretary of the Air Force, as DOD's milestone decision authority for all DOD major defense acquisition programs for space, on whether or not to allow the program to proceed, typically using the traditional "red," "yellow," and "green" assessment ratings to indicate whether the program has satisfied key criteria in areas such as requirements setting, cost estimates, and risk reduction.
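A technology maturity assessment of the kind described above can be sketched as a simple gate check. This is an illustrative sketch only: the technology readiness level (TRL) scale is the readiness framework GAO commonly cites, and the threshold and level descriptions below are assumptions for illustration, not language from DOD's policy.

```python
# Illustrative maturity-gate sketch based on the commonly cited TRL scale.
# The threshold and descriptions are assumptions, not DOD policy text.
TRL_DESCRIPTIONS = {
    4: "components validated in a laboratory environment",
    5: "components validated in a relevant environment",
    6: "prototype demonstrated in a relevant environment",
    7: "prototype demonstrated in an operational environment",
}

MATURITY_THRESHOLD = 6  # "demonstrated in a relevant environment"

def ready_for_product_development(trl: int) -> bool:
    """Return True if a technology meets the assumed maturity gate."""
    return trl >= MATURITY_THRESHOLD

# A program whose critical technologies sit at TRL 4-5 would not pass the gate:
print(ready_for_product_development(5))  # False
print(ready_for_product_development(6))  # True
```

Under this kind of gate, a technology below the threshold would stay in the technology base rather than be carried into product development.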
The benefits that can be derived from the tools called for by the space acquisition policy, however, will be limited, since the policy allows programs to continue to develop technologies while they are designing the system and undertaking other product development activities. As illustrated below, this is an important departure from DOD's acquisition policy for other weapon systems. As we reported last week, the revised acquisition policy for non-space systems establishes mature technologies—that is, technologies demonstrated in a relevant environment—as a criterion for entering product development. By encouraging programs to mature technologies first, the policy puts programs in a better position to deliver capability to the war fighter in a timely fashion and within funding estimates because program managers can focus on the design, system integration, and manufacturing tasks needed to produce a product. By contrast, the space acquisition policy increases the risk that significant problems will be discovered late in development because programs are expected to go into development with many unknowns about technology. In fact, DOD officials stated that technologies may well enter product development at a stage where basic components have only been tested in a laboratory, or at an even lower level of maturity. This means that programs will still be grappling with the shapes and sizes of individual components while they are also trying to design the overall system and conduct other program activities. In essence, DOD will be concurrently building knowledge about technology and design—an approach with a problematic history that results in a cycle of changes, defects, and delays. Further, the consequences of problems experienced during development will be much greater for space programs since, under the new space acquisition policy, critical design review occurs at the same time as the commitment to build and deliver the first product to a customer.
It is thus possible that the design review will signify a greater commitment on a satellite program at the same time that less knowledge will be available to make that commitment. An upcoming decision by DOD on the new TSAT program illustrates the potential risks posed by the new space acquisition policy. The $12 billion program is scheduled to start product development in December 2003, meaning that the Air Force will formally commit to this investment and, as required by law, set goals on cost, schedule, and performance. However, at present, TSAT's critical technologies are underdeveloped, leaving the Air Force without the knowledge needed to build an effective business case for going forward with this massive investment. In fact, most of the technologies for TSAT are at a stage where the work performed so far has been based on analytical studies and a few laboratory tests or, at best, where some key components have been wired together and demonstrated to work together in a laboratory environment. The program does not yet know whether TSAT's key technologies can effectively work, let alone work together in the harsh space environment for which they are intended. Yet the space acquisition policy allows the Air Force to move the program forward and to set cost, schedule, and performance goals in the face of these unknowns. Moreover, the Air Force has scaled back its AEHF program, whose technologies are more mature, to help pay for TSAT's development. Making tradeoff decisions between alternative investments is difficult at best. Yet doing so without a solid knowledge basis only compounds the risk of failures. Our work on program after program has demonstrated that DOD's optimism has rarely been justified. The growing importance of space systems to military and civil operations requires DOD to achieve timely delivery of high-quality capability.
New space systems not only need to support important missions such as missile defense and reconnaissance, they need to help DOD move toward a more "network-centric" warfighting approach. At the same time, given its desire to transform how military operations are conducted, DOD must find ways to optimize its overall investment in weapon systems, since the transformation will require DOD to develop new cutting-edge systems while concurrently maintaining and operating legacy systems—a costly proposition. Recognizing the need to optimize its investment, DOD has expressed a desire to move toward an "effects-based" investment approach, where decisions to acquire new systems are made based on needs and joint interests versus annual budgets and parochial interests. Changing the new space acquisition policy to clearly separate technology development from product development is an essential first step toward optimizing DOD's space investment and assuring more timely delivery of capability, since it enables a program to align customer expectations with resources and therefore minimize problems that could hurt a program in its design and production phases. Thus, we recommended that DOD make this change in our recent report on the new space acquisition policy. DOD did not agree with our recommendation because it believed that it needs to keep up with the fast-paced development of advanced technologies for space systems and that its policy provides the best avenue for doing so. In fact, it is DOD's long-standing and continuous inability to bring the benefits of technology to the war fighter in a timely manner that underlies our concerns about the policy for space acquisitions.
In our reviews of numerous DOD programs, including many satellite developments, it has been clear that committing to major investments in design, engineering, and manufacturing capacity without knowing whether a technology is mature and what resources are needed to ensure that the technology can be incorporated into a weapon system has consistently resulted in more money, time, and talent being spent than was promised, planned for, or necessary. The impact of such high-risk decisions has also had a damaging effect on military capability, as other programs are taxed to meet unplanned cost increases and product units are often cut because unit costs increase and funds run out. Moreover, as it moves toward a more interdependent environment, DOD can simply no longer afford to misestimate the cost and time to field capabilities—such as TSAT—since they are needed to support other applications. Further, policy changes are just a first step toward optimizing DOD's investment in space and other weapon systems. There are also some changes that need to be made at a corporate level to foster a knowledge-based acquisition approach. As we have reported in the past, DOD needs to remove incentives that drive premature product development decisions. This means embracing a willingness to invest in technology development outside a program as well as alleviating pressures to get new acquisition programs approved and funded on the basis of requirements that must beat out all other alternatives. Other changes—some of which have been recognized by recent DOD studies on space acquisitions—include:

- Keeping key people in place long enough so that they can affect decisions and be held accountable. Part of the solution would be to shorten product development times.
- Providing program offices with the capability needed to craft acquisition approaches that implement policy and to effectively oversee the execution of programs by contractors.
- Realigning responsibilities and funding between science and technology organizations and acquisition organizations to enable the separation of technology development from product development.
- Bringing discipline to the requirements-setting process by demanding a match between requirements and resources.
- Designing and implementing test programs that deliver knowledge when needed, including reliability testing early in design.

Lastly, DOD leadership can use this knowledge-based approach to effectively rebalance its investment portfolio. For programs whose original justification was based on assumptions of cost, schedule, and performance that have not been realized, having a consistent set of standards allows DOD and the Congress to reevaluate alternatives and make investment decisions across programs that increase the likelihood that the war fighter will have the best possible mix of capabilities in a timely fashion. In conclusion, using an approach for managing weapon system investments based on knowledge instead of promises can help DOD fully leverage the value of its investment dollars. At a time when the nation is facing a large and growing fiscal gap, DOD's $150 billion annual investment in the acquisition of new weapons is the single largest area of discretionary spending. While there are differing views on what weapons DOD should or should not invest in and how much should be invested, there cannot be any disagreement that within this fiscal environment, once a consensus has been reached on the level of investment and the specific weapons to be acquired, we should get those weapons for what was estimated in the budget. While DOD's revised acquisition policy for non-space systems puts DOD on a better footing toward this end, DOD's acquisition policy for space systems does not, because it allows programs to proceed into product development before knowing what their true costs will be.
Therefore, we continue to recommend that DOD modify its policy to separate technology development from product development so that needs can be matched with available technology, time, and money at the start of a new development program. Mr. Chairman and Members of the Subcommittee, this concludes my statement. I would be happy to respond to any questions that you or other members of the Subcommittee may have. In preparing for this testimony, we relied on previously issued GAO reports on DOD's space acquisition policy, common problems affecting space acquisitions, SBIRS-High and other individual programs, as well as our reports on best practices for weapon systems development. We also analyzed DOD's Future Years Defense Program to assess investment trends. In addition, we reviewed DOD reports on satellite acquisition problems. We conducted our review between October 29 and November 14, 2003, in accordance with generally accepted government auditing standards. For further information, please contact Katherine Schinasi or Bob Levin at (202) 512-4841 or by email at schinasik@gao.gov or levinr@gao.gov. Individuals making key contributions to this testimony include Cristina Chaplain, Jean Harker, and Art Gallegos. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

The Department of Defense is spending nearly $18 billion annually to develop, acquire, and operate satellites and other space-related systems. The majority of satellite programs that GAO has reviewed over the past 2 decades experienced problems that increased costs, delayed schedules, and increased performance risk.
In some cases, capabilities have not been delivered to the warfighter after decades of development. DOD has recently implemented a new acquisition policy, which sets the stage for decision making on individual space programs. GAO was asked to testify on its assessment of the new policy. Similar to all weapon system programs, we have found that the problems being experienced on space programs are largely rooted in a failure to match the customer's needs with the developer's resources—technical knowledge, timing, and funding—when starting product development. In other words, commitments were made to satellite launch dates, cost estimates, and delivering certain capabilities without knowing whether technologies being pursued could really work as intended. Time and costs were consistently underestimated. DOD has recognized this problem and recently revised its acquisition policy for non-space systems to ensure that requirements can be matched to resources at the time a product development starts. The space community, however, in its newly issued policy for space systems, has taken another approach. As currently written, and from our discussions with DOD officials about how it will be implemented, the policy will not bring about the most important change: separating technology development from product development to ensure that a match is made between needs and resources. Instead, it allows major investment commitments to be made with unknowns about technology readiness, requirements, and funding. By not changing its current practice, DOD will likely perpetuate problems within individual programs that require more time and money to address than anticipated. More important, over the long run, the extra investment required to address these problems will likely prevent DOD from pursuing more advanced capabilities and from making effective tradeoff decisions between space and other weapon system programs.
Amtrak was created by the Rail Passenger Service Act of 1970 to operate and revitalize intercity passenger rail service. Prior to its creation, intercity passenger rail service was provided by private railroads, which had continually lost money, especially after World War II. The Congress gave Amtrak specific goals, including providing modern, efficient intercity passenger service; helping to alleviate the overcrowding of airports, airways, and highways; and giving Americans an alternative to automobiles and airplanes to meet their transportation needs. Through fiscal year 1997, the federal government has invested over $19 billion in Amtrak. (Appendix I shows federal appropriations for Amtrak since fiscal year 1988.) In response to continually growing losses and a widening gap between operating deficits and federal operating subsidies, Amtrak developed its Strategic Business Plan. This plan (which has been revised several times) was designed to increase revenues and control cost growth and, at the same time, eliminate Amtrak’s need for federal operating subsidies by 2002. Amtrak also restructured its organization into strategic business units: the Northeast Corridor Unit, which is responsible for operations on the East Coast between Virginia and Vermont; Amtrak West, for operations on the West Coast; and the Intercity Unit, for all other service, including most long-distance, cross-country trains. Amtrak is still in a financial crisis despite the fact that its financial performance (as measured by net losses) has improved over the last 2 years. At the end of fiscal year 1994, Amtrak’s net loss was about $1.1 billion (in 1996 dollars). This loss was $873 million if the one-time charge of $255 million, taken in fiscal year 1994 for accounting changes, restructuring costs, and other items, is excluded. By the end of fiscal year 1996, this loss had declined to about $764 million. 
However, the relative gap between total revenues and expenses has not significantly closed, and passenger revenues (adjusted for inflation)—which Amtrak has been relying on to help close the gap—have generally declined over the past several years (see apps. II and III). More importantly, the gap between operating deficits and federal operating subsidies has again begun to grow. Amtrak continues to be heavily dependent on federal operating subsidies to make ends meet. Although operating deficits have declined, they have not gone down at the same rate as federal operating subsidies (see app. IV). At the end of fiscal year 1994, the gap between Amtrak's operating deficit and federal operating subsidies was $75 million. At the end of fiscal year 1996, the gap had increased to $82 million. Over this same time, federal operating subsidies went from $502.2 million to $405 million. Amtrak's continuing financial crisis can be seen in other measures as well. In February 1995, we reported that Amtrak's working capital—the difference between current assets and current liabilities—declined between fiscal years 1987 and 1994. Although Amtrak's working capital position improved in fiscal year 1995, it declined again in fiscal year 1996 to a $195 million deficit (see app. V). This decline reflects an increase in accounts payable, short-term debt, and capital lease obligations, among other items. As we noted in our 1995 report, a continued decline in working capital jeopardizes Amtrak's ability to pay immediate expenses. Amtrak's debt levels have also increased significantly (see app. VI). Between fiscal years 1993 and 1996, Amtrak's debt and capital lease obligations increased about $460 million—from about $527 million to about $987 million (in 1996 dollars). According to Amtrak, this increase was used to finance the delivery of new locomotives and Superliner and Viewliner cars—a total of 28 locomotives and 245 cars delivered between fiscal years 1994 and 1996.
These debt levels do not include an additional $1 billion expected to be incurred to finance 18 high-speed trainsets due to begin arriving in fiscal year 1999 and related maintenance facilities for the Northeast Corridor (at about $800 million), as well as the acquisition of 98 new locomotives (at about $250 million). It is important to note that Amtrak's increased debt levels could limit the use of federal operating support to cover future operating deficits. As Amtrak's debt levels have increased, there has also been a significant increase in the interest expenses that Amtrak has incurred on this debt (see app. VII). In fact, over the last 4 years, interest expenses have about tripled—from about $20.6 million in fiscal year 1993 to about $60.2 million in fiscal year 1996. This increase has absorbed more of the federal operating subsidies each year because Amtrak pays interest from federal operating assistance and principal from federal capital grants. Between fiscal years 1993 and 1996, the percentage of federal operating subsidies accounted for by interest expense increased from about 6 to about 21 percent. As Amtrak assumes more debt to acquire equipment, interest payments are likely to continue to consume an increasing portion of federal operating subsidies. The implementation of the strategic business plans appears to have helped Amtrak's financial performance—as evidenced by the reduction in net losses between fiscal years 1994 and 1996 (from about $873 million to about $764 million). As we reported in July 1996, about $170 million in cost reductions came in fiscal year 1995 from reducing some routes and services, cutting management positions, and raising fares. Amtrak projected that these actions would reduce future net losses by about $315 million annually once they were in place. The net loss was reduced in fiscal year 1996 as total revenues increased more than total expenses did.
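The debt and interest figures cited above are internally consistent, as a quick arithmetic check shows. The sketch below uses only the dollar amounts stated in the text (millions of dollars, 1996 dollars).

```python
# Consistency check of the debt and interest figures cited in the report
# (dollar amounts in millions; 1996 dollars).
debt_fy93, debt_fy96 = 527.0, 987.0          # debt plus capital lease obligations
interest_fy93, interest_fy96 = 20.6, 60.2    # annual interest expense

debt_growth = debt_fy96 - debt_fy93                 # $460 million, as stated
interest_multiple = interest_fy96 / interest_fy93   # ~2.9x, i.e., "about tripled"

print(f"debt growth: ${debt_growth:.0f} million")
print(f"interest expense multiple: {interest_multiple:.1f}x")
```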
In contrast, Amtrak estimates that its net loss in fiscal year 1996 would have been about $1.1 billion if no actions had been taken to address its financial crisis in 1994. Although the strategic business plans have helped reduce the net losses, targets for these losses have often been missed. To illustrate, Amtrak’s plans for fiscal years 1995 and 1996 included actions to reduce the net losses by $195 million—from about $834 million in 1994 (in current year dollars) to $639 million in fiscal year 1996. This reduction was to be accomplished, in part, by increasing revenues $191 million while holding expenses at about the 1994 level. However, actual net losses for this period totaled about $1.572 billion, or about $127 million more than the $1.445 billion Amtrak had planned. This difference was primarily due to the severe winter weather in fiscal year 1996—a contingency that Amtrak had not planned for and one that added about $29 million to expenses—and the unsuccessful implementation of various elements of the fiscal year 1996 business plan. For example, many of the productivity improvements (such as reducing the size of train crews) that Amtrak had planned in fiscal year 1996 were not achieved. As a result, cost savings fell short of Amtrak’s $108 million target by about $60 million. As we reported in July 1996, Amtrak has made little progress in negotiating new productivity improvements with its labor unions. For fiscal year 1997, as a result of higher than anticipated losses and an expected accounting adjustment, Amtrak planned for a net loss of $726 million. However, after the first quarter of operations, revenues were below target, and although expenses were lower than expected, the operating deficit was almost $4 million more than planned for that quarter. Furthermore, fiscal year 1997 financial results will be affected by the postponement of route and service adjustments planned for November 1996. 
Amtrak estimates that postponing these adjustments will bring a net revenue reduction of $6.9 million and a net cost increase of $29.2 million. Part of this increased cost will be offset by an additional federal operating grant of $22.5 million made to keep these routes operating. In part as a result of these increased costs, Amtrak revised its planned fiscal year 1997 net loss upward to $762 million from the originally projected $726 million. Even that might not be achieved. As a result of additional unanticipated expenses and revenue shortfalls, Amtrak projects its actual fiscal year 1997 year-end net loss could be about $786 million. Amtrak's projected fiscal year 1997 financial results may also affect its cash flow and the need to borrow money to make ends meet. For example, in January 1997, Amtrak projected a cash flow deficit of about $96 million at the end of fiscal year 1997—about $30 million more than planned. This deficit may require Amtrak to begin borrowing as early as March 1997 to pay its bills. Moreover, the cash flow deficit may be even larger than projected if Amtrak does not receive anticipated revenues from the sale of property ($16 million) and cost savings from lower electric power prices in the Northeast Corridor ($20.5 million). Amtrak's fiscal year 1998 projected year-end cash balance is also bleak. On the basis of current projections, Amtrak estimates that it may have to borrow up to $148 million next year. Amtrak currently has short-term lines of credit of $150 million. Amtrak's need for capital funds remains high. We reported in June 1996 that Amtrak will need billions of dollars to address its capital needs, such as bringing the Northeast Corridor up to a state of good repair. This situation largely continues today.
In May 1996, the Federal Railroad Administration (FRA) and Amtrak estimated that about $2 billion would be needed over the next 3 to 5 years to recapitalize the south end of the corridor and preserve its ability to operate in the near term at existing service levels. This renovation would include making improvements in the North and East river tunnels serving New York City and restoring the system that provides electric power to the corridor. This system, with equipment designed to last 40 to 50 years, is now between 60 and 80 years old, and, according to FRA and Amtrak, has gotten to the point at which it no longer allows Amtrak and others to provide reliable high-speed and commuter service. FRA and Amtrak believe that this capital investment of about $2 billion would help reverse the trend of adding time to published schedules because of poor on-time performance. Over the next 20 years, FRA and Amtrak estimate, up to $6.7 billion may be needed to recapitalize the corridor and make improvements targeted to respond to high-priority growth opportunities. A significant capital investment will also be required for other projects as well. For example, additional capital assistance will be required to introduce high-speed rail service between New York and Boston. In 1992, the Amtrak Authorization and Development Act directed that a plan be developed for regularly scheduled passenger rail service between New York and Boston in 3 hours or less. Currently, such trips take, on average, about 4-1/2 hours. Significant rehabilitation of the existing infrastructure as well as electrification of the line north of New Haven, Connecticut, will be required to accomplish this goal. According to Amtrak, since fiscal year 1991 the federal government has invested about $900 million in the high-speed rail program, and an additional $1.4 billion will be required to complete the project. 
A significant capital investment will also be required to acquire new equipment and overhaul existing equipment. Amtrak plans to spend about $1.7 billion over the next 6 years for these purposes. We reported in July 1996 and February 1995 on Amtrak’s need for capital investments and some of the problems being experienced as a result. We noted the additional costs of maintaining an aging fleet, the backlogs and funding shortages that were plaguing Amtrak’s equipment overhaul program, and the need for substantial capital improvements and modernization at maintenance and overhaul facilities. We also commented on the shrinking availability of federal funds to meet new capital investment needs. Our ongoing work, the results of which we expect to report later this year, is looking at these issues. The preliminary results of our work indicate that Amtrak has made some progress in addressing capital needs, but the going has been slow, and in some cases Amtrak may be facing significant future costs. For example, we reported in February 1995 that about 31 percent of Amtrak’s passenger car fleet was beyond its useful life—estimated at between 25 and 30 years—and that 23 percent of the fleet was made up of Heritage cars (cars that Amtrak obtained in 1971 from other railroads) that averaged over 40 years old. Since our report, the average age of the passenger car fleet has declined from 22.4 years old (in fiscal year 1994) to 20.7 years old (at the end of fiscal year 1996), and the number of Heritage cars has declined from 437 to 246. This drop is significant because Heritage cars, as a result of their age, were subject to frequent failures, and their downtime for repair was about 3 times longer than for other types of cars. However, these trends may be masking substantial future costs to maintain the fleet. 
In October 1996, about 53 percent of the cars in Amtrak’s active fleet of 1,600 passenger cars averaged 20 years old or more and were at or approaching the end of their useful life (see app. VIII). It is safe to assume that as this equipment continues to age, it will be subject to more frequent failures and require more expensive repairs. Our ongoing work also shows that the portion of Amtrak’s federal capital grant available to replace assets has continued to shrink. In February 1995, we reported that an increasing portion of the capital grant was being devoted to debt service, overhauls of existing equipment, and legally mandated uses, such as equipment modifications and environmental cleanup. In fiscal year 1994, only about $54 million of Amtrak’s federal capital grant of $195 million was available to purchase new equipment and meet other capital investment needs. Since our report, although the portion of the capital grant available to meet general capital investment needs increased in fiscal years 1995 and 1996, it shrank in fiscal year 1997 (see app. IX). In fiscal year 1997, only $12 million of the capital grant of $223 million is expected to be available for general capital needs. The rest will be devoted to debt service ($75 million), overhauls of existing equipment ($110 million), or legally mandated work ($26 million). It is likely that as Amtrak assumes increased debt (including capital lease obligations) to acquire equipment and as the number of cars in Amtrak’s fleet that exceed their useful life increases, even less of Amtrak’s future capital grants will be available to meet capital investment needs. Achieving operating self-sufficiency by 2002 will be difficult for Amtrak given the environment within which it operates. Amtrak is relying heavily on capital investment to support its goal of eliminating federal operating subsidies. 
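The fiscal year 1997 grant breakdown above is simple subtraction; the following snippet is only an illustrative check of the figures quoted in the text (all amounts in millions of dollars), not GAO's analysis:

```python
# Fiscal year 1997 federal capital grant figures cited in the text,
# in millions of dollars.
grant_total = 223
debt_service = 75
equipment_overhauls = 110
legally_mandated = 26

# Remainder available for general capital investment needs.
available = grant_total - (debt_service + equipment_overhauls + legally_mandated)
print(available)  # 12, consistent with the roughly $12 million cited above
```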
For example, Amtrak’s draft fiscal year 1997-2002 Strategic Capital Plan indicates that about $830 million worth of actions needed to close gaps in the operating budget through 2002 is directly linked to capital investments. To support these actions, Amtrak anticipates significantly increased federal capital assistance—about $750 million to $800 million per year. In comparison, in fiscal year 1997, Amtrak received federal capital funding of $478 million. Amtrak would like this increased assistance to be provided from a dedicated funding source. Given today’s budget environment, it may be difficult to obtain this degree of increased federal funding. In addition, providing funds from a dedicated source—such as the federal Highway Trust Fund—may not give Amtrak as much money as it expects. Historically, spending for programs financed by this Trust Fund, such as the federal-aid highway program, has generally been constrained by limiting the total amount of funds that can be obligated in a given year. Amtrak is also subject to the competitive and economic environment within which it operates. We reported in February 1995 that competitive pressures had limited Amtrak’s ability to increase revenues by raising fares. Fares were constrained, in part, by lower fares on airlines and intercity buses. From fiscal year 1994 to fiscal year 1996, Amtrak’s yield (revenue per passenger mile) increased about 24 percent, from 15.4 cents per passenger mile to about 19.1 cents. In comparison, between 1994 and 1995, airline yields declined slightly, intercity bus yields increased 18 percent, and the real price of unleaded regular gasoline increased a little less than 1 percent. However, it appears that Amtrak’s ability to increase revenues through fare increases has come at the expense of ridership, the number of passenger miles, and the passenger miles per seat-mile (load factor). Between fiscal years 1994 and 1996, all three declined. 
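The roughly 24 percent yield increase cited above can be verified with back-of-the-envelope arithmetic; this is an illustration using only the cents-per-passenger-mile figures from the text:

```python
# Amtrak yield (revenue per passenger mile), in cents, as cited in the text.
yield_fy1994 = 15.4
yield_fy1996 = 19.1

# Percentage increase from fiscal year 1994 to fiscal year 1996.
pct_increase = (yield_fy1996 - yield_fy1994) / yield_fy1994 * 100
print(round(pct_increase))  # about 24 percent, as stated above
```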
Such trade-offs in the future could limit further increases in Amtrak’s yield and ultimately revenue growth. Finally, Amtrak will continue to find it difficult to take the actions necessary to further reduce costs. These include making the route and service adjustments necessary to save money and collectively bargaining cost-saving productivity improvements with its employees. During fiscal year 1995, Amtrak was successful in reducing and eliminating some routes and services. For example, on seven routes Amtrak reduced the frequency of service from daily to 3 or 4 times per week, and on nine other routes various segments were eliminated. Amtrak estimates that such actions saved about $54 million. Amtrak was less successful in making route and service adjustments planned for fiscal year 1997 and estimates that its failure to take these actions will increase its projected fiscal year 1997 loss by $13.5 million. Amtrak has also been unsuccessful in negotiating productivity improvements with labor unions. Such improvements were expected to save about $26 million in fiscal year 1995 and $19 million in fiscal year 1996. According to an Amtrak official, over the last 2 years Amtrak has not pursued negotiations for productivity improvements. Amtrak’s financial future has been staked on the ability to eliminate federal operating support by 2002 by increasing revenues, controlling costs, and providing customers with high-quality service. Although the business plans have helped reduce net losses, Amtrak continues to face significant challenges in accomplishing this goal, and it is likely that Amtrak will continue to require federal financial support—both operating and capital—well into the future. Madam Chairwoman, this concludes my testimony. I would be happy to respond to any questions that you or Members of the Subcommittee may have. The appropriations for fiscal year 1993 include $20 million in supplemental operating funds and $25 million for capital requirements. 
The appropriations for fiscal year 1997 include $22.5 million in supplemental operating funds and $60 million for the Northeast Corridor Improvement Program. For fiscal year 1997, an additional $80 million was appropriated to Amtrak for high-speed rail. Amounts are in current year dollars. In 1996 dollars, working capital declined from $149 million in fiscal year 1987 to a deficit of $195 million in fiscal year 1996. [Figure: composition of Amtrak's active passenger car fleet by type, showing average age and, where recoverable, share of the fleet — Horizon (7.1 years, 6%), Superliner II (1.5 years), Turboliner (21.0 years, 3%), Capitoliner (29.8 years, 1%), Viewliners (0.9 years, 2%), Amfleet I (20.9 years), Heritage Passenger (43.0 years), Baggage/Autocarrier (39.7 years), Superliner I (16.7 years), and Amfleet II (14.7 years, 8%).] The age of the baggage and autocarrier cars is a weighted average. Amounts for fiscal year 1997 are estimated. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015, or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000, by faxing (301) 258-4066, or by TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists. 
GAO discussed preliminary information from its ongoing work looking at Amtrak's progress in achieving operating self sufficiency, focusing on: (1) Amtrak's financial condition and progress toward self-sufficiency; (2) Amtrak's need for, and use of, capital funds; and (3) some of the factors that will play a role in Amtrak's future viability. GAO noted that: (1) Amtrak's financial condition is still very precarious and heavily dependent on federal operating and capital funds; (2) in response to its deteriorating financial condition, in 1995 and 1996 Amtrak developed strategic business plans designed to increase revenues and reduce cost growth; (3) however, GAO has found that, in the past 2 years, passenger revenues, adjusted for inflation, have generally declined, and in fiscal year (FY) 1996, the gap between operating deficits and federal operating subsidies began to grow again to levels exceeding that of FY 1994, when the continuation of Amtrak's nationwide passenger rail service was severely threatened; (4) at the end of FY 1996, the gap between the operating deficit and federal operating subsidies was $82 million; (5) capital investment continues to play a critical role in supporting Amtrak's business plans and ultimately in maintaining Amtrak's viability; (6) such investment will not only help Amtrak retain revenues by improving the quality of its service but will be important in facilitating the revenue growth predicted in the business plans; (7) in 1995 and 1996, GAO reported that Amtrak faced significant capital investment needs to, among other things, bring its equipment and facilities systemwide and its tracks in the Northeast Corridor into a state of good repair and to introduce high-speed rail service between Washington and Boston; (8) Amtrak will need billions of dollars in capital investment for these and other projects; (9) it will be difficult for Amtrak to achieve operating self-sufficiency by 2002 given the environment within which it operates; 
(10) Amtrak is relying heavily on capital investment to support its business plans, which envision a significant increase in capital funding support--possibly from a dedicated funding source, such as the Highway Trust Fund; (11) while such a source may offer the potential for steady, reliable funding, the current budget environment may limit the amount of funds actually made available to Amtrak; (12) Amtrak is also relying greatly on revenue growth and cost containment to achieve its goal of eliminating federal operating support; and (13) the economic and competitive environment within which Amtrak operates may limit revenue growth, and Amtrak will continue to find it difficult to take those actions necessary, such as route and service adjustments, to reduce costs.