MISSILE DEFENSE: Review of Results and Limitations of an Early National Missile Defense Flight Test, Part II



The April 1, 1998, addendum also disclosed that the August 13 and August 22 reports, in which TRW conveyed that its software successfully distinguished the mock warhead from decoys, were based on tests of the software using about one-third of the target signals collected during Integrated Flight Test 1A. We talked to TRW officials, who told us that Boeing provided several data sets to TRW, including the full data set. The officials said that Boeing provided target signals from the entire timeline to a TRW office that was developing a prototype version of the exoatmospheric kill vehicle's tracking, fusion, and discrimination software,7 which was not yet operational. However, TRW representatives said that the test bed version of the software that TRW was using, so that it could submit its analysis within 60 days of Integrated Flight Test 1A, could not process the full data set. The officials said that shortly before the August 22 report was issued, the prototype version of the tracking, fusion, and discrimination software became functional and engineers were able to use it to assess the expanded set of target signals. According to the officials, this assessment also resulted in the software's selecting the mock warhead as the most likely target. In our review of the August 22 report, we found no analysis of the expanded set of target signals. The April 1, 1998, report did include an analysis of a few additional seconds of data collected near the end of Integrated Flight Test 1A, but it did not include an analysis of target signals collected at the beginning of the flight.

7 The purpose of TRW's tracking, fusion, and discrimination software, which was being designed to operate on board Boeing's exoatmospheric kill vehicle, was to record the positions of the target objects as they moved through space, fuse information about the objects collected by ground-based radar with data collected by the kill vehicle's infrared sensor, and discriminate the warhead from decoys. The software's tracking function was not operational when the project office asked the contractors to determine the software's ability to discriminate. As a result, Boeing hand-tracked the target objects so that TRW could use test bed discrimination software, which is almost identical to the discrimination portion of the operational version of the tracking, fusion, and discrimination software, to assess the discrimination capability.

Most of the signals that were excluded from TRW's discrimination analysis were collected during the early part of the flight, when the sensor's temperature was fluctuating. TRW told us that its software was designed to drop a target object's track if the tracking portion of the software received no data updates for a defined period. This design feature was meant to reduce false tracks that the software might establish if the sensor detected targets where there were none. In Integrated Flight Test 1A, the fluctuation of the sensor's temperature caused the loss of target signals. TRW engineers said that Boeing recognized that this interruption would cause TRW's software to stop tracking all target objects and restart the discrimination process. Therefore, Boeing focused its efforts on processing those target signals that were collected after the sensor's temperature stabilized and signals were collected continuously.8

8 When the Ground Based Interceptor Project Management Office asked Boeing to assess the discrimination capability of its sensor's software, TRW's prototype tracking, fusion, and discrimination software was not operational. To perform the requested assessment, TRW used test-bed discrimination software that was almost identical to the discrimination software that TRW engineers designed for the prototype tracking, fusion, and discrimination software. Because the test-bed software did not have the ability to track targets, Boeing performed the tracking function and provided the tracked signals to TRW.
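The track-drop rule described above is, in essence, a coast-time limit: if a track receives no new detections for a defined period, the track is deleted so that spurious detections do not persist as false tracks. The sketch below is an illustration of that kind of rule only, not TRW's code; the names and the drop threshold are assumed values chosen for the example. It shows why an interruption in the data stream, such as the one caused by the sensor's fluctuating temperature, empties the track file and forces discrimination to restart.

    # Illustrative sketch only: a simple track manager with a coast-time limit.
    # MAX_COAST_SECONDS is an assumed value; the actual threshold is not public.
    MAX_COAST_SECONDS = 2.0

    class Track:
        def __init__(self, track_id, time, measurement):
            self.track_id = track_id
            self.last_update = time
            self.history = [(time, measurement)]

    class TrackManager:
        def __init__(self):
            self.tracks = {}

        def update(self, time, detections):
            """Associate detections with tracks (trivially, by object id here),
            then drop any track that has not been updated recently."""
            for obj_id, measurement in detections.items():
                track = self.tracks.get(obj_id)
                if track is None:
                    self.tracks[obj_id] = Track(obj_id, time, measurement)
                else:
                    track.last_update = time
                    track.history.append((time, measurement))
            # Drop stale tracks: this reduces false tracks but also erases all
            # tracks when the incoming data stream is interrupted.
            stale = [tid for tid, t in self.tracks.items()
                     if time - t.last_update > MAX_COAST_SECONDS]
            for tid in stale:
                del self.tracks[tid]

    # Example: a gap in updates longer than the coast limit drops every track.
    mgr = TrackManager()
    mgr.update(0.0, {"object_1": (1.0, 2.0), "object_2": (1.5, 2.5)})
    mgr.update(3.5, {})          # no detections for longer than MAX_COAST_SECONDS
    print(len(mgr.tracks))       # 0: all tracks dropped; discrimination must restart

Under such a rule, a gap longer than the coast limit, whatever its cause, erases the existing tracks, and any discrimination built on those tracks must begin again once signals resume.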
Some signals collected during the last seconds of the sensor's flight were also excluded. The former TRW employee alleged that these latter signals were excluded because during this time a decoy was selected as the target. The Phase One Engineering Team cited one explanation for the exclusion of the signals: the team said that TRW stopped using data when objects began leaving the sensor's field of view. Our review did not confirm this explanation. We reviewed the target intensities derived from the infrared frames covering that period and found that several seconds of data were excluded before objects began to leave the field of view. Boeing officials gave us another explanation. They said that target signals collected during the last few seconds of the flight were streaking, or blurring, because the sensor was viewing the target objects as it flew by them. Boeing told us that streaking would not occur in an intercept flight because the kill vehicle would have continued to approach the target objects.

We could not confirm that the test of TRW's discrimination software, as explained in the August 22, 1997, report, included all target signals that did not streak. We noted that the April 1, 1998, addendum shows that TRW analyzed several more seconds of target signals than is shown in the August 22, 1997, report. It was in these additional seconds that the software began to increase the rank of one decoy as it assessed which target object was most likely the mock warhead. However, the April 1, 1998, addendum also shows that even though the decoy's rank increased, the software continued to rank the mock warhead as the most likely target. But because not all of the Integrated Flight Test 1A timeline was presented in the April 1 addendum, we could not determine whether any portion of the excluded timeline might have contained useful data and, if it did, whether a target object other than the mock warhead might have been ranked as the most likely target.

Corresponding Portions of Reference Data Excluded

The April 1 addendum also documented that portions of the reference data developed for Integrated Flight Test 1A were also excluded from the discrimination analysis. Nichols and project office officials told us the software identifies the various target objects by comparing the target signals collected from each object at a given point in their flight to the target signals it expects each object to display at that same point in the flight. Therefore, when target signals collected during a portion of the flight timeline are excluded, reference data developed for the same portion of the timeline must be excluded.
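The implication is that the observed data and the reference data must be trimmed with the same timeline window; otherwise the software would be comparing measured signals from one part of the flight against expected signals from another. The short sketch below illustrates that bookkeeping; it is a simplified illustration of the principle, not TRW's implementation, and the variable names and timeline values are assumptions made for the example.

    # Illustrative sketch: apply one timeline window to both the collected signals
    # and the reference (expected) signals so comparisons stay time-aligned.
    def mask_timeline(samples, start, end):
        """Keep only samples whose time stamp falls inside [start, end]."""
        return {t: value for t, value in samples.items() if start <= t <= end}

    # Hypothetical data keyed by seconds after data acquisition began.
    collected = {t: f"observed signal at t={t}" for t in range(0, 60)}
    reference = {t: f"expected signal at t={t}" for t in range(0, 60)}

    # If the analysis excludes an early, disturbed portion of the flight (an
    # assumed 20 seconds here), the same window must be cut from the reference data.
    analysis_start, analysis_end = 20, 54
    collected_used = mask_timeline(collected, analysis_start, analysis_end)
    reference_used = mask_timeline(reference, analysis_start, analysis_end)
    assert collected_used.keys() == reference_used.keys()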
Information Provided Verbally to Project Office

Officials in the National Missile Defense Joint Program Office's Ground Based Interceptor Project Management Office and Nichols Research told us that soon after Integrated Flight Test 1A the contractors orally disclosed all of the problems and limitations cited in the December 11, 1997, briefing and the April 1, 1998, addendum. Contractors made these disclosures to project office and Nichols Research officials during meetings held to review Integrated Flight Test 1A results sometime in late August 1997. The project office and contractors could not, however, provide us with documentation of these disclosures. The current Ground Based Interceptor Project Management Office deputy manager said that the problems that contractors discussed with his office were not specifically communicated to others within the Department of Defense because his office was the office within the Department responsible for the Boeing contract. The project office's assessment was that these problems did not compromise the reported success of the mission, were similar in nature to problems normally found in initial developmental tests, and could be easily corrected.

Effect of Cooling Failure on Sensor's Performance

Because we questioned whether Boeing's sensor could collect any usable target signals if the silicon detector array was not cooled to the desired temperature, we hired sensor experts at Utah State University's Space Dynamics Laboratory to determine the extent to which the sub-optimal cooling degraded the sensor's performance. These experts concluded that the higher temperature of the silicon detectors degraded the sensor's performance in a number of ways, but did not result in extreme degradation. For example, the experts said the higher temperature reduced by approximately 7 percent the distance at which the sensor could detect targets. The experts also said that the rapid temperature fluctuation at the beginning and at the end of data acquisition contributed to the number of times that the sensor detected a false target. However, the experts said the major cause of the false alarms was the power supply noise that contaminated the electrical signals generated by the sensor in response to the infrared energy. When the sensor signals were processed after Integrated Flight Test 1A, the noise appeared as objects, but these were actually false alarms.

Additionally, the experts said that the precision with which the sensor could estimate, from the electrical signal it produced, the infrared energy emanating from an object was especially degraded in one of the sensor's two infrared wave bands. In their report, the experts said that the Massachusetts Institute of Technology's Lincoln Laboratory analyzed the precision with which the Boeing sensor could measure infrared radiation and found large errors in measurement accuracy. The Utah State experts said that their determination that the sensor's measurement capability was degraded in one infrared wave band might partially explain the errors found by Lincoln Laboratory. Although Boeing's sensor did not cool to the desired temperature during Integrated Flight Test 1A, the experts found that an obstruction in gas flow rather than the sensor's design was at fault. These experts said the sensor's cooling mechanism was properly designed and Boeing's sensor design was sound.
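The roughly 7 percent loss of detection range reported by the experts is consistent with how a modest rise in detector noise propagates to range for a point target: the infrared irradiance reaching the sensor falls off with the square of range, so detection range scales with the square root of the available signal-to-noise ratio. The short calculation below is an illustration of that scaling only, not the Utah State experts' analysis; the assumed noise increase is an invented figure chosen to show how a number near 7 percent can arise.

    # Illustrative scaling only: irradiance from a point target ~ intensity / range**2,
    # and the minimum detectable irradiance rises with detector noise, so
    # detection_range ~ sqrt(target_intensity / noise_equivalent_irradiance).
    def detection_range(target_intensity, noise_equivalent_irradiance):
        return (target_intensity / noise_equivalent_irradiance) ** 0.5

    nominal_nei = 1.0      # arbitrary units
    degraded_nei = 1.15    # assumed ~15 percent more noise from the warmer detectors
    intensity = 1.0        # same target either way

    loss = 1 - detection_range(intensity, degraded_nei) / detection_range(intensity, nominal_nei)
    print(f"range reduction: {loss:.1%}")   # about 7 percent under these assumptions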
Appendix II: Project Office Reliance on Various Sources for Contractor Oversight

The Ground Based Interceptor Project Management Office used several sources to monitor the contractors' technical performance, but oversight activities were limited by the ongoing exoatmospheric kill vehicle contract competition between Boeing and Raytheon. Specifically, the project office relied on an engineer and a System Engineering and Technical Analysis contractor, Nichols Research Corporation, to provide insight into Boeing's work. The project office also relied on Boeing to oversee TRW's performance.

The deputy manager of the Ground Based Interceptor Project Management Office told us that competition between Boeing and Raytheon limited oversight to some extent. He said that because of the ongoing competition, the project office monitored the two contractors' progress but was careful not to affect the competition by assisting one contractor more than the other. The project office primarily ensured that the contractors abided by their contractual requirements. The project office deputy manager told us that his office relied on “insight” into the contractors' work rather than oversight of that work. The project office gained insight by placing an engineer on-site at Boeing and tasking Nichols Research Corporation to attend technical meetings, assess test reports, and, in some cases, evaluate Boeing's and TRW's technologies. The on-site engineer was responsible for observing the performance of Boeing and TRW and relaying any problems back to the project office. He did not have authority to provide technical direction to the contractors. According to the Ground Based Interceptor Project Management Office deputy manager, Nichols essentially “looked over the shoulder” of Boeing and TRW. We observed evidence of Nichols' insight in memorandums that Nichols' engineers submitted to the project office suggesting questions that should be asked of the contractors, memorandums documenting the engineers' comments on various contractor reports, and trip reports recorded by the engineers after various technical meetings.

Boeing said its oversight of TRW's work complied with contract requirements. The contract between the Department of Defense and Boeing required Boeing to declare that “to the best of its knowledge and belief, the technical data delivered is complete, accurate, and complies with all requirements of the contract.” With regard to Integrated Flight Test 1A, Boeing officials said that they complied with this provision by selecting a qualified subcontractor, TRW, to develop the discrimination concepts, software, and system design in support of the flight tests, and by holding weekly team meetings with subcontractor and project office officials. Boeing officials stated that they were not required to verify the validity of their subcontractor's flight test analyses; rather, they were only required to verify that the analyses seemed reasonable. According to Boeing officials, both they and the project office shared the belief that TRW possessed the necessary technical expertise in threat phenomenology modeling, discrimination, and target tracking, and both relied on TRW's expertise.
Appendix III: Reduced Test Complexity

National Missile Defense Joint Program Office officials said that they reduced the number of decoys planned for intercept flight tests in response to a recommendation by an independent panel, known as the Welch Panel. The panel, established to reduce risk in ballistic missile defense flight test programs, viewed a successful hit-to-kill engagement as a difficult task that should not be further complicated in early tests by the addition of decoys. In contemplating the panel's advice, the program manager discussed various target options with other program officials and the contractors competing to develop and produce the system's exoatmospheric kill vehicle. The officials disagreed on the number of decoys that should be deployed in the first intercept flight tests. Some recommended using the same target set deployed in Integrated Flight Tests 1A and 2, while others wanted to eliminate some decoys. After considering the differing viewpoints, the program manager decided to deploy only one decoy—a large balloon—in early intercept tests.

Decoys in Early Intercept Tests

As flight tests began in 1997, the National Missile Defense Joint Program Office was planning two sensor tests—Integrated Flight Tests 1A and 2—and 19 intercept tests. The primary objective of the sensor flight tests was to reduce risk in future flight tests. Specifically, the tests were designed to determine if the sensor could operate in space; to examine the extent to which the sensor could detect small differences in infrared emissions; to determine if the sensor was accurately calibrated; and to collect target signature1 data for post-mission discrimination analysis. Initially, the next two flight tests were to demonstrate the ability of the competing kill vehicles to intercept a mock warhead. Integrated Flight Test 3 was to test the Boeing kill vehicle and Integrated Flight Test 4 was to test the Raytheon kill vehicle. Table 2 shows the number of target objects deployed in the two sensor tests, the number of objects originally planned to be deployed in the first two intercept attempts, and the number of objects actually deployed in the intercept attempts.

1 A target object's signature is the set of infrared signals emitted by the target.

Table 2: Planned and Actual Targets for Initial Flight Tests

Target suite | Actual targets in Integrated Flight Tests 1A and 2 | Initial plan for Integrated Flight Tests 3 and 4 | Actual targets deployed for Integrated Flight Tests 3 and 4
Mock warhead(a) | 1 | 1 | 1
Medium rigid light replica(b) | 2 | 2 | 0
Small canisterized(c) light replica | 1 | 1 | 0
Canisterized small balloon | 2 | 2 | 0
Large balloon | 1 | 1 | 1
Medium balloon | 2 | 2 | 0
Total objects | 9 | 9 | 2

(a) The mock warhead, also known as the medium reentry vehicle, is the test target. Not included in this table is the multi-service launch system, which carries the mock warhead and all of the decoys into space. The launch system will likely become an object in the field of view of the exoatmospheric kill vehicle, like the mock warhead and decoys, and must be discriminated.
(b) This is a replica of the warhead.
(c) Decoys can be stored in canisters and released in flight.
Source: GAO generated from Department of Defense information.

By the time Integrated Flight Tests 3 and 4 were actually conducted, Boeing had become the National Missile Defense Lead System Integrator and had selected Raytheon's exoatmospheric kill vehicle for use in the National Missile Defense system.
Boeing conducted Integrated Flight Test 3 (in October 1999) and Integrated Flight Test 4 (in January 2000) with the Raytheon kill vehicle. However, both of these flight tests used only the mock warhead and one large balloon, rather than the nine objects originally planned. Integrated Flight Test 5 (flown in July 2000) also used only the mock warhead and one large balloon.

Program officials told us that the National Missile Defense Program Manager decided to reduce the number of decoys used in Integrated Flight Tests 3, 4, and 5 based on the findings of an expert panel. This panel, known as the Welch Panel, reviewed the flight test programs of several Ballistic Missile Defense Organization programs, including the National Missile Defense program. The resulting report,2 which was released shortly after Integrated Flight Test 2, found that U.S. ballistic missile defense programs, including the National Missile Defense program, had not yet demonstrated that they could reliably intercept a ballistic missile warhead using the technology known as “hit-to-kill.” Numerous failures had occurred in several of these programs, and the Welch Panel concluded that the National Missile Defense program (as well as other programs using “hit-to-kill” technology) needed to demonstrate that it could reliably intercept simple targets before it attempted to demonstrate that it could hit a target accompanied by decoys. The panel reported again 1 month after Integrated Flight Test 3,3 and came to the same conclusion. The Director of the Ballistic Missile Defense Organization testified4 at a congressional hearing that the Welch Panel advocated removing all decoys from the initial flight tests, but that the Ballistic Missile Defense Organization opted to include a limited discrimination requirement with the use of one decoy. Nevertheless, he said that the primary purpose of the tests was to demonstrate the system's “hit-to-kill” capability.

2 Report of the Panel on Reducing Risk in Ballistic Missile Defense Flight Test Programs, February 27, 1998.
3 National Missile Defense Review, November 1999.
4 Statement of Lieutenant General Ronald T. Kadish, USAF, Director, Ballistic Missile Defense Organization, Before the House Armed Services Committee, Subcommittee on Military Research & Development, June 14, 2001.

Opinions on Decoys

Program officials said there was disagreement within the Joint Program Office and among the key contractors as to how many targets to use in the early intercept flight tests. Raytheon and one high-ranking program official wanted Integrated Flight Tests 3, 4, and 5 to include target objects identical to those deployed in the sensor flight tests. Boeing and other program officials wanted to deploy fewer target objects. After considering all options, the Joint Program Office decided to deploy a mock warhead and one decoy—a large balloon.

Raytheon officials told us that they discussed the number of objects to be deployed in Integrated Flight Tests 3, 4, and 5 with program officials and recommended using the same target set as deployed in Integrated Flight Tests 1A and 2. Raytheon believed that this approach would be less risky because it would not require revisions to the kill vehicle's software. Raytheon and program officials told us that Raytheon was confident that it could successfully identify and intercept the mock warhead even with this larger target set.
One high-ranking program official said that she objected to reducing the number of decoys used in Integrated Flight Test 3 because there was a need to more completely test the system. However, other program officials lobbied for a smaller target set. One program official said that his position was based on the Welch Panel's findings and on the fact that the program office was not concerned at that time about discrimination capability. He added that the National Missile Defense program was responding to the threat of “nations of concern,” which could only develop simple targets, rather than major nuclear powers, which were more likely to be able to deploy decoys.

The Boeing/TRW team also wanted to reduce the number of decoys used in the first intercept tests. In a December 1997 study, the companies recommended that Integrated Flight Test 3 be conducted with a total of four objects—the mock warhead, the two small balloons, and the large balloon. (The multi-service launch system was not counted as one of the objects.) The study cited concerns about the inclusion of decoys that were not part of the initially expected threat and about the need to reduce risk. Boeing said that the risk that the exoatmospheric kill vehicle would not intercept the mock warhead increased significantly if the target objects did not deploy from the test missile as expected. According to Boeing/TRW, as the types and number of target objects increased, the potential risk that the target objects would be different in some way from what was expected also increased. Specifically, the December 1997 study noted that the medium balloons had been in inventory for some time and had not deployed as expected in other tests, including Integrated Flight Test 1A. In that test, one medium balloon only partially inflated and was not positioned within the target cluster as expected. The study also found that the medium rigid light replicas were the easiest to misdeploy and that the small canisterized light replica moved differently than expected during Integrated Flight Test 1A.

Appendix IV: Phase One Engineering Team's Evaluation of TRW's Software

In 1998, the National Missile Defense Joint Program Office asked the Phase One Engineering Team to conduct an assessment, using available data, of TRW's discrimination software even though Nichols Research Corporation had already concluded that it met the requirements established by Boeing.1 The program office asked for the second evaluation because the Defense Criminal Investigative Service lead investigator was concerned about the ability of Nichols to provide a truly objective evaluation. The Phase One Engineering Team developed a methodology to (1) determine if TRW's software was consistent with scientific, mathematical, and engineering principles; (2) determine whether TRW accurately reported that its software successfully discriminated a mock warhead from decoys using data collected during Integrated Flight Test 1A; and (3) predict the performance of TRW's basic discrimination software against Integrated Flight Test 3 scenarios. The key results of the team's evaluation were that the software was well designed; the contractors accurately reported the results of Integrated Flight Test 1A; and the software would likely perform successfully in Integrated Flight Test 3. The primary limitation was that the team used Boeing- and TRW-processed target data and TRW-developed reference data in determining the accuracy of TRW reports for Integrated Flight Test 1A.

1 The Ground Based Interceptor Project Management Office identified the precision (expressed as a probability) with which the exoatmospheric kill vehicle is expected to destroy a warhead with a single shot. To ensure that the kill vehicle would meet this requirement, Boeing established lower-level requirements for each function that affects the kill vehicle's performance, including the discrimination function. Nichols compared the contractor-established software discrimination performance requirement to the software's performance in simulated scenarios.
Phase One Engineering Team's Methodology

The team began its work by assuring itself that TRW's discrimination software was based on sound scientific, engineering, and mathematical principles and that those principles had been correctly implemented. It did this primarily by studying technical documents provided by the contractors and the program office. Next, the team began to look at the software's performance using Integrated Flight Test 1A data. The team studied TRW's August 13 and August 22, 1997, test reports to learn more about discrepancies that the Defense Criminal Investigative Service said it found in these reports. Team members also received briefings from the Defense Criminal Investigative Service, Boeing, TRW, and Nichols Research Corporation.

Team members told us that they did not replicate TRW's software in total. Instead, the team emulated critical functions of TRW's discrimination software and tested those functions using data collected during Integrated Flight Test 1A. To test the ability of TRW's software to extract the features of each target object's signal, the team designed a software routine that mirrored TRW's feature-extraction design. The team received Integrated Flight Test 1A target signals that had been processed by Boeing and then further processed by TRW. These signals represented about one-third of the collected signals. Team members input the TRW-supplied target signals into the team's feature-extraction software routine and extracted two features from each target signal. The team then compared the extracted features to TRW's reports on these same features and concluded that TRW's feature-extraction process worked as reported by TRW.

Next, the team acquired the results of 200 of the 1,000 simulations that TRW had run to determine the features that target objects deployed in Integrated Flight Test 1A would likely display.2 Using these results, team members developed reference data that the software could compare to the features extracted from Integrated Flight Test 1A target signals. Finally, the team wrote software that ranked the different observed target objects in terms of the probability that each was the mock warhead. The results produced by the team's software were then compared to TRW's reported results.

The team did not perform any additional analysis to predict the performance of the Boeing sensor and its software in Integrated Flight Test 3. Instead, the team used the knowledge that it gained from its assessment of the software's performance using Integrated Flight Test 1A data to estimate the software's performance in the third flight test.

2 The Phase One Engineering Team reported that TRW ran 1,000 simulations to determine the reference data for Integrated Flight Test 1A, but the team received the results of only 200 simulations. TRW engineers said this was most likely done to save time. The engineers also said that the only effect of developing reference data from 200 simulations rather than 1,000 is that confidence in the reference data drops from 98 percent to approximately 96 percent.
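The team's approach, extracting a small number of features from each tracked object's infrared signal, comparing them with reference features predicted by simulation, and ranking the objects by how likely each is to be the mock warhead, can be illustrated with the sketch below. It is a simplified illustration of that kind of pipeline, not the team's or TRW's code; the Gaussian scoring, the object names, and all numbers are assumptions made for the example.

    # Illustrative sketch of feature-based discrimination ranking.
    # Reference data: for each candidate object type, the mean and spread of two
    # features predicted by simulation (all values here are invented).
    import math

    reference = {
        "mock warhead":  {"mean": (0.80, 0.30), "sigma": (0.10, 0.05)},
        "large balloon": {"mean": (0.20, 0.70), "sigma": (0.15, 0.10)},
        "light replica": {"mean": (0.55, 0.45), "sigma": (0.12, 0.08)},
    }

    def score(observed, mean, sigma):
        """Unnormalized Gaussian likelihood of the observed feature pair."""
        return math.exp(-sum(((o - m) / s) ** 2 / 2.0
                             for o, m, s in zip(observed, mean, sigma)))

    def rank_objects(observed_features):
        """Rank candidate object types by likelihood that the observed object is each type."""
        scores = {name: score(observed_features, ref["mean"], ref["sigma"])
                  for name, ref in reference.items()}
        total = sum(scores.values())
        return sorted(((s / total, name) for name, s in scores.items()), reverse=True)

    # Features extracted from one tracked object's signal (invented values).
    print(rank_objects((0.78, 0.33)))   # the warhead hypothesis ranks first here

In a ranking of this kind, the quality of the reference data matters as much as the extracted features, which is why the team's reliance on TRW-developed reference data is identified as the evaluation's primary limitation.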
The Phase One Engineering Team's Key Results

In its report published on January 25, 1999, the Phase One Engineering Team reported that even though it noted some weaknesses, TRW's discrimination software was well designed and worked properly, with only some refinement or redesign needed to increase the robustness of the discrimination function. In addition, the team reported that its test of the software using data from Integrated Flight Test 1A produced essentially the same results as those reported by TRW. The team also predicted that the Boeing sensor and its software would perform well in Integrated Flight Test 3 if target objects deployed as expected.

Weaknesses in TRW's Software

The team's assessment identified some software weaknesses. First, the team reported that TRW's use of a software module to replace missing or noisy target signals was not effective and could actually hurt rather than help the performance of the discrimination software. Second, the Phase One Engineering Team pointed out that while TRW proposed extracting several features from each target-object signal, only a few of the features could be used. The Phase One Engineering Team also reported that it found TRW's software to be fragile because the software was unlikely to operate effectively if the reference data—or expected target signals—did not closely match the signals that the sensor collected from deployed target objects. The team warned that the software's performance could degrade significantly if incorrect reference data were loaded into the software. Because developing good reference data depends on having correct information about target characteristics, sensor-to-target geometry, and engagement timelines, unexpected targets might challenge the software. The team suggested that very good knowledge about all of these parameters might not always be available.

Accuracy of Contractors' Integrated Flight Test 1A Reports

The Phase One Engineering Team reported that the results of its evaluation using Integrated Flight Test 1A data supported TRW's claim that in post-flight analysis its software accurately distinguished a mock warhead from decoys. The report stated that TRW explained why there were differences between the discrimination analysis included in the August 13, 1997, Integrated Flight Test 1A test report and that included in the August 22, 1997, report. According to the report, one difference was that TRW mislabeled a chart in the August 22 report. Another difference was that the August 22 discrimination analysis was based on target signals collected over a shorter period of time (see app. I for more information regarding TRW's explanation of report differences). Team members said that they found TRW's explanations reasonable.

Predicted Success in Integrated Flight Test 3

The Phase One Engineering Team predicted that if the targets deployed in Integrated Flight Test 3 performed as expected, TRW's discrimination software would successfully identify the warhead as the target.
The team observed that the targets proposed for the flight test had been viewed by Boeing's sensor in Integrated Flight Test 1A and that target-object features collected by the sensor would be extremely useful in constructing reference data for the third flight test. The team concluded that, given this prior knowledge, TRW's discrimination software would successfully select the correct target even in the most stressing Integrated Flight Test 3 scenario being considered, if all target objects deployed as expected. However, the team expressed concern about the software's capabilities if objects deployed differently, as had happened in previous flight tests.

Limitations of the Team's Evaluation

The Phase One Engineering Team's conclusion that TRW's software successfully discriminated is based on the assumption that Boeing's and TRW's input data were accurate. The team did not process the raw data collected by the sensor's silicon detector array during Integrated Flight Test 1A or develop its own reference data by running hundreds of simulations. Instead, the team used target signature data extracted by Boeing and TRW and developed reference data from a portion of the simulations that TRW ran for its own post-flight analysis. Because it did not process the raw data from Integrated Flight Test 1A or develop its own reference data, the team cannot be said to have definitively proved or disproved TRW's claim that its software successfully discriminated the mock warhead from decoys using data collected from Integrated Flight Test 1A. A team member told us its use of Boeing- and TRW-provided data was appropriate because the former TRW employee had not alleged that the contractors tampered with the raw test data or used inappropriate reference data.

Appendix V: Boeing Integrated Flight Test 1A Requirements and Actual Performance as Reported by Boeing and TRW

The table below includes selected requirements that Boeing established before the flight test to evaluate sensor performance and the actual sensor performance characteristics that Boeing and TRW discussed in the August 22 report.

Table 3: Integrated Flight Test 1A Requirements Established by Boeing and Actual Performance

Capability tested(a) | Requirement(b) | Integrated Flight Test 1A performance reported by Boeing/TRW
Acquisition range | The sensor subsystem shall acquire the target objects at a specified distance. | The performance exceeded the requirement.(c)
Probability of detection | The sensor shall detect target objects with a specified precision, which is expressed as a probability. | The performance satisfied the requirement.
False alarm rate | False alarms shall not exceed a specified level. | The performance did not satisfy the requirement. The false alarm rate exceeded Boeing's requirement by more than 200 to 1 because of problems with the power supply and the higher than expected temperature of the sensor.
Infrared radiation measurement precision | The sensor subsystem shall demonstrate a specified measurement precision at a specified range. | The contractor met the requirement in one infrared measurement band, but not in another.
Angular Measurement Precision (AMP) | Given specified conditions, the sensor subsystem shall determine the angular position of the targets with a specified angular measurement precision. | The performance was better than the requirement.
Closely spaced objects resolution | Resolution of closely spaced objects shall be satisfied at a specified range. | The closely spaced objects requirement could not be validated because the targets did not deploy with the required separation.
Silicon detector array cool-down time | The time to cool the silicon detector array to less than a desired temperature shall be less than or equal to a specified length of time. | The performance did not satisfy the requirement because the desired temperature was not reached. Nevertheless, the silicon detector operated as designed at the higher temperatures.(d)
Hold time | With a certain probability, the silicon detector array's temperature shall be held below a desired temperature for a specified minimum length of time. | Even though the detector array's temperature did not reach the desired temperature, the array was cooled to an acceptable operating temperature and held at that temperature for longer than required.

(a) The requirements displayed in the table were established by the contractor and were not imposed by the government. Additionally, because of various sensor problems recognized prior to the test, Boeing waived most of the requirements. Boeing established these requirements to ensure that its exoatmospheric kill vehicle, when fully developed, could destroy a warhead with the single shot precision (expressed as a probability) required by the Ground Based Interceptor Project Management Office.
(b) Boeing's acquisition range specification required that the specified range, detection probability, and false alarm rate be achieved simultaneously. Boeing's Chief Scientist said that because the range and target signals varied with time and the total observation time was sharply limited during Integrated Flight Test 1A, the probability of detection could not be accurately determined. As a result, the test was not a suitable means for assessing whether the sensor can attain the specified acquisition range.
(c) The revised 60-day report states that the sensor did not detect the target until approximately two-thirds of the nominal acquisition range. Boeing engineers told us that while this statement appears to contradict the claim that the target was acquired at 107 percent of the specified range, it does not. Boeing engineers said that the nominal acquisition range refers to the range at which a sensor that is performing as designed would acquire the target, which is a substantially greater range than the specified acquisition range. However, neither Boeing nor TRW could provide documentation of the nominal acquisition range so that we could verify that these statements are not contradictory.
(d) In the main body of the August 22 report, the contractor discussed “hold time.” However, it is not mentioned in the appendix to the August 22 report that lists the performance characteristics against which Boeing planned to evaluate its sensor's performance. Rather, the appendix refers to a “minimum target object viewing” time, which has the same requirement as the hold time. Boeing reported that its sensor collected target signals over approximately 54 seconds.

Appendix VI: Scope and Methodology

We determined whether Boeing and TRW disclosed key results and limitations of Integrated Flight Test 1A to the National Missile Defense Joint Program Office by examining test reports submitted to the program office on August 13, 1997, August 22, 1997, and April 1, 1998, and by examining the December 11, 1997, briefing charts.
We also held discussions with and examined various reports and documents prepared by Boeing North American, Anaheim, California; TRW Inc., Redondo Beach, California; the Raytheon Company, Tucson, Arizona; Nichols Research Corporation, Huntsville, Alabama; the Phase One Engineering Team, Washington, D.C.; the Massachusetts Institute of Technology/Lincoln Laboratory, Lexington, Massachusetts; the National Missile Defense Joint Program Office, Arlington, Virginia, and Huntsville, Alabama; the Office of the Director, Operational Test and Evaluation, Washington, D.C.; the U.S. Army Space and Missile Defense Command, Huntsville, Alabama; the Defense Criminal Investigative Service, Mission Viejo, California, and Arlington, Virginia; and the Institute for Defense Analyses, Alexandria, Virginia. We held discussions with and examined documents prepared by Dr. Theodore Postol, Massachusetts Institute of Technology, Cambridge, Massachusetts; Dr. Nira Schwartz, Torrance, California; Mr. Roy Danchick, Santa Monica, California; and Dr. Michael Munn, Benson, Arizona. In addition, we hired the Utah State University Space Dynamics Laboratory, Logan, Utah, to examine the performance of the Boeing sensor because we needed to determine the effect the higher operating temperature had on the sensor's performance.

We did not replicate TRW's assessment of its software using target signals that the Boeing sensor collected during the test. This would have required us to make engineers and computers available to verify TRW's software, format raw target signals for input into the software, develop reference data, and run the data through the software. We did not have these resources available, and we therefore cannot attest to the accuracy of TRW's discrimination claims.

We also examined the methodologies, findings, and limitations of the review conducted by the Phase One Engineering Team of TRW's discrimination software. To accomplish this task, we analyzed the Phase One Engineering Team's “Independent Review of TRW EKV Discrimination Techniques,” dated January 1999. In addition, we held discussions with Phase One Engineering Team members, officials from the National Missile Defense Joint Program Office, and contractor officials. We did not replicate the evaluations conducted by the Phase One Engineering Team and cannot attest to the accuracy of their reports.

We reviewed the decision by the National Missile Defense Joint Program Office to reduce the complexity of later flight tests by comparing actual flight test information with information in prior plans and by discussing these differences with program and contractor officials. We held discussions with and examined documents prepared by the National Missile Defense Joint Program Office, the Institute for Defense Analyses, Boeing North American, and the Raytheon Company.

Our work was conducted from August 2000 through February 2002 in accordance with generally accepted government auditing standards. The length of time the National Missile Defense Joint Program Office required to release documents to us significantly slowed our review. For example, the Program Office required approximately 4 months to release key documents, such as the Phase One Engineering Team's response to the professor's allegations. We requested these and other documents on September 14, 2000, and received them on January 9, 2001.
Appendix VII: Comments from the Department of Defense

Appendix VIII: Major Contributors

Acquisition and Sourcing Management: Bob Levin, Director; Barbara Haynes, Assistant Director; Cristina Chaplain, Assistant Director, Communications; David Hand, Analyst-in-Charge; Subrata Ghoshroy, Technical Advisor; Stan Lipscomb, Senior Analyst; Terry Wyatt, Senior Analyst; William Petrick, Analyst.

Applied Research and Methods: Nabajyoti Barkakati, Senior Level Technologist; Hai Tran, Senior Level Technologist.

General Counsel: Stephanie May, Assistant General Counsel.