Measuring the Forecast Accuracy of Intelligence Products
Our experience has been that many in the Intelligence Community are resistant to the idea of rigorous, scientific measurement of the accuracy of analytic forecasts, preferring instead to evaluate analyses through a critical review process. Unfortunately, research and experience in other complex domains show that expert self-assessments based only on critical reviews frequently result in measurably incorrect lessons learned. In this paper we argue that the Intelligence Community should adopt a program of rigorous, scientific measurement of forecast accuracy, because such a program is essential to improving accuracy. The paper also describes a new method for measuring the accuracy of analytic forecasts expressed with verbal imprecision. The method was used to evaluate the accuracy of ten open source intelligence products, including the declassified key judgments in two National Intelligence Estimates. Results show that forecasts in these products were reasonably calibrated, with a strong positive correlation between the strength of the language used to express forecast certainty and the frequency with which forecast events actually occurred. These results demonstrate that the forecast accuracy of analytic products can be measured rigorously.
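The calibration check described above can be sketched in code. The idea is to map each verbal certainty expression to an implied probability, group forecasts by expression, and compare the implied probability with the observed frequency of the forecast events. The verbal-to-probability lexicon and the sample forecasts below are illustrative assumptions, not the paper's actual mapping or data.

```python
# Sketch of a calibration table for forecasts expressed with verbal certainty.
# VERBAL_TO_PROB is a hypothetical lexicon, not the paper's actual mapping.
VERBAL_TO_PROB = {
    "almost certainly": 0.93,
    "probably": 0.75,
    "even chance": 0.50,
    "probably not": 0.25,
    "almost certainly not": 0.07,
}

def calibration_table(forecasts):
    """Group (verbal_term, event_occurred) pairs by term and compare the
    implied probability with the observed frequency of occurrence."""
    buckets = {}
    for term, occurred in forecasts:
        buckets.setdefault(term, []).append(occurred)
    table = []
    for term, outcomes in buckets.items():
        observed = sum(outcomes) / len(outcomes)
        table.append((term, VERBAL_TO_PROB[term], observed, len(outcomes)))
    # Sort from strongest to weakest certainty language.
    return sorted(table, key=lambda row: -row[1])

# Illustrative sample data (invented for this sketch).
sample = [
    ("almost certainly", True), ("almost certainly", True),
    ("probably", True), ("probably", False), ("probably", True),
    ("even chance", False), ("even chance", True),
    ("probably not", False), ("probably not", False),
]

for term, implied, observed, n in calibration_table(sample):
    print(f"{term:22s} implied={implied:.2f} observed={observed:.2f} (n={n})")
```

A well-calibrated set of forecasts would show observed frequencies rising with the implied probabilities, which is the kind of positive correlation the paper reports for the evaluated intelligence products.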