Thursday, January 29, 2009

Part 5 -- The Problems With Evaluating The Intelligence Process (Evaluating Intelligence)

Part 1 -- Introduction
Part 2 -- A Tale Of Two Weathermen
Part 3 -- A Model For Evaluating Intelligence
Part 4 -- The Problems With Evaluating Intelligence Products

There are a number of ways the intelligence process can fail. Requirements can be vague; collection can be flimsy or undermined by deliberate deception; production values can be poor, or intelligence can be made inaccessible through over-classification. Finally, the intelligence architecture, the system in which all the pieces are embedded, can be cumbersome, inflexible and incapable of responding to the intelligence needs of the decisionmaker. All of these are parts of the intelligence process, and any one of them -- or any combination of them -- can be the cause of an intelligence failure.

In this series of posts (and in this post in particular), I intend to look only at the kinds of problems that arise when attempting to evaluate the analytic part of the process. From this perspective, the most instructive current document available is Intelligence Community Directive (ICD) 203: Analytic Standards. Paragraph D4, the operative paragraph, lays out what makes for a good analytic process in the eyes of the Director of National Intelligence:
  • Objectivity
  • Independent of Political Considerations
  • Timeliness
  • Based on all available sources of intelligence
  • Properly describes the quality and reliability of underlying sources
  • Properly caveats and expresses uncertainties or confidence in analytic judgments
  • Properly distinguishes between underlying intelligence and analyst's assumptions and judgments
  • Incorporates alternative analysis where appropriate
  • Demonstrates relevance to US national security
  • Uses logical argumentation
  • Exhibits consistency of analysis over time or highlights changes and explains rationale
  • Makes accurate judgments and assessments

This is an excellent starting point for evaluating the analytic process. There are a few problems, though. Some are trivial. Statements such as "Demonstrates relevance to US national security" would have to be modified slightly to be entirely relevant to other disciplines of intelligence such as law enforcement and business. Likewise, the distinction between "objectivity" and "independent of political considerations" would likely bother a stricter editor as the latter appears to be redundant (though I suspect the authors of the ICD considered this and still decided to separate the two in order to highlight the notion of political independence).

Some of the problems are not trivial. I have already discussed (in Part 3) the difficulties associated with mixing process accountability and product accountability, something the last item on the list, "Makes accurate judgments and assessments," seems to encourage us to do.

Even more problematic, however, is the requirement to "properly caveat and express uncertainties or confidence in analytic judgments." Surely the authors meant to say "express uncertainties and confidence in analytic judgments." While this may seem like hair-splitting, the act of expressing uncertainty and the act of expressing a degree of analytic confidence are quite different things. This distinction is made (though not as clearly as I would like) in the prefatory matter to all of the recently released National Intelligence Estimates. The idea that the analyst can either express uncertainties (typically through words of estimative probability) or express confidence, but not both, flies in the face of this current practice.

Analytic confidence is (or should be) considered a crucial subsection of an evaluation of the overall analytic process. If the question answered by the estimate is, "How likely is X to happen?" then the question answered by an evaluation of analytic confidence is, "How likely is it that you, the analyst, are wrong?" These concepts are analogous to the statistical notions of probability and margin of error (as in polling data indicating that Candidate X is looked upon favorably by 55% of the electorate, with a plus or minus 3% margin of error). Given the lack of a controlled environment, the inability to replicate the situations that matter to intelligence analysts, and the largely intuitive nature of most intelligence analysis, an analogy, however, is what it must remain.
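
To make the analogy concrete, here is the textbook margin-of-error calculation that sits behind a poll like the one above, assuming (purely for illustration) a simple random sample of 1,000 respondents and a 95% confidence level:

    \[
    \text{MOE} = z\sqrt{\frac{p(1-p)}{n}} \approx 1.96\sqrt{\frac{0.55 \times 0.45}{1000}} \approx 0.031 \approx \pm 3\%
    \]

The 55% is the estimate itself; the plus or minus 3% is a statement about how far off that estimate could reasonably be. In an intelligence judgment, words of estimative probability play the first role and analytic confidence plays, roughly, the second, which is precisely why the two should not be collapsed into a single expression.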

What legitimately contributes to an increase in analytic confidence? To answer this question, it is essential to go beyond the necessary but by no means sufficient criteria set by ICD 203. In other words, analysis that is biased or late shouldn't make it through the door, but analysis that is merely unbiased and on time meets only the minimum standard.
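
One way to picture this gatekeeping role is as a simple pass/fail checklist. The sketch below (in Python) is purely notional -- the standard names are paraphrased from the list above and the pass/fail structure is mine, not the ICD's -- but it captures the point that clearing every item only gets an analysis in the door:

    # Paraphrased (and abridged) from the ICD 203 standards listed above; the
    # checklist structure itself is notional and not part of the directive.
    PROCESS_STANDARDS = [
        "objectivity",
        "independence from political considerations",
        "timeliness",
        "based on all available sources of intelligence",
        "properly describes source quality and reliability",
        "properly caveats uncertainties and confidence in judgments",
        # ...and so on for the remaining standards in paragraph D4
    ]

    def clears_the_door(evaluation):
        """Pass/fail gate: a single failed standard keeps the analysis from going out.

        Passing every standard says nothing, by itself, about how much confidence
        the finished analysis deserves beyond that minimum.
        """
        return all(evaluation.get(standard, False) for standard in PROCESS_STANDARDS)

    # Example: an otherwise solid piece of analysis that is late fails the gate.
    evaluation = {standard: True for standard in PROCESS_STANDARDS}
    evaluation["timeliness"] = False
    print(clears_the_door(evaluation))  # prints: False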

Beyond these entry-level standards for a good analytic process, what are the elements that actually contribute to a better estimative product? The current best answer to this question comes from Josh Peterson's thesis on the topic. In it, he argued that seven elements had adequate experimental support to suggest that they legitimately contribute to analytic confidence (a notional sketch of how they might combine follows the list):
  • Use of structured methods in analysis
  • Overall Source Reliability
  • Level of Source Corroboration/Agreement
  • Subject Matter Expertise
  • Amount of Collaboration Among Analysts
  • Task Complexity
  • Time Pressure
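
Here is that notional sketch. Peterson's thesis does not prescribe weights, scales, or thresholds, so the equal weighting, the 0-to-1 scores, and the three confidence bands below are placeholder assumptions of mine; what the sketch does illustrate is that the confidence statement is assembled from these process elements and reported alongside, not instead of, the estimate itself:

    # Notional only: equal weights, 0-1 scores, and the band thresholds are
    # placeholder assumptions, not findings from Peterson's thesis.
    CONFIDENCE_ELEMENTS = [
        "structured_methods",        # use of structured analytic methods
        "source_reliability",        # overall source reliability
        "source_corroboration",      # level of source corroboration/agreement
        "subject_matter_expertise",  # analyst expertise on the topic
        "analyst_collaboration",     # amount of collaboration among analysts
        "task_simplicity",           # scored as the inverse of task complexity
        "time_adequacy",             # scored as the inverse of time pressure
    ]

    def analytic_confidence(scores):
        """Average the seven element scores (each 0.0-1.0) into a rough confidence band."""
        avg = sum(scores.get(e, 0.0) for e in CONFIDENCE_ELEMENTS) / len(CONFIDENCE_ELEMENTS)
        if avg >= 0.7:
            return "high confidence"
        if avg >= 0.4:
            return "moderate confidence"
        return "low confidence"

    # The estimate and the confidence in it are reported separately:
    estimate = "It is likely that X will occur."
    confidence = analytic_confidence({
        "structured_methods": 0.2, "source_reliability": 0.8,
        "source_corroboration": 0.5, "subject_matter_expertise": 0.9,
        "analyst_collaboration": 0.3, "task_simplicity": 0.4,
        "time_adequacy": 0.6,
    })
    print(estimate, "--", confidence)  # prints: It is likely that X will occur. -- moderate confidence

Note that task complexity and time pressure enter the sketch as their inverses, on the assumption that greater complexity and greater time pressure should lower, rather than raise, analytic confidence.
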
Numerous questions remain to be answered, of course. Which element is most important? Is there positive or negative synergy between two or more of the elements? Are these the only elements that legitimately contribute to analytic confidence?

Perhaps the most important question, however, is how the decisionmaker -- the person or organization the intelligence analyst supports -- is likely to see this interplay of elements, which continuously shapes both the analytic product and the analytic process.

Monday: The Decisionmaker's Perspective
