There is a way forward that addresses Michael O'Hanlon's concerns. The relevant quotes from O'Hanlon were:
- "metrics are grist for a fact based debate but history shows it is dangerous to rely on too few of them"
- "metrics were used in Vietnam and we had the wrong ones"
- [the Vietnam metrics] "did net harm to the debate"
- "we can’t be exactly precise about which indicators are the conclusive ones”
Dangerous to Rely on Too Few of Them. I agree entirely. For Iraq, we need to be able to look efficiently at at least 50-100 of the most important factors within a relatively short period of time. For expert use, a much larger number of metrics must be available for review in an efficient and effective manner. Michael O'Hanlon's weekly Iraq Index is one of the best examples out there of paying attention to the principle of not relying on too few metrics.
The Metrics Were the Wrong Ones. This is a solvable challenge this time around. First, follow the principle above and make sure that we look at ALL of the important factors. How do we decide which ones? We ask the experts in each of the different areas (security, economy, health, ...) to name what they consider to be the most important indicators. Then we capture, archive, and report on every factor that has been suggested, even if only once. Maybe there will be some "wrong ones" in the mix, but when we have the full set of data to work with, the chances of being led astray will be markedly reduced.
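As a minimal sketch of the "capture every suggestion" step, assuming we have gathered each expert group's nominations (the area names and indicator names below are illustrative, not real data), the master list is simply the union of all nominations, so a factor named even once is kept:

```python
# Hypothetical nominations collected from experts in each area.
# Indicator names here are placeholders for illustration only.
expert_nominations = {
    "security": ["attacks_per_week", "civilian_casualties"],
    "economy": ["oil_production", "unemployment_rate"],
    "health": ["hospital_capacity", "civilian_casualties"],
}

# Union of all suggestions: duplicates collapse, but nothing is dropped,
# even an indicator nominated by only one expert.
master_list = sorted(set().union(*expert_nominations.values()))
print(master_list)
```

Deciding which metrics matter then becomes a question of inclusion rather than selection: no committee has to pre-judge which nominations are the "right ones."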
Can't Be Exactly Precise About Which Indicators Are the Conclusive Ones. We don't have to be precise. Different people will have different opinions. If we capture all of them, record a history of each trend, and make this data available both in static graphical format AND as machine-readable data suitable for input into a trend-visualization application, then we can begin to have good and constructive discussions about what different trends mean and what we should do about them.
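To make the "record a history of each trend and export it machine-readable" idea concrete, here is a minimal sketch of such an archive. The class name, indicator names, and the choice of CSV as the machine-readable format are all illustrative assumptions, not a description of any existing system:

```python
import csv
import io
from collections import defaultdict

class IndicatorArchive:
    """Hypothetical archive: a history of observations for every suggested indicator."""

    def __init__(self):
        # indicator name -> list of (date, value) observations, in insertion order
        self.history = defaultdict(list)

    def record(self, indicator, date, value):
        """Append one dated observation to an indicator's trend history."""
        self.history[indicator].append((date, value))

    def to_csv(self):
        """Export the full archive as CSV, one row per observation,
        suitable as input to any trend-visualization tool."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["indicator", "date", "value"])
        for indicator in sorted(self.history):
            for date, value in self.history[indicator]:
                writer.writerow([indicator, date, value])
        return buf.getvalue()

# Usage with placeholder data:
archive = IndicatorArchive()
archive.record("electricity_hours_per_day", "2007-06-01", 8.5)
archive.record("attacks_per_week", "2007-06-01", 620)
archive.record("electricity_hours_per_day", "2007-07-01", 9.1)
print(archive.to_csv())
```

A flat, openly published file like this is what turns the data from a static report into raw material: anyone can load it, chart any subset of trends, and argue from the same numbers.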