It is easy to spot warning signs after the fact. But it is important that we do investigate them. After the murderous treason at Fort Hood on November 5, we need to develop as complete an understanding as possible of what information was available prior to the attack and what information-processing failures may have allowed it to happen. I don't think that anyone disputes this.
As more and more facts come to light, it is natural to ask how we could have failed to
connect the dots. What appears to have happened is that in several separate instances warning signs about Major Hasan were noted, but in no instance were they considered serious enough to escalate the case to a more comprehensive investigation of him. People will naturally think that this means that there was some flaw in the system.
There may very well be flaws in the system, but we won't know that until we have a more comprehensive understanding of the system itself. (And that understanding will probably not be made public in all its details.) But what I do want to make clear is that when it comes to noting warning signs we need to look at probabilities and false positives.
If a few innocent people get investigated due to false positives in our system, that is not a problem. It is normal and to be expected. But we need to remember that we simply don't have the resources to properly investigate hundreds of thousands of people.
With that in mind, let's run some hypothetical numbers. Suppose, extremely optimistically, we have a tool that can correctly identify terrorists living in the US with an accuracy of 99.9%. Let's also suppose that there are about 1000 terrorists living in the US. Our tool would catch 999 of them and miss only one terrorist. That sounds excellent.
But now consider what happens with non-terrorists. With about three hundred million non-terrorists living in the US, our hypothetical tool would correctly identify 99.9% of them as non-terrorists. Unfortunately, it would incorrectly identify three hundred thousand people as terrorists needing careful investigation. So even with a tool that makes only one error in 1000, we would have 300,000 false positives.
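The arithmetic above can be sketched in a few lines of code. This is only an illustration using the article's made-up numbers (99.9% accuracy, 1000 terrorists, a population of about three hundred million); it also computes one extra quantity the numbers imply: the chance that any given flagged person is actually a terrorist.

```python
# Base-rate arithmetic for the hypothetical screening tool described above.
# All figures are the article's hypothetical numbers, not real data.

accuracy = 0.999                  # the tool is right 99.9% of the time
terrorists = 1_000                # assumed terrorists living in the US
population = 300_000_000          # approximate US population
non_terrorists = population - terrorists

true_positives = accuracy * terrorists             # terrorists correctly flagged
false_positives = (1 - accuracy) * non_terrorists  # innocents incorrectly flagged

# Of everyone the tool flags, what fraction is actually a terrorist?
posterior = true_positives / (true_positives + false_positives)

print(f"Terrorists caught:  {true_positives:,.0f}")
print(f"Innocents flagged:  {false_positives:,.0f}")
print(f"Chance a flagged person is a terrorist: {posterior:.2%}")
```

Running this gives 999 terrorists caught and 299,999 innocents flagged, so a flagged person has only about a 0.33% chance of actually being a terrorist, which is the false-positive problem in a single number.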
Three hundred thousand innocent people would need to be carefully investigated even if our screening tool were wrong only one out of 1000 times. Even if we were willing to accept the civil liberties implications of having the government undertake careful followup investigations of the political, religious, and psychological motives of that many innocent people, we don't have the resources.
If we can't perform that many investigations (and here we are considering the best case), then do we deprive 300,000 innocent people of the ability to work in sensitive positions? One doesn't have to be a card-carrying member of the ACLU to recognize that that would be truly un-American. And such a practice would certainly lead to a backlash that could harm security more than help it.
I do not have a solution to this problem. I don't know how we can effectively screen against domestic terrorists. I expect that the DHS has people who are a lot smarter than I am working on these problems. Those people will also have real numbers to work with instead of my made-up ones. But I do know that the inevitable calls we will hear over the next few weeks to follow up every lead fail to grasp the implications of such a policy.