Getting to Know Greg Thielmann
June 2016
Interviewed by Daniel Horner
Greg Thielmann has spent four decades analyzing national security issues. For 25 years, he was a U.S. Foreign Service officer. In his last position before he left the State Department in 2002, he headed the Office of Analysis for Strategic, Proliferation, and Military Issues in the department’s Bureau of Intelligence and Research. He then served as a senior staffer on the Senate Intelligence Committee. He has been a senior fellow at the Arms Control Association since 2009 and will retire this August.
Thielmann spoke to Daniel Horner on May 10 at the offices of the association, which publishes Arms Control Today. The interview has been edited for length and clarity.
You served in three pretty different places—Brazil, West Germany, and the Soviet Union. Did that give you some perspective on the U.S. and the way the U.S. interacts with the world?
I think trying to understand how foreigners perceive the United States gave me a lot of insights into threat assessments. One country may interpret something as very threatening in terms of new weapons development or military actions, but from the other country’s perspective, it is what the first country is doing that is threatening. We’re dealing with perceptions, and because of that, negotiations that take account of these different perspectives can often find a space where both parties reduce the sense of threat they feel.
You observed the intelligence assessment process on Iraq from the inside and the one on Iran from the outside. Are you able to compare the two processes?
I think in many ways the 2007 National Intelligence Estimate on Iran’s nuclear program was a great triumph in avoiding so much of what went wrong in the case of Iraqi [weapons of mass destruction]. What one had in the Iran document was the intelligence community basically admitting that previous National Intelligence Estimates had not gotten it right on Iran. For example, there was the critical determination on Iran’s nuclear weapons program—and they did have a nuclear weapons program, but it was essentially halted in the fall of 2003.
What was so conspicuously different from the Iraq intelligence estimate was that [this conclusion on Iran] was a very unwelcome conclusion for the Bush administration, particularly the Dick Cheney wing of the Bush administration. Yet, the estimate was not tailored, shaped, or spun in a way that disguised the conclusion. The administration basically allowed it to come out with its all-important, honest bottom line. That made a profound difference.
In one of your interviews on Iraq [in 2003], you said, “The default setting of the U.S. intelligence community is to over-warn rather than under-warn.” Explain what you mean by that.
Warning is the chief function, one might say, of the intelligence community. Historically speaking, this is sort of the aftermath of Pearl Harbor. That was a huge trauma that we haven’t quite gotten over. So there is that still-legitimate response of the intelligence community to focus on warning of possible disastrous events.
The problem is that this often leads us to assume that an identified worst-case threat is the most likely outcome or that it’s the most objective way to predict the future. So that’s why, as an intelligence analyst, I would always look for two different estimates: first, what could happen and what is the probability that it could happen; but second, what is most likely to happen, even realizing that judgments about the future are very difficult to make. It seems to me that, again and again in the negotiating process, we miss better opportunities to lower the threat by overestimating what the threat is.
So given all the political and institutional reasons you were just describing, how do you prevent that?
One of the ways you prevent it is to make sure that your judgments are labeled for what they are. That means it’s okay, and sometimes mandatory, to remind people what could happen. But it’s also necessary to remind them what is likely to happen and, whether the news is good or bad, why we might be wrong about identifying something that could happen one way or another. On what assumptions does this conclusion hinge? That’s just part of, it seems to me, responsible intelligence tradecraft.