Every time we open a newspaper, we encounter statistics—from the money lost to identity theft each year to the number of children abducted by strangers. But where do these numbers come from? According to a new book, the public very often makes judgments—and politicians allocate money—based on numbers that are of dubious origin.
In April 2006, the New York Times reported that, according to witnesses at a Congressional hearing, “the sexual exploitation of children on the Internet is a $20 billion industry that continues to expand in the United States and abroad.”
But when Wall Street Journal reporter Carl Bialik looked into that figure, he found that no one—not the office of the congressman who'd issued the number in a press release, not the National Center for Missing and Exploited Children, and not the FBI—could tell him where it came from.
The same thing happened when NPR's “On the Media” examined the claim—which they reported had been repeated on NBC's Dateline and by then-Attorney General Alberto Gonzales—that 50,000 sexual predators are online at any given moment. Apparently, at least according to the former FBI agent interviewed on the program, the number 50,000 was so popular because it is “a Goldilocks number—not too hot, not too cold.”
Since at least 1986, when the Denver Post won a Pulitzer Prize for debunking inflated statistics about child abduction, the American public has periodically been bamboozled by bad numbers. Julia Dahl of The Crime Report asked Kelly M. Greenhill, a fellow at Harvard's Kennedy School of Government and co-editor of Sex, Drugs and Body Counts: The Politics of Numbers in Global Crime and Conflict, about the very real consequences of pretending we know more than we do.
The Crime Report: When did you first realize that a lot of the statistics we read are bunk?
Kelly Greenhill: I first became aware of the problem in the late 1990s when [during research for a dissertation] I was struck by the simultaneously very large and yet remarkably static casualty figures associated with the war in Sudan—and the fact that it was virtually impossible to ascertain where those numbers came from. The deeper I delved into the project, the clearer it became that potentially suspect and often competing statistics were not unique to the Sudanese conflict. I didn't think much about it again until December 2006, when I was co-hosting a conference on trafficking in people and illicit goods, and it became clear that the assembled experts could not even agree on statistics regarding the size and scope of the trade, much less on the implications of those numbers and what kinds of policies were needed. The parallels with debates I had heard in the context of body counts and war were striking, and suggested it was time to resume exploring the issue.
TCR: The book cites two areas in which bad statistics have vast consequence: human death tolls and illicit activity. You write that it is often impossible, for instance, to count the dead in conflict zones because morgues and hospitals are barely functioning, let alone collecting good data. Activities like the drug market, terrorism or the sex trade don't exactly make their finances available to auditors. How have bad numbers in both these areas affected policy?
KG: The use of inflated—or deflated—statistics to generate funding or policy action for a particular mission may mean that other, possibly more serious but less politicized issues don't get the attention or financing they demand. And once funding is in place, the incentives to continue to “cook the books,” intentionally or inadvertently, become more compelling. This can lead to a kind of chronic crowding out of other issues and alternative approaches to problems. It can also thwart efforts to honestly self-evaluate programs.
TCR: How are people making these estimates, or “guesstimates” about death tolls and illicit markets? And if you could wave a magic wand, how would you have them make them instead?
KG: If there were a better way to make these estimates, I suspect people would use it. But a universal method doesn't exist. In many cases, it is simply impossible to get the data to make credible estimates. What we're calling for is a bit more humility among purveyors of statistics, as well as a greater willingness to say that sometimes we don't or can't know.
At the same time, there are several straightforward questions that people can and should ask when evaluating a proposed number. What is the source of the numbers, and how is what is being measured defined—for instance, who is a combatant? What constitutes a combat-related death? What are the interests of whoever is providing the numbers? Do they stand to gain or lose if the statistics are—or are not—accepted? And finally, are there competing figures? And, if so, what do we know about their sources, measurements, and methodologies?
TCR: How can the average person tell if a statistic is suspect? Are there telltale signs?
KG: There are some red flags. Nice, round numbers suggest guesstimation. So do numbers that are significantly out of whack with known benchmarks, or that appear shockingly large or small. Be wary of statistics in which the definition of what is being measured is unclear or changes over time. Numbers produced by issue advocates that appear in isolation, or that conflict with numbers produced by groups without a vested interest in the issue, are also suspect. If you're interested in learning more about how to identify problematic statistics, I recommend Joel Best's Stat-Spotting: A Field Guide to Identifying Dubious Data.
TCR: Who do you think are the most irresponsible purveyors of suspect statistics—advocates, journalists, state agencies or politicians?
KG: I don't think it is possible or even fair to single out a particular group as especially guilty. There is a good deal of blame to go around, and there are often incentives to engage in politicization of statistics. Organizations get pressure to “give us a number,” even when providing credible statistics is simply impossible.
TCR: You write that specific government agencies, including the FBI and ICE, have sometimes taken advantage of bad but widely publicized numbers to get funding and support for things like increased immigration raids and drug-related arrests. Have you heard anything from these agencies since the publication of the book? Do you think calling them out will make any difference in the future?
KG: I haven't heard from any agencies directly, although I have been told that the book is being read by people inside some U.S. government agencies. And I also know that a copy was requested for the White House library.
Julia Dahl is a contributing editor to The Crime Report.