How Police Can Kill Fewer People: Numbers vs Narratives

Photo by Keoki Seu via Flickr

In July 2018, the venerable British medical journal The Lancet reported on research that explored the “spillover effect” of police shootings of unarmed African-American men on the mental health of African Americans in the states where they occurred.

Public health scholar Jacob Bor of Boston University and his colleagues found that police killings of unarmed black Americans had adverse effects on mental health among black American adults in the general population. They described it as “nearly as large as the mental health burden associated with diabetes.”

The study made a splash.

In response, Justin Nix, a criminologist at the University of Nebraska at Omaha, and Dr. M. James Lozada, an anesthesiologist at Vanderbilt University Medical Center, noting that the Lancet study had received “extensive scholarly, media, and social media attention” and had been cited 70 times by the end of 2019, set about examining its methods and its claims.

Last month, they published their conclusions. The Nix and Lozada response, lightly seasoned with academic snark, alleges that the Lancet authors, who relied on classifications supplied by the activist research collaborative Mapping Police Violence, miscoded 93 incidents as police killings of unarmed black individuals when, in fact, the decedent in each of those encounters (30.7 percent of the “unarmed black victims”) was either armed or was not killed by police acting in the line of duty.

Remove those incidents from the calculation, Nix and Lozada write, and the statistically significant effect of exposure to police killings on African-American mental health that the Lancet authors reported disappears.
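To make the mechanics of the dispute concrete, here is a minimal sketch, in Python, of the recoding step at issue. Every record, field name, and classification rule below is hypothetical, invented for illustration; the real datasets and coding protocols are far more elaborate. The point is only that a coding rule (does a toy gun count as “armed”? does an off-duty killing count at all?) flows directly into the exposure counts on which any downstream analysis rests.

```python
# Toy illustration of how a coding rule changes exposure counts.
# Every record, field name, and rule here is hypothetical; the real
# datasets and classification protocols are far more elaborate.

# Each record: (state, how the decedent was classified, was the officer on duty)
incidents = [
    ("GA", "unarmed", True),
    ("GA", "toy weapon", True),   # the contested category: armed or unarmed?
    ("TX", "unarmed", False),     # off-duty killing: in scope or out?
    ("TX", "unarmed", True),
]

def exposures_by_state(records, toy_weapon_counts_as_armed, include_off_duty):
    """Count qualifying killings of unarmed people per state under one coding rule."""
    counts = {}
    for state, status, on_duty in records:
        if status == "toy weapon" and toy_weapon_counts_as_armed:
            continue  # recoded as an armed decedent, so excluded
        if status not in ("unarmed", "toy weapon"):
            continue  # armed decedents never qualify
        if not on_duty and not include_off_duty:
            continue  # off-duty killings dropped under the stricter rule
        counts[state] = counts.get(state, 0) + 1
    return counts

# An inclusive, Lancet-style coding vs. a stricter, Nix/Lozada-style recoding:
print(exposures_by_state(incidents, toy_weapon_counts_as_armed=False, include_off_duty=True))
# {'GA': 2, 'TX': 2}
print(exposures_by_state(incidents, toy_weapon_counts_as_armed=True, include_off_duty=False))
# {'GA': 1, 'TX': 1}
```

Whichever rule one endorses, the exposure variable, and any significance test run on it, inherits the coding choice; the 93 contested incidents are this toy example at scale.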

Nix and Lozada issue a stern warning against relying (as the Lancet authors had) on activists’ counts of police-involved fatalities. Doing so, they argue, may “diminish police legitimacy in the eyes of the public, reduce police morale, and hinder our understanding of the structural conditions we hope to improve.”

Although they do not quite state that the activists whose counts they debunk intentionally sought those consequences, Nix and Lozada don’t go out of their way to disclaim that implication either. Readers are left free to believe it if they choose.

A brisk round of point and counterpoint ensued; neither side persuaded the other.

The Heart of the Issue

Most people’s medical school ambitions are defeated by organic chemistry; mine were thwarted by long division, and I am not equipped to weigh the merits of the epidemiological arguments.

But maybe it is useful to supply one specimen-practitioner’s reaction, not to the merits of either side of the combat, but to the existence of the combat itself. Is this battle moving things forward or holding things back?

To put my own cards on the table, after 40 years in urban criminal justice systems I believe that there are environments where structural racism and racial bias (both implicit and explicit) operate every day. I think African Americans have noticed this, and I have to believe, until someone shows me otherwise, that it affects their mental health. It would affect mine if I were in their place.

Put both of these studies aside and you would still have to confront the fact that African Americans are killed—whether they are unarmed or not—in police encounters at three times the rate of whites, and that they account for 40 percent of police killings nationwide.

I also believe that careful statistical analyses of the criminal justice system’s outputs are valuable.

But the light that an analysis of the system’s outputs sheds on the processes that practitioners are embroiled in is oblique, and sometimes the shadows it casts can obscure elements of the practitioner’s daily reality.

Research Coding Practice

Researchers are often forced to follow Thurgood Marshall’s credo: “do the best you can with what you’ve got.” It isn’t obvious, for example, that respondents’ home state of residence—rather than, say, their city or media market—is the natural unit of study. But that’s how the mental health data are compiled, so that’s what the Lancet authors had to use.

To translate the elements of practitioners’ lives into something that can be studied, things have to be simplified—“coded.” This argument between researchers arises from that stage of the process.

From the frontline point of view, it isn’t clear that the protagonists are arguing over the same thing. Bor and his fellow Lancet authors deal with “exposure to accounts” of events, whereas Nix and Lozada analyze the actual facts of the events themselves.

The Lancet authors start with one compressed narrative (roughly, “Bad cop kills unarmed man”). Nix and Lozada argue that the Lancet study undercounts another (“Good cop kills armed or dangerous man, or kills while off duty”).

Although the public inevitably learns of a mixture of these events, neither team of researchers estimates the impact that the cognitive frame supplied by “bad cop” accounts—when the media report those with greater repetition and intensity—might have on the mental health of citizens trying to interpret a fatality.

(After all, if the “activists” misunderstand the circumstances, the public might too.)

Blinding By Blaming

But the most interesting thing about the researchers’ dispute is the central point over which the contestants join battle.

The debate, like many others in criminal justice, reflects our fascination with culpability. How many cops are guilty? How many blameless? After all, a bad event requires a bad author, or at least that’s what we desperately want to believe.

When Nix and Lozada warn that activists (whose numbers the Lancet study relied on) generated over-simplified versions of complex events, their complaint is that the activists sorted the deaths into the wrong piles—that they lumped “good cop” events in with the very rare “bad cop” fatalities.

Because the argument is over “armed v. unarmed,” it pulls the focus “down and in”: onto the individual cop who pulled the trigger, at the last second before he fired.

But people who think and write about safety in other fields would say that although guilt and blame are interesting questions, answering them is a bad place to stop.  We need to go “up and out” to understand the conditions and influences that drove the cop’s decision.

When an infant is killed by a drug overdose in the Neonatal Intensive Care Unit, an exclusive focus on the nurse who gave the injection prevents us from recognizing and addressing weaknesses in the hospital’s prescribing practices, in its computerized medication reconciliation protections, in its shift work, in its checklists, and in the interactions among them.

Contemporary patient safety leaders would resist the temptation to crack down on the “bad apple” nurse and would widen the focus to survey the underlying “organizational accident” roots of the tragedy. They would ask whether the nurse was “set up to fail.”

In recent years we’ve seen a growing recognition among policing scholars that this approach has to be applied to police shootings. Joanna Schwartz, David Klinger, John Hollway and Sean Smoot, Lawrence Sherman, Barbara Armacost, and others have agreed that an officer-involved death should be seen as a “system crash,” not the work of a lone operator.

These authorities would argue that every fatal encounter—including encounters with both armed and unarmed citizens—is “complex.” They recognize a distinction between “complicated” (e.g., a jet airliner at rest) and “complex” (e.g., a jet airliner in operation). Outcomes in complex systems emerge from a swirl of conditions and influences that affect probable results; they are not generated by linear, sequential, mechanical relationships of cause and effect.

To understand what happened you need more than a performance review (“good cop or bad cop?”). You need an event review that appraises the system weaknesses that increased the likelihood of the bad outcome.

This doesn’t mean that we will be reduced to reviewing a useless pile of elaborate, detailed, but ultimately idiosyncratic, anecdotes.  Patterns will emerge; chronic biases will be recognized.

Beyond these “answers” that event reviews can provide are the good questions they can generate for empirical study. For example, in a recent article, Professor Paul Taylor detailed a randomized controlled experiment he designed and executed that revealed the influence of erroneous dispatch information on law enforcement officers’ mistaken decisions to shoot unarmed people.

Stories and Progress

Careful reviews of events, mobilizing the perspectives of all stakeholders, can illuminate endemic biases, mistaken policies and dangerous conditions.

Fewer equipment-violation traffic stops will lead to fewer dead people (both cops and citizens). Better emergency equipment will lead to more survivors after shootings. Supportive critical-incident mental health response capacity and de-escalation training will defuse many explosive situations. The roles of actors outside the police silo—in the courts, public health, or corrections, for example—are influential in many cases.

Our question could be not “Was the cop right to shoot?” but rather “Did this cop, with this training, this supervision, this information, this equipment, and these back-up resources have to encounter this citizen, with this background, in these circumstances?”

When, for example, a cop answering the fourteenth call for service to deal with a suicidal teenager shoots and kills him, we might ask how that event differed from the 13 encounters that preceded it. Could the system—as a system—have avoided this somehow?

Will it the next time?

The public never hears the professionals ask these questions, and the public rarely hears any answers. Generally, the response comes down to “Nothing to see here, move along.”

Of course it’s true that, if you are sorting things for statistical purposes, it is better to sort them correctly.

But it doesn’t seem likely that marginal adjustments in the count, however essential for statistical purposes, will move the needle very far in terms of either public trust or “spillover” health impacts.

It may be that protecting police legitimacy and morale and nurturing public trust depends more on filling the information vacuum (through efforts such as the Tucson Police Department’s Critical Incident Review Board program) than on sorting police acts accurately between the “armed” and “unarmed” piles.

After all, the Lancet authors’ stated goals, “To decrease the frequency of police killings and to mitigate adverse mental health effects within communities when such killings do occur,” are goals everyone shares.

Not every shooting that is “justifiable” from the perspective of the cop who has to pull the trigger was “unavoidable.”

Struggling to answer the wrong question will not lead us to the right answer.  Shifting our emphasis from who is culpable to what is avoidable might be a good place to start.

James M. Doyle is a Boston defense lawyer and author, and a frequent contributor to The Crime Report. He welcomes readers’ comments.

4 thoughts on “How Police Can Kill Fewer People: Numbers vs Narratives”

  1. “But it doesn’t seem likely that marginal adjustments in the count, however essential for statistical purposes, will move the needle very far in terms of either public trust or “spillover” health impacts.”

    While there is a lot to unpack in this article, and a lot I agree with, this particular sentence misses quite a few important facts. Mischaracterizing 30 percent of the events is hardly marginal; it’s a symptom of sloppy methodology. While the needle might not be moved very far by that sloppiness, it is hard to argue it is not moved, perhaps significantly, by widespread reporting on what is a flawed study. There is ample research showing that a significant determinant of people’s perception of safety is news reporting. By all means every incident should be examined and corrective action taken when fault is found in process and procedure, but the very first step should be correctly defining the scope of the problem, and bad statistical methodology is not helpful in doing so.

    • I think it’s more complex than that.

      Nix’s intro states pretty clearly that they chose to recode and re-characterize shootings as armed because “the defendant was carrying a toy gun which could have been mistaken by law enforcement for a real weapon” … that to me seems pretty clearly unarmed, but the different authors’ differing subjective views contribute to the coding, and how you code influences the outcome. It’s not sloppy, it’s subjective.

      Secondly, merely being armed doesn’t give law enforcement the right (or indeed a reason) to shoot you. Several of the high-profile incidents over the past few years have involved black men who were armed informing the police that they were armed, and the police choosing to shoot because they “felt threatened.” I would argue that you could classify that in either category.

      The truth is it’s very hard to be entirely objective when coding something, so if two researchers with completely opposite biases (conscious or otherwise) code a dataset, they will likely do so differently, especially since, in this case, the underlying data isn’t uniform. It’s messy, but I don’t think it’s sloppy; it’s just subjective (which is sadly unavoidable).

  2. As I said, doing it properly is better, and I’m no judge of the statistical practices. But if you start by limiting the definition of the “scope of the problem” to the “problem” of the ratio of bad/blameworthy frontline cops, you are likely to miss quite a lot. And if you simply count events without recognizing that accounts of events differ not only in their accuracy but in their repetition, vividness, and salience from the public’s point of view, you may be missing something more too. The problem of too many sloppy accounts of killings might be ameliorated by having fewer killings.
