
COMMENTARY: 8 things US pandemic communicators still get wrong

As we approach 2 years of COVID-19, US pandemic messaging has settled into some counterproductive patterns. I want to address eight of these risk communication mistakes that public health officials and experts keep making. Turning them around can rebuild trust and help save lives.

In August 2020, CIDRAP published my commentary titled, "Public health's share of the blame: US COVID-19 risk communication failures." I tracked what I saw—and still see—as a series of missteps by public health officials in the early months of the pandemic:

  1. Over-reassuring the public
  2. Panicking and overreacting
  3. Flubbing the rationale for lockdowns
  4. Abandoning "flatten the curve"
  5. Insisting that public health should be in charge

Except for misstep 5, which is still super important, this list of risk communication mistakes now reads like ancient history. It's hard to remember back that far.

I have produced updated lists from time to time (see this one from March 2021, for example). The most recent was a Nov 15, 2021, Zoom presentation to the Minnesota Department of Health that spurred this commentary.

The eight risk communication mistakes in this commentary aren't necessarily the biggest challenges public health officials and experts face—maybe not even their biggest risk communication challenges. But they are likely to be among the most remediable, since they stem from officials' and experts' own behavior. I think these mistakes keep happening and do real damage, but they can be remedied, so revisiting them isn't just backward-looking.

One of the most stunning surprises of the COVID pandemic has been the growing importance of trust—or, rather, mistrust. A sizable slice of the American public has come to mistrust the public health establishment and the pandemic responses it recommends.

It has long been a truism for me that when low trust is a problem, we should focus on our own behavior: "They don't trust us" is a less useful starting point than "We're not earning their trust." I think these eight risk communication mistakes are a big piece of how US public health has forfeited some trust.

1. Overconfidence and failure to proclaim uncertainty

It is not easy to communicate uncertainty. The public doesn't want to hear it, so to truly get it across, you have to proclaim it, not just acknowledge it.

Doing so goes against the grain for most public health agency spokespersons. They rightly think the public prefers confident-sounding officials.

But the public can tolerate official uncertainty, if it's confidently and matter-of-factly stated: "We're building our boat and sailing it at the same time." Among the many benefits: the damage done when you turn out to be wrong is a lot lower.

Officials' overconfidence regarding COVID has been too obvious to belabor. Among the early claims confidently asserted and later proven wrong: there's no reason to think the virus is spreading significantly in the US; masks are useless; the most important thing you can do is wash your hands; it's not airborne; and so on. I had dinner recently with a friend who told me, "I just don't trust what they say anymore. They've been so sure and so wrong so often."

Plenty of uncertain things are asserted with equal overconfidence today. We just don't know yet which ones will turn out to be mistaken.

Consider this hypothetical example—and ask yourself why I can't find many examples like this that aren't hypothetical: "Cases are falling, and we honestly don't know why. We wish we did. We'll keep watching the data and tell you what we learn."

One overconfidence issue that particularly bothers me is attribution bias, especially about surges. Every time the COVID situation gets better or worse, public health people "explain" why. Officials and experts rarely say—CIDRAP Director Michael Osterholm is a notable exception—that they don't have a clue why something happened; that the virus does what the virus does; that we're not steering this ship, we're passengers.

These overconfident attributions aren't science-based, but they're not random. Often they seem to be based on what public health agencies and experts want the public to think and do. Bad news is attributed to not enough people doing what you asked them to do; good news is attributed to lots of people doing what you asked them to do.

Another example: I keep reading that flu disappeared in 2020-21 because of COVID precautions—even though it disappeared also in places like China, where normal life had pretty much resumed until Delta emerged.

When you overconfidently attribute events we don't really understand, like rises and falls in case numbers, it undermines public confidence in the things you truly can confidently attribute. The fact that vaccinated people are much less likely than unvaccinated people to be hospitalized with COVID is genuinely attributable to vaccination, for example.

A good risk communication strategy is to pair something you know with something you don't: "Even though we really don't understand why waves of infection rose here and fell there, we are very confident that the vaccines have reduced people's COVID hospitalization risk."

2. Failure to do anticipatory guidance

Anticipatory guidance is the risk communication term-of-art for telling people what to expect. It is a linchpin of crisis communication. Knowing what to expect helps people prepare, emotionally as well as logistically. It also inoculates them against false rumors.

I realize that it's hard to do anticipatory guidance about novel pathogens. You don't know what to expect. At least you can help people expect that! One of my favorite good examples is from Centers for Disease Control and Prevention (CDC) head Jeff Koplan in the early days of the 2001 anthrax attacks: "We will learn things in the coming weeks that we will then wish we had known when we started." That's an elegant way of saying, "We expect to make mistakes, find out about them, tell you about them, and correct them."

You can also help people know what to expect by offering them algorithms. "If the test positivity rate goes up to X, we will probably reinstate our mask mandate." That not only tells them what to expect if X happens. It tells them that you don't know whether X will happen. It tells them that you're preparing for X in case it happens. It tells them that your algorithm is tentative, not a promise (that's why you said "probably"). And it tells them that if they don't like wearing masks, they should do what they can to keep X from happening.

I think such algorithms are good risk management as well as good risk communication. Before you remove your mask mandate, figure out under what circumstances you're likely to want to reinstate it. And tell us your algorithm when you remove the mandate, rather than belatedly trying to explain why you're making people put their masks back on after having given the impression that mask wearing is over forever.

Worst-case scenarios are a particularly important and particularly neglected type of anticipatory guidance. It's an axiom of risk communication that any scenario that's likely enough to be worth planning for is also likely enough that you ought to tell the public about it (so the rest of us can plan for it too—or at least prepare emotionally for it).

"Are you planning for a more transmissible or more virulent variant than Delta?" I asked the Minnesota Department of Health on Nov 15. "If so, are you talking about the fact that you're planning for it?" Ten days later Omicron came on the scene.

Paradoxically, anticipatory guidance about worst-case scenarios can often calm people down. They may already worry that bad news is coming; it's a relief of sorts when the other shoe drops. Or they may have been dreading something even more awful, left alone with their fears (and all the more frightened) by false reassurances from other sources. When they hear from you about the worst case, they may get on the other seat of the "risk communication seesaw" and remind themselves that the worst case probably isn't the likeliest case. At a minimum, your candor builds trust. It shows them that you're not afraid to give them upsetting information, so they are less inclined to suspect you're covering things up.

Of course, for people who haven't yet considered how bad things might get, anticipatory guidance about worst-case scenarios can be frightening rather than calming. So be it. Unrealistic calm shouldn't be any risk communicator's goal. Any time the public is less alarmed than your agency about what might be coming, your anticipatory guidance has been too reassuring. Better to let people go through their adjustment reactions to the scary possibilities now, so if and when the on-the-ground situation worsens they'll be better able to cope—and calmer, too.

3. Fake consensus

Like most professions, public health is a guild, and guild members are deterred from deviating publicly from mainstream guild opinion. It can sometimes work like this: A 20% minority of the experts believe something. The 80% prevail. Most of the 20% go silent. The 2% who speak up look like cranks. And journalists and other non-experts (even non-experts within public health) get a misimpression of expert consensus.

In highly uncertain crisis situations like the COVID pandemic, fake consensus can be very harmful. Policymakers are deterred from giving the minority position the consideration it deserves. Researchers are deterred from studying the minority position; granting agencies are deterred from funding research that explores it; journals are deterred from publishing evidence that supports it. In the worst cases, the majority position becomes reified not just as the expert consensus but as inviolate scientific truth that only an anti-science denialist would dare to question.

Under these conditions, discovering that the majority position is mistaken, if it is, takes much longer. And when the news finally reaches the public that the (former) majority position was mistaken, not inviolate scientific truth after all, the loss in trust can be deep and long-lasting.

(Here again I want to acknowledge Osterholm's unique value. For decades, he has somehow managed to assert outlier opinions without being expelled from the inner circle.)

Please note that I am explicitly disagreeing with many other risk communication experts, whose mantra on the subject is "Speak with one voice." (We don't all speak with one voice on the wisdom of speaking with one voice.) I agree that real expert consensus is a wonderful thing, as long as it stays tentative and open to new evidence. Fake consensus that masks real disagreement is something else entirely.

There have been several patterns of fake consensus vis-à-vis COVID. The most dangerous is shutting up the dissenters, or bashing them so badly that potential followers shy away and they can't get a fair hearing. The maltreatment of the Great Barrington Declaration authors comes to mind. The question isn't whether they were right or wrong to oppose last year's lockdowns; the question is whether the mainstream was right or wrong to try to muzzle them. Wrong, I think. Badly wrong.

A less extreme case: Proponents of aerosol transmission were widely ignored for far too long, partly because many of them came from disciplines (like engineering and fluid dynamics) that public health professionals knew little about, and published in journals that public health professionals rarely read. How many lives might have been saved if Lisa Brosseau and her colleagues had been listened to sooner?

A different pattern is when both sides speak as if the other side didn't exist, as if their half of an ongoing debate were the consensus position.

Often the fake consensus starts with a genuine consensus regarding the scientific data, but then tacks on a faux consensus regarding what to do about it. Public health policies about COVID or anything else are necessarily grounded in both scientific judgments based on evidence and trans-scientific judgments based on values. The debate about COVID vaccine boosters in August through October 2021 is a nice case in point. The debate was never mostly about the scientific evidence. It focused instead on two trans-scientific questions: How important is it to reduce the incidence of mild breakthrough infections? And with regard to severe breakthrough infections, should we take a "better safe than sorry approach" based on preliminary data, or should we wait for stronger evidence before okaying a booster rollout?

I think the vaccine booster debate was also partly about many public health professionals' resentment of political leaders for getting ahead of the public health consensus, making their own judgments about these trans-scientific questions instead of simply following the science. In the minds of many public health professionals, "follow the science" really means follow the scientists—that is, follow them—even with regard to choices that are about values more than science and even when there is no scientific consensus on these values choices. Their outrage that President Joe Biden got the policy horse before their scientific cart may have delayed their acceptance of the wisdom of universal COVID booster access.

4. Prioritizing health over other values

The vaccination mandate from the Occupational Safety and Health Administration (OSHA) is fundamentally about health versus liberty. CDC's eviction moratorium was fundamentally about health versus property rights. School closures were fundamentally about health versus education. Lockdowns were fundamentally about health versus economics and psychological wellbeing.

In each instance public health officials are entitled to make their case that health should prevail. But they're not entitled to pretend that there is nothing of value on the other side of the debate.

Acknowledging that there is something of value on the other side of the debate is what I call "even-though risk communication." Here's a hypothetical example: "Even though this mandate restricts individual liberty, and that loss is real, we believe the lives it will save justify it."

Public health officials and experts understandably prioritize health over many other goals and values: liberty, property rights, education, economics, psychological wellbeing, convenience, quality of life, etc. Two choices make sense:

"We focus almost exclusively on public health considerations. Political decision-makers listen to our advice but they don't necessarily follow it, because they must attend to other criteria as well."

"We're the decision-makers. So we can't afford to focus just on public health considerations. We build non-health criteria into our public health decisions."

But insisting on being the decision-makers while ignoring or seeming to ignore criteria other than public health undermines public acceptance and public trust.

This is a huge issue. To begin regaining trust, public health officials must either explicitly take other criteria into consideration or explicitly confine their role to rendering nonbinding public health advice to political decision-makers who will take those other criteria into consideration.

And to speed the process of regaining trust, own the prior mistake: "We have sometimes spoken and acted as if the only thing that matters is public health. That has alienated people who are quite rightly insisting on the importance of other goals and values, too."

5. Prioritizing health over truth

Public health has a long history of deciding what to tell people based on what will get them to do the right thing to protect or improve their health.

Sometimes that means flat-out lying. The polio eradication campaign, for example, spent many years telling parents in the developing world that the oral polio vaccine can't cause polio, hiding the reality of vaccine-associated paralytic polio (VAPP) and vaccine-derived poliovirus (VDPV) in order to encourage vaccine acceptance.

More often it means misleading without lying, cherry-picking data to emphasize the health-promoting portion of the truth. Consider the false claim that flu vaccination was 70% to 90% effective, a claim (grounded in early studies of healthy, young soldiers) that public health continued to make long after everyone in the field knew or should have known that in most years for most vaccinees the flu vaccine doesn't work nearly that well. (For a third time I need to mention Mike Osterholm, who was instrumental in pushing public health to abandon the 70% to 90% pretense—though officials rarely acknowledge the dishonesty of their prior claim.)

COVID examples of public health professionals' dishonesty in the service of health are plentiful. Perhaps the "best" example is President Biden's top medical advisor, Anthony Fauci. Fauci has acknowledged telling people there was no reason to wear masks in part because he was worried about the mask shortage in healthcare settings. He has acknowledged making overly optimistic claims about COVID herd immunity because he thought the public wasn't yet ready to hear what he really believed about that. With extraordinary lack of self-awareness, he continues to maintain that he has done nothing to undermine public trust, and that anyone who mistrusts his pronouncements is mistrusting science itself.

Fauci was for decades a genuine public health hero. His contributions to our country and our world are undeniable. Sadly, his contributions to the deadly polarization of COVID precautions and widespread mistrust of public health messaging are also undeniable.

I have argued for decades that prioritizing health over truth risked undermining the credibility of the entire public health enterprise. I didn't have a lot of examples. (In recent years the Dengvaxia vaccine rollout in the Philippines was one.) Even when my public health clients reluctantly conceded that, yes, they do sometimes say not-quite-honest things in order to save lives, they invariably pointed out that their dishonesty did indeed save lives, lives they could document, whereas I had little evidence for my claim that they were eroding trust in the process.

Sadly, COVID has given me a lot of new ammunition.

6. Failure to own your mistakes

When the World Health Organization (WHO) was developing its Outbreak Communication Guidelines after SARS-1, it hired my wife and colleague Jody Lanard to produce a rationale and a draft. Jody submitted six overarching guidelines. WHO adopted five of them. The one it couldn't stomach: "Admit and apologize for errors."

Public health continues to find that extremely difficult. There are at least four common patterns.

  1. Sometimes you let the error fester uncorrected. During the 2009 H1N1 flu pandemic, for example, the CDC misjudged which age-groups were likeliest to be most affected, and designed vaccination priorities based on that mistaken prediction. When the data came in, the agency decided to stick with the priorities it had rather than let the public realize it had been wrong. Similarly, public health agencies all over the United States pushed near-fanatical cleaning and sanitizing of surfaces early in the pandemic, despite the fact that the CDC's webpage on probable routes of transmission clearly stated that while COVID transmission from touching something contaminated "may be possible," it is "not thought to be the main way the virus spreads." As tens of millions of Americans and American businesses scrubbed and wiped, the CDC did virtually nothing to discourage this hyper-hygiene theater.
  2. Sometimes you update your judgment in the dark of night. The claim or recommendation changes without any acknowledgment of the prior mistake. This is characteristic, for example, of many public health agency websites. Key webpages have evergreen URLs, even as the webpage content keeps changing. The date of the most recent revision is usually clear. But what got revised since last time is usually not specified. This creates an illusion that nothing has changed. It prioritizes protecting the agency's reputation over guiding readers, who would be far better able to update their own understanding if you told them explicitly what you just changed your mind about.
  3. Sometimes you update your judgment and tell us you did so—which is a huge improvement—but try to mislead us about the reasons for the change. You claim the situation changed; you were right at the time but then came Delta. Or you claim that new science emerged. You were right based on the science at the time, but those were preliminary studies and now you know more. Of course, sometimes the situation truly does change and sometimes new science truly does change our understanding. (As I've already noted, telling people about these changes is harder and more damaging if your agency was overconfident, understated your uncertainty, and failed to provide anticipatory guidance about the likelihood of changes.) But quite often the situation truly hasn't changed, nor has the science. You just ignored some of the evidence for a while; or most of you did, and cowed the others into silence. That was true of the 70% to 90% flu vaccine efficacy claim, for example. Vis-à-vis COVID, it was true of asymptomatic transmission, true of aerosols versus droplets, etc. It is true now of the minimal usefulness of ill-chosen, loose-fitting cloth face coverings. When public health agencies finally decide to acknowledge the evidence about those, odds are they will cite new science, as if relevant data hadn't been available for ages.
  4. Sometimes you explicitly deny your prior position. You were misquoted. You were quoted out of context. The public was confused.

Since this commentary is about COVID risk communication mistakes, I need to stress the value of owning your mistakes, not just correcting them. Any improvements you are able to make in your risk communication will yield much quicker dividends if you tell us what you've been doing wrong, say you're sorry, and vow to do better.

7. Failure to address misinformation credibly and empathetically

The topic of how to tell people they're wrong about something is even more complicated than some of the other points I've been making today. I want to focus on just two aspects: credibility and empathy.

Vis-à-vis credibility: It's virtually impossible to address other people's misinformation to an audience that fervently believes (often accurately, in my judgment) that you yourself are guilty of misinformation. I similarly used to urge my corporate clients to stop calling out activists on their lies when the company itself was widely thought to be lying. It's a long-term project to earn a reputation for trustworthiness. In the meantime it is self-deceptive to imagine that your own credibility problems aren't at the core of your difficulty rebutting the misinformation promulgated by others.

One step in the right direction: Distinguish three different kinds of content. Public health professionals understandably (even rightly) hate all three, and tend to call all three misinformation. But only one truly is:

  1. Demonstrable falsehoods. That's the one that really is misinformation.
  2. Opinions you disagree with—even if most of the data and most of the experts are on your side. I'm tempted to stick with an example that most public health professionals would acknowledge (however reluctantly) is at least debatable. Example: Masks on schoolchildren do more harm than good. But I also want to include here opinions so far-fetched that you're sorely tempted to consider them flat-out falsehoods. Example: Ivermectin works.
  3. Factoids that are technically true but misleading in practice—that is, they are likely to lead people to mismanage their own health. Example one: COVID vaccines were granted Emergency Use Authorizations even though the phase 3 trials failed to yield any statistically significant evidence that they reduced the COVID death rate. (Studies big enough and long-lasting enough to yield enough deaths in the placebo groups would have delayed the vaccine rollouts unconscionably.) Example two, an oldie: Not every vaccine dose recommended by the CDC is necessary or even useful. (The CDC states that "one dose of vaccine confers long-term, probably lifelong, protection against rubella." The second rubella dose is administered only because it's part of the MMR [measles, mumps, and rubella] combo shot.)

Rebut only #1. Feel free to explain why you think #2 is mistaken and #3 is misleading, but stop claiming they're misinformation. And stop supporting social media censorship of #2 and #3 as misinformation.

My second point is about how to rebut misinformation empathetically—genuine misinformation, my #1. Remember, you are talking to people who believe it, or at least have heard it and not rejected it. Six pointers:

  1. Consider not trying. The collateral damage when you rebut misinformation is that you give it greater currency. People who haven't encountered it elsewhere will encounter it in your rebuttal. People who have forgotten it will remember it thanks to your rebuttal. Arguably misinformation is worth rebutting only if it is widespread or likely to become widespread.
  2. If you decide to go ahead despite the potential collateral damage, go whole hog: Repeat the point you're rebutting. Because of #1, a lot of experts advise never to repeat misinformation. But rebuttals of unmentioned falsehoods rarely work. If you truly want to change people's minds, I think you can't just tell them the truth; you have to respond explicitly to the falsehood they currently believe.
  3. Even as you rebut the misinformation, validate your audience's reasons for believing it. This is crucial in demonstrating empathy. You can't change people's minds by telling them how stupid they are. Explicitly tell them why they're not stupid: "A lot of people believe that." "It used to be true." "It makes logical sense, whereas the actual truth is counterintuitive." "Even I used to think so until I came to work for the health department and they sent me to reeducation camp."
  4. Validate the good reasons your audience has for resisting your message. You can't be trusted. You have a vested interest. You're speaking for Big Pharma, or for the Libtards, or for white folks. They've built a major commitment to their viewpoint, and they're understandably reluctant to let it go easily. They already have an opinion that you're trying to change, so the burden of proof is on you. Say so.
  5. Insofar as you can—if you're one-on-one or in a meeting, for example—listen more than you speak. Start by offering stakeholders an opportunity to vent, and use all your active listening skills while they tell you why they believe what they believe. If you do that long enough, eventually they'll want to know your response to what they had to say, and they'll actually demand that you take a turn. Wait for that. Then echo to make sure you understood what they told you. Then ask questions. Then list a few points of agreement. Finally, indicate that you do have concerns about some of what they said, and ask whether this is a good time for you to go into them. All these steps further build empathy.
  6. In your actual rebuttal, try to take people on a journey from their current understanding to the understanding you want them to have. The pathway may be marked with data, emotions, anecdotes, testimonials, or logic—whatever rhetorical tools you have at your disposal. Talk about the journey and the steps along the way a lot more than the destination. Bring them along as slowly as you need to. The more you know about how your audience came to believe what they believe, and the more you know about how some people have moved off that view to yours, the better able you will be to construct the pathway.

8. Politicization

The politicization of COVID is one of the main reasons for the pandemic's horrific death toll in America. Public health professionals usually blame it on former President Donald Trump and his allies. Trump deserves a good deal of the blame. But so does public health, I think.

And it's important to remember some precursors. The political polarization of infectious disease outbreaks is a staple of medical history going back centuries. Going back only a few years, we might want to remember the Ebola quarantine controversy and the Zika funding controversy. Both became left-right issues, largely at the hands of the left.

With COVID, the politicization watershed moment may have been the antiracism protests that followed the murder of George Floyd. Public health professionals who had condemned anti-lockdown demonstrations as horribly dangerous super-spreader events found ways to embrace anti-racism demonstrations as somehow not really dangerous at all. Some said racism was a public health crisis and so antiracism demonstrations were public health achievements, even in the middle of vaccine-less surges. I think that's the moment when the public health profession got identified by many on the right as a left-leaning enterprise they shouldn't trust.

Almost as important: the widespread accusation that linking COVID to China was racist, an accusation made about everything from travel restrictions to language (terms like "Wuhan Coronavirus") to the lab-leak hypothesis.

Also interesting to me is how public health has addressed vaccine hesitancy among African-Americans versus vaccine hesitancy among Trump supporters. Early on, the former got much more attention than the latter. That's more balanced now. But there is still much more sympathy for people of color's hesitancy and suspicion about public health than for the hesitancy and suspicion of conservatives.

The two demographics have similar reasons for their mistrust, starting with the fact that public health is run mostly by liberal white people. One group's mistrust has been validated and addressed. The other's has much too often been condemned. I look forward to the day when recruiting conservatives is seen as an important priority for public health agencies, just as recruiting people of color is rightly seen.

A point I hope is occurring to you about now without my having to make it again: You get only part credit for putting a stop to the ways you have politicized public health. For full credit you need to apologize to right-leaning people for your past politicizing.

And if you daren't apologize to right-leaning people because that could offend left-leaning people, your peeps, I rest my case.

Ask yourself 3 questions—plus a booster

I want to end with three questions for public health professionals:

  1. Which of the eight COVID risk communication mistakes discussed in this commentary do you think you make? Which of the eight does your organization make?
  2. To what extent do you think you and your organization can correct these mistakes? Are there some you can improve and others honestly you think you can't?
  3. Given everything else on your plate, how much of a priority do you think improving these mistakes should have? Will it help redress eroding trust? How much of a priority is that?

And a fourth question:

  4. What other COVID risk communication mistakes are public health professionals making that strike you as important to improve? Can you take a minute to write to me about them at peter@psandman.com?