Research Translation

Why Do Campaigns Fail? Lessons for Campaign Managers to Succeed

The case of the National Youth Anti-Drug Media Campaign: A critical analysis from a strategic communication perspective

Why do a majority of communication and marketing campaigns fail? The success rate of health campaigns in the US is only about 8% on average (Snyder et al., 2004). Some known culprits are poor funding and inadequate planning, but why do even well-funded, expert-driven campaigns fail to achieve their objectives? To understand why even big-budget, carefully conceived campaigns fall short, we conducted a strategic communication analysis of the US government’s most visible anti-drug campaign.

Unprecedented in size and scope, the U.S. government’s $1 billion National Youth Anti-Drug Media Campaign had everything going for it: A large budget, a dedicated office to co-ordinate the programme—the White House’s Office of National Drug Control Policy (ONDCP) along with the Partnership for Drug-Free America (PDFA)—and a team of industry professionals—Porter Novelli, Ogilvy and Mather (O&M), and Fleishman-Hillard—to design and implement the campaign. Yet independent evaluations showed that, in spite of high exposure and favourable recall of campaign messages, the campaign had no impact on the initiation, cessation, or reduction of youth drug use.

In fact, young people who had never used drugs and were exposed to the campaign were 2.5 times more likely to try drugs a year later—an unintended ‘boomerang’ effect. For those youth already using drugs, the campaign did not result in reduced use or higher quit rates. The parent component of the campaign—targeting parents as significant influences on youth—was partly successful: Parents exposed to the campaign were more likely to talk to their kids about drugs and to do fun activities together, a form of positive parental involvement. But the campaign did not improve parents’ monitoring behaviour, a key campaign target. Moreover, parental exposure to the campaign had no impact on their children’s beliefs and behaviours.

To examine these counter-intuitive results, we developed and tested a strategic communication framework drawing on health communication research and the “10 Steps in Strategic Marketing Planning Process” by Kotler and Lee (2008). In short, our examination suggests that campaigns progress through four stages: Formative research (background research), Strategy and tactics, Implementation, and Monitoring and evaluation. While many campaigns move through these four stages, the key to success is the ability of campaign managers to transfer the learnings from one stage to the next, and to respond effectively to external and internal threats (see Figure 1).

To understand the strengths and weaknesses of the campaign, we conducted an extensive review, including an analysis of 25 campaign documents and academic articles. We also conducted 20 in-depth interviews with federal government officials who directly managed the campaign, campaign experts, members of the advertising agencies, and community organisations. We used content analysis to code the interviews and documents into strengths and weaknesses during each of the four phases of the campaign, and then sent our analysis to interviewees for verification and feedback.

The results were analysed for each of the four phases of the campaign. In the formative stage, ONDCP and Porter Novelli conducted a broad literature review and consulted over 200 experts in the public and private sectors on what had and had not worked in public health campaigns. This extensive exercise culminated in a strategy document, informally named the ‘Burgundy Bible.’ Most interviewees, and the published literature, agreed that the strategy document contained a solid, evidence-based plan that drew on the best available social scientific theories and evidence. The failure, however, lay in the strategy document’s inability to anticipate or formulate effective responses to organisational weaknesses or threats.

The strategy document included communication objectives for each of the target audiences (different youth segments, parents, friends) along with strategic message platforms to achieve each objective, a strength. It failed, however, to include specific, measurable objectives or a formal monitoring and evaluation plan to assess campaign impacts and outcomes, a critical weakness. Moreover, the document’s original insistence on 25 strategies across 16 different audiences proved far too ambitious in the implementation stage.

Pressure to get the campaign started as early as possible resulted in advertising that was neither pre-tested with the target audiences nor part of the agreed strategy: Pre-existing fear-based ads by PDFA were used instead. “The Anti-Drug” advertising theme for parents worked well, but the “My Anti-Drug” theme for youth did not; as one interviewee put it, “youth weren’t looking for an anti-drug”.

While the parent component of the campaign was managed entirely by Ogilvy and Mather, the youth component was spread across several advertising companies due to government ‘pro bono’ regulations. As a result, many ad agencies volunteered to produce youth ads that did not always follow the best practices laid out in the strategy document. The discrepancy in implementation between the youth ads (pro bono) and parent ads (paid) most likely reflected the relative allocation of ONDCP funds: 60% toward parent ads and 40% toward youth ads, even though youth were the primary targets of the campaign.

The implementation stage also exposed the limits of using social science theories to shape youth behaviour through 30-second TV advertisements: An expert asserted that TV ads were a “blunt instrument, not a surgical tool”. And while the use of celebrities did raise the campaign’s visibility in the media, non-media tactics such as school programmes to dissuade youth from drug initiation, which could have reinforced key messages long after the ads disappeared from television, received only 5% of funding compared to 75% for TV advertising. Differing philosophies, priorities, and ways of doing business between the campaign management and its partners added to the organisational tensions.

Changes in the White House (a new administration, from Bill Clinton to George W. Bush) and in ONDCP leadership also shifted the campaign’s focus, for example toward targeting a single ‘gateway’ drug, marijuana, while ignoring inhalant and alcohol use. The ‘gateway drug’ hypothesis was challenged not only by the Behavioral Change Expert Panel (BCEP), a team of behavioural science academics convened to bridge the gap between strategy and implementation, but also by the rise of the medical marijuana legalization movement.

The campaign evaluation was managed externally, by the University of Pennsylvania’s Annenberg School for Communication, with little input from campaign managers due to government regulations. Although the evaluators were lauded for innovative methods of campaign evaluation—a good standard, according to the Government Accountability Office—their methods were criticized for relying on an untested technique (propensity scoring), lacking baseline data, and depending on self-reported measures. The evaluation was also poorly timed relative to the campaign’s phases and so did not help to improve the campaign, and the evaluation team did not assess non-media strategies and tactics. Finally, the complex evaluation process and results were difficult to communicate to policy makers, resulting in lower funding for the campaign in subsequent years.

Recommendations for Campaign Managers

  1. Managers should facilitate a core multifunctional team to enable communication links between contractors working on different stages of the campaign.
  2. Managers should effectively communicate learnings from one stage of the campaign to the next to enable mid-course corrections.
  3. Managers should monitor internal and external factors, and proactively engage with negative media coverage to resolve issues before they turn into an organisational or campaign crisis.
  4. While media campaigns are important for raising public and media exposure, campaign managers should seek to maximize impact by achieving synergy between advertising and non-advertising tactics. Managers should allow for more meaningful participation and flexibility in adapting a national media campaign to local conditions, to increase its effectiveness and sustainability long after the 30-second ads are no longer on the air.
  5. Managers should facilitate greater participation by end-user groups in campaign planning, as end-users such as community organisations are the arms and legs of the campaign.
  6. Small, frequent, and inexpensive evaluations, such as field experiments, should supplement a large-scale impact evaluation. Policy makers, campaign managers, and program designers could use these “mini evaluations” to feed insights into the campaign development process, with a focus on improving processes, building institutional capacity, and strengthening local community coalitions.
  7. Design and results of evaluation should be clear and understandable to facilitate communication with, and decision-making by, key stakeholders, including policy makers who decide on funding future stages of the campaign.

References

Trowbridge, J., & Thaker, J. (2016). The case of the National Youth Anti-Drug Media Campaign: A critical analysis from a strategic communication perspective. Cases in Public Health Communication & Marketing, 8, 136–169.

Kotler, P. & Lee, N. (2008). Social marketing: Influencing behaviors for good. Thousand Oaks, CA: Sage.

Snyder, L. B., Hamilton, M. A., Mitchell, E. W., Kiwanuka-Tondo, J., Fleming-Milici, F., & Proctor, D. (2004). A meta-analysis of the effect of mediated health communication campaigns on behavior change in the United States. Journal of Health Communication, 9(Suppl 1), 71–96. http://doi.org/10.1080/10810730490271548

 
