The Case for Advertising Equivalency
“Ad value equivalency is conceptually wrong,” PR expert says.
That is a direct quote from David Rockland, partner and CEO of Ketchum Pleon Change and Global Research, who went on to say, “If you can’t recognize it as a bad idea, then you probably shouldn’t be in PR.”
It could be a matter of semantics, because exposure occurs quite differently in the public service advertising and PR worlds. It is important to address this issue, however, because the public service advertising profession has used advertising equivalency for more than three decades as a way to measure the value of a campaign.
To start the discussion, let’s review Mr. Rockland’s statement more carefully and try to find out what he and his colleagues at the Institute for Public Relations define as a better way to measure campaign impact. At the recent European Summit on Measurement held in Barcelona, leaders from 30 countries met to discuss global standards and practices to evaluate public relations programs.
One of those delegates, Andre Manning, global head of external communications at Royal Philips Electronics, said that his firm has “reworked its PR approach to ‘outcome communications,’ and totally abandoned ad value equivalency.” My response to this is: what leads these PR experts to think that anyone uses advertising equivalency value (AEV) exclusively as a measurement of success?
Everyone we know who engages in public relations and/or public service advertising programs uses advertising equivalency as just one of several important metrics for measuring campaign outcome. But why use it at all? That is the question being raised, and there are several reasons.
First, everyone can do the math. If they spent X dollars to create and distribute a campaign and got back Y in ad equivalency value, they know their ROI.
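The arithmetic really is that simple. As a minimal sketch (the article gives no formula, so this assumes the standard ROI definition of return minus cost, divided by cost):

```python
def campaign_roi(cost_dollars: float, ad_equivalency_value: float) -> float:
    """Return ROI as a ratio: (AEV - cost) / cost. Illustrative only."""
    return (ad_equivalency_value - cost_dollars) / cost_dollars

# Hypothetical example: a $100,000 campaign that earned
# $450,000 in ad equivalency value.
roi = campaign_roi(100_000, 450_000)
print(f"ROI: {roi:.0%}")  # prints "ROI: 350%"
```

The numbers above are invented for illustration; the point is simply that anyone with the campaign budget and the AEV figure can compute the ratio themselves.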
Second, AEV inherently reflects many different aspects of media value in the way it is calculated. For example, in calculating the AEV for broadcast TV exposure, the size of the market (as defined by population), the prominence of the station within the market, the time of day the exposure occurred, and the length, duration, and frequency of the message are all reflected in the AEV.
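To make that concrete, here is a purely hypothetical sketch of how those factors might combine. The article lists the inputs but no formula, so this model simply scales an assumed base ad rate by each factor; real AEV calculations are built from actual local ad-rate cards, not these invented multipliers:

```python
def broadcast_aev(base_rate_per_30s: float,
                  market_size_factor: float,   # scaled by market population
                  prominence_factor: float,    # station's standing in market
                  daypart_factor: float,       # e.g. prime time > overnight
                  spot_length_seconds: int,    # length of the message
                  airings: int) -> float:      # frequency of exposure
    """Estimated ad value of one station's PSA airings (illustrative)."""
    per_spot = (base_rate_per_30s
                * market_size_factor
                * prominence_factor
                * daypart_factor
                * spot_length_seconds / 30)
    return per_spot * airings
```

Whatever the exact rate card used, the structure is the same: every variable the article names, from market size to frequency, moves the final dollar figure, which is why AEV compresses so much media information into one number.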
As the first firm to develop PSA evaluation software back in 1983, we compiled reports showing details of media exposure and ad equivalency. We also demonstrated the impact this exposure had on our mission – recruiting young people into the U.S. Coast Guard. We used graphs to show a direct correlation between the amount of PSA exposure we were getting (the same would be true of PR editorial exposure) and the number of 800 phone leads coming into our call center. Prior to the Internet, other methods used to measure outcome were literature requests and the number of volunteers recruited, if that was the campaign goal.
We are not a PR firm, and the types of campaigns we distribute to the media are very different from the standard public relations effort designed to generate earned media. Even so, we do not think it is wise to completely discard an evaluation metric that is based on solid media logic. We probably could all agree, however, that it should be only one of several methods for determining the success or outcome of any given public education campaign.