Havas’s decision to cut ties with Google and YouTube marks a victory for a new type of consumer activism. But it could also give a taste of fights to come.
MARCH 18, 2017 —As an al-Shabab militant called for jihad inside Kenya, an ad at the base of his YouTube video urged viewers to “Book Now” for a Sandals tropical vacation. An armed neo-Nazi promoting Combat 18 was paired with a call to volunteer for the hospice charity Marie Curie.
Last month, an investigation by The Times of London detailed these as two examples of how major brand names frequently appear alongside hateful YouTube content. A separate article in The Guardian reported that the UK revenue from these ads amounts to about £250,000, or $318,000, for “extremists and hate preachers.”
On Friday, ad agency Havas Media, whose high-profile clients include Hyundai and the Royal Mail, announced that it would stop placing ads with YouTube and its parent company, Google. The tech giant – which commands more than 30 percent of global online ad revenue and draws 90 percent of its own revenue from placing ads – was quick to respond.
“We’ve begun a thorough review of our ads policies and brand controls, and we will be making changes in the coming weeks to give brands more control over where their ads appear,” wrote Google UK managing director Ronan Harris.
In recent months, online activists have been targeting offensive Web content by trying to shut off its streams of ad revenue. Havas’s decision marks an early win for this strategy, but could also give a taste of fights to come.
“I think that advertisers are becoming wary because they’re seeing consumer backlash,” says David Carroll, an associate professor of media design at Parsons School of Design at The New School. Havas’s decision – which it made on behalf of its UK clients – indeed prompted Google to change course.
But Professor Carroll, who researches how we use digital media to interact, doesn’t think the playing field has shifted. In a phone interview with The Christian Science Monitor, he observes that, because of the largely automated, milliseconds-long online advertising process, “advertisers, brands, and publishers [still] have no control over where these ads show up.” When clients’ ads get paired with offensive content, websites “have to act defensively.”
When an online ad appears on a racist or terrorist YouTube video, the video’s creator gets most of the revenue, with Google also taking a cut for its role in placing the ad. When it appears on a website like Breitbart – which has been widely accused of spreading fake news and offensive content – most of the money goes to that publisher.
If the viewer finds either type of content offensive, Carroll explains, the real losers are the ad firms and their clients. With the rise of social media, “a new purpose of news is to show other people what you believe in.” This means that, “if people are rejecting a particular publisher because it is spreading what they believe as falsehoods, or is spreading hyper-partisan, very damaging misinformation,” then brands and ad agencies face greater pressure to cut ties with that publication.
Applying this pressure has become a new, effective form of consumer activism. One Twitter group, the Sleeping Giants, flags brands whose ads appear on Breitbart. It claims nearly 1,600 ad removals so far. “It’s hard to imagine that Havas had not watched the Sleeping Giants thing very closely,” says Carroll.
But at the same time, he adds, “the only way to solve this problem was through shame.” He expects this pattern of activist groups calling out companies to continue, until a bigger change happens in how companies reach consumer eyeballs via the Web.
The fact that a Sandals ad showed up under al-Shabab’s video had little to do with the terrorist group, and more to do with the vacation preferences of whoever clicked on the video. In the internet era, “ads are no longer placed based on the context,” like a particular magazine for a print ad or a neighborhood for a billboard.
Instead, advertising firms focus on “matching ads to people.” As you browse, cookies collect information about which sites you visit. Each ad that you see – including the ones appearing next to this article on the Monitor’s website – reflects those prior habits.
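The logic behind “matching ads to people” can be sketched in a few lines. This is a toy illustration under assumed names (the profile, the inventory, and `pick_ad` are all hypothetical, not any real ad platform’s code): the ad chosen depends on the viewer’s browsing profile, not on the page where it will appear.

```python
# Hypothetical interest profile, of the kind cookies might accumulate
# as a user browses (higher score = stronger inferred interest).
user_profile = {"travel": 0.9, "news": 0.4, "charity": 0.1}

# Hypothetical ad inventory, each ad tagged with the interest it targets.
ads = [
    {"brand": "beach resort", "interest": "travel"},
    {"brand": "hospice charity", "interest": "charity"},
    {"brand": "newspaper subscription", "interest": "news"},
]

def pick_ad(profile, inventory):
    """Return the ad whose target interest scores highest for this viewer,
    ignoring the content of the page the ad will run on."""
    return max(inventory, key=lambda ad: profile.get(ad["interest"], 0.0))

print(pick_ad(user_profile, ads)["brand"])  # the travel ad wins for this user
```

Because the page itself never enters the calculation, a travel ad can land just as easily on an extremist video as on a travel blog – which is how a Sandals ad ended up beneath al-Shabab footage.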
These programs don’t just determine the ads that show up next to offensive content; they also help unsavory characters spread that content. As Carroll puts it, the “technology that matches your preference [for] laundry detergent is the same technology that matches a hyper-partisan inflammatory falsehood that you’re going to share on your streams because you want to believe it.”
Is an end to these practices in sight?
If so, it could come next year, when a new set of EU data-protection laws is scheduled to take effect. The new rules – likely to affect US companies as well – switch from a system of “you having to know who’s collecting data about you, and opt[ing] out of it,” to a model where the internet user has to opt in to a data-collection operation.
That may not sound like much, but Carroll thinks it could help internet users discern good forms of data collection from bad ones – and, in doing so, push “a lot of the bad actors out of the marketplace” and restore “highly-trusted relationships between users, companies, publishers, and advertisers.”