A lot has been said and written about viewability and brand safety in the world of digital display advertising over the past five years. When advertisers and brands started to better understand the nature of 'real-time bidding', concerns were raised mainly around ad fraud, brand safety and content distribution management.
Advertisers sought assurances and more control over where their content would appear, pressuring ad networks and media buying tools to develop better brand safety controls. And the industry has come a long way: beyond the technological developments in programmatic display that allow advertisers not only to block but also to handpick the websites and placements their ads will appear on, specialist legal teams were created to safeguard advertisers' rights.
More importantly, clients, agencies and ad networks are having open conversations on brand safety and they are collectively tackling issues and concerns.
But why, after so many years, are we still talking about brand safety as if it were a new menace, with so much confusion and catastrophising flooding the news? Partly because of the recent stories putting the largest social media networks (Facebook, YouTube) on the spot, but also partly because of misleading information, or a lack of it, around what brand safety is and isn't.
As defined by the IAB, brand safety means keeping a brand's reputation safe when advertising online. This includes avoiding advertising next to:
- Unlawful Content: Content that is against the law of the country the ads are targeted in (e.g. sexual assault/child abuse; graphic violence/death; promotion of drugs or illegal criminal activity).
- Illicit Content: Content that is widely accepted to be inappropriate but is not illegal in many countries (e.g. explicit sexual conduct, suffering and violence, hate speech).
- Unsuitable or Undesirable Content: Ad placement in environments that do not align with a brand's values or identity. For example, a vegan brand wouldn't want to show up next to chicken recipe content, as meat eating is not something it would want to be associated with.
The first and second points above are usually tackled through the native controls of advertising networks, particularly the social networks. The third is trickier, as 'what's undesirable' isn't always straightforward for a brand.
What’s the deal with brand safety on social and what’s the fuss all about?
Social platforms derive much of their audience from content generated for and by their users, and the conversations those users engage in. That alone makes it hard for brands to control where their content will appear.
With that in mind, brand safety controls for advertisers such as blacklists or content control measures are effective only on a certain set of placements, such as Audience Network (a third-party network partnering with Facebook), Instant Articles and In-stream, but not on in-feed content.
Is the newsfeed-like environment unsafe?
It depends on how you look at it.
Brand safety was always discussed in the context of ads appearing 'next to' unsafe content, with the display environment in mind. There was little discussion of the in-feed environment, possibly because it was considered safe by default: an advertiser's content would not appear next to any other content, but rather above or below it (as you scroll down the feed). This was widely accepted, and there was hardly any questioning of it, even from brands fairly protective of their reputation.
If you happen to live in the UK or Europe, you might remember the recent BBC investigation into Instagram, or YouTube's scandal, which created a plethora of reactions, leading some advertisers to pull their spend from the social media giants.
As a result, Instagram updated its offering to protect both users and advertisers from self-harm content in the newsfeed, rolling out a new reporting tool that lets users anonymously flag friends' posts about self-harm, and applying 'blur' filters to self-harm images that are uploaded. Similarly, in response to public criticism, YouTube disabled comments on videos that feature children.
The bigger picture: Social networks see themselves as communities whose primary aim is to connect people through their offering. Advertising is also an important part of their business, and they therefore seek to create an environment that offers both an engaging user experience and opportunities for advertisers to reach users. There will be times when these two aims come into conflict (e.g. freedom of expression versus brand safety), which is the nature of an environment where users express views and opinions, and are creative and often outspoken.

Social media has become one of the most impactful means of communication precisely because it gives users a voice. Advertisers who choose to go into that space have to be aware of the potential risks and balance them against the value their brand will gain by engaging with their audience in a space where they can interact and build long-term, meaningful relationships.