Campaign Setup Governance Study

Eliminating campaign setup defects dramatically improves digital media performance, and it is easy to achieve.

Abstract

This study was conducted to get a perspective on campaign setup quality in key digital media platforms. The results show that the majority of campaigns do not meet advertisers’ policy requirements. Campaign setup issues are frequent and impact media productivity, brand risk exposure, and data quality:

  • Campaign performance is hindered by suboptimal settings around audience targeting, creative use, bidding strategies, optimization features, and more.

  • Brands are exposed to risks from overly-relaxed or incomplete settings: high-risk inventory included, sensitive content categories not excluded, inclusion/exclusion lists missing.

  • Naming convention / taxonomy compliance remains a major challenge and disrupts advertisers’ ability to generate learnings from media activity.

The study further shows that introducing automated campaign setup monitoring immediately drives major improvement in quality.

Methodology

The study analyzed campaign setup using a subset of data from Adfidence clients.

  • Ad accounts from 10 advertisers were included. Accounts were randomly selected; the insights are not representative of any specific advertiser’s performance.

  • 100 accounts were reviewed for each of the platforms: Meta, DV360, and Google Ads. Only media platforms used by at least 5 clients were included (for this reason, the study excluded Trade Desk, TikTok, Snapchat, Pinterest, and Amazon).

  • Accounts were reviewed for the 90-day period immediately before their enrollment in monitoring.

Problem

Digital media is riddled with pitfalls. Systemic ecosystem issues make buying media difficult:

  • Measurement challenge. Measuring business and brand outcomes (sales, brand lift) is challenging and time-delayed; as a result, campaigns are often evaluated primarily on cost metrics (CPM, CPC). Prioritizing cost KPIs, combined with low transparency, drives the appeal of the cheapest inventory, regardless of its quality.

  • High prevalence of fraud. According to the ANA, at least $20B, or 23% of programmatic budgets, is wasted on fraudulent MFA (made-for-advertising) sites. The recent YouTube “Google Video Partners” fiasco showed that even the top media platforms fail to ensure the quality of the inventory they sell.

  • Incentive misalignment. It is in media platforms’ interest to maximize demand for all their inventory, so the campaign setup options default to the broadest inventory settings (e.g., include “audience networks”,  “expanded” inventory tier). Advertisers’ hands-on-keyboard staff must proactively and manually change the setup defaults to maximize the impact of their media dollars.

  • Communication gaps. The setup is increasingly complex with lots of configuration options. Media buyers don’t always have full clarity, or training, around the required setup guardrails: What’s the acceptable inventory risk level for this brand? Viewability goals? Audience demographic? Frequency cap policy? In this fast-changing ecosystem, it’s often difficult for advertisers themselves to develop and maintain precise media execution guardrails for all media platforms.

  • Fast-paced environment. In the typical heat-of-the-moment media buying world, agency staffers are often expected to get the campaign up as soon as the creatives are ready. Hundreds of line items may need to be created, hundreds of campaign settings configured. In such a labor-intensive and fast-paced environment the best practices are often not followed, in particular when agency buyers manage multiple campaigns across different accounts simultaneously and with little oversight.

Shifting focus towards quality

Until recently, Media Directors had very limited ability to control campaign setup. They provided agencies with briefs, guidelines, and requirements, but given the complexity and limited transparency, they weren’t able to verify the execution. But the momentum for more transparency and control has been steadily building, fueled by regular discoveries of large areas of waste and by increased awareness of the high prevalence of setup issues.

Media leaders are increasingly aware that digital campaign setup is error-prone and that even when errors are found, the incentives aren’t there, particularly for agency buyers, to disclose them. Forward-thinking media leaders no longer want to be in the dark.

Advertisers have been building their internal media competencies, often by hiring senior staff from media agencies. They have reclaimed ad account ownership from agencies or, at minimum, ensured they have access. In-housing of media buying continues to gain steam, with the desire for more transparency and control among its main drivers.

Technology provides a solution

Buying digital media is done by configuring campaign setup in media platforms; therefore, addressing media quality issues comes down to ensuring proper campaign setup. For example, simply unchecking the “Google Video Partners” box in Google Ads dramatically reduces the risk of waste in YouTube TrueView ads.

All the leading media platforms have APIs that make it possible to retrieve setup information for all of an advertiser’s campaigns, every day. This enables the automation of campaign quality assurance at scale.

In partnership with top global advertisers, Adfidence developed a campaign setup governance platform. The solution enables advertisers to set the guardrails for their media buys (e.g., required inventory type exclusions, frequency caps, viewability thresholds, etc.) and to monitor all live campaigns against those guardrails. It is an automated quality assurance solution detecting issues and prompting for fixes across advertisers’ media activity on Meta, Google DV360 and Search, Trade Desk, TikTok, Snapchat, Bing, Amazon, and other large media platforms. 
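To illustrate the mechanics, the sketch below checks one campaign’s settings against a small set of advertiser guardrails. It is a simplified, hypothetical example: the field names, guardrail values, and the way settings are obtained are illustrative assumptions, not Adfidence’s product code or any platform’s actual API.

```python
# Minimal, illustrative sketch of automated campaign setup checks.
# Field names, thresholds, and the data source are hypothetical.
from dataclasses import dataclass

@dataclass
class CampaignSettings:
    name: str
    frequency_cap_per_week: int | None      # None = no cap configured
    excluded_sensitive_categories: set[str]
    audience_network_enabled: bool

# Example guardrails an advertiser might define for one platform.
GUARDRAILS = {
    "max_frequency_per_week": 3,
    "required_exclusions": {"Profanity", "Violence", "Sensitive social issues"},
    "audience_network_allowed": False,
}

def check_campaign(c: CampaignSettings) -> list[str]:
    """Return a list of human-readable guardrail violations."""
    issues = []
    if c.frequency_cap_per_week is None:
        issues.append("No frequency cap set")
    elif c.frequency_cap_per_week > GUARDRAILS["max_frequency_per_week"]:
        issues.append(f"Frequency cap {c.frequency_cap_per_week}/week exceeds policy")
    missing = GUARDRAILS["required_exclusions"] - c.excluded_sensitive_categories
    if missing:
        issues.append(f"Missing sensitive-category exclusions: {sorted(missing)}")
    if c.audience_network_enabled and not GUARDRAILS["audience_network_allowed"]:
        issues.append("Audience Network placement enabled despite policy")
    return issues

# In a real workflow, settings would be pulled daily via each platform's API;
# here we simply check one hand-written example.
campaign = CampaignSettings(
    name="brand_q3_awareness",
    frequency_cap_per_week=None,
    excluded_sensitive_categories={"Profanity"},
    audience_network_enabled=True,
)
for issue in check_campaign(campaign):
    print(f"[{campaign.name}] {issue}")
```

Run daily across every live campaign, checks of this kind turn written policy into a continuously verified state rather than a one-time briefing.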

Insights Summary

  • Insight: Campaign performance is routinely hindered by suboptimal settings. Setup mistakes happen across all setup dimensions, including audience targeting, creative use, bidding strategies, and optimizations.
    Data: Advertisers can get +25% more performance from their ad spend by achieving campaign setup excellence.

  • Insight: Brands are routinely exposed to hidden risks resulting from overly-relaxed brand safety settings (high-risk inventory included, sensitive content categories not excluded).
    Data: More than 40% of campaigns failed to apply brand safety settings in a manner fully compliant with advertisers’ policies.

  • Insight: Naming convention / taxonomy compliance remains a major challenge and disrupts advertisers’ ability to generate learnings from media activity.
    Data: More than 30% of media asset names (campaigns, ad sets, ads) did not comply with advertisers’ naming conventions.
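To make the taxonomy point concrete, here is a minimal sketch of what an automated naming-convention check can look like. The convention shown (market_brand_objective_quarter) is purely hypothetical; each advertiser defines its own taxonomy.

```python
import re

# Hypothetical naming convention: market_brand_objective_quarter,
# e.g. "US_AcmeCola_Awareness_2024Q3". Real conventions vary by advertiser.
NAMING_PATTERN = re.compile(r"^[A-Z]{2}_[A-Za-z0-9]+_[A-Za-z]+_\d{4}Q[1-4]$")

def is_compliant(asset_name: str) -> bool:
    """True if the campaign / ad set / ad name matches the convention."""
    return bool(NAMING_PATTERN.match(asset_name))

for name in ["US_AcmeCola_Awareness_2024Q3", "test campaign FINAL v2"]:
    print(name, "->", "OK" if is_compliant(name) else "non-compliant")
```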

Media Productivity Settings

Setup issues were identified across all setup dimensions, including audience targeting, creative use, bidding strategies, and optimizations. Examples of common or particularly wasteful settings are provided below.

At Adfidence, we estimate the waste percentage attributable to each type of mistake through A/B tests, reviews of historical campaigns, or other approximations. Combining those estimates with mistake prevalence data, we find that an average advertiser can get +25% more performance from their media budget by achieving setup excellence.
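As a simplified illustration of that aggregation (with made-up numbers, not the study’s actual per-mistake estimates): the expected waste from each mistake type is its prevalence multiplied by the waste it causes when present, the sum gives an overall waste share, and removing that waste yields proportionally more output per dollar spent.

```python
# Purely illustrative numbers; not the study's actual per-mistake estimates.
mistakes = {
    # mistake type: (prevalence among campaigns, waste share when present)
    "no_frequency_cap":      (0.12, 0.40),
    "single_placement_only": (0.30, 0.15),
    "suboptimal_creative":   (0.75, 0.10),
}

expected_waste = sum(prev * waste for prev, waste in mistakes.values())
uplift = expected_waste / (1 - expected_waste)  # extra output per dollar if the waste is removed
print(f"Expected waste share: {expected_waste:.0%}")     # ~17% with these placeholder inputs
print(f"Performance uplift from fixing: +{uplift:.0%}")  # ~20% with these placeholder inputs
```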

Meta

  • 12% of campaigns had no frequency cap. This is a particularly expensive problem: in A/B tests we have observed a 67% drop in campaign reach for a campaign with no cap vs one capped at 3 impressions/week.

  • 30% of Meta campaigns were restricted to a single placement, e.g., Instagram Story. The missed opportunity here is not including alternative placements that can reach the same audience at lower cost, in this case the Facebook Story placement. In A/B tests, we’ve seen double-digit reach gains for campaigns running on both placements vs a single one.

  • 75% of the feed ads underutilized the ad space purchased: 20% of feed ads used horizontal creatives and 55% used squares. The feed ad placement has a 4:5 aspect ratio, so advertisers who used a square creative used 80% of the space they paid for, and those who used a TV-style horizontal creative used only 45% (see the quick check after this list).

  • 16% of campaigns didn’t use age targeting. It’s hard to assess the average budget waste in this case, but we saw several brands clearly aimed at a narrow age demographic (e.g., youth fashion, baby products for new parents, pharma products for the elderly) running campaigns with the default “18-65+” age setting.
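The feed-space figures above follow directly from the aspect ratios. A quick check, assuming the creative is scaled to fill the placement’s full width:

```python
# Share of a 4:5 feed placement filled by a creative scaled to the placement width.
def feed_utilization(creative_w: float, creative_h: float,
                     placement_w: float = 4, placement_h: float = 5) -> float:
    scaled_h = creative_h * (placement_w / creative_w)  # creative height after scaling to full width
    return min(scaled_h, placement_h) / placement_h

print(f"Square (1:1):      {feed_utilization(1, 1):.0%}")   # 80%
print(f"Horizontal (16:9): {feed_utilization(16, 9):.0%}")  # 45%
print(f"Vertical (4:5):    {feed_utilization(4, 5):.0%}")   # 100%
```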

Google DV360

  • 95% of campaigns did not exclude user-rewarded content. It’s a controversial format: ads are placed in games and players are rewarded with in-game perks for completed ad views. The placement offers very high viewability rates and is often used to boost the overall viewability of the campaign. But we often find that these ads are watched by kids playing games on their parents’ phones and do not reach the target audience.

  • 14% of campaigns had no frequency cap. The consequences are similar to what we’ve observed in Meta tests: campaigns with no cap have significantly lower reach vs capped ones. There is no true business rationale for omitting frequency caps, but it still happens because uncapped campaigns have ~10% lower CPMs: it’s typically cheaper to show the 50th impression to the same person than to reach a new one.

  • 42% of campaigns didn’t use age targeting. Advertisers with extremely high household-penetration products (e.g., toothpaste) could argue it’s a valid approach for them. But we also saw this in product categories that are clearly aimed at a relatively narrow age demographic. There is no justification for this other than a pursuit of the lowest unit costs without regard for actual business results.

Google Search Ads

  • 50% of the accounts did not do any remarketing. Retargeting is low-hanging fruit for driving ROI in Search: advertisers can reach engaged audiences by targeting users who previously visited a website/app or engaged with specific content. Running search campaigns that generate audience data without pairing them with any remarketing activity is a missed opportunity.

  • 34% of the campaigns had an incomplete set of creative assets: headlines, descriptions, callouts, and extensions. Google uses these inputs to dynamically assemble the best-performing search result ad. An incomplete set of inputs limits its ability to build the ads that would maximize the search campaign’s performance.

  • 4% of search ads linked to non-functioning landing pages. Not only does this completely waste the budget, it also risks building a negative association with the brand. We see ads leading to nonexistent pages across all digital media, but the issue is most prevalent in search, because search campaigns are long-running or evergreen and sometimes outlive the structure of the website they link to.

Brand Suitability Settings

Over the past few years, advertisers put pressure on the top media platforms to improve brand safety, and the platforms have largely delivered: both Meta and Google built AI capabilities for categorizing content and gave advertisers corresponding controls. We found that those controls are underutilized: campaigns often run without any sensitive content exclusions and on the broadest inventory tier, which includes risky content.

Meta

  • 37% of eligible campaigns used the “Expanded” feed content tier. Meta’s “Expanded” content includes: “Physical or emotional distress”, “Social issues that provoke debate”, “Substance abuse or crime”, “Mature sexual or suggestive topics”, “Profanity, derogatory words, slurs, or vulgar sexual language”, “Injury, gore, or bodily functions/conditions”.

  • 15% of campaigns included Audience Network, out of which 24% did not use any Audience Network content filters. Not all Audience Network inventory is risky, but Meta has less control over the AN content compared to its own inventory, so many advertisers require the AN placement to be OFF.

Google DV360

  • 44% of campaigns did not exclude any sensitive categories. Example sensitive categories: Sexual, Violence, Profanity, Drugs, Politics, Religion, Tragedy, Transportation accidents, Shocking, and Sensitive social issues.

  • 36% of open programmatic campaigns had no inclusion list and no exclusion list. Such campaigns are at particular risk of falling victim to “Made For Advertising” websites and ad fraud.

  • 26% of YouTube campaigns did not exclude Google Video Partners. The recent Adalytics study examines the quality challenges of GVP inventory in depth.

Google Search Ads

  • 26% of accounts did not include any negative keywords. Negative keywords are critical for ensuring that ads don’t show up for search queries a brand doesn’t want to be associated with.

  • 12% of campaigns did not exclude Google Search Partners, despite the advertisers’ policies requiring Search Partners inventory to be OFF. Advertisers often choose to keep GSP OFF to avoid showing their ads in contexts they have no control over.


The Impact of Monitoring

After the accounts in this study were onboarded to automated monitoring, setup quality improved significantly. Within two months, the monitored accounts showed:

  • 11% improvement in media productivity,

  • 50% reduction in brand suitability issues, 

  • improvement in naming convention compliance from 57% to 95%.

For most of the accounts in the study, the introduction of monitoring wasn’t paired with any adjustments to media buying KPIs, or changes to the media buyers’ incentives. The only operational change was onboarding media buyers to the Adfidence platform that provides setup quality dashboards and points to non-compliant campaign items. 

It turned out that “what gets measured gets managed”: once quality measurement was introduced, quality immediately started to improve.

Conclusion

Automating campaign setup monitoring provides a large and easily accessible opportunity for brand advertisers to: 

  • improve media productivity,

  • reduce hidden brand risks, 

  • fix naming convention issues. 

For brand advertisers specifically, digital media campaign setup governance is a strategic marketing topic. Advertisers who get a solid grasp on campaign execution quality gain a large competitive advantage.

Reach out to our team at contact@adfidence.com to explore whether automated setup quality assurance should be part of your media strategy.
