Understanding the nuance of scale within programmatic advertising
One of the great benefits the programmatic advertising space promotes is the opportunity to scale advertising campaigns. Through programmatic media buying, marketers gain access to millions of publishers - and therefore audiences - giving advertisers many ways to diversify the channels through which they buy media. There's a problem with that argument, however: it implies that all programmatic demand-side platforms (DSPs) are scalable. The reality is that scale has been a buzzword for far too long, and it's the nuance that really matters.
A simple definition of scale, in the context of programmatic advertising, centers around a DSP’s ability to reach a high number of eligible consumers. Thankfully, the industry has a standardized form of measurement for scale called Queries Per Second (QPS). QPS tells us how many opportunities a certain bidder has to serve an ad per second. More specifically, this measurement provides a sense of the infrastructure a DSP has, as well as the diversity of supply partners integrated.
Let's look at a tangible example. Say an advertiser is choosing between two DSPs to run a retargeting campaign: DSP A has a QPS of 2 million, and DSP B has a QPS of 3.3 million. Both DSPs, in this instance, are above the industry average in QPS, and I'd argue the choice is somewhat obvious. For a target audience of 100,000 users, DSP B sees 65% more bid opportunities per second, giving it a correspondingly better chance of reaching those users.
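As a rough illustration, consider a deliberately simple model in which each bid request maps uniformly at random to one of N active devices. Expected reach then grows with QPS, but with diminishing returns. All numbers below (device pool size, time window) are hypothetical, chosen only to make the comparison concrete:

```python
import math

def expected_reach(qps, seconds, total_users, audience_size):
    """Expected number of target users seen at least once in the bid
    stream, assuming each query maps uniformly at random to one of
    `total_users` devices (a deliberately simple model)."""
    queries = qps * seconds
    p_seen = 1 - math.exp(-queries / total_users)  # per-user probability
    return p_seen * audience_size

# Hypothetical: 500M devices in the bid stream, a one-minute window,
# a 100,000-user target audience
for name, qps in [("DSP A", 2_000_000), ("DSP B", 3_300_000)]:
    users = expected_reach(qps, 60, 500_000_000, 100_000)
    print(f"{name}: ~{users:,.0f} of 100,000 target users seen per minute")
```

Note that reach grows sublinearly: DSP B's 65% QPS advantage translates into a smaller (though still substantial) reach advantage, because some of the extra queries hit users already seen.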
Of course, this example oversimplifies how buyers choose their partners. It doesn't account for customer service, experience, creative suite and so on - but there's a challenge in buying based on these non-QPS attributes. If we plotted how DSPs differentiate on these attributes, the differences in "creative services" and "experience" between bidders would likely cluster close together. As a result, the advertiser above will frequently choose DSP A, or even a DSP C with a scale of just 750K QPS. It's an unfortunate consequence of every platform's website claiming "the most scale, with the best machine learning, and the best advertisers, and the best team." The nuance of scale is lost in this context - but perhaps it's lost because QPS, as a number, isn't enough.
How can we examine scale?
If QPS is insufficient on its own as a method for examining scale, then we need to explore the other facets of a DSP that allow it to bid at scale: specifically, supply diversity and infrastructure. A common argument in the programmatic environment is that walled gardens lack publisher diversity, while programmatic DSPs offer a wide variety of publishers. It's a reasonable argument; however, diversity of supply is not binary. It's not a case of "you have it or you don't" - diversity of supply varies dramatically from one DSP to another, and the DSPs with higher volumes of supply integrations provide a more scalable solution.
Let's revisit the retargeting scenario from above, this time assuming that DSP A has a QPS of two million and access to 10 different supply-side platforms (SSPs), while DSP B has a QPS of 3.3 million and access to 20 different SSPs - including the same 10 as DSP A. Leaving QPS aside, if the target audience is 100,000 users, DSP B is once again the obvious choice.
Not only will DSP B have a greater opportunity to find more of the advertiser's users, but it will also have a significantly higher volume of data points from which to optimize. A DSP with more supply partners can better optimize across attributes like supply partner, publisher, creative iteration, creative type, OS version, bid rates, frequency and more - and it can reduce diminishing returns for an advertiser by offering different formats and publishers through which to approach an audience.
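To make the optimization point concrete, here is a minimal sketch of how a bidder might aggregate cost per conversion across supply dimensions and find where to shift budget. The log rows, SSP names and numbers are entirely made up for illustration:

```python
from collections import defaultdict

# Hypothetical log rows: (supply_partner, creative_type, spend_usd, conversions)
logs = [
    ("ssp_1", "video",  120.0, 12),
    ("ssp_1", "banner",  80.0,  4),
    ("ssp_2", "video",  150.0, 20),
    ("ssp_2", "banner",  60.0,  6),
]

# Aggregate spend and conversions for each (partner, creative) cell
stats = defaultdict(lambda: [0.0, 0])
for partner, creative, spend, convs in logs:
    stats[(partner, creative)][0] += spend
    stats[(partner, creative)][1] += convs

# Cost per conversion per cell; the cheapest cell is where budget shifts
cpa = {k: spend / convs for k, (spend, convs) in stats.items() if convs}
best = min(cpa, key=cpa.get)
print(best, round(cpa[best], 2))
```

The more supply partners a DSP integrates, the more cells this table has - and the more opportunities there are to find a cheaper path to the same audience.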
The importance of optimization in traffic pricing
With higher scale, a DSP can observe more auction outcomes and learn the market prices for different traffic components more efficiently. Through this process of traffic-price learning, a DSP can deliver performance at lower cost by bidding more intelligently, and accurately, to win requests. The importance of this capability has become clearer as most programmatic supply has transitioned to first-price auctions this year. In a second-price auction, for example, a DSP that bids too high still pays only the price of the second-highest bid. In a first-price auction that safety net does not exist, and a DSP can end up overpaying for the same impression and wasting an advertiser's money.
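The difference can be sketched with a toy simulation. The bid level, number of rivals and rival price distribution below are invented purely for illustration - the point is only that a naive, un-shaded bid costs nothing extra under second-price rules but overpays under first-price rules:

```python
import random

random.seed(7)

def clearing_price(our_bid, competitor_bids, first_price=True):
    """Price we pay if we win the auction; None if we lose."""
    top_rival = max(competitor_bids)
    if our_bid <= top_rival:
        return None                                # lost the auction
    return our_bid if first_price else top_rival   # first- vs second-price rule

overpay = 0.0
wins = 0
for _ in range(10_000):
    # Hypothetical market: five rivals valuing the impression at $0.50-$2.00 CPM
    rivals = [random.uniform(0.5, 2.0) for _ in range(5)]
    paid_first = clearing_price(3.0, rivals, first_price=True)
    paid_second = clearing_price(3.0, rivals, first_price=False)
    if paid_first is not None:
        wins += 1
        overpay += paid_first - paid_second  # cost of bidding naively at $3 CPM
print(f"avg overpayment per won impression: ${overpay / wins:.2f} CPM")
```

A DSP that has learned market prices would shade its first-price bid down toward the expected top rival bid, recovering most of that gap for the advertiser.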
Therefore, if we make the argument that the goal is to “find the right user, in the right place, with the right ad, at the right price,” then variation in supply partners must matter. And while high diversity of supply is a straightforward concept, achieving variety at scale requires robust infrastructure.
Instead of using cloud servers, some DSPs invest in building their own infrastructure, managed with custom hardware and networking optimized for their needs. As a result, their costs are dramatically lower, and they can pass the savings on to clients in the form of lower traffic costs and, in turn, a lower cost per conversion. They're then able to switch on new supply partners, increase bid rates on SSPs and provide a service to their advertisers that scales at a lower cost.
Why is scale particularly important moving forward?
On iOS 14, targeting device IDs will no longer be so straightforward - and diminishing access to a persistent identifier will impact look-alike models, device graphs, exclusion targeting and retargeting. If the privacy changes manifest in the way we expect, scale will be the most important attribute available to advertisers.
In today's programmatic world, we bid aggressively on specific devices via Apple's Identifier for Advertisers (IDFA), based on the data sets we have at our disposal. We have highly dynamic bidders that enable us to reach specific consumers with specific ads, and through conversion rate prediction and optimization, we're able to apply an appropriate cost per thousand (CPM) bid to win the opportunity and serve an ad. If, however, the IDFA becomes less relevant or accessible, that ability to bid intelligently at the user level vanishes, and we're left with a landscape that has much less certainty.
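That bidding logic can be sketched very simply: the bid is the expected value of one impression, scaled to a CPM, minus a margin. The conversion rates, conversion value and margin below are hypothetical placeholders:

```python
def cpm_bid(p_conversion, value_per_conversion, margin=0.3):
    """Value-based bid sketch: expected value of a single impression,
    scaled to a CPM (price per 1,000 impressions), minus a margin."""
    expected_value = p_conversion * value_per_conversion  # per impression
    return expected_value * 1000 * (1 - margin)

# Known user (IDFA available): model predicts a 0.2% conversion rate,
# with each conversion worth $4
print(round(cpm_bid(0.002, 4.00), 2))
# No IDFA: the model falls back to a broad contextual prediction of 0.05%
print(round(cpm_bid(0.0005, 4.00), 2))
```

With the identifier gone, the prediction regresses toward a population average, the justifiable bid drops, and - as argued above - broad, low-cost bidding at scale becomes the winning strategy.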
If an advertiser has little certainty of the expected outcome from serving an ad, then that advertiser cannot afford to pay a high price for that ad. However, that same advertiser needs to reach prospective and existing users to grow and sustain their business - they need to drive outcomes similar to what they see today, but at a lower cost. So, what we will eventually arrive at is a landscape in which broad, high scale, low-cost bidding will provide the strongest performance to advertisers.
Programmatic ad spending is continuing to grow rapidly - to learn more about how to choose the right DSP, you can take a look at our blog post here. You can also find out more about Remerge and how they work with apps to build mobile marketing strategies here.