Getting cost data right: 4 experts weigh in
Just before we announced the launch of our cost data API, we brought together a panel of experts for a chat about how to get ad spend right.
Earlier this year at Mobile Spree San Francisco, our conference for app marketers in the Bay Area, our panelists aired their ad spend dirty laundry and got frank about the state of ad spend transparency, what’s missing in the industry, and how an API might be the answer to all of our prayers.
Here’s a recap of the conversation our host, Michael Paxman (Adjust’s Team Lead Technical Account Manager for Japan & SEA), had together with Erica Ma (VP of User Acquisition, Scopely), Kevin Young (Senior Manager of User Acquisition, Kixeye), Ashleigh Rankin (Head of Media Operations, Fetch) and Fabien-Pierre Nicolas (Head of Growth, SmartNews). Read on for the highlights of their conversation or watch the entire video below.
Michael: Everyone on this panel has a cost data horror story. Can you tell us about your pain points? What do you do with cost data?
Erica: We have cost data coming in from a lot of places. Once upon a time, this was an extremely manual process; we did all of this ourselves in Excel. We’ve been a Singular customer for the last couple of years. From an aggregate standpoint, this has done a great job of getting very fast reads for us on our ad spend.
But the Singular solution is kind of beholden to how good or bad individual vendors are - that’s a pain point for us. We’re looking for profitable acquisition wherever we can find it and sometimes that doesn’t come with the most reliable cost information. As a result, we have all of this aggregate data, but one of the things we’d like is to break it down in more compelling ways.
It’s not like it was in 2013 or 2014 where you’d come into the office every day and someone would be running around, ‘running’ the cost data. We’re past that. But in terms of advancement and how we think about our ROI, there’s definitely still room to grow.
Fabien: SmartNews is a Japanese company. In terms of horror stories on the Japan side, there’s one channel where we don’t have an API, so someone does the job of manually putting together cost data in a spreadsheet for us - but that’s an external resource. In the US, we don’t have the luxury of those external resources. That’s a pain point because we rely on engagement, and not having cost data at the user level means we’re looking at blended return on ad spend - which is not great when, on channels like Facebook, you’re doing really precise micro-targeting. For us that’s painful, and it’s still not a solved problem.
Ashleigh: As an agency, we work with a lot of networks, channels and clients. We get to see an awful lot of information, but we still don’t have enough because it’s not at the user level. We ask for transparency from all of our networks. We can see where we’re running, but we won’t know the performance of that specific site within that specific network. To get that kind of information would be a massive game changer for us.
Kevin: Yeah, it’s a similar story for us. Our first era of cost data was pretty manual, with someone updating a spreadsheet overnight. That was a problem for a couple of reasons. One was the human element - someone entered the data, but how do you know it’s right? The other was labor: say you’re aggregating by day and then want to aggregate by geo or anything else - it quickly becomes super complicated.
Era 2.0 for us was the beginning of automation. The bigger developers were building these crawlers to go into these dashboards and pull everything in the format they want. That was fine; we used an external partner - Singular - but that’s still a problem because on the network side the nomenclature needs to be straight. If not, you still might be looking at ROAS incorrectly depending on how you’re slicing it.
My goal now is to completely decouple the cost side from the user side and build ROAS from the ground up in a very granular sense. The thing is that all these networks do things differently - some can report costs by geo and some can’t, for example. So you end up manually mapping those edge cases, and you’re back to square one, devoting people resources to doing that exclusively.
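Kevin’s approach can be pictured as a small normalize-then-join step: each network’s raw cost report is mapped onto a shared schema, then joined with attributed revenue to compute ROAS at whatever granularity the network actually supports. A minimal sketch in Python - the network names, field names, and figures are hypothetical, not taken from any real API:

```python
# Hypothetical sketch: normalize per-network cost reports into a common
# (campaign, geo, cost) schema, then join with attributed revenue for ROAS.

def normalize(network, rows):
    """Map one network's report fields onto the shared schema.
    Networks that can't break cost out by geo fall back to 'ALL'."""
    mappers = {
        "network_a": lambda r: (r["campaign_name"], r["country"], r["spend_usd"]),
        "network_b": lambda r: (r["camp"], "ALL", r["cost"]),  # no geo breakdown
    }
    return [mappers[network](r) for r in rows]

def roas(cost_rows, revenue_by_key):
    """Return ROAS keyed by (campaign, geo); None where cost is zero."""
    out = {}
    for campaign, geo, cost in cost_rows:
        revenue = revenue_by_key.get((campaign, geo), 0.0)
        out[(campaign, geo)] = revenue / cost if cost else None
    return out

cost = normalize("network_a", [
    {"campaign_name": "summer_promo", "country": "US", "spend_usd": 200.0},
])
print(roas(cost, {("summer_promo", "US"): 500.0}))  # {('summer_promo', 'US'): 2.5}
```

The per-network `mappers` table is exactly the “manually mapping those edge cases” Kevin describes: every network that reports differently needs its own entry, which is why it consumes people resources.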
Michael: What level of trust do you have with the type of cost data you have now?
Kevin: Well, it’s an inverse relationship with how granular it is. So if I’m looking at one network in the US, I feel pretty confident, but if I’m looking at iPads in a place like Russia on a network I don’t know, I’m not very confident.
Erica: I find that cost data is very usable within the prefab dimensions you set things up with. When I think about a 2.0 or 3.0 concept, that’s what’s missing. Cost data is fundamentally limited by how you’ve set things up, and how you set things up is often dictated by what cost data is available. So a lot of us target by ‘stock’ dimensions, and the more granular we could get with cost, the less we’d have to do that and the more value we could exploit.
Ashleigh: It’s so difficult to understand the performance of specific apps and sites within networks. You have Network A, Network B and Network C, you’re paying a three- or four-dollar CPI on each of them, you’re running across 500 different sites and apps, and you have no idea which one is converting. So I think there’s an issue with that model as well. I feel comfortable with Facebook and Apple data, but on the network side there just needs to be so much more transparency, and that’s something we need to work on as an industry.
Michael: I want to ask something a little more specific. What happens when the naming convention on the network side and the naming convention on the Adjust side - or the attribution partner side - don’t match? Do you know what happens?
Erica: Sure, that happens all the time. You’re limited to the hierarchy that things match on. When it breaks down, it’s frustrating - you can wind up in Excel, manually building spreadsheets from scratch.
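The “hierarchy that things match on” can be illustrated with a tiny matching sketch: compare the network-side and attribution-side keys level by level (network, campaign, ad group, creative) and join at the deepest level where the names still agree. The key structure and names below are hypothetical, purely for illustration:

```python
# Hypothetical sketch: find the deepest hierarchy level at which a
# network-side key and an attribution-side key still match.
# Levels, in order: network -> campaign -> adgroup -> creative.

def match_depth(network_key, attribution_key):
    """Return how many leading hierarchy levels the two keys share."""
    depth = 0
    for net_name, attr_name in zip(network_key, attribution_key):
        if net_name != attr_name:
            break  # naming drift starts here; stop matching
        depth += 1
    return depth

net = ("network_a", "summer_promo", "adgroup_1")
attr = ("network_a", "summer_promo", "ad_group-1")  # drift at the adgroup level
print(match_depth(net, attr))  # 2 -> can only join at campaign level
```

When the names diverge high up the hierarchy, the join collapses to a coarse level (or fails entirely), which is the point where Erica says you end up back in Excel.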
Michael: If Adjust came up with a one-to-one cost API, how would you use it? How would that affect your buying strategies?
Kevin: For one, I think we as UA managers have a lot more work to do to get insight at that level. I would love to have that. If there were a key to connect everything together… that would be great.
Erica: If data were available at a user level, there’s so much we could automate. This may sound relatively tame, but you could re-combine all of your data by any metric that you wanted and automate what you tell your networks to do as a result. As a company, we need to invest in that to make it work, but there’s so much exploration to be done because when you don’t already have data at that level, you don’t even know where the gold mines are within it.
Fabien: With that data, you could focus your UA team and your engineering team on solving real problems. They could generate insights a lot more quickly, instead of having to focus on piecing spreadsheets together, which is not fun.
Michael: How about from the agency side? What would be the way that you’d approach this next generation of cost?
Ashleigh: For us, we ask for data all the time. The more we can get, the better we can do. This would offer a really unique opportunity for us as well; we work with clients across all verticals, including gaming, e-commerce and travel. So to have information at a user level from all of that and make decisions based on that for future campaigns and clients would be awesome for us. We’d use as much of that as possible.
Michael: Okay, let’s pretend you all worked for networks. If this happened today, what do you think the ramifications would be? How would this change our industry?
Kevin: Well, I would say that one of the sticking points for me on the advertiser side is that if networks are going to send me a CSV of spend, it’s not even worth my time. I think it would help discovery in a lot of ways. If there’s a great new network out there that’s developing a healthy list of publishers in their portfolio, and they’ve got a spend feed that’s easy to plug into, there’s no friction when it comes to running a test there.
Ashleigh: From my side, it’s going to separate the wheat from the chaff. We have very specific criteria for partners that we want to work with. Something like this could go on our approval list, like ‘we won’t work with you unless you’re able to give us this information’.
Michael: Do the rest of you think it would eventually become a requirement, after a certain amount of time, to only buy from a network if they provided cost data?
Kevin: For me it pretty much already is. We’re already kind of there.
Erica: It’s all or nothing, right? We need our people and machines to commit to a certain level of optimization. We can’t say ‘hey, on 30 percent of this stuff we’ll take an Excel file, and all the rest over here is automated…’. The gathering and formatting of this data needs to be a complete solution.