How much better is your business performing because of your digital advertising program? Quantifying that lift can be really hard.
Contrary to popular belief, measuring impact on the business doesn’t come from looking at clicks or leads from campaigns and then chasing down the specific opportunities and deals we can attribute back to them.
Gaining meaningful, actionable insights into your ad performance takes a bit more thought and effort. It takes diving a bit more deeply into your data, cooking up theories, and running tests. The extra work is worth it: your findings can be truly revelatory and take your advertising program in new, more effective directions.
We’ll share the step-by-step process we use at 6sense to gain these insights, and will provide examples of our own results along the way. You can create similar tests to generate results that are bespoke to your industry and solution.
First things first: What theory are you looking to test? It should be something you can measure today, within a list of current accounts. Good candidates for measurement might include:
In our case, we wanted to determine if our own digital advertising efforts were effective in accelerating pipeline creation. 6sense has a longer sales cycle — and while it would’ve been nice to launch a test to measure the impact of advertising on revenue, we didn't want to wait that long for the results.
Instead, we built our test to answer two questions:
Armed with a theory, you should now identify an appropriate list of accounts and then divide them into randomized test and control groups.
If one list has a different industry mix, company size mix, or engagement history, it isn't random. So make sure to take the time to confirm the test groups are truly randomized. Skipping this crucial step can unwittingly inject bias into your test.
We identified a list of prospect accounts and divided them into randomized test and control groups.
For our test, we began with a list of accounts that:
We then randomly divided the list to get our test (targeted with ads) and control (not targeted with ads) account lists. Our aim was to run ads exclusively to our test group, and then compare the results between the two.
This approach creates a randomized controlled trial, allowing us to measure the difference in outcomes between the two groups. This is one of the few instances where B2B is easier than B2C, because you can create two groups of similar accounts and run a test against them.
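As a minimal sketch of that split, here's one way to randomize a list of accounts into test and control halves and then sanity-check that an attribute like industry mix is balanced between them. The account fields (`name`, `industry`) and the 500-account list are hypothetical placeholders; your CRM export will have its own shape.

```python
import random
from collections import Counter

def split_accounts(accounts, seed=42):
    """Randomly split a list of accounts into test and control halves.

    A fixed seed makes the split reproducible, so you can re-derive
    the same groups later when compiling results.
    """
    rng = random.Random(seed)
    shuffled = accounts[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def mix(accounts, key):
    """Share of accounts per attribute value, e.g. industry or size band."""
    counts = Counter(a[key] for a in accounts)
    total = len(accounts)
    return {k: round(v / total, 2) for k, v in counts.items()}

# Hypothetical account records -- stand-ins for a real CRM export.
accounts = [
    {"name": f"acct-{i}", "industry": "software" if i % 3 else "finance"}
    for i in range(500)
]

test, control = split_accounts(accounts)

# Confirm the groups look alike before running any ads: if the industry
# mixes diverge sharply, re-randomize or stratify the split.
print(mix(test, "industry"))
print(mix(control, "industry"))
```

If the mixes differ noticeably on an attribute you care about, a stratified split (shuffling within each industry or size band, then halving each stratum) is a straightforward upgrade.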
Create an advertising campaign to target your test accounts. Setting your test budget based on an average spend per account makes it easy to determine the budget when you move from the testing stage (which might target 500 accounts) to a rollout to 5x or 10x the number of accounts. You’ll need access to a solution like 6sense's advertising platform or LinkedIn that enables you to target by account.
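The budget arithmetic described above is simple enough to sketch directly. The dollar figures below are hypothetical, purely to illustrate how a per-account spend makes the jump from pilot to rollout a single multiplication:

```python
def campaign_budget(n_accounts, spend_per_account):
    """Total campaign budget from a target account count and an
    average spend per account."""
    return n_accounts * spend_per_account

# Hypothetical figures: $20/account over the test period.
pilot = campaign_budget(500, 20)     # 500-account test
rollout = campaign_budget(5000, 20)  # 10x rollout at the same per-account rate

print(pilot, rollout)
```

Holding spend per account constant between test and rollout also keeps the rollout comparable to the conditions you actually measured.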
We ran our test using 6sense’s built-in advertising platform, and leveraged 6sense’s profile targeting to reach the right people in those accounts.
By targeting specific job functions, we could test with a lower budget than if we’d targeted everyone in each account.
Just because this is a test — and because you’re probably more focused on it than you are on other campaigns — doesn't mean you should be quick to make changes to it. Remember, you’re testing what happens when you run ads, so treat it like you would other ad campaigns.
We waited a month. Some would say we weren't very patient, but oh well. At the end of the month, we stopped the campaign and compiled our results.
Remember those theories you wanted to test? Revisit them, and analyze the data to extract some actionable conclusions.
When we looked at our test questions, here is what we found:
This kind of lift can be monumental if your advertising goal is to accelerate early marketing funnel stages.
We also found some unexpected nuggets in our analysis when we were able to compare these two groups (and you probably will, too):
Our test group accounts progressed significantly further through our in-market stages. We were simply looking for movement from “target” to “awareness” given the short time period, so this was a pleasant surprise.
It was clear that BDR outbound + advertising is more effective than BDR outbound on its own. Companies included in advertising campaigns were about one-third more likely to engage when BDRs reached out to them.
We can cost-effectively create in-market demand with advertising. This test showed we can significantly expand our in-market demand with an intentional media investment, far more cost-effectively than we anticipated.
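When comparing engagement between test and control groups like this, it's worth checking that the lift is statistically meaningful rather than noise. Here's a minimal two-proportion z-test using only the standard library; the engagement counts are invented for illustration, not 6sense's actual results:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in engagement rates.

    Returns (lift, p_value), where lift is the absolute difference
    in rates between group A (test) and group B (control).
    """
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a - p_b, p_value

# Hypothetical counts: 90 of 250 test accounts engaged vs. 66 of 250 control.
lift, p = two_proportion_z(90, 250, 66, 250)
print(f"lift={lift:.3f}, p={p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the observed lift is unlikely to be a fluke of randomization; with small account lists, consider running the test longer before drawing conclusions.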
Gaining true clarity into the effectiveness of your digital advertising program may appear daunting at first, but don’t shy away from the challenge.
Applying a thoughtful, thorough “theory, test, analyze” methodology can make all the difference in how you target your accounts, allocate your ad spend — and efficiently transform prospects into paying, satisfied customers.