How much better is your business performing because of your digital advertising program? Quantifying that lift can be really hard.
Contrary to popular belief, measuring impact on the business doesn't come from counting clicks or leads from campaigns and then chasing down the specific opportunities and deals we can attribute back to them.
Gaining meaningful, actionable insights into your ad performance takes a bit more thought and effort. It takes diving a bit more deeply into your data, cooking up theories, and running tests. The extra work is worth it: your findings can be truly revelatory and take your advertising program into new, more effective directions.
We’ll share the step-by-step process we use at 6sense to gain these insights, and will provide examples of our own results along the way. You can create similar tests to generate results that are bespoke to your industry and solution.
1. Establish Your Test Objectives
First things first: What theory are you looking to test? It should be something you can measure today, within a list of current accounts. Good candidates for measurement might include:
- How many accounts you get a first meeting with
- The number of accounts with new opportunities
- Your sales team’s win rate
- Your customer retention rate
In our case, we wanted to determine if our own digital advertising efforts were effective in accelerating pipeline creation. 6sense has a longer sales cycle — and while it would’ve been nice to launch a test to measure the impact of advertising on revenue, we didn't want to wait that long for the results.
Instead, we built our test to answer two questions:
- Does advertising move more accounts into our in-market buying stages? This is a leading indicator of pipeline creation that we report on directly in the 6sense platform.
- Does advertising lead to more engagement with sales? We measure accounts that are engaged with sales within our platform, too.
2. Identify Test and Control Account Groups
Armed with a theory, you should now identify an appropriate list of accounts and then divide them into randomized test and control groups.
If one list has a different industry mix, company size mix, or engagement history, it isn't random. So make sure to take the time to confirm the test groups are truly randomized. Skipping this crucial step can unwittingly inject bias into your test.
For our test, we identified a list of prospect accounts that:
- Had between 500 and 1,000 employees
- Were not in an in-market stage
- Had low or no engagement
- And were accounts we hadn't been actively targeting, but that our predictive model deemed a good fit for our solution
We then randomly divided the list to get our test (targeted with ads) and control (not targeted with ads) account lists. Our aim was to run ads exclusively to our test group, and then compare the results between the two.
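If it helps to picture the mechanics, here's a minimal Python sketch of a randomized 50/50 split with a quick balance check on industry mix. The account data, field names, and group sizes are illustrative assumptions, not 6sense's actual process:

```python
import random

random.seed(42)  # fixed seed so the split is reproducible

# Hypothetical qualified account list -- in practice this comes from
# your own CRM or account-selection criteria.
accounts = [
    {"name": f"Account {i}", "industry": random.choice(["SaaS", "FinServ", "Retail"])}
    for i in range(500)
]

random.shuffle(accounts)
midpoint = len(accounts) // 2
test_group, control_group = accounts[:midpoint], accounts[midpoint:]

def industry_mix(group):
    """Share of each industry in a group -- used to confirm the two
    groups look similar before the test starts."""
    counts = {}
    for acct in group:
        counts[acct["industry"]] = counts.get(acct["industry"], 0) + 1
    return {k: round(v / len(group), 2) for k, v in counts.items()}

print("test:   ", industry_mix(test_group))
print("control:", industry_mix(control_group))
```

If the two printed mixes diverge noticeably (in industry, company size, or engagement history), reshuffle or stratify before launching, per the bias warning above.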
Why Is This Step Important?
This approach creates a randomized controlled trial, allowing us to measure the difference in outcomes between two groups. This is one of the few instances where B2B is easier than B2C, because you can create two groups of similar accounts to run a test against.
3. Launch Your Advertising Campaign
Create an advertising campaign to target your test accounts. Setting your test budget based on an average spend per account makes it easy to determine the budget when you move from the testing stage (which might target 500 accounts) to a rollout to 5x or 10x the number of accounts. You’ll need access to a solution like 6sense's advertising platform or LinkedIn that enables you to target by account.
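The per-account budgeting math is simple enough to sketch in a few lines. The dollar figures below are hypothetical, chosen only to illustrate the scaling step:

```python
# Illustrative arithmetic only -- the budget figures are assumptions,
# not 6sense's actual spend.
test_accounts = 500
test_budget = 15_000  # assumed test budget, USD

spend_per_account = test_budget / test_accounts  # $30 per account

# Moving from the test to a 10x rollout keeps budgeting simple:
rollout_accounts = 5_000
rollout_budget = spend_per_account * rollout_accounts

print(f"${spend_per_account:.0f}/account -> "
      f"${rollout_budget:,.0f} for {rollout_accounts:,} accounts")
```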
We ran our test using 6sense’s built-in advertising platform, and leveraged 6sense’s profile targeting to reach the right people in those accounts.
By targeting specific job functions, we could test with a lower budget than if we’d targeted everyone in each account.
4. Sit Back and Wait
Just because this is a test — and because you’re probably more focused on it than you are on other campaigns — doesn't mean you should be quick to make changes to it. Remember, you’re testing what happens when you run ads, so treat it like you would other ad campaigns.
We waited a month. Some would say we weren't very patient, but oh well. At the end of the month, we stopped the campaign and compiled our results.
5. Analyze Your Results
Remember those theories you wanted to test? Revisit them, and analyze the data to extract some actionable conclusions.
When we looked at our test questions, here is what we found:
- Advertising increased the portion of companies that entered in-market buying stages by 114%, a surprisingly large margin.
- Advertising increased companies engaging with our sales team by 71%.
This kind of lift can be monumental if your advertising goal is to accelerate early marketing funnel stages.
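For reference, relative lift is just the test group's conversion rate over the control group's. The counts below are made up to land near the 114% figure; only the formula mirrors the metric described above:

```python
# Hedged sketch of the lift math -- the account counts are hypothetical,
# not 6sense's actual data.
def lift(test_conversions, test_size, control_conversions, control_size):
    """Relative lift of the test group's conversion rate over control's."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return (test_rate - control_rate) / control_rate

# e.g. 15 of 250 test accounts entered in-market stages vs. 7 of 250 control:
print(f"{lift(15, 250, 7, 250):.0%} lift")  # -> 114% lift
```

With group sizes this small, it's worth running a significance test (e.g. a two-proportion z-test) before acting on the number.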
We also found some unexpected nuggets in our analysis when we were able to compare these two groups (and you probably will, too):
Advertising Improved Account Velocity
Our test group accounts progressed significantly further through our in-market stages. We were simply looking for movement from “target” to “awareness” given the short time period, so this was a pleasant surprise.
Advertising Helps BDRs
It was clear that BDR outbound + advertising is more effective than BDR outbound on its own. Companies included in advertising campaigns were about one-third more likely to engage when BDRs reached out to them.
Advertising Is Effective … and Efficient
This test showed we can significantly expand our in-market demand with an intentional media investment, and far more cost-effectively than we anticipated.
Gaining true clarity into the effectiveness of your digital advertising program may appear daunting at first, but don’t shy away from the challenge.
Applying a thoughtful, thorough “theory, test, analyze” methodology can make all the difference in how you target your accounts, allocate your ad spend — and efficiently transform prospects into paying, satisfied customers.