Whether your marketing team is looking to build brand awareness, improve customer experience, or target a specific customer base, direct mail can deliver.
Why the risk is greater when the test is smaller
Compared to digital, direct mail can deliver better-quality leads, resulting in higher conversion rates and ROI. That makes many marketers want to test direct mail.
Digital marketers new to the mail channel look to transfer their digital experience to mailings. While certain precepts apply, one that does not is the testing philosophy.
The issue is that mail costs more (postage alone is 26¢+ per piece) and takes longer (due to delivery and response curves) than digital—both to execute and to read results. This leads some marketers new to the channel to try to minimize financial exposure by testing a small-quantity mailing. That practice is doomed to fail.
For outbound digital, it works to start small and grow programs iteratively. Direct mail follows the reverse paradigm: test liberally, but responsibly. Understand what works, then roll it out to successful segments.
Here’s why small-scale testing in direct mail won’t provide the information needed for an effective rollout:
1. Production inefficiencies.
Small, inefficient print quantities, plus the fixed costs of creative and program management, make ROI goals very hard to meet at low quantities.
Let’s do some quick back-of-the-envelope (it’s direct mail, after all) calculations for printing a simple #10 envelope and letter at two different quantities:
These are rough numbers, but they tell a story. In this example, increasing the mail quantity cuts the cost per package by more than half. That’s a lot more testing for your money. Just as importantly, when it’s time to do the analytics, the ROI on small-quantity mailings is seriously penalized by high per-piece costs.
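To see why quantity drives down cost per package, here’s a minimal sketch of how fixed costs amortize over a print run. The dollar figures below are hypothetical illustrations, not the numbers behind the example above and not quoted rates:

```python
# Sketch of how fixed costs amortize across mail quantity.
# All dollar figures are hypothetical, not quoted rates.

def cost_per_piece(fixed_cost, unit_cost, quantity):
    """Fixed costs (creative, program management, print setup)
    spread over the run, plus per-piece print and postage."""
    return (fixed_cost + unit_cost * quantity) / quantity

small_run = cost_per_piece(fixed_cost=25_000, unit_cost=0.50, quantity=25_000)
large_run = cost_per_piece(fixed_cost=25_000, unit_cost=0.50, quantity=250_000)

print(f"25k run:  ${small_run:.2f}/piece")   # $1.50/piece
print(f"250k run: ${large_run:.2f}/piece")   # $0.60/piece -- less than half
```

The per-piece cost never drops below the unit cost, but the fixed-cost share shrinks fast as quantity grows—which is the whole story of the table above.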
2. Limited testing, limited learning.
Small quantities hamper the ability to get readable results that can translate to rollout. We strategize with clients to prioritize which response levers to test: messaging, lists, models, offers, formats and more. We are looking for the small cracks of light in test cells that lead to chasms of opportunity for rollout.
Consider the results of a fictitious 25,000-quantity mailing broken into four test cells that include two different creative approaches and two different lists:
Directionally, Cell 1 was the loser and Cell 4 the winner. But it would be a stretch to base conclusions on a difference of seven responses. And the cost per response, based on the above cost per piece, is $252.72.
Even two cells of 12,500 each may not provide readable results. Plus, you’re making a judgment call about what to test. Two creative cells mailed to one list? Or two lists, but one execution? Either way, a two-cell test severely limits your learning, and by testing a single “best guess” variable you risk missing the real lever for success.
Now let’s assume a 250,000-quantity mail test of the same variables:
In this instance we have enough response to look at individual cells and to aggregate the quantities across variables. The response difference between Cells 1 and 4 is statistically significant (90% confidence level). Aggregating the List 1 cells (1 & 3) and comparing them to the List 2 cells (2 & 4) also provides significant learning about list performance.
And the cost per response above? Only $112.80.
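The significance claim can be sanity-checked with a standard pooled two-proportion z-test against the 90% confidence critical value. The cell counts below are hypothetical, chosen only to mirror the seven-response gap from the small test and the same response rates at ten times the quantity—they are not the actual campaign results:

```python
import math

Z_90 = 1.645  # two-tailed critical value at 90% confidence

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    """Pooled two-proportion z-statistic for a response-rate gap."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    pooled = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical cells mirroring a seven-response gap at 25k (6,250 per cell)
z_small = two_proportion_z(55, 6_250, 62, 6_250)
# Same response rates at 250k (62,500 per cell): the gap becomes 70 responses
z_large = two_proportion_z(550, 62_500, 620, 62_500)

print(f"25k test:  z = {z_small:.2f} (significant: {abs(z_small) > Z_90})")
print(f"250k test: z = {z_large:.2f} (significant: {abs(z_large) > Z_90})")
```

Same response rates, ten times the volume: the small test can’t distinguish the cells, while the large one clears the 90% bar comfortably.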
The math bears it out. More quantity gives you more actionable results at a lower cost.
Quantity is even more important in crowded categories. Competitors are likely to be mailing to the same prospects as you—often at the same time. That could lead to category clutter and lower overall response, making reading results even harder.
3. Longer timelines.
You can’t iterate and improve direct mail on a daily or even weekly basis. Direct mail has naturally longer timelines.
Following approval of creative and data, there is still a manufacturing process to create physical mail. Then there is a 7–10 day standard mail delivery window before the piece reaches a prospect. And response curves typically run 30–45 days. So, from the time the program is “out the door” until we can read results is typically two months or more. That’s a long wait for not a lot of learning when employing a small-quantity test.
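The timeline math above adds up quickly. The delivery and response-curve figures below come from the text (taking the upper bounds); the production window is a hypothetical placeholder, since manufacturing time varies by format and printer:

```python
# Timeline sketch: delivery and response-curve figures from the text;
# the production window is a hypothetical assumption.
production_days = 14        # hypothetical print/lettershop manufacturing time
delivery_days = 10          # standard mail delivery: 7-10 days (upper bound)
response_curve_days = 45    # response curve: 30-45 days (upper bound)

total_days = production_days + delivery_days + response_curve_days
print(f"Approval to readable results: ~{total_days} days")  # ~69 days
```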
How We Do It
At Gunderson Direct, our Leap and Repeat process takes these issues into account to provide efficient, faster, and more actionable learning. It’s a process that builds channel momentum, reduces guesswork, and leads to rollouts with the potential for high ROI and bottom-line impact.
We have the strategic expertise to help you plan and execute successful direct mail strategies that build trust and drive revenue. Drop us a line for help increasing your leads and growing your sales numbers.