by Daniel Burstein, Director of Editorial Content
"Testing is key!"
That emphatic statement is from the MarketingSherpa Email Marketing Benchmark Report survey, where we asked:
Q: How routinely does your organization implement the following testing practices?
Marketers are pretty thorough at measuring results from a single test
You can learn a lot from an A/B test. For example, one marketer mentioned in the survey that "testing footer promotions/calls-to-action can make a big difference."
From what marketers told us, they seem to be pretty good at measuring results from a single test. For example, half of marketers very routinely track deliverability, open, click and conversion rates to document the entire impact of email on the marketing and sales funnel. Only 2% said they never do it.
If we segment the data from the Benchmark Report survey, we see that this funnel-wide result tracking is even more pronounced in certain segments.
Here are the numbers for organizations that very routinely track tests through the entire funnel:
- 64% of B2C organizations
- 52% of organizations that sell to both business and consumers
- 56% of organizations with more than 100 employees
Tracking through the funnel is critical, and here's why.
Let's say you only tracked open rates. You test a subject line, and it hugely increases open rates.
Or so it seems. Maybe you just received curiosity opens. Maybe the email message and offer don't pay off the subject line. Maybe the mismatch alienates customers, who delete, unsubscribe or, even worse, mark your email as spam.
By tracking throughout the funnel, marketers are able to see not only the effect of a change on intermediate metrics (like open rates), but also the effect on ultimate KPIs (like conversion rates).
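To make the point concrete, here is a minimal sketch (with hypothetical numbers and a made-up helper function, not data from the Benchmark Report) of how tracking both an intermediate metric and the ultimate KPI can flip a test's verdict — variant B "wins" on opens but loses on conversions:

```python
def funnel_rates(sent, opened, clicked, converted):
    """Compute rates at each stage of the email funnel, relative to emails sent."""
    return {
        "open_rate": opened / sent,
        "click_rate": clicked / sent,
        "conversion_rate": converted / sent,
    }

# Hypothetical A/B subject-line test results
a = funnel_rates(sent=5000, opened=1000, clicked=250, converted=50)
b = funnel_rates(sent=5000, opened=1750, clicked=200, converted=30)

# B's subject line drives far more opens (35% vs. 20%) ...
print(f"A: open {a['open_rate']:.1%}, conversion {a['conversion_rate']:.1%}")
print(f"B: open {b['open_rate']:.1%}, conversion {b['conversion_rate']:.1%}")
# ... but A converts better (1.0% vs. 0.6%), so judging on opens alone
# would pick the wrong winner.
```

Judged only on open rate, B looks like the clear winner; judged on the metric that matters, A does.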
For example, here is what one marketer discovered:
"We've been testing a variety of different factors to try to drive higher clicks. All of our tests have been effective to drive higher opens (averages of around 35% to lists of 5,000+) but our CTR is still frustratingly low."
Marketers struggle to learn from a series of tests
As a CMO responded in the Benchmark Report survey, "You have to monitor and test and monitor and test constantly."
Here's where marketers struggle more — only 31% very or somewhat routinely review tests and decide on follow-up tests.
Testing in isolation is of limited value.
The real value is learning from a test and using that new knowledge to inform future tests in a virtuous testing-optimization cycle. I like to think of this as marketing kaizen, a nod to the continuous improvement philosophy that has most notably been applied to manufacturing.
In addition to failing to run follow-up tests, 40% of marketers infrequently document findings at regularly scheduled times. Many years before the advent of Web optimization, David Ogilvy asked:
"What is the reason for this failure to codify experience? Is it that advertising does not attract inquiring minds? Is it that any kind of scientific method is beyond the grasp of 'creative' people? Are they afraid that knowledge would impose some discipline on their work?"
Web Optimization Summit in New York City: Call for speakers
Have you run a series of split tests? Are you codifying your experiences? Don't let your good work get overlooked. We would love to highlight you on stage at Web Optimization Summit. If you're an e-commerce or subscription marketer with a conversion optimization or A/B testing case study to share, I encourage you to apply to speak at Web Optimization Summit. The application deadline is March 3.
Related Resources
- Marketing Process: Managing your business leader's testing expectations
- A/B Testing: One word will unclog your conversion testing
- A/B Testing: How a landing page test yielded a 6% increase in leads