Feb 25, 2014
Article

Marketing Research Chart: Can you expand upon a single successful split test?

SUMMARY: An A/B test can be exciting.

You might discover a new headline that generates 10% more leads or even 100% more sales.

But what then? How do you achieve sustainable, successful results?

This week's chart takes a MarketingSherpa look at your peers' testing practices.
by Daniel Burstein, Director of Editorial Content

"Testing is key!"

That emphatic statement is from the MarketingSherpa Email Marketing Benchmark Report survey, where we asked:

Q: How routinely does your organization implement the following testing practices?

[Chart: survey responses on how routinely organizations implement email testing practices]

Marketers are pretty thorough at measuring results from a single test

You can learn a lot from an A/B test. For example, one marketer mentioned in the survey that "testing footer promotions/calls-to-action can make a big difference."

From what marketers told us, they seem to be quite good at measuring results from a single test. For example, half of marketers very routinely track deliverability, open, clickthrough and conversion rates to document the full impact of email on the marketing and sales funnel. Only 2% said they never do.

If we segment the data from the Benchmark Report survey, this funnel-wide result tracking is even more pronounced in certain segments.

Here are the numbers for organizations that very routinely track tests through the entire funnel:
  • 64% of B2C organizations

  • 52% of organizations that sell to both business and consumers

  • 56% of organizations with more than 100 employees

Tracking through the funnel is critical, and here's why.

Let's say you just tracked open rates. You test a subject line and it hugely increases open rates.

Success!

Or so it seems. But perhaps you only generated curiosity opens. Perhaps the email message and offer don't deliver on the subject line's promise. Perhaps the mismatch alienates customers who delete, unsubscribe or, even worse, mark your email as spam.

By tracking throughout the funnel, marketers are able to see not only the effect of a change on intermediate metrics (like open rates), but also the effect on ultimate KPIs (like conversion rates).
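To make that concrete, here is a minimal illustrative sketch (with hypothetical numbers, not survey data) of how a subject line can "win" on opens yet lose on the metric that matters:

```python
def funnel_metrics(sent, opens, clicks, conversions):
    """Return rates at each stage of the email funnel, measured against sends."""
    return {
        "open_rate": opens / sent,
        "clickthrough_rate": clicks / sent,
        "conversion_rate": conversions / sent,
    }

# Variant A: straightforward subject line
a = funnel_metrics(sent=5000, opens=1000, clicks=300, conversions=60)

# Variant B: teaser subject line that drives "curiosity opens"
b = funnel_metrics(sent=5000, opens=1750, clicks=250, conversions=35)

print(f"A: open {a['open_rate']:.1%}, conversion {a['conversion_rate']:.2%}")
print(f"B: open {b['open_rate']:.1%}, conversion {b['conversion_rate']:.2%}")
# In these made-up numbers, B wins on opens (35.0% vs. 20.0%)
# but loses on conversions (0.70% vs. 1.20%)
```

Judged on open rate alone, Variant B looks like the winner; tracked through to conversion, it's the loser.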

For example, here is what one marketer discovered:

"We've been testing a variety of different factors to try to drive higher clicks. All of our tests have been effective to drive higher opens (averages of around 35% to lists of 5,000+) but our CTR is still frustratingly low."

Marketers struggle at learning from a series of tests

As a CMO responded in the Benchmark Report survey, "You have to monitor and test and monitor and test constantly."

Here's where marketers struggle more: only 31% very or somewhat routinely review tests and decide on follow-up tests.

Testing in isolation is of limited value.

The real value is learning from a test and using that new knowledge to inform future tests in a virtuous testing-optimization cycle. I like to think of this as marketing kaizen, a nod to the continuous improvement philosophy that has most notably been applied to manufacturing.

In addition to failing to run follow-up tests, 40% of marketers infrequently document findings at regularly scheduled times. Many years before the advent of Web optimization, David Ogilvy asked:
What is the reason for this failure to codify experience? Is it that advertising does not attract inquiring minds? Is it that any kind of scientific method is beyond the grasp of "creative" people? Are they afraid that knowledge would impose some discipline on their work?

Web Optimization Summit in New York City: Call for speakers

Have you run a series of split tests? Are you codifying your experiences? Don't let your good work get overlooked. We would love to highlight you on stage at Web Optimization Summit. If you're an e-commerce or subscription marketer with a conversion optimization or A/B testing case study to share, I encourage you to apply to speak at Web Optimization Summit. The application deadline is March 3.

Related Resources

Marketing Process: Managing your business leader's testing expectations

A/B Testing: One word will unclog your conversion testing

A/B Testing: How a landing page test yielded a 6% increase in leads

