Jan 07, 2004

The New Metrics Frontier: Predictive Modeling for Email Campaigns -- Two New Tools Reviewed

SUMMARY: Have you ever wanted a tool that would tell you exactly how to tweak your creative and frequency so names on your house list will be far more likely to open, click, and love getting email from your brand for months and years to come?

In this article, you'll learn about two new analysis systems that are being tested by emailers -- including Charles Schwab. Includes our honest reviews, useful links, and info from the Advertising Research Foundation.
Predictive modeling, the science of analyzing past campaign results in order to forecast and proactively improve future results, has long been the holy grail of direct marketing.

The question is, can predictive modeling be applied in any sophisticated way to email? We reviewed two different new analysis methodologies that claim to help emailers figure out exactly how to tweak future mailings for a better response, and enhance long-term reader engagement.

System #1: MR2's Taguchi Method-based Tactics

How it works:

The Taguchi Method was invented to help industrial researchers (such as automakers) who wanted to conduct tests with multiple elements, in a short time frame, with small test cells. (Link to more detailed info below from the American Supplier Institute.)

Dr Kowalick of MR2 (a respected product engineering consultant who does not have a marketing background) decided to try applying this to email campaigns for folks who had small lists and/or wanted to test a broad range of variables without creating a separate test cell for every single one of them.

Instead of classic tests where you measure one varying factor per test cell against your "control," you could measure hundreds or even thousands of varying factors -- and the way they interact with each other -- per test cell. And the cells could be as small as 100 names or fewer.
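MR2's actual algorithm is proprietary, so purely as an illustration of the underlying idea, here's a minimal sketch of a Taguchi-style orthogonal array. The seven creative factors named below are hypothetical examples, not from the Malloy test; the point is that an L8 array covers 7 two-level factors in just 8 balanced test cells rather than 2^7 = 128.

```python
# Illustrative sketch only: MR2's real method is not published.
# Build the standard L8 orthogonal array: 8 runs x 7 two-level columns,
# where the columns are 3 base factors plus their mod-2 interactions.

import itertools

def l8_orthogonal_array():
    """Return 8 rows of 7 factor levels (0 or 1), pairwise balanced."""
    runs = []
    for a, b, c in itertools.product([0, 1], repeat=3):
        runs.append([a, b, (a + b) % 2,
                     c, (a + c) % 2, (b + c) % 2, (a + b + c) % 2])
    return runs

# Hypothetical creative factors -- substitute your own.
factors = ["subject_style", "html_vs_text", "copy_length", "offer_type",
           "cta_wording", "image_count", "sender_name"]

for cell, levels in enumerate(l8_orthogonal_array(), start=1):
    settings = dict(zip(factors, levels))
    print(f"Test cell {cell}: {settings}")
```

Because every column contains each level exactly four times, and every pair of columns contains each level-combination equally often, the design lets you estimate each factor's main effect from far fewer cells than testing every combination.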

The returning data would tell you what factors go toward making campaign creative that wins.

If it works, this tactic could be a godsend for marketers with small lists, or those who need to launch completely new creative instead of just tweaks to what's already working.

Client testing it:

Malloy Insurance Services, a small firm based in Silicon Valley, whose regular campaigns to a house list of 7,500 weren't working very well, tested MR2's services in May 2003.

Over a two-week period, Malloy split his list into 12 test cells of 625 names each and sent each a very different email creative -- ranging from short HTML to long-copy text-only -- hoping to measure the success and interaction of more than 2,000 different specific creative factors.

Dr Kowalick tells us he was able to determine the winner and the specific elements within that creative that made it win with a "95-98% confidence level." (Link to samples of winner and loser below.)

Then Malloy relied on his list of winning elements to craft new email letters to clients and prospects fortnightly from then on.

Our opinion:

We were suspicious of any ad test that forecast results based on such a teeny number of responses per test cell. But since Ad Age, PBS, and Inc. Magazine have all said the idea is promising, we decided to suspend disbelief and ask expert Bill Cook, PhD, SVP Research & Standards at the Advertising Research Foundation, what he thinks.

Cook told us there's good news and bad news. First of all, sophisticated emailers certainly should be measuring interactions between creative elements instead of just each element in a vacuum by itself.

He explains, "Reducing the number of experimental combinations (or "cells" in statistical parlance) to be tested by the use of a fractional factorial design is a well-respected approach.

"Think of ice cream sundaes. You might love a sundae with cherries more than one without and you might like a sundae with raspberry sauce more than one without it, but that doesn't mean that you will like a sundae with both raspberry sauce and cherries better than a sundae with one of them, or even more than you would like a sundae without either of them."

However, here's the bad news: "Reducing 2,048 to 12 is a very radical surgery and is a highly fractional design; think thin like the ice on the Hudson River just now.

"In addition to worrying about the number of combinations to test, you have to worry about the number of people to be given each combination, and the right number is not just driven by the number of people that you are mailing to, but the number who are answering the email. You could end up with a lot of zero-response cells and be in worse shape than what the overly pared design would bring you."
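Cook's zero-response warning is easy to sanity-check with arithmetic. The response rates below are hypothetical, not from the Malloy test; treating each recipient as an independent yes/no trial, the chance a whole cell comes back empty is (1 - rate) raised to the cell size.

```python
# Back-of-the-envelope check on the "zero-response cell" risk.
# Response rates here are assumed for illustration; use your own list's.

def p_zero_responses(cell_size: int, response_rate: float) -> float:
    """Probability a cell of `cell_size` names yields no responses at all,
    modeling each recipient as an independent Bernoulli trial."""
    return (1.0 - response_rate) ** cell_size

for rate in (0.005, 0.01, 0.02):  # 0.5%, 1%, 2% response rates
    p0 = p_zero_responses(625, rate)
    print(f"response rate {rate:.1%}: "
          f"P(zero responses in a 625-name cell) = {p0:.2%}")
```

At a 0.5% response rate, a 625-name cell has a meaningful chance of producing no responses at all, which is exactly the failure mode Cook describes: the math can't recover a winner from cells full of zeros.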

Cook's advice: Don't use the Taguchi system (or any similar one) unless you have both larger cells and lots of personal, email-specific experience with similar creative combinations, so you can design tests and evaluate results properly.

Basically, it's not about the math; it's about the background email campaign knowledge you bring to it.

"Like an exotic high-powered car, Taguchi algorithms are not for the layperson, and just because you can buy a canned program to generate the combinations, doesn't mean you know how to use the tool."

System #2: Quris' RFMoe Engagement Predictive Model

How it works:

This system attempts to help you measure and forecast the health, responsiveness, and longevity of names in your house email list.

The problem with simply looking at open and click rates, says VP Marketing Simon Greenman, is that you are usually looking at one point in time -- one campaign's opens, clicks and sales.

Plus, typical measurement systems don't enable you to track the factor that consumers have repeatedly said is the most critical in their relationship with your company -- frequency of email.

It's very hard to test frequency, or to determine how the frequency preferences of sub-segments of your list, such as big fans versus rare openers, affect their impression of your brand and/or likelihood to buy.

Instead RFMoe tracks each name's open and click behavior over time and assigns a "momentum score." Greenman suggests you examine at least 90-180 days of activity for an individual or a group to get reliable momentum stats.
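Quris has not published the RFMoe formula, so the sketch below is our own hypothetical stand-in for a "momentum score": it simply compares a name's open/click events in the most recent 90-day window against the 90 days before that, which matches Greenman's suggestion of examining 90-180 days of activity.

```python
# Hypothetical momentum score -- NOT Quris' actual RFMoe model.
# Positive = warming up; negative = losing interest; 0 = flat or no data.

from datetime import date, timedelta

def momentum_score(events: list, as_of: date, window_days: int = 90) -> float:
    """Compare event counts (opens/clicks) in the latest window against
    the window before it, normalized to the range -1.0 .. +1.0."""
    recent_start = as_of - timedelta(days=window_days)
    prior_start = as_of - timedelta(days=2 * window_days)
    recent = sum(1 for d in events if recent_start <= d <= as_of)
    prior = sum(1 for d in events if prior_start <= d < recent_start)
    total = recent + prior
    return (recent - prior) / total if total else 0.0

today = date(2004, 1, 7)
# A name whose opens all happened 100-170 days ago -- cooling off.
fading = [today - timedelta(days=n) for n in (100, 120, 150, 170)]
print(momentum_score(fading, today))  # -1.0: every event is in the older window
```

Scoring each name this way, then averaging by segment, would surface the groups "losing interest" that the article describes, so you could test frequency or creative changes on them before they unsubscribe.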

Then you'll be able to see which types of people are losing interest in your regular mailings, and how you can affect their interest-levels by altering elements such as frequency or creative.

Client testing it:

Charles Schwab's email department has been testing the system with segments of its house lists for more than 60 days. It is scoring names based on recency of opens and clicks, and frequency of opens and clicks.

The goal is to identify which customers are becoming bored with Schwab email, and to test factors to reverse the downward trend. They don't have firm results data yet, but we'll bring it to you when they do.

Our opinion:

We love this idea, especially because it can be used to shed real light on the true lifetime of names on your house list, and then give you a bit of control over it before things get so bad that folks click "unsubscribe" or automatically associate "delete" with your brand name.

In postal mail, it's long been known that people who joined your list within the past 30 days are far more involved with your brand and more likely to respond than everyone else. After that mark, responsiveness drops off sharply over time until the average name is close to worthless to you.

We've asked many emailers if they were measuring this critical metric of response lifetime in the past, and almost no one has ever said "yes." This system makes it easier, and we're in favor of that.

-> Useful links related to this story

Samples of the best and worst-performing emails from the Malloy Insurance tests using MR2's system:

American Supplier Institute explains what the Taguchi Method is:



Advertising Research Foundation

Drilling Down: Turning Customer Data into Profits with a Spreadsheet (Book)

