April 17, 2002
Article

The Three Major Problems with Online Marketing Metrics (And What You Can Do About Them)

SUMMARY: Have you ever noticed that online marketing metrics rarely tell you what you really need to know to determine whether your campaign worked, or how to improve on it? Our Marketing Technology Columnist, Alexis D. Gutzman, explains why it is that all those thick metrics reports do not give you the data you need. Plus, she shares truly practical tips on what you can do about it. After you read this article, we suggest you share it with the techies behind your Web marketing campaigns so they can begin to understand marketing's needs.
By Alexis D. Gutzman, Marketing Technology Columnist

When we conducted our reader survey in February, the
number one thing you told us you wanted to read more
about – from a practical, hands-on perspective – was
metrics. I could not have been happier with your
answers. As it happens, I love marketing metrics.

Most discussions of metrics start with a definition
of all the terms you need to know. I detest that
approach. Metrics are hardly new. Marketers have
been measuring the effect of their own efforts since
the beginning of time. I imagine even cavemen
monitored the results of their drawings to determine
which cave walls produced the best results.

Why is it, then, that marketers with a clear grasp of
offline metrics are so confused by online metrics?
Because online metrics were devised not by marketers,
but by technologists. That is the first major problem
with online metrics.

Technologists measured what they *could*, not what
marketers needed to know. Online metrics (even email
metrics) are really Web metrics, which are all the
result of what Web server logs collect. If you have
ever seen a Web server log, you know that it collects
virtually everything there is to collect about a
visitor to a site, which is not to say that it
collects what you, as a marketer, need to know.

Online metrics are usually bottom-up metrics, not
top-down metrics. Using bottom-up metrics to
determine success, or even to measure improvement,
is a recipe for failure.

Consider the sacrifice bunt in baseball. One of the
metrics of a baseball player’s effectiveness is his
batting average. If a player were only concerned with
his own batting average, he would seldom consider
bunting, because rarely does the batter who bunts get
on base. However, there are times when a team needs
to sacrifice the current batter to advance a runner
already on base, in the hope of bringing that runner
home on a later batter's hit. Without the sacrifice
bunt, the team might lose. Certainly, wins are a more
important metric than batting average.

Many marketers are letting the lowest-level metrics
drive campaigns. "Stickiness" is a good example of this.
Stickiness is easily measured as a combination of the
time a visitor spends on a Web site and the number of
pages viewed. Stickiness is good, right? More time on the
site and more pages viewed mean better success, right?

If I click through an affiliate link to Amazon after
reading a book review at About.com, and I only see two
pages at Amazon in the process, is that an indication of
site failure? Read this week's case study on the redesign
of the WebTrends Web site
(http://www.marketingsherpa.com/sample.cfm?contentID=1999)
to learn more about how to use metrics to measure Web
site effectiveness.

The problem with online metrics is that the bottom-up
approach has been dominant for so long that most people
still think that stickiness is inherently good.
Convenience stores are so sticky for teenagers that
stores have come to post "No loitering" signs in the
parking lots. Unless you have separate data that shows
that the more time a customer spends at your site the
more he or she buys (or the less you need to spend on
customer service or returns), your goal should not be
to drag out the customer’s experience. Yet stickiness
is one of those things that technology-driven metrics
software will easily report for you.
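To see just how low-level a metric it is, here is a
minimal sketch of how stickiness falls out of the raw
hits. It assumes each hit has already been reduced to a
visitor ID and a Unix timestamp, and it uses an arbitrary
30-minute idle timeout to split sessions; both are
assumptions, not anything your particular package does.

from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # assumed: 30 idle minutes end a session

def stickiness(hits):
    """Return (average pages per session, average session seconds)."""
    by_visitor = defaultdict(list)
    for visitor_id, ts in hits:
        by_visitor[visitor_id].append(ts)

    sessions = []  # (page_views, duration_in_seconds) per session
    for times in by_visitor.values():
        times.sort()
        start, last, pages = times[0], times[0], 1
        for ts in times[1:]:
            if ts - last > SESSION_TIMEOUT:
                sessions.append((pages, last - start))
                start, pages = ts, 0
            pages += 1
            last = ts
        sessions.append((pages, last - start))

    if not sessions:
        return 0.0, 0.0
    avg_pages = sum(p for p, _ in sessions) / len(sessions)
    avg_seconds = sum(d for _, d in sessions) / len(sessions)
    return avg_pages, avg_seconds

Notice that nothing in that calculation says anything
about whether anyone bought, subscribed, or found what
they came for.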

Since the Web server is where all metrics are collected
(whether those metrics are for Web site, banner,
affiliate, viral, or email campaigns), you should have
some idea of what the Web server is collecting (a rough
parsing sketch follows the list):

* IP address of the requestor (or of the proxy
server or ISP of the requestor)
* Browser version of the requestor
* Language of the requestor
* Page requested
* Referring page (the page the visitor clicked to get
to the current one)
* Date and time
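As a rough sketch of what sits behind that list, here is
how those fields might be pulled out of a single log
line, assuming the common Apache "combined" log format.
(The visitor's language comes from the Accept-Language
header and appears only if the server is configured to
log it, so it is left out here.)

import re

# Assumed layout: IP, identd, user, [date/time], "request",
# status, bytes, "referrer", "user agent" -- the standard
# "combined" log format.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<datetime>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<page>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<browser>[^"]*)"'
)

def parse_line(line):
    """Return a dict of the marketer-relevant fields, or None if no match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

sample = ('10.0.0.1 - - [17/Apr/2002:09:15:32 -0500] '
          '"GET /report.html HTTP/1.0" 200 5120 '
          '"http://www.example.com/review.html" "Mozilla/4.0"')
print(parse_line(sample))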


That is the free and easy list. With just a little
programming, or with software slightly more sophisticated
than a simple Web server log analyzer, you can also
collect the following (a sketch follows the list):

* Number of unique visitors
* Number of returning visitors
* Number of new visitors
* Number of times per month the average visitor
returns
* Frequency with which an image is loaded or a
downloadable file is viewed
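As a sketch of that second tier, suppose each hit has
already been reduced to a visitor ID and a date (IP
address plus browser string is a crude stand-in for a
visitor ID; a cookie is more reliable). Unique, new, and
returning visitors for a month might then be counted
like this:

from datetime import date

def visitor_counts(hits, month_start, month_end):
    """Return (unique, new, returning) visitor counts for the month."""
    seen_before = set()    # visitors first seen prior to the month
    seen_in_month = set()  # visitors seen during the month
    for visitor_id, day in hits:
        if day < month_start:
            seen_before.add(visitor_id)
        elif day <= month_end:
            seen_in_month.add(visitor_id)

    unique = len(seen_in_month)
    returning = len(seen_in_month & seen_before)
    new = unique - returning
    return unique, new, returning

hits = [
    ("10.0.0.1|Mozilla/4.0", date(2002, 3, 12)),
    ("10.0.0.1|Mozilla/4.0", date(2002, 4, 3)),
    ("10.0.0.7|Mozilla/5.0", date(2002, 4, 17)),
]
print(visitor_counts(hits, date(2002, 4, 1), date(2002, 4, 30)))  # (2, 1, 1)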

It is only natural that techies would want to convert
these millions of lines of Web server logs into
"meaningful reports." What can they tell you with
these data points?

1. From the IP address, they can do a reasonable
job of telling you what country or what part of the
US a visitor was coming from (except for AOL visitors
who all look like they’re coming from Herndon, VA).

2. From the browser version they can tell you whether
your visitors are likely to be able to use the Flash
version or the frames version of the site.

3. From the combination of referring and requested
pages, they can tell you what paths visitors took through
the site.

4. From the referring pages, they can tell you which
search engines were sending traffic and for which
keywords (as sketched below), and which banner ads,
affiliate links, and partner sites were sending traffic.
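As a sketch of point 4, the search engine and keywords
can be read right out of the referring URL. The query
parameter names below are assumptions; they vary by
engine and change over time.

from urllib.parse import urlparse, parse_qs

QUERY_PARAMS = {"google": "q", "yahoo": "p", "altavista": "q"}  # assumed names

def search_referrer(referrer):
    """Return (engine, keywords) if the referrer looks like a search results page."""
    parts = urlparse(referrer)
    query = parse_qs(parts.query)
    for engine, param in QUERY_PARAMS.items():
        if engine in parts.netloc.lower() and param in query:
            return engine, query[param][0]
    return None

print(search_referrer("http://www.google.com/search?q=marketing+metrics"))
# ('google', 'marketing metrics')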


That brings us to the second major problem with Web site
metrics in particular, but with all types of online
metrics to some degree: the presence of a measurement
report tends to make marketers believe that they have
the data they need to do their jobs. If I hand you a
12-page report full of numbers, how likely are you to
review the data and make a list of what you really
wanted to know? Most marketers will treat the report as
comprehensive, then try to make sense of what they are
getting.

The final major problem is, likewise, more with the
marketers than with the metrics themselves. Fortunately,
this is the one about which you can do something today.

Having useful metrics at the end of a campaign requires
advance planning. How often have you come to the end of
a campaign and mused that it would be nice to know
something that you did not plan to measure, and thus
did not measure when you had the chance?

What do you want to *know*?

We launched a new site this morning (SherpaWeekly,
which you can find at http://www.sherpaweekly.com).
One of the things we wanted to know was what the
abandonment rate was for an opt-in offer. The offer
is on the home page. Co-registration offers are on
the second page. The third page shows the link to
the free report, as well as our three best-selling
store reports. The last page shows the report in
PDF format.

Most Web analytics software won't give you the
opportunity to create an "Abandonment Rate" report.
It will, however, permit you to indicate which pages
you want to watch, then to see how often each of those
pages is requested in a given period of time. By
looking for the falloff between one page and the next
in a sequence, all the way through the opt-in,
registration, or checkout process, you can derive an
abandonment rate for the process.
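As a minimal sketch, with made-up page names and request
counts for the same period, deriving the abandonment rate
from that falloff looks like this:

funnel = [
    ("offer page", 1000),
    ("co-registration page", 450),
    ("reports page", 400),
    ("PDF report", 300),
]

# Falloff between each step, plus the overall abandonment rate.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    dropped = 1 - next_count / count
    print(f"{step} -> {next_step}: {dropped:.0%} abandoned")

overall = 1 - funnel[-1][1] / funnel[0][1]
print(f"Overall abandonment: {overall:.0%}")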

In our case, we had to be a bit creative. Our site is
coded in such a way that we actually use the same file
to display both the second and third pages, and to
process opt-ins from other pages. This is not at all
unusual, by the way. Many sites use the same code to
handle opt-ins or registration wherever it appears on
the site. Since that second/third page file can be
called from multiple places, just looking at how often
the page is requested is not enough.

To address the problem of measuring the effectiveness
of this one offer tunnel (the abandonment rate of the
process), we created multiple 1x1-pixel clear gifs that
only appear on the second/third page file when it is
being used to process opt-ins coming from the new offer.
We had to tell the Web analytics software to watch and
track the number of times each of the gifs is requested.
Without having planned this out in advance, we might
look at the number of times the offer page was requested,
then look at the number of times the co-registration page
was requested, and find that 300% of people who read the
offer opted in! Wait! That is not useful data at all! Seeing
the numbers fall off (or not) from the offer, through the
co-reg, through the reports page, and finally through the
actual PDF report can tell us how effective the offer and
the copy are, whether the instructions are clear, and
whether subscribers are having trouble viewing the PDF file.
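For illustration only, here is one way the shared
second/third-page code might decide whether to emit a
step-specific clear gif; the offer code and gif file
names are hypothetical, not how our site is actually
written.

TRACKING_GIFS = {
    ("sherpaweekly-offer", 2): "/img/track_offer_step2.gif",
    ("sherpaweekly-offer", 3): "/img/track_offer_step3.gif",
}

def tracking_tag(offer_code, step):
    """Return an invisible 1x1 image tag for this offer and step, or nothing."""
    gif = TRACKING_GIFS.get((offer_code, step))
    if gif is None:
        return ""  # page reached some other way; do not count it for this offer
    return '<img src="%s" width="1" height="1" alt="">' % gif

The analytics package then simply counts requests for
each gif over the period, the same way it counts
requests for any other file.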

By planning ahead to trap the data we know we will need to
measure effectiveness and make improvements the next time
around, we can measure what we need to measure.

Whatever it is you need to measure …
* Relative effectiveness of an offer based on where
on the page it appears
* The way that visitors traverse your site, and how
often they have to click the Back button to find what
they want
* Your shopping cart abandonment rate

… you cannot rely on software to tell you after the fact
what you neglected to ask it to record for you.

Now that you are aware of the shortcomings of typical
online metrics (that technologists measure what they
*can* measure, and that marketers tend to judge their
effectiveness by whatever happens to have been measured,
regardless of whether it is an accurate predictor), you
can plan ahead to have the right metrics available
throughout the campaign.

Meaningless marketing metrics may be worse than no
metrics at all.
