May 06, 2008
Event Presentation

Sherpa’s Online Advertising Presentation 2008: New Research on What Ads Work & How to Improve ROI

SUMMARY: If you couldn't listen to MarketingSherpa's recent teleseminar on online advertising, just click below to get your audio presentation and discover what 951 marketers revealed about what works in online advertising.

Includes 12 new charts and an eyetracking heatmap. This year’s highlights include:
- How to improve the effectiveness of your ads by up to 219%
- How to increase rich media clickthrough rates by up to 132%
- Which tests will lift your advertising ROI the most
This is your downloadable version of the teleseminar presented April 23, 2008, by Stefan Tornquist, Research Director, and Tim McAtee, Senior Analyst and lead author of the 2008 Online Advertising Handbook & Benchmarks.

#1. Click on this link to download the PowerPoint presentation, including 20 slides with 12 new charts and an eyetracking heatmap:
http://www.marketingsherpa.com/tele/OnlineAd.pdf


#2. Click this link to download the MP3 recording of the teleseminar: http://www.MarketingSherpa.com/tele/OnlineAd08.mp3

#3. Here is the transcript of the teleconference:

Stefan Tornquist: Hello, welcome to today’s teleseminar covering Online Advertising 2008: What Works, What Doesn’t and Why. The research that we’ll be presenting today is the result of work that went into the 2008 Online Advertising Report Handbook and Benchmarks. I’m Stefan Tornquist, Research Director at MarketingSherpa. I’m joined today by Tim McAtee, Senior Analyst and lead author on the online advertising report. Welcome, Tim.

Tim McAtee: Hi, Stefan. Thanks.

Stefan Tornquist: This is the first time that MarketingSherpa has written a report about online advertising and we’re eager for your feedback. After this teleseminar, please send any comments, any questions about online advertising, things that we can focus on in coming years, because this is going to become an annual report. There’s been an awful lot of interest, a lot of coverage in the press and that sort of confirmed one of the suspicions we had in designing this report: For all of the talk, and the money spent on display advertising, there’s a surprising lack of research.

We really wanted to answer some essential questions. Does display advertising work – that's the big one – and how does it work? If it works, what is its primary function, and are we measuring it correctly? We’ll be talking a lot about that later. Then how do you reduce media waste and optimize performance? Obviously those are big questions. Finally, how do you match your online advertising strategy with your business goals?

Again, we’d very much appreciate your comments as we move through this presentation, and we’ll be answering some of the questions that you gave us during registration at the end.

With that, let’s take a look at how the data for the study was gathered. Let’s move to slide number 2. Now here you’ll see that our primary source is online display marketers themselves. We did an in-house survey of 577 folks in various appropriate Sherpa lists. We also took advantage of a late 2007 study that we conducted with attendees of ad:tech, and 374 gave their responses.

In addition to that, Tim and an analyst working with him arranged for several unique special reports coming from maniaTV, InsightExpress, Unicast and several others, as well as some eyetracking research that I think – having done eyetracking for the last several years – is really some of the most interesting eyetracking research that we’ve seen to date.

So, with all of that out of the way, let’s jump into the content for the report. Tim, we’re now on slide 3, the Big Picture. Where are we headed here?

Tim McAtee: Well, Stefan, we’re going to start off looking at some norms of effectiveness, kind of what marketers can expect, budget levels, that sort of thing, and kind of wrap our heads around what’s happening right now in the marketplace and what we can expect to be seeing in 2008 and even into 2009.

Stefan Tornquist: All right, so as we move to slide number 4, Despite the clutter: creative is improving. This chart tracks some norms. Tim, what are these norms? How does an ad effectiveness study really work?

Tim McAtee: Well, any time an ad effectiveness study is run, what they’ll do is they’ll simultaneously collect a control and an exposed group of respondents. What we’re looking at is the difference. So, the difference is always going to be called the delta. That difference is attributable solely to the online ad exposure, because by comparing these two groups we’ve taken away any variables between the two except for that ad exposure.

What we’re looking at here with this chart is we’re seeing that it’s normal for there to be a difference for online advertising awareness of 3.6% overall, which means that for every 100 people that come to a website on which an ad is served, 3 1/2 of them are going to have noticed the ad who wouldn’t have otherwise. That same math holds true across the board here.
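[Editor's note: The control-versus-exposed delta arithmetic Tim describes can be sketched in a few lines of Python. The function and the numbers below are illustrative, not taken from the study itself.]

```python
# Ad effectiveness delta: the lift attributable solely to ad exposure,
# measured as the exposed group's rate minus the control group's rate.

def awareness_delta(control_aware, control_n, exposed_aware, exposed_n):
    """Return the delta, in percentage points, between exposed and control groups."""
    control_rate = control_aware / control_n
    exposed_rate = exposed_aware / exposed_n
    return (exposed_rate - control_rate) * 100

# Illustrative numbers: 10.0% ad awareness in the control group versus
# 13.6% in the exposed group yields a 3.6-point delta; for every 100 people
# served the ad, about 3.6 noticed it who otherwise wouldn't have.
delta = awareness_delta(100, 1000, 136, 1000)
print(round(delta, 1))  # 3.6
```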

What we’re seeing here among these norms is that overall advertising is doing a little bit better when we compare 2007 to the overall norms, which means that advertisers are doing a pretty decent job. We’re also seeing that the majority of metrics saw increases, and that list ranges from a 13% lift for unaided brand awareness to a 23% lift for intent.

So, online ad awareness deltas, and unaided brand awareness deltas, did decrease a bit, which I think is the result of an increasingly cluttered online landscape. Hence the title there, Despite Clutter.

Stefan Tornquist: All right. So let’s move on to our next slide, and we’re now on slide number 5. Here we’re looking at – Spending Reflects the Size of the Target. Well, that makes sense. Do you want to talk us through this?

Tim McAtee: Sure. Again, this is, I think, just something that makes sense at a pretty basic level here. Just the more people you have that you’re advertising to, the more you’re going to wind up spending. So I think this is a good place for advertisers to just get a bearing based on where they are with their own targets, where they should be spending, and I think it’s really interesting to see that there is a pretty huge range there that someone with a huge mass audience -- you’ve got everything from $10k to $99k all the way up to a million plus. Granted, they’re more likely to spend more, again, because they do have the bigger targets.

I think one of the interesting things here, too, is just with general B-to-B, you do see that sort of sweet spot of 100k to just under a million, which, again, I think, just speaks a lot to just prices right now, and how big the target is. It makes quite a bit of sense.

Stefan Tornquist: All right. Let me play devil’s advocate for a second. For companies that are spending smaller amounts on display, is it really practical to do the kinds of brand awareness and these fairly long-term tests to really track the ROI of online marketing?

Does it make sense – if I’m a smaller B-to-B advertiser, I’ve probably bought display ads on the same sites that I used to, the same publishers where I used to be a print advertiser, or still a print advertiser? If I’m spending less than $50k online, does it make sense to engage in these kinds of tests, or are there other kinds of tests that make more sense for the smaller budget online advertiser?

Tim McAtee: Well, I think we have to differentiate between small targets and small budgets. Whether you have a huge target or – it’s very possible to have a huge target and a small budget -- or a small target and a huge budget. It really just depends on what your profit margin is, and what it takes to get a decent ROI. If you do have a huge target, but you have really slim margins, then maybe it doesn’t make a ton of sense to spend a lot of money.

Alternatively, if you’re selling nuclear reactors or something, like GE, you can have a huge cost per ad – a huge cost per sale, and have that still make sense. So it really does sort of depend on your margins and just getting the right message in front of the right people.

Stefan Tornquist: So the implication there, there’s a lot here that talks to media modeling, as well, right? If you are in a position where maybe brand effectiveness studies aren’t the optimal measure, you can base your investments on a media model. We get questions about this very often. There are specific agencies that do media modeling. Is there a good reason to go beyond your regular agency to look at those specialists?

Tim McAtee: Again, any time you’re testing or modeling – when you’re modeling, you’re sort of taking a leap of faith. The better your model, the more likely you are to spend your money wisely once you spend it. When you’re testing, there’s always going to be a bit of an opportunity cost.

With an ad effectiveness study, again, there’s always going to be your control audience. If your target is CEOs who create a certain product, and it’s very small, very limited, having half of your target audience go dark while you test half the CEOs versus the other half of the CEOs to see who spends more is kind of idiotic, frankly, just because your opportunity cost just doesn’t make sense. If, however, you’re selling gum to the Chinese, suddenly it’s very easy to sort of black out a million people and see what that difference is in real time and test that.

Again, if you find yourself in the situation where the opportunity cost is just too great to ignore, then modeling is absolutely the way to go. With anything, you get what you pay for. If you go to a company that specializes in modeling, I think you are going to get what you pay for and, in the end, save a lot of money doing it that way.

Stefan Tornquist: All right. Well, let’s turn to our next slide. We’re now on slide number 6 of the presentation. Agencies Get Results at a Price. This brings up a really hot question – it’s been a hot question for the last 15 years as online formats have become more and more important in marketing plans – which is: Are all agencies created equal? Do full-service agencies – are they able to really optimize online campaigns? Tim, why don’t you talk us through this slide.

Tim McAtee: Sure. What we’re looking at here is just – from left to right, absolutely worth the money, good results but too expensive, average results but cheap, and all the way on the far right, poor results and expensive. We’re seeing a pretty massive variance, but it seems like, just in terms of who’s absolutely worth the money, search marketing agencies and analytics consultants, even online boutique agencies, got the best scores there.

Full-service agencies, interestingly: 55% of respondents who use one said good results but far too expensive. Media agencies were kind of half and half there. A lot of people thought they were expensive. Twenty percent said poor results and expensive; so sort of the worst of the bunch here. Then I think what’s really interesting is analytics consultants. There’s no middle ground there. It’s either all the way on the left, absolutely worth the money, or all the way on the right with poor results and expensive.

I think what that says is – for the same reason that hiring a personal trainer doesn’t actually make you lose weight -- simply employing an analytics consultant isn’t enough to do the trick. Implementing changes in analytics is a major commitment that no consultant can do for you. If you do it right, you get this huge payoff, it’s great. But, if you just sort of are looking for a quick fix, you’re just wasting your money basically. I don’t know, just interesting results there.

Stefan Tornquist: All right, well let’s move on and take a look at some of the lessons learned from the different creative tests. Let’s move beyond slide 7 into slide 8 and take a look at how these studies were conducted.

Tim McAtee: From the macro of those last few slides down to the incredible micro of this stuff here, I think it’s really interesting here that we look at – this is from a study by InsightExpress, comparing a Volvo ad and a Mitsubishi ad. What they did in the first one there is a pretty simple thing. I don’t know if you can even see it on that slide, but in the top right corner they added a little Volvo logo, and just that little tiny logo resulted in an 86% increase in brand recognition among people who saw those ads, control versus test.

Then the second one there with the Mitsubishi ad: What they did was they kept that Mitsubishi logo in every other frame, along with the name, and then interspersed it in between the other sort of creative messaging, so that it was really obvious who did the ad. With that one you saw a 219% increase in brand recognition.

So, really what that says is a couple things:

It says that there are a lot of things on the screen that are competing for the attention of your viewers. You only have just a very brief moment to get their attention and to show your message and say what you have to say.

The other thing is just that with frequency, with repetition, that really does help drive home points that you were trying to get across. You do want to make sure that at any point during the time that they’re on a page they see what you want them to see and that when they do, you hammer it home, I guess.

Stefan Tornquist: That’s very interesting. The point that something as small as the inclusion of a logo – with all of the time that we spend thinking about the creative for a campaign… You think about some of the more sophisticated rich media campaigns, and an awful lot of thought and money goes into the design, of course, to the media buy, and yet something so small as the inclusion of a logo from the first frame through the end can have a dramatic difference. There’s a very practical lesson that we can all take away.

Let’s move on to the next slide. We’re now on slide number 9 of the presentation. I’ve got to say, this example from Celebrex brings back a lot of memories because, in a prior life, I started a rich media company. Our premise at the time was: Why try to take people off the page they’re surfing when rich media can let them experience, essentially, an entire landing page on the page? Now this was back 10 years ago. That same question is still there, and it’s underpinned by the same basic issue: Clickthrough rates are extremely low.

Perhaps in the course of talking to this slide, Tim, you can address the question of should we even be looking at online ad clickthrough rates?

Tim McAtee: I’m guessing that back when you were working there, clickthrough rates were still probably 5%.

Stefan Tornquist: Well, pretty close to it. Of course –

Tim McAtee: Was it ’75?

Stefan Tornquist: Yeah, ‘Starsky and Hutch’ was number one on TV. What was interesting, though, was in a very short period of time, clickthrough rates went from 5% down to 3, down to 2, and then the Internet bust happened. Now at this point where are we?

Tim McAtee: Now we’re at a 0.25% average clickthrough rate. The thing is that there are so few people who click. As we saw in some of the other studies, it’s funny, the 80/20 rule kind of applies to clickers: 20% of the online audience makes 80% of the clicks. I don’t know who these people are, but they like to click.

Stefan Tornquist: I’m pretty sure they’re the ones keeping spam alive.

Tim McAtee: Yeah, I think so. The point is, I guess, that people are just so burnt out by clicking that it really has become, at best, a secondary objective for advertising. Really, the thing is that if you’re going to be doing display advertising, you should be focusing on branding.

These types of ads where they take a lot of data that would have otherwise been buried and never seen, and you bring that up, and you make it easy for someone to educate themselves, and once someone has sort of a baseline level of understanding, they are going to be a lot more likely to click, or to give you their email address, or to further that conversation. Much the same way that when you meet a stranger there has to be some sort of pleasantry involved before you become friends. There’s got to be this middle ground here.

I think a lot of these rollover, expandable ads do a pretty good job of getting a lot of information out there that would have otherwise been hidden.

Stefan Tornquist: Definitely. All right, well let’s go on to our next slide, slide number 10 of the presentation. Video Works Better at Communicating. So, as I look across these different ad types, and the percentage of people who understood the message: It doesn’t seem like the difference between in-page video and a GIF – I mean there’s certainly a difference there, it’s a significant difference – but does it necessarily justify the cost? That would be one question. At the same time, I can only suspect that the kind of message that can be communicated by video, and very quickly communicated, is much more sophisticated than what a GIF can do.

Tim McAtee: Yeah, absolutely. This is just demonstrating quantitatively what happens when you kind of pack more communicative power into an ad unit. It just really does make a difference in how much information people soak up from your ad.

Stefan Tornquist: And the number of messages that can be contained in a very short period of time. A GIF can get across that a new Toyota is available, perhaps, or get across that it’s on sale. It becomes more and more difficult in a static image to convey both of those things, let alone any of the sorts of sexier brand affiliation that the video can communicate.

Tim McAtee: Right, and you see that – part of why TV advertising does so well is not because it’s on a TV. It’s because it’s video. There’s absolutely no reason why, if you have a relatively captive audience, such as someone who’s waiting for some in-stream entertainment to load, they’re basically engaged in the exact same action that they would be on a television. The only difference really is just – there’s really not a ton of difference.

Stefan Tornquist: Well, if there’s a difference it’s probably to the advantage of the online medium, at least at the moment, because they’re sitting in front of a desktop with fewer distractions than they have on TV, perhaps.

We’ll probably see, over time, the interaction rates and effectiveness rates of things online, in terms of online video, we’ll probably see those things back off like we saw the banner ad. Right now you can see that, based on where larger branding-oriented companies are putting their money, that in-stream video, desktop video, is of a very high degree of interest to all of these guys.

Tim McAtee: Yeah. I’ll leave it at that. I will just say that my money is on streaming video. I think there’s going to be a huge future there for ad-supported content.

Stefan Tornquist: All right. Well, let’s take a look at the next section, Targeting ROI. Here we’re getting into media buying and analytics.

Tim McAtee: OK, if you move on to slide 12, what we’re seeing here is that online advertising is getting more effective, thanks to better targeting. According to this chart, advertisers rate the ability to use behavioral and contextual targeting as the best ways to achieve great ROI. InsightExpress also cites targeting as a key driver of effectiveness in their studies that they’ve run.

The nature of targeting itself is rapidly changing as technology enables real-time logic in deciding when an ad is served and which ad to serve to specific people, and the reach of large ad-serving publisher networks allows niche audiences to be pulled from the crowd and treated differently. For advertisers seeking tiny vertical niche sites, services like comScore, Nielsen or Quantcast allow marketers to quickly and easily find obscure targets.

The key takeaway for advertisers is that the context in which an ad is served is as important as the ad itself. So, think about direct marketing in the traditional world. The list is the most important variable in success. If the consumer is not in the proper state of mind, or simply does not fall into the group of people who would ever have a reason to consider a product, then the impression is wasted.

From an ROI perspective, eliminating wasted impressions, then making a good impression by serving up great advertising, is consistently the best option for advertisers.

Stefan Tornquist: Well, that makes sense. One other note on this: You may look down and see that rich media ads don’t seem to stack up all that well in comparison with some of the other tactics, but I would almost say that this is an apples-to-oranges comparison, because I think rich media ads are the means, not the end, whereas the targeting mechanisms are the end. It’s when you put rich media in combination with sophisticated targeting that it’s really going to be at its most powerful, like anything else.

Tim McAtee: Absolutely.

Stefan Tornquist: All right. Well, let’s turn to slide number 13 in the presentation. Take a look at – of the various tests that you looked at, it appears that the qualitative tests, those tests that gather the opinions of people, were those that had the highest impact on ROI, which to those of us in the direct world seems a little counterintuitive, perhaps.

Tim McAtee: Maybe this is a bit of a personal bias that I’ve had for a long time, but I have always thought that great advertising is always more important than all the analytics and all that stuff. If you don’t connect with people – it’s about people – you kind of lose the point.

I just found it really fascinating that when we actually went out and asked advertisers, “What among these tests has the best impact on your return on investment?”, that it was these qualitative tests. So again, it just really speaks to the fact that you have to go out and you have to really understand your consumer and stop basically spamming them with bad advertising.

Stefan Tornquist: All right, fair enough. Let’s move on to the next slide. We’re now on slide number 14. This is the eyetracking study that we mentioned earlier in the broadcast. You’ll notice the heat map, or one of the heat maps, from which this data was drawn. This chart, this dinosaur’s tail, what are we seeing here?

Tim McAtee: OK, this is kind of a funky chart, but let me walk you through this. If you look along the bottom of the chart, you’re going to see right block had one partner, center right. Right block had two partners, center left. Then center left you’ve got one ad that’s above the fold there. The rest of those kind of trail down along this long page below the fold. Now the top line here represents all of the ads that were ever visible among all the people that we showed this page to for the eyetracking study.

Basically what happened was 100% of the people saw that ad above the fold because 100% of the people looked at the page. The farther down you go, the more people drop off, because they never quite scrolled that far. So, the ad itself never even appeared on the page. Yet, the way that ads are counted right now, when people buy impressions, all those ads would have been bought impressions. Somebody bought those ads, even though they never made it in front of human eyes.

Right there, that’s just something you’ve really got to be paying attention to if you’re a media buyer: Just where on the page your ad is actually being shown, because you’re lopping off a big chunk of your views.

The bottom line there is the percent seen. What this represents is the percent of people that actually saw the ad, looked at it. One of the kind of really interesting things is that column one, ad two, kind of all the way – two-thirds of the way down. That ad actually did a lot better than some of the bigger ads that were farther up, simply because of where it was on the page. If you’re kind of evaluating the eye flow of how somebody consumes the content on the page, what’s happening is the track that somebody follows with their eyes doesn’t necessarily have much to do with kind of a steady consumption of content, left right, straight down the page. It really does sort of skip around.

The big takeaways here are that it’s really important to make sure that when you’re considering placements, that you are looking for placements that sort of naturally fall within your eye flow and, on top of that, that you are factoring in what the actual effective viewing of your ads are as opposed to the bought number.
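[Editor's note: One way to act on that takeaway is to discount the bought impression count by the measured in-view and percent-seen rates, giving a cost per impression a human actually looked at. A rough sketch, with made-up rates rather than figures from the study:]

```python
# Effective CPM: adjust the bought impression count by the share of
# impressions that actually rendered in view and were actually looked at.

def effective_cpm(cpm_paid, pct_in_view, pct_looked):
    """Cost per 1,000 impressions a human actually saw.

    cpm_paid    -- the rate paid per 1,000 bought impressions
    pct_in_view -- fraction of bought impressions that scrolled into view
    pct_looked  -- fraction of in-view impressions the eye landed on
    """
    seen_fraction = pct_in_view * pct_looked
    return cpm_paid / seen_fraction

# Illustrative: a $5 CPM where only 60% of impressions ever scroll into
# view and 50% of those are looked at means you are really paying
# $16.67 per 1,000 seen impressions -- a useful negotiating number.
print(round(effective_cpm(5.0, 0.60, 0.50), 2))  # 16.67
```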

Stefan Tornquist: That might potentially be a point of negotiation in the media-buying process.

Tim McAtee: Yeah, absolutely. On the other side, I think for publishers who want to switch to kind of a more linear, sort of streaming format, such as video or Ajax or something, where basically instead of having a huge long page that you scroll, you simply have a much shorter page that you’re kind of going deeper into in one place. Right there, that’s another argument for testing that: To see if fewer ads can actually be worth more money because they get more eyeballs.

Stefan Tornquist: Interesting. All right. Let’s move on to slide number 15 in our presentation. Well, no surprise here. Frequency is a key factor. Maybe the degree of difference is what’s surprising.

Tim McAtee: Yeah, yeah. This was a study done by InsightExpress where they split up two different exposed groups. One was just everybody, so that’s the 5.4 and 6.5% delta. Then on the right, you’ve got 29.3 and 20.8% as your deltas, and that’s just among people who are exposed four or more times to the ad.

What this is saying is that – it’s not necessarily saying that you need to go out there and put a million ads in front of everybody. What it’s saying is that you really need to be aware of how many ads people are seeing. If you look at advertisers who are doing direct response, what they find is that the most efficient frequency is often lower. If you’re looking at kind of like brand familiarity metrics, it does cap out after a certain time because at some point you'll become familiar. There’s kind of nowhere to go from there.

The point is that if you’re going to be advertising, you need to be frequency capping. You need to be paying attention to where your frequency is going and make sure that it is optimized for your audience.
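[Editor's note: The frequency-capping discipline Tim describes can be sketched as a simple per-user impression counter on the serving side. This is a hypothetical structure for illustration, not any particular ad server's API.]

```python
# A minimal per-user frequency cap: stop serving an ad to a user
# once they have already seen it `cap` times.

from collections import defaultdict

class FrequencyCap:
    def __init__(self, cap):
        self.cap = cap
        self.seen = defaultdict(int)  # user_id -> impressions served so far

    def should_serve(self, user_id):
        """Serve the ad only if this user is still under the cap."""
        if self.seen[user_id] >= self.cap:
            return False
        self.seen[user_id] += 1
        return True

# With a cap of 4, a user gets four impressions and no more.
fc = FrequencyCap(cap=4)
served = [fc.should_serve("user-123") for _ in range(6)]
print(served)  # [True, True, True, True, False, False]
```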

Stefan Tornquist: All right. Let’s go on to slide number 16 and take a look. Here’s an example of a very intelligent piece of behavioral targeting. I believe this was served to you personally, wasn’t it Tim?

Tim McAtee: Yes, yes. Cookie to my computer.

Stefan Tornquist: Obviously with good effect. So, dayparting by day of the week for response. Obviously, this is something where time and geography need to work hand in hand. One of my first questions would be: Is geo-targeting now fairly effective? In the old days, 30% of the population appeared to live in Reston, Virginia, because AOL was there – have we moved past that?

Tim McAtee: Yes and no. Geo-targeting is definitely technologically a lot better than it used to be. The success rate for actually finding and figuring out where people are is pretty high. I don’t have the exact number, but I want to say it’s in the 80% to 90% range in terms of accuracy.

You are still going to get instances where your ISP is based out of Saskatchewan or something and you’re in Miami but, for the most part, that’s not going to happen. Especially if you are very purposely choosing to advertise on a site, like a Digital City, or a Metromix, or something where it is sort of location-based, and you know where these people are in the country. So, it isn’t that hard to serve an ad like this on a Metromix that – what seven hours from now – do the math – is that advertising at 9:00 a.m.?

Stefan Tornquist: It might be 2 in the afternoon on a Wednesday.

Tim McAtee: The point being that you can definitely do this. The question, I think, is more why and when would you do this? If you look at the graph on the right, it actually breaks out by day, overall – and not just for beer. These are kind of unrelated to beer but just overall, according to Advertising.com, when people click on an ad, which is in blue, versus when people convert, which is in beige there.

What we see is that people sort of allow their curiosity to get the better of them on Saturdays and Sundays, and they’re a lot more likely to click. It is kind of a weird phenomenon there. Then during the week, when it’s more about business, you see that spike – Monday, Tuesday, Wednesday – that’s their getting-stuff-done time, so that’s when people actually go out and shop during their lunch break from work, and that’s when you’re going to see the conversion happening.

It is important to remember, especially for ecommerce people, or somebody who is relying on something beyond a brand action, to think about sort of what the mindset of these people is. Maybe you spend Monday through Friday putting out branding advertising, saying just how great your brand is. Then Saturday and Sunday is when you hammer home the click now, 20% off sort of discount thing.

Stefan Tornquist: Interesting. Let’s move on to our next slide, 17, in the deck. The chart that we’re looking at here, Analytics Entering the Modern Age, this is of people who are conducting analytics. This is not a look at the whole universe, right?

Tim McAtee: Right. Just to bring that out, there are a lot of people out there who aren’t doing any analytics at all. We did not factor that in. Basically, just breaking out how people are doing analytics here. It’s interesting to see that you get this nice bell curve of non-existent to completely automated, but that the majority of people are right there in the middle.

It’s pretty cool to see that you’ve got 27.8% who are in the mostly automated but not yet integrated across all marketing vehicles category. As an ex-analytics person myself, I’m actually surprised it’s as high as it was.

Stefan Tornquist: I was as well, and that’s very good news because certainly so many companies have spent money on analytics without investing in proper support of it, and without automating processes, which is really something that you’ve absolutely got to do. It’s very easy for a marketing department to get overwhelmed by numbers, especially when you need to get buy-in from folks higher in the organization. If you don’t have a certain degree of automation grabbing those C-level eyeballs on a regular basis, it becomes more and more difficult over time to make that argument for subsequent investment.

OK, let’s go on to slide number 18. This is essentially a look at what happens after we view ads, right?

Tim McAtee: Yeah. One of the most important takeaways for any online marketer is that just because it’s hard to track the effect of an ad doesn’t mean the ad wasn’t effective. So, ad effectiveness studies use surveys to help paint a fuller picture of the true effect of online advertising, but it’s up to analytics departments to figure out how to integrate direct response-tracked metrics – which as we saw from the last slide, we’re sort of just getting good at – with these indirect response or branding metrics. So, only then can online advertising really demonstrate its true worth.

As the data on this slide illustrates, consumers are far more likely to notice an ad and consider a later visit than they are to click on it immediately. Let’s look at the numbers here. 36% say that they often will do further research as a result of seeing or hearing an ad. I believe that. I think, in my own experience as a consumer, I definitely fall into line with this where, frankly, I would never click on an ad. But if I notice an ad, I will absolutely follow up via search, ask a friend, that sort of thing. How about you?

Stefan Tornquist: Well, that’s exactly right. We’ve heard from, and seen in, a number of studies that when you spend a lot of money on display advertising, you see a bump in search and a bump in email subscriptions. Those effects are there, but they’re further downstream. They’re not coming from the click.

OK, let’s take a look at the next couple of slides, and then we’re going to go to your questions. On slide number 19, we wanted to thank you all for your attendance by offering a $100 discount on the Online Advertising Handbook with 2008 Benchmarks, but to get that discount, make sure you use the URL within this slide.

On slide number 20, here’s a quick description of the Landing Page Optimization Summit taking place in Florida on June 2nd and 3rd. As any Sherpa reader knows, landing pages are the fastest and least expensive way to increase your sales, and optimizing them is the entire topic of this intensive course. You’ll come out of it not just with a certification in landing page analysis, but also with some very practical next steps for your landing pages. So, I encourage you to join us in Florida.

All right, let’s take a few minutes and answer some questions from the audience. First up, how do you increase conversion rates and drive the average revenue per user online?

Tim McAtee: Well, the biggest way to do that is by decreasing media waste. I think we’re all familiar with John Wanamaker’s statement that, "Half my advertising is wasted. I don't know which half." Online is supposed to fix that, but in a way you still don’t necessarily know which half of your advertising is wasted. It’s harder to quantify than the initial promise suggested, but it’s a whole lot easier to target than it used to be.

Frankly, if you’re not targeting at this point, you’re missing out. There’s definitely room for that. Beyond targeting, there’s a lot you can do with things like landing page design and upselling – looking at lifetime sales per customer, and really treating conversion as a lifetime relationship instead of just a one-time shot.

Stefan Tornquist: Fair enough. All right, here’s a good one: How effective and relevant is online advertising for targeting the C-level audience for B-to-B services? The question here is about B-to-B services – management, consulting, and so forth – but obviously it can be extended to how good online advertising is for targeting anyone at the C-level, or any other niche.

Tim McAtee: Advertising is just as effective among a C-level audience, assuming that it’s good, and it resonates, and speaks to them, as it is for any other audience. Just because they’re a CEO doesn’t mean they’re immune to being swayed by your messaging. The hard part is going to be targeting advertising to them with a minimum of wasted impressions. That’s where things like newsletters and social networks, and data matching, and other advanced forms of targeting come in. I have to imagine that somewhere out there is the CEO Facebook club or something where you can go and, literally, just advertise to CEOs.

All facetiousness aside, it is definitely possible, if you do your homework, to get in front of these people. Yes, it might be a $70 CPM, but if you’re selling nuclear reactors, then it’s probably worth it.
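That "$70 CPM but probably worth it" claim can be sanity-checked with a little expected-value arithmetic. A minimal sketch, where every figure (campaign size, lead rate, close rate, deal value) is a hypothetical illustration rather than data from the study:

```python
# Hypothetical niche C-level campaign: all numbers below are illustrative assumptions.
cpm = 70.0              # cost per 1,000 impressions, as mentioned in the transcript
impressions = 50_000    # assumed size of a tightly targeted buy
media_cost = cpm * impressions / 1000          # $3,500 total media cost

lead_rate = 0.0002      # assumed: 1 qualified lead per 5,000 impressions
close_rate = 0.10       # assumed: 1 in 10 leads closes
deal_value = 1_000_000  # a high-ticket B-to-B sale (e.g., heavy industrial equipment)

expected_revenue = impressions * lead_rate * close_rate * deal_value
print(f"media cost ${media_cost:,.0f}, expected revenue ${expected_revenue:,.0f}")
```

Even one closed deal dwarfs the media cost at these assumptions, which is the point Tim is making: a high CPM is irrelevant when the value per conversion is high enough.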

Stefan Tornquist: Good answer. All right, our last question is what is the best measure of the effectiveness of online advertising? What’s the best way to prove ROI? I sort of picked this one because it’s a softball, but it really emphasizes a point that underscores this whole presentation.

Tim McAtee: Well, I think the best measure is: Are you making money? Tests like matched market tests are one way to look objectively at the effects of advertising, but there’s an opportunity cost to running them. Ad effectiveness studies are also a good way to evaluate, by looking at proxy metrics such as brand favorability in a control-versus-exposed audience, without incurring too much opportunity cost. It’s really important to differentiate between your return objective and your return on investment when you’re dealing with proxy metrics.

Just because you’re increasing your brand favorability by 10%, what does that really mean for your ROI? To some extent, you have to take these things on faith. You have to believe that if 10,000 more people know about your product, then X% of those people are going to be more likely to buy it. Then do that math and figure out whether it’s worth it on a per-user basis.
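The "do that math" step above can be sketched in a few lines. All figures here are hypothetical stand-ins for the X% Tim leaves open, not numbers from the research:

```python
# Translating a brand-awareness proxy metric into a per-user ROI estimate.
# Every input below is an illustrative assumption.
reached = 10_000          # additional people now aware of the product (from the transcript)
buy_lift = 0.02           # assumed: 2% of newly aware people eventually buy
margin_per_sale = 40.0    # assumed contribution margin per sale
campaign_cost = 5_000.0   # assumed media cost of the campaign

expected_margin = reached * buy_lift * margin_per_sale   # 200 sales x $40 = $8,000
roi = (expected_margin - campaign_cost) / campaign_cost  # ($8,000 - $5,000) / $5,000
cost_per_user = campaign_cost / reached                  # spend per person reached
print(f"ROI {roi:.0%}, cost per user ${cost_per_user:.2f}")
```

The leap of faith lives entirely in `buy_lift`: the arithmetic is trivial, but the conversion assumption behind it is what you have to defend.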

When is it good enough? When do you know when to quit? I’m not sure there is a “good enough.” That’s what we do as marketers. We’re always trying to improve and beat our old scores.

Stefan Tornquist: Well, I think that’s a great point on which to end our presentation today. Thank you all very much for attending and, Tim, thanks for your insights.

Useful links related to this article

New Research From Sherpa: 577 Marketers Reveal Top Strategies to Improve Online Ads -- 5 Data Charts & PDF Download
http://www.marketingsherpa.com/article.php?ident=30409

Comments about this Event Presentation

May 14, 2008 - Helene of The Modern Woman's Divorce Guide says:
Thank you for posting the teleseminar. It was very worthwhile.


