by David Kirkpatrick, Senior Reporter
Two weeks ago, we hosted the second annual MarketingSherpa and MarketingExperiments Optimization Summit in Denver, June 11-14. The event featured two days of case studies and how-to training, bookended by MECLABS workshops.
Attendees were presented with a variety of material on testing and optimization programs, including case studies from marketing practitioners, intimate roundtable discussions, breakout sessions on high-level optimization topics, and presentations that featured data points and actionable advice.
To provide MarketingSherpa readers with a sense of what transpired at Optimization Summit 2012, here are five takeaways from the event.
Takeaway #1. Marketers aren’t taking full advantage of optimization
Dr. Flint McGlaughlin, Managing Director, MECLABS (the parent company of MarketingSherpa), opened Optimization Summit 2012 with a number of slides featuring new research from the upcoming Website Optimization Benchmark Report, which will report on the current state of optimization.
He offered a number of charts with the latest data points from MarketingSherpa research.
These charts told a very interesting story. On one hand, 70% of surveyed marketers said they used lessons learned from website optimization to inform offline, or other, marketing initiatives. This suggests that optimization is not only an effective way to improve website performance, but also provides insights into broader, transferable principles that can help improve an overall marketing effort.
And, marketers are very interested in optimization, with 89% reporting that testing and optimization is a priority in 2012.
Given these positive results from the research, why did McGlaughlin also report that these organizations are not taking full advantage of their optimization campaigns? The reason is a majority -- 53% -- said they could not calculate return on investment (ROI) for their optimization efforts.
For those marketers who did calculate ROI, 38% reported a positive ROI, 6% reported no ROI, and a mere 3% reported a negative ROI.
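For marketers in that 53% who cannot calculate it, the basic ROI arithmetic is straightforward. Here is a minimal sketch; the dollar figures are hypothetical, not from the MarketingSherpa survey:

```python
def optimization_roi(incremental_revenue, program_cost):
    """Return ROI as a fraction: (gain - cost) / cost."""
    return (incremental_revenue - program_cost) / program_cost

# Hypothetical example: a testing program costing $20,000 that drove
# $50,000 in incremental revenue yields an ROI of 150%.
roi = optimization_roi(50_000, 20_000)
print(f"ROI: {roi:.0%}")  # ROI: 150%
```

The hard part in practice is not the formula but the measurement: isolating the incremental revenue that testing actually produced.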
McGlaughlin said, "In the research experiments that we field tested this year, there were hundreds of millions of dollars generated from the applications of heuristics. I can tell you some in specific right to the amount. Huge amounts of money. It can be done. But, most of us can’t even measure well enough to determine how well our optimization efforts are working."
Takeaway #2. Understand how to gain buy-in from leadership at your company
For the morning keynote of day two, Kristin Zhivago, President, Zhivago Management Partners, spoke on the importance of internal buy-in to implement a new testing and optimization program at an organization. To offer some practical advice on achieving that buy-in, she explained the attributes of seven different leadership personas.
Zhivago also suggested that marketers take a customer-centric approach to marketing efforts, and even take the time to interview customers to learn what they think about your product, service, company and even the competition.
She said by doing this, marketers set themselves up as the "voice of the customer" and the person in the company with the most knowledge of what the customer actually thinks.
Taking this step goes a long way toward being a trusted advisor internally, as well as a team member with some clout to get a new optimization program in place.
The next step is to understand the "functional persona" of the CEO or other leadership member.
Zhivago broke leaders down into seven different personas, and offered some characteristics and an example of each persona:
The key characteristics of a "sales CEO" are being competitive and controlling, but also easily influenced. The best tools to keep their interest are stories backed up by stats or data points that keep the executive excited about the material being presented.
An example of a "sales CEO" is Microsoft’s Steve Ballmer.
As the title implies, the key characteristics of a "technical CEO" include being logical, inclusive and very process oriented. To grab this leader’s attention, present empirical evidence that is supported by stats.
Tony Hsieh, CEO, Zappos, is an example of a "technical CEO."
"Finance CEOs" have two main traits: They can be elitist or exclusionary, and they don’t excite easily. To grab and keep their interest, gather as much statistical data as possible and make sure every touch point is covered.
An example of a "finance CEO" is Meg Whitman at HP.
"Legal CEOs" have the ability to see both sides of an issue and weigh the various options, but they are weak on processes. To reach them, provide an abundance of empirical evidence. Think numbers, numbers, numbers.
An example of a "legal CEO" is Frank Blake, CEO, The Home Depot.
The main characteristic of a "marketing CEO" is that they are customer-driven visionaries. The best way to gain their interest is through stories that are backed with facts and forward-thinking.
The late Steve Jobs of Apple is an example of a "marketing CEO."
"Operations CEOs" focus completely on process. They can be reached by presenting systems that work, and also by providing a method for moving forward.
Amazon’s Jeff Bezos exemplifies an "operations CEO."
A "serial entrepreneur's" behavior is influenced by their background, and a key characteristic is that they are always moving. The way to influence a serial entrepreneur is through aggressive pro-action.
Reid Hoffman, Co-founder and Executive Chairman, LinkedIn, is an example of a serial entrepreneur.
Takeaway #3. Nonprofit marketers can benefit from optimization beyond just conversion
One attendee from last year's Summit, Tim Kachuriak, Chief Innovation and Optimization Officer, Next After, is a nonprofit marketer. He said at last year's event, one of his landing pages was used for live optimization. He furiously took notes on the suggestions from the audience, took those suggestions back to the office, and began testing them. The result was a conversion lift on that particular page.
Given this success, Kachuriak presented a case study at this year’s Summit on testing the call-to-action on a Washington D.C. political think tank’s website.
The results of this optimization cycle were impressive:
- 74% lift in donor conversion rate
- 189% increase in average gift
- 274% increase in revenue
What Kachuriak found most interesting in these key performance indicators was that second figure.
For a typical B2B or consumer product or service, improving conversion to sale means more sales -- a very worthy goal.
But, for the nonprofit marketer, increasing the conversion to donor is only part of the optimization process. Once someone decides to become a donor, they then have to determine the size of the gift.
Kachuriak realized the optimization process should include this second level, and through this case study, he proved the size of donation could be affected by a testing and optimization cycle.
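Because revenue equals the number of donors times the average gift, improvements at the two levels compound multiplicatively. A minimal sketch with hypothetical numbers (not the case-study figures):

```python
def revenue_lift(conversion_lift, avg_gift_lift):
    """Combined revenue lift when donor conversion rate and average
    gift both improve: the two lifts compound multiplicatively."""
    return (1 + conversion_lift) * (1 + avg_gift_lift) - 1

# Hypothetical: a 50% lift in donor conversion combined with a 40%
# lift in average gift compounds to a 110% lift in total revenue.
print(f"{revenue_lift(0.50, 0.40):.0%}")  # 110%
```

This compounding is why optimizing only for conversion leaves money on the table for nonprofits: the gift-size lever multiplies whatever the conversion lever achieves.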
He said, "One of the things I have come to discover is that the MECLABS methodology may prove to have more transformational impact for nonprofit organizations than it does for for-profit organizations. This is because a donation doesn't have a fixed price, where a product does."
Kachuriak continued, "So, if I can convey a stronger value proposition -- and I can communicate that value proposition with such tremendous force -- then I will not only compel more people to give, I will compel more people to give higher dollar donations. This has the potential to change the game for nonprofit organizations that will suddenly have the resources they need to multiply their impact. That means more clean water, more food for starving children, and more missions around the world."
Takeaway #4. Plan, set goals and form a hypothesis to maximize your optimization efforts
In one of the many sessions featuring insight and wisdom from MECLABS researchers, Adam Lapp, Associate Director of Optimization and Strategy, and Tony Doty, Senior Research Manager, posed three questions:
- Where to test?
- What to test?
- What to ask?
To determine where to test, there are a series of steps to take:
1. Identify key pages in your website funnel
This means identifying the key pages according to traffic and business impact, determining the primary goals of these key pages, analyzing the traffic sources for the pages, and understanding the visitor's experience on these pages.
2. Map primary conversion funnels
This begins with funnel messaging -- what messages prospects are provided at different stages in the website funnel.
Analyze funnel performance to find out where leaks might be happening between the original landing page through to the checkout process, or buying page.
3. Estimate conversion impact of each leak
At this stage, it’s important to consider the basic elements of the leaking step in the website funnel -- motivation, clarity of the value proposition, incentive, friction and anxiety.
4. Estimate IT impact of fixing each leak
How long will it take to develop and set up each test?
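The funnel analysis in steps 2 and 3 can be sketched in a few lines of code. The step names and visitor counts below are illustrative assumptions, not MECLABS data:

```python
# Hypothetical funnel counts: visitors reaching each step.
funnel = [
    ("landing page", 10_000),
    ("product page", 4_000),
    ("cart", 1_200),
    ("checkout complete", 600),
]

# Step-to-step conversion rates reveal where the biggest leaks are.
for (step, n), (next_step, next_n) in zip(funnel, funnel[1:]):
    rate = next_n / n
    print(f"{step} -> {next_step}: {rate:.0%} ({n - next_n:,} visitors lost)")
```

Here the product-page-to-cart step converts worst (30%), so it would be the first candidate for testing -- weighed, per step 4, against how expensive each fix is to develop.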
Once you have determined where to test -- leaks in conversion through the website pipeline in this example -- the next question to answer is "what to test?"
For this, Lapp and Doty went back to those basic elements listed in step 3 above and provided some questions to ask when determining what to test:
- Motivation: Where are the visitors coming from? What do we know about them? Their needs? How are they engaging with the page? Most importantly, do the message and objective match the visitor’s motivation?
- Clarity of the Value Prop: Within the first several seconds, what values are communicated to the visitor? Is there enough perceived value to retain the visitor’s attention? How are you differentiating from the competition? Is the value prop supported by all the page elements, such as navigation, color, images, etc.?
- Friction Elements: What on the page causes difficulty, confusion or annoyance for the visitor? Are the page, process or form fields too long? Is the eye-path disorganized, or is there a clear thought sequence guiding the visitor to the objective? Are there too many things going on at once, or is the page simple with clear objectives?
- Incentive to Take Action: An effective test plan will optimize for other elements first, then seek additional improvement -- that is, squeeze the last few percentage points of conversion increase -- with the incentive. Is the incentive relevant to the visitor’s motivations and page offering? What is the return on investment for the incentive? Does it have enough appeal to overcome negative forces of friction on the page?
- Anxiety Elements: What fears or concerns might the visitor be experiencing? Are your corrective measures (e.g., guarantee, security seal, testimonial) placed in close proximity to the concern? Do your corrective measures specifically address the visitor’s anxiety? How irrational is the visitor’s fear or concern?
When it comes to what to ask, this is where a testing hypothesis and research question are created. The hypothesis consists of two basic parts: the "assumption," or statement about a website variable you believe to be true, and the "argument," which is the test you set up to prove, or disprove, your assumption.
MarketingExperiments defines a research question as, "A behavioral question expressed in a factorial split so it must begin with 'which.'"
This means, "What is the best price for product X?" does not form a research question. But, "Which of these three price points produces the most revenue for product X?" is an example of a research question.
A subtle difference, but the second example offers testable elements that can be answered by A/B testing, and not just a broad-based question on price.
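That "which" question maps directly onto an A/B/n test. Here is a hedged sketch of how the three-price-point research question might be evaluated; the prices and counts are hypothetical, and a real analysis would also apply a statistical significance test before declaring a winner:

```python
# Hypothetical A/B/C test results for three price points.
variants = {
    "$19": {"visitors": 5_000, "sales": 400},
    "$29": {"visitors": 5_000, "sales": 310},
    "$39": {"visitors": 5_000, "sales": 220},
}

def revenue_per_visitor(price_label, stats):
    """Revenue per visitor: price times conversion rate."""
    price = float(price_label.lstrip("$"))
    return price * stats["sales"] / stats["visitors"]

best = max(variants, key=lambda p: revenue_per_visitor(p, variants[p]))
for p, s in variants.items():
    print(f"{p}: ${revenue_per_visitor(p, s):.2f} revenue per visitor")
print(f"Winner: {best}")
```

Note that in this made-up data the middle price wins: the lowest price converts best but earns less per visitor, which is exactly why the research question asks about revenue rather than conversion alone.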
Takeaway #5. Event Marketing: Roundtable discussions actively promote interaction and engagement
For the most part, these takeaways have been about the content presented at this year's Optimization Summit. This one features a lesson we learned as event marketers, and it might be useful for other event marketers.
We host a number of events each year at MarketingSherpa, and we are always looking for something new and interesting to offer attendees.
At Optimization Summit 2012, we offered a new type of session. In a side room, we placed ten round tables spread out around the room, added plenty of easy-to-rearrange chairs, and positioned one topic expert at each of the round tables.
For the session, the entire audience was allowed to go to the room, choose a topic expert of their choice, and listen to, and interact with, the expert and other attendees sitting at the table.
Every fifteen minutes, a musical chime signaled that audience members could get up and listen to a new expert, or stay put for more discussion.
The session was very successful, and post-session comments included how much the audience enjoyed the chance to interact one-on-one with the subject matter experts, many of whom had either already presented, or were going to present, from the main stage during the event.
Audience members also enjoyed the chance to interact with other attendees on specific topics in a relaxed but still somewhat controlled format.
If anything, we learned we could easily have set up five more tables with topic experts, because this event marketing idea really took off.
Useful links related to this article:
- Tim Kachuriak's blog
- Zhivago Management Partners

Related Resources:
- The State of Website Optimization in 2012
- A/B Testing: Why don't companies track ROI of testing and optimization programs?
- Gaining Business Leader Buy-in: 7 CEO personas
- Nonprofit Marketing: How a long, ugly page generated 274% more revenue
- Event Recap: Notes from the Optimization Summit 2012 roundtable sessions
- Optimization Summit 2012
- Cabela's shares tactics from 51 years of offline testing, 7 years of digital testing
- Page Optimization: Radical redesign leads to 3,566% increase in conversion
- The Boston Globe: An inside look at launching a paid content site