June 18, 2010
Article

Lead Nurturing and Management Q&A: How to Handle 5 Key Challenges

SUMMARY: A lead nurturing and management strategy is always a work-in-progress. And the inevitable challenges and questions you face along the way are great opportunities to improve your results.

This Q&A addresses five key challenges raised by the audience during a recent MarketingSherpa and Eloqua webinar on lead management optimization. Whether you’re just getting started, or an experienced marketer looking to improve your strategy, take a look at these recommendations from MarketingSherpa, MarketingExperiments, InTouch and Eloqua.

Last week, MarketingSherpa and Eloqua presented a webinar on optimizing the lead nurturing and lead management process. Jennifer Horton, Best Practice Consultant, Eloqua, and Sean Donahue, Editor, MarketingSherpa, tackled some of the major questions marketers typically face when developing and optimizing a lead management strategy.

During the event, the audience responded with several of their own specific challenges -- but time constraints prevented us from addressing all of them.

So we’ve compiled this Q&A to answer some of the most interesting questions raised by B2B marketers about:
o Prospect profile development
o Collaborating with sales
o Measuring campaign results
o Specific nurturing tactics
o Testing automated email

Along with Jennifer Horton and Sean Donahue, we tapped experts at MarketingSherpa’s sister companies, MarketingExperiments and InTouch, to deliver advice that can help any B2B marketer wrestling with similar challenges. (We’ll also be answering more audience questions in an upcoming blog post from Eloqua.)

Here are recommendations for handling five common lead nurturing and lead management challenges:

Question #1. What are the most effective ways of developing a progressive prospect profile? Implicit information gathering or explicit?

Both tactics are effective ways to generate prospect profiles, so you might want to use both in your nurturing efforts.

By monitoring prospects’ actions and behavior over time, you can gain insights into their roles and potential stage in the buying process.

For example, a prospect who downloads a technical whitepaper and then comes back to download detailed product data sheets or evaluation versions of software is likely to be a technical user or implementer who's moving deeper into the consideration stage.
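
As a toy illustration (the content types, rules and stage labels here are invented for this example, not taken from any platform), implicit profiling often boils down to mapping observed actions to inferred attributes:

```python
# Toy illustration of implicit profiling: mapping observed content
# interactions to an inferred role and buying stage.
# The content types, rules and labels are invented for this example.

ROLE_HINTS = {
    "technical_whitepaper": "technical user / implementer",
    "product_datasheet": "technical user / implementer",
    "roi_calculator": "economic buyer",
}

STAGE_ORDER = ["awareness", "consideration", "evaluation"]
STAGE_HINTS = {
    "technical_whitepaper": "awareness",
    "product_datasheet": "consideration",
    "software_evaluation": "evaluation",
}


def infer_profile(downloads: list) -> dict:
    """Infer a likely role and the deepest buying stage seen so far."""
    roles = {ROLE_HINTS[d] for d in downloads if d in ROLE_HINTS}
    stages = [STAGE_HINTS[d] for d in downloads if d in STAGE_HINTS]
    deepest = max(stages, key=STAGE_ORDER.index) if stages else "unknown"
    return {"likely_roles": roles, "stage": deepest}


print(infer_profile(["technical_whitepaper", "software_evaluation"]))
# -> {'likely_roles': {'technical user / implementer'}, 'stage': 'evaluation'}
```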

At the same time, you can be more explicit about profile development by using registration forms that ask prospects to volunteer more information about themselves as they interact with more of your offers:

- You might only require basic contact information, such as name and email address, for a new prospect that is not yet in your database.

- Then, each time they respond to another offer or return to download higher-value content, you can use dynamic registration forms that pre-fill fields for which you already have information, but ask them to answer more qualifying questions such as:
o Industry
o Company name or size
o Role within the company
o Potential budget or timeframe for a purchase
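
To make the mechanics concrete, here is a minimal sketch of that dynamic-form logic in Python. The field names, the question limit and the known_profile structure are illustrative assumptions, not features of any particular marketing automation platform:

```python
# Minimal sketch of progressive-profiling form logic.
# Field names, question limit and profile structure are hypothetical.

# Qualifying fields, ordered roughly from least to most sensitive
PROFILE_FIELDS = [
    "name",
    "email",
    "industry",
    "company_name",
    "company_size",
    "role",
    "budget",
    "purchase_timeframe",
]

MAX_NEW_QUESTIONS = 2  # keep each registration form short


def build_form(known_profile: dict) -> dict:
    """Pre-fill fields you already have; ask only a few new questions."""
    prefill = {f: known_profile[f] for f in PROFILE_FIELDS if f in known_profile}
    missing = [f for f in PROFILE_FIELDS if f not in known_profile]
    return {"prefill": prefill, "ask": missing[:MAX_NEW_QUESTIONS]}


# A returning prospect who has already given name and email address
print(build_form({"name": "Pat Example", "email": "pat@example.com"}))
# -> pre-fills name and email; asks for industry and company_name next
```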

Before choosing your approach -- or deciding when to use implicit profiling vs. explicit profiling -- consider how you will use the data.

Jennifer Horton says she’s seen implicit profiling yield better results when marketers use the data to target specific promotions.

However, if you want to use the data to personalize communications, explicit profiling may be more comfortable for the prospective buyer.

Question #2. When calling a prospect and getting voice mail, do you think sending an email is more effective than actually leaving the voice mail?

Adding telephone calls to your mix of lead nurturing tactics is a great idea. But the effectiveness of a voice mail or email message when you don’t reach prospects depends on a number of factors.

We asked for advice from the team at InTouch, which manages teleprospecting, lead nurturing and lead management campaigns.

"It’s not either/or. It’s both," says David Green, Best Practice Leader, InTouch. "Give the prospects choices for the channel they want to use."

Green says you should leave a voice mail message the first time you follow up with a lead by telephone. But you don’t want to leave too many voice mail messages if the prospect does not respond to subsequent calls.

You can also blend your use of voice mail and email messages, says InTouch's Bob Hanson. For example:

- If you have had recent conversations with the prospect, you can leave a voice mail message, followed by an email within a week if they don't respond.

- If you have not spoken with the prospect recently but have had past conversations, you can leave a short voice mail message and follow up with an immediate email that provides more depth.

Question #3. What is your point of view about the use of "nurture" microsites, which take time to get up and running, but more importantly require ongoing resources to maintain?

It comes down to what works best for your team and your prospects.

Jennifer Horton says nurturing microsites can be a powerful tool for engaging prospective buyers -- especially when you can personalize the sites for deals that require approval from a larger buying committee.

But you should evaluate the resources you have to work with and choose a communication strategy that is manageable and trackable for your team.

If the time and resources needed to maintain nurturing microsites outweigh the value you receive from them -- or are more than your team can realistically keep optimized -- don't let that stop your nurturing process.

"Simple drip email campaigns are a great place to get started and start seeing value," says Horton.

Question #4. How do you properly split-test an email campaign that's usually set up as automated, when clickthrough or open numbers are usually not large enough for a statistically valid result?

We tapped the team of testing experts from MarketingExperiments to help answer this question.

Bob Kemper, Director of Sciences, says there are three primary factors that make a test conclusive:
o Effect size -- the amount of performance difference between emails
o Sample size -- the number of email messages sent
o Significance level -- e.g., 95% level of confidence

When the sample size or other factors aren't sufficient to make a test conclusive, he recommends computing the significance level from the data you have already collected, then deciding whether the result is strong enough to act on or whether to continue the test with another send.
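
To make that concrete, here is a minimal sketch (with made-up send and click numbers) of how you might compute that significance level from data already collected, using a standard two-proportion z-test on clickthrough counts; nothing here is specific to any particular email platform:

```python
import math


def two_proportion_test(clicks_a, sends_a, clicks_b, sends_b):
    """Two-sided z-test for the difference in clickthrough rates."""
    p_a = clicks_a / sends_a
    p_b = clicks_b / sends_b
    # Pooled clickthrough rate under the null hypothesis of no difference
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value


# Hypothetical send: control (A) vs. treatment (B)
p_a, p_b, z, p_value = two_proportion_test(clicks_a=60, sends_a=2000,
                                            clicks_b=82, sends_b=2000)
print(f"CTR A={p_a:.2%}, CTR B={p_b:.2%}, z={z:.2f}, p={p_value:.3f}")
# If p > 0.05 (i.e., below the 95% confidence level), the test is not yet
# conclusive, and you could decide to continue with another send.
```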

But he also warns that marketers often misunderstand the goal of testing:

"Too frequently the objective of conducting experimental testing is misconstrued as ‘To prove that there is a significant difference between the control and the experimental treatments,’" he says. "The true objective is, ‘To determine whether there is a significant difference between the control and the experimental treatments.’"

A marketer's assumption that there is a sample size problem often reflects this misunderstanding.

You may not have control over the sample size for your test -- for example, you’re limited to the number of addresses on your house list.

In these cases, you should design tests that focus on the "effect size," such as the difference in clickthrough rate needed to show a significant difference between the two emails.

"If the actual effect size measured is smaller than that required to achieve your desired significance level, then you are suffering not from a small sample size but rather from a small effect size," says Kemper.

Your analysis should then focus on the effect size -- for example, whether the change in clickthrough rates was simply too small to conclude that there is a real difference between the two email treatments.
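
As a rough, hypothetical sketch (assuming a normal approximation, an evenly split house list and an illustrative baseline clickthrough rate), you can estimate the smallest clickthrough-rate difference a list of your size could ever declare significant at the 95% confidence level:

```python
import math


def min_detectable_difference(baseline_ctr, sends_per_arm, z_alpha=1.96):
    """Smallest clickthrough-rate difference that a send of this size could
    declare significant at roughly the 95% confidence level
    (normal approximation, two equal-sized groups)."""
    se = math.sqrt(2 * baseline_ctr * (1 - baseline_ctr) / sends_per_arm)
    return z_alpha * se


# Hypothetical house list split into two arms of 2,000 addresses,
# with a baseline clickthrough rate around 3%
delta = min_detectable_difference(baseline_ctr=0.03, sends_per_arm=2000)
print(f"Observed CTR difference needed for 95% confidence: about {delta:.2%}")
# If the lift you realistically expect is smaller than this, the problem is
# effect size, not sample size -- test bolder changes or accept the limit.
```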

This said, if you don't quite reach your goal for statistical significance with a test, you have two options:

1. Accept the results and decide that there is not enough difference between the two emails to draw a conclusion. At that point, more testing is needed.

2. Accept that the effect size came close enough to your significance goal, decide that the results do show a difference, and proceed with the more effective email treatment.

Question #5. Our most difficult challenge is tracking the lead from source to close. We currently use a CRM system, so all data, including timestamps, is recorded. Do you have any tips on processing and reporting on such data to learn which campaigns work?

Great question, and one of the biggest challenges marketers face. Jennifer Horton says to think about the problem this way:

"The key is to associate specific campaigns (where the 'I' in ROI lives) to people, and people to opportunities (where the 'R' in ROI lives)."

She recommends using a Campaign Object in your CRM system to track all campaign activity for each individual in your database:

- Track the original "Contact Source" -- i.e., the original source that produced the name in your database -- so you can see where the organic growth of your database comes from.

- Then, track the "Opportunity Source" -- i.e., the most recent campaign associated with the individual that triggered the lead's transition from the marketing cycle to the sales cycle -- to determine which campaign types are tipping points in your sales cycle.

- Many clients also tie the original "Contact Source" to the opportunity record so they can analyze originating campaigns alongside "tipping point" campaigns and discover which campaigns are influencing sales-ready opportunities.

For closed/won opportunities, analyze all of the campaigns that influenced the contacts associated with those opportunities. With a campaign object in place, you should be able to do so.
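
For illustration only (the field names and the flat export below are assumptions, and the exact objects depend on your CRM), here is a small Python sketch of the kind of roll-up a campaign object makes possible once campaign sources are exported alongside closed/won opportunities:

```python
from collections import Counter

# Hypothetical export of closed/won opportunities from a CRM, each carrying
# the contact's original source and the campaign that triggered the hand-off
# to sales (the "opportunity source").
closed_won = [
    {"opportunity_id": 101, "contact_source": "Webinar",    "opportunity_source": "Free Trial"},
    {"opportunity_id": 102, "contact_source": "Whitepaper", "opportunity_source": "Demo Request"},
    {"opportunity_id": 103, "contact_source": "Webinar",    "opportunity_source": "Demo Request"},
]

originating = Counter(o["contact_source"] for o in closed_won)
tipping_point = Counter(o["opportunity_source"] for o in closed_won)

print("Originating campaigns:", originating.most_common())
print("Tipping-point campaigns:", tipping_point.most_common())
# Pair these counts with campaign cost to connect the "I" and the "R" in ROI.
```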

Useful links related to this article

Eloqua

InTouch

MarketingExperiments

