The Role of Testing in B-to-B Marketing

Thursday, Apr 6, 2017 | Tags: testing

Each little interaction in B-to-B marketing is a promise that the company makes to the prospective buyer. Every promise that is fulfilled builds trust with an individual and across an organization.

The role of marketing in B-to-B is to build incremental trust ahead of and during a sales cycle by keeping promises:

  • The snippet in the search results is a promise that your product (or podcast, or blog post) will help the buyer answer the problem that initiated the search.

  • The email notification that you published a new blog post is a promise that reading will be worth the buyer’s time.

  • The copy on a webinar landing page is a promise that the buyer will learn something worth sacrificing an hour of their time.

  • The individual fields in your request-a-demo form are promises that you will use that information to give the buyer a better demo.

The role of testing is to make sure that you are making the right promises. Testing anything less ambitious is setting your B-to-B program up for failure. And failure is a real danger: all you need to do to fail is follow the best-practice advice of ecommerce conversion rate optimization.

The design of your test program will make the difference between success and failure with A/B testing. But the testing role in B-to-B shouldn’t be limited to A/B testing in the browser. Qualitative feedback and strategic A/B testing are tactics that can multiply the impact of your B-to-B testing program.

A/B Testing

A/B testing in B-to-B environments is a challenge:

  • You have lower traffic so it takes longer to get a significant result.

  • The final conversion doesn’t happen on your website.

  • The final conversion occurs several months after the first interaction with your website, so it is hard to tell whether a conversion early in the buyer’s journey has an effect on later sales.

  • You can have several people involved in the final conversion who interact with online and offline sales and marketing in a variety of ways.

Despite these many challenges, you can build a testing program that uses A/B testing to powerful effect through good program design.

Simplicity

You will have to compensate for the complexity of your conversion with simplicity in your testing program.

The first rule for B-to-B testing is to stick to A/B testing. Multivariate testing looks awesome in theory, but it will take 20 years to get a significant result at B-to-B traffic levels.

Even when you get a significant result, you are still just testing for the conversion that you can see rather than the actual revenue conversion.

Keep it simple and get quick results so you can book improvements and launch new tests soon after.

Test Your Fundamental Assumptions About the Customer

Testing the fundamentals is an important corollary to keeping your testing program simple.

The button-color warriors of ecommerce testing can get away with testing every minute detail because tiny lifts in conversion can result in immediate lifts in revenue that have a material impact.

In B-to-B testing, it will take 10 years to get a significant result if you’re going after a 0.2 percent lift. The obvious solution to this problem is to test the big stuff.

Are your prospects better motivated by avoiding pain? Or by the recognition they’ll get if your solution makes them a success?

Are your prospects better motivated by the contents of the webinar? Or the outcomes of the webinar?

Test your underlying assumptions and target gains of 20 percent, 50 percent, or more. And, in doing so, you should expect significant results in about a month.
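
To make the timing concrete, here’s a rough sketch (my own illustration, not a prescription) that estimates test duration with a standard two-proportion sample-size formula. The 3 percent baseline conversion rate and 1,000 visitors per week are hypothetical:

```python
# Rough estimate of how long an A/B test takes at B-to-B traffic levels.
# All inputs below are hypothetical assumptions for illustration.
from math import sqrt, ceil
from scipy.stats import norm

def weeks_to_significance(baseline, relative_lift, visitors_per_week,
                          alpha=0.05, power=0.80):
    """Weeks needed for a two-variant test to reach the required sample size."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    # Standard sample size per variant for a two-proportion z-test.
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(2 * n / visitors_per_week)

# Hypothetical 3% baseline conversion and 1,000 visitors per week:
print(weeks_to_significance(0.03, 0.02, 1000))  # chasing a 2% relative lift: decades
print(weeks_to_significance(0.03, 0.50, 1000))  # chasing a 50% relative lift: weeks
```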

You will know that you are doing this right if you are learning lessons about your customers that don’t just help you convert them at each micro-conversion of the buying journey, but also help you better communicate with the buyer throughout the journey.

Collect Assumptions

In order to challenge your assumptions, you’re going to need to know what those assumptions were.

An outsider’s perspective is valuable because the outsider can challenge assumptions that the people involved in producing an individual piece of marketing collateral may have missed.

However, as an outsider you’ll very quickly find the patterns that give big gains when you test. By all means keep testing these patterns, but your testing program will stagnate if you rely on just the patterns that you know work well.

The individuals involved in the original production should also have a deep understanding of what thoughts, disagreements, and compromises went into the content. These people too are a valuable source of assumptions that you can test.

If you are the outsider running tests, then you want the whole sales and marketing team to know that you want to test assumptions, and you should make it easy for them to communicate the assumptions and compromises they made to complete the content.

Choosing What Assumptions to Test First

With feedback coming from throughout the marketing department, you’ll want a fair and transparent way to set priorities. Anything less and you run the risk of turning others off your testing program and losing their contributions.

While you can’t A/B test easily against sales, you can test against each promise—each micro-conversion. And with clear funnel definitions, you can use historical data to calculate the value of each micro-conversion.

With the value of a micro-conversion, a testable hypothesis, a little bit of analytics data, and an estimate for the lift you expect from the test, you can do a quick pro-forma analysis to calculate the expected revenue impact of each test.

While this can be manipulated by biasing your estimates, even a flawed, good-faith estimating process can help you pick the best tests first while giving others transparency into the testing process.
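
As a minimal sketch of that pro-forma exercise, assuming you already have a value per micro-conversion and a good-faith lift estimate for each candidate test (all test names and numbers below are hypothetical):

```python
# Minimal sketch of the pro-forma prioritization described above.
# Every value here is a hypothetical placeholder; plug in your own numbers.

def expected_revenue_impact(monthly_conversions, value_per_conversion,
                            expected_lift):
    """Expected annual revenue impact of a test, given the value of the
    micro-conversion it targets and the lift you think you can achieve."""
    return monthly_conversions * value_per_conversion * expected_lift * 12

# Candidate tests: (name, monthly micro-conversions, value per conversion, expected lift)
candidates = [
    ("Webinar landing page: pain vs. recognition", 80, 150.0, 0.30),
    ("Demo form: drop two fields",                 40, 400.0, 0.15),
    ("Blog subscribe CTA copy",                   200,  12.0, 0.20),
]

# Rank the backlog by expected impact so the prioritization is transparent.
for name, conv, value, lift in sorted(
        candidates, key=lambda c: -expected_revenue_impact(c[1], c[2], c[3])):
    print(f"{name}: ~${expected_revenue_impact(conv, value, lift):,.0f}/yr")
```

Even a crude ranking like this gives the rest of the team a transparent view of why one test runs before another.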

Share Your Lessons

The goal of a B-to-B testing program should be to better understand your buyers, not just to improve conversion rates.

You need to test assumptions in order to get results that are big enough that your tests achieve an acceptable sample size in a reasonable amount of time. But you should also be making valuable discoveries about your buyers.

A B-level testing program helps the testers better understand your buyers. An A-level testing program helps the entire team better understand your buyers.

Using the whole team to source test ideas can help get the team invested in your testing program. Sharing the lessons learned from your best tests both shows the value of their feedback and deepens the team’s understanding of your buyers.

Micro-Conversions

Each promise in the buyer’s journey is a micro-conversion and each micro-conversion can be tested.

Any decently valuable solution that you sell is going to be costly, and the buying decision is going to be complex. You don’t want to wait for a nine-month buying cycle to run its course before evaluating your first tests.

That’s why you need to test against micro-conversions and optimize against each step in the path.

You can’t take the results of these tests to your CFO, say “webinar sign-ups are up by 30 percent,” and expect to get a big bump in the marketing budget. But you can turn around tests a lot more quickly.

The results of these tests may be a little dubious since they aren’t tested against revenue, but they will let you complete more tests and make more micro-improvements up the funnel that will cumulatively be better than waiting for the bottom-line results.

You won’t know whether each test increases sales when you get your initial results, but you can run a retrospective analysis after a couple of sales cycles to check whether micro-conversions are having the expected impact.

The Funnel

One key to strengthening your testing program is to have a well-defined funnel that reflects how buyers develop into customers. The funnel will help you estimate and measure the impact of each test.

Estimating the value of a conversion is a challenge when each conversion lives in isolation. In reality, they are part of a system for building trust, educating the buyer, and making a sale.

By categorizing each conversion by funnel stage, you can more finely tune the values without expensive multi-touch conversion modelling.

Someone following you on Twitter or LinkedIn, or signing up for blog updates doesn’t have the same value as someone who has done all of those things and has started to learn about your product. And they have even less value than someone who is calculating the ROI of your solution.

By having a clear funnel, you can assign values to each stage in the funnel rather than to each conversion. Assigning value to stages better reflects the actual value of a conversion than weighting all conversions equally, and it takes a lot less effort and expense than multi-touch modelling.

With a strong funnel definition in place and a strong understanding of the value at each stage, you can better estimate the value of a test without much change to your marketing stack. Testing this way can also help you better understand and refine your funnel.
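
A minimal sketch of that stage-value calculation, assuming you know your average deal size and have historical counts of how many prospects reached each stage and how many eventually closed (the stage names and figures are hypothetical):

```python
# Sketch: value of a funnel stage = average deal size x historical probability
# that a prospect at that stage eventually closes. All numbers are hypothetical.

average_deal_size = 25_000

# Historical counts: prospects who reached each stage, and how many closed.
stages = {
    "subscriber":     {"reached": 4000, "closed": 40},
    "product_aware":  {"reached": 900,  "closed": 36},
    "evaluating_roi": {"reached": 200,  "closed": 30},
}

stage_values = {
    name: average_deal_size * s["closed"] / s["reached"]
    for name, s in stages.items()
}

for name, value in stage_values.items():
    print(f"{name}: ${value:,.0f} per conversion into this stage")
```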

Retrospective Analysis

Running one test against bottom-line revenue takes too long to be worthwhile when compared against running lots of tests against micro-conversions. But that doesn’t mean that you can’t try to catch results that produce positive micro-conversions but negative revenue.

Once you’ve been running tests long enough to have prospects who’ve been through tested assets and turned into customers, you should look at your conversion data, calculate the expected revenue lift for the improved conversion rate, and compare that with the actual lift.

The data will be messy and you won’t catch every bad result.

But if you’re testing important assumptions, periodically looking at data will let you catch tests that badly underperform in terms of revenue. This in turn should give you ideas for what assumptions that need to be challenged.

Isolating Tests in B-to-B Testing Programs

If you’re testing assumptions, as I recommend, then you’re going to run into challenges isolating your test.

If you want to test, for example, different assumptions about why buyers sign up for webinars and all of your assets (ads, social media posts, calls to action, and landing page) support the initial assumption, then you can’t just run a test swapping out copy on the landing page.

Your challenger may be correct, but the promises you made to get the click to the landing page won’t align with the challenger version of the landing page. You’ll need to watch for this sort of co-dependency and then create alternate ads, social media posts, and CTAs that align with the message you are testing and send these alternates to the challenger landing page.

Not every test will suffer from this form of co-dependency, but you will need to watch for it.

Isolating tests is also important when dealing with tests on pages or resources that occur close together on the conversion path.

For example, you can’t test your home page messaging and your product page messaging at the same time since a significant proportion of your product page visits will pass through the home page. The results are co-dependant and running co-dependant tests together will mess up your sampling and results.

The A/B Testing Wall

If you follow the above outline for a testing program, you should have a steady source of good testing ideas, but nothing is going to stop you from hitting a wall in your A/B testing program, where results start coming more slowly and you find yourself with too much time between tests.

This is because early on in your testing program the opportunity is great. Increasing your conversion rate from three percent to six percent is much easier than going from six to 12.

At some point you will run close to the limits of optimization where the only way you can get big lifts in conversion rate is to confuse people who have no business converting. This is deadly where your conversions don’t result in immediate revenue and short-sighted under any circumstances.

Meanwhile, your best tests, in terms of quality, will still only give a lift of a few tenths of a percent and you’ll be back to six months plus to get a significant result.

If the team is learning about your buyers from your tests, then you can expect that new marketing assets will already start at a higher level than when you began the testing program, so your new testing opportunities won’t be as good as your original ones.

Additionally, each test can affect the results of another so you need to be careful about running too many tests at once.

If you want to test your home page’s ability to identify new opportunities, you can’t also test your product pages’ ability to convert those opportunities. The tests are co-dependent and should not be run in parallel.

As a result, you will find rapidly diminishing returns and a whole lot of free time as your A/B testing program matures. And, while you shouldn’t abandon A/B testing, you will make yourself dead weight on the team if you limit yourself to A/B testing in the browser.

Beyond A/B Testing

While A/B testing in a B-to-B setting can be valuable if done right, you will limit your results if you stick to the popular testing tools and built-in testing functions of your ad and marketing automation platforms.

Qualitative feedback and strategic A/B testing are two techniques that go beyond your standard testing tools. They can be the source of valuable lessons and results while extending the influence of testing beyond the browser.

Qualitative Feedback

B-to-B marketers are forced to make a lot of assumptions about their buyers.

We have tools like personas, case studies, and marketing funnels to help standardize lessons across the marketing team. We can use A/B testing or sit in on sales calls to challenge our assumptions, but there’s still a lot of guesswork involved.

Qualitative feedback, a fancy term for listening, helps challenge those assumptions and get results faster than a stand-alone A/B test.

If you want to get a better understanding of what your buyers are thinking at various points in their journey, then you should contact them during their journey and listen.

The goals for qualitative feedback can vary from broad understanding, like what the buyer is thinking at different stages of the funnel, to narrow understanding, like what they wanted from a specific webinar.

While the questions you ask will depend on your goals, the following survey questions are a good start for capturing what’s important to the buyer without taking too much time.

  • What were you trying to accomplish?

  • Did you accomplish it?

  • How can we do better?

In addition, there’s a fourth question that you should be able to infer from these answers that you can use to evaluate your lead scoring.

  • Are you in a buying cycle?

By recording the answer to this fourth question, you can evaluate whether your lead scoring is letting leads who should go to sales slip by and calculate the missed opportunity.
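
A small sketch of that audit, assuming a numeric lead score and an MQL threshold; the threshold, opportunity value, and call records below are all hypothetical:

```python
# Sketch: use the inferred "are you in a buying cycle?" answer to audit lead
# scoring. The threshold, scores, and opportunity value are hypothetical.

MQL_THRESHOLD = 60
AVG_OPPORTUNITY_VALUE = 5_000  # hypothetical expected value of a qualified lead

calls = [
    {"lead": "A", "score": 75, "in_buying_cycle": True},
    {"lead": "B", "score": 35, "in_buying_cycle": True},   # slipped past scoring
    {"lead": "C", "score": 80, "in_buying_cycle": False},
    {"lead": "D", "score": 20, "in_buying_cycle": False},
]

missed = [c for c in calls if c["in_buying_cycle"] and c["score"] < MQL_THRESHOLD]

print(f"{len(missed)} of {len(calls)} surveyed leads were in a buying cycle "
      f"but scored below {MQL_THRESHOLD}")
print(f"Estimated missed opportunity: ${len(missed) * AVG_OPPORTUNITY_VALUE:,.0f}")
```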

While there are tools that automate feedback, I think picking up the phone and calling leads who are in the midst of their buyer’s journey, asking the questions, and listening is worth the effort.

By using automated tools, like feedback popups embedded on your site, you are limiting the effectiveness of your survey:

  • You are biasing your responses to only those who are willing to fill out the form, which will skew your data and hurt your analysis.

  • You are annoying your visitors.

  • Your responses will be more filtered as people edit their writing better than their speech.

  • You miss a chance to show that your company actually listens to buyers.

By calling and transcribing your calls, you can learn all sorts of important lessons and get feedback from a larger segment of buyers:

  • What kinds of language your buyers use to describe their problems.

  • Who else is involved in the buying decision.

  • What the buyer’s journey actually looks like rather than the idealized version of your marketing funnels.

  • Where your marketing falls short of meeting expectations.

Just doing two calls a day will give you 40 calls’ worth of information by the end of the month. With just 40 calls, you can start making better decisions about what goes on the homepage and product pages and what sorts of pages you should add to the site.

As your volume of calls grows, you can segment this information by industry, persona, and funnel stage to get an even more granular understanding of your buyers. Eventually you can use the feedback to feed natural language processing libraries to further deepen your understanding and turn qualitative insight into quantitative insight.
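
As a starting point, even something as simple as tallying common phrases by funnel stage can surface patterns before you reach for heavier NLP. This sketch uses only the standard library, and the transcripts and stage labels are hypothetical:

```python
# Sketch: segment transcribed call feedback by funnel stage and tally common
# two-word phrases. Transcripts and stages below are hypothetical.
from collections import Counter, defaultdict

calls = [
    {"stage": "evaluating", "transcript": "we need to cut onboarding time"},
    {"stage": "evaluating", "transcript": "onboarding time is killing us"},
    {"stage": "aware",      "transcript": "just looking for benchmarks"},
]

phrases_by_stage = defaultdict(Counter)
for call in calls:
    words = call["transcript"].lower().split()
    bigrams = zip(words, words[1:])  # crude two-word phrases
    phrases_by_stage[call["stage"]].update(" ".join(b) for b in bigrams)

for stage, counts in phrases_by_stage.items():
    print(stage, counts.most_common(3))
```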

Qualitative feedback should be used to source test ideas, but don’t limit yourself to just informing your testing program. It should also help you better understand your buyers and make better decisions.

Finally, it should be used in place of testing where testing is difficult and takes too long.

Testing the homepage against many competing internal and external priorities is a huge challenge.

You could, for example, create a homepage that greatly increases the number of blog subscribers. You could even calculate that this improvement will have a greater impact on revenue than a smaller improvement that increases the number of opportunities. But, in making the pure numbers-based decision you could very easily choke off opportunities and waste those subscriber gains.

Listening to feedback can help you make stronger decisions and can be a better alternative where A/B testing struggles.

A/B Testing Beyond the Browser

There’s one last form of testing that unequivocally belongs in a B-to-B testing program even if in a limited way.

You can test your strategic marketing mix by removing different tactics from different regions and measuring the results after a couple of sales cycles.

Retargeting ads, for example, may look great because they touch every visitor before they become a customer. But do they have an impact? By definition, they are shown to everyone who visits your site, or at least a specific page.

If you want to know whether you are just sinking your money into retargeting, you’ll have to pull retargeting from a test region and measure the results after a couple of sales cycles.

Field marketing is another good candidate for this form of testing.

What you do is pick a region, cut a tactic off from that region for a few sales cycles, and then analyze the tactic’s effect on the number of deals, average deal value, and total deal value compared with other regions that kept the tactic, as well as with historical data from within the test region.
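
A sketch of what that comparison might look like, with a hypothetical “West” region where retargeting was paused; all deal counts and values are made up for illustration:

```python
# Sketch of the regional holdout analysis: compare deal metrics for the region
# where the tactic was paused against other regions and against the same
# region's own history. Every number below is hypothetical.

def summarize(label, deals, total_value):
    avg = total_value / deals if deals else 0
    print(f"{label}: {deals} deals, avg ${avg:,.0f}, total ${total_value:,.0f}")

# Two sales cycles with retargeting paused in the "West" region:
summarize("West (retargeting paused)",     38, 1_140_000)
summarize("West (prior two cycles)",       41, 1_210_000)
summarize("Other regions (same period)",  152, 4_700_000)
summarize("Other regions (prior cycles)", 149, 4_560_000)

# If West's movement is in line with (or better than) the control regions',
# the paused tactic probably wasn't driving much revenue.
```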

This form of testing isn’t very challenging to implement and you will be testing things that help the CMO allocate budget and the CFO give more budget. You are somewhat limited in what you can test to marketing tactics that are wholly initiated by sales and marketing. A blog, for example, requires content that is initiated by marketing but also visitors who will find your blog independent of the region (and your control).

Testing at the strategic level will make your testing program more valuable to the organization in a way that executives can better appreciate.

The Testing Role in B-to-B Marketing

There’s still plenty that you can learn from the commonly-shared ecommerce testing best practices but testing in a B-to-B setting presents a host of different challenges and you need to understand these challenges in order to have a successful testing program.

Low traffic levels and long sales cycles limit the power of testing, but that doesn’t mean it can’t be a powerful tool.

Structuring your test program around understanding your buyers will not only help you overcome those limits but also improve the overall effectiveness of marketing beyond just the testing role.
