Just Do Something: A Starter Kit for Marketing Performance Improvement

The full post is over at Hedgehog. This is a slightly edited and abridged version.

In talking to a wide range of companies over the years, we've found that a surprising number have made little headway toward maturing their marketing performance. Even with the tools, the personnel, the executive buy-in, and the career aspirations, companies are often stuck in neutral.

Our message is always “just do something”. A single page, component, or campaign can be the first step along the learning curve to performance-based marketing.

The Basic Plan

Pick one element to test

Find a single piece of web content that would benefit from visitor-based optimization. An image, a copy block, or a button would work. Seriously, just one will do to start. We do, however, recommend an image; images tend to be easy to source and often produce significantly different results. There are other areas for testing, such as outbound email, social, or mobile, but the web is typically the easiest to get up and running quickly.

Determine your objective

It is important to pick a variable to assess, such as revenue per visit or form completion rate. Without an objective, there is no way to analyze the impact of different versions. The metric should reflect a result that would actually change how you operate your business.

For example, showing a 5% increase in visitors who click on an image may not result in any organizational changes. But showing a 5% increase in revenue per visit when you replace a dog photo with a cat photo will likely drive more investigation and eventual reorientation of your creative direction.

Measure whatever you can

Hopefully your website is already instrumented for most performance metrics. If you can’t get the perfect metric, look for something similar. The key here is to pick something meaningful and easy to understand, because your goal will be to show how testing changes results and use that proof to get budget to improve reporting.

Test

Hopefully your CMS allows for random A/B testing. If it doesn’t, you could test the hard way by running one version of the image for a while, then running another, and comparing results by timeframe, though seasonality and shifts in traffic mix will muddy that comparison. (If you need to do this, it’s time to upgrade.)
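
If you do end up wiring something together yourself, the core of random assignment is tiny. Here is a minimal Python sketch (the function and test names are ours, purely illustrative) that hashes a stable visitor ID, such as a cookie value, so each visitor always sees the same variant:

```python
# Minimal 50/50 variant assignment, assuming you have a stable
# visitor ID (e.g. a cookie). Hashing makes the split deterministic:
# the same visitor gets the same image on every page load.
import hashlib

def assign_variant(visitor_id: str, test_name: str = "hero-image") -> str:
    """Return 'champion' or 'challenger' for this visitor."""
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "challenger" if int(digest, 16) % 2 else "champion"

print(assign_variant("visitor-12345"))  # stable across page loads
```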

You’ll need a hypothesis to test, such as “cat photos convert better than dog photos” or “photos with dogs alone convert better than photos of dogs with their owners”. While a single test won’t definitively prove or disprove your hypothesis, it will increase your knowledge for the next test. The test itself would be between the current image (the “champion”) and a test image (the “challenger”).

Run your test long enough to produce statistically significant results. How long is that? Well, it depends on how much difference between the two versions you expect. The smaller the difference, the more data points you need.

For example, if you normally see 1.50% conversion from visitors who reach the page with the test, and you want to detect a 20% difference in conversion (that is, your hypothesis is that the test image will drive 20% more conversions), you will need approximately 27,000 visitors per version for a test at 90% significance. You can work out these values for yourself using Optimizely’s really handy A/B test calculator.
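
If you'd rather see where numbers like that come from, here is a back-of-envelope version of the same calculation in Python. It uses the standard two-proportion normal approximation, not Optimizely's exact methodology, so treat it as a sketch that lands in the same ballpark; the answer moves around quite a bit with the power level you choose:

```python
# Rough sample size for the example above: baseline 1.50% conversion,
# looking for a 20% relative lift at 90% significance. This is the
# textbook two-proportion formula, not Optimizely's exact method.
from scipy.stats import norm

baseline = 0.0150                 # current conversion rate
challenger = baseline * 1.20      # hypothesized rate with the new image: 1.80%

alpha, power = 0.10, 0.80         # 90% significance, 80% power (assumed)
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

variance = baseline * (1 - baseline) + challenger * (1 - challenger)
n = (z_alpha + z_beta) ** 2 * variance / (challenger - baseline) ** 2
print(f"~{n:,.0f} visitors per version")  # roughly 22,000 with these settings
```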

Why so many? Well, you need enough conversions to compare, and at such a low conversion rate it takes a lot of visits to generate a big enough sample. Some quick math shows you would normally have 405 conversions on 27,000 visitors, and you are expecting 486 with the test. That’s not a big difference. A little randomness one way and it will look like there’s no difference when in reality there is (and vice versa). So you may need to run your test for a while to gather enough data for a valid conclusion.

Analyze

The results of your test may be conclusive. Or they may not. It is up to you to decide how much risk there is in accepting or rejecting the results of the test.

For example, suppose in the situation above the champion converts at 1.53% during the test period and the challenger converts at 1.67%. At these sample sizes, that is not a statistically significant difference between the two groups. But looking at it, you’ll be tempted to switch to the challenger based on its 9.2% better conversion rate. It’s OK to do so, but be aware there isn’t statistical proof that it was better. You could run the test again for another 54,000 visitors. If this time you get the champion at 1.49% and the challenger at 1.62%, your hypothesis is still not proven, but as a marketer you’d feel a lot more comfortable switching to the challenger.
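
If you want to check numbers like these yourself, a standard two-proportion z-test is a few lines of Python. The conversion counts below are back-calculated from the rates above, purely for illustration:

```python
# Two-proportion z-test on the example results: champion ~1.53% vs
# challenger ~1.67%, with 27,000 visitors per version.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([413, 451])     # ~1.53% and ~1.67% of 27,000
visitors = np.array([27000, 27000])

z, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z:.2f}, p = {p_value:.3f}")  # p is about 0.19, well above 0.10
# Not significant at the 90% level, matching the read above.
```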

Repeat

[Image: tortoiseshell kitten reaching up]

Once you’ve completed your first test, congrats! You now have proof of how different images drive different results. You can take those results and form a new hypothesis or extend your original one. Maybe you’ll test Pomeranians vs. Pit Bulls. Or Torties vs. Siamese. Regardless, make sure every test would result in a change in your marketing; otherwise you’ve wasted the test. (Above image from the Cat Breeds Encyclopedia.)

Takeaway

This is a very basic approach, but one that will produce real results. We’ve seen how just one test can transform how companies allocate budget and attention, by highlighting the different results from seemingly similar options.
