No matter how well you’ve designed your post-click landing page, it’s not going to work on the majority of your visitors. At least, not at first.
They’ll click through your ad, evaluate your offer, and they’ll likely abandon your landing page. But the more visitors you accumulate, the more you learn about how to get them to sign up, download, and buy. With a process known as conversion rate optimization (CRO), you can turn that information into revenue.
What is a conversion rate?
A conversion rate is the measurement of how effective your page is at producing conversions. In advertising, a conversion could be a sign up, a download, a purchase – anything that requires an action on the part of the visitor that moves them further through the buyer’s journey.
To calculate the conversion rate of a page, you need two numbers: the number of conversions the page has generated and the number of total visitors to the page.
From these, you divide the first number by the second, and you have your conversion rate. If 1,000 people have visited your page, and 50 of them have converted, you divide 50 by 1,000 and you get .05, which is 5%. Your conversion rate for that page is 5%.
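The calculation can be sketched in a few lines of Python (the function name is illustrative):

```python
def conversion_rate(conversions, visitors):
    """Return a page's conversion rate as a percentage."""
    if visitors == 0:
        return 0.0  # avoid dividing by zero for a page with no traffic
    return conversions / visitors * 100

# 50 conversions from 1,000 visitors:
print(conversion_rate(50, 1000))  # → 5.0
```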
What is CRO?
CRO, or conversion rate optimization, is the systematic improvement of a page’s conversion rate. If you want to boost your page’s conversion rate from 5% to 7% or 10%, it will take a combination of techniques like data collection and testing.
CRO isn’t always the answer
The last few years have seen a major growth in the conversion rate optimization industry. From 2017 to 2018, the number of businesses spending more on CRO increased by 45%. But as with any tactic, it’s important to evaluate whether it’s worth implementing for your business. For many, the answer is no.
First off, CRO is a method that, to get right, takes constant attention. Just one test can take weeks or months to run. And even if it produces a positive result, it may not move the needle as far as you’ve been led to believe.
Results like “Changing headline boosted conversion rate by 75%” are not common. Chances are they’re not even valid. Sustained lifts to conversion rate are much lower, usually in the single-digit range.
To produce a marked improvement that significantly impacts the bottom line, you need more than scattershot one-and-done tests. You need a strategy.
However, before you consider implementing a CRO strategy, it’s important to ask yourself: Do I have the time and resources to be running tests constantly? And most importantly, will CRO provide the biggest improvement to bottom-line goals compared to other activities?
Two examples when CRO doesn’t work
A tactic is only valuable if it adds to your bottom line. And while optimizing conversion rate often translates to an increase in bottom line, it doesn’t always do so as effectively as other methods.
Consider this: If I drive 100 people to my landing page and they convert at 10%, it means I’ve converted 10 people. Now, I can run test after test to try to boost that conversion rate to 15%, thereby converting 15 out of every 100 people, or I can generate 100 more visits to the page, thereby generating 20 conversions.
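The trade-off above is simple arithmetic, sketched here with the same hypothetical numbers:

```python
visitors = 100
baseline_rate = 0.10  # 10% conversion rate

# Option 1: CRO lifts the rate from 10% to 15% on the same traffic
cro_conversions = visitors * 0.15

# Option 2: keep the 10% rate but double the traffic
traffic_conversions = (visitors * 2) * baseline_rate

print(cro_conversions)      # 15 conversions
print(traffic_conversions)  # 20 conversions
```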
Simply boosting your traffic with paid ads or organic content can be a much easier and more sustainable way to improve bottom-line goals.
Example two comes from a server monitoring company called Server Density. They were attempting to boost revenue generated from an outdated pricing page, which once allowed visitors to pay by the number of servers they needed monitored.
That structure was created to grow the customer base. It accomplished its goal, and then Server Density shifted its focus to revenue.
To increase it, they could’ve focused on finding ways to improve conversion rate with different page layouts, CTA buttons, and so on. They could’ve boosted conversions with more traffic. Instead, though, they did something very different.
They restructured their pricing model. Have a look:
This new page featuring an overhauled pricing model actually generated fewer conversions at a lower conversion rate. However, data from Server Density shows it generated more than double the revenue.
The lesson here is that there’s more than one way to raise bottom-line metrics, but CRO isn’t always the most effective one. If you’ve determined it is, however, you shouldn’t just start testing scattershot tactics like changing images and headlines. You need a sound CRO process.
Getting started with CRO
The majority of a sound CRO process actually involves very little of what you might consider traditional optimization. It’s mostly research, data analysis, and testing; only at the end of the process are you actually able to make a confident optimization to a page.
Conduct technical testing
The goal of CRO is to systematically improve your conversion rate with a well-formed strategy. But not every improvement requires a drawn-out process.
Before you even think of diving into weeks-long tests to improve a single page element, look for issues you can test instantly.
ConversionXL founder Peep Laja recommends starting with technical testing for glaring issues that can provide quick wins like:
Browser/device testing: Peep says: “If you think your site works perfectly on every browser version and every device, you’re probably wrong.” Open Chrome, Firefox, Safari, and Edge on both desktop and mobile, then navigate to your site. Does it look and work the same way everywhere?
Measure site speed: Site speed is one of the biggest factors affecting conversion rate. Research has shown that 53% of users will abandon a page that takes more than three seconds to load. And according to another study, the average mobile landing page takes a shocking 15 seconds to load. Tools like Google PageSpeed Insights can give you an idea of what could be slowing your site down.
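For a rough do-it-yourself check, you can also time a server’s response from a short script. This sketch (Python standard library only, hypothetical URL) measures time to first byte, which is just one slice of what PageSpeed Insights reports; full page render takes longer:

```python
import time
import urllib.request

SPEED_BUDGET = 3.0  # seconds; the abandonment threshold cited above

def time_to_first_byte(url, timeout=10):
    """Roughly measure how long the server takes to start responding."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)  # wait for the first byte of the body
    return time.perf_counter() - start

# Example usage (hypothetical URL):
# elapsed = time_to_first_byte("https://example.com/landing-page")
# print("Over budget!" if elapsed > SPEED_BUDGET else f"{elapsed:.2f}s")
```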
It’s also worth testing for dead links, issues with advertising platforms, and so on. Before you dig into making calculated optimizations through A/B and multivariate testing, make the easy ones that you can resolve quickly. There are a variety of A/B testing tools that can help optimize your pages.
Approach with a business problem
In blog posts and case studies, you’ll get recommendation after recommendation of what you should optimize to boost conversion rate. Some will say images. Others might say headlines, CTAs, or a host of other page elements.
But using their case studies as a starting point is misguided. Here’s why: Their business problems are not your business problems.
The answer to “What should I optimize?” isn’t likely to be found in another business’s case study or a list of best practices. Instead, it should come from the biggest obstacles your customers face on their path to purchase. Ask yourself:
Why do most people come to my site? What problem do they need solved? (This can be as simple as “they want more information” or as complex as “they need a comprehensive IT solution that addresses X, Y, and Z issues.”) Personalizing the user experience is paramount to keeping visitors on your site and solving their problem.
What do I want people to do on my site? (You may want people to do different things on different pages or sections of your site.)
After you’ve figured these out, put yourself in your site visitors’ shoes. Enter your site through the channels they do, the devices they use, the campaigns they approach. Move through the customer journey. Convert on key landing pages. Note any friction in the process that might cause them to drop out of the funnel.
Assess company data
Self-assessment alone can’t form the basis of your tests. You need data to back it up. There are two main types, and each can be gathered in many ways:
Quantitative data: what you normally think of when you hear “data.” It’s numbers, percentages, ratios – points that are easy to quantify. This is objective data like bounce rate, conversion rate, click-throughs, and more. It can be gathered through standard analytics tools.
Qualitative data: the other kind of data – the subjective measures that cannot be easily quantified. While they’re not as black and white as quantitative data, they’re often more valuable because they give you real, tangible insight into how people are using your pages and what’s going wrong. This can be collected with exit polls, email surveys, pop-ups, phone calls with customers, heat mapping technology, user session recordings, and so on.
In both quantitative and qualitative data, you’re looking for big holes and drop-offs. Where are you succeeding? And more importantly, where are you failing?
Once you start combining this data with assessments of your own content, you’ll likely find countless ideas for optimizations. Now, you need to figure out which to pursue.
Prioritize your projects
Before you can start testing improvements, you need a way to determine which CRO project will be most productive to your business.
For this, Ben Cotton suggests using the “PIE” framework, which he says has been used at HubSpot with great success. It involves ranking potential CRO projects on three factors:
Potential: How much total improvement can this project offer?
Importance: How valuable will this improvement be?
Ease: How complicated will it be to implement this improvement?
For every optimization idea you’ve come up with, answer these questions with a number between one and 10 (one being the lowest and 10 being the highest).
Once you’ve assigned a score for each factor, add up the three numbers and divide by three – this average is the project’s score, and it shows which projects will have the greatest impact. Work on the projects with the highest scores first.
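Scored in code, the framework is just an average. The project names and ratings below are made-up examples:

```python
def pie_score(potential, importance, ease):
    """Average the three PIE factors, each rated 1-10."""
    return (potential + importance + ease) / 3

# Hypothetical optimization ideas and their factor ratings
projects = {
    "Rewrite pricing page headline": pie_score(8, 7, 9),  # 8.0
    "Redesign checkout flow": pie_score(9, 9, 3),         # 7.0
    "Add testimonials to homepage": pie_score(5, 4, 8),   # ~5.7
}

# Tackle the highest-scoring projects first
for name, score in sorted(projects.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.1f}  {name}")
```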
Ben is sure to emphasize that while the framework isn’t perfect, it’s a great way to communicate to your team the value of CRO projects and why they’re being pursued.
Test your ideas
Carrying out your ideas now requires testing. This is the trickiest part of the process with the most potential for wins and losses. Do it right and you’ll learn ways to improve your conversion rate. Do it wrong and you could implement an idea that actually hurts your business in the long run.
First you have to decide the kind of test you’re going to run. In CRO, the two most common types are A/B testing and multivariate testing.
A/B testing: the method that’s best used for finding what’s called the “global maximum,” which is the best general version of your page. It involves testing an “A” page, the original, vs. a “B” page, the variation (there are also A/B/C tests, which compare more pages against each other at a time). At the end of the test, you will know which page performs better for a particular goal, like conversion rate, but you won’t know how the individual page elements interact with each other.
For example, here’s a test completed by Marketing Experiments, in which two very different versions of an Investopedia page were tested against each other.
Here’s the original:
And here is the variation, with a different headline and layout, fewer images, and copy written as a letter rather than the bulleted list above:
This variation outperformed the original by 89%. That was learned through A/B testing. However, what an A/B test like this can’t tell you is why the page improved.
Was it the new layout? The new headline? The letter-style copy? Unless you conduct an A/B test on each element one at a time (headline 1 on page A vs. headline 2 on page B), you’ll never really know.
However, for the vast majority of businesses, testing one element at a time is a wasteful method. At the end of the day, does it particularly matter why one page beat another? Or will you be satisfied with a page that performs 89% better, regardless of the reason?
Multivariate testing: while the strength of A/B testing is its ability to discover the best-performing page among a set of drastically different pages, multivariate testing is best for fine-tuning to find the “local maximum.”
For example, with the help of the test above, we now know the best performing general page. It’s variation “B.” If we want to figure out which combination of elements on that page will improve it even further, we use a multivariate test. This will allow you to compare things like:
Headline 1, Image 1, Button 1 vs.
Headline 1, Image 2, Button 1 vs.
Headline 1, Image 1, Button 2 vs.
Headline 1, Image 2, Button 2 vs.
Headline 2, Image 1, Button 1 vs.
Headline 2, Image 2, Button 1 vs.
Headline 2, Image 1, Button 2 vs.
Headline 2, Image 2, Button 2
At the end of this test, you’ll be able to understand not only which combination is most effective, but also the interactions between each element and how they contribute to your goal. The downside to multivariate testing is that it’s not as straightforward as A/B testing, and it also requires a lot more traffic to complete.
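The eight combinations above are just the Cartesian product of the element variations, which is also why multivariate tests need so much traffic: every new element or variation multiplies the number of cells. A quick sketch:

```python
from itertools import product

headlines = ["Headline 1", "Headline 2"]
images = ["Image 1", "Image 2"]
buttons = ["Button 1", "Button 2"]

# Every combination of headline x image x button
variants = list(product(headlines, images, buttons))

print(len(variants))  # → 8
# Adding a third headline would already mean 3 x 2 x 2 = 12 variants
```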
Once you’ve decided which test you’re using, you must progress through a very particular set of steps to ensure you’re as accurate as possible. Among other steps, this involves setting a clear hypothesis, defining your minimum detectable effect, and discovering the sample size needed to reach statistical significance at a high level of confidence.
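For the sample-size step, a standard back-of-the-envelope formula for a two-proportion test can give you a ballpark before you reach for a dedicated calculator. This sketch assumes a two-sided 95% confidence level and 80% power (hence the hard-coded z-scores); real testing tools will be more precise:

```python
import math

def sample_size_per_variant(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant.

    baseline -- current conversion rate (0.05 means 5%)
    mde      -- minimum detectable effect, absolute (0.01 means +1 point)
    z_alpha  -- z-score for 95% two-sided confidence
    z_beta   -- z-score for 80% statistical power
    """
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / mde ** 2)

# Detecting a lift from 5% to 6% takes roughly 8,000+ visitors per variant
print(sample_size_per_variant(0.05, 0.01))
```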
When the test begins, it’s important to watch for variables that can threaten the validity of your outcomes, like the instrumentation effect and the history effect. Once your sample size has been exhausted and significance reached, you’ll be able to get a relatively clear idea of which page performed better.
Analyze and optimize
Now that you have your test results, if they’re positive, you can finally move to actually optimizing your page. Implement the winning version, and make sure you’re recording the results of each test. Aeden offers a very simple way to do so:
If the results are negative, not to worry. Much of finding what works is about finding what doesn’t work. Arguably, there’s more to learn from a negative test result than a positive one.
Review your method. Were there any errors that went unchecked? If not, review the page. What have you learned about your audience by failing to convert them with this approach?
Maybe your longer variation had too much copy. Maybe a video doesn't work like you thought it would. These are great places to start for the next attempt at optimization.
Tyson Quick is the Founder and CEO of Instapage, the leader in post-click automation. He founded Instapage in 2012 after seeing how performance and growth marketers were losing money in underperforming advertising campaigns. Since then, his vision has been to create a suite of post-click automation products that maximize returns through advertising personalization.