
Comparative Winner: A/B testing

Just as with websites, it is common practice to test and optimise social media accounts and feeds. A/B testing, also known as split testing, refers to comparing ads that are identical in all but one element. Comparing the ads makes it possible to determine the best-performing version.

In A/B testing, the target group is divided into two or more groups, each of which is shown a version of the ad that differs only in the one element currently being tested. These elements might include aspects such as target group, content, distribution, or design. The results of the different variations of the same ad are compared with each other, and the best-performing version is chosen. A result of such an A/B test could be, for example, that the target group is best reached through the social media timeline, or that pictures with smiling people yield the best results.
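
In practice, picking the winner comes down to comparing the measured results of the variants and checking that the difference is large enough not to be mere chance. The sketch below is a minimal illustration in Python, assuming all we have is the number of impressions and clicks for each variant; the figures and the two-proportion z-test are illustrative choices, not part of any platform's tooling.

```python
# A minimal sketch of comparing two ad variants by click-through rate.
# The counts below are made up purely for illustration.
from math import erf, sqrt

def z_test_two_proportions(clicks_a, impressions_a, clicks_b, impressions_b):
    """Two-proportion z-test on click-through rates; returns (z, p_value)."""
    rate_a = clicks_a / impressions_a
    rate_b = clicks_b / impressions_b
    pooled = (clicks_a + clicks_b) / (impressions_a + impressions_b)
    se = sqrt(pooled * (1 - pooled) * (1 / impressions_a + 1 / impressions_b))
    z = (rate_a - rate_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = z_test_two_proportions(clicks_a=120, impressions_a=4000,
                              clicks_b=85, impressions_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the gap is not just chance
```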

Example

Two ads are created, ad A and ad B, that differ in only one defined aspect. If, after exposing the focus groups to the ads, ad A turns out to perform better, ad B is discarded. Then a new version of the ad, say ad C, is created with some other aspect modified and compared with ad A. The company can then invest in the best-performing ad and trust the winner to yield the best results when shown to the general public.
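
As a rough illustration of this "keep the winner" loop, the sketch below assumes each ad already has a measured click-through rate; the names and numbers are made up purely for the example.

```python
# A sketch of the "keep the winner" iteration described above.
# measured_ctr stands in for real test results and is entirely illustrative.
measured_ctr = {"ad A": 0.030, "ad B": 0.021, "ad C": 0.034}

def pick_winner(champion, challenger):
    """Keep whichever ad performs better; the winner becomes the new baseline."""
    return champion if measured_ctr[champion] >= measured_ctr[challenger] else challenger

champion = "ad A"
for challenger in ["ad B", "ad C"]:   # each challenger changes one defined aspect
    champion = pick_winner(champion, challenger)

print("Invest in:", champion)         # "ad C" with these illustrative numbers
```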

What can be tested?

For instance, Facebook’s A/B testing tool supports the testing of four elements: target group, posting optimisation, design, and creative content.

Target group: location, age, gender, level of education, interests, and behaviour
Posting optimisation: timing and platform of the post
Design: the design and placing of the elements in the ad
Creative content: title, readability, choice and number of words, videos, GIFs, pictures of the product, people, and project
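
One loose way to keep such a test honest is to describe each ad as a set of these four elements and check that a pair of variants really differs in only one of them. The sketch below is purely hypothetical: the field names are illustrative and have nothing to do with Facebook's actual tools.

```python
# A hypothetical ad definition; the field names are illustrative only.
ad_a = {"target_group": "25-34, interested in fitness",
        "optimisation": "post at 18:00 on weekdays",
        "design": "image on the left, call-to-action on the right",
        "creative": "photo of the product"}

# Variant B changes exactly one element: the creative content.
ad_b = dict(ad_a, creative="photo of smiling people using the product")

def differing_elements(a, b):
    """Return the elements that differ between two ad definitions."""
    return [key for key in a if a[key] != b[key]]

changed = differing_elements(ad_a, ad_b)
assert len(changed) == 1, "A/B test variants should differ in exactly one element"
print("Testing element:", changed[0])  # -> creative
```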

How to test efficiently and reliably

“Oh, but that’s easy! I’ll compare two almost identical ads and use the better one in my campaign”, you might say. However, there are a few things to keep in mind when analysing the results. First, take a moment to consider how generalisable the results are: if the testing budget or scale has been small, the results may not be reliable. Also think about the length of the testing period, as the optimal duration depends on your targets and industry. Facebook recommends testing periods of 3 to 14 days. A test shorter than three days may not gather enough information to be useful, while tests longer than 14 days rarely reveal anything new after the first fortnight and simply waste budget.
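
To get a feel for why a small budget or scale undermines the results, the sketch below estimates roughly how many impressions each variant needs before a realistic lift in click-through rate can be detected at all. The baseline rate, the lift, and the conventional 95% confidence / 80% power defaults are illustrative assumptions, not a platform recommendation.

```python
# A rough sample-size sketch for a two-proportion test; all inputs are illustrative.
from math import ceil, sqrt

def required_impressions_per_variant(baseline_ctr, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate impressions per variant to detect a relative lift in CTR."""
    p1 = baseline_ctr
    p2 = baseline_ctr * (1 + lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# e.g. detecting a 20% relative lift on a 2% baseline click-through rate:
print(required_impressions_per_variant(baseline_ctr=0.02, lift=0.20))
# roughly 21,000 impressions per variant; a small test may simply never get there
```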

Most importantly, make sure to test only one aspect at a time. If two or more elements change between the versions, it becomes hard to pinpoint which of them made the difference. As in all other social media production, A/B testing requires thorough planning. There are no shortcuts. Consistency is the key to the perfect ad.

Consistency, again

Testing can provide multiple benefits: lower marketing costs, better results, and more effective content. Instead of guessing and “feeling it out”, you can place your trust in hard data. One final piece of advice, though: what works today may not work tomorrow. Markets and trends are constantly moving, and you should be too. Consistent testing provides consistent results, and testing should be a standard tool in your marketing toolbox.