I had an interesting A/B testing experience over the holidays. This time, it wasn’t an A/B test that I was running, but rather an A/B test in which I was an (initially) unsuspecting participant.
It reminded me of the negative side effects that certain kinds of tests can have on customers — sometimes your best customers — and the steps that marketers should consider to mitigate those risks.
This is a cautionary tale, but I don’t want to lambaste the company running the test. They ultimately handled the situation fairly. So I will refer to them anonymously as a business software provider named Acme.
Testing One’s Sanity
Acme typically offers three levels of their product: Basic, Advanced, and Super-Duper. Each level adds more features and increases in price. For illustrative purposes, let’s say their monthly subscription prices are normally $100, $200, and $300 respectively.
During a family gathering for Christmas, one of my in-laws asked for a recommendation of such software for his business. Since I knew and liked Acme, I was quick to suggest them. He pulled out his laptop, and I navigated to their website. But when I clicked on their “Pricing” tab, expecting to show him the three different levels, there was just one option: $100.
At first, I was disoriented. I was looking for the Advanced level that I already knew, but it wasn’t there. I could have sworn it was there a few days ago when I looked at it on my computer. Had they rearranged the site architecture?
To check my sanity, I pulled out my laptop, went to their site, clicked the “Pricing” tab… and still saw all three levels with their original pricing.
Huh? I refreshed my page. I refreshed his. He had one option. I had three. Looking more closely, the difference was even more striking. It turned out that for $100 he was being offered all of the features of the Super-Duper level — what was being offered on my screen as a $300 package.
My first reaction was, “Cool, an A/B test!” (What can I say? I’m a conversion geek. And I was relieved to have a rational explanation for the bizarre dichotomy between our two computers.)
But my professional appreciation for finding an A/B test in the wild was slowly replaced by a different sensation. I felt, well, gamed. I had been ready to sign up for the Advanced package myself and would have paid $200 for it.
In fact, that was what Acme was telling me, with a metaphorically straight face, was still “the” price. But on the computer right next to mine, they were telling my in-law that the price was something very different — twice as many features for half the price.
“Hey,” I wanted to chide Acme. “I’m referring you business, and you’re offering them a deal that’s way better than what you’re telling me I have to pay? What gives?”
Testing One’s Patience
I actually did feel a little indignant. This was a pretty big difference in pricing. And that was with my reaction tempered by a pro-A/B testing worldview; I wouldn’t expect most customers to be so enlightened.
To someone who doesn’t understand the impersonal randomization at work in such experiments — or even that it’s an experiment at all — it would be easy to attribute ill intent to Acme’s schemes.
And that’s not the end of the story.
We eventually signed him up — Acme does have a very nice product — in a browser where we had the special $100-for-everything offer. It started with the first two weeks as a completely free trial.
But when that expired, and he went to add his credit card info for the recurring subscription, the Acme system somehow reverted to giving him the regular three-tier pricing. Confused, he chose the $100 level, but it was the Basic version without all the Super-Duper features.
Technically, I believe this happened because two different computers were used. One had the cookie for the special pricing, the other did not. But however it got mixed up, it caused another round of confusion and consternation.
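The failure mode described above can be sketched in a few lines. This is purely illustrative (I obviously don’t have Acme’s actual code); it just shows why storing the test assignment only in a per-browser cookie means every new device rolls the dice again:

```python
import random

# Hypothetical sketch of cookie-only variant assignment. The variant
# names and cookie key are made up for illustration; the point is that
# each browser keeps its own cookie jar, so the same customer can land
# in different test groups on different machines.

def get_variant(cookies: dict) -> str:
    """Assign a pricing variant once per browser, stored in a cookie."""
    if "pricing_variant" not in cookies:
        cookies["pricing_variant"] = random.choice(["three_tier", "flat_100"])
    return cookies["pricing_variant"]

laptop_a, laptop_b = {}, {}   # two devices, two independent cookie jars
get_variant(laptop_a)         # may differ from...
get_variant(laptop_b)         # ...this one: the mixup described above
```

Keying the assignment to a server-side account record instead of a browser cookie would have kept my in-law in the same group when he switched computers.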
We opened a support ticket with Acme, explaining the situation and asking for the $100-for-everything package. The first reply we received was a little squirrelly, vaguely admitting that they had been testing new packages with a small customer group — to help them “understand how to better serve all of our customers, like you.” But they didn’t acknowledge us as one of the test subjects and didn’t adjust our package.
Now I was starting to get annoyed. We replied to clarify that, indeed, we had been in that test group, and that we expected them to honor the offer.
After an escalation or two on Acme’s side, they relented and gave us the special deal. They also apologized for the confusion caused by the price test. In the end, I felt they resolved it well. But I was convinced that for other prospects put through such an experience, it easily could have ended badly — lost trust, lost customers, negative referrals, or a social media PR disaster.
Suggestions For Testing Prices
After contemplating the above situation for a while, I have a few suggestions for Acme (and the rest of us).
First, although testing is the bedrock of modern marketing, recognize that price testing is a bit of a different beast. People are unlikely to be offended if they see one headline and their colleague sees another.
Experimenting with different content — such as videos versus images on a page — is very low risk. But if I get told a different price than the person sitting next to me, for no apparent reason, there’s going to be trouble.
Keep in mind how easy it is for people to compare the same site on two different computers. An employee sees one thing, the boss sees another.
A consumer sees one thing, a friend or family member sees another. And, of course, there’s the ever-increasing propensity to share what one sees with whomever will listen on Twitter, Facebook, etc.
Nonetheless, sometimes you will want to test pricing.
As one alternative, consider testing serially instead of in parallel. Try offer A this month and offer B next month. Such tests have a few characteristics that are less than ideal — such as more potential for confounding variables — but they avoid scenarios where people are getting two different offers at the exact same time.
Or consider limiting tests to particular campaigns — with their own landing pages, separate from your primary site. You can have more control over who receives the offers and the context in which they’re presented.
One option is to explicitly identify something as a special offer, available for a limited time or constrained by other restrictions. Admittedly, this is a very different test than quietly testing two different prices that people assume is the regular price. But if the test is discovered, a special sale price seems more forgivable.
Another factor to consider is how big the price difference is. In the Acme example, they were effectively offering the product at 1/3 its regular price. A difference that big really smarts when the person told to pay full price finds out about it. If the difference had been 10%, maybe even 20%, it would have been less jarring to discover.
But perhaps you really want to test a big price difference, quietly, without identifying it as a special offer or sale. In that case, you may want to keep the share of traffic shown the “challenger” price low — maybe it only shows up 1 out of 10 times, rather than a traditional 50-50 split test. It doesn’t eliminate the problem, but it does reduce the probability of a collision.
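A skewed split like that is straightforward to implement. Here’s a minimal sketch (names and ratios are my own assumptions, not any particular testing tool’s API) that hashes a stable visitor ID so the assignment is both deterministic across devices and weighted toward the control price:

```python
import hashlib

# Illustrative sketch of a 90/10 split: only ~10% of visitors ever see
# the challenger price. Hashing a stable customer/account ID (rather
# than drawing a random number per browser) also keeps the assignment
# consistent across devices.

def price_variant(visitor_id: str, challenger_share: float = 0.10) -> str:
    """Deterministically bucket a visitor, skewed toward the control price."""
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 1000
    return "challenger" if bucket < challenger_share * 1000 else "control"
```

The trade-off is statistical: with only 10% of traffic in the test cell, you need roughly proportionally more total traffic to reach significance, but far fewer people ever encounter the conflicting price.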
Finally, no matter what, make sure your front-line staff is prepared. If someone stumbles into awareness of both prices, make sure that your team is ready to respond gracefully. You don’t necessarily have to extend the offer to people who just “heard” about it. But if you take that tack, you may be risking relationships that have far more value than the price difference.
Test, test, test. But when testing prices, test carefully.
Image courtesy of U.S. Department of the Treasury.
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.