Expert testing strategies for both low and high volume PPC accounts

Testing takes strategy no matter the account size, but low and high volume accounts have different needs. Amalia Fowler and Aaron Levy discussed testing from both perspectives at SMX East.


Aaron Levy and Amalia Fowler answered audience questions about testing at SMX East.

In a session at SMX East on testing in paid search accounts, speakers Amalia Fowler, account director at Snaptech Marketing, and Aaron Levy, director of paid search at Elite SEM, approached the topic from two opposing perspectives: low volume accounts and enterprise-scale accounts. The juxtaposition made for an engaging discussion.

Amalia Fowler on testing in low volume accounts

In discussing testing risks and challenges for low conversion volume accounts, Fowler stressed the need to be extra selective and strategic about what you test. She provided a template for a “What if” testing ideas spreadsheet in which teams can collaborate to capture what has been tested in the past, those results and ideas for future tests.

“We need to consider, what would happen if [the test] failed? Is the business going to be okay? Will stakeholders be okay with failure?” said Fowler. Importantly, she added, “We need a hypothesis for every test. That’s the guiding force for the entire testing process.”

Particularly for low volume accounts, it may be necessary to test across multiple campaigns or ad groups. Fowler also said she sometimes lowers the statistical confidence level for a test from 95 percent to 90 percent; Google Ads’ drafts and experiments feature uses a 95 percent confidence level, she noted. “Define your minimum necessary data. And prepare other people to wait for tests to complete when you have low volume accounts,” she advised.
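To put that confidence-level tradeoff in rough numbers, here is a minimal sketch (not from the session) of a standard two-proportion sample-size estimate; the baseline conversion rate, the expected lift and the sample_size_per_variant helper are illustrative assumptions, not figures Fowler shared.

```python
# Illustration only: how dropping from 95% to 90% confidence lowers the
# traffic needed to detect the same lift in conversion rate.
from math import sqrt
from scipy.stats import norm

def sample_size_per_variant(p_base, p_test, confidence=0.95, power=0.80):
    """Approximate visitors needed per variant for a two-proportion z-test."""
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value
    z_beta = norm.ppf(power)
    p_bar = (p_base + p_test) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_base * (1 - p_base) + p_test * (1 - p_test))) ** 2
    return numerator / (p_base - p_test) ** 2

base, lifted = 0.03, 0.039  # hypothetical 3% baseline CVR, hoping to detect 3.9%
print(round(sample_size_per_variant(base, lifted, confidence=0.95)))  # more traffic needed
print(round(sample_size_per_variant(base, lifted, confidence=0.90)))  # noticeably less
```

Running both calls shows why a lower confidence level can make a test feasible for a low volume account: the required sample per variant shrinks, at the cost of a higher chance of a false positive.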

No matter the account volume, however, Fowler said, “Don’t wait until something is broken to start testing. Be proactive rather than reactive.”

Aaron Levy on testing in high volume accounts

Levy discussed testing into the future, with a particular focus on high volume accounts. While Fowler stressed the need to test across multiple entities when volume is low, Levy presented several segmentation scenarios for accounts with millions of keywords, where automation is a must and budgets are large. “Keywords are an old data level,” he said. “We have many more ways of targeting now. AdWords is now called Google Ads for a reason.”

When discussing Smart Bidding, Levy said it usually means “spend more,” but that’s not inherently a bad thing from a profit perspective. Companies need to embrace a testing culture, he said, pointing to his “Now, next, new” budget strategy, in which clients allocate 70 percent of their paid search budgets to ongoing and proven efforts, 20 percent to evolving existing efforts and 10 percent to innovation and brand-new tests.
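As a simple illustration of that 70/20/10 split, here is a minimal sketch; the now_next_new helper and the budget figure are hypothetical, not numbers from the talk.

```python
# Hypothetical "Now, next, new" allocation for a monthly paid search budget.
def now_next_new(total_budget):
    return {
        "now (proven, ongoing efforts)": round(total_budget * 0.70, 2),
        "next (evolving existing efforts)": round(total_budget * 0.20, 2),
        "new (innovation and brand-new tests)": round(total_budget * 0.10, 2),
    }

print(now_next_new(50_000))  # e.g. 35,000 / 10,000 / 5,000
```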

“Making room for failure encourages experimentation,” said Levy. However, that doesn’t mean experimenting blindly; you should build tolerance forecasts to mitigate risk. “Learning periods cost. If a test works, you need to make up the cost of the experiment,” he said. How long will it take for a test to pay back its cost? Are you okay if a test never pays for itself? “That’s what the testing budget is for,” he said.
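To make the payback question concrete, here is a hypothetical back-of-the-envelope calculation; the learning-period cost and profit figures are invented placeholders, not figures from the talk.

```python
# Hypothetical payback check for a test's learning period: how many weeks of
# incremental profit it takes to recoup what the experiment cost to learn.
learning_period_cost = 4_000   # extra spend / lost efficiency during the test
weekly_profit_before = 10_000
weekly_profit_after = 10_800   # lift the winning variant delivers

incremental_weekly_profit = weekly_profit_after - weekly_profit_before
if incremental_weekly_profit <= 0:
    print("Test never pays for itself; that's what the testing budget absorbs.")
else:
    payback_weeks = learning_period_cost / incremental_weekly_profit
    print(f"Payback period: {payback_weeks:.1f} weeks")  # 5.0 weeks in this example
```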

With automation, Levy said, the robots only care about two things: expected conversion rate and average order value. “If old campaigns are not designed for new automation, it won’t work,” cautioned Levy. You need to “structure your campaigns for success.” That means adding every possible audience in observation mode (though you don’t need to segment by recency, since that is handled automatically). “The more you constrict the algorithms, the worse they will perform,” said Levy. “You’re adding in your own bias by adding restrictions. Err on the side of broad.”

That said, Levy noted that to structure for success you need to eliminate what you know is not performing and remove outliers, such as a higher-converting keyword, from an ad group. Then give the machines the freedom to learn for a couple of weeks.

Levy also discussed the need to let go of match types when letting the machines run a test with Google Ads’ drafts and experiments.





About the author

Ginny Marvin
Contributor
Ginny Marvin was Third Door Media’s Editor-in-Chief from October 2018 to December 2020, running the day-to-day editorial operations across all publications and overseeing paid media coverage. She wrote about paid digital advertising and analytics news and trends for Search Engine Land, MarTech and MarTech Today. With more than 15 years of marketing experience, Ginny has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.
