28 Revealing Questions To Ask When Reviewing Last Year’s SEM Performance

The smartest way to begin 2013 might well be to make a careful review of paid search performance and practices in 2012.

If your company spends closer to 8 figures than 6 on paid search annually, it is well worth the CMO’s time to take a deep dive here. Evaluating an enterprise paid-search program can seem a daunting task.

Complexity and scale can lead senior marketing leadership to make one of four mistakes:

  • Allow paid search managers to evaluate themselves
  • Allow a vendor seeking to win your business to evaluate your team’s performance
  • Hire a consultant who knows little about paid search to do the evaluation
  • Worst of all, evaluate performance against aspirational goals set by others

For obvious reasons, none of these yield a fair assessment of how paid search performed against opportunity.

Instead, CMOs and marketing directors should take the bull by the horns. The answers to 28 questions in 3 key areas can give marketing leadership a clearer understanding of paid search, of the challenges and opportunities the channel presents, and of how well the team performed against that competitive landscape.

Stage 1: Goal Review

Before reviewing any data, it is important to get a clear understanding of what paid search is expected to do for your organization. This isn’t a question of what the budget or forecast was, it is much more fundamental than that.

It is often remarkable how little thought goes into these questions. Deeply probing what we’re trying to accomplish and how we measure our performance against those objectives is a critical first step.

  1. What do we seek to achieve through paid search advertising? Is the goal to make money in the immediate term, short term, or long term? Is it a branding exercise? Some hybrid?
  2. Have we included in our goal setting some notion of long term value of a customer? Should we?
  3. Do we correctly separate performance of our brand keywords from competitive non-brand keywords? {This is a pass/fail question. Blended performance goals make no sense.}
  4. Have those goals changed over time? Since last year? During the course of the year? Why did the goals change?
  5. What metrics do we use to know whether we are achieving our goal? Is it a single success metric? If so, is that metric as closely tied to value as it can be?
    • In e-commerce: why use orders as a metric instead of sales, and why use sales instead of margin?
    • If leads are the success metric, do we assess the value of those leads? Do the values vary by keyword, geography or device? Do we use those differences in our bidding?
  6. How do we measure offline success driven by paid search? If we cannot measure directly, why can we not estimate it?
  7. Are there other success metrics we should use? Should not a “get directions” click or “click-to-call” on a smartphone count as some type of success?
  8. How do we address the issue of multiple paid-search clicks preceding a successful visit? Do we parse credit appropriately? {One common weighting scheme is sketched after this list.}
  9. As a marketing team, how do we think about multiple marketing interactions preceding a successful visit (online or off)? Do we parse credit between channels? Should we?
  10. Do all of our business units follow the same practices? Is paid search managed by different teams within the organization? Does that make sense?
  11. If applicable, how do we manage international search? Do we have language skills internally to do this well? Do we advertise on Yandex (Russia) or Baidu (China)? How do we measure the success of those efforts?
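
Questions 8 and 9 turn on how credit is parsed when several clicks precede one success. As a point of reference, and not a prescribed method, here is a minimal Python sketch of one common weighting scheme, position-based attribution; the 40/20/40 split and the click IDs are assumptions for illustration only.

```python
# A minimal sketch of position-based ("U-shaped") attribution across the
# paid-search clicks preceding one conversion. The 40/20/40 weighting is
# a common convention used here for illustration, not a recommendation.

def attribute_credit(clicks, first_weight=0.4, last_weight=0.4):
    """Split one conversion's credit across an ordered list of click IDs."""
    if not clicks:
        return {}
    if len(clicks) == 1:
        return {clicks[0]: 1.0}
    if len(clicks) == 2:
        return {clicks[0]: 0.5, clicks[1]: 0.5}
    middle = (1.0 - first_weight - last_weight) / (len(clicks) - 2)
    credit = {click: middle for click in clicks[1:-1]}
    credit[clicks[0]] = first_weight
    credit[clicks[-1]] = last_weight
    return credit

# Three keyword clicks preceded one order (hypothetical IDs):
print(attribute_credit(["brand_kw", "nonbrand_kw_1", "nonbrand_kw_2"]))
# -> {'nonbrand_kw_1': 0.2, 'brand_kw': 0.4, 'nonbrand_kw_2': 0.4}
```

Last-click and first-click models are degenerate cases of the same idea; what matters is that the team can articulate which model it uses and why.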

Stage 2: Performance Review

How did we do against the goals we pursued? Whether or not the goals and metrics were the right ones, and whether they changed during the course of the year rightly or wrongly, the question here is how the paid-search program performed against the goals it was given. Performance shouldn’t be faulted for achieving the wrong goal.

Again, we should look at this not as a function of performance against a forecast or budget, but against opportunity. Results can come in above or below forecast for reasons relating to forecasting methods and foibles, not performance. The numbers and responses in this section, together with the responses in Stage 3, will get us what we want.

  1. Show me aggregated paid search performance data by month, splitting out brand and non-brand. {One way to produce this rollup is sketched at the end of this stage.}
    • If there are apparent deviations between the performance and the goals month-to-month, what is the explanation for those deviations?
    • If non-brand efficiency metrics vary significantly, why would we as an organization be willing to invest more in marketing at some times than at others?
    • If the goal was to achieve an aggregated performance efficiency, we can’t fault the performance of non-brand advertising in isolation, but we should at least see it clearly and understand what it means. The non-brand performance is the only piece over which the paid-search team has meaningful control, so that data is what must be used to evaluate performance.
  2. Show me the Year-Over-Year non-brand performance by month. What factors drove the changes year to year?
  3. Show me the trend in the fraction of total website success metrics driven by paid search. Split this out by brand and non-brand, as well. Why is it trending up or down? What does that mean for the business? Does that say anything about performance?
  4. Show me non-brand performance data by month, split out by search engine. If there are differences in marketing efficiency, what is the explanation?
  5. Show me Google-only performance over 2012QX {Marketing leader picks the X} broken out by category (campaign might be a decent proxy). Do the efficiency differences make sense?
  6. For a different quarter, show me Google performance data by keyword. Sort this data by advertising cost, descending. Understanding that there is statistical noise involved, does the keyword-level performance look reasonably coherent? Do major differences in performance have a sensible explanation?
  7. Bucket this list of keywords by ranges of click volume so that there are roughly an equal number of clicks in each of 5 buckets, ranging from the highest-traffic terms to the lowest-traffic terms. Do any performance differences between these buckets have a reasonable explanation? {One way to construct these buckets is sketched below.}
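
As a minimal sketch of one way to construct those buckets in Python; the report rows and keyword names below are hypothetical, and a real report would of course run far longer:

```python
# A minimal sketch of splitting a keyword report into buckets holding
# roughly equal total clicks, ordered from highest- to lowest-traffic
# terms. The report rows below are hypothetical.

def bucket_by_clicks(rows, n_buckets=5):
    """rows: (keyword, clicks) tuples. Returns a list of keyword buckets."""
    ordered = sorted(rows, key=lambda r: r[1], reverse=True)
    target = sum(clicks for _, clicks in ordered) / n_buckets
    buckets, current, running = [], [], 0
    for keyword, clicks in ordered:
        current.append(keyword)
        running += clicks
        # Close a bucket once it holds its share of clicks, keeping the
        # last bucket open to absorb the long tail of low-traffic terms.
        if running >= target and len(buckets) < n_buckets - 1:
            buckets.append(current)
            current, running = [], 0
    buckets.append(current)
    return buckets

report = [("shoes", 9000), ("running shoes", 4000), ("mens shoes", 3000),
          ("blue suede shoes", 2500), ("discount loafers", 1500)]
for i, bucket in enumerate(bucket_by_clicks(report, n_buckets=3), 1):
    print(f"Bucket {i}: {bucket}")
```

Comparing the efficiency of the head-term bucket against the tail buckets often surfaces bid or coverage problems that aggregate numbers hide.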

There are reasonable explanations for many types of anomalies. The point of this exercise is to gain a deeper understanding of the program’s performance against goals, and to see how the paid-search team answers hard questions about performance. If the answers make sense, great. If the answers smell fishy, they probably are.
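
Returning to the first request above: the monthly brand/non-brand rollup is a simple aggregation once keywords are labeled. Here is a minimal pandas sketch; the file name and columns are placeholders for your own keyword-level export, and pandas is just one convenient tool, not a requirement:

```python
# A minimal sketch of the monthly brand/non-brand rollup from question 1.
# The file and column names below are placeholders for your own export.
import pandas as pd

df = pd.read_csv("paid_search_2012.csv", parse_dates=["date"])
# expected columns: date, is_brand (bool), cost, revenue

monthly = (df.assign(month=df["date"].dt.to_period("M"))
             .groupby(["month", "is_brand"])[["cost", "revenue"]]
             .sum())
monthly["roas"] = monthly["revenue"] / monthly["cost"]  # efficiency metric
print(monthly)
```

Question 2’s year-over-year comparison is the same rollup computed across two years of data.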

Stage 3: Practice Review

Even if the performance numbers, trends and answers to the questions in Stage 2 look good and sound reasonable, it is impossible to tell whether the program is truly optimized without scrutinizing the team’s practices as well. Efficient numbers could still reflect a poorly-built program covered up with decent bid management.

  1. How many distinct keywords are active in our Google account? Our Bing account? Why are those numbers different? How many distinct keywords have we tried? Why have we not tried more? Why do we turn off keywords? Couldn’t a case be made that there are no bad keywords, only bad bids? Looking through the keyword-level data in #6 above, you might notice some holes; ask about them specifically and expect a reasoned response.
  2. What do we do to reduce poor-quality traffic? The answer should include references to match-type layers, negatives, and syndication-partner treatments. Go into the account with the paid search manager and drill into a campaign and its ad groups. Are there obvious negative phrases that are missing? Is the ad copy associated with each ad group compelling, but also accurately reflective of your brand? Do you see different match types running for the same keyword? If so, is more being bid for the exact-match traffic than for broad match? If so, that’s probably a good sign; if not, it’s probably a bad sign and another good avenue for questioning.
  3. While you’re in the account, navigate to a different campaign and look through the ad copy and landing page assignments. Do the keyword, copy and landing pages fit together? The landing page should reflect the specificity of the user’s query — no deeper, no shallower.
  4. Show me some examples of copy tests we’ve run in the last year. What lessons did we draw from them? What copy tests had the biggest impact on performance? Do we change copy without testing? Has that proved to be valuable? What fraction of your time goes into ad copy versus other tasks? Does that balance make sense?
  5. Do we split out campaigns by device? {Best case: three separate campaigns for desktop, tablet and smartphone. Acceptable, and often just as good: two campaigns, desktop + tablet in one and smartphone in the other. Worst case: two campaigns, desktop in one and tablet + smartphone in the other.} Do we have different ad copy for different devices? Do we have different objectives for different devices? Should we?
  6. Do we split out campaigns by geography? Should we?
  7. What other types of segmentation do we do to better target bids and copy?
  8. How do we set bids? {If the response involves ‘trying to find the right position,’ it’s time to find a new manager. A value-based alternative is sketched after this list.} How often do we adjust bids? Are we adjusting based on time-of-day and day-of-week? Do we have mechanisms in place to anticipate seasonal flux? Do we make use of bid simulator data to understand the cost/benefit tradeoffs of different bids?
  9. Do we make use of all the available advertising vehicles and options associated with search these days? Are we using:
    • Product Listing Ads (mostly an e-commerce option)?
    • Dynamic Search Ads?
    • Sitelinks?
    • Seller Ratings (if applicable)?
    • Search Retargeting? {This one is new, but really important.}
  10. How do you prioritize activities? Do you spend most of your time on the highest-leverage activities? If not, what keeps you from doing that? The answer might be that task lists from on high, or demands from other constituencies in the company, prevent the team from using its human resources wisely. If so, that’s important to understand and address.
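
On question 8: the alternative to position-chasing is bidding from value, paying for a click at most what that click is expected to return, scaled by your efficiency target. A minimal sketch under assumed inputs; the conversion rate, order value, target advertising-to-sales ratio, and device modifier below are all hypothetical:

```python
# A minimal sketch of value-based bidding. Every input below is a
# hypothetical illustration, not a recommended target.

def value_based_bid(conv_rate, value_per_conv, target_a2s=0.15,
                    modifier=1.0):
    """Max CPC = expected value per click x target ad-to-sales ratio."""
    expected_value_per_click = conv_rate * value_per_conv
    return expected_value_per_click * target_a2s * modifier

# A non-brand keyword converting at 2% with a $120 average order:
print(round(value_based_bid(0.02, 120.0), 2))                 # 0.36
# The same keyword in a smartphone campaign, with a hypothetical 30%
# discount reflecting weaker observed conversion on that device:
print(round(value_based_bid(0.02, 120.0, modifier=0.7), 2))   # 0.25
```

Time-of-day, day-of-week, and geographic adjustments fit the same pattern: estimate the value difference, then scale the bid, rather than chasing a position.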

Conclusion

This review process does not require the reviewer to have deep knowledge of paid search; it simply requires common sense and a good nose for BS. Paid search makes sense when done well. Data that doesn’t make sense, combined with answers that don’t make sense, should raise big red flags about the health of the program.

If the practices sound right, the data makes sense given the goals and constraints placed on the team, and the team has smart, rational answers for anomalies, then the team is doing its job well against the opportunity available. The right goals, combined with the right practices, execution and technology, will produce the best data possible.

May 2013 bring you the best results from paid search ever!




About the author

George Michie
Contributor
George Michie is Chief Marketing Scientist of Merkle|RKG, a technology and service leader in paid search, SEO, performance display, social media, and the science of online marketing. He also writes for the RKG Blog. Follow him on Twitter at @georgemichie1.
