If you spend enough time sitting at your desk in summer, you can go slightly insane. On the upside, daydreaming sometimes leads to useful back-door observations.
Glancing out the window, it’s evident that it takes a lot of resources and coordination to run a neighborhood. Schools, crossing guards, street sweepers, garbage collection, public pool maintenance, snow removal, sewage, plumbing, hydro, zoning bylaws, parking permits, and the list goes on. Except, as a homeowner or resident, you don’t have to worry about any of these things. Just (directly or indirectly) pay your property tax bill, and presto: it’s all done.
It’s pocketbook management! Running a city is pretty easy — for you. Just pay your bills.
Wouldn’t it be nice if paid search platforms worked that way? Unfortunately, the dynamic, competitive auctions are far from pay-and-forget. They require constant input from both the left and right sides of your brain. While larger companies can budget for in-house analysts and agency expertise (greatly reducing the hazards and heat blisters associated with “repairing your own roof”), small to midsized companies are left with a serious time and expertise deficit. How do you manage all this complexity as a one-person show?
Shortcuts aren’t just for “losers,” but it depends on the shortcut. Putting the appropriate level of effort (not too much, and not too little) into different parts of the exercise should help you maximize efficiency.
The Long Tail revisited
In digerati circles, Chris Anderson’s fascinating study of the potential unlocked by “onesies and twosies” in digital content downloads got a conversation started. Before long, search marketers became fixated on the keyword frequency long tail, sometimes for good reason. Content-rich businesses can rank well organically on long tail terms, if they play their cards right. Paid search marketers may miss sales if they’re not working the full extent of the universe of potential keywords.
But keyword search frequency is only one issue. Its significance is easily misinterpreted. On the face of it, perhaps due to the length of the tail (or its sexiness), you might think you should actually spend *more effort* managing and scrutinizing long tail keywords. In reality, it’s less. They generally make up 2-5% of overall account spend and resulting sales.
Shouldn’t we be looking at another long tail graph to give us cues as to workflow and decision-making? Namely, the Long Tail of Account Effort? Many account managers get it backwards, and put in punishing hours of busywork and over-analysis trying to eke out pennies’ worth of efficiency, while the Benjamins are slipping through the cracks.
High, medium, and low priority tasks
To fully flesh this out would probably leak some kind of patentable secret sauce, so these are just examples and high-level thoughts. What are some of the activities that should really grab your attention? Which are less important? While I don’t cover this here, you’ll also need to decide in which sequence, or how frequently, each of these should be addressed.
- Account settings: distribution and geographic targeting options, to name a few.
- Stopping major leaks. Short of checking the account constantly, some type of automation (if only alert-based) is a must, so you can quickly intervene on parts of the account that are melting down for whatever reason (seasonality, website or inventory issues, competition, or just plain unknown).
- Correct use of matching options.
- Establishing goals and understanding strategy.
- Establishing consumer responses through rigorous initial ad testing.
- Quality Score hygiene in initial account builds.
- Understanding everything there is to know about the highest-volume parts of any campaign, specifically the highest-volume ads and the top 20 or so keywords. This includes bidding tests and re-evaluations of strategy and competitive strategy.
- Bidding accurately in the torso (medium frequency) parts of accounts.
- Full keyword coverage in torso.
- Assessing anomalies in performance week to week or biweekly depending on volume. (Single day or two-day spikes up or down generally don’t lend themselves to accurate diagnosis; sometimes events are indeed random.)
- Quality Score awareness and keyword intent savvy to inform ongoing decisions.
- Savvy use of analytics and custom segments to inject new efficiencies into accounts (dayparting, demographics, ad copy, on-page copy changes, etc.).
- Establishing soft/secondary KPIs to inform decisions on long-sales-cycle products and services.
- Expansion to other engines and sources once AdWords learnings are rock solid; syncing up the accounts using a campaign platform tool.
- Volume expansion experiments.
- Multivariate testing on high-volume ad groups.
- Precise testing on medium-to-low-dollar-volume parts of the account.
- Replicating learnings throughout very low-frequency, long tail groups. (Important, yes, but relatively low dollar impact.)
- Bid adjustments on keyword groups, ads, segments, content, etc.? Exclusions of nonperforming segments?
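The “stopping major leaks” item above is the most obviously automatable one. Here is a minimal, illustrative sketch of an alert-based check — the data source, field names, and thresholds are all hypothetical assumptions, not a reference to any particular platform’s API:

```python
# Hypothetical alert sketch: flag campaigns whose spend surges above trend
# while conversions fall well below it. Thresholds are illustrative only.

def leak_alerts(daily_stats, spend_ratio=1.5, conv_ratio=0.5):
    """daily_stats maps campaign name -> dict with today's spend/conversions
    and trailing 7-day averages. Returns campaigns that look like leaks."""
    alerts = []
    for campaign, s in daily_stats.items():
        spend_spike = s["spend_today"] > spend_ratio * s["spend_avg7"]
        conv_drop = s["conv_today"] < conv_ratio * s["conv_avg7"]
        if spend_spike and conv_drop:
            alerts.append(campaign)
    return sorted(alerts)

# Invented numbers: the broad-match campaign is burning money without converting.
stats = {
    "brand": {"spend_today": 110, "spend_avg7": 100, "conv_today": 9, "conv_avg7": 10},
    "broad-match-tail": {"spend_today": 260, "spend_avg7": 120, "conv_today": 2, "conv_avg7": 8},
}
print(leak_alerts(stats))
```

Even a crude rule like this catches the meltdown cases that matter most, which is exactly the high-priority end of the effort curve.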
For smaller accounts or verticals where competitors aren’t all that savvy, you might be able to get away with even less scrutiny than some of this implies. For example, you could put multivariate ad testing aside completely, or make a few mistakes on keyword matching options, without hurting yourself too much.
Note the confusion that arises in the final item, regarding the focus and methodology that needs to go into something like bid adjustments and responses to segment reporting. There are ongoing disagreements about these matters. This is for good reason. As I’ll attempt to show briefly below, approaches to decision-making vary by situation, and decisions on the whole require sound input in the form of various kinds of information.
If bidding accurately and conducting and concluding tests decisively sometimes feel like trying to solve a complex math equation where there are too many unknowns to provide a single, definitive answer, that’s probably because they are. (Related SEL item: Offline Conversions – How to Get SEM Credit.)
Decision theory and imperfect information
If busy marketers had time for one, a review of different eras of research in decision theory would uncover some sobering truths about decision-making environments, whether in the case of firms responding to marketing information or much more complex public policy decisions.
Most of what experts know and attempt to uncover about the decision-making process is sharply at odds with the glib confidence of many busy paid search marketers, who may gratefully sop up rah-rah search-engine-speak and vendor pitches that oversell simplicity in automation.
The reality is that “perfect” decisions in any social environment only get made in hypothetical totalitarian societies with a tiny handful of perfectly defined goals, based on 100% perfect information.
We can force such situations for the sake of expediency: we can always program a computer to perform certain routinized calculations, and train ourselves to respond in semi-robotic ways to the information at hand. But flawed assumptions can persist throughout this process. For example, a bid automation tool likes nothing more than to work with a fixed “cost per acquisition target.” Yet most company owners are reluctant to fix a single, monolithic number on what “a lead” or “a customer” is worth.
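To see why the fixed-target assumption matters, consider the common rule of thumb many bid tools are built around: max CPC ≈ target CPA × conversion rate. A tiny sketch (all numbers invented) shows how directly the “answer” depends on a CPA figure that owners often can’t pin down:

```python
# Rule-of-thumb bid calculation: max CPC = target CPA x conversion rate.
# The figures below are invented; the point is sensitivity, not the math.

def max_cpc(target_cpa, conversion_rate):
    return target_cpa * conversion_rate

# Three equally plausible answers to "what is a lead worth?"
for target in (40.0, 60.0, 80.0):
    print(f"target CPA ${target:.0f} -> max CPC ${max_cpc(target, 0.03):.2f}")
```

A 2x spread in the target produces a 2x spread in every bid downstream: the tool is precisely as accurate as the fuzzy number it is fed.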
Until recently, most people in the advertising industry understood that they operated in a climate of politics-like uncertainty; only a small cult of direct marketing gurus felt they could truly tame the data beast (much like the hubristic Masters of the Universe trading derivatives on Wall Street).
The old ad people understood that you needed meetings with various stakeholders to address multiple directions and goals. They understood that consumer attitude research informed how ad campaigns were conceived, but that there was always an element of mystery to product development and marketing to shifting consumer tastes. They knew that you couldn’t always attribute every sale correctly to a given ad impression or combination of influences. They trusted that at scale, word of mouth and brand effects would take effect. And they knew that they didn’t know a lot about the details.
The downside to that era was that it was all too easy to ignore data completely and to dodge key, embarrassing facts. Today’s era of advertising is a bit more enlightened, but we also need to be sensitive to its limitations.
The apparent precision of current marketing data can mislead us into believing we can make decisions with the cool certainty of that mythical policymaker in a totalitarian society aiming at a single “KPI” based on perfect information (gained, no doubt, from spying on everyone 24-7 in “1984-esque” fashion).
Since that is rarely the case, I’m proposing two general attitudes that will help real-world managers move real-world campaigns towards a state of improved efficiency:
- Understand the Long Tail of Account Effort: focus on high-priority items, and use the appropriate combination of automated tools and human judgment to address them on a regular basis.
- Recognize that much of the information we look at is less complete and less accurate than we suspect, and always open to interpretation. Even an introductory-level statistics course should be enough to persuade anyone that huge mistakes can be made in inferences; statistics are routinely misinterpreted in all fields, from finance to medicine. So why would marketing be any different? Some account mechanisms work relatively reliably, but sometimes it’s simply important to note the degree of uncertainty and to wait for either more information or more consensus-building activity that might aid in weighting decision criteria.
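The point about statistical inference in the second takeaway can be made concrete. Here is a minimal sketch (invented numbers, simple normal-approximation confidence intervals) of why a “winning” ad often isn’t a conclusive winner at typical sample sizes:

```python
import math

def conv_rate_ci(conversions, clicks, z=1.96):
    """Approximate 95% confidence interval for a conversion rate
    (normal approximation; fine for illustration at these sample sizes)."""
    p = conversions / clicks
    half = z * math.sqrt(p * (1 - p) / clicks)
    return (p - half, p + half)

# Invented data: ad B "looks" 50% better, but the intervals overlap heavily,
# so declaring a winner here would be premature.
a = conv_rate_ci(10, 500)  # 2.0% observed conversion rate
b = conv_rate_ci(15, 500)  # 3.0% observed conversion rate
print(a, b)
```

When the intervals overlap this much, “wait for more data” is often the right decision — which is itself a decision.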
Based on the above considerations, it’s certainly easier for a solo business owner (or dedicated consultant given free rein) to seamlessly “work” an account; using whatever work style they choose and whatever data interpretation method they feel is appropriate, no one will be there to second-guess.
In any situation involving multiple stakeholders, though, something funny arises — something that likely wasn’t anticipated by many of the coders who created these bid auctions. You have to discuss and debate the statistics. You have to look at the customer’s needs and online behavior from a variety of angles. You need to decide on the right way to decide, and move forward in an environment populated by, yes (gasp!), other people.
To cite Bryan Eisenberg’s pithy description of this state of affairs (on a recent Orion Panel at SES Toronto): “It’s called marketing.”
If that’s your job, that totalitarian society with a single goal and perfect information based on 24-7 surveillance and a massive R&D budget isn’t looking so bad, is it?
Prioritize and collaborate for improved results
Whether you’re working in fiercely independent fashion, or nicely with others, today’s number one takeaway is: prioritize. Takeaway number two: in non-totalitarian societies (i.e., any society most of us would want to live in), collaboration is generally a good way to get better (not perfect) information upon which to base decisions.
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.