The Technology Behind Autobidding: Q&A With OptiMine’s Dr. Rob Cooley

Autobidders. Every SEM management tool has one. The promise is powerful: simply input the performance you’re trying to achieve and sit back, have a sody-pop, and smile as the little robots make your job easy.

Ha! If only it were that easy!

Most of you reading this post have used one in the past. Have they worked well for you? Listen to any paid search veteran and you’ll hear stories of autobidders gone wild and delivering varying results.

I know that in the past, I’ve recommended in this column using auto-optimization technology only on your tail terms while manually optimizing the important head terms of your account.

They’re good for managing low-impact keywords at scale but be careful not to just set and forget them. You may come back a week later and find that your pacing has slowed tremendously as the autobidder has paused every keyword except the handful that can meet your CPA goal.

Certainly, autobidders are not simple software. To build them right, you need to take into account dozens (if not hundreds) of variables and be able to slice and dice the data in order to make bid decisions inside paid search platforms.

To learn more about these tools, I spoke with Dr. Rob Cooley, Chief Technology Officer for OptiMine Software, a smart-cookie who has spent countless hours thinking about autobidders and trying to improve on them.

After speaking with him for more than an hour on this subject, I was wowed by the depth of what goes into cracking the autobidder puzzle.

Q: So, Rob, why do you love working on this complex issue?

Rob: My passion has always been creating data-driven analytic applications, and I tend to get bored if things are too easy. When I started working on my PhD in the mid-90s, there was this new thing called e-commerce that wasn’t very well understood from an analytics perspective. I dove in and found so many fun, hard problems that needed solving in the space that it ended up being the subject of my thesis. Fifteen years later, I still haven’t run out of hard problems to work on.

Q: What has been your professional experience in this field?

Rob: I started consulting for e-commerce companies in the late 90’s while finishing my PhD, mainly working on things like shopping cart abandonment. In 2000 I joined a Xerox PARC spin-off that was focused on personalized search and was acquired by Google fairly quickly, in 2001.

Then, for the next eight years, I ran the technical operations for a data mining tool vendor. That gave me the opportunity to lead over 300 engagements where I had my hands on data to help solve a wide variety of marketing and advertising problems. The issue of pricing for online advertising kept popping up with customers, so in 2008, I decided to start OptiMine.

Q: Before we go deeper, let’s define for the readers what an autobidder is.

Rob: My definition of an autobidder is a software application that automatically sets online advertising bids for every biddable entity (e.g. the “keyword” for paid search) in order to improve some performance metric.

Q: What are autobidders good for and what don’t they do well?

Rob: If done right, an autobidder improves performance while meeting critical business constraints, such as increasing profit while providing some minimum order volume, or increasing revenue while maintaining a minimum return on ad spend.

While they can provide some time savings, what they don’t do well is operate in a lights-out environment without any human intervention. Someone with domain knowledge and an understanding of the business has to drive.

This is a common misperception about autobidders that I think comes from some early solutions in the marketplace that were very black-box in nature. In my opinion, if you can’t steer the application, it wasn’t done right.

Q: Just how hard is it to build a good autobidder? What are the variables or limitations that you have to consider?

Rob: It turns out that it’s incredibly hard to build a good autobidder. To do it right, you need to predict the future cost and value for each and every keyword. However, the vast majority of keywords have very little or no history of clicks or even impressions.

In addition, there is a lot of volatility even for the keywords that do get daily clicks and impressions. In technical terms, this is known as sparse noisy data.
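To make “sparse noisy data” concrete, here is a minimal simulation (the 2% conversion rate and the click counts are made-up numbers, not anything from OptiMine): even when two keywords have the identical true conversion rate, the tail term’s tiny sample can produce estimates that are wildly off, which is exactly what a bidder has to cope with.

```python
import random

random.seed(7)
true_cvr = 0.02  # assume both keywords truly convert 2% of clicks

# A head term with 5,000 clicks vs. a tail term with 20 clicks:
head_est = sum(random.random() < true_cvr for _ in range(5000)) / 5000
tail_est = sum(random.random() < true_cvr for _ in range(20)) / 20

# The head estimate lands near 2%; the tail estimate is often 0% or 5%+.
print(f"head estimate: {head_est:.1%}, tail estimate: {tail_est:.1%}")
```

Bidding the tail term on its raw observed rate would swing its bid between “pause it” and “bid it way up” from week to week, which is why naive per-keyword models struggle there.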

Q: You mention there are different types of models out there, what are they and what are their pros/cons?

Rob: OK, here goes the deep dive…

To answer this, first there are some background terms you need to know.

Models versus Rules

A model-based system uses past performance data to train statistical models that predict future performance. For example, a model-based system could predict the bids necessary to achieve a 200% ROAS. A rule-based system is typically a pre-defined set of reactions to certain situations, for example: “if ROAS is less than 200%, then lower bids by 10%”. In general, model-based systems are predictive and rule-based systems are reactive.
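As a rough sketch of that contrast (the function names and numbers are illustrative, not any vendor’s actual logic), a rule reacts to yesterday’s performance while a model prices tomorrow’s click:

```python
def rule_based_bid(current_bid, observed_roas, target_roas=2.0, step=0.10):
    """Reactive: adjust the bid after performance has drifted off target."""
    if observed_roas < target_roas:
        return current_bid * (1 - step)  # "if ROAS < 200%, lower bids by 10%"
    return current_bid * (1 + step)

def model_based_bid(predicted_value_per_click, target_roas=2.0):
    """Predictive: set the bid the model says will hit the target."""
    # If a click is predicted to return $4.00 and we need 200% ROAS,
    # we can afford to pay at most $4.00 / 2.0 = $2.00 per click.
    return predicted_value_per_click / target_roas

print(rule_based_bid(1.00, observed_roas=1.5))  # 0.9 -- chases the past
print(model_based_bid(4.00))                    # 2.0 -- anticipates the future
```

The rule only knows which direction to nudge; the model names a price directly, which is what makes it predictive rather than reactive.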

Keyword versus Cluster models

Within model-based solutions, some have a separate model for each keyword and some group keywords together into clusters. The purpose of the clustering is to get around the sparse-data problem by pooling data from several keywords.

Global versus Local optimization

This is a really important distinction. Everyone throws around the term “optimize” or “optimal”, but there is a technical distinction between global and local optimization.

A local optimization simply bids each keyword or cluster separately from the others. So if the goal is to maximize revenue with a minimum ROAS of 200%, every keyword is bid to obtain a ROAS of at least 200%. A local solution won’t trade off low ROAS on one keyword against high ROAS on another.

A global optimization (referred to as a portfolio approach by some vendors) considers all of the keywords at once, assigning bids so that the group as a whole maximizes a goal while meeting some constraints. It may turn out that one keyword can drive a ton of revenue at a ROAS of 180% and another at a ROAS of 220%; as long as the average ROAS is 200%, the global solution will declare success.

You can spot a global solution by looking at the available settings. If the words “maximize” or “minimize” are available as settings, then it’s a global solution. You generally can’t ask a local solution to “maximize revenue” or “minimize CPA” while meeting an additional set of constraints; you can only give it targets such as “deliver a $15 CPA”.
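A toy illustration of the local-vs.-global difference, with made-up numbers: two keywords, a 200% ROAS floor, and $1,000 of revenue on the line.

```python
# Toy numbers: two keywords, ROAS floor of 200% (i.e., 2.0).
keywords = [
    {"name": "kw_a", "revenue": 1000.0, "cost": 550.0},  # ROAS ~182%
    {"name": "kw_b", "revenue": 1000.0, "cost": 440.0},  # ROAS ~227%
]

def blended_roas(kws):
    return sum(k["revenue"] for k in kws) / sum(k["cost"] for k in kws)

# Local: each keyword must individually clear the floor, so kw_a is cut
# and its $1,000 of revenue is lost.
local_keep = [k for k in keywords if k["revenue"] / k["cost"] >= 2.0]
local_revenue = sum(k["revenue"] for k in local_keep)

# Global (portfolio): keep both keywords, because the blended ROAS
# (~202%) still satisfies the constraint on average.
global_revenue = sum(k["revenue"] for k in keywords)
meets_constraint = blended_roas(keywords) >= 2.0

print(local_revenue, global_revenue, meets_constraint)  # 1000.0 2000.0 True
```

The global approach doubles revenue here precisely because it is allowed to average a 182% keyword against a 227% keyword, the trade-off a local solution refuses to make.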

Okay, now that you understand the groundwork, I can answer your initial question. The following are the four autobidding approaches on the market.

Global Keyword-Level

This is the gold standard, or holy grail, of bid management. The pros are performance and explainability. The con is that it’s really hard to build, since you need to predict behavior across a range of bids for each keyword and then solve a fairly nasty global constraint-based optimization.

Local Keyword-Level

In effect, this is the approach advocated by Hal Varian: bid each keyword separately based on its predicted value. The main pro is simplicity, since you don’t need to predict behavior across a range of bids; you just bid a percentage of the predicted value. The main cons are limited settings and lower performance when there are constraints.

In a lot of cases, a local solution leaves money on the table. You can set a target, but you can’t layer on multiple constraints. And while it may be achieving a ROAS of 200%, you don’t know whether it was actually possible to hit 250% that day, because it is simply trying to hit the target, not maximize a metric.

Global Cluster-Level

Here, you still have a global optimization, but models based on clusters of keywords are used to handle the sparse-data problem. Some vendors are actually a hybrid of this plus keyword-level global: they use keyword-level models for head terms and clusters for the tail.

In either case, the pro of using clusters is model stability, meaning the results are repeatable. The cons are performance and lack of automation. The performance drop comes from the fact that each keyword is unique, and the value of the extra data is outweighed by the loss of that uniqueness. The other issue is that clustering typically needs statisticians to manually tune the models, so cluster-based solutions are rarely pure software applications.


Rules-Based

This is probably the most common solution available. In theory, the pros are simplicity and understandability. I’ve found that’s often not the case: if you layer a set of 25 rules on top of each other, it is very difficult to wrap your head around exactly what will happen to the bids. The main con is performance. Because of the reactive nature of rules, they can be very good at what I call profit protection, but they rarely lead to optimal results.

Q: There are seemingly thousands of these technologies out there, whether inside licensable tools or in home-grown, proprietary platforms. Where do you think many of them stand in terms of effectiveness, and why?

Rob: I think my opinion of the effectiveness of the various solutions out there is best summed up by the fact that I chose to quit my job and found a company at the height of the financial crisis because I was confident I could create a better solution.

In terms of why, I have a strong opinion that you need a solid academic foundation in data mining & optimization techniques, strong domain experience with online advertising, and a lot of field experience with actual data mining and analytic applications to put together a viable solution. From what I can see, a lot of creators of bidding solutions are missing one or more of those three key points.

Q: So what’s the autobidder at OptiMine? Why do you think you have something special over there? Does it have a cool name like “Conan the Keyword Destroyer” or “Bidder Bidder Chicken Dinner”?

Rob: Sorry, no cool name. It’s just OptiMine Bid Management. It uses a Global Keyword-Level approach, per my answer above. The reason we think we have something special is that we regularly improve performance (profit, revenue, ROAS, etc.) by 25% or more in controlled tests against other technologies. We haven’t lost a competitive test.

Q:  Can you share some results on how you’ve compared to competing platforms?

Rob: Here’s an example for each of the three competing types described above.

OptiMine vs Global Cluster-Level

The goal was to drive as many new accounts as possible at a fixed CPA. Against a competing global cluster-level solution, OptiMine drove 216% more new accounts at the same CPA. The key issue here was the age of the clusters. In this case they hadn’t refreshed their clusters, and the keyword grouping was simply obsolete. There were a handful of good keywords hidden in clusters of bad keywords. Separating those out and bidding them up was what led to the volume increase.

OptiMine vs Local Keyword-Level

The goal was to drive as much profit as possible while maintaining a minimum amount of revenue. Against a competing local keyword-level solution, OptiMine drove 37% more profit. The key issue here was seasonality. The local solution had much simpler models that just didn’t pick up on a declining seasonal trend as fast as OptiMine.

OptiMine vs Rules-Based

The goal was to drive as much revenue as possible at a fixed cost of sale (the inverse of ROAS). Against a rules-based approach, OptiMine drove 30% more revenue at the same cost of sale. The key issue here was tail terms. The rules put in place were simply too conservative for the tail. OptiMine drove 159% more revenue out of the tail, which led to the overall 30% increase in revenue.
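For readers unfamiliar with the metric: cost of sale (COS) is spend divided by revenue, i.e. the inverse of ROAS. A quick sanity check with made-up numbers:

```python
spend, revenue = 50_000.0, 250_000.0

roas = revenue / spend  # 5.0 -> 500% return on ad spend
cos = spend / revenue   # 0.2 -> 20% cost of sale

print(f"ROAS {roas:.0%}, cost of sale {cos:.0%}")  # ROAS 500%, cost of sale 20%
assert abs(cos - 1 / roas) < 1e-12  # COS is exactly 1 / ROAS
```

So “a fixed cost of sale” and “a minimum ROAS” are the same constraint expressed from opposite directions.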

Q: So, I saw the announcement that Adobe is using OptiMine’s technology in SearchCenter. Sounds exciting! Can you talk more about this relationship and how it will work?

Rob: Yes, the next release of SearchCenter will allow their customers to use the OptiMine bid management technology. Essentially, it’s Adobe’s version of our interface. The back-end number crunching is still done by OptiMine, but instead of the OptiMine UI the features and functions will be seamlessly integrated into the SearchCenter UI.

Opinions expressed in the article are those of the guest author and not necessarily those of Search Engine Land.



About The Author: Josh has been a search marketer since 2003 with a focus on SEM technology. As a media technologist fluent in the leading industry systems, Josh stays abreast of cutting-edge digital marketing and measurement tools to maximize the effect of digital media on business goals. He has a deep passion for monitoring the constantly evolving intersection between marketing and technology. You can follow him on Twitter at @mediatechguy.

  • A.T.

The first two match-ups vs. competing platforms don’t seem like very good assessments of the bidding system, since both of the competitors’ issues could have been solved by having a smart and active analyst on the account. The third test sounds legit, but one out of three ain’t that great.

  • rwcooley

There’s no question that having a smart and active analyst will improve performance; that’s why I mentioned that I strongly believe someone has to drive. However, in the two cases you’re questioning, the scale of the accounts made it very difficult for the analyst to keep up. Each had well over 100k keywords spread across multiple accounts. In addition, both were using an auto-bidding solution that limited the types of changes that could be made. So given the available analyst resources and the available software features, there was no easy fix short of changing the bidding solution.

