1st November 2017
Google launched their ‘Smart Bidding’ technology a little over a year ago now – a system that applies deep machine learning to automate bids during each and every auction, taking into account a wide range of signals to set the optimal bid for that particular search, at that particular time.
Now, it should be said that automated bidding is not a new thing. Google first introduced ‘Conversion Optimiser’ back in 2008, allowing advertisers to set a target cost-per-acquisition (tCPA) and let Google set the bids based on past performance.
The truth is that, historically, we’ve found the technology to be a bit…lacklustre…when it comes to setting bids and achieving our clients’ goals, instead preferring to set bids manually and optimise using our own knowledge and insights. Yet the time has come where the growing complexity of AdWords accounts and the increasingly complex paths-to-purchase across multiple devices (ok, so not every path to conversion takes 900+ interactions, but you get the idea), combined with the recent rapid advancements in machine learning, mean that it really is time to give the technology more of a chance. As powerful as the human brain is, there’s no way that we can set optimal bids for each and every auction, accounting for as many signals as Smart Bidding does.
But still we need to ask the question – should we trust it? Is the technology all it’s cracked up to be? We’ve tested it over the past year and one of our Online Marketing Executives, Michelle, has identified a couple of things you should look out for.
When experimenting, don’t change your campaigns!
We first tested target CPA Smart Bidding using Google Draft Campaigns & Experiments, running it as a 50/50 experiment vs manual bidding, in one campaign at a time.
Initially we ran into a few teething problems when analysing the results of the experiment; this was because we had made changes to the original campaign, which skewed the results.
So in our next 50/50 experiments we made sure both campaigns were left untouched for the duration of the test. This led to a fairer comparison, and we were able to see that tCPA did actually work, driving more leads at a cheaper CPA. We then decided to roll it out at 100% across the two campaigns in which it had worked well.
Key takeaway: try not to make any changes to your experiment campaign whilst you are running a test. If you must, make sure that you also make the same changes to the original campaign, to keep the test as fair as possible.
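If you want a quick sanity check that a 50/50 result isn’t just noise, the comparison above can be sketched in a few lines of Python. This is an illustrative sketch with made-up numbers, not AdWords tooling: the function name and figures are our own. It computes each arm’s CPA and a simple two-proportion z-score on conversion rate.

```python
import math

def compare_arms(clicks_a, conv_a, cost_a, clicks_b, conv_b, cost_b):
    """Compare a control (A) and experiment (B) arm of a 50/50 test.

    Returns each arm's CPA plus a two-proportion z-score on
    conversion rate, so you can judge whether the difference is
    likely real rather than noise.
    """
    cpa_a = cost_a / conv_a
    cpa_b = cost_b / conv_b

    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (conv_b / clicks_b - conv_a / clicks_a) / se

    return cpa_a, cpa_b, z

# Example with illustrative numbers: manual bidding (A) vs Target CPA (B)
cpa_a, cpa_b, z = compare_arms(
    clicks_a=2000, conv_a=80, cost_a=3000.0,
    clicks_b=2000, conv_b=110, cost_b=3100.0,
)
print(f"Manual CPA £{cpa_a:.2f}, tCPA CPA £{cpa_b:.2f}, z = {z:.2f}")
```

As a rough rule of thumb, an absolute z-score above about 1.96 suggests the difference in conversion rate is significant at the 5% level; below that, hold off before rolling the experiment out to 100%.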
Be wary if your campaigns have limited conversion volume, limited budgets or high impression share
As we saw Smart Bidding working in those 2 campaigns, in April we decided to be bolder and test it in two of our bigger campaigns, which we were initially quite cautious about. The tCPA bidding performed well in one of these campaigns, driving more leads at a cheaper CPA than the original campaign. However, the other campaign didn’t yield such positive results; in fact it spent double that of the original campaign at a much higher CPA.
Google explained why this campaign didn’t work so well:
“Due to the sporadic nature of conversions within this campaign, tCPA was not able to ascertain the correct conversion signals from converters within the campaign and as such was unable to optimise efficiently towards the tCPA level required. What we also saw here is a campaign with very high impression share – over 99%. Campaigns with very high impression share are unlikely to benefit as much from Target CPA, since there may not be any more conversions available for Target CPA to get for this campaign. tCPA may not be suitable for all campaigns.”
We’re now testing tCPA 50/50 in three other campaigns, and before doing so we asked our Google rep whether they thought the campaigns were suitable for tCPA bidding. Google said tCPA won’t work well in campaigns that drive only a limited number of leads, have very high impression share, or are limited by budget.
Key takeaway: Smart Bidding – specifically Target CPA – might not work as expected if your campaign has limited conversion volume (it’s difficult to predict which clicks are more likely to convert), high impression share (you’ll be capturing those conversions anyway), or a limited budget (Google have advised that the machine learning doesn’t work well under this constraint).
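Those three warning signs can be turned into a quick pre-flight check before you enrol a campaign in a tCPA test. The function below is a hypothetical sketch: the thresholds are our own assumptions, not official Google guidance, so sense-check them with your own rep before relying on them.

```python
def tcpa_suitability_flags(monthly_conversions, impression_share, budget_limited,
                           min_conversions=30, max_impression_share=0.90):
    """Flag reasons a campaign may be a poor fit for Target CPA.

    The default thresholds (30 conversions/month, 90% impression share)
    are illustrative assumptions only.
    """
    flags = []
    if monthly_conversions < min_conversions:
        flags.append("limited conversion volume: hard to predict which clicks convert")
    if impression_share > max_impression_share:
        flags.append("very high impression share: few extra conversions left to win")
    if budget_limited:
        flags.append("limited by budget: the algorithm can't bid freely")
    return flags

# Example: a campaign like our underperformer, with sporadic
# conversions and over 99% impression share
for flag in tcpa_suitability_flags(monthly_conversions=8,
                                   impression_share=0.99,
                                   budget_limited=False):
    print("Warning:", flag)
```

An empty result doesn’t guarantee tCPA will work, of course – it just means none of the obvious red flags from our testing apply.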