As cool as this sounds, AAR is a double-edged sword: automating bidding processes can save a lot of work, but when used incorrectly it can completely sabotage your campaign.
In this post I'd like to point out some issues you must be aware of when setting up automated rules.
First of all, you can never make good decisions on statistically insignificant data. Let's say you have a rule that runs every day and pauses each keyword with a CTR lower than 1%. A keyword the rule applies to got 100 impressions and 0 clicks, and so was paused. Now imagine the rule was not applied: after a few more days the keyword had 1,000 impressions and 20 clicks, a CTR of 2%. The decision to pause the keyword turns out to have been hasty and ill-considered; 100 impressions is far too small a sample to draw any conclusions from.
Sadly, AAR doesn't allow users to run rules every x impressions or clicks to ensure relevance. A substitute is to add another condition so the rule only applies when the number of impressions in the considered timeframe is meaningful (e.g. if CTR < 1% and impressions > 1000, then decrease maxCPC by 20%). Note that some rules don't require many impressions to be relevant: if the measures in the rule are Quality Score, position or avg. CPC, you just need to include the condition clicks > 1.
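The guarded rule above can be sketched in a few lines of Python. This is only an illustration of the logic, not anything AAR runs; the function name and the thresholds (1,000 impressions, 1% CTR, 20% cut) are the example numbers from this post:

```python
def maybe_decrease_cpc(max_cpc, impressions, clicks,
                       min_impressions=1000, ctr_threshold=0.01, cut=0.20):
    """Decrease maxCPC only when CTR is low AND the sample is big enough."""
    if impressions <= min_impressions:
        return max_cpc                      # not enough data - do nothing
    ctr = clicks / impressions
    if ctr < ctr_threshold:
        return max_cpc * (1 - cut)          # low CTR on significant data
    return max_cpc

print(maybe_decrease_cpc(0.50, impressions=100, clicks=0))    # too few impressions: unchanged
print(maybe_decrease_cpc(0.50, impressions=2000, clicks=10))  # CTR 0.5% on 2,000 impressions: cut
```

Without the impressions guard, the first call would have cut the bid on 100 impressions of data, which is exactly the mistake described above.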
Another problem is a poor understanding of percentage values. For example, someone who wants to use rules to keep a keyword in the third position might think it a great idea to increase CPC by 20% every time the position is lower than 3 and decrease it by 20% every time it is higher than 3. Let's assume that the maxCPC corresponding to position 3 is $0.55. The first time the rule runs, maxCPC is $0.50 and the position is 4, so the rule increases maxCPC by 20% to $0.60. The next time the rule runs the position is 2, so the rule decreases maxCPC by 20%, giving $0.48. As you can see, that is lower than the initial bid. In 10 steps the same rule would change maxCPC like this:
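You can reproduce the decay with a quick simulation. A minimal Python sketch, where the helper function and the alternating 4/2 positions are my illustrative assumptions (a "lower" position here means a numerically larger position number):

```python
def apply_symmetric_rule(max_cpc, position, target=3, pct=0.20):
    """One run of the naive rule: +20% below target position, -20% above it."""
    if position > target:        # position worse than target -> bid up
        return max_cpc * (1 + pct)
    if position < target:        # position better than target -> bid down
        return max_cpc * (1 - pct)
    return max_cpc

max_cpc = 0.50
# Alternating positions 4, 2, 4, 2, ... as in the worked example
for step, position in enumerate([4, 2] * 5, start=1):
    max_cpc = apply_symmetric_rule(max_cpc, position)
    print(f"step {step}: maxCPC = ${max_cpc:.4f}")
```

Each up/down pair multiplies the bid by 1.2 × 0.8 = 0.96, so the bid shrinks by 4% per cycle and drifts steadily away from the $0.55 it needs.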
So what is the solution? The first thing that pops into mind (and seems like a good idea!) is to lower maxCPC by a smaller percentage than we raise it. Following the example above, let's say we increase maxCPC by 20% and decrease it by 10%. In the first step we get maxCPC = $0.60 and position 2; in the second step we get maxCPC = $0.54. That's almost what we are aiming for (position 3 at maxCPC = $0.55), but the bid is still slightly below $0.55, so the rule increases maxCPC again. In the next step maxCPC = $0.65, and we move away from the optimal maxCPC. So this method doesn't work either. In 10 steps it would look like this:
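The same simulation with asymmetric percentages shows the opposite failure: instead of decaying, the bid drifts upward. Again a sketch under the same assumptions (alternating 4/2 positions, hypothetical helper):

```python
def apply_asymmetric_rule(max_cpc, position, target=3, up=0.20, down=0.10):
    """One run of the rule: +20% when position is worse, -10% when better."""
    if position > target:        # position worse than target -> bid up 20%
        return max_cpc * (1 + up)
    if position < target:        # position better than target -> bid down 10%
        return max_cpc * (1 - down)
    return max_cpc

max_cpc = 0.50
for step, position in enumerate([4, 2] * 5, start=1):
    max_cpc = apply_asymmetric_rule(max_cpc, position)
    print(f"step {step}: maxCPC = ${max_cpc:.4f}")
```

Now each up/down pair multiplies the bid by 1.2 × 0.9 = 1.08, so it grows 8% per cycle and overshoots the $0.55 target more with every iteration.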
The only solution I can think of is to decrease maxCPC by the percentage that takes it back to exactly what it was before the increase. The calculation is: decrease % = 1 - [1 / (1 + increase %)]. So to keep a keyword around position 3 you would create two rules:
If position is lower than 3 increase maxCPC by, let’s say, 20%
If position is higher than 3 decrease maxCPC by 1 - [1/(1 + 0.2)] ≈ 0.17 (so we put 17%)
This way, no matter by what percentage you increase maxCPC, you can always calculate a matching percentage by which to decrease it.
It's not a perfect method, as it makes maxCPC switch constantly between two values, but at least we always stay close to the desired maxCPC. In 10 steps it would look like this:
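The two rules above can be checked in the same toy simulation (alternating 4/2 positions, my own helper names). The compensating decrease makes each up/down pair a no-op, so the bid just oscillates between two values instead of drifting:

```python
def compensating_decrease(increase):
    """Percentage decrease that exactly undoes a percentage increase."""
    return 1 - 1 / (1 + increase)

up = 0.20
down = compensating_decrease(up)   # ~0.1667, i.e. the 17% from the rule above

max_cpc = 0.50
for step, position in enumerate([4, 2] * 5, start=1):
    if position > 3:               # position worse than 3 -> bid up
        max_cpc *= (1 + up)
    elif position < 3:             # position better than 3 -> bid down
        max_cpc *= (1 - down)
    print(f"step {step}: maxCPC = ${max_cpc:.4f}")
```

Here each pair multiplies the bid by 1.2 × (1 - 1/6) = 1.0, so after any even number of steps the bid is back where it started, bouncing between $0.50 and $0.60 around the $0.55 target.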
Those are just two of probably many more traps that lurk within AAR. The bottom line is to test rules by running them through a few imagined iterations and seeing what happens before putting them live. A great way to test rules is to use AdWords Campaign Experiments; I might even write a post about that next time. Meanwhile, I welcome any comments.