This is a short writeup of a security-related audit I performed during March 2019, given here as a reference.
Proof of the report's existence
Google being the world's largest advertising platform, the integrity of its fraud detection system is critical, as it affects business owners around the globe.
EDIT: Google is working on this design. It is not trivial, so it might take time.
While the Google Trust department filed a bug based on this report, I don't think they have improved this component yet. I assume this is an ongoing effort rather than a straightforward vulnerability fix. But the bottom line:
- There is a security mechanism here: efforts are clearly being made to mitigate this.
- Current efforts can be bypassed.
Description & background:
I was recently approached by an individual who asked me to assess whether Google click fraud can be performed. He claimed that a former colleague of his is now 'clicking his ads', that others do so as well, and that he has a recording of that colleague admitting it.
I began to look at possible ways to automate this, and we started with 'normal' desktop ad clicks (i.e., link clicks that redirect the user to the advertised website). It was very difficult to produce a 'valid click' (a click that lasts and is paid for) in an automated way on desktop. While all of the clicks were shown in the campaign results dashboard within the first 20 minutes after the automation started to run, almost all of them, if not all, were detected as invalid clicks and removed from the individual's campaign results, and he was not charged for them. But the story is very different for 'click to call'.

The setup: because this individual had approached many people in the past to perform this 'click fraud' research, he wanted to 'check' the program properly. Having seen the results with my own eyes, I am sure that this is a very big issue and needs to be viewed with care. The tests were performed as follows:
The individual ran a campaign in the California Bay Area. The tests were mostly performed during the night, while activity was low, and at locations with very little activity, such as Kerman, Manteca, Fresno, and Clovis, where the individual stated that he had not received a call in a long time and where we could see that there was absolutely no activity beyond our own automation.

In one scenario, using the abuse setup provided, a 20-minute run against a $2-4-per-click campaign (which is very low) produced a total of 31 valid clicks with a humble setup: we connected 5 iOS devices and ran them simultaneously. This cost the individual about $100. With an 'unmonitored' campaign (i.e., higher bidding) and a more serious setup (with the right resources there is no limit to the number of devices/automations that can run simultaneously), the potential damage is substantial. These tests were performed several times with the same results. All the clicks were 'click to call' clicks, which are normally more expensive and can cause more damage.
The abuse setup can be found here: abuse
Original report sent to Google:
General constraints and abuse description:
- Goal: perform 'click fraud' at scale: generate fake bot traffic placing calls against a rival's campaign so that the rival's campaign budget is drained. That way, the attacker's ads would be shown first, and the rival would be put out of business.
- To view an ad, the bot must be 'located' at the geographic location relevant to the ad.
Solution: use the tci & uule URL parameters, in combination with IP addresses from plausible locations (e.g., the same state). This is important because proxies in specific locations are either too expensive or sometimes impossible to obtain, which would make the fraud effort unprofitable.
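The uule parameter format is publicly documented by SEO rank-tracking tools: a fixed prefix, a single character keying the length of a canonical location name (canonical names come from Google's geotargets list), and the base64 of that name. A minimal sketch under that assumption, in Java since that is what the bot code used (`buildUule` is a name of my choosing):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Uule {
    // Standard base64url alphabet; the character at index len(name) encodes the length.
    private static final String KEY =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";

    /** Builds a uule query-parameter value for a canonical location name. */
    static String buildUule(String canonicalName) {
        byte[] raw = canonicalName.getBytes(StandardCharsets.UTF_8);
        String b64 = Base64.getEncoder().withoutPadding().encodeToString(raw);
        return "w+CAIQICI" + KEY.charAt(raw.length) + b64;
    }

    public static void main(String[] args) {
        // Well-known worked example: "London" -> w+CAIQICIGTG9uZG9u
        System.out.println(buildUule("London"));
    }
}
```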
- Repeatedly scraping the same 'search term' is not possible; Google would trigger an 'are you a robot' reCAPTCHA.
Solution: generate different search URLs by adding random parameters to the search URL or by introducing spelling mistakes (see the general algorithm later), and use a 'big' search-term list.
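The random-parameter variant of this can be sketched as follows; the parameter name `zx` and the helper `variantUrl` are illustrative choices of mine, not something the report specifies:

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

public class SearchVariants {
    private static final AtomicLong SEQ = new AtomicLong();

    /** Builds a search URL for `term` that is textually unique on every call,
        by appending a throwaway query parameter ("zx" is an illustrative name). */
    static String variantUrl(String term, Random rnd) {
        String q = term.trim().replace(' ', '+');
        // A counter guarantees uniqueness; the random tail makes values unpredictable.
        return "https://www.google.com/search?q=" + q
                + "&zx=" + SEQ.incrementAndGet() + "-" + rnd.nextInt(1_000_000);
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        System.out.println(variantUrl("plumber fresno", rnd));
        System.out.println(variantUrl("plumber fresno", rnd));
    }
}
```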
- Avoid behavior classification: Google stores your user agent, cookie history, and IP address, and profiles the bot's behavior. For example, when cookies are disabled, certain ads won't show. If the 'same' user is detected, then after clicking a 'normal' link (one that redirects to the advertised website) twice, the gclid parameter in the URL bar will be the same.
1) Allow website cookies and tracking and browse the internet randomly. A simple visit to 4 websites builds a very different profile. It is important to note that almost the entire internet uses Google Analytics and other Google-provided tracking services, meaning Google builds a cookie history it can fetch even when you visit non-Google websites. In our testing this was enough to produce a different gclid.
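Whether Google considered two clicks to come from the 'same' user can thus be checked by comparing the gclid parameter on the landing URL after each click. A small illustrative Java helper for extracting it (`gclidOf` is my name for it):

```java
import java.net.URI;
import java.util.Arrays;
import java.util.Optional;

public class Gclid {
    /** Extracts the gclid query parameter from a landing-page URL, if present. */
    static Optional<String> gclidOf(String url) {
        String query = URI.create(url).getQuery();
        if (query == null) return Optional.empty();
        return Arrays.stream(query.split("&"))
                .filter(p -> p.startsWith("gclid="))
                .map(p -> p.substring("gclid=".length()))
                .findFirst();
    }

    public static void main(String[] args) {
        // Prints the gclid of an example landing URL.
        System.out.println(gclidOf("https://example.com/?gclid=abc123&x=1").orElse("none")); // abc123
    }
}
```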
2) Change IP addresses, but not through some random provider: use NordVPN. This works because it is a very large provider with many static IPs in the locations of interest, and thousands of different users can be using the same IP, which makes IP-based flagging by Google unsuccessful.
Another advantage of this approach is that while many malicious users may use a specific IP, Google cannot block the entire service, because legitimate users use it in far greater numbers.
3) To defeat 'user agent' classification, several solutions are possible:
Jailbreak the device and hook WebKit (in a process similar to this one: #).
We went with simply using a lot of different iOS devices; you get the same result.
NOTE: spoofing the user agent would make it possible to automate this on desktop. To check that you can perform 'phone calls' to an ad, you can ask Safari on macOS to change the user agent to iPhone. This could also ease the work of scaling this fraud.
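Outside Safari's develop menu, any HTTP client can present a mobile user agent, since it is just a request header. A minimal Java 11 sketch (the exact UA string is an illustrative example, not the one used in the tests):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class MobileUa {
    // Example iPhone Safari user-agent string; the exact value is illustrative.
    static final String IPHONE_UA =
        "Mozilla/5.0 (iPhone; CPU iPhone OS 12_1 like Mac OS X) "
        + "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Mobile/15E148 Safari/604.1";

    /** Builds a GET request that presents itself as mobile Safari. */
    static HttpRequest mobileGet(String url) {
        return HttpRequest.newBuilder(URI.create(url))
                .header("User-Agent", IPHONE_UA)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = mobileGet("https://example.com/");
        System.out.println(req.headers().firstValue("User-Agent").orElse(""));
    }
}
```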
Still a fail on desktop! Google is not that easy after all:
This is the main issue description:
The above is the final component that makes this fraud work.
What I proposed to Google was the following:
A possible mitigation (or patch, call it what you like) would be to add the same activity monitors as the ones used for 'normal' clicks. Do not assume that an automation cannot 'make phone calls'.
The general abuse algorithm is provided in the form of the Java code I used to run the iOS bots: code