DoS attack from the Google IP range

I believe I am being attacked by repeated requests (about 5 per second, all day long) from the Google IP range (66.249.65.*; perhaps IP spoofing?). The requests carry the Googlebot signature (Googlebot/2.1; +http://www.google.com/bot.html) in the User-Agent header, but they target an old URL that I deactivated because it consumed a lot of CPU (and money). If I blacklist this IP range, I also block the legitimate Googlebot :(.

And ironically, my app (http://expoonews.com) is hosted on Google App Engine!

How can I stop this behavior without blocking the Google bot?
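Before blocking anything, it may be worth confirming whether these IPs really belong to Googlebot. Google's recommended check is a reverse DNS lookup followed by a forward confirmation; a minimal sketch (the function name is my own):

```python
import socket

def is_real_googlebot(ip):
    """Verify a claimed Googlebot IP: reverse DNS, then forward-confirm."""
    try:
        host = socket.gethostbyaddr(ip)[0]  # e.g. crawl-66-249-65-82.googlebot.com
    except socket.error:
        return False
    if not (host.endswith('.googlebot.com') or host.endswith('.google.com')):
        return False
    try:
        # The hostname must resolve back to the original IP
        return ip in socket.gethostbyname_ex(host)[2]
    except socket.error:
        return False
```

If the IPs pass this check, the traffic is from the genuine crawler and blocking the range would indeed block real Googlebot.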

Below is a sample of my log for a better picture.

 A 2014-11-25 19:41:19.145 404 234 B 10ms /AddPageAction?url=http%3A%2F%2Flincoln.pioneer.kohalibrary.com%2Fcgi-bin%2Fkoha%2Fopac-search.pl%3Fidx%3Disbn%26q%3D1842172131%26do%3DSearch
66.249.65.82 - - [25/Nov/2014:13:41:19 -0800] "GET /AddPageAction?url=http%3A%2F%2Flincoln.pioneer.kohalibrary.com%2Fcgi-bin%2Fkoha%2Fopac-search.pl%3Fidx%3Disbn%26q%3D1842172131%26do%3DSearch HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=10 cpu_ms=0 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16

 A 2014-11-25 19:41:19.550 404 234 B 11ms /AddPageAction?url=http%3A%2F%2Fwww.dnevniavaz.ba%2Fkultura%2Ffilm%2Fprica-o-hapsenju-ratnog-zlocinca
66.249.65.86 - - [25/Nov/2014:13:41:19 -0800] "GET /AddPageAction?url=http%3A%2F%2Fwww.dnevniavaz.ba%2Fkultura%2Ffilm%2Fprica-o-hapsenju-ratnog-zlocinca HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=11 cpu_ms=23 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16

 A 2014-11-25 19:41:19.956 404 234 B 12ms /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FNewcastle_Local_Municipality
66.249.65.78 - - [25/Nov/2014:13:41:19 -0800] "GET /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FNewcastle_Local_Municipality HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=12 cpu_ms=0 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16

 A 2014-11-25 19:41:20.426 404 234 B 10ms /AddPageAction?url=http%3A%2F%2Ftools.wmflabs.org%2Fgeohack%2Fgeohack.php%3Fpagename%3DRio_Grande_County%252C_Colorado%26params%3D37.61_N_-106.39_E_type%3Aadm2nd_region%3AUS-CO_source%3AUScensus1990
66.249.65.86 - - [25/Nov/2014:13:41:20 -0800] "GET /AddPageAction?url=http%3A%2F%2Ftools.wmflabs.org%2Fgeohack%2Fgeohack.php%3Fpagename%3DRio_Grande_County%252C_Colorado%26params%3D37.61_N_-106.39_E_type%3Aadm2nd_region%3AUS-CO_source%3AUScensus1990 HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=10 cpu_ms=23 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16

 A 2014-11-25 19:41:20.763 404 234 B 11ms /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2F%23cite_ref-Istanbul_43-1
66.249.65.86 - - [25/Nov/2014:13:41:20 -0800] "GET /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2F%23cite_ref-Istanbul_43-1 HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=11 cpu_ms=0 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16

 A 2014-11-25 19:41:21.166 404 234 B 10ms /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2Fw%2Findex.php%3Ftitle%3DHMAS%2520Pirie%26action%3Dhistory
66.249.65.86 - - [25/Nov/2014:13:41:21 -0800] "GET /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2Fw%2Findex.php%3Ftitle%3DHMAS%2520Pirie%26action%3Dhistory HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=10 cpu_ms=0 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16

 A 2014-11-25 19:41:21.571 404 234 B 11ms /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2Fw%2Findex.php%3Ftitle%3DUniversity_of_Engineering_and_Technology_Taxila_Chakwal_Campus_University_of_Engineering_and_Technology_Taxila_Chakwal_Campus%26action%3Dedit%26redlink%3D1
66.249.65.78 - - [25/Nov/2014:13:41:21 -0800] "GET /AddPageAction?url=http%3A%2F%2Fen.wikipedia.org%2Fw%2Findex.php%3Ftitle%3DUniversity_of_Engineering_and_Technology_Taxila_Chakwal_Campus_University_of_Engineering_and_Technology_Taxila_Chakwal_Campus%26action%3Dedit%26redlink%3D1 HTTP/1.1" 404 234 - "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "expoonews.com" ms=11 cpu_ms=23 cpm_usd=0.000026 instance=00c61b117c8ad4ca005d37349157867d41adaf app_engine_release=1.9.16 


+3




6 answers


I think I solved the problem by removing the GET URL parameter (the url parameter that pointed to another page).

My guess is that the bot was probing for URLs that can be used to reach an arbitrary site (to inflate access counts, maybe). My URL was explicitly exposed: it simply passed the target address through as a GET parameter.



But thanks for the answers, guys.

0




It looks like Googlebot is crawling injected links: either these URLs were stored on your site, or an attacker hardcoded them on their own site and is launching the attack through Google's crawler.

A web application firewall that can detect these signatures and explicitly deny such requests might be a good solution for you.

Have a look at Apache ModSecurity or Nginx NAXSI.
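As a sketch of what such a WAF rule could look like, here is a ModSecurity v2 rule that denies requests to the retired handler seen in the logs above (the rule id is arbitrary):

```
# Deny any request to the retired /AddPageAction handler
SecRule REQUEST_URI "@beginsWith /AddPageAction" \
    "id:100001,phase:1,deny,status:403,msg:'Blocked retired AddPageAction handler'"
```

Unlike robots.txt, this blocks the requests regardless of whether the client is a genuine or a fake Googlebot.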

+1




You can try to disallow this particular directory or page with robots.txt: http://www.robotstxt.org/robotstxt.html
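For instance, assuming the unwanted handler lives under /AddPageAction (as in the logs above), a minimal robots.txt at the site root might look like:

```
User-agent: *
Disallow: /AddPageAction
```

Note that robots.txt only helps against compliant crawlers; a fake bot that merely spoofs the Googlebot User-Agent will ignore it.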

0




The dos.yaml file in the root directory of your application (next to app.yaml) configures the DoS Protection Service blacklist for your application. Below is an example dos.yaml file:

blacklist:
- subnet: 1.2.3.4
  description: a single IP address
- subnet: 1.2.3.4/24
  description: an IPv4 subnet
- subnet: abcd::123:4567
  description: an IPv6 address
- subnet: abcd::123:4567/48
  description: an IPv6 subnet


https://cloud.google.com/appengine/docs/python/config/dos

0




You should write a robots.txt,

at least to block the genuine Googlebot from accessing the old URLs. Googlebot keeps retrying indexed URLs until they return 404 or are otherwise marked as deleted.

I'm not sure this is really a fake bot, because the genuine Googlebot itself can look like spam: it requests a lot of pages in a short period of time.

To reduce the number of requests from Googlebot (fake or genuine), how about something like this?

# Allow at most 100 requests per minute per bot IP
# (assumes a webapp2 handler on App Engine; bot_ip is the client IP from the request)
from google.appengine.api import memcache

dos_n = memcache.get(key=bot_ip)
if dos_n is not None:
    if dos_n > 100:
        self.abort(400)
    memcache.incr(bot_ip)
else:
    # First request in this window: start a counter that expires after 60s
    memcache.add(key=bot_ip, value=1, time=60)


And just for information: if the host were not on GAE, you could also change the crawl frequency in Webmaster Tools. https://www.google.com/webmasters/tools/

0




This suspicious activity is related to Googlebot crawling your URLs. If you recently added or made changes to a page on your site, you can ask Google to (re)index it using the Fetch as Google tool.

-1








