Blocking all bots except a few with Nginx

Tro*_*one 13 bots nginx

I want to block all http_user_agents that identify themselves as bots, but allow Googlebot. This is what I entered:

map $http_user_agent $bad_bot {
    default 1;
    ~*^Lynx 0; # Let Lynx go through
    ~*^google 0;
    libwww-perl 1;
    ~(?i)(libwww|Wget|LWP::Simple|BBBike|java|crawl|spider|bot) 1;
}

However, this blocks even Googlebot.
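A likely culprit: Googlebot usually identifies itself as "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)", so the anchored ~*^google pattern never matches, and the request falls through to the catch-all regex, whose "bot" alternative does match. A minimal sketch of a corrected map (an assumption, not the asker's code), relying on nginx testing map regexes in their order of appearance:

map $http_user_agent $bad_bot {
    default 1;                   # block anything not whitelisted below
    ~*lynx      0;               # let Lynx through
    ~*googlebot 0;               # unanchored, so it matches anywhere in the UA string
    ~*(libwww|wget|lwp::simple|bbbike|java|crawl|spider|bot) 1;
}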

小智 30

Here is my logic for nginx:

map $http_user_agent $limit_bots {
     default 0;
     ~*(google|bing|yandex|msnbot) 1;
     ~*(AltaVista|Googlebot|Slurp|BlackWidow|Bot|ChinaClaw|Custo|DISCo|Download|Demon|eCatch|EirGrabber|EmailSiphon|EmailWolf|SuperHTTP|Surfbot|WebWhacker) 1;
     ~*(Express|WebPictures|ExtractorPro|EyeNetIE|FlashGet|GetRight|GetWeb!|Go!Zilla|Go-Ahead-Got-It|GrabNet|Grafula|HMView|Go!Zilla|Go-Ahead-Got-It) 1;
     ~*(rafula|HMView|HTTrack|Stripper|Sucker|Indy|InterGET|Ninja|JetCar|Spider|larbin|LeechFTP|Downloader|tool|Navroad|NearSite|NetAnts|tAkeOut|WWWOFFLE) 1;
     ~*(GrabNet|NetSpider|Vampire|NetZIP|Octopus|Offline|PageGrabber|Foto|pavuk|pcBrowser|RealDownload|ReGet|SiteSnagger|SmartDownload|SuperBot|WebSpider) 1;
     ~*(Teleport|VoidEYE|Collector|WebAuto|WebCopier|WebFetch|WebGo|WebLeacher|WebReaper|WebSauger|eXtractor|Quester|WebStripper|WebZIP|Wget|Widow|Zeus) 1;
     ~*(Twengabot|htmlparser|libwww|Python|perl|urllib|scan|Curl|email|PycURL|Pyth|PyQ|WebCollector|WebCopy|webcraw) 1;
}

location / {
  if ($limit_bots = 1) {
    return 403;
  }
}
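For context, map blocks are only valid at the http level, while the if check goes inside a server or location block. A minimal sketch of how the two pieces fit together (example.com and the abbreviated blocklist are placeholders, not part of the original answer):

http {
    map $http_user_agent $limit_bots {
        default 0;
        ~*(HTTrack|Wget|libwww) 1;   # abbreviated blocklist for the sketch
    }

    server {
        listen 80;
        server_name example.com;     # placeholder

        location / {
            if ($limit_bots = 1) {
                return 403;          # reject blacklisted user agents
            }
        }
    }
}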

  • Note that this approach will stop Google from indexing your site, because it catches both "google" and the word "bot" (Google's http_user_agent contains the word "bot"). (9 upvotes)
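As that comment points out, the list above catches the big search engines too. If the intent is to block scrapers while keeping search crawlers, one option (a sketch, not part of the original answer) is to drop google|bing|yandex|msnbot from the blocklist and whitelist the crawlers explicitly before it; since the first matching regex wins, the 0 entry shields them from the later "bot"-style patterns:

map $http_user_agent $limit_bots {
    default 0;
    ~*(googlebot|bingbot|yandexbot|msnbot) 0;                   # search engines pass
    ~*(HTTrack|Wget|WebZIP|EmailSiphon|LeechFTP|spider|bot) 1;  # abbreviated blocklist
}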

edi*_*igu 20

Just check $http_user_agent against your $bad_bot list and return HTTP 403 if it is on your blacklist:

location / {
   if ($http_user_agent ~ (libwww|Wget|LWP|damnBot|BBBike|java|spider|crawl) ) {
       return 403;
   }
}

Note: ~ performs a case-sensitive match in the if block. If you want the blacklist to be case-insensitive, use ~* instead of ~.
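For example, the same check in its case-insensitive form, so "Wget", "wget" and "WGET" are all caught:

location / {
   if ($http_user_agent ~* (libwww|Wget|LWP|damnBot|BBBike|java|spider|crawl) ) {
       return 403;   # blocked regardless of letter case
   }
}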