Googlebot not respecting robots.txt


For some reason, when I use "Analyze robots.txt" in Google Webmaster Tools to see which URLs our robots.txt file blocks, the results are not what I expected. Here is a snippet from the beginning of our file:

Sitemap: http://[omitted]/sitemap_index.xml

User-agent: Mediapartners-Google
Disallow: /scripts

User-agent: *
Disallow: /scripts
# list of articles given by the Content group
Disallow: http://[omitted]/Living/books/book-review-not-stupid.aspx
Disallow: http://[omitted]/Living/books/book-review-running-through-roadblocks-inspirational-stories-of-twenty-courageous-athletic-warriors.aspx
Disallow: http://[omitted]/Living/sportsandrecreation/book-review-running-through-roadblocks-inspirational-stories-of-twenty-courageous-athletic-warriors.aspx

For both Googlebot and Mediapartners-Google, everything in the scripts folder is correctly blocked. I can see that both bots are picking up the right directive, because Googlebot reports scripts as blocked on line 7 and Mediapartners-Google on line 4. But none of the other URLs I enter from the disallowed list under the second user-agent directive are being blocked!

I'm wondering whether my comments or my use of absolute URLs messed things up...

Any insight is appreciated. Thanks.

And*_*ore 11

The reason they are ignored is that you have fully-qualified URLs in your robots.txt file, which the specification does not allow; a Disallow line should only contain a path relative to the site root, beginning with /. Try the following:

Sitemap: /sitemap_index.xml

User-agent: Mediapartners-Google
Disallow: /scripts

User-agent: *
Disallow: /scripts
# list of articles given by the Content group
Disallow: /Living/books/book-review-not-stupid.aspx
Disallow: /Living/books/book-review-running-through-roadblocks-inspirational-stories-of-twenty-courageous-athletic-warriors.aspx
Disallow: /Living/sportsandrecreation/book-review-running-through-roadblocks-inspirational-stories-of-twenty-courageous-athletic-warriors.aspx
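To sanity-check the rules without waiting for a recrawl, you can run them through a parser locally. Below is a minimal sketch using Python's standard-library urllib.robotparser; example.com stands in for the omitted hostname, and the behavior shown is that of CPython's parser, which matches what Webmaster Tools reported: a fully-qualified URL in a Disallow line never matches anything, while a relative path does.

from urllib.robotparser import RobotFileParser

# example.com is a placeholder for the hostname omitted in the question.
BROKEN_RULES = """\
User-agent: *
Disallow: /scripts
Disallow: http://example.com/Living/books/book-review-not-stupid.aspx
"""

FIXED_RULES = """\
User-agent: *
Disallow: /scripts
Disallow: /Living/books/book-review-not-stupid.aspx
"""

ARTICLE = "http://example.com/Living/books/book-review-not-stupid.aspx"

broken = RobotFileParser()
broken.parse(BROKEN_RULES.splitlines())
# The fully-qualified Disallow line never matches a request path, so the
# article is still crawlable -- the behavior described in the question.
print(broken.can_fetch("Googlebot", ARTICLE))  # True (not blocked)

fixed = RobotFileParser()
fixed.parse(FIXED_RULES.splitlines())
# With a relative path the rule applies, and /scripts stays blocked too.
print(fixed.can_fetch("Googlebot", ARTICLE))                            # False (blocked)
print(fixed.can_fetch("Googlebot", "http://example.com/scripts/a.js"))  # False (blocked)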

As for caching, Google fetches a fresh copy of the robots.txt file about once every 24 hours on average.