DRY search every page of a site with Nokogiri

twi*_*tom 4 ruby dry web-crawler nokogiri web-scraping

I want to search every page of a site. My idea is to find all the links on a page that stay within the domain, visit them, and repeat. I'll have to take measures to avoid repeating the effort as well.

So it's easy enough to get started:

require 'nokogiri'
require 'open-uri'

page = 'http://example.com'
nf = Nokogiri::HTML(open(page))

links = nf.xpath '//a' #find all links on current page

main_links = links.map{|l| l['href'] if l['href'] =~ /^\//}.compact.uniq

"main_links"现在是活动页面中以"/"开头的链接数组(应该只是当前域上的链接).

From here I can take those links and feed them into code similar to the above, but I don't know the best way to make sure I don't repeat myself. I figure I'd start collecting all the visited links as I go:

visited_links = [] #array of everything that has been visited

main_links.each do |ml|
  np = Nokogiri::HTML(open(page + ml)) #load the next main_link
  visited_links.push(ml) #push the page we're on
  np_links = np.xpath('//a').map{|l| l['href'] if l['href'] =~ /^\//}.compact.uniq #grab all links on this page pointing to the current domain
  main_links.concat(np_links).uniq! #append them, removing duplicates in place
end

I'm still working out that last bit... but does this seem like the proper way to go about it?

Thanks.

Phrogz 8

Others have suggested that you not write your own web crawler. I agree with this if performance and robustness are your goals. However, it can be a great learning exercise. You wrote this:

"[...]但我不知道确保自己不重复的最好方法"

Recursion is the key here. Something like the following code:

require 'set'
require 'uri'
require 'nokogiri'
require 'open-uri'

def crawl_site( starting_at, &each_page )
  files = %w[png jpeg jpg gif svg txt js css zip gz]
  starting_uri = URI.parse(starting_at)
  seen_pages = Set.new                      # Keep track of what we've seen

  crawl_page = ->(page_uri) do              # A re-usable mini-function
    unless seen_pages.include?(page_uri)
      seen_pages << page_uri                # Record that we've seen this
      begin
        doc = Nokogiri.HTML(open(page_uri)) # Get the page
        each_page.call(doc,page_uri)        # Yield page and URI to the block

        # Find all the links on the page
        hrefs = doc.css('a[href]').map{ |a| a['href'] }

        # Make these URIs, throwing out problem ones like mailto:
        uris = hrefs.map{ |href| URI.join( page_uri, href ) rescue nil }.compact

        # Pare it down to only those pages that are on the same site
        uris.select!{ |uri| uri.host == starting_uri.host }

        # Throw out links to files (this could be more efficient with regex)
        uris.reject!{ |uri| files.any?{ |ext| uri.path.end_with?(".#{ext}") } }

        # Remove #foo fragments so that sub-page links aren't differentiated
        uris.each{ |uri| uri.fragment = nil }

        # Recursively crawl the child URIs
        uris.each{ |uri| crawl_page.call(uri) }

      rescue OpenURI::HTTPError # Guard against 404s
        warn "Skipping invalid link #{page_uri}"
      end
    end
  end

  crawl_page.call( starting_uri )   # Kick it all off!
end

crawl_site('http://phrogz.net/') do |page,uri|
  # page here is a Nokogiri HTML document
  # uri is a URI instance with the address of the page
  puts uri
end

In short:

  • Keep track of the pages you've seen using a Set. Do this not by the href value, but by the full canonical URI.
  • Use URI.join to turn possibly-relative paths into the correct URI with respect to the current page (a short sketch follows this list).
  • Use recursion to keep crawling every link on every page, but bail out if you've already seen a page.
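
As a minimal sketch of the first two points (the URLs here are made up): URI.join resolves an href against the current page the way a browser would, and a Set of full URIs catches duplicates that comparing raw href strings would miss:

require 'set'
require 'uri'

base = URI.parse('http://example.com/docs/index.html')

# URI.join resolves an href relative to the current page, like a browser
URI.join(base, 'page2.html').to_s #=> "http://example.com/docs/page2.html"
URI.join(base, '/about').to_s     #=> "http://example.com/about"

# A Set of full URIs deduplicates pages no matter how they were linked to
seen = Set.new
seen << URI.join(base, 'page2.html')
seen.include?(URI.join('http://example.com/', 'docs/page2.html')) #=> true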