Recursive web crawler in Perl

ven*_*tes 3 recursion perl web-crawler web-scraping

I am trying to write a minimal web crawler. The goal is to discover new URLs starting from a seed and then crawl those new URLs in turn. The code is as follows:

use strict;
use warnings;
use Carp;
use Data::Dumper;
use WWW::Mechanize;

my $url = "http://foobar.com"; # example
my %links;

my $mech = WWW::Mechanize->new(autocheck => 1);
$mech->get($url);
my @crawl_frontier = $mech->find_all_links();

# keep only absolute http(s) links, mapping URL => link text
foreach my $link (@crawl_frontier) {
    if ( $link->url =~ m/^http/xms ) {
        $links{ $link->url } = $link->text;
    }
}

This is where I am stuck: how do I go on to crawl the links collected in %links, and how do I add a depth limit so the recursion does not blow up? Suggestions are appreciated.
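
To make the goal concrete, this is roughly the shape of loop I am after (only a sketch: the seed URL and the depth limit of 2 are placeholders, and the queue handling is an assumption, not code I have working):

use strict;
use warnings;
use WWW::Mechanize;

my $mech      = WWW::Mechanize->new(autocheck => 0);
my $max_depth = 2;                              # placeholder hop limit
my %seen;
my @queue = ([ "http://foobar.com", 0 ]);       # [ URL, depth ] pairs

while (my $item = shift @queue) {
    my ($url, $depth) = @$item;
    next if $seen{$url}++;                      # visit each URL only once

    $mech->get($url);
    next unless $mech->success and $mech->is_html;

    print "[$depth] $url\n";
    next if $depth >= $max_depth;               # stop descending past the limit

    foreach my $link ($mech->find_all_links()) {
        push @queue, [ $link->url_abs->as_string, $depth + 1 ]
            if $link->url_abs =~ m/^http/xms;
    }
}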

cre*_*ive 5

The Mojolicious web framework provides several features that are useful for a web crawler:

  • No dependencies other than Perl v5.10 or later
  • A URL parser
  • A DOM tree parser (both parsers are demonstrated in the short sketch after this list)
  • An asynchronous HTTP/HTTPS client (allows concurrent requests without fork() overhead)
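
As a quick illustration of the URL and DOM parsers (a standalone sketch; the URL and HTML snippet are made up and not part of the crawler below):

#!/usr/bin/env perl
use 5.010;
use strict;
use warnings;

use Mojo::DOM;
use Mojo::URL;

# Split a URL into its components
my $url = Mojo::URL->new('http://example.com/manual/mod/index.html?lang=en');
say $url->host;                         # example.com
say scalar @{ $url->path->parts };      # 3 path levels: manual, mod, index.html

# Parse an HTML snippet and walk the DOM tree
my $dom = Mojo::DOM->new('<title>Demo</title><a href="/a">A</a><a href="/b">B</a>');
say $dom->at('title')->text;            # Demo
say $_->{href} for $dom->find('a[href]')->each;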

Below is a more complete example that recursively crawls a local Apache documentation site and prints each page title along with the links it extracts. It uses 4 parallel connections, goes no deeper than 3 path levels, and visits each extracted link only once:

#!/usr/bin/env perl
use 5.010;
use open qw(:locale);
use strict;
use utf8;
use warnings qw(all);

use Mojo::UserAgent;

# FIFO queue
my @urls = (Mojo::URL->new('http://localhost/manual/'));

# User agent following up to 5 redirects
my $ua = Mojo::UserAgent->new(max_redirects => 5);

# Track accessed URLs
my %uniq;

my $active = 0;

sub parse {
    my ($tx) = @_;

    # Request URL
    my $url = $tx->req->url;

    say "\n$url";
    say $tx->res->dom->at('html title')->text;

    # Extract and enqueue URLs
    for my $e ($tx->res->dom('a[href]')->each) {

        # Validate href attribute
        my $link = Mojo::URL->new($e->{href});
        next if 'Mojo::URL' ne ref $link;

        # "normalize" link
        $link = $link->to_abs($tx->req->url)->fragment(undef);
        next unless $link->protocol =~ /^https?$/x;

        # Don't go deeper than /a/b/c
        next if @{$link->path->parts} > 3;

        # Access every link only once
        next if ++$uniq{$link->to_string} > 1;

        # Don't visit other hosts
        next if $link->host ne $url->host;

        push @urls, $link;
        say " -> $link";
    }

    return;
}

sub get_callback {
    my (undef, $tx) = @_;

    # Parse only OK HTML responses
    $tx->res->code == 200
        and
    $tx->res->headers->content_type =~ m{^text/html\b}ix
        and
    parse($tx);

    # Deactivate
    --$active;

    return;
}

Mojo::IOLoop->recurring(
    0 => sub {

        # Keep up to 4 parallel crawlers sharing the same user agent
        for ($active .. 4 - 1) {

            # Dequeue or halt if there are no active crawlers anymore
            return ($active or Mojo::IOLoop->stop)
                unless my $url = shift @urls;

            # Fetch non-blocking just by adding
            # a callback and marking as active
            ++$active;
            $ua->get($url => \&get_callback);
        }
    }
);

# Start event loop if necessary
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
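
The example above limits depth by the number of path segments in each URL. If you would rather limit by crawl depth (hops from the seed URL), one option is to queue [URL, depth] pairs instead of bare URLs. The blocking sketch below shows that idea with an assumed limit of 2 hops; it is a simplified variant, not the non-blocking crawler above:

#!/usr/bin/env perl
use 5.010;
use strict;
use warnings;

use Mojo::URL;
use Mojo::UserAgent;

my $ua        = Mojo::UserAgent->new(max_redirects => 5);
my $max_depth = 2;                          # assumed hop limit
my %seen;
my @queue = ([ Mojo::URL->new('http://localhost/manual/'), 0 ]);

while (my $item = shift @queue) {
    my ($url, $depth) = @$item;
    next if $seen{ $url->to_string }++;     # visit each URL only once

    my $res = $ua->get($url)->res;
    next unless ($res->code // 0) == 200;
    next unless ($res->headers->content_type // '') =~ m{^text/html}i;

    say "[$depth] $url";
    next if $depth >= $max_depth;           # don't enqueue children past the limit

    for my $e ($res->dom->find('a[href]')->each) {
        my $link = Mojo::URL->new($e->{href})->to_abs($url)->fragment(undef);
        next unless ($link->protocol // '') =~ /^https?$/x;
        next unless ($link->host // '') eq $url->host;      # stay on the same host
        push @queue, [ $link, $depth + 1 ];
    }
}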

For more web-scraping tips and tricks, read the article "I Don't Need No Stinking API: Web Scraping For Fun and Profit".