I have to write a script that fetches some URLs in parallel and does some work on the results. In the past I've always used Parallel::ForkManager for this kind of thing, but now I'd like to learn something new and try asynchronous programming with AnyEvent (and AnyEvent::HTTP or AnyEvent::Curl::Multi)... but I'm having trouble understanding AnyEvent and writing a script that does what I need.
I've read many manuals and tutorials, but it's still hard for me to grasp the difference between blocking and non-blocking code. I found a similar script at http://perlmaven.com/fetching-several-web-pages-in-parallel-using-anyevent, where Mr. Szabo explains the basics, but I still can't work out how to implement something like this:
...
open my $fh, "<", $file;
while ( my $line = <$fh> )
{
# http request, read response, update MySQL
}
close $fh;
...
...and, on top of that, add a concurrency limit.
I would be very grateful for your help ;)
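For reference, the rough shape I had in mind with AnyEvent::HTTP is something like the sketch below (my own sketch, assuming a urls.txt input file: a worker counter enforces the concurrency limit, every finished request launches the next one, and the MySQL update is only a placeholder):
#!/usr/bin/perl
use strict;
use warnings;

use AnyEvent;
use AnyEvent::HTTP;

my $max_workers = 10;
my $workers     = 0;

my $cv = AnyEvent->condvar;

open my $fh, "<", "urls.txt" or die "urls.txt: $!";

# Start requests until the limit is reached; each completed request
# starts the next one, so at most $max_workers run at any moment.
my $launch;
$launch = sub {
    while ( $workers < $max_workers and defined( my $url = <$fh> ) ) {
        chomp $url;
        $workers++;
        $cv->begin;
        http_get $url, sub {
            my ( $body, $hdr ) = @_;
            print "$hdr->{Status} $url\n";
            # ... read response, update MySQL here ...
            $workers--;
            $launch->();    # refill the worker pool
            $cv->end;
        };
    }
};

$cv->begin;
$launch->();
$cv->end;

$cv->recv;    # run the event loop until every request has finished
close $fh;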
Following ikegami's advice, I gave Net::Curl::Multi a try, and I'm very happy with the results. After years of using Parallel::ForkManager just to fetch thousands of URLs concurrently, Net::Curl::Multi seems excellent. Below is my code with a while loop over the filehandle. It appears to work as it should, but since this is the first time I've written anything like this, I'd like to ask more experienced Perl users to take a look and tell me whether there are potential bugs, anything I've missed, and so on. Also, if I may ask: since I don't fully understand how Net::Curl::Multi's concurrency works, please tell me whether I should expect any problems with putting a MySQL UPDATE command (via DBI) inside the RESPONSE loop (apart from higher server load, obviously; I expect the final script to run with around 50 concurrent N::C::M workers, maybe more).
#!/usr/bin/perl
use strict;
use warnings;

use Net::Curl::Easy  qw( :constants );
use Net::Curl::Multi qw( );
# Create an easy handle for one URL; headers and body accumulate on the handle itself.
sub make_request {
    my ( $url ) = @_;
    my $easy = Net::Curl::Easy->new();
    $easy->{url} = $url;
    $easy->setopt( CURLOPT_URL,        $url );
    $easy->setopt( CURLOPT_HEADERDATA, \$easy->{head} );
    $easy->setopt( CURLOPT_FILE,       \$easy->{body} );
    return $easy;
}
my $maxWorkers = 10;
my $multi = Net::Curl::Multi->new();
my $workers = 0;
my $i = 1;
open my $fh, "<", "urls.txt" or die "urls.txt: $!";
LINE: while ( my $url = <$fh> )
{
    chomp( $url );
    $url .= "?$i";
    print "($i) $url\n";
    my $easy = make_request( $url );
    $multi->add_handle( $easy );
    $workers++;
    my $running = 0;
    do {
        # Let libcurl tell us what to wait for and for how long.
        my ( $r, $w, $e ) = $multi->fdset();
        my $timeout = $multi->timeout();
        select( $r, $w, $e, $timeout / 1000 )
            if $timeout > 0;
        $running = $multi->perform();
        # Reap the transfers that have finished.
        RESPONSE: while ( my ( $msg, $easy, $result ) = $multi->info_read() ) {
            $multi->remove_handle( $easy );
            $workers--;
            printf( "%s getting %s\n", $easy->getinfo( CURLINFO_RESPONSE_CODE ), $easy->{url} );
        }
        # don't max out the CPU while waiting
        select( undef, undef, undef, 0.01 );
    } while ( $workers == $maxWorkers || ( eof($fh) && $running ) );
    $i++;
}
close $fh;
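If it helps, here is roughly what I plan to do for the MySQL part (a sketch with a hypothetical urls table holding url and status columns; I'm assuming the RESPONSE loop runs as ordinary sequential Perl in a single process, so one shared DBI handle should do, with the caveat that a slow UPDATE blocks the transfer loop while it runs):
use DBI;

# Hypothetical DSN, credentials and table layout.
my $dbh = DBI->connect( "DBI:mysql:database=mydb", "user", "password",
    { RaiseError => 1, AutoCommit => 1 } );
my $sth = $dbh->prepare( "UPDATE urls SET status = ? WHERE url = ?" );

# Inside the RESPONSE loop, after remove_handle:
$sth->execute( $easy->getinfo( CURLINFO_RESPONSE_CODE ), $easy->{url} );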
Net::Curl is a really nice library, and it's extremely fast. On top of that, it can handle parallel requests too! I would recommend it instead of AnyEvent.
use strict;
use warnings;

use Net::Curl::Easy  qw( :constants );
use Net::Curl::Multi qw( );
sub make_request {
    my ( $url ) = @_;
    my $easy = Net::Curl::Easy->new();
    $easy->{url} = $url;
    $easy->setopt( CURLOPT_URL,        $url );
    $easy->setopt( CURLOPT_HEADERDATA, \$easy->{head} );
    $easy->setopt( CURLOPT_FILE,       \$easy->{body} );
    return $easy;
}
my $max_running = 10;
my @urls = ( 'http://www.google.com/' );
my $multi = Net::Curl::Multi->new();
my $running = 0;
while (1) {
    # Top the pool up to $max_running concurrent transfers.
    while ( @urls && $running < $max_running ) {
       my $easy = make_request( shift( @urls ) );
       $multi->add_handle( $easy );
       ++$running;
    }
    last if !$running;
    # Wait for activity on curl's sockets (or its suggested timeout).
    my ( $r, $w, $e ) = $multi->fdset();
    my $timeout = $multi->timeout();
    select( $r, $w, $e, $timeout / 1000 )
        if $timeout > 0;
    $running = $multi->perform();
    # Reap the transfers that have finished.
    while ( my ( $msg, $easy, $result ) = $multi->info_read() ) {
        $multi->remove_handle( $easy );
        printf( "%s getting %s\n", $easy->getinfo( CURLINFO_RESPONSE_CODE ), $easy->{url} );
    }
}
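If the URL list is huge, you don't need to hold it all in @urls; the refill loop can read straight from a filehandle instead. A sketch of just that part, assuming the urls.txt file from the question (the rest of the while (1) body stays the same):
open my $fh, "<", "urls.txt" or die "urls.txt: $!";

# Replaces the refill loop over @urls above:
while ( $running < $max_running and defined( my $url = <$fh> ) ) {
    chomp( $url );
    my $easy = make_request( $url );
    $multi->add_handle( $easy );
    ++$running;
}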