Simple web crawler

Tags: beautifulsoup, python-2.7

I wrote the simple web crawler below in Python, but when I run it, it fails with "'NoneType' object is not callable". Can you help me?

import BeautifulSoup
import urllib2
def union(p,q):
    for e in q:
        if e not in p:
            p.append(e)

def crawler(SeedUrl):
    tocrawl=[SeedUrl]
    crawled=[]
    while tocrawl:
        page=tocrawl.pop()
        pagesource=urllib2.urlopen(page)
        s=pagesource.read()
        soup=BeautifulSoup.BeautifulSoup(s)
        links=soup('a')        
        if page not in crawled:
            union(tocrawl,links)
            crawled.append(page)

    return crawled
crawler('http://www.princeton.edu/main/')

Answer by Deshan:

[UPDATE] Here is the complete project code:

https://bitbucket.org/deshan/simple-web-crawler

[ANSWER]

soup('a') returns complete HTML tags (BeautifulSoup Tag objects), for example:

<a href="http://itunes.apple.com/us/store">Buy Music Now</a>

So urlopen raises "'NoneType' object is not callable": urllib2.urlopen expects a string URL (or a Request object), and when it is handed a BeautifulSoup Tag instead, it tries to call Request methods on it; the Tag's attribute lookup returns None for those names, so the call fails. You need to extract just the URL from the href attribute:

links = soup.findAll('a', href=True)   # only anchors that actually have an href
for l in links:
    print(l['href'])
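To see why the original code fails, here is a minimal reproduction sketch (my illustration, not part of the original answer; it assumes Python 2 with BeautifulSoup 3 as in the question, and example.com is a placeholder URL):

import urllib2
import BeautifulSoup

soup = BeautifulSoup.BeautifulSoup('<a href="http://example.com">Buy Music Now</a>')
tag = soup('a')[0]        # a Tag object, not a string
print(tag['href'])        # 'http://example.com' -- this is what urlopen needs
urllib2.urlopen(tag)      # TypeError: 'NoneType' object is not callable
# urlopen treats the non-string Tag like a Request object and calls methods
# such as get_type() on it; BeautifulSoup 3's Tag returns None for unknown
# attribute lookups, so the method call blows up.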

You also need to validate the URLs, since relative or malformed hrefs will break urlopen. See the regex-based check in the code below.

I would also suggest using Python sets instead of lists: you can add URLs easily and duplicates are skipped for free, as sketched below.
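A minimal sketch of that idea (my illustration, not part of the original answer; it reuses the isValidUrl helper and imports from the full listing below):

def crawler(seed_url):
    tocrawl = set([seed_url])   # frontier: URLs still to visit
    crawled = set()             # URLs already fetched
    while tocrawl:
        page = tocrawl.pop()    # pops an arbitrary element from the set
        if page in crawled:
            continue
        soup = BeautifulSoup.BeautifulSoup(urllib2.urlopen(page).read())
        for l in soup.findAll('a', href=True):
            if isValidUrl(l['href']):
                tocrawl.add(l['href'])   # set.add silently ignores duplicates
        crawled.add(page)
    return crawled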

Try the following code:

import re
import urllib2
import BeautifulSoup

regex = re.compile(
        r'^(?:http|ftp)s?://' # http:// or https://
        r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|' #domain...
        r'localhost|' #localhost...
        r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})' # ...or ip
        r'(?::\d+)?' # optional port
        r'(?:/?|[/?]\S+)$', re.IGNORECASE)

def isValidUrl(url):
    # True only for absolute http/ftp(s) URLs matching the pattern above.
    return regex.match(url) is not None

def crawler(SeedUrl):
    tocrawl = [SeedUrl]   # frontier of URLs still to visit
    crawled = []          # URLs already fetched
    while tocrawl:
        page = tocrawl.pop()
        if page in crawled:
            continue      # skip pages we have already fetched
        print 'Crawled:' + page
        pagesource = urllib2.urlopen(page)
        s = pagesource.read()
        soup = BeautifulSoup.BeautifulSoup(s)
        # keep only anchors that actually carry an href attribute
        links = soup.findAll('a', href=True)
        for l in links:
            if isValidUrl(l['href']):
                tocrawl.append(l['href'])
        crawled.append(page)
    return crawled

crawler('http://www.princeton.edu/main/')
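One more note: the regex only accepts absolute URLs, so relative links such as "/admissions/" are silently dropped. If you want to follow them too, urlparse.urljoin can resolve them against the page being crawled (a sketch, not part of the original answer):

from urlparse import urljoin   # Python 2; urllib.parse.urljoin in Python 3

for l in links:
    absolute = urljoin(page, l['href'])   # resolves relative hrefs against 'page'
    if isValidUrl(absolute):
        tocrawl.append(absolute)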