So I am intrigued by this theory: if you go to a random Wikipedia article and repeatedly click the first link not inside parentheses, in 95% of the cases you eventually end up on the article about Philosophy.
I wanted to write a Python script that does the link fetching for me and, at the end, prints a nice list of which articles were visited (linkA -> linkB -> linkC and so on).
I managed to get the HTML DOM of the web page, and managed to strip out some unnecessary links and the top description bar that leads to disambiguation pages. So far I have concluded that:

- The valid links all have a <p> element somewhere as an ancestor (most often the parent or grandparent, e.g. when the link sits inside a <b> tag or similar).
- The top bar that leads to disambiguation pages does not appear to contain any <p> elements.
- Links to "special" pages carry a prefix ending in a colon, e.g. Wikipedia:, and are invalid.
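As an aside on those ancestor tests: comparing nodeName while walking up the tree is a bit more robust than matching against a node's string representation, which depends on minidom's repr format. A minimal sketch, where hasAncestorTag is a helper name of my own, not part of any library:

    def hasAncestorTag(node, tagName):
        """Return True if any ancestor element of node is named tagName."""
        pn = node.parentNode
        while pn is not None:
            # compare element nodes only; the Document node is named '#document'
            if pn.nodeType == pn.ELEMENT_NODE and pn.nodeName == tagName:
                return True
            pn = pn.parentNode
        return False

With it, the two walks in the code below collapse to: skip the tag if hasAncestorTag(tag, "table"), or if not hasAncestorTag(tag, "p").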
So far, so good. But it is the parentheses that get me. In the article about Human, for example, the first link not inside parentheses is "/wiki/Species", but the script finds "/wiki/Taxonomy", which sits inside them.

I have no idea how to go about this programmatically, since I would have to look for text in some combination of parent/child nodes that may not always be the same. Any ideas?
My code can be seen below, but it is something I wrote up quickly and am not very proud of. It is commented, however, so you can (I hope) follow my line of thought.
"""Wikipedia fun"""
import urllib2
from xml.dom.minidom import parseString
import time
def validWikiArticleLinkString(href):
    """ Takes a string and returns True if it contains the substring
        '/wiki/' in the beginning and does not contain any of the
        "special" wiki pages.
    """
    return (href.find("/wiki/") == 0
            and href.find("(disambiguation)") == -1
            and href.find("File:") == -1
            and href.find("Wikipedia:") == -1
            and href.find("Portal:") == -1
            and href.find("Special:") == -1
            and href.find("Help:") == -1
            and href.find("Template_talk:") == -1
            and href.find("Template:") == -1
            and href.find("Talk:") == -1
            and href.find("Category:") == -1
            and href.find("Bibcode") == -1
            and href.find("Main_Page") == -1)


if __name__ == "__main__":
    visited = []  # a list of visited links, used to avoid getting into loops

    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]  # need headers for the api

    currentPage = "Human"  # the page to start with

    while True:
        infile = opener.open('http://en.wikipedia.org/w/index.php?title=%s&printable=yes' % currentPage)
        html = infile.read()  # retrieve the contents of the wiki page we are at

        htmlDOM = parseString(html)  # get the DOM of the parsed HTML

        aTags = htmlDOM.getElementsByTagName("a")  # find all <a> tags

        for tag in aTags:
            if "href" in tag.attributes.keys():  # see if we have the href attribute in the tag
                href = tag.attributes["href"].value  # get the value of the href attribute
                if validWikiArticleLinkString(href):  # if we have one of the link types we are looking for

                    # Now come the tricky parts. We want to look for links in the main content area only,
                    # and we want the first link not in parentheses.

                    # assume the link is valid.
                    invalid = False

                    # tables which appear to the right on the site appear first in the DOM, so we need to make sure
                    # we are not looking at an <a> tag somewhere inside a <table>.
                    pn = tag.parentNode
                    while pn is not None:
                        if str(pn).find("table at") >= 0:
                            invalid = True
                            break
                        else:
                            pn = pn.parentNode

                    if invalid:  # go to next link
                        continue

                    # Next we look at the descriptive texts above the article, if any; e.g.
                    # "This article is about ..." or "For other uses, see ... (disambiguation)".
                    # These kinds of links will lead into loops, so we classify them as invalid.
                    # We notice that this text does not appear to be inside a <p> block, so
                    # we dismiss <a> tags which aren't inside any <p>.
                    pnode = tag.parentNode
                    while pnode is not None:
                        if str(pnode).find("p at") >= 0:
                            break
                        pnode = pnode.parentNode

                    # If we have reached the root node, which has parentNode None, we classify the
                    # link as invalid.
                    if pnode is None:
                        invalid = True

                    if invalid:
                        continue

                    ###### this is where I got stuck:
                    # now we need to look if the link is inside parentheses. below is some junk
                    # for elem in tag.parentNode.childNodes:
                    #     while elem.firstChild is not None:
                    #         elem = elem.firstChild
                    #     print elem.nodeValue

                    print href  # this will be the next link
                    newLink = href[6:]  # except for the /wiki/ part
                    break

        # if we have been to this link before, break the loop
        if newLink in visited:
            print "Stuck in loop."
            break
        # or if we have reached Philosophy
        elif newLink == "Philosophy":
            print "Ended up in Philosophy."
            break
        else:
            visited.append(currentPage)  # mark this currentPage as visited
            currentPage = newLink  # make the currentPage we found the new page to fetch
            time.sleep(5)  # sleep some to see results as debug
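For the part where I got stuck, one idea is to walk the enclosing <p>'s subtree in document order, keeping a running count of unbalanced '(' in the text nodes seen before the link; a positive count means the link sits inside parentheses. A rough sketch on top of the same minidom tree (iterNodes and linkInsideParens are names I made up, and parentheses spanning several paragraphs would still confuse it):

    def iterNodes(node):
        """Yield node and all of its descendants in document order."""
        yield node
        for child in node.childNodes:
            for sub in iterNodes(child):
                yield sub

    def linkInsideParens(paragraph, linkTag):
        """True if linkTag occurs while the running '(' / ')' balance
        of the paragraph text is positive, i.e. inside parentheses."""
        depth = 0
        for node in iterNodes(paragraph):
            if node is linkTag:
                return depth > 0
            if node.nodeType == node.TEXT_NODE:
                depth += node.data.count("(") - node.data.count(")")
        return False

Called with the <p> found by the ancestor walk above, the check becomes: if linkInsideParens(pnode, tag): continue.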
I found a Python script on GitHub (http://github.com/JensTimmerman/scripts/blob/master/philosophy.py) that plays this game. It uses BeautifulSoup for the HTML parsing, and to cope with the parentheses problem it simply removes the text between brackets before parsing out the links.
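That bracket-stripping idea is easy to sketch (my own rough reconstruction, not the linked script): repeatedly delete innermost (...) groups from a paragraph's HTML, re-parse, and take the first remaining /wiki/ link. One caveat: article URLs can themselves contain parentheses, e.g. /wiki/Set_(mathematics), which a naive regex will mangle, so real code should mask attribute values first.

    import re
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    PARENS = re.compile(r'\([^()]*\)')  # an innermost (...) group with no nesting

    def stripParens(html):
        """Repeatedly remove innermost parenthesized spans until none remain."""
        prev = None
        while prev != html:
            prev = html
            html = PARENS.sub('', html)
        return html

    def firstLink(paragraphHtml):
        """Return the first /wiki/ link left once bracketed text is gone."""
        soup = BeautifulSoup(stripParens(paragraphHtml), 'html.parser')
        for a in soup.find_all('a', href=True):
            if a['href'].startswith('/wiki/'):
                return a['href']
        return None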