twn*_*ale 22 python asp.net asp.net-ajax
I need to scrape query results from an .aspx web page.
http://legistar.council.nyc.gov/Legislation.aspx
The URL is static, so how do I submit a search query to this page and get the results? Assume we need to select "all years" and "all types" from the respective drop-down menus.
Somebody out there must know how to do this.
mjv*_*mjv 28
As an overview, you will need to perform four main tasks:

- submit request(s) to the web site,
- retrieve the response(s) from the site,
- parse these responses,
- have some logic to iterate over the tasks above, with the parameters associated with navigating to the "next" pages in the results list.
The http request and response handling is done with methods and classes from the urllib and urllib2 modules of Python's standard library. Parsing the html pages can be done with the standard library's HTMLParser, or with other modules such as Beautiful Soup.
The following snippet demonstrates requesting and receiving a search at the site indicated in the question. This site is ASP-driven, and as a result we need to make sure that we send several form fields, some of them with "horrible" values, as these are used by the ASP logic to maintain state and to validate the request to some extent. The request must be sent with the http POST method, as this is what the ASP application expects. The main difficulty is identifying the form fields and the associated values which ASP expects (getting pages with Python is the easy part).
This code is functional, or more precisely, it was functional until I removed most of the VSTATE value and possibly introduced a typo or two by adding the comments.
import urllib
import urllib2

uri = 'http://legistar.council.nyc.gov/Legislation.aspx'

# the http headers are useful to simulate a particular browser (some sites
# deny access to non-browsers: bots, etc.)
# also needed to pass the content type.
headers = {
    'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13',
    'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml; q=0.9,*/*; q=0.8',
    'Content-Type': 'application/x-www-form-urlencoded'
}

# we group the form fields and their values in a list (any
# iterable, actually) of name-value tuples. This helps
# with clarity and also makes it easy to encode them later.
formFields = (
    # the viewstate is actually 800+ characters in length! I truncated it
    # for this sample code. It can be lifted from the first page
    # obtained from the site. It may be ok to hardcode this value, or
    # it may have to be refreshed each time / each day, by essentially
    # running an extra page request-and-parse for this specific value.
    (r'__VSTATE', r'7TzretNIlrZiKb7EOB3AQE ... ...2qd6g5xD8CGXm5EftXtNPt+H8B'),
    # following are more of these ASP form fields
    (r'__VIEWSTATE', r''),
    (r'__EVENTVALIDATION', r'/wEWDwL+raDpAgKnpt8nAs3q+pQOAs3q/pQOAs3qgpUOAs3qhpUOAoPE36ANAve684YCAoOs79EIAoOs89EIAoOs99EIAoOs39EIAoOs49EIAoOs09EIAoSs99EI6IQ74SEV9n4XbtWm1rEbB6Ic3/M='),
    (r'ctl00_RadScriptManager1_HiddenField', ''),
    (r'ctl00_tabTop_ClientState', ''),
    (r'ctl00_ContentPlaceHolder1_menuMain_ClientState', ''),
    (r'ctl00_ContentPlaceHolder1_gridMain_ClientState', ''),
    # but then we come to the fields of interest: the search
    # criteria, the collections to search from, etc.
    # Check boxes
    (r'ctl00$ContentPlaceHolder1$chkOptions$0', 'on'),  # file number
    (r'ctl00$ContentPlaceHolder1$chkOptions$1', 'on'),  # Legislative text
    (r'ctl00$ContentPlaceHolder1$chkOptions$2', 'on'),  # attachment
    # etc. (not all listed)
    (r'ctl00$ContentPlaceHolder1$txtSearch', 'york'),               # Search text
    (r'ctl00$ContentPlaceHolder1$lstYears', 'All Years'),           # Years to include
    (r'ctl00$ContentPlaceHolder1$lstTypeBasic', 'All Types'),       # types to include
    (r'ctl00$ContentPlaceHolder1$btnSearch', 'Search Legislation')  # the Search button itself
)

# these have to be encoded
encodedFields = urllib.urlencode(formFields)

req = urllib2.Request(uri, encodedFields, headers)
f = urllib2.urlopen(req)  # that's the actual call to the http site.

# *** here would normally be the in-memory parsing of f's contents,
#     but instead I store this to a file. This is useful during design,
#     allowing to have a sample of what is to be parsed in a text
#     editor, for analysis.
try:
    fout = open('tmp.htm', 'w')
except IOError:
    print('Could not open output file\n')
else:
    fout.writelines(f.readlines())
    fout.close()
That was about getting the initial page. As said above, one then needs to parse the page, i.e. find the parts of interest, gather them as appropriate, and store them to file/database/wherever. This job can be done in very many ways: with html parsers, with XSLT-type technologies (after actually parsing the html into xml), or even, for crude jobs, with simple regular expressions. Also, one of the items typically extracted is the "next" info, i.e. a link of sorts, that can be used in a new request to the server to get subsequent pages.
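As a minimal sketch of the parsing step, here is how the standard library's HTMLParser (shown in its Python 3 form, html.parser) could collect the result links from a saved page. The tag structure and the LegislationDetail.aspx link pattern below are assumptions for illustration, not the site's actual markup:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

# In practice the HTML would be read from the saved tmp.htm file;
# a hypothetical literal snippet is used here for illustration.
html = ('<table><tr><td><a href="LegislationDetail.aspx?ID=1">Int 0001</a></td></tr>'
        '<tr><td><a href="LegislationDetail.aspx?ID=2">Int 0002</a></td></tr></table>')

parser = LinkCollector()
parser.feed(html)
print(parser.links)
```

The same subclass-and-override pattern extends to pulling table cells or the "next page" link; for messier real-world markup, Beautiful Soup is usually the more forgiving choice.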
This should give you a rough idea of what "long hand" html scraping entails. There are many other approaches, such as dedicated utilities, scripts in Mozilla's (FireFox) GreaseMonkey plug-in, XSLT...
Most ASP.NET sites (including the one you referenced) will actually post their queries back to themselves using the HTTP POST verb, rather than the GET verb. That is why the URL is not changing, as you noted.
What you will need to do is look at the generated HTML and capture all of their form values. Be sure to capture all the form values, as some of them are used for page validation, and without them your POST request will be rejected.
Other than the validation, an ASPX page is no different from other web technologies as far as scraping and posting are concerned.
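Capturing those form values can itself be automated: the hidden validation fields (__VIEWSTATE, __EVENTVALIDATION, etc.) live in <input type="hidden"> tags on the search page and can be lifted before posting. A sketch using only the Python 3 standard library; the HTML literal stands in for the real page, and the truncated values are placeholders:

```python
from html.parser import HTMLParser

class HiddenFieldCollector(HTMLParser):
    """Collects name/value pairs from <input type="hidden"> tags."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag != 'input':
            return
        d = dict(attrs)
        if d.get('type') == 'hidden' and 'name' in d:
            self.fields[d['name']] = d.get('value', '')

# Stand-in snippet; in practice, feed the HTML of the real search page.
page = ('<form><input type="hidden" name="__VIEWSTATE" value="dDwt..." />'
        '<input type="hidden" name="__EVENTVALIDATION" value="/wEW..." /></form>')

collector = HiddenFieldCollector()
collector.feed(page)
print(collector.fields)
```

The resulting dict can then be merged with your own search criteria and urlencoded into the POST body, so the state fields are always fresh rather than hardcoded.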