amb*_*a88 10 python http python-requests
I'm writing a web scraper using python-requests.
Each page is over 1 MB, but the data I actually need to extract appears very early in the document, so I'm wasting time downloading a lot of unnecessary data.
If possible, I'd like to stop the download as soon as the data I need has appeared in the document source, to save time.
For example, I only want to extract the text in the "abc" div; the rest of the document is useless:
<html>
<head>
<title>My site</title>
</head>
<body>
<div id="abc">blah blah...</div>
<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Mauris fermentum molestie ligula, a pharetra eros mollis ut.</p>
<p>Quisque auctor volutpat lobortis. Vestibulum pellentesque lacus sapien, quis vulputate enim mollis a. Vestibulum ultrices fermentum urna ac sodales.</p>
<p>Nunc sit amet augue at dolor fermentum ultrices. Curabitur faucibus porttitor vehicula. Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>
<p>Etiam sed leo at ipsum blandit dignissim ut a est.</p>
</body>
</html>
Currently I'm simply doing:
r = requests.get(URL)
Jam*_*lls 18
What you want here is the Range HTTP header.
See: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html (in particular the section on Range).
See also the API documentation on custom headers.
Example:
from requests import get
url = "http://download.thinkbroadband.com/5MB.zip"
headers = {"Range": "bytes=0-100"}  # bytes 0-100 inclusive, i.e. the first 101 bytes
r = get(url, headers=headers)
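One caveat worth checking: a server is free to ignore the Range header and send the whole body with a 200 status. A compliant server that honors the range replies 206 Partial Content with a Content-Range header. A minimal sketch of that check (the helper name `is_partial` is my own, not part of requests):

```python
def is_partial(status_code, headers):
    """Return True if the server actually honored a Range request.

    A server that honors the range replies 206 Partial Content and
    includes a Content-Range header; a server that ignores Range
    replies 200 with the full body.
    """
    return status_code == 206 and "Content-Range" in headers

# Hypothetical usage after a ranged GET (network call not shown):
# r = requests.get(url, headers={"Range": "bytes=0-100"})
# if not is_partial(r.status_code, r.headers):
#     ...fall back to another approach, e.g. the socket answer below...
```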
Neh*_*ani 10
I came here from this question: reading the first N characters of a file at a URL in Python. However, I don't think this is a strict duplicate, since the title doesn't say the requests module must be used. Also, for whatever reason, the server you are requesting from may not support range bytes. In that case, I'd rather talk HTTP directly:
#!/usr/bin/env python
import socket

TCP_HOST = 'stackoverflow.com'  # This is the host we are going to query
TCP_PORT = 80                   # This is the standard port for HTTP
MAX_LIMIT = 1024                # Maximum size of the data we want, in bytes

# Build the raw HTTP/1.1 request (headers end with a blank line: \r\n\r\n)
MESSAGE = (
    "GET /questions/23602412/only-download-a-part-of-the-document-using-python-requests HTTP/1.1\r\n"
    "Host: stackoverflow.com\r\n"
    "User-Agent: Custom/0.0.1\r\n"
    "Accept: */*\r\n\r\n"
)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # Create a socket
s.connect((TCP_HOST, TCP_PORT))     # Connect to the remote host
s.sendall(MESSAGE.encode("ascii"))  # Send the request as bytes

# Keep reading from the socket until the max limit is reached
data = b""
while len(data) < MAX_LIMIT:
    chunk = s.recv(MAX_LIMIT - len(data))
    if not chunk:  # server closed the connection early
        break
    data += chunk
s.close()  # Mark the socket as closed

# Everyone likes a happy ending!
print(data.decode("utf-8", errors="replace") + "\n")
print("Length of received data:", len(data))
Sample run:
$ python sample.py
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
X-Frame-Options: SAMEORIGIN
X-Request-Guid: 3098c32c-3423-4e8a-9c7e-6dd530acdf8c
Content-Length: 73444
Accept-Ranges: bytes
Date: Fri, 05 Aug 2016 03:21:55 GMT
Via: 1.1 varnish
Connection: keep-alive
X-Served-By: cache-sin6926-SIN
X-Cache: MISS
X-Cache-Hits: 0
X-Timer: S1470367315.724674,VS0,VE246
X-DNS-Prefetch-Control: off
Set-Cookie: prov=c33383b6-3a4d-730f-02b9-0eab064b3487; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
<!DOCTYPE html>
<html itemscope itemtype="http://schema.org/QAPage">
<head>
<title>http - Only download a part of the document using python requests - Stack Overflow</title>
<link rel="shortcut icon" href="//cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d">
<link rel="apple-touch-icon image_src" href="//cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a">
<link rel="search" type="application/open
Length of received data: 1024
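There is also a middle ground that stays within requests and matches the original question directly: pass `stream=True`, read the body in chunks with `iter_content`, and close the response as soon as the marker you care about (e.g. the closing tag of the "abc" div) has arrived. A minimal sketch, with the helper name `read_until_marker` being my own invention:

```python
def read_until_marker(chunks, marker, max_bytes=1 << 20):
    """Accumulate byte chunks until `marker` is seen or `max_bytes` is read.

    `chunks` is any iterable of bytes objects, for example
    response.iter_content(1024) from a streamed requests response.
    Returns everything read up to and including the chunk containing
    the marker, so the caller can parse out the data it needs.
    """
    buf = b""
    for chunk in chunks:
        buf += chunk
        if marker in buf or len(buf) >= max_bytes:
            break  # stop consuming the body here
    return buf

# Hypothetical usage with requests (network call not shown):
# import requests
# r = requests.get(URL, stream=True)
# html = read_until_marker(r.iter_content(1024), b"</div>")
# r.close()  # abandon the rest of the body
```

Closing the response abandons the connection, so the remaining megabytes are never transferred; the trade-off is that the connection cannot be reused for keep-alive.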
Viewed: 5091 times