Rob*_*rtB (tags: html, python, csv, beautifulsoup)
I'm using the following code to pull an HTML table into a CSV:
import csv
import urllib2
from bs4 import BeautifulSoup

with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    for i in range(39):
        url = "file:///C:/projects/HTML/Export.htm".format(i)
        u = urllib2.urlopen(url)
        try:
            html = u.read()
        finally:
            u.close()
        soup = BeautifulSoup(html)
        for tr in soup.find_all('tr')[2:]:
            tds = tr.find_all('td')
            row = [elem.text.encode('utf-8') for elem in tds]
            writer.writerow(row)
Everything works perfectly, but I'm trying to grab the href URL in the 9th column. At the moment it gives me the text value, not the URL.
Also, my HTML contains two tables. Is there any way to skip the first table and build the CSV file from the second one?
Any help is very welcome, as I'm new to Python and need this for a project to automate a daily conversion.
Thanks a lot!
You should access the href attribute of the a tag inside the td at index 8:
import csv
import urllib2
from bs4 import BeautifulSoup

records = []

def my_parse(html):
    soup = BeautifulSoup(html)
    # the page has two tables; take the second one
    table2 = soup.find_all('table')[1]
    for tr in table2.find_all('tr')[2:]:
        tds = tr.find_all('td')
        url = tds[8].a.get('href')
        records.append([elem.text.encode('utf-8') for elem in tds])
        # perhaps you want to update one of the elements of this last
        # record with the found url now?

for index in range(39):
    url = get_url(index)  # where is the formatting in your example happening?
    response = urllib2.urlopen(url)
    try:
        html = response.read()
    except Exception:
        raise
    else:
        my_parse(html)
    finally:
        response.close()

# It's more efficient to write only once
with open('listing.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(records)
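To act on the comment inside my_parse: you can overwrite the 9th field with the found href before appending the record. A minimal sketch of that variant, assuming every data row has at least nine td cells (guard with a None check in case a cell carries no link):

def my_parse(html):
    soup = BeautifulSoup(html)
    table2 = soup.find_all('table')[1]   # skip the first table
    for tr in table2.find_all('tr')[2:]:
        tds = tr.find_all('td')
        row = [elem.text.encode('utf-8') for elem in tds]
        anchor = tds[8].a                # 9th column; None if the cell has no link
        if anchor is not None:
            row[8] = anchor.get('href')  # store the URL rather than the link text
        records.append(row)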
I took the liberty of defining a get_url function based on the index, because your example re-reads the same file on every pass of the loop, which I'm guessing is not what you actually want. I'll leave the implementation to you. I've also added somewhat better exception handling.
I've also shown how to access the second table of that web page rather than the first.
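Since I left get_url to you, here is a minimal sketch. The file-numbering scheme is a guess (your original .format(i) call had no {} placeholder), so adjust the pattern to whatever your daily exports are really called:

def get_url(index):
    # hypothetical naming scheme: Export0.htm .. Export38.htm -- adjust
    # this pattern to match your actual file names
    return "file:///C:/projects/HTML/Export{}.htm".format(index)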