Scraping data from a page that requires login

Aar*_*tem 6 python cookies login beautifulsoup web-scraping

I'm new to Python and web scraping, and I'm trying to write a very basic script that will grab data from a webpage that can only be accessed after logging in. I've looked at a bunch of different examples, but none of them fixed the problem. Here's what I have so far:

from bs4 import BeautifulSoup
import urllib, urllib2, cookielib

username = 'name'
password = 'pass'

cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
login_data = urllib.urlencode({'username' : username, 'password' : password})
opener.open('WebpageWithLoginForm')
resp = opener.open('WebpageIWantToAccess')
soup = BeautifulSoup(resp, 'html.parser')
print soup.prettify()

As of now, when I print the result, it just prints the contents of the page as if I were not logged in. I think the problem has something to do with the way I'm setting the cookies, but I'm really not sure, because I don't fully understand what the cookie processor and its libraries are doing. Thanks!
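One likely cause (a guess based on the snippet above, not something confirmed by the asker): `login_data` is built but never sent, so `opener.open('WebpageWithLoginForm')` performs a plain GET and the form is never submitted. A minimal sketch of the missing step, written against Python 3's `urllib.request` (the successor to `urllib2`), with a hypothetical login URL and form field names:

```python
import urllib.parse
import urllib.request
import http.cookiejar

# Hypothetical URL and field names; substitute the site's actual login form.
LOGIN_URL = 'https://example.com/login'

cj = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))

# The form data must be bytes, and must actually be attached to the request.
login_data = urllib.parse.urlencode({'username': 'name',
                                     'password': 'pass'}).encode('utf-8')

# Supplying `data` turns the request into a POST; without it, no credentials
# are ever transmitted and the cookie jar never receives a session cookie.
request = urllib.request.Request(LOGIN_URL, data=login_data)
print(request.get_method())  # POST when data is present
```

Once `opener.open(request)` has been called with the credentials attached, the `CookieJar` stores any session cookie the server sets, and later `opener.open(...)` calls reuse it automatically.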

Current code:

import requests

EMAIL = 'usr'
PASSWORD = 'pass'

URL = 'https://connect.lehigh.edu/app/login'

def main():
    # Start a session so we can have persistent cookies.
    # (requests.Session() replaces the requests.session(config=...) call,
    # which was removed from the requests API in version 1.0.)
    session = requests.Session()
    # This is the form data that the page sends when logging in
    login_data = {
        'username': EMAIL,
        'password': PASSWORD,
        'LOGIN': 'login',
    }

    # Authenticate
    r = session.post(URL, data=login_data)

    # Try accessing a page that requires you to be logged in
    r = session.get('https://lewisweb.cc.lehigh.edu/PROD/bwskfshd.P_CrseSchdDetl')

if __name__ == '__main__':
    main()
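One pitfall with code like the above: a login POST can come back with HTTP 200 even when the credentials were rejected, because the server simply re-renders the login form. A hedged sketch of a sanity check, where the `'Logout'` marker is a hypothetical example; pick a string that only appears on the site when you are actually signed in:

```python
def logged_in(response, marker='Logout'):
    # Heuristic: pages served to authenticated users usually contain some
    # text (a logout link, the user's name) that anonymous visitors never
    # see. The marker here is an assumption, not part of the original code.
    return response.status_code == 200 and marker in response.text
```

Calling `logged_in(r)` right after `session.post(URL, data=login_data)` makes the failure mode visible instead of silently scraping the login page.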

Har*_*son 1

You can use the requests module.

Take a look at this answer I've linked below.

/sf/answers/582189261/
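One detail worth checking regardless of which library you use: many login forms include hidden inputs (CSRF or login-ticket tokens) that must be echoed back in the POST, which may be why a bare `username`/`password` payload is rejected. A sketch of harvesting them with BeautifulSoup, which the question already imports; the HTML here is a made-up stand-in for the real login page:

```python
from bs4 import BeautifulSoup

# Stand-in for the HTML returned by GET-ing the login page.
html = '''
<form action="/app/login" method="post">
  <input type="hidden" name="lt" value="token123">
  <input type="text" name="username">
  <input type="password" name="password">
</form>
'''

soup = BeautifulSoup(html, 'html.parser')
# Collect every hidden input so server-generated tokens are included
# alongside the username and password in the login POST.
hidden = {i['name']: i.get('value', '')
          for i in soup.find_all('input', type='hidden')}
print(hidden)
```

Merging `hidden` into `login_data` before posting reproduces what the browser sends when the form is submitted normally.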