Pythonic way of parsing/splitting URLs in a pandas dataframe

Lui*_*uel 4 python urlparse pandas

I have a df with a column labeled url that holds thousands of links, for different users, like these:

https://www.google.com/something
https://mail.google.com/anohtersomething
https://calendar.google.com/somethingelse
https://www.amazon.com/yetanotherthing

I have the following code:

import urlparse

# Prepare empty columns for the URL components
df['protocol'] = ''
df['domain'] = ''
df['path'] = ''
df['query'] = ''
df['fragment'] = ''

# Parse each unique URL once, then write the parts back to every matching row
unique_urls = df.url.unique()
l = len(unique_urls)
i = 0
for url in unique_urls:
    i += 1
    print "\r%d / %d" % (i, l),  # progress counter (Python 2 print statement)
    split = urlparse.urlsplit(url)
    row_index = df.url == url
    df.loc[row_index, 'protocol'] = split.scheme
    df.loc[row_index, 'domain'] = split.netloc
    df.loc[row_index, 'path'] = split.path
    df.loc[row_index, 'query'] = split.query
    df.loc[row_index, 'fragment'] = split.fragment

This code parses and splits the URLs correctly, but it is slow because I am iterating over every row of the df. Is there a more efficient way to parse the URLs?

lem*_*ead 5

You can do the same thing in a single line using Series.map:

df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urlparse.urlsplit))

Timed with timeit on 186 URLs, this runs at 2.31 ms per loop, versus 179 ms per loop for the original method. (Note, however, that this code is not optimized for duplicates and will run the same URL through urlparse multiple times; a deduplicating variant is sketched below.)
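If the column really does contain many repeats, one way to avoid the redundant parsing (a sketch of my own, not part of the original answer) is to parse each unique URL once and map the cached results back onto the full column; Series.map accepts a dict, so:

import urlparse  # urllib.parse in Python 3

# Parse each unique URL exactly once; `parsed` is an illustrative name
parsed = {u: urlparse.urlsplit(u) for u in df['url'].unique()}
# Each cached SplitResult is a 5-tuple, so zip(*...) unpacks it into columns
df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(parsed))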

Full code:

import pandas
import urlparse

urls = ['https://www.google.com/something',
        'https://mail.google.com/anohtersomething',
        'https://www.amazon.com/yetanotherthing']  # tested with a list of 186 urls instead
df = pandas.DataFrame({'url': urls})
df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urlparse.urlsplit))
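For completeness, on Python 3 the urlparse module was folded into urllib.parse; a minimal sketch of the same approach there (my addition, the original answer targets Python 2) would be:

import urllib.parse

import pandas as pd

urls = ['https://www.google.com/something',
        'https://mail.google.com/anohtersomething',
        'https://www.amazon.com/yetanotherthing']
df = pd.DataFrame({'url': urls})

# urlsplit returns a 5-field SplitResult; zip(*...) transposes them into columns
df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urllib.parse.urlsplit))
print(df[['protocol', 'domain']])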