Why is len so much more efficient on a DataFrame than on the underlying numpy array?

piR*_*red 12 python numpy pandas

I've noticed that using len on a DataFrame is far quicker than using len on the underlying numpy array. I don't understand why. Accessing the same information via shape isn't any help either. This matters more as I try to get at the number of columns and number of rows, and I've been debating which method to use.

I put together the following experiment, and it's very clear that I will be using len on the DataFrame. But can someone explain why?

from timeit import timeit
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

ns = np.power(10, np.arange(6))
results = pd.DataFrame(
    columns=ns,
    index=pd.MultiIndex.from_product(
        [['len', 'len(values)', 'shape'],
         ns]))
dfs = {(n, m): pd.DataFrame(np.zeros((n, m))) for n in ns for m in ns}

for n, m in dfs.keys():
    df = dfs[(n, m)]
    results.loc[('len', n), m] = timeit('len(df)', 'from __main__ import df', number=10000)
    results.loc[('len(values)', n), m] = timeit('len(df.values)', 'from __main__ import df', number=10000)
    results.loc[('shape', n), m] = timeit('df.values.shape', 'from __main__ import df', number=10000)


fig, axes = plt.subplots(2, 3, figsize=(9, 6), sharex=True, sharey=True)
for i, (m, col) in enumerate(results.iteritems()):
    r, c = i // 3, i % 3
    col.unstack(0).plot.bar(ax=axes[r, c], title=m)

[Image: one bar chart per column count m, comparing the timings of len, len(values) and shape across row counts n]

wfl*_*nny 7

Comparing the various methods, the main reason is that constructing the numpy array df.values takes the lion's share of the time.


len(df) and df.shape

These two are fast because they are essentially

len(df.index._data)

and

(len(df.index._data), len(df.columns._data))

where _data is a numpy.ndarray. Hence, df.shape should be about half as fast as len(df), because it has to find the length of both df.index and df.columns (both of type pd.Index).
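
As a quick sanity check, here is a small sketch of my own (not part of the original answer): all three statements below only consult the axis Index objects, never the underlying block data, so their cost stays roughly flat no matter how much data the frame holds.

from timeit import timeit
import numpy as np
import pandas as pd

# a frame big enough that touching the actual data would be noticeable
df = pd.DataFrame(np.zeros((1_000_000, 10)))

# each call only looks at df.index / df.columns, never the values
print(timeit(lambda: len(df), number=100_000))
print(timeit(lambda: df.shape, number=100_000))
print(timeit(lambda: (len(df.index), len(df.columns)), number=100_000))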


len(df.values) and df.values.shape

Suppose you had already extracted vals = df.values. Then

In [1]: df = pd.DataFrame(np.random.rand(1000, 10), columns=range(10))

In [2]: vals = df.values

In [3]: %timeit len(vals)
10000000 loops, best of 3: 35.4 ns per loop

In [4]: %timeit vals.shape
10000000 loops, best of 3: 51.7 ns per loop

Compared to:

In [5]: %timeit len(df.values)
100000 loops, best of 3: 3.55 µs per loop
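
One way to confirm this (a sketch of my own, reusing the same shape of DataFrame as above) is to time the .values access by itself; it should account for essentially all of the time measured for len(df.values), while len itself only adds nanoseconds:

from timeit import timeit
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(1000, 10), columns=range(10))

# the cost of len(df.values) is dominated by constructing the array
print(timeit(lambda: df.values, number=100_000))
print(timeit(lambda: len(df.values), number=100_000))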

So the bottleneck isn't len, but how df.values is constructed. If you examine pandas.DataFrame.values(), you'll find the (roughly equivalent) methods:

def values(self):
    return self.as_matrix()

def as_matrix(self, columns=None):
    self._consolidate_inplace()
    if self._AXIS_REVERSED:
        return self._data.as_matrix(columns).T

    if len(self._data.blocks) == 0:
        return np.empty(self._data.shape, dtype=float)

    if columns is not None:
        mgr = self._data.reindex_axis(columns, axis=0)
    else:
        mgr = self._data

    if self._data._is_single_block or not self._data.is_mixed_type:
        # homogeneous dtype: hand back the single block's values directly
        return mgr.blocks[0].get_values()
    else:
        # mixed dtypes: allocate a fresh array and copy every block into it
        # (this branch is inlined from the BlockManager, so self below
        # refers to df._data rather than the DataFrame itself)
        dtype = _interleaved_dtype(self.blocks)
        result = np.empty(self.shape, dtype=dtype)
        if result.shape[0] == 0:
            return result

        itemmask = np.zeros(self.shape[0])
        for blk in self.blocks:
            rl = blk.mgr_locs
            result[rl.indexer] = blk.get_values(dtype)
            itemmask[rl.indexer] = 1

        # vvv here is your final array assuming you actually have data
        return result 

def _consolidate_inplace(self):
    def f():
        if self._data.is_consolidated():
            return self._data

        bm = self._data.__class__(self._data.blocks, self._data.axes)
        bm._is_consolidated = False
        bm._consolidate_inplace()
        return bm
    self._protect_consolidate(f)

def _protect_consolidate(self, f):
    blocks_before = len(self._data.blocks)
    result = f()
    if len(self._data.blocks) != blocks_before:
        # inlined _clear_item_cache() with its default i=None,
        # i.e. the whole item cache is simply cleared
        self._item_cache.clear()
    return result

Note that df._data here is a pandas.core.internals.BlockManager, not a numpy.ndarray.
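
To see what that means in practice, here is a minimal sketch of my own (the manager attribute is private API, spelled ._mgr in recent pandas and ._data in the versions quoted above): a single-dtype frame can hand back its one block without interleaving, while a mixed-dtype frame has to allocate a new array and copy every block into it, which is exactly the expensive path in as_matrix above.

from timeit import timeit
import numpy as np
import pandas as pd

n = 10_000
# one float64 block: .values can return that block's array directly
homog = pd.DataFrame(np.zeros((n, 3)))
# three dtypes -> three blocks: .values must build an interleaved
# (here: object-dtype) array and copy each block into it
mixed = pd.DataFrame({"a": np.zeros(n),
                      "b": np.zeros(n, dtype="int64"),
                      "c": ["x"] * n})

# the internal container is a BlockManager, not a numpy.ndarray
mgr = getattr(mixed, "_mgr", None)
if mgr is None:               # older pandas spells the attribute _data
    mgr = mixed._data
print(type(mgr))

print(homog.values.dtype, mixed.values.dtype)   # float64 vs object
print(timeit(lambda: homog.values, number=1_000))
print(timeit(lambda: mixed.values, number=1_000))

The timing gap between the two .values calls is largely down to that interleaving copy; len on either frame stays cheap because it never leaves the index.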