Creating large Pandas DataFrames: preallocation vs append vs concat

and*_*rew asked:

I'm confused by the performance of Pandas when building a large dataframe chunk by chunk. In NumPy, we (almost) always see better performance by preallocating a large empty array and then filling in the values. As I understand it, this is because NumPy grabs all the memory it needs up front instead of having to reallocate memory on every append operation.
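(For reference, here is a minimal sketch of the two NumPy patterns I mean; the sizes are illustrative, not from my project.)

import numpy as np

n_chunks, chunk_size = 100, 1000

# Preallocate once, then fill slices: a single up-front allocation
out = np.empty(n_chunks * chunk_size)
for i in range(n_chunks):
    out[i*chunk_size:(i+1)*chunk_size] = np.random.rand(chunk_size)

# Grow with repeated np.append: reallocates and copies on every iteration
grown = np.empty(0)
for i in range(n_chunks):
    grown = np.append(grown, np.random.rand(chunk_size))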

In Pandas, however, I seem to get better performance with the df = df.append(temp) pattern.

Here is an example with timings. The Timer class is defined further below. As you can see, I find that preallocating is roughly 10x slower than using append! Preallocating the dataframe with np.empty values of the appropriate dtype helps a great deal, but the append method is still the fastest.

import numpy as np
from numpy.random import rand
import pandas as pd

from timer import Timer

# Some constants
num_dfs = 10  # Number of random dataframes to generate
n_rows = 2500
n_cols = 40
n_reps = 100  # Number of repetitions for timing

# Generate a list of num_dfs dataframes of random values
df_list = [pd.DataFrame(rand(n_rows * n_cols).reshape((n_rows, n_cols)),
                        columns=np.arange(n_cols))
           for i in np.arange(num_dfs)]

##
# Define three methods of growing a large dataframe
##

# Method 1 - append dataframes
def method1():
    out_df1 = pd.DataFrame(columns=np.arange(n_cols))
    for df in df_list:
        out_df1 = out_df1.append(df, ignore_index=True)
    return out_df1

# Method 2 - preallocate an empty dataframe, then fill it in
def method2():
    # Create an empty dataframe that is big enough to hold all the dataframes in df_list
    out_df2 = pd.DataFrame(columns=np.arange(n_cols), index=np.arange(num_dfs*n_rows))
    # EDIT_1: Set the dtype of each column to match the input dataframes
    for ix, col in enumerate(out_df2.columns):
        out_df2[col] = out_df2[col].astype(df_list[0].dtypes[ix])
    # Fill in the values
    for ix, df in enumerate(df_list):
        out_df2.iloc[ix*n_rows:(ix+1)*n_rows, :] = df.values
    return out_df2

# EDIT_2: 
# Method 3 - preallocate dataframe with np.empty data of appropriate type
def method3():
    # Create fake data array
    data = np.transpose(np.array([np.empty(n_rows*num_dfs, dtype=dt) for dt in df_list[0].dtypes]))
    # Create placeholder dataframe
    out_df3 = pd.DataFrame(data)
    # Fill in the real values
    for ix, df in enumerate(df_list):
        out_df3.iloc[ix*n_rows:(ix+1)*n_rows, :] = df.values
    return out_df3

##
# Time all three methods
##

# Time Method 1
times_1 = np.empty(n_reps)
for i in np.arange(n_reps):
    with Timer() as t:
       df1 = method1()
    times_1[i] = t.secs
print 'Total time for %d repetitions of Method 1: %f [sec]' % (n_reps, np.sum(times_1))
print 'Best time: %f' % (np.min(times_1))
print 'Mean time: %f' % (np.mean(times_1))

#>>  Total time for 100 repetitions of Method 1: 2.928296 [sec]
#>>  Best time: 0.028532
#>>  Mean time: 0.029283

# Time Method 2
times_2 = np.empty(n_reps)
for i in np.arange(n_reps):
    with Timer() as t:
        df2 = method2()
    times_2[i] = t.secs
print 'Total time for %d repetitions of Method 2: %f [sec]' % (n_reps, np.sum(times_2))
print 'Best time: %f' % (np.min(times_2))
print 'Mean time: %f' % (np.mean(times_2))

#>>  Total time for 100 repetitions of Method 2: 32.143247 [sec]
#>>  Best time: 0.315075
#>>  Mean time: 0.321432

# Time Method 3
times_3 = np.empty(n_reps)
for i in np.arange(n_reps):
    with Timer() as t:
        df3 = method3()
    times_3[i] = t.secs
print 'Total time for %d repetitions of Method 3: %f [sec]' % (n_reps, np.sum(times_3))
print 'Best time: %f' % (np.min(times_3))
print 'Mean time: %f' % (np.mean(times_3))

#>>  Total time for 100 repetitions of Method 3: 6.577038 [sec]
#>>  Best time: 0.063437
#>>  Mean time: 0.065770

I'm using the Timer class courtesy of Huy Nguyen:

# credit: http://www.huyng.com/posts/python-performance-analysis/

import time

class Timer(object):
    def __init__(self, verbose=False):
        self.verbose = verbose

    def __enter__(self):
        self.start = time.clock()
        return self

    def __exit__(self, *args):
        self.end = time.clock()
        self.secs = self.end - self.start
        self.msecs = self.secs * 1000  # millisecs
        if self.verbose:
            print 'elapsed time: %f ms' % self.msecs

If you're still following along, I have two questions:

1) Why is the append method faster? (Note: for very small dataframes, i.e. n_rows = 40, it is actually slower.)

2) What is the most efficient way to build a large dataframe out of chunks? (In my case, the chunks are all large csv files.)
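To make question 2 concrete, the chunks look roughly like this (the file paths are made up); the combining step is exactly what I'm asking about:

import glob
import pandas as pd

# Hypothetical layout: one large CSV chunk per file
csv_files = sorted(glob.glob('data/chunk_*.csv'))
chunk_dfs = [pd.read_csv(f) for f in csv_files]
# ...now what is the best way to combine chunk_dfs into one large dataframe?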

Thanks for your help!

EDIT_1: In my real-world project, the columns have different dtypes. So I cannot use the pd.DataFrame(.... dtype=some_type) trick to improve the performance of preallocation, per BrenBarn's recommendation, because the dtype argument forces all columns to the same dtype [Ref. issue 4464].

I added some lines to method2() in my code to change the dtypes column by column to match the input dataframes. This operation is expensive and negates the benefit of having the appropriate dtypes when writing blocks of rows.
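For what it's worth, here is a sketch of one way I imagine preallocating with per-column dtypes, building the frame from one np.empty array per column (not benchmarked):

# Sketch: preallocate a mixed-dtype frame one np.empty column at a time,
# so each column keeps its own dtype (columns/dtypes taken from df_list[0])
n_total = num_dfs * n_rows
out = pd.DataFrame({col: np.empty(n_total, dtype=dt)
                    for col, dt in zip(df_list[0].columns, df_list[0].dtypes)})
# The values still have to be written in afterwards (e.g. block by block with .iloc)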

EDIT_2: Also tried preallocating the dataframe with a placeholder array of np.empty(... dtype=some_type), per @Joris's suggestion.

Jef*_*eff answered:

Your benchmark is actually too small to show the real difference. Appending copies each time, so you are really copying a size-N memory space N*(N-1) times. This is horribly inefficient as the size of your dataframe grows. It certainly might not matter in a very small frame, but if you have any real size it matters a lot. This is specifically noted in the docs, though it's kind of a small warning.

In [97]: df = DataFrame(np.random.randn(100000,20))

In [98]: df['B'] = 'foo'

In [99]: df['C'] = pd.Timestamp('20130101')

In [103]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 100000 entries, 0 to 99999
Data columns (total 22 columns):
0     100000 non-null float64
1     100000 non-null float64
2     100000 non-null float64
3     100000 non-null float64
4     100000 non-null float64
5     100000 non-null float64
6     100000 non-null float64
7     100000 non-null float64
8     100000 non-null float64
9     100000 non-null float64
10    100000 non-null float64
11    100000 non-null float64
12    100000 non-null float64
13    100000 non-null float64
14    100000 non-null float64
15    100000 non-null float64
16    100000 non-null float64
17    100000 non-null float64
18    100000 non-null float64
19    100000 non-null float64
B     100000 non-null object
C     100000 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(20), object(1)
memory usage: 17.5+ MB

Append

In [85]: def f1():
   ....:     result = df
   ....:     for i in range(9):
   ....:         result = result.append(df)
   ....:     return result
   ....: 

CONCAT

In [86]: def f2():
   ....:     result = []
   ....:     for i in range(10):
   ....:         result.append(df)
   ....:     return pd.concat(result)
   ....: 

In [100]: f1().equals(f2())
Out[100]: True

In [101]: %timeit f1()
1 loops, best of 3: 1.66 s per loop

In [102]: %timeit f2()
1 loops, best of 3: 220 ms per loop

Note that I'm not even bothering to try preallocating. It's somewhat complicated, especially since you are dealing with multiple dtypes (e.g. you could make a giant frame and simply .loc into it, and it would work). But pd.concat is just dead simple, works reliably, and is fast.

And the timings at your sizes from above:

In [104]: df = DataFrame(np.random.randn(2500,40))

In [105]: %timeit f1()
10 loops, best of 3: 33.1 ms per loop

In [106]: %timeit f2()
100 loops, best of 3: 4.23 ms per loop


and*_*rew answered:

@Jeff, pd.concat wins by a mile! I benchmarked a fourth method using pd.concat with num_dfs = 500. The results are unequivocal:

The method4() definition:

# Method 4 - use pd.concat on df_list
def method4():
    return pd.concat(df_list, ignore_index=True)

Profiling results, using the same Timer as in my original question:

Total time for 100 repetitions of Method 1: 3679.334655 [sec]
Best time: 35.570036
Mean time: 36.793347
Total time for 100 repetitions of Method 2: 1569.917425 [sec]
Best time: 15.457102
Mean time: 15.699174
Total time for 100 repetitions of Method 3: 325.730455 [sec]
Best time: 3.192702
Mean time: 3.257305
Total time for 100 repetitions of Method 4: 25.448473 [sec]
Best time: 0.244309
Mean time: 0.254485

The pd.concat method is 13x faster than preallocating with np.empty(... dtype).


Bre*_*arn answered:

You didn't specify any data or dtype for out_df2, so it gets the "object" dtype. This makes assigning values to it very slow. Specify the float64 dtype instead:

out_df2 = pd.DataFrame(columns=np.arange(n_cols), index=np.arange(num_dfs*n_rows), dtype=np.float64)

You will see a dramatic speedup. When I tried it, method2 with this change was about twice as fast as method1.