I have time-series data, but it is not continuous (the 2005-03-02 02:08:00 row is missing).
I need a new column C where C(i) = A(i) + B(i) + average, and the average is the mean of B up to the discontinuity (02:08:00).
average=Data.loc['2005-03-02 02:05:30':'2005-03-02 02:07:30',['B']].mean(axis=0)
After the discontinuity, the average has to be recalculated again, up to the next discontinuity:
average=Data.loc['2005-03-02 02:08:30':'2005-03-02 02:11:00',['B']].mean(axis=0)
Input
Date,A,B
2005-03-02 02:05:30,1,3
2005-03-02 02:06:00,2,4
2005-03-02 02:06:30,3,5
2005-03-02 02:07:00,4,6
2005-03-02 02:07:30,5,7
2005-03-02 02:08:30,7,9
2005-03-02 02:09:00,7,9
2005-03-02 02:09:30,7,9
2005-03-02 02:10:00,8,12
2005-03-02 02:10:30,9,13
2005-03-02 02:11:00,10,14
Desired output
Date,A,B,C
2005-03-02 02:05:30,1,3,9
2005-03-02 02:06:00,2,4,11
2005-03-02 02:06:30,3,5,13
2005-03-02 02:07:00,4,6,15
2005-03-02 02:07:30,5,7,17
2005-03-02 02:08:30,7,9,28
2005-03-02 02:09:00,7,9,28
2005-03-02 02:09:30,7,9,28
2005-03-02 02:10:00,8,12,32
2005-03-02 02:10:30,9,13,34
2005-03-02 02:11:00,10,14,36
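For example, in the first block the average of B over 02:05:30–02:07:30 is (3 + 4 + 5 + 6 + 7) / 5 = 5, so the first row gets C = 1 + 3 + 5 = 9.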
How do I find the discontinuities in the index?
And how can I do the whole thing with pandas?
Step 1: Read the data into a DataFrame
import pandas as pd
from io import StringIO
y = '''Date,A,B
2005-03-02 02:05:30,1,3
2005-03-02 02:06:00,2,4
2005-03-02 02:06:30,3,5
2005-03-02 02:07:00,4,6
2005-03-02 02:07:30,5,7
2005-03-02 02:08:30,7,9
2005-03-02 02:09:00,7,9
2005-03-02 02:09:30,7,9
2005-03-02 02:10:00,8,12
2005-03-02 02:10:30,9,13
2005-03-02 02:11:00,10,14'''
df = pd.read_csv(StringIO(y), index_col='Date')
Step 2: Convert the index to a DatetimeIndex
df.index = pd.to_datetime(df.index)
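As a small shortcut, assuming the Date column parses cleanly, steps 1 and 2 can be combined by letting read_csv parse the dates while reading:
df = pd.read_csv(StringIO(y), index_col='Date', parse_dates=['Date'])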
Step 3: Resample at a 30-second frequency
new = df.resample('30s').mean()
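Resampling onto a regular 30-second grid inserts the missing 02:08:00 timestamp as an all-NaN row, which makes the discontinuity explicit.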
Output:
A B
Date
2005-03-02 02:05:30 1.0 3.0
2005-03-02 02:06:00 2.0 4.0
2005-03-02 02:06:30 3.0 5.0
2005-03-02 02:07:00 4.0 6.0
2005-03-02 02:07:30 5.0 7.0
2005-03-02 02:08:00 NaN NaN
2005-03-02 02:08:30 7.0 9.0
2005-03-02 02:09:00 7.0 9.0
2005-03-02 02:09:30 7.0 9.0
2005-03-02 02:10:00 8.0 12.0
2005-03-02 02:10:30 9.0 13.0
2005-03-02 02:11:00 10.0 14.0
Step 4: Split the DataFrame at the NaN row and assign a group ID
new["group_no"] = new.T.isnull().all().cumsum()
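Because the mask is True only for the all-NaN row, its cumulative sum stays at 0 before the gap and becomes 1 from the gap onwards, so every contiguous block of timestamps gets its own group number.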
Output:
A B group_no
Date
2005-03-02 02:05:30 1.0 3.0 0
2005-03-02 02:06:00 2.0 4.0 0
2005-03-02 02:06:30 3.0 5.0 0
2005-03-02 02:07:00 4.0 6.0 0
2005-03-02 02:07:30 5.0 7.0 0
2005-03-02 02:08:00 NaN NaN 1
2005-03-02 02:08:30 7.0 9.0 1
2005-03-02 02:09:00 7.0 9.0 1
2005-03-02 02:09:30 7.0 9.0 1
2005-03-02 02:10:00 8.0 12.0 1
2005-03-02 02:10:30 9.0 13.0 1
2005-03-02 02:11:00 10.0 14.0 1
Step 5: Get the mean of B for each group_no
new['Bmean'] = new.groupby('group_no')['B'].transform('mean')
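Here transform('mean') broadcasts each group's mean of B back onto every row of that group, which is exactly the "average until the discontinuity" from the question.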
Output:
A B group_no Bmean
Date
2005-03-02 02:05:30 1.0 3.0 0 5.0
2005-03-02 02:06:00 2.0 4.0 0 5.0
2005-03-02 02:06:30 3.0 5.0 0 5.0
2005-03-02 02:07:00 4.0 6.0 0 5.0
2005-03-02 02:07:30 5.0 7.0 0 5.0
2005-03-02 02:08:00 NaN NaN 1 11.0
2005-03-02 02:08:30 7.0 9.0 1 11.0
2005-03-02 02:09:00 7.0 9.0 1 11.0
2005-03-02 02:09:30 7.0 9.0 1 11.0
2005-03-02 02:10:00 8.0 12.0 1 11.0
2005-03-02 02:10:30 9.0 13.0 1 11.0
2005-03-02 02:11:00 10.0 14.0 1 11.0
Step 6: Apply the final calculation and drop the helper columns
new['C'] = new['A'] + new['B'] + new['Bmean']
new.drop(['group_no', 'Bmean'], axis=1, inplace=True)
Output:
A B C
Date
2005-03-02 02:05:30 1.0 3.0 9.0
2005-03-02 02:06:00 2.0 4.0 11.0
2005-03-02 02:06:30 3.0 5.0 13.0
2005-03-02 02:07:00 4.0 6.0 15.0
2005-03-02 02:07:30 5.0 7.0 17.0
2005-03-02 02:08:00 NaN NaN NaN
2005-03-02 02:08:30 7.0 9.0 27.0
2005-03-02 02:09:00 7.0 9.0 27.0
2005-03-02 02:09:30 7.0 9.0 27.0
2005-03-02 02:10:00 8.0 12.0 31.0
2005-03-02 02:10:30 9.0 13.0 33.0
2005-03-02 02:11:00 10.0 14.0 35.0
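If the all-NaN 02:08:00 placeholder row should not appear in the final result (the desired output in the question does not contain it), it can be dropped at the end, for example with
new = new.dropna()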
I suggest using:
#if the index values are unique, use reindex
df = Data.reindex(pd.date_range(Data.index.min(), Data.index.max(), freq='30S'))
#if the index values are not unique
#df = Data.resample('30s').mean()
#mask of all-NaN rows
mask = df.isnull().all(axis=1)
#sum of all columns
s1 = df.sum(axis=1)
#if only columns A and B should be summed
#s1 = df[['A', 'B']].sum(axis=1)
#create a grouping column
df['C'] = mask.cumsum()
#filter out the NaN rows
df = df[~mask]
#per-group mean of B plus the row sum
df['C'] = df.groupby('C')['B'].transform('mean') + s1
print (df)
A B C
2005-03-02 02:05:30 1.0 3.0 9.0
2005-03-02 02:06:00 2.0 4.0 11.0
2005-03-02 02:06:30 3.0 5.0 13.0
2005-03-02 02:07:00 4.0 6.0 15.0
2005-03-02 02:07:30 5.0 7.0 17.0
2005-03-02 02:08:30 7.0 9.0 27.0
2005-03-02 02:09:00 7.0 9.0 27.0
2005-03-02 02:09:30 7.0 9.0 27.0
2005-03-02 02:10:00 8.0 12.0 31.0
2005-03-02 02:10:30 9.0 13.0 33.0
2005-03-02 02:11:00 10.0 14.0 35.0
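One caveat: because df is filtered with a boolean mask before the final assignment, pandas may raise a SettingWithCopyWarning; taking an explicit copy avoids it:
df = df[~mask].copy()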
Another solution, thanks to @iDrwish for the suggestion:
First take the difference of the index with diff() (not implemented on an Index directly, so first convert the index to a Series with to_series), compare it against a 30-second Timedelta with gt, and create the groups with cumsum.
Last, transform with mean and add the row sum:
g = Data.index.to_series().diff().gt(pd.Timedelta(30, unit='s')).cumsum()
Data['C'] = Data.groupby(g)['B'].transform('mean') + Data.sum(axis=1)
#if only specific columns should be summed
#Data['C'] = Data.groupby(g)['B'].transform('mean') + Data['A'] + Data['B']
print (Data)
A B C
Date
2005-03-02 02:05:30 1 3 9
2005-03-02 02:06:00 2 4 11
2005-03-02 02:06:30 3 5 13
2005-03-02 02:07:00 4 6 15
2005-03-02 02:07:30 5 7 17
2005-03-02 02:08:30 7 9 27
2005-03-02 02:09:00 7 9 27
2005-03-02 02:09:30 7 9 27
2005-03-02 02:10:00 8 12 31
2005-03-02 02:10:30 9 13 33
2005-03-02 02:11:00 10 14 35
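For intuition: only the 02:08:30 timestamp is more than 30 s after its predecessor (the first diff is NaT and compares as False), so g is 0 for the five rows before the gap and 1 for the six rows after it, reproducing the same two blocks as in the solutions above.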