Resampling non-time-series data

dr_*_*sst 11 python pandas

I have some data that I'm working with as pandas DataFrames. Each contains roughly 10,000 rows and 6 columns.

The problem is that I ran several trials, and the index values of the different datasets are slightly different. (It's a force-length test on several materials, and of course the measurement points never land on exactly the same values.)

My idea was to "resample" the data using the index, which holds the length values. But the resample functionality in pandas seems to work only on datetime data.

I tried converting the index via to_datetime, and that worked. But after resampling I need to get back to the original scale, some kind of from_datetime function.

Is there a way to do this, or am I on completely the wrong track and would be better off using something like groupby?

Edit, to add:

The data looks like the table below, with the length used as the index. I have several of these DataFrames, so it would be great to align them all to the same "frame rate" and then trim them, so I can compare the different datasets.

The idea I've already tried is this:

    import pandas as pd

    df_1_dt = df_1.copy()  # copy the frame for the conversion ("=" alone would only alias it)
    df_1_dt.index = pd.to_datetime(df_1_dt.index, unit='s')  # convert, treating the lengths as seconds.. good idea?!
    df_1_dt_rs = df_1_dt.copy()  # copy again for the resampling step
    df_1_dt_rs = df_1_dt_rs.resample(rule='s').mean()  # resample by the generated time; needs an aggregation now
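
To make the intent clearer, the full round trip I have in mind would look roughly like this (dummy data; the '10ms' bin width and the epoch subtraction at the end are only my guess at how the reverse conversion could work):

    import numpy as np
    import pandas as pd

    # dummy data standing in for one trial: length values as the index, one force column
    df_1 = pd.DataFrame({'Force1': np.linspace(4.7, 5.5, 50)},
                        index=np.sort(np.random.rand(50) * 0.09))

    df_1_dt = df_1.copy()                                    # work on a copy, not a view
    df_1_dt.index = pd.to_datetime(df_1_dt.index, unit='s')  # pretend the lengths are seconds

    df_1_rs = df_1_dt.resample('10ms').mean()                # resample into 10 ms "bins" and average

    # back to the original scale: turn the DatetimeIndex into plain float seconds again
    df_1_rs.index = (df_1_rs.index - pd.Timestamp(0)).total_seconds()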

Data:

+---------------------------------------------------+  
¦  Index (Length)   ¦    Force1     ¦    Force2     ¦  
¦-------------------+---------------+---------------¦  
¦ 8.04662074828e-06 ¦ 4.74251270294 ¦ 4.72051584721 ¦  
¦ 8.0898882798e-06  ¦ 4.72051584721 ¦ 4.72161570191 ¦  
¦ 1.61797765596e-05 ¦ 4.69851899147 ¦ 4.72271555662 ¦  
¦ 1.65476570973e-05 ¦ 4.65452528    ¦ 4.72491526604 ¦  
¦ 2.41398605024e-05 ¦ 4.67945501539 ¦ 4.72589291467 ¦  
¦ 2.42696630876e-05 ¦ 4.70438475079 ¦ 4.7268705633  ¦  
¦ 9.60953101751e-05 ¦ 4.72931448619 ¦ 4.72784821192 ¦  
¦ 0.00507703541206  ¦ 4.80410369237 ¦ 4.73078115781 ¦  
¦ 0.00513927175509  ¦ 4.87889289856 ¦ 4.7337141037  ¦  
¦ 0.00868965311878  ¦ 4.9349848032  ¦ 4.74251282215 ¦  
¦ 0.00902026197556  ¦ 4.99107670784 ¦ 4.7513115406  ¦  
¦ 0.00929150878827  ¦ 5.10326051712 ¦ 4.76890897751 ¦  
¦ 0.0291729332784   ¦ 5.14945375919 ¦ 4.78650641441 ¦  
¦ 0.0296332588857   ¦ 5.17255038023 ¦ 4.79530513287 ¦  
¦ 0.0297080942518   ¦ 5.19564700127 ¦ 4.80410385132 ¦  
¦ 0.0362595526707   ¦ 5.2187436223  ¦ 4.80850321054 ¦  
¦ 0.0370305483177   ¦ 5.24184024334 ¦ 4.81290256977 ¦  
¦ 0.0381506204153   ¦ 5.28803348541 ¦ 4.82170128822 ¦  
¦ 0.0444440795306   ¦ 5.30783069134 ¦ 4.83050000668 ¦  
¦ 0.0450121369102   ¦ 5.3177292943  ¦ 4.8348993659  ¦  
¦ 0.0453465140473   ¦ 5.32762789726 ¦ 4.83929872513 ¦  
¦ 0.0515533437013   ¦ 5.33752650023 ¦ 4.85359662771 ¦  
¦ 0.05262489708     ¦ 5.34742510319 ¦ 4.8678945303  ¦  
¦ 0.0541273847206   ¦ 5.36722230911 ¦ 4.89649033546 ¦  
¦ 0.0600755845953   ¦ 5.37822067738 ¦ 4.92508614063 ¦  
¦ 0.0607712385295   ¦ 5.38371986151 ¦ 4.93938404322 ¦  
¦ 0.0612954159368   ¦ 5.38921904564 ¦ 4.9536819458  ¦  
¦ 0.0670288249293   ¦ 5.39471822977 ¦ 4.97457891703 ¦  
¦ 0.0683640870058   ¦ 5.4002174139  ¦ 4.99547588825 ¦  
¦ 0.0703192637772   ¦ 5.41121578217 ¦ 5.0372698307  ¦  
¦ 0.0757871634772   ¦ 5.43981158733 ¦ 5.07906377316 ¦  
¦ 0.0766597757545   ¦ 5.45410948992 ¦ 5.09996074438 ¦  
¦ 0.077317850103    ¦ 5.4684073925  ¦ 5.12085771561 ¦  
¦ 0.0825991083545   ¦ 5.48270529509 ¦ 5.13295596838 ¦  
¦ 0.0841354654428   ¦ 5.49700319767 ¦ 5.14505422115 ¦  
¦ 0.0865525182528   ¦ 5.52559900284 ¦ 5.1692507267  ¦  
+---------------------------------------------------+  

gre*_*ata 3

It sounds like what you want to do is round the length numbers down to a lower precision.

If that's the case, you can use the built-in round function:

(dummy data)

    >>> import pandas as pd
    >>> df = pd.DataFrame([[1.0000005,4],[1.232463632,5],[5.234652,9],[5.675322,10]], columns=['length','force'])
    >>> df
         length  force
    0  1.000001      4
    1  1.232464      5
    2  5.234652      9
    3  5.675322     10
    >>> df['rounded_length'] = df.length.apply(round, ndigits=0)
    >>> df
         length  force  rounded_length
    0  1.000001      4             1.0
    1  1.232464      5             1.0
    2  5.234652      9             5.0
    3  5.675322     10             6.0

Then you can replicate the resample() workflow with groupby:

    >>> df.groupby('rounded_length').mean().force
    rounded_length
    1.0     4.5
    5.0     9.0
    6.0    10.0
    Name: force, dtype: float64

In general, resampling really is just for dates. If you're using it for something other than dates, there's probably a more elegant solution!
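
Applied to data like the table above, rounding to whole digits wouldn't help much, since the lengths only run from roughly 0 to 0.09. A small variation on the same idea is to snap the index to a fixed bin width and group on that; a sketch (the 0.005 bin width and the df_1 name are placeholders, not anything from your code):

    import numpy as np

    bin_width = 0.005  # assumed bin size, in the same units as the length index

    binned = (
        df_1
        .groupby(np.round(df_1.index / bin_width) * bin_width)  # snap each length to its bin
        .mean()                                                  # average the forces inside each bin
    )

    # every trial binned this way ends up on the same index values,
    # so the different datasets can be aligned and compared directly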