How to dynamically create a new column in pandas, the way PySpark's withColumn does

Inn*_*mer 5 python pandas pyspark

from statistics import mean
import pandas as pd
df = pd.DataFrame(columns=['A', 'B', 'C'])
df["A"] = [1, 2, 3, 4, 4, 5, 6]
df["B"] = ["Feb", "Feb", "Feb", "May", "May", "May", "May"]
df["C"] = [10, 20, 30, 40, 30, 50, 60]
df1 = df.groupby(["A","B"]).agg(mean_err=("C", mean)).reset_index()

df1["threshold"] = df1["A"] * df1["mean_err"]

How can I execute that last line of code the way PySpark's .withColumn() does?


This code doesn't work for me. I want to create the new column from the output of a dynamic operation, the way PySpark's withColumn method does.

Does anyone know how to do this?

Shu*_*rma 7

Option 1: DataFrame.eval

(df.groupby(['A', 'B'], as_index=False)
   .agg(mean_err=('C', 'mean'))
   .eval('threshold = A * mean_err'))
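Because eval takes the expression as a string, the new column can be built dynamically at runtime, which is close in spirit to withColumn. A minimal sketch using the question's data (the variable names new_col and expr are illustrative, not from the original post):

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 2, 3, 4, 4, 5, 6],
    "B": ["Feb", "Feb", "Feb", "May", "May", "May", "May"],
    "C": [10, 20, 30, 40, 30, 50, 60],
})

# Build the expression string at runtime, then hand it to eval
new_col, expr = "threshold", "A * mean_err"
out = (df.groupby(["A", "B"], as_index=False)
         .agg(mean_err=("C", "mean"))
         .eval(f"{new_col} = {expr}"))
print(out)
```

Note that eval expressions can only reference column names that are valid Python identifiers; for names with spaces or special characters, use assign instead.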

Option 2: DataFrame.assign

(df.groupby(['A', 'B'], as_index=False)
   .agg(mean_err=('C', 'mean'))
   .assign(threshold=lambda x: x['A'] * x['mean_err']))
   A    B  mean_err  threshold
0  1  Feb      10.0       10.0
1  2  Feb      20.0       40.0
2  3  Feb      30.0       90.0
3  4  May      35.0      140.0
4  5  May      50.0      250.0
5  6  May      60.0      360.0
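When the column name itself is only known at runtime, assign also accepts a keyword-unpacked dict, which mirrors withColumn(col_name, expr) most directly. A sketch on the same data (the col_name variable is illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 2, 3, 4, 4, 5, 6],
    "B": ["Feb", "Feb", "Feb", "May", "May", "May", "May"],
    "C": [10, 20, 30, 40, 30, 50, 60],
})

col_name = "threshold"  # column name chosen dynamically
out = (df.groupby(["A", "B"], as_index=False)
         .agg(mean_err=("C", "mean"))
         # **{name: callable} lets the key be any runtime string
         .assign(**{col_name: lambda x: x["A"] * x["mean_err"]}))
print(out)
```

The lambda receives the intermediate DataFrame (the grouped result, including mean_err), so chained assignments can refer to columns created earlier in the same pipeline.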