I need help transforming my data so that I can make sense of the transaction records.
Business case
I am trying to group related transactions together to create groups, or event classes. The dataset represents workers taking various absence events. I want to create a leave class from any transactions that fall within 365 days of that leave event class. For charting trends, I want to number the classes so that I get a sequence/pattern.
My code lets me see when the first event occurred, and it can identify when a new class starts, but it does not place every transaction into a class.
Requirements:
I have added the desired output in a column labeled "Desired Output". Note that each person can have more rows/events, and there can be more people.
Some data
import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])
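To make the 365-day rule concrete, here is a quick check of the gaps between the sorted dates for employee 100 (a minimal sketch using the dates from the sample data above):
import pandas as pd
# Sorted effective dates for employee 100, taken from the sample data above
dates = pd.Series(pd.to_datetime(["2013-01-01", "2014-07-01", "2015-06-05", "2016-01-01"]))
print(dates.diff())
# 0        NaT   <- first transaction, starts "Unique Leave Event 1"
# 1   546 days   <- more than 365 days since the previous row, so "Unique Leave Event 2" starts
# 2   339 days   <- within 365 days of the previous row, stays in event 2
# 3   210 days   <- within 365 days of the previous row, stays in event 2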
Some code I have tried
df['Effective Date'] = df['Effective Date'].astype('datetime64[ns]')
df['EmplidShift'] = df['Employee ID'].shift(-1)
df['Effdt-Shift'] = df['Effective Date'].shift(-1)
df['Prior Row in Same Emplid Class'] = "No"
df['Effdt Diff'] = df['Effdt-Shift'] - df['Effective Date']
df['Effdt Diff'] = (pd.to_timedelta(df['Effdt Diff'], unit='d') + pd.to_timedelta(1,unit='s')).astype('timedelta64[D]')
df['Cumul. Count'] = df.groupby('Employee ID').cumcount()
df['Groupby'] = df.groupby('Employee ID')['Cumul. Count'].transform('max')
df['First Row Appears?'] = ""
df['First Row Appears?'][df['Cumul. Count'] == df['Groupby']] = "First Row"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Effdt > 1 Yr?'] = ""
df['Effdt > 1 Yr?'][ ((df['Prior Row in Same Emplid Class'] == "Yes" ) & (df['Effdt Diff'] < -365)) ] = "Yes"
df['Unique Leave Event'] = ""
df['Unique Leave Event'][ (df['Effdt > 1 Yr?'] == "Yes") | (df['First Row Appears?'] == "First Row") ] = "Unique Leave Event"
df
You can do this without looping or iterating over the dataframe. As Wes McKinney describes, you can use .apply() with a groupby object and define a function that is applied to each group. If you combine that with .shift() (as here), you can get the result without any loops.
A concise example:
# Group by Employee ID
grouped = df.groupby("Employee ID")
# Define function
def get_unique_events(group):
    # Convert to date and sort by date, like @Khris did
    group["Effective Date"] = pd.to_datetime(group["Effective Date"])
    group = group.sort_values("Effective Date")
    event_series = (group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')).apply(lambda x: int(x)).cumsum() + 1
    return event_series
event_df = pd.DataFrame(grouped.apply(get_unique_events).rename("Unique Event")).reset_index(level=0)
df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)
df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))
df['Match'] = df['Desired Output'] == df['Output']
print(df)
Output:
Employee ID Effective Date Desired Output Unique Event \
3 100 2013-01-01 Unique Leave Event 1 1
2 100 2014-07-01 Unique Leave Event 2 2
1 100 2015-06-05 Unique Leave Event 2 2
0 100 2016-01-01 Unique Leave Event 2 2
6 200 2013-01-01 Unique Leave Event 1 1
5 200 2015-01-01 Unique Leave Event 2 2
4 200 2016-01-01 Unique Leave Event 2 2
7 300 2014-01 Unique Leave Event 1 1
Output Match
3 Unique Leave Event 1 True
2 Unique Leave Event 2 True
1 Unique Leave Event 2 True
0 Unique Leave Event 2 True
6 Unique Leave Event 1 True
5 Unique Leave Event 2 True
4 Unique Leave Event 2 True
7 Unique Leave Event 1 True
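As an aside, the same numbering can also be sketched without a custom function, by sorting first and letting groupby().diff() compute the per-employee gaps (a minimal sketch, assuming the same df as above):
alt = df.copy()
alt["Effective Date"] = pd.to_datetime(alt["Effective Date"])
alt = alt.sort_values(["Employee ID", "Effective Date"])
# Gap to the previous transaction within each employee; NaT for each employee's first row
gaps = alt.groupby("Employee ID")["Effective Date"].diff()
# Flag gaps over 365 days and take a cumulative sum per employee to number the events
alt["Unique Event"] = (gaps > pd.Timedelta("365 days")).astype(int).groupby(alt["Employee ID"]).cumsum() + 1
print(alt[["Employee ID", "Effective Date", "Unique Event"]])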
For clarity, a more verbose example:
import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])
# Group by Employee ID
grouped = df.groupby("Employee ID")
# Define a function to get the unique events
def get_unique_events(group):
    # Convert to date and sort by date, like @Khris did
    group["Effective Date"] = pd.to_datetime(group["Effective Date"])
    group = group.sort_values("Effective Date")
    # Define a series of booleans to determine whether the time between dates is over 365 days
    # Use .shift(1) to look back one row
    is_year = group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')
    # Convert booleans to integers (0 for False, 1 for True)
    is_year_int = is_year.apply(lambda x: int(x))
    # Use the cumulative sum function in pandas to get the cumulative adjustment from the first date.
    # Add one to start the first event as 1 instead of 0
    event_series = is_year_int.cumsum() + 1
    return event_series
# Run function on df and put results into a new dataframe
# Convert Employee ID back from an index to a column with .reset_index(level=0)
event_df = pd.DataFrame(grouped.apply(get_unique_events).rename("Unique Event")).reset_index(level=0)
# Merge the dataframes
df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)
# Add string to match desired format
df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))
# Check to see if output matches desired output
df['Match'] = df['Desired Output'] == df['Output']
print(df)
You get the same output:
Employee ID Effective Date Desired Output Unique Event \
3 100 2013-01-01 Unique Leave Event 1 1
2 100 2014-07-01 Unique Leave Event 2 2
1 100 2015-06-05 Unique Leave Event 2 2
0 100 2016-01-01 Unique Leave Event 2 2
6 200 2013-01-01 Unique Leave Event 1 1
5 200 2015-01-01 Unique Leave Event 2 2
4 200 2016-01-01 Unique Leave Event 2 2
7 300 2014-01 Unique Leave Event 1 1
Output Match
3 Unique Leave Event 1 True
2 Unique Leave Event 2 True
1 Unique Leave Event 2 True
0 Unique Leave Event 2 True
6 Unique Leave Event 1 True
5 Unique Leave Event 2 True
4 Unique Leave Event 2 True
7 Unique Leave Event 1 True
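If only the final label is needed, the helper columns can be dropped afterwards (a small follow-up sketch using the df from above):
# Confirm every row matches the desired output, then keep just the final label
assert df['Match'].all()
result = df[['Employee ID', 'Effective Date', 'Output']].rename(columns={'Output': 'Unique Leave Event'})
print(result)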