How to groupby and preserve the order of the groups on an already sorted file

A. *_*rid 2 · python, sorting, group-by, pandas, pandas-groupby

I have a large CSV file that is sorted by a few of its columns; let's call these columns sorted_columns.
I would like to perform a groupby on these sorted_columns and apply some logic to each of the groups.

The file does not fully fit in memory, so I want to read it in chunks and perform a groupby on each chunk.

What I have noticed is that the order of the groups is not preserved, even though the file is already sorted by those columns.

Ultimately, this is what I want to do:

import pandas as pd

def run_logic(key, group):
    # some logic
    pass

sorted_columns = ["sorted1", "sorted2", "sorted3"]  # the columns the file is sorted by
df = pd.read_csv("data.csv", chunksize=3)           # chunked reader over the file

last_group = pd.DataFrame()
last_key = None

for chunk_df in df:
    grouped_by_df = chunk_df.groupby(sorted_columns, sort=True)

    for key, group in grouped_by_df:
        if last_key is None or last_key == key:
            # same group as before: keep accumulating its rows
            last_key = key
            last_group = pd.concat([last_group, group])
        else:  # last_key != key, the previous group is complete
            run_logic(last_key, last_group)
            last_key = key
            last_group = group.copy()

# flush the last accumulated group
run_logic(last_key, last_group)

But this does not work, because groupby makes no promise that the order of the groups is preserved. If the same key exists in two consecutive chunks, there is no guarantee that it will be the last group in the first chunk and the first group in the next one. I tried changing the groupby to use sort=False and also tried changing the order of the columns, but it did not help.
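
For reference, this is how the two orderings behave on a single in-memory frame (a minimal illustration, unrelated to my actual file):

import pandas as pd

df = pd.DataFrame({"k": [2, 1, 2], "v": [1, 2, 3]})
# sort=True (the default): groups are returned in sorted key order
print([key for key, _ in df.groupby("k", sort=True)])   # [1, 2]
# sort=False: groups are returned in order of first appearance
print([key for key, _ in df.groupby("k", sort=False)])  # [2, 1]

Neither setting helped me stitch a key that spans two consecutive chunks.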

Does anyone have an idea of how to preserve the order of the groups when the keys are already sorted in the original file?

Is there any other way I could read one complete group at a time from the file?

Ber*_*sen 7

I believe the essence of your problem is that you are trying to aggregate each group in a single iteration over the dataframe. There is a trade-off between how many groups fit in memory and how many times you need to read over the dataframe.

NOTE: I am purposely showing verbose code to convey the idea that iterating over the df multiple times is necessary. Both solutions became relatively complex, but they still achieve what is expected. There are many aspects of the code that could be improved; any help refactoring it is appreciated.

I will use this dummy 'data.csv' file to exemplify my solutions. Save it in the same directory as the script and you can simply copy and paste the solutions and run them.

sorted1,sorted2,sorted3,othe1,other2,other3,other4 
1, 1, 1, 'a', 'a', 'a', 'a'  
1, 1, 1, 'a', 'a', 'a', 'a'
1, 1, 1, 'a', 'a', 'a', 'a'
1, 1, 1, 'a', 'a', 'a', 'a'  
2, 1, 1, 'a', 'a', 'a', 'a'  
2, 1, 1, 'd', 'd', 'd', 'd'
2, 1, 1, 'd', 'd', 'd', 'a'   
3, 1, 1, 'e', 'e', 'e', 'e'  
3, 1, 1, 'b', 'b', 'b', 'b'  
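
If you prefer to generate the file from the script, a minimal sketch that writes a cleaned-up version of the rows above (without the stray whitespace that read_csv would otherwise keep inside the string values):

# note: the header keeps the original "othe1" spelling so the outputs below match
csv_rows = [
    "sorted1,sorted2,sorted3,othe1,other2,other3,other4",
    "1,1,1,'a','a','a','a'",
    "1,1,1,'a','a','a','a'",
    "1,1,1,'a','a','a','a'",
    "1,1,1,'a','a','a','a'",
    "2,1,1,'a','a','a','a'",
    "2,1,1,'d','d','d','d'",
    "2,1,1,'d','d','d','a'",
    "3,1,1,'e','e','e','e'",
    "3,1,1,'b','b','b','b'",
]
with open("data.csv", "w") as f:
    f.write("\n".join(csv_rows) + "\n")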

Initial solution, for a scenario in which we can store all the group keys:

First accumulate all the rows of one group, then process it.

Basically, I would do this: for each iteration over the df (the chunks), select one group (or more, if memory allows). Check that it has not been processed yet by looking it up in a dict of already processed group keys, then accumulate the selected group's rows chunk by chunk while iterating over every chunk. When all chunks have been iterated, process the group's data.

import pandas as pd

def run_logic(key, group):
    # some logic
    pass

def accumulate_nextGroup(alreadyProcessed_groups):
    past_accumulated_group = pd.DataFrame()
    pastChunk_groupKey = None
    for chunk_index, chunk_df in enumerate(
            pd.read_csv("data.csv", iterator=True, chunksize=3)):
        groupby_data = chunk_df.groupby(sorted_columns, sort=True)
        for currentChunk_groupKey, currentChunk_group in groupby_data:
            if (pastChunk_groupKey is None or pastChunk_groupKey == currentChunk_groupKey)\
                    and currentChunk_groupKey not in alreadyProcessed_groups:
                # keep accumulating the chosen group's rows across chunks
                pastChunk_groupKey = currentChunk_groupKey
                past_accumulated_group = pd.concat(
                        [past_accumulated_group, currentChunk_group])
                print(f'I am the chosen group({currentChunk_groupKey}) of the moment in the chunk {chunk_index+1}')
            else:
                if currentChunk_groupKey in alreadyProcessed_groups:
                    print(f'group({currentChunk_groupKey}) is not the chosen group because it was already processed')
                else:
                    print(f'group({currentChunk_groupKey}) is not the chosen group({pastChunk_groupKey}) yet :(')
    return pastChunk_groupKey, past_accumulated_group

alreadyProcessed_groups = {}
sorted_columns = ["sorted1", "sorted2", "sorted3"]
number_of_unique_groups = 3  # known in advance for the dummy data.csv
for iteration_in_df in range(number_of_unique_groups):
    groupKey, groupData = accumulate_nextGroup(alreadyProcessed_groups)
    run_logic(groupKey, groupData)
    alreadyProcessed_groups[groupKey] = "Already Processed"
    print(alreadyProcessed_groups)
    print(f"end of {iteration_in_df+1} iterations in df")
    print("*"*50)

Output of solution 1:

I am the chosen group((1, 1, 1)) of the moment in the chunk 1
I am the chosen group((1, 1, 1)) of the moment in the chunk 2
group((2, 1, 1)) is not the chosen group((1, 1, 1)) yet :(
group((2, 1, 1)) is not the chosen group((1, 1, 1)) yet :(
group((3, 1, 1)) is not the chosen group((1, 1, 1)) yet :(
{(1, 1, 1): 'Already Processed'}
end of 1 iterations in df
**************************************************
group((1, 1, 1)) is not the chosen group because it was already processed
group((1, 1, 1)) is not the chosen group because it was already processed
I am the chosen group((2, 1, 1)) of the moment in the chunk 2
I am the chosen group((2, 1, 1)) of the moment in the chunk 3
group((3, 1, 1)) is not the chosen group((2, 1, 1)) yet :(
{(1, 1, 1): 'Already Processed', (2, 1, 1): 'Already Processed'}
end of 2 iterations in df
**************************************************
group((1, 1, 1)) is not the chosen group because it was already processed
group((1, 1, 1)) is not the chosen group because it was already processed
group((2, 1, 1)) is not the chosen group because it was already processed
group((2, 1, 1)) is not the chosen group because it was already processed
I am the chosen group((3, 1, 1)) of the moment in the chunk 3
{(1, 1, 1): 'Already Processed', (2, 1, 1): 'Already Processed', (3, 1, 1): 'Already Processed'}
end of 3 iterations in df
**************************************************

UPDATED SOLUTION 2, for the case in which we cannot store all the group keys in a dict:

When we cannot keep all the group keys in a dict, we need to use the relative index each group gets within each chunk to create a global reference index for every group. (Note that this solution is much more dense than the previous one.)

The main point of this solution is that we do not need the group key values to identify the groups. Going deeper, you can imagine each chunk as one node in a reversed linked list, where the first chunk points to null, the second chunk points to the first, and so on... One iteration over the dataframe corresponds to one traversal of this linked list. For each step (processing one chunk), the only piece of information you need to keep at a time is the previous chunk's head, tail and size; with that information alone you can assign a unique index identifier to the group keys of any chunk.

Another important piece of information: because the file is sorted, the reference index of the first group of a chunk is either the same as that of the last group of the previous chunk (when a group spans the chunk boundary) or that index + 1. This is what makes it possible to infer the global reference index from the chunk index.
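
To make the index bookkeeping concrete before the full code, here is a minimal standalone sketch; assign_global_indexes is a hypothetical helper for illustration only and does not appear in the solution below:

def assign_global_indexes(chunks_of_keys):
    # prev_last holds the (global_index, key) of the previous chunk's last group
    prev_last = None
    for keys in chunks_of_keys:
        if prev_last is None:
            start = 0
        elif keys[0] == prev_last[1]:
            # the group continues across the chunk boundary: reuse its index
            start = prev_last[0]
        else:
            start = prev_last[0] + 1
        indexed = [(start + i, key) for i, key in enumerate(keys)]
        prev_last = indexed[-1]
        yield indexed

# group keys per chunk, as chunksize=3 produces them on the dummy data.csv
chunks = [[(1, 1, 1)], [(1, 1, 1), (2, 1, 1)], [(2, 1, 1), (3, 1, 1)]]
for indexed in assign_global_indexes(chunks):
    print(indexed)
# [(0, (1, 1, 1))]
# [(0, (1, 1, 1)), (1, (2, 1, 1))]
# [(1, (2, 1, 1)), (2, (3, 1, 1))]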

import pandas as pd

def run_logic(key, group):
    # some logic
    pass

def generate_currentChunkGroups_globalReferenceIdx(groupby_data,
        currentChunk_index, previousChunk_link):
    if currentChunk_index == 0:
        # first chunk: the relative indexes are already the global ones
        currentGroups_globalReferenceIdx = [(i, groupKey)
                for i, (groupKey, _) in enumerate(groupby_data)]
    else:
        lastChunk_firstGroup, lastChunk_lastGroup, lastChunk_nGroups \
                = previousChunk_link 
        currentChunk_firstGroupKey = list(groupby_data.groups.keys())[0] 
        currentChunk_nGroups = len(groupby_data.groups.keys())

        lastChunk_lastGroupGlobalIdx, lastChunk_lastGroupKey \
                = lastChunk_lastGroup
        if currentChunk_firstGroupKey == lastChunk_lastGroupKey:
            currentChunk_firstGroupGlobalReferenceIdx =  lastChunk_lastGroupGlobalIdx
        else:
            currentChunk_firstGroupGlobalReferenceIdx =  lastChunk_lastGroupGlobalIdx + 1

        currentGroups_globalReferenceIdx = [
                (currentChunk_firstGroupGlobalReferenceIdx+i, groupKey)
                    for (i,groupKey) in enumerate(groupby_data.groups.keys())
                    ]

    next_previousChunk_link = (currentGroups_globalReferenceIdx[0],
            currentGroups_globalReferenceIdx[-1],
            len(currentGroups_globalReferenceIdx)
    )
    return currentGroups_globalReferenceIdx, next_previousChunk_link   

def accumulate_nextGroup(countOf_alreadyProcessedGroups, lastChunk_index, dataframe_accumulator):
    previousChunk_link = None
    currentIdx_beingProcessed = countOf_alreadyProcessedGroups
    for chunk_index, chunk_df in enumerate(pd.read_csv("data.csv",iterator=True, chunksize=3)):
        print(f'ITER:{iteration_in_df} CHUNK:{chunk_index} InfoPrevChunk:{previousChunk_link} lastProcessed_chunk:{lastChunk_index}')
        # chunks strictly before the last processed one: only rebuild the chain
        # of (head, tail, size) links, no rows are accumulated here
        if (lastChunk_index is not None) and (chunk_index < lastChunk_index):
            groupby_data = chunk_df.groupby(sorted_columns, sort=True) 
            currentChunkGroups_globalReferenceIdx, next_previousChunk_link \
                    = generate_currentChunkGroups_globalReferenceIdx(
                            groupby_data, chunk_index, previousChunk_link
                            )
        elif (lastChunk_index is None) or (chunk_index >= lastChunk_index):
            if (chunk_index == lastChunk_index):
                groupby_data = chunk_df.groupby(sorted_columns, sort=True) 
                currentChunkGroups_globalReferenceIdx, next_previousChunk_link \
                        = generate_currentChunkGroups_globalReferenceIdx(
                                groupby_data, chunk_index, previousChunk_link
                                )
                currentChunkGroupGlobalIndexes = [GlobalIndex \
                        for (GlobalIndex,_) in currentChunkGroups_globalReferenceIdx]
                if((lastChunk_index is None) or (lastChunk_index <= chunk_index)):
                    lastChunk_index = chunk_index
                if currentIdx_beingProcessed in currentChunkGroupGlobalIndexes:
                    currentGroupKey_beingProcessed = [tup 
                            for tup in currentChunkGroups_globalReferenceIdx
                            if tup[0] == currentIdx_beingProcessed][0][1]
                    currentChunk_group = groupby_data.get_group(currentGroupKey_beingProcessed)
                    dataframe_accumulator = pd.concat(
                            [dataframe_accumulator, currentChunk_group]
                                                     )
            else: 
                groupby_data = chunk_df.groupby(sorted_columns, sort=True) 
                currentChunkGroups_globalReferenceIdx, next_previousChunk_link \
                        = generate_currentChunkGroups_globalReferenceIdx(
                                groupby_data, chunk_index, previousChunk_link
                                )
                currentChunkGroupGlobalIndexes = [GlobalIndex \
                        for (GlobalIndex,_) in currentChunkGroups_globalReferenceIdx]
                if((lastChunk_index is None) or (lastChunk_index <= chunk_index)):
                    lastChunk_index = chunk_index
                if currentIdx_beingProcessed in currentChunkGroupGlobalIndexes:
                    currentGroupKey_beingProcessed = [tup 
                            for tup in currentChunkGroups_globalReferenceIdx
                            if tup[0] == currentIdx_beingProcessed][0][1]
                    currentChunk_group = groupby_data.get_group(currentGroupKey_beingProcessed)
                    dataframe_accumulator = pd.concat(
                            [dataframe_accumulator, currentChunk_group]
                                                     )
                else:
                    # the group being accumulated is finished: remember where
                    # to resume and hand the accumulated rows back
                    countOf_alreadyProcessedGroups += 1
                    lastChunk_index = chunk_index - 1
                    break
        previousChunk_link = next_previousChunk_link
    print(f'Done with chunks for group of global index:{currentIdx_beingProcessed} corresponding to groupKey:{currentGroupKey_beingProcessed}')
    return countOf_alreadyProcessedGroups, lastChunk_index, dataframe_accumulator, currentGroupKey_beingProcessed

sorted_columns = ["sorted1","sorted2","sorted3"]
number_of_unique_groups = 3  # known in advance for the dummy data.csv
lastChunk_index = None 
for iteration_in_df in range(number_of_unique_groups):  
    dataframe_accumulator = pd.DataFrame()
    countOf_alreadyProcessedGroups, lastChunk_index, group_data, currentGroupKey_Processed = \
            accumulate_nextGroup(
                    iteration_in_df, lastChunk_index, dataframe_accumulator
                                )
    run_logic(currentGroupKey_Processed, group_data)
    print(f"end of iteration number {iteration_in_df+1} in the df and processed {currentGroupKey_Processed}")
    print(group_data)
    print("*"*50)


Output of solution 2:

ITER:0 CHUNK:0 InfoPrevChunk:None lastProcessed_chunk:None
ITER:0 CHUNK:1 InfoPrevChunk:((0, (1, 1, 1)), (0, (1, 1, 1)), 1) lastProcessed_chunk:0
ITER:0 CHUNK:2 InfoPrevChunk:((0, (1, 1, 1)), (1, (2, 1, 1)), 2) lastProcessed_chunk:1
Done with chunks for group of global index:0 corresponding to groupKey:(1, 1, 1)
end of iteration number 1 in the df and processed (1, 1, 1)
   sorted1  sorted2  sorted3 othe1 other2 other3 other4 
0        1        1        1   'a'    'a'    'a'   'a'  
1        1        1        1   'a'    'a'    'a'     'a'
2        1        1        1   'a'    'a'    'a'     'a'
3        1        1        1   'a'    'a'    'a'   'a'  
**************************************************
ITER:1 CHUNK:0 InfoPrevChunk:None lastProcessed_chunk:1
ITER:1 CHUNK:1 InfoPrevChunk:((0, (1, 1, 1)), (0, (1, 1, 1)), 1) lastProcessed_chunk:1
ITER:1 CHUNK:2 InfoPrevChunk:((0, (1, 1, 1)), (1, (2, 1, 1)), 2) lastProcessed_chunk:1
Done with chunks for group of global index:1 corresponding to groupKey:(2, 1, 1)
end of iteration number 2 in the df and processed (2, 1, 1)
   sorted1  sorted2  sorted3 othe1 other2 other3  other4 
4        2        1        1   'a'    'a'    'a'    'a'  
5        2        1        1   'd'    'd'    'd'   'd'   
6        2        1        1   'd'    'd'    'd'   'a'   
**************************************************
ITER:2 CHUNK:0 InfoPrevChunk:None lastProcessed_chunk:2
ITER:2 CHUNK:1 InfoPrevChunk:((0, (1, 1, 1)), (0, (1, 1, 1)), 1) lastProcessed_chunk:2
ITER:2 CHUNK:2 InfoPrevChunk:((0, (1, 1, 1)), (1, (2, 1, 1)), 2) lastProcessed_chunk:2
Done with chunks for group of global index:2 corresponding to groupKey:(3, 1, 1)
end of iteration number 3 in the df and processed (3, 1, 1)
   sorted1  sorted2  sorted3 othe1 other2 other3 other4 
7        3        1        1   'e'    'e'    'e'   'e'  
8        3        1        1   'b'    'b'    'b'    'b' 
**************************************************
Run Code Online (Sandbox Code Playgroud)