Tags: python, apache-spark, rdd, pyspark
I have a Spark RDD with more than 6 billion rows of data that I want to use to train a deep learning model with train_on_batch. I can't fit all of the rows in memory, so I would like to pull roughly 10K rows at a time and batch those into chunks of 64 or 128 (depending on model size). I am currently using rdd.sample(), but I don't think that guarantees I will get all of the rows. Is there a better way to partition the data into something more manageable, so that I can write a generator function that fetches batches? My code is as follows:
data_df = spark.read.parquet(PARQUET_FILE)
print(f'RDD Count: {data_df.count()}') # 6B+

# Pull a small sample into pandas (sample() alone does not guarantee I see every row)
data_sample = data_df.sample(True, 0.0000015).limit(6400)
sample_df = data_sample.toPandas()

def get_batch():
    for row in sample_df.itertuples():
        # TODO: put together a batch size of BATCH_SIZE
        yield row

batch_gen = get_batch()
for i in range(10):
    print(next(batch_gen))
Try this:
from pyspark.sql import functions as F

sample_dict = {}

# Read the parquet file
df = spark.read.parquet("parquet file")

# Add the partition number as a column
df = df.withColumn('partition_num', F.spark_partition_id())
df.persist()

# Collect the distinct partition ids
total_partition = [int(row.partition_num) for row in
                   df.select('partition_num').distinct().collect()]

# Keep one (lazy) DataFrame per partition
for part_num in total_partition:
    sample_dict[part_num] = df.where(df.partition_num == part_num)
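If each partition is small enough to collect to the driver, you could then build model-sized batches from those per-partition DataFrames. Below is a minimal sketch, assuming a hypothetical BATCH_SIZE of 64 and the train_on_batch function from the question:

BATCH_SIZE = 64  # hypothetical model batch size

def batches_from_partitions(sample_dict, batch_size=BATCH_SIZE):
    # Yield pandas DataFrames of at most batch_size rows, one partition at a time
    for part_num, partition_df in sample_dict.items():
        # Bring only this partition into driver memory
        pdf = partition_df.drop('partition_num').toPandas()
        for start in range(0, len(pdf), batch_size):
            yield pdf.iloc[start:start + batch_size]

# Usage:
# for batch in batches_from_partitions(sample_dict):
#     train_on_batch(batch)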
I don't believe Spark lets you offset or paginate your data.
However, you can add an index and then paginate over it. First:
from pyspark.sql.functions import lit

data_df = spark.read.parquet(PARQUET_FILE)
count = data_df.count()
chunk_size = 10000

# Just adding a column to get a schema that includes the id field
df_new_schema = data_df.withColumn('pres_id', lit(1))

# Adding the ids to the rdd (Python 3: no tuple unpacking in lambdas)
rdd_with_index = data_df.rdd.zipWithIndex().map(lambda row_index: list(row_index[0]) + [row_index[1] + 1])

# Creating a dataframe with the index
df_with_index = spark.createDataFrame(rdd_with_index, schema=df_new_schema.schema)

# Iterating over the chunks
for page_num in range(0, count, chunk_size):
    initial_page = page_num
    final_page = initial_page + chunk_size
    where_query = 'pres_id > {0} and pres_id <= {1}'.format(initial_page, final_page)
    chunk_df = df_with_index.where(where_query).toPandas()
    train_on_batch(chunk_df)  # <== Your function here
This is not optimal; the pandas DataFrame conversion makes poor use of Spark, but it will solve your problem.
Don't forget to drop the id column if it would interfere with your features.
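As an illustration, here is a minimal sketch of that last loop, assuming a hypothetical BATCH_SIZE of 64, that drops pres_id and splits each 10K chunk into model-sized batches before calling train_on_batch:

BATCH_SIZE = 64  # hypothetical model batch size

for page_num in range(0, count, chunk_size):
    where_query = 'pres_id > {0} and pres_id <= {1}'.format(page_num, page_num + chunk_size)
    chunk_df = df_with_index.where(where_query).toPandas()
    # Drop the helper id so it does not leak into the features
    chunk_df = chunk_df.drop(columns=['pres_id'])
    for start in range(0, len(chunk_df), BATCH_SIZE):
        train_on_batch(chunk_df.iloc[start:start + BATCH_SIZE])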