In my main transform, I run an algorithm in Foundry by doing a groupby followed by applyInPandas. The build takes a very long time, and one idea is to organize the files with hash partitioning/bucketing so as to avoid the shuffle read and sort.
For an MCVE, I have the following dataset:
def example_df():
    return spark.createDataFrame(
        [("1", "2", 1.0), ("1", "3", 2.0), ("2", "4", 3.0), ("2", "5", 5.0), ("2", "2", 10.0)],
        ("id_1", "id_2", "v"))
The transform I want to apply is:
def df1(example_df):
    def subtract_mean(pdf):
        v = pdf.v
        return pdf.assign(v=v - v.mean())

    return example_df.groupby("id_1", "id_2").applyInPandas(
        subtract_mean, schema="id_1 string, id_2 string, v double")
When I look at the raw query plan without any partitioning, it looks like this:
Physical plan:
Execute FoundrySaveDatasetCommand `ri.foundry.main.transaction.00000059-eb1b-61f4-bdb8-a030ac6baf0a@master`.`ri.foundry.main.dataset.eb664037-fcae-4ce2-b92b-bd103cd504b3`, ErrorIfExists, [id_1, id_2, v], ComputedStatsServiceV2Blocking{_endpointChannelFactory=DialogueChannel@3127a629{channelName=dialogue-nonreloading-ComputedStatsServiceV2Blocking, delegate=com.palantir.dialogue.core.DialogueChannel$Builder$$Lambda$713/0x0000000800807c40@70f51090}, runtime=com.palantir.conjure.java.dialogue.serde.DefaultConjureRuntime@6c67a62a}, com.palantir.foundry.spark.catalog.caching.CachingSchemaService@7d881feb, com.palantir.foundry.spark.catalog.caching.CachingMetadataService@57a1ef9e, com.palantir.foundry.spark.catalog.FoundrySparkResolver@4d38f6f5, com.palantir.foundry.spark.auth.DefaultFoundrySparkAuthSupplier@21103ab4
+- AdaptiveSparkPlan isFinalPlan=true
+- == Final Plan ==
*(3) BasicStats `ri.foundry.main.transaction.00000059-eb1b-61f4-bdb8-a030ac6baf0a@master`.`ri.foundry.main.dataset.eb664037-fcae-4ce2-b92b-bd103cd504b3`
+- FlatMapGroupsInPandas [id_1#487, id_2#488], subtract_mean(id_1#487, id_2#488, v#489), [id_1#497, …

partitioning apache-spark pyspark palantir-foundry foundry-code-repositories
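For reference, one way to attack the exchange feeding FlatMapGroupsInPandas is to hash-partition on the grouping keys up front. A minimal sketch, reusing the df1/subtract_mean transform above (the partition count of 8 is an arbitrary placeholder, and whether the extra shuffle is actually elided should be checked against the resulting query plan):

def df1(example_df):
    def subtract_mean(pdf):
        v = pdf.v
        return pdf.assign(v=v - v.mean())

    # Hash-partition on the grouping keys first; if this distribution already
    # satisfies the groupby, Spark may reuse it instead of adding another exchange.
    pre_partitioned = example_df.repartition(8, "id_1", "id_2")
    return pre_partitioned.groupby("id_1", "id_2").applyInPandas(
        subtract_mean, schema="id_1 string, id_2 string, v double")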
I have a pipeline set up in my Foundry instance that uses incremental computation, but for some reason it is not doing what I expect. Namely, I want to read the previous output of the transform, take the maximum of its date column, and then only read the input rows whose dates come after that maximum date.
For some reason it is not behaving as I expect, and stepping through the code in the build/analyze/modify loop is very frustrating.
My code looks like this:
from pyspark.sql import functions as F, types as T, DataFrame
from transforms.api import transform, Input, Output, incremental
from datetime import date, timedelta

JUMP_DAYS = 1
START_DATE = date(year=2021, month=10, day=1)

OUTPUT_SCHEMA = T.StructType([
    T.StructField("date", T.DateType()),
    T.StructField("value", T.IntegerType())
])


@incremental(semantic_version=1)
@transform(
    my_input=Input("/path/to/my/input"),
    my_output=Output("/path/to/my/output")
)
def only_write_one_day(my_input, my_output):
    """Filter the input to only rows that are a day after the last written output and process them"""
    # Get the previous output and full current input
    previous_output_df = …

palantir-foundry foundry-code-repositories foundry-python-transform
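The body is cut off above, so for context here is a minimal sketch of the pattern being described: read the previous output in 'previous' mode, take its maximum date, and only process input rows after it. The mode names and set_mode call follow the transforms.api incremental API; the filtering logic and column handling are assumptions reconstructed from the question, not the poster's actual code, and it reuses the imports and constants from the snippet above.

@incremental(semantic_version=1)
@transform(
    my_input=Input("/path/to/my/input"),
    my_output=Output("/path/to/my/output")
)
def only_write_one_day(my_input, my_output):
    """Filter the input to only rows that are a day after the last written output and process them"""
    # Previous output; passing the schema keeps this read from failing on the
    # very first (empty) incremental run.
    previous_output_df = my_output.dataframe('previous', OUTPUT_SCHEMA)

    # Latest date already written, or a seed just before START_DATE if nothing exists yet.
    max_written = previous_output_df.agg(F.max("date")).collect()[0][0]
    last_date = max_written if max_written is not None else START_DATE - timedelta(days=JUMP_DAYS)

    # Full current view of the input, filtered down to the next day only.
    input_df = my_input.dataframe('current')
    to_write = input_df.filter(F.col("date") == F.lit(last_date + timedelta(days=JUMP_DAYS)))

    # Append to the existing output rather than replacing it (the default in incremental mode).
    my_output.set_mode('modify')
    my_output.write_dataframe(to_write.select("date", "value"))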
I need to upload a test dataset with column names and their respective values in order to test the functionality of production code. How can I upload a dataset without authentication in Palantir Foundry? Can anyone please advise?
I noticed that my Code Repository warns me that using withColumn in a for/while loop is an anti-pattern. Why is this not recommended? Isn't this normal usage of the PySpark API?
pyspark palantir-foundry foundry-code-repositories foundry-python-transform
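For illustration, a minimal standalone sketch of the flagged pattern and a common alternative; the data and the doubling transformation are made up:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 2, 3)], ["a", "b", "c"])

# Anti-pattern: every withColumn call adds another projection to the logical
# plan, so a long loop can make query analysis and optimization very slow.
for c in df.columns:
    df = df.withColumn(c, F.col(c) * 2)

# Common alternative: build all the expressions first and apply them in a
# single select, producing one projection regardless of the number of columns.
df2 = spark.createDataFrame([(1, 2, 3)], ["a", "b", "c"])
df2 = df2.select([(F.col(c) * 2).alias(c) for c in df2.columns])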
In Palantir Foundry, I am trying to read all the XML files from a dataset. Then, in a for loop, I parse the XML files.
Up to the second-to-last line, the code runs fine with no errors.
from transforms.api import transform, Input, Output
from transforms.verbs.dataframes import sanitize_schema_for_parquet
from bs4 import BeautifulSoup
import pandas as pd
import lxml


@transform(
    output=Output("/Spring/xx/datasets/mydataset2"),
    source_df=Input("ri.foundry.main.dataset.123"),
)
def read_xml(ctx, source_df, output):
    df = pd.DataFrame()
    filesystem = source_df.filesystem()
    hadoop_path = filesystem.hadoop_path
    files = [f"{hadoop_path}/{f.path}" for f in filesystem.ls()]
    for i in files:
        with open(i, 'r') as f:
            file = f.read()
        soup = BeautifulSoup(file, 'xml')
        data = []
        for e in soup.select('offer'):
            data.append({
                'meldezeitraum': e.find_previous('data').get('meldezeitraum'), …

python pandas pyspark palantir-foundry foundry-code-repositories
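The snippet is cut off above; purely as a sketch of how the tail of such a transform is often written (the concat step and the Spark conversion are assumptions, not the poster's code), the imported sanitize_schema_for_parquet helper would typically come into play when writing the accumulated pandas frame back out:

        # ...loop over soup.select('offer') continues, building up `data`...
        # Accumulate each file's parsed rows into the overall pandas frame.
        df = pd.concat([df, pd.DataFrame(data)], ignore_index=True)

    # After all files are parsed: convert the accumulated pandas frame to Spark
    # and sanitize the column names for Parquet before writing the output.
    output.write_dataframe(sanitize_schema_for_parquet(ctx.spark_session.createDataFrame(df)))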
Is it possible to save a CSV file from a Foundry Code Repositories transforms-python transform, instead of saving in Parquet format?
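A hedged sketch of one common approach: write the file by hand through the output dataset's filesystem instead of via write_dataframe (which stores Parquet). The dataset paths and the file name are placeholders:

from transforms.api import transform, Input, Output


@transform(
    my_output=Output("/path/to/csv/output"),   # placeholder path
    my_input=Input("/path/to/some/input"),     # placeholder path
)
def write_csv(my_input, my_output):
    # Open a file on the output dataset's filesystem and stream CSV into it,
    # rather than calling write_dataframe (which would produce Parquet files).
    pdf = my_input.dataframe().toPandas()
    with my_output.filesystem().open('export.csv', 'w') as f:
        pdf.to_csv(f, index=False)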