pyspark.ml pipelines: are custom transformers needed for basic preprocessing tasks?


When getting started with the pyspark.ml Pipeline API, I found myself writing custom transformers for typical preprocessing tasks in order to use them in a pipeline. An example:

from pyspark.ml import Pipeline, Transformer


class CustomTransformer(Transformer):
    # lazy workaround - a transformer needs to have these attributes
    _defaultParamMap = dict()
    _paramMap = dict()
    _params = dict()

class ColumnSelector(CustomTransformer):
    """Transformer that selects a subset of columns
    - to be used as pipeline stage"""

    def __init__(self, columns):
        self.columns = columns


    def _transform(self, data):
        return data.select(self.columns)


class ColumnRenamer(CustomTransformer):
    """Transformer renames one column"""


    def __init__(self, rename):
        self.rename = rename

    def _transform(self, data):
        (colNameBefore, colNameAfter) = self.rename
        return data.withColumnRenamed(colNameBefore, colNameAfter)


class NaDropper(CustomTransformer):
    """
    Drops rows with at least one not-a-number element
    """

    def __init__(self, cols=None):
        self.cols = cols


    def _transform(self, data):
        dataAfterDrop = data.dropna(subset=self.cols) 
        return dataAfterDrop


class ColumnCaster(CustomTransformer):
    """Transformer that casts one column to a given type"""

    def __init__(self, col, toType):
        self.col = col
        self.toType = toType

    def _transform(self, data):
        return data.withColumn(self.col, data[self.col].cast(self.toType))
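
For reference, this is roughly how I chain these stages; the SparkSession, DataFrame and column names below are made up purely for illustration:

from pyspark.ml import Pipeline
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# toy data - the column names "a" and "b" are placeholders
df = spark.createDataFrame(
    [("1", 2.0), (None, 3.0), ("4", None)], ["a", "b"]
)

pipeline = Pipeline(stages=[
    ColumnSelector(["a", "b"]),
    NaDropper(["a", "b"]),
    ColumnCaster("a", "int"),
    ColumnRenamer(("a", "a_int")),
])

# all stages are Transformers, so fit() just collects them into a PipelineModel
pipeline.fit(df).transform(df).show()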

They work, but I'm wondering whether this is a pattern or an anti-pattern: are transformers like these a good way to use the Pipeline API? Was it necessary to implement them, or is equivalent functionality provided somewhere else?

hi-*_*zir answered:

I would say it is primarily opinion-based, although it looks unnecessarily verbose and Python Transformers don't integrate well with the rest of the Pipeline API.
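
For context, here is a sketch of what a more fully integrated version of one of these stages might look like, using the Param machinery plus the DefaultParamsReadable / DefaultParamsWritable mixins from pyspark.ml.util (available since Spark 2.3) so the stage can be copied and persisted with the rest of a Pipeline; it also shows where the verbosity comes from:

from pyspark.ml import Transformer
from pyspark.ml.param import Param, Params, TypeConverters
from pyspark.ml.util import DefaultParamsReadable, DefaultParamsWritable


class ColumnSelector(Transformer, DefaultParamsReadable, DefaultParamsWritable):
    """Selects a subset of columns, declared as a proper Param so the
    stage supports copy(), explainParams(), save() and load()."""

    columns = Param(
        Params._dummy(), "columns", "columns to select",
        typeConverter=TypeConverters.toListString,
    )

    def __init__(self, columns=None):
        super().__init__()
        self._setDefault(columns=[])
        if columns is not None:
            self._set(columns=columns)

    def getColumns(self):
        return self.getOrDefault(self.columns)

    def _transform(self, dataset):
        return dataset.select(self.getColumns())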

It is also worth pointing out that everything you have here can be easily achieved with SQLTransformer. For example:

from pyspark.ml.feature import SQLTransformer

def column_selector(columns):
    return SQLTransformer(
        statement="SELECT {} FROM __THIS__".format(", ".join(columns))
    )

or

def na_dropper(columns):
    return SQLTransformer(
        statement="SELECT * FROM __THIS__ WHERE {}".format(
            " AND ".join(["{} IS NOT NULL".format(x) for x in columns])
        )
    )
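
The rename and cast stages can be phrased the same way; note that, unlike withColumnRenamed or withColumn, plain SQL keeps the original column unless the full column list is spelled out (the helper names below are just illustrative):

def column_renamer(rename):
    before, after = rename
    # keeps the original column as well; list columns explicitly to drop it
    return SQLTransformer(
        statement="SELECT *, {0} AS {1} FROM __THIS__".format(before, after)
    )


def column_caster(col, to_type):
    # adds a casted copy of the column next to the original
    return SQLTransformer(
        statement="SELECT *, CAST({0} AS {1}) AS {0}_casted FROM __THIS__".format(
            col, to_type
        )
    )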

With a little bit of effort you can use SQLAlchemy with the Hive dialect to avoid handwritten SQL.
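
A rough sketch of that idea, assuming SQLAlchemy 1.4+ and PyHive (with its Hive extras) are installed; the point is only that the statement is built programmatically instead of by hand-formatting strings:

from pyhive.sqlalchemy_hive import HiveDialect  # assumes PyHive is installed
from sqlalchemy import column, literal_column, select, text

from pyspark.ml.feature import SQLTransformer


def na_dropper(columns):
    # builds: SELECT * FROM __THIS__ WHERE col1 IS NOT NULL AND col2 IS NOT NULL ...
    query = (
        select(literal_column("*"))
        .select_from(text("__THIS__"))  # text() keeps the placeholder unquoted
        .where(*[column(c).is_not(None) for c in columns])
    )
    return SQLTransformer(statement=str(query.compile(dialect=HiveDialect())))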