aam*_*irr 2 apache-spark apache-spark-sql pyspark apache-spark-mllib
I am trying to plot the feature importances of certain tree-based models with their column names. I am using PySpark.
Since I have both textual categorical variables and numeric ones, I had to use a pipeline approach, roughly:
1. use StringIndexer to index the string columns
2. use OneHotEncoder to convert the categorical variables into binary SparseVectors
3. use VectorAssembler to create the features column containing the feature vector
Some sample code from the docs for steps 1, 2 and 3:
from pyspark.ml import Pipeline
from pyspark.ml.feature import OneHotEncoderEstimator, StringIndexer, VectorAssembler

categoricalColumns = ["workclass", "education", "marital_status", "occupation",
                      "relationship", "race", "sex", "native_country"]
stages = []  # stages in our Pipeline
for categoricalCol in categoricalColumns:
    # Category indexing with StringIndexer
    stringIndexer = StringIndexer(inputCol=categoricalCol,
                                  outputCol=categoricalCol + "Index")
    # Use OneHotEncoderEstimator to convert categorical variables into binary SparseVectors
    encoder = OneHotEncoderEstimator(inputCols=[stringIndexer.getOutputCol()],
                                     outputCols=[categoricalCol + "classVec"])
    # Add stages. These are not run here, but will run all at once later on.
    stages += [stringIndexer, encoder]

numericCols = ["age", "fnlwgt", "education_num", "capital_gain",
               "capital_loss", "hours_per_week"]
assemblerInputs = [c + "classVec" for c in categoricalColumns] + numericCols
assembler = VectorAssembler(inputCols=assemblerInputs, outputCol="features")
stages += [assembler]

# Create a Pipeline.
pipeline = Pipeline(stages=stages)
# Run the feature transformations.
#  - fit() computes feature statistics as needed.
#  - transform() actually transforms the features.
pipelineModel = pipeline.fit(dataset)
dataset = pipelineModel.transform(dataset)
Finally, the model is trained on this transformed dataset.
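The training step itself is not shown in the post; a minimal sketch of what it might look like, assuming a DecisionTreeClassifier, a label column named "label", and an 80/20 random split (all of these names and parameters are illustrative, not from the original question):

from pyspark.ml.classification import DecisionTreeClassifier

# Assumed label column; replace with the actual target column of the dataset.
dt = DecisionTreeClassifier(labelCol="label", featuresCol="features", maxDepth=5)
train, test = dataset.randomSplit([0.8, 0.2], seed=42)

dtModel_1 = dt.fit(train)               # tree-based model referenced below
predictions = dtModel_1.transform(test)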
After training and evaluating, I can use model.featureImportances to get the feature rankings, but I don't get the feature/column names, only the feature indices, like this:
print dtModel_1.featureImportances
(38895,[38708,38714,38719,38720,38737,38870,38894],[0.0742343395738,0.169404823667,0.100485791055,0.0105823115814,0.0134236162982,0.194124862158,0.437744255667])
How do I map these back to the initial column names and their values, so that I can plot them?
小智 8
Extract the metadata as shown here by user6910411:
from itertools import chain

attrs = sorted(
    (attr["idx"], attr["name"])
    for attr in chain(*dataset
                      .schema["features"]
                      .metadata["ml_attr"]["attrs"]
                      .values()))
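As an aside, the "ml_attr" metadata written by the pipeline stages is typically a dict keyed by attribute type (e.g. "binary" for the one-hot columns and "numeric" for the numeric ones), each mapping to a list of {"idx": ..., "name": ...} entries. One way to inspect it, assuming the pipeline above has already been applied to dataset:

# Inspect the raw attribute metadata attached to the "features" column.
meta = dataset.schema["features"].metadata["ml_attr"]["attrs"]
for attr_type, entries in meta.items():
    print(attr_type, entries[:3])  # first few {"idx": ..., "name": ...} entries per type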
Then combine attrs with the feature importances:
[(name, dtModel_1.featureImportances[idx])
for idx, name in attrs
if dtModel_1.featureImportances[idx]]
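The if clause above keeps only features with a non-zero importance. To cover the plotting part of the question, one possible sketch collects the pairs into a pandas DataFrame on the driver and plots them with matplotlib (this assumes pandas and matplotlib are available; it is not part of the original answer):

import pandas as pd
import matplotlib.pyplot as plt

# Pair every feature name with its importance (zero-importance features included here).
importances = [(name, float(dtModel_1.featureImportances[idx])) for idx, name in attrs]

fi_df = (pd.DataFrame(importances, columns=["feature", "importance"])
         .sort_values("importance", ascending=False)
         .head(20))

fi_df.plot.barh(x="feature", y="importance", figsize=(8, 6), legend=False)
plt.gca().invert_yaxis()  # most important feature at the top
plt.xlabel("importance")
plt.tight_layout()
plt.show()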