FP-growth - Items in a transaction must be unique

Asked by Eas*_*vil · Tags: apache-spark, pyspark, apache-spark-mllib

I am running frequent pattern mining on my machine using FP-growth, but PySpark throws an error that I don't know how to fix. Can anyone who uses PySpark help me?

First I load the data:

data = sc.textFile(somewhere)

This step runs without any error. Then:

transactions = data.map(lambda line: line.strip().split(' '))

Next:

from pyspark.mllib.fpm import FPGrowth

model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)

This raises an error:

An error occurred while calling o19.trainFPGrowthModel.:org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 3, localhost): org.apache.spark.SparkException: Items in a transaction must be unique but got WrappedArray(,  ,  A,  ,  Seq,  0xBB20C554Ack,  0xE6A8BA01Win,  0x7D78TcpLen,  20).

My data looks like this:

 transactions.take(1)

[[u'03/07',
  u' 10',
  u' 22',
  u' 04.439824',
  u' 139',
  u' 1',
  u' 1',
  u' spp_sdf',
  u' SDFCombinationAlert',
  u' Classification',
  u' SenstiveData',
  u' Priority',
  u' 2',
  u' PROTO',
  u' 254',
  u' 197.218.177.69',
  u' 172.16.113.84']]

Answered by zer*_*323 (score 6)

Well, the exception you get is pretty much self-explanatory. Every basket passed to FP-growth has to be a set of items and therefore cannot contain duplicates. For example, this is not valid input:

transactions = sc.parallelize([["A", "A", "B", "C"], ["B", "C", "A", "A"]])
FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
## Py4JJavaError: An error occurred while calling o71.trainFPGrowthModel.
## ...
## Caused by: org.apache.spark.SparkException: 
##   Items in a transaction must be unique but got WrappedArray(A, A, B, C).
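If you want to locate the offending rows first, a quick diagnostic sketch (not part of the fix itself) is to keep only the transactions whose length changes after deduplication:

# Sketch: a transaction contains duplicates exactly when converting it
# to a set shrinks it.
with_duplicates = transactions.filter(lambda items: len(items) != len(set(items)))
with_duplicates.take(5)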

You have to make sure the items are unique before you pass the data downstream:

unique = transactions.map(lambda x: list(set(x))).cache()
FPGrowth.train(unique, minSupport=0.2, numPartitions=10)
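Judging by the WrappedArray in your error message, most of the duplicates are empty or whitespace-only tokens, so it is also worth stripping tokens and dropping the empty ones rather than keeping a stray empty string as an item. A sketch, assuming the fields are actually comma-separated (the leading spaces in your sample suggest the split was done on ',' rather than ' '; adjust the delimiter to your data):

from pyspark.mllib.fpm import FPGrowth

# Sketch: strip every token, drop empty ones, then deduplicate.
# Assumes comma-separated fields.
cleaned = (data
           .map(lambda line: [t.strip() for t in line.split(',')])
           .map(lambda tokens: list(set(t for t in tokens if t)))
           .cache())

model = FPGrowth.train(cleaned, minSupport=0.2, numPartitions=10)
for itemset in model.freqItemsets().take(5):
    print(itemset)

Deduplicating with set() does not preserve the original item order, but FP-growth treats each transaction as a set of items anyway, so the order does not matter here.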

Remarks:

  • It is a good idea to cache the data before running FPGrowth.
  • Subjectively, FP-growth is not the best choice for the kind of data you have.

  • Comment: Is there an algorithm that can handle non-unique items? (2 upvotes)