I am running code on my machine to do frequent pattern mining with FP-growth, but pyspark throws an error and I don't know how to fix it. Could someone who uses pyspark help me?
First I load the data:
data = sc.textFile(somewhere)
This step runs without error. Then I build the transactions:
transactions = data.map(lambda line: line.strip().split(' '))
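Just to illustrate what that split does, here is a made-up line (not from my real file) with consecutive spaces:

line = u'A  B  C'                  # note the double spaces
items = line.strip().split(' ')
print(items)                       # [u'A', u'', u'B', u'', u'C'] -- empty strings appear between consecutive spaces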
Next I train the model (FPGrowth is imported from pyspark.mllib.fpm):
from pyspark.mllib.fpm import FPGrowth
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
This raises the following error:
An error occurred while calling o19.trainFPGrowthModel.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 1 times, most recent failure: Lost task 1.0 in stage 1.0 (TID 3, localhost): org.apache.spark.SparkException: Items in a transaction must be unique but got WrappedArray(, , A, , Seq, 0xBB20C554Ack, 0xE6A8BA01Win, 0x7D78TcpLen, 20).
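If I read the message correctly, all items inside a single transaction must be unique, and the empty strings apparently count as repeated items. I assume even a tiny hand-made input like the following would hit the same check (this is just my guess, not taken from my data):

from pyspark.mllib.fpm import FPGrowth

# hand-made transactions; the second one repeats 'B' and contains two empty strings
test = sc.parallelize([[u'A', u'B'],
                       [u'A', u'B', u'B', u'', u'']])
FPGrowth.train(test, minSupport=0.2, numPartitions=10)   # I expect this to raise the same SparkException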
My data looks like this:
transactions.take(1)
[[u'03/07',
u' 10',
u' 22',
u' 04.439824',
u' …
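My guess is that the empty strings produced by consecutive separators (plus any genuinely repeated tokens) are the duplicates the error complains about. Would cleaning each transaction like this, i.e. dropping empty items and keeping each item only once, be the right way to fix it, or is there a better approach?

transactions = data.map(lambda line: list(set(
    item for item in line.strip().split(' ') if item)))   # drop empties, de-duplicate per transaction
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)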