Posts by user Joo*_*rla

Using collect_list and collect_set in Spark SQL

According to the documentation, the collect_set and collect_list functions should be available in Spark SQL. However, I cannot get them to work. I am running Spark 1.6.0 via a Docker image.

I am trying to do the following in Scala:

import org.apache.spark.sql.functions._ 

df.groupBy("column1") 
  .agg(collect_set("column2")) 
  .show() 

At runtime I get the following error:

Exception in thread "main" org.apache.spark.sql.AnalysisException: undefined function collect_set; 

I also tried it with pyspark, and it fails there too. The documentation states that these functions are aliases of Hive UDAFs, but I cannot figure out how to enable them.

How can I resolve this? Thanks!
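For reference, in Spark 1.6 collect_set and collect_list are backed by Hive UDAFs, so they typically require a HiveContext rather than a plain SQLContext. A minimal sketch (assuming a Spark build with Hive support; the sample data is invented for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.functions._

val sc = new SparkContext(new SparkConf().setAppName("collect-demo"))

// collect_set/collect_list resolve to Hive UDAFs in Spark 1.6,
// so the DataFrame must be created through a HiveContext.
val sqlContext = new HiveContext(sc)
import sqlContext.implicits._

val df = Seq(("a", 1), ("a", 2), ("b", 3)).toDF("column1", "column2")

df.groupBy("column1")
  .agg(collect_set("column2"))
  .show()
```

The same applies in pyspark: building the DataFrame from a `HiveContext(sc)` instead of `SQLContext(sc)` is usually what makes these functions resolvable.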

hive apache-spark apache-spark-sql

15 votes · 1 answer · 20k views

Error creating a DynamoDB Hive table from Spark

As part of trying to store a DataFrame into a DynamoDB table, I am trying to create a DynamoDB Hive table. I am using Python/PySpark, and I launch my Spark application with --jars /usr/share/aws/emr/ddb/lib/emr-ddb-hive.jar.

sqlContext = HiveContext(sc)

sqlContext.sql('CREATE EXTERNAL TABLE ddb (col1 string) \
    STORED BY "org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler" \
    TBLPROPERTIES ("dynamodb.table.name" = "table1", "dynamodb.column.mapping" = "col1:col1")')

However, I get the following error:

INFO DDLTask: Use StorageHandler-supplied org.apache.hadoop.hive.dynamodb.DynamoDBSerDe for table default.ddb
ERROR DDLTask: java.lang.NoSuchMethodError: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initSerdeParams(Lorg/apache/hadoop/conf/Configuration;Ljava/util/Properties;Ljava/lang/String;)Lorg/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe$SerDeParameters;
at org.apache.hadoop.hive.dynamodb.DynamoDBSerDe.initialize(DynamoDBSerDe.java:51)
at org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:527)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:391)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:276)
at org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:258)
at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:605)
at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:694)
at org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4135)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
....

I am using an EMR 4.3 configuration (all applications). It may be related to this Hive issue.

Is there any way to work around this? Thanks!
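The NoSuchMethodError suggests a version mismatch: emr-ddb-hive.jar was compiled against a SerDe API that the Hive classes bundled with Spark no longer expose. One possible workaround (a sketch, not verified on this exact EMR release) is to issue the DDL through the Hive CLI on the master node, where the EMR-provided storage-handler jar is matched to the cluster's own Hive version:

```shell
# Run on the EMR master node. The hive CLI uses the cluster's Hive jars,
# which are built against the same SerDe API as emr-ddb-hive.jar.
hive -e 'CREATE EXTERNAL TABLE ddb (col1 string)
  STORED BY "org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler"
  TBLPROPERTIES ("dynamodb.table.name" = "table1",
                 "dynamodb.column.mapping" = "col1:col1");'
```

Once the table exists in the metastore, the Spark application can write to it without ever running the CREATE TABLE statement itself.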

hive amazon-emr amazon-dynamodb apache-spark apache-spark-sql

5 votes · 0 answers · 588 views

Writing a simple RDD to DynamoDB from Spark

I am stuck trying to import a basic RDD dataset into DynamoDB. Here is the code:

import org.apache.hadoop.mapred.JobConf

var rdd = sc.parallelize(Array(("", Map("col1" -> Map("s" -> "abc"), "col2" -> Map("n" -> "123")))))

var jobConf = new JobConf(sc.hadoopConfiguration)
jobConf.set("dynamodb.output.tableName", "table_x")
jobConf.set("mapred.output.format.class", "org.apache.hadoop.dynamodb.write.DynamoDBOutputFormat")

rdd.saveAsHadoopDataset(jobConf)

Here is the error I get:

16/02/28 15:40:38 WARN TaskSetManager: Lost task 7.0 in stage 1.0 (TID 18, ip-172-31-9-224.eu-west-1.compute.internal): java.lang.ClassCastException: java.lang.String cannot be cast to org.apache.hadoop.io.Text
at org.apache.hadoop.dynamodb.write.DefaultDynamoDBRecordWriter.convertValueToDynamoDBItem(DefaultDynamoDBRecordWriter.java:10)
at org.apache.hadoop.dynamodb.write.AbstractDynamoDBRecordWriter.write(AbstractDynamoDBRecordWriter.java:90)
at org.apache.spark.SparkHadoopWriter.write(SparkHadoopWriter.scala:96)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1199)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) …
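The ClassCastException indicates that DynamoDBOutputFormat expects pairs of Hadoop writables, not plain Scala strings and maps. A hedged sketch of the conversion, assuming the emr-ddb jar's DynamoDBItemWritable and the AWS SDK's AttributeValue are on the classpath (signatures per the EMR DynamoDB connector, not verified against every version):

```scala
import java.util.{HashMap => JHashMap}

import com.amazonaws.services.dynamodbv2.model.AttributeValue
import org.apache.hadoop.dynamodb.DynamoDBItemWritable
import org.apache.hadoop.io.Text

// Map each row to the (Text, DynamoDBItemWritable) pair the output format expects.
// `rdd` and `jobConf` are the values defined in the question's snippet.
val ddbRdd = rdd.map { case (_, cols) =>
  val attrs = new JHashMap[String, AttributeValue]()
  cols.foreach {
    case (name, m) if m.contains("s") => attrs.put(name, new AttributeValue().withS(m("s")))
    case (name, m) if m.contains("n") => attrs.put(name, new AttributeValue().withN(m("n")))
  }
  val item = new DynamoDBItemWritable()
  item.setItem(attrs)
  (new Text(""), item) // the key is ignored by the DynamoDB writer
}

ddbRdd.saveAsHadoopDataset(jobConf)
```

The essential point is that both halves of the pair must be Hadoop `Writable` types; the `"s"`/`"n"` markers in the original maps translate to string and number attributes respectively.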

hadoop amazon-emr amazon-dynamodb apache-spark

5 votes · 1 answer · 4847 views