Duplicate columns in a Spark DataFrame

Bam*_*mqf 6 csv hadoop r apache-spark sparkr

I have a 10GB CSV file with duplicate columns on a Hadoop cluster. I want to analyze it in SparkR, so I use the spark-csv package to parse it into a DataFrame:

  df <- read.df(
    sqlContext,
    FILE_PATH,
    source = "com.databricks.spark.csv",
    header = "true",
    mode = "DROPMALFORMED"
  )

But because df has duplicate Email columns, I get an error when I try to select that column:

select(df, 'Email')

15/11/19 15:41:58 ERROR RBackendHandler: select on 1422 failed
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) : 
  org.apache.spark.sql.AnalysisException: Reference 'Email' is ambiguous, could be: Email#350, Email#361.;
    at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolve(LogicalPlan.scala:278)
...

I would like to keep only the first occurrence of the Email column and drop the later one. How can I do that?

sho*_*xer 7

The best way would be to change the column names upstream ;)

However, it seems that is not possible here, so there are a couple of options:

  1. If the columns differ only in case ("email" vs. "Email"), you can enable case sensitivity:

         sql(sqlContext, "set spark.sql.caseSensitive=true")
    
  2. If the column names are exactly the same, you will need to specify the schema manually and skip the first row so the header is ignored (a sketch adapted to the duplicated Email columns follows after this list):

    customSchema <- structType(
      structField("year", "integer"),
      structField("make", "string"),
      structField("model", "string"),
      structField("comment", "string"),
      structField("blank", "string"))
    
    df <- read.df(sqlContext, "cars.csv", source = "com.databricks.spark.csv", header="true", schema = customSchema)
    
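Applied to the question's file, a minimal sketch of option 2 (only the duplicated Email column comes from the question; the other column names are hypothetical placeholders for the real header): give the two occurrences distinct names in the schema, then simply leave the second one out of the select.

    # Hypothetical schema: only the duplicated "Email" column is from the
    # question; the other names stand in for the real header.
    customSchema <- structType(
      structField("name", "string"),
      structField("Email", "string"),      # first occurrence -- keep
      structField("Email_dup", "string"),  # second occurrence -- drop
      structField("age", "integer"))

    df <- read.df(sqlContext, FILE_PATH,
                  source = "com.databricks.spark.csv",
                  header = "true",
                  schema = customSchema,
                  mode = "DROPMALFORMED")

    # With distinct names the reference is no longer ambiguous,
    # so the duplicate can simply be omitted from the selection:
    df <- select(df, "name", "Email", "age")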


Hru*_*shi 5

You can handle this with a single line when starting Spark: after the Spark session has been created successfully, add the following line to set the Spark configuration...

spark.conf.set("spark.sql.caseSensitive", "true")
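
Since the question is tagged sparkr, here is a possible SparkR equivalent of the line above (a sketch, assuming Spark 2.0+, where sparkR.session() replaces the sqlContext used earlier):

    library(SparkR)

    # Pass the property as session configuration at start-up ...
    sparkR.session(sparkConfig = list(spark.sql.caseSensitive = "true"))

    # ... or set it afterwards with a SQL statement, mirroring the line above:
    sql("SET spark.sql.caseSensitive=true")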