Spark 2.3.0 reading a text file with the header option is not working

Odi*_*seo 4 header text-files python-2.7 apache-spark spark-dataframe

The code below runs and creates a Spark dataframe from a text file. However, I am trying to use the header option to make the first line the header, and for some reason that doesn't seem to be happening. I can't understand why! It must be something silly, but I can't solve it.

>>>from pyspark.sql import SparkSession
>>>spark = SparkSession.builder.master("local").appName("Word Count")\
    .config("spark.some.config.option", "some-value")\
    .getOrCreate()
>>>df = spark.read.option("header", "true")\
    .option("delimiter", ",")\
    .option("inferSchema", "true")\
    .text("StockData/ETFs/aadr.us.txt")
>>>df.take(3)

This returns the following:

[Row(value=u'Date,Open,High,Low,Close,Volume,OpenInt'), Row(value=u'2010-07-21,24.333,24.333,23.946,23.946,43321,0'), Row(value=u'2010-07-22,24.644,24.644,24.362,24.487,18031,0')]

>>>df.columns

And this returns the following:

['value']

Ram*_*jan 7

Problem

The problem is that you are using the .text API instead of .csv or .load. If you read the .text API documentation, it says

def text(self, paths):
    """Loads text files and returns a :class:`DataFrame` whose schema starts
    with a string column named "value", and followed by partitioned columns
    if there are any.

    Each line in the text file is a new row in the resulting DataFrame.

    :param paths: string, or list of strings, for input path(s).

    >>> df = spark.read.text('python/test_support/sql/text-test.txt')
    >>> df.collect()
    [Row(value=u'hello'), Row(value=u'this')]
    """
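
In other words, with .text the header line is just another row. What a header option in a CSV reader is supposed to do, consume the first line as column names, can be sketched with Python's stdlib csv module on sample lines like the ones in the question (no Spark needed):

```python
import csv
import io

# Sample lines matching the question's file: the first line is the header.
raw = (
    "Date,Open,High,Low,Close,Volume,OpenInt\n"
    "2010-07-21,24.333,24.333,23.946,23.946,43321,0\n"
    "2010-07-22,24.644,24.644,24.362,24.487,18031,0\n"
)

# DictReader consumes the first row as column names -- the behavior
# header=true enables in a csv reader, and which .text never applies.
rows = list(csv.DictReader(io.StringIO(raw)))
print(list(rows[0].keys()))  # ['Date', 'Open', 'High', 'Low', 'Close', 'Volume', 'OpenInt']
print(rows[0]["Open"])       # 24.333
```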

Solution using .csv

Change the .text function call to .csv and you should be fine:

df = spark.read.option("header", "true") \
    .option("delimiter", ",") \
    .option("inferSchema", "true") \
    .csv("StockData/ETFs/aadr.us.txt")

df.show(2, truncate=False)

which should give you

+-------------------+------+------+------+------+------+-------+
|Date               |Open  |High  |Low   |Close |Volume|OpenInt|
+-------------------+------+------+------+------+------+-------+
|2010-07-21 00:00:00|24.333|24.333|23.946|23.946|43321 |0      |
|2010-07-22 00:00:00|24.644|24.644|24.362|24.487|18031 |0      |
+-------------------+------+------+------+------+------+-------+
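
Note that Date comes back as a timestamp and the prices as doubles because of inferSchema: Spark samples the values in each column and picks the narrowest type that parses. A simplified pure-Python sketch of that idea (not Spark's actual implementation):

```python
from datetime import datetime

def infer(value):
    """Pick the narrowest type that parses the value -- a simplified
    sketch of what a CSV inferSchema pass does per cell."""
    candidates = (
        (int, "int"),
        (float, "double"),
        (lambda v: datetime.strptime(v, "%Y-%m-%d"), "timestamp"),
    )
    for parse, name in candidates:
        try:
            parse(value)
            return name
        except ValueError:
            pass
    return "string"  # fallback when nothing narrower fits

print([infer(v) for v in ["2010-07-21", "24.333", "43321", "OpenInt"]])
# ['timestamp', 'double', 'int', 'string']
```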

Solution using .load

.load assumes the file is in parquet format if the format option is not defined, so you need to define the format as well. (Since Spark 2.0 the csv source is built in, so format("csv") also works without the Databricks package.)

df = spark.read\
    .format("com.databricks.spark.csv")\
    .option("header", "true") \
    .option("delimiter", ",") \
    .option("inferSchema", "true") \
    .load("StockData/ETFs/aadr.us.txt")

df.show(2, truncate=False)

I hope the answer is helpful.