PySpark - convert a JSON string to a DataFrame

Jan*_*gcy 5 python apache-spark pyspark jupyter-notebook

I have a test2.json file containing a simple JSON document:

{
  "Name": "something",
  "Url": "https://stackoverflow.com",
  "Author": "jangcy",
  "BlogEntries": 100,
  "Caller": "jangcy"
}

I have uploaded the file to blob storage and then created a DataFrame from it:

df = spark.read.json("/example/data/test2.json")

Then I can view it without any problems:

df.show()
+------+-----------+------+---------+--------------------+
|Author|BlogEntries|Caller|     Name|                 Url|
+------+-----------+------+---------+--------------------+
|jangcy|        100|jangcy|something|https://stackover...|
+------+-----------+------+---------+--------------------+

Second scenario: I declare the same JSON string directly in my notebook:

newJson = '{  "Name": "something",  "Url": "https://stackoverflow.com",  "Author": "jangcy",  "BlogEntries": 100,  "Caller": "jangcy"}'

I can print it and so on. But if I now try to create a DataFrame from it:

df = spark.read.json(newJson)

I get a 'Relative path in absolute URI' error:

Traceback (most recent call last):
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/readwriter.py", line 249, in json
    return self._df(self._jreader.json(self._spark._sc._jvm.PythonUtils.toSeq(path)))
  File "/usr/hdp/current/spark2-client/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/hdp/current/spark2-client/python/pyspark/sql/utils.py", line 79, in deco
    raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.IllegalArgumentException: 'java.net.URISyntaxException: Relative path in absolute URI: {  "Name":%20%22something%22,%20%20%22Url%22:%20%22https:/stackoverflow.com%22,%20%20%22Author%22:%20%22jangcy%22,%20%20%22BlogEntries%22:%20100,%20%20%22Caller%22:%20%22jangcy%22%7D'

Should I apply additional transformations to the newJson string? If so, what should they be? Please forgive me if this is too trivial, as I am very new to Python and Spark.

I am using a Jupyter notebook with the PySpark3 kernel.

Thanks in advance.

Ram*_*jan 12

You can do the following:

newJson = '{"Name":"something","Url":"https://stackoverflow.com","Author":"jangcy","BlogEntries":100,"Caller":"jangcy"}'
df = spark.read.json(sc.parallelize([newJson]))
df.show(truncate=False)

which should give:

+------+-----------+------+---------+-------------------------+
|Author|BlogEntries|Caller|Name     |Url                      |
+------+-----------+------+---------+-------------------------+
|jangcy|100        |jangcy|something|https://stackoverflow.com|
+------+-----------+------+---------+-------------------------+
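The error in the question happens because `spark.read.json` expects a file path (or, as in the answer above, an RDD of JSON strings), so passing the raw JSON text makes Spark try to interpret it as a URI. One alternative worth noting, assuming a running `SparkSession` named `spark` as in the question, is to parse the string with Python's standard `json` module and hand the resulting dict to `spark.createDataFrame`. A minimal sketch of the parsing step (the Spark call is left as a comment since it needs a live session):

```python
import json

newJson = '{"Name":"something","Url":"https://stackoverflow.com","Author":"jangcy","BlogEntries":100,"Caller":"jangcy"}'

# Parse the JSON text into a plain Python dict first.
record = json.loads(newJson)

# With a live SparkSession, the dict (wrapped in a list) can be turned
# into a DataFrame directly, without going through sc.parallelize:
#   df = spark.createDataFrame([record])
#   df.show(truncate=False)

print(record["Name"])  # → something
```

This avoids the URI interpretation entirely, at the cost of parsing on the driver, so it only suits small in-memory strings like this one.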

  • Thanks a lot, Ramesh. Works like a charm! :) (2 upvotes)