Extracting bytes from Spark BinaryType

R1t*_*chY 0 pyspark spark-dataframe pyspark-sql

I have a table with a binary column of type BinaryType:

>>> df.show(3)
+--------+--------------------+
|       t|               bytes|
+--------+--------------------+
|0.145533|[10 50 04 89 00 3...|
|0.345572|[60 94 05 89 80 9...|
|0.545574|[99 50 68 89 00 7...|
+--------+--------------------+
only showing top 3 rows
>>> df.schema
StructType(List(StructField(t,DoubleType,true),StructField(bytes,BinaryType,true)))

If I try to extract the first byte of the binary, I get an exception from Spark:

>>> df.select(df["t"], df["bytes"].getItem(0)).show(3)
AnalysisException: u"Can't extract value from bytes#477;"

Casting to ArrayType(ByteType) doesn't work either:

>>> from pyspark.sql.types import ArrayType, ByteType
>>> df.select(df["t"], df["bytes"].cast(ArrayType(ByteType())).getItem(0)).show(3)
AnalysisException: u"cannot resolve '`bytes`' due to data type mismatch: cannot cast BinaryType to ArrayType(ByteType,true) ..."

How can I extract the bytes?

Dan*_*ula 5

You can write a simple udf for this:

from pyspark.sql import functions as f

a = bytearray([10, 50, 4])  # bytes 0x0A, 0x32, 0x04
df = sqlContext.createDataFrame([(1, a), (2, a)], ("t", "bytes"))
df.show()
+---+----------+
|  t|     bytes|
+---+----------+
|  1|[0A 32 04]|
|  2|[0A 32 04]|
+---+----------+
u = f.udf(lambda a: a[0])  # indexing a bytearray yields the byte's integer value
df.select(u(df['bytes']).alias("first")).show()
+-----+
|first|
+-----+
|   10|
|   10|
+-----+
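Note that udf defaults to a StringType return type, so the value above comes back as a string. If you would rather have the byte as a numeric column, one possible variant (not from the original answer; first_byte is just an illustrative name) is to declare the return type explicitly:

from pyspark.sql import functions as f
from pyspark.sql.types import IntegerType

# Sketch: same idea as above, but with an explicit IntegerType return type
# so the extracted byte is a numeric column instead of a string.
first_byte = f.udf(lambda a: int(a[0]) if a else None, IntegerType())
df.select(df["t"], first_byte(df["bytes"]).alias("first")).show()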

EDIT

If you want the position to extract to be a parameter, you can use a bit of currying:
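What follows is only a sketch of such a curried udf, reusing the df defined above; the helper name byte_at and the explicit IntegerType return type are choices made here for illustration, not something taken from the original answer:

from pyspark.sql import functions as f
from pyspark.sql.types import IntegerType

# Hypothetical helper: byte_at(i) builds a udf that extracts the byte at index i.
byte_at = lambda i: f.udf(lambda a: int(a[i]) if a is not None and len(a) > i else None,
                          IntegerType())

df.select(byte_at(0)(df["bytes"]).alias("first"),
          byte_at(1)(df["bytes"]).alias("second")).show()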
