Can unix_timestamp() return unix time in milliseconds in Apache Spark?

van*_*d39 · 14 · tags: timestamp, unix-timestamp, apache-spark

I am trying to get the unix time, in milliseconds (13 digits), from a timestamp field, but it currently returns it in seconds (10 digits).

scala> var df = Seq("2017-01-18 11:00:00.000", "2017-01-18 11:00:00.123", "2017-01-18 11:00:00.882", "2017-01-18 11:00:02.432").toDF()
df: org.apache.spark.sql.DataFrame = [value: string]

scala> df = df.selectExpr("value timeString", "cast(value as timestamp) time")
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp]


scala> df = df.withColumn("unix_time", unix_timestamp(df("time")))
df: org.apache.spark.sql.DataFrame = [timeString: string, time: timestamp ... 1 more field]

scala> df.take(4)
res63: Array[org.apache.spark.sql.Row] = Array(
[2017-01-18 11:00:00.000,2017-01-18 11:00:00.0,1484758800], 
[2017-01-18 11:00:00.123,2017-01-18 11:00:00.123,1484758800], 
[2017-01-18 11:00:00.882,2017-01-18 11:00:00.882,1484758800], 
[2017-01-18 11:00:02.432,2017-01-18 11:00:02.432,1484758802])

Even though 2017-01-18 11:00:00.123 and 2017-01-18 11:00:00.000 are different, I get the same unix time 1484758800.

What am I missing?

小智 10

The milliseconds are hidden in the fractional part of the timestamp format.

Try this:

df = df.withColumn("time_in_milliseconds", col("time").cast("double"))

You will get something like 1484758800.792, where 792 is the milliseconds.

At least it worked for me (Scala, Spark, Hive).
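
If you need the 13-digit integer form rather than a fractional-seconds double, here is a minimal sketch building on the question's df (the column name unix_time_ms is illustrative, not from the original answer):

import org.apache.spark.sql.functions.col

// cast(timestamp -> double) yields epoch seconds with fractional milliseconds;
// multiplying by 1000 and casting to long gives a 13-digit epoch-millis value
val withMillis = df.withColumn(
  "unix_time_ms",
  (col("time").cast("double") * 1000).cast("long")
)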


San*_*wad 5

Implementing the approach suggested in Dao Thi's answer:

import pyspark.sql.functions as F
df = spark.createDataFrame([('22-Jul-2018 04:21:18.792 UTC', ),('23-Jul-2018 04:21:25.888 UTC',)], ['TIME'])
df.show(2,False)
df.printSchema()

Output:

+----------------------------+
|TIME                        |
+----------------------------+
|22-Jul-2018 04:21:18.792 UTC|
|23-Jul-2018 04:21:25.888 UTC|
+----------------------------+
root
|-- TIME: string (nullable = true)

Convert the string time format (including milliseconds) to unix_timestamp (double). Extract the milliseconds from the string using the substring method (start_position = -7, length_of_substring = 3) and add them to the unix_timestamp separately (the substring is cast to float for the addition):

# unix_timestamp() gives whole seconds; the milliseconds are recovered from the
# last 7 characters of the string ("792 UTC" -> "792") and added back as a fraction
df1 = df.withColumn("unix_timestamp",
                    F.unix_timestamp(df.TIME, 'dd-MMM-yyyy HH:mm:ss.SSS z')
                    + F.substring(df.TIME, -7, 3).cast('float') / 1000)

Convert the unix_timestamp (double) to the timestamp data type in Spark:

df2 = df1.withColumn("TimestampType",F.to_timestamp(df1["unix_timestamp"]))
df2.show(n=2,truncate=False)

This will give you the following output:

+----------------------------+----------------+-----------------------+
|TIME                        |unix_timestamp  |TimestampType          |
+----------------------------+----------------+-----------------------+
|22-Jul-2018 04:21:18.792 UTC|1.532233278792E9|2018-07-22 04:21:18.792|
|23-Jul-2018 04:21:25.888 UTC|1.532319685888E9|2018-07-23 04:21:25.888|
+----------------------------+----------------+-----------------------+

Check the schema:

df2.printSchema()


root
 |-- TIME: string (nullable = true)
 |-- unix_timestamp: double (nullable = true)
 |-- TimestampType: timestamp (nullable = true)


小智 5

It cannot be done with unix_timestamp(), but since Spark 3.1.0 there is a built-in function called unix_millis():

unix_millis(timestamp) - Returns the number of milliseconds since 1970-01-01 00:00:00 UTC. Truncates higher levels of precision.
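
For example, applied to the time column from the question (a sketch, assuming Spark >= 3.1.0; going through selectExpr keeps it independent of whether your version's Scala functions API exposes unix_millis directly):

// unix_millis is a Spark SQL built-in since 3.1.0
df.selectExpr("time", "unix_millis(time) AS unix_time_ms").show(false)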


Đào*_*ươu 4

unix_timestamp() returns the unix timestamp in seconds.

The last 3 digits of the milliseconds value are the same as the last 3 digits of the timestamp string (1.999 sec = 1999 milliseconds), so just take the last 3 digits of the timestamp string and append them to the end of the seconds string.
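
A minimal sketch of that string-concatenation idea, using the timeString and time columns from the question (the unix_time_ms column name is illustrative, and this assumes the string always ends with exactly 3 millisecond digits):

import org.apache.spark.sql.functions.{col, concat, substring, unix_timestamp}

// 10-digit seconds from unix_timestamp(), plus the last 3 characters of the
// original string (".123" -> "123"), glued together into 13-digit epoch millis
val withMillis = df.withColumn(
  "unix_time_ms",
  concat(
    unix_timestamp(col("time")).cast("string"),
    substring(col("timeString"), -3, 3)
  ).cast("long")
)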