Tags: python-3.x, pyspark, pyspark-sql
I am trying to subtract two columns of a PySpark DataFrame in Python, and I am running into problems. Both columns are of timestamp type: one is date1 = 2011-01-03 13:25:59, and I want to subtract it from the other date column, date2 = 2011-01-03 13:27:00. So I want date2 - date1 as a separate timeDiff column showing the difference between the two, e.g. timeDiff = 00:01:01.

How can I do this in PySpark?

I tried the following code:
#timeDiff = df.withColumn(('timeDiff', col(df['date2']) - col(df['date1'])))
This code did not work.

So I tried something simpler:
timeDiff = df['date2'] - df['date1']
That actually worked, but when I then tried to add the result as a separate column to my DataFrame with the following code:
df = df.withColumn("Duration", timeDiff)
it failed with the following error:
Py4JJavaError: An error occurred while calling o107.withColumn.
: org.apache.spark.sql.AnalysisException: cannot resolve '(`date2` - `date1`)' due to data type mismatch: '(`date2` - `date1`)' requires (numeric or calendarinterval) type, not timestamp;;
Can anyone suggest another approach, or tell me how to fix this error?
Hope this helps! You cannot subtract timestamp columns directly (as the error says, subtraction requires a numeric or calendarinterval type), so convert each column to epoch seconds with unix_timestamp and subtract those:
from pyspark.sql.functions import unix_timestamp

# sample data
df = sc.parallelize([
    ['2011-01-03 13:25:59', '2011-01-03 13:27:00'],
    ['2011-01-03 3:25:59',  '2011-01-03 3:30:00']
]).toDF(('date1', 'date2'))

# unix_timestamp parses each string into epoch seconds,
# so the difference is the duration in seconds
timeDiff = (unix_timestamp('date2', "yyyy-MM-dd HH:mm:ss")
            - unix_timestamp('date1', "yyyy-MM-dd HH:mm:ss"))
df = df.withColumn("Duration", timeDiff)
df.show()
The output is:
+-------------------+-------------------+--------+
| date1| date2|Duration|
+-------------------+-------------------+--------+
|2011-01-03 13:25:59|2011-01-03 13:27:00| 61|
| 2011-01-03 3:25:59| 2011-01-03 3:30:00| 241|
+-------------------+-------------------+--------+
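The Duration column above is in seconds. Since the question asks for an HH:mm:ss string like 00:01:01, a minimal follow-up sketch (assuming the df and Duration column from the snippet above) is to format the seconds with format_string:

from pyspark.sql.functions import col, format_string

# Break the duration in seconds into hours, minutes and seconds
# and render it as a zero-padded HH:mm:ss string
df = df.withColumn(
    "timeDiff",
    format_string(
        "%02d:%02d:%02d",
        (col("Duration") / 3600).cast("int"),
        ((col("Duration") % 3600) / 60).cast("int"),
        (col("Duration") % 60).cast("int"),
    ),
)
df.show()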
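Alternatively, if date1 and date2 are already timestamp columns (as in the original question) rather than strings, a sketch that avoids re-parsing is to cast them to long and subtract:

from pyspark.sql.functions import col

# Casting a timestamp to long yields epoch seconds, which are
# plain numbers and can be subtracted directly
df = df.withColumn(
    "Duration",
    col("date2").cast("long") - col("date1").cast("long"),
)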