Splitting strings in a DataFrame using Scala on Spark

Big*_*bie -3 scala apache-spark apache-spark-sql

I have a log file with more than 100 columns. Of those I only need two, "_raw" and "_time", so I loaded the log file as a "csv" DataFrame.

Step 1:

scala> val log = spark.read.format("csv").option("inferSchema", "true").option("header", "true").load("soa_prod_diag_10_jan.csv")
log: org.apache.spark.sql.DataFrame = [ARRAffinity: string, CoordinatorNonSecureURL: string ... 126 more fields]

Step 2: I registered the DataFrame as a temp table: log.createOrReplaceTempView("logs")
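As a side note, the same two columns could also be selected directly with the DataFrame API instead of going through a temp view; a minimal sketch, assuming the log DataFrame from Step 1:

// Select only the two required columns without registering a temp view
val twoCols = log.select("_raw", "_time")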

Step 3: I extracted the two required columns, "_raw" and "_time"

scala> val sqlDF = spark.sql("select _raw, _time from logs")
sqlDF: org.apache.spark.sql.DataFrame = [_raw: string, _time: string]

scala> sqlDF.show(1, false)
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+
|_raw                                                                                                                                                                                                                                                                                                                                                                                                |_time|
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+
|[2019-01-10T23:59:59.998-06:00] [xx_yyy_zz_sss_ra10] [ERROR] [OSB-473003] [oracle.osb.statistics.statistics] [tid: [ACTIVE].ExecuteThread: '28' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 92b39a8b-8234-4d19-9ac7-4908dc79c5ed-0000bd0b,0] [partition-name: DOMAIN] [tenant-name: GLOBAL] Aggregation Server Not Available. Failed to get remote aggregator[[|null |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----+
only showing top 1 row

My requirement:

I need to split the string in the "_raw" column to produce [2019-01-10T23:59:59.998-06:00] [xx_yyy_zz_sss_ra10] [ERROR] [OSB-473003] [oracle.osb.statistics.statistics] [ecid: 92b39a8b-8234-4d19-9ac7-4908dc79c5ed-0000bd0b], with the column names a, b, c, d, e, f respectively.

At the same time, remove all null values from '_raw' and '_time'.

Your answers will be much appreciated :)

Apu*_*dey 5

You can use the split function to split _raw on spaces. This returns an array, from which you can then extract the values. You can also use the regexp_extract function to pull values out of the log message. Both approaches are shown below. I hope it helps.

// Imports for split/regexp_extract and the $-column syntax
import org.apache.spark.sql.functions._
import spark.implicits._

// Creating test data
val df = Seq("[2019-01-10T23:59:59.998-06:00] [xx_yyy_zz_sss_ra10] [ERROR] [OSB-473003] [oracle.osb.statistics.statistics] [tid: [ACTIVE].ExecuteThread: '28' for queue: 'weblogic.kernel.Default (self-tuning)'] [userId: <anonymous>] [ecid: 92b39a8b-8234-4d19-9ac7-4908dc79c5ed-0000bd0b,0] [partition-name: DOMAIN] [tenant-name: GLOBAL] Aggregation Server Not Available. Failed to get remote aggregator[[")
  .toDF("_raw")

val splitDF = df.withColumn("split_raw_arr", split($"_raw", " "))
  .withColumn("A", $"split_raw_arr"(0))
  .withColumn("B", $"split_raw_arr"(1))
  .withColumn("C", $"split_raw_arr"(2))
  .withColumn("D", $"split_raw_arr"(3))
  .withColumn("E", $"split_raw_arr"(4))
  .drop("_raw", "split_raw_arr")

splitDF.show(false)

+-------------------------------+--------------------+-------+------------+----------------------------------+
|A                              |B                   |C      |D           |E                                 |
+-------------------------------+--------------------+-------+------------+----------------------------------+
|[2019-01-10T23:59:59.998-06:00]|[xx_yyy_zz_sss_ra10]|[ERROR]|[OSB-473003]|[oracle.osb.statistics.statistics]|
+-------------------------------+--------------------+-------+------------+----------------------------------+

val extractedDF = df
  .withColumn("a", regexp_extract($"_raw", "\\[(.*?)\\]",1))
  .withColumn("b", regexp_extract($"_raw", "\\[(.*?)\\] \\[(.*?)\\]",2))
  .withColumn("c", regexp_extract($"_raw", "\\[(.*?)\\] \\[(.*?)\\] \\[(.*?)\\]",3))
  .withColumn("d", regexp_extract($"_raw", "\\[(.*?)\\] \\[(.*?)\\] \\[(.*?)\\] \\[(.*?)\\]",4))
  .withColumn("e", regexp_extract($"_raw", "\\[(.*?)\\] \\[(.*?)\\] \\[(.*?)\\] \\[(.*?)\\] \\[(.*?)\\]",5))
  .withColumn("f", regexp_extract($"_raw", "(?<=ecid: )(.*?)(?=,)",1))
  .drop("_raw")

+-----------------------------+------------------+-----+----------+--------------------------------+---------------------------------------------+
|a                            |b                 |c    |d         |e                               |f                                            |
+-----------------------------+------------------+-----+----------+--------------------------------+---------------------------------------------+
|2019-01-10T23:59:59.998-06:00|xx_yyy_zz_sss_ra10|ERROR|OSB-473003|oracle.osb.statistics.statistics|92b39a8b-8234-4d19-9ac7-4908dc79c5ed-0000bd0b|
+-----------------------------+------------------+-----+----------+--------------------------------+---------------------------------------------+
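The question also asks to drop rows where '_raw' or '_time' is null; a minimal sketch for that part, assuming the sqlDF DataFrame from Step 3 of the question:

// Drop rows with a null in either of the two columns before splitting
val cleanedDF = sqlDF.na.drop(Seq("_raw", "_time"))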