PySpark: collect a DataFrame with nested columns as dictionaries

Ras*_*ka 1 python dictionary pyspark

I have a DataFrame with the following nested schema:

root
 |-- data: struct (nullable = true)
 |    |-- ac_failure: string (nullable = true)
 |    |-- ac_failure_delayed: string (nullable = true)
 |    |-- alarm_exit_error: boolean (nullable = true)
 |    |-- alarm_has_delay: string (nullable = true)
 |    |-- nodes: array (nullable = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- battery_status: string (nullable = true)
 |    |    |    |-- device_id: long (nullable = true)
 |    |    |    |-- device_manufacture_id: long (nullable = true)
 |    |    |    |-- device_name: string (nullable = true)
 |    |    |    |-- device_product_id: long (nullable = true)
 |    |    |    |-- device_state: string (nullable = true)
 |    |    |    |-- device_status: string (nullable = true)
 |    |    |    |-- device_supported_command_class_list: string (nullable = true)
 |    |    |    |-- device_type: string (nullable = true)
 |    |    |    |-- endpoint_id: long (nullable = true)
 |    |    |    |-- partition_id: long (nullable = true)
 |-- device_id: long (nullable = true)
 |-- device_type: string (nullable = true)
 |-- event: string (nullable = true)
 |-- event_class: string (nullable = true)
 |-- event_timestamp: long (nullable = true)
 |-- event_type: string (nullable = true)
 |-- imei: string (nullable = true)
 |-- partition_id: long (nullable = true)
 |-- source: string (nullable = true)

I want to collect the rows as dictionaries. I tried:

seq = [row.asDict() for row in df2_final.collect()]

This is what I get (one sample row):

{'data': Row(ac_failure=None, ac_failure_delayed=None, alarm_exit_error=None, alarm_has_delay='true', nodes=None),
 'device_id': 2,
 'device_type': 'panel',
 'event': 'alarm_state',
 'event_class': 'panel_alarm',
 'event_timestamp': 1586921122886,
 'event_type': 'zone_alarm_perimeter',
 'imei': '9900000000000',
 'operation': 'report',
 'partition_id': 0,
 'source': 'panel'}

What can I do to get the data back as a plain dictionary? For example:

{'data': {'ac_failure': None, 'ac_failure_delayed': None, 'alarm_exit_error': None, 'alarm_has_delay': 'true', 'nodes': None},
 'device_id': 2,
 'device_type': 'panel',
 'event': 'alarm_state',
 'event_class': 'panel_alarm',
 'event_timestamp': 1586921122886,
 'event_type': 'zone_alarm_perimeter',
 'imei': '9900000000000',
 'operation': 'report',
 'partition_id': 0,
 'source': 'panel'}

I want all nested columns returned as dicts rather than pyspark.sql.types.Row objects. TIA

Ras*_*ka 6

Thanks @jxc. This works:

seq = [row.asDict(recursive=True) for row in df2_final.collect()]
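
For reference, here is a minimal, self-contained sketch of the difference between the two calls. The schema and values below are made up for illustration (loosely mirroring the data/nodes shape above), not the original data:

from pyspark.sql import SparkSession, Row

spark = SparkSession.builder.getOrCreate()

# A tiny DataFrame with a nested struct and an array of structs,
# analogous to the `data`/`nodes` columns above (illustrative values only)
df = spark.createDataFrame([
    Row(data=Row(alarm_has_delay='true',
                 nodes=[Row(device_id=2, device_name='panel')]),
        device_id=2,
        event='alarm_state'),
])

row = df.collect()[0]

# Plain asDict() stops at the top level: nested structs stay Row objects
print(row.asDict())
# {'data': Row(alarm_has_delay='true', nodes=[Row(device_id=2, device_name='panel')]), ...}

# recursive=True converts nested Rows as well, including those inside arrays
print(row.asDict(recursive=True))
# {'data': {'alarm_has_delay': 'true', 'nodes': [{'device_id': 2, 'device_name': 'panel'}]}, ...}

Note that recursive=True also descends into lists and maps, so the Row structs inside the nodes array come back as plain dicts too.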