awk/sed/perl one-liner + how to print only the properties lines from a JSON file

yae*_*ael 10 sed awk perl json jq

How can I print only the properties lines from a JSON file?

Example JSON file:

{
  "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
  "items" : [
    {
      "href" : "http://master02:8080/api/v1/clusters/HDP/configurations?type=kafka-env&tag=version1527250007610",
      "tag" : "version1527250007610",
      "type" : "kafka-env",
      "version" : 8,
      "Config" : {
        "cluster_name" : "HDP",
        "stack_id" : "HDP-2.6"
      },
      "properties" : {
        "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
        "is_supported_kafka_ranger" : "true",
        "kafka_log_dir" : "/var/log/kafka",
        "kafka_pid_dir" : "/var/run/kafka",
        "kafka_user" : "kafka",
        "kafka_user_nofile_limit" : "128000",
        "kafka_user_nproc_limit" : "65536"
      }
    }
  ]
}

Expected output:

    "content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
    "is_supported_kafka_ranger" : "true",
    "kafka_log_dir" : "/var/log/kafka",
    "kafka_pid_dir" : "/var/run/kafka",
    "kafka_user" : "kafka",
    "kafka_user_nofile_limit" : "128000",
    "kafka_user_nproc_limit" : "65536"

Rom*_*est 33

jq is the right tool for processing JSON data:

jq '.items[].properties | to_entries[] | "\(.key) : \(.value)"' input.json

Output:

"content : \n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi"
"is_supported_kafka_ranger : true"
"kafka_log_dir : /var/log/kafka"
"kafka_pid_dir : /var/run/kafka"
"kafka_user : kafka"
"kafka_user_nofile_limit : 128000"
"kafka_user_nproc_limit : 65536"
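For reference, to_entries is what converts the properties object into individual key/value pairs before the string interpolation formats them; a minimal sketch of that intermediate step, assuming the same input.json:

# to_entries turns an object such as {"a":"1","b":"2"} into
# [{"key":"a","value":"1"},{"key":"b","value":"2"}],
# so the filter below visits every property as a separate {key, value} object
jq '.items[].properties | to_entries[]' input.json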

If you really must have every key and value double-quoted, use the following modification:

jq -r '.items[].properties | to_entries[]
       | "\"\(.key)\" : \"\(.value | gsub("\n";"\\n"))\","' input.json

Output:

"content" : "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=$PATH:$JAVA_HOME/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi",
"is_supported_kafka_ranger" : "true",
"kafka_log_dir" : "/var/log/kafka",
"kafka_pid_dir" : "/var/run/kafka",
"kafka_user" : "kafka",
"kafka_user_nofile_limit" : "128000",
"kafka_user_nproc_limit" : "65536",
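Here -r makes jq print raw strings rather than JSON-encoded ones, and gsub("\n";"\\n") re-escapes the literal newlines so that the multi-line content value stays on a single line. A small sketch of the difference, assuming the same input.json:

# without re-escaping, the embedded newlines in "content" become real line breaks
jq -r '.items[].properties.content' input.json
# with re-escaping, the value prints as one line, as in the expected output
jq -r '.items[].properties.content | gsub("\n";"\\n")' input.json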


Ste*_*itt 28

Please don't get into the habit of parsing structured data with unstructured tools. If you're parsing XML, JSON, YAML and the like, use a dedicated parser, or at the very least convert the structured data into a form that is more appropriate for awk, sed, grep and so on.
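For instance, jq can flatten the interesting part of this file into tab-separated key/value lines that awk then handles comfortably; a hypothetical sketch of that conversion step (the @tsv filter escapes embedded tabs and newlines):

# emit "key<TAB>value" lines, then let awk pick whichever fields are needed
jq -r '.items[].properties | to_entries[] | [.key, .value] | @tsv' yourfile |
  awk -F'\t' '{ print $1 }'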

In this case, gron will help a lot:

$ gron yourfile | grep -F .properties.
json.items[0].properties.content = "\n#!/bin/bash\n\n# Set KAFKA specific environment variables here.\n\n# The java implementation to use.\nexport JAVA_HOME={{java64_home}}\nexport PATH=/usr/lib/ccache:/home/steve/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/bin\nexport PID_DIR={{kafka_pid_dir}}\nexport LOG_DIR={{kafka_log_dir}}\nexport KAFKA_KERBEROS_PARAMS={{kafka_kerberos_params}}\nexport JMX_PORT=9997\n# Add kafka sink to classpath and related depenencies\nif [ -e \"/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\" ]; then\n  export CLASSPATH=:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar\n  export CLASSPATH=:/usr/lib/ambari-metrics-kafka-sink/lib/*\nfi\n\nif [ -f /etc/kafka/conf/kafka-ranger-env.sh ]; then\n. /etc/kafka/conf/kafka-ranger-env.sh\nfi";
json.items[0].properties.is_supported_kafka_ranger = "true";
json.items[0].properties.kafka_log_dir = "/var/log/kafka";
json.items[0].properties.kafka_pid_dir = "/var/run/kafka";
json.items[0].properties.kafka_user = "kafka";
json.items[0].properties.kafka_user_nofile_limit = "128000";
json.items[0].properties.kafka_user_nproc_limit = "65536";

(You can post-process that with | cut -d. -f4- | gron --ungron to get something very close to your desired output, though still valid JSON.)
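Spelled out, that pipeline might look like this (assuming gron is installed); it reassembles the properties keys into a standalone, still-valid JSON object:

gron yourfile | grep -F .properties. | cut -d. -f4- | gron --ungron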

jq is also a good fit here.
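A minimal sketch of what such a jq invocation might look like, assuming only the properties object itself is wanted (the output remains valid JSON, braces included):

jq '.items[].properties' yourfile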