Spark Redshift with Python

Agu*_*uid 2 amazon-redshift apache-spark databricks

I am trying to connect Spark with Amazon Redshift, but I get this error:

[error screenshot not reproduced]

My code is as follows:

from pyspark.sql import SQLContext
from pyspark import SparkContext

sc = SparkContext(appName="Connect Spark with Redshift")
sql_context = SQLContext(sc)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "<ACCESSID>")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "<ACCESSKEY>")

df = sql_context.read \
    .option("url", "jdbc:redshift://example.coyf2i236wts.eu-central-    1.redshift.amazonaws.com:5439/agcdb?user=user&password=pwd") \
    .option("dbtable", "table_name") \
    .option("tempdir", "bucket") \
    .load()

bra*_*raj 8

Here is a step-by-step process for connecting to Redshift.

  • Download the Redshift JDBC connector jar. Try the following command:
wget "https://s3.amazonaws.com/redshift-downloads/drivers/RedshiftJDBC4-1.2.1.1001.jar"
  • Save the following code in a Python file (the .py you want to run) and replace the credentials accordingly.
from pyspark.conf import SparkConf
from pyspark.sql import SparkSession, HiveContext

# Initialize the Spark session (on YARN, with Hive support).
spark = SparkSession.builder.master("yarn").appName("Connect to redshift").enableHiveSupport().getOrCreate()
sc = spark.sparkContext
sqlContext = HiveContext(sc)

# S3 credentials for the temporary directory the connector uses.
sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", "<ACCESSKEYID>")
sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", "<ACCESSKEYSECRET>")


# Read the Redshift table into a DataFrame via the spark-redshift connector.
taxonomyDf = sqlContext.read \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:postgresql://url.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") \
    .option("dbtable", "table_name") \
    .option("tempdir", "s3://mybucket/") \
    .load() 
  • Run spark-submit as below:
spark-submit --packages com.databricks:spark-redshift_2.10:0.5.0 --jars RedshiftJDBC4-1.2.1.1001.jar test.py
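
Once the job runs, you can sanity-check the load and, with the same connector, write a DataFrame back to Redshift. The following is a minimal sketch, not part of the original answer; the JDBC URL, the output table table_name_copy, and the bucket are placeholders:

# Quick check that the table loaded as expected.
taxonomyDf.printSchema()
taxonomyDf.show(5)

# Write back through the same connector (placeholders as above);
# mode("error") aborts if the target table already exists.
taxonomyDf.write \
    .format("com.databricks.spark.redshift") \
    .option("url", "jdbc:postgresql://url.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx") \
    .option("dbtable", "table_name_copy") \
    .option("tempdir", "s3://mybucket/") \
    .mode("error") \
    .save()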


Joe*_*ris -1

I think the s3n:// URL style has been deprecated and/or removed.

Try defining your keys as "fs.s3.awsAccessKeyId" instead.
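
Applied to the code in the question, that would look like the sketch below (the credential placeholders are the asker's own):

# Use the s3 scheme for the credential keys instead of s3n,
# and point tempdir at a matching s3:// URI.
sc._jsc.hadoopConfiguration().set("fs.s3.awsAccessKeyId", "<ACCESSID>")
sc._jsc.hadoopConfiguration().set("fs.s3.awsSecretAccessKey", "<ACCESSKEY>")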