Jam*_*mes (9) · tags: scala, sbt, pyspark, aws-glue
I want to be able to write Scala in my local IDE and then deploy it to AWS Glue as part of a build process. But I'm having trouble finding the libraries required to build the GlueApp skeleton that AWS generates.
The aws-java-sdk-glue doesn't contain the imported classes, and I can't find those libraries anywhere else. They must exist somewhere, but perhaps they are just a Java/Scala port of this library: aws-glue-libs
Template Scala code from AWS:
import com.amazonaws.services.glue.GlueContext
import com.amazonaws.services.glue.MappingSpec
import com.amazonaws.services.glue.errors.CallSite
import com.amazonaws.services.glue.util.GlueArgParser
import com.amazonaws.services.glue.util.Job
import com.amazonaws.services.glue.util.JsonOptions
import org.apache.spark.SparkContext
import scala.collection.JavaConverters._
object GlueApp {
  def main(sysArgs: Array[String]): Unit = {
    val spark: SparkContext = new SparkContext()
    val glueContext: GlueContext = new GlueContext(spark)
    // @params: [JOB_NAME]
    val args = GlueArgParser.getResolvedOptions(sysArgs, Seq("JOB_NAME").toArray)
    Job.init(args("JOB_NAME"), glueContext, args.asJava)
    // @type: DataSource
    // @args: [database = "raw-tickers-oregon", table_name = "spark_delivery_2_1", transformation_ctx = "datasource0"]
    // @return: datasource0
    // @inputs: []
    val datasource0 = glueContext.getCatalogSource(database = "raw-tickers-oregon", tableName = "spark_delivery_2_1", redshiftTmpDir = "", transformationContext = "datasource0").getDynamicFrame()
    // @type: ApplyMapping
    // @args: [mapping = [("exchangeid", "int", "exchangeid", "int"), ("data", "struct", "data", "struct")], transformation_ctx = "applymapping1"]
    // @return: applymapping1
    // @inputs: [frame = datasource0]
    val applymapping1 = datasource0.applyMapping(mappings = Seq(("exchangeid", "int", "exchangeid", "int"), ("data", "struct", "data", "struct")), caseSensitive = false, transformationContext = "applymapping1")
    // @type: DataSink
    // @args: [connection_type = "s3", connection_options = {"path": "s3://spark-ticker-oregon/target", "compression": "gzip"}, format = "json", transformation_ctx = "datasink2"]
    // @return: datasink2
    // @inputs: [frame = applymapping1]
    val datasink2 = glueContext.getSinkWithFormat(connectionType = "s3", options = JsonOptions("""{"path": "s3://spark-ticker-oregon/target", "compression": "gzip"}"""), transformationContext = "datasink2", format = "json").writeDynamicFrame(applymapping1)
    Job.commit()
  }
}
And the build.sbt I have started putting together for a local build:
name := "aws-glue-scala"
version := "0.1"
scalaVersion := "2.11.12"
updateOptions := updateOptions.value.withCachedResolution(true)
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.1"
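Presumably, since Glue supplies Spark itself at runtime, the Spark dependency would be scoped provided when packaging an artifact for deployment, so it is on the compile classpath but left out of the deployed jar. A sketch of that adjustment (my assumption, not something AWS documents for this skeleton):
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.1" % "provided"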
The documentation for the AWS Glue Scala API seems to outline functionality similar to what is available in the AWS Glue Python library. So perhaps all that is required is to download and build the PySpark AWS Glue library and add it to the classpath? Perhaps that's possible, since the Glue Python library uses Py4J.
bot*_*que (13)
@Frederic gave a very helpful hint to get the dependency from s3://aws-glue-jes-prod-us-east-1-assets/etl/jars/glue-assembly.jar.
Unfortunately, that version of glue-assembly.jar is outdated and brings Spark in version 2.1. It's fine if you only use backwards-compatible features, but if you rely on the latest Spark version (and possibly the latest Glue features), you can get the appropriate jar from a Glue dev-endpoint, under /usr/share/aws/glue/etl/jars/glue-assembly.jar.
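For reference, the (older) S3-hosted jar mentioned above can be fetched with a plain AWS CLI copy, assuming configured credentials:
aws s3 cp s3://aws-glue-jes-prod-us-east-1-assets/etl/jars/glue-assembly.jar .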
If you have a dev-endpoint named my-dev-endpoint, you can copy the current jar from it:
export DEV_ENDPOINT_HOST=`aws glue get-dev-endpoint --endpoint-name my-dev-endpoint --query 'DevEndpoint.PublicAddress' --output text`
scp -i dev-endpoint-private-key \
glue@$DEV_ENDPOINT_HOST:/usr/share/aws/glue/etl/jars/glue-assembly.jar .
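Once the jar is copied locally, the simplest way to put it on the local build's classpath is sbt's default unmanaged-jar directory: any jar placed in lib/ at the project root is picked up automatically (a sketch, assuming the sbt project layout above):
mkdir -p lib
cp glue-assembly.jar lib/
With that in place, the GlueContext and related imports from the skeleton should resolve in the local build.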