How to read a simple text file from Google Cloud Storage using a local Spark-Scala program

Sha*_*awn 5 google-app-engine scala google-cloud-storage apache-spark-sql google-cloud-dataproc

As described in the following blog post,

https://cloud.google.com/blog/big-data/2016/06/google-cloud-dataproc-the-fast-easy-and-safe-way-to-try-spark-20-preview

I am trying to read a file from Google Cloud Storage using Spark-Scala. For this, I imported the following Google Cloud Storage connector and Google Cloud Storage dependencies:

// https://mvnrepository.com/artifact/com.google.cloud/google-cloud-storage
compile group: 'com.google.cloud', name: 'google-cloud-storage', version: '0.7.0'

// https://mvnrepository.com/artifact/com.google.cloud.bigdataoss/gcs-connector
compile group: 'com.google.cloud.bigdataoss', name: 'gcs-connector', version: '1.6.0-hadoop2'

After that, I created a simple Scala object file as shown below (a SparkSession has already been created):

val csvData = spark.read.csv("gs://my-bucket/project-data/csv")

But it throws the following error:

17/03/01 20:16:02 INFO GoogleHadoopFileSystemBase: GHFS version: 1.6.0-hadoop2
17/03/01 20:16:23 WARN HttpTransport: exception thrown while executing request
java.net.SocketTimeoutException: connect timed out
    at java.net.DualStackPlainSocketImpl.waitForConnect(Native Method)
    at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:85)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
    at sun.net.www.http.HttpClient.New(HttpClient.java:308)
    at sun.net.www.http.HttpClient.New(HttpClient.java:326)
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
    at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93)
    at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:981)
    at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:158)
    at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489)
    at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:205)
    at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:70)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1816)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:1003)
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:966)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
    at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:317)
    at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:354)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:413)
    at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:349)
    at test$.main(test.scala:41)
    at test.main(test.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)

I have also set up all the authentication. I am not sure why a timeout is occurring.

Edit

I am trying to run the above code through IntelliJ IDEA (on Windows). A JAR file of the same code runs fine on Google Cloud Dataproc, but running it from my local system gives the error above. I have installed the Spark, Scala, and Google Cloud plugins in IntelliJ.

One more thing: I have created a Dataproc instance and tried to connect to its external IP address following the documentation at https://cloud.google.com/compute/docs/instances/connecting-to-instance#standardssh

It cannot connect to the server and gives a timeout error.

Den*_*Huo 6

You need to set google.cloud.auth.service.account.json.keyfile to the local path of a JSON credential file for a service account you create, following these instructions for generating a private key. The stack trace shows that the connector thinks it is running on a GCE VM and is trying to obtain credentials from the local metadata server. If that doesn't work, try setting fs.gs.auth.service.account.json.keyfile instead.
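For reference, on a setup that does have a Hadoop configuration directory, the same credential can be declared in core-site.xml (the property names come from the GCS connector; the keyfile path is a placeholder):

```xml
<configuration>
  <!-- Use a service-account key instead of the GCE metadata server -->
  <property>
    <name>google.cloud.auth.service.account.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>google.cloud.auth.service.account.json.keyfile</name>
    <value>/path/to/keyfile.json</value>
  </property>
</configuration>
```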

For SSH, have you tried gcloud compute ssh <instance name>? You may also need to check your Compute Engine firewall rules to make sure you allow inbound connections on port 22.
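A minimal sketch of those SSH checks (the instance, zone, and rule names are placeholders):

```
# Connect via the gcloud wrapper, which manages SSH keys for you
gcloud compute ssh my-instance --zone us-central1-a

# Verify that a firewall rule allows inbound TCP port 22
gcloud compute firewall-rules list

# If no such rule exists, create one (restrict source ranges as needed)
gcloud compute firewall-rules create allow-ssh --allow tcp:22
```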


Sha*_*awn 5

Thanks Dennis for pointing the question in the right direction. Since I am using Windows, I have no core-site.xml, because Hadoop is not readily usable on Windows.

I downloaded a pre-built Spark and configured the parameters you mentioned in the code itself, as follows.

Create a SparkSession and use its mutable Hadoop configuration to set the parameters, e.g. spark.sparkContext.hadoopConfiguration.set("google.cloud.auth.service.account.json.keyfile", "<KeyFile Path>"), whereas normally we would set all these parameters in core-site.xml.
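Putting this together, a minimal sketch of a local SparkSession configured to read from GCS might look like the following (the keyfile path, project ID, and bucket are placeholders; it assumes the Spark and gcs-connector dependencies are on the classpath and the service-account key is valid):

```scala
import org.apache.spark.sql.SparkSession

object GcsReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("gcs-read")
      .master("local[*]")
      .getOrCreate()

    val conf = spark.sparkContext.hadoopConfiguration
    // Register the GCS connector as the handler for gs:// URIs
    conf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
    conf.set("fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS")
    // Authenticate with a service-account key instead of the GCE metadata server
    conf.set("google.cloud.auth.service.account.enable", "true")
    conf.set("google.cloud.auth.service.account.json.keyfile", "/path/to/keyfile.json")
    conf.set("fs.gs.project.id", "my-project-id")

    val csvData = spark.read.csv("gs://my-bucket/project-data/csv")
    csvData.show()

    spark.stop()
  }
}
```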

After setting all of this, the program can access files from Google Cloud Storage.