I am trying to configure checkpointing to GCS for a Flink job. Everything works fine if I run a test job locally (no Docker and no cluster setup), but if I run it with docker-compose or a cluster setup and deploy the fat jar with the job through the Flink dashboard, it fails with the error below.
Any ideas? Thanks!
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'gs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:405)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at org.apache.flink.runtime.state.filesystem.FsCheckpointStorage.<init>(FsCheckpointStorage.java:61)
at org.apache.flink.runtime.state.filesystem.FsStateBackend.createCheckpointStorage(FsStateBackend.java:441)
at org.apache.flink.contrib.streaming.state.RocksDBStateBackend.createCheckpointStorage(RocksDBStateBackend.java:379)
at org.apache.flink.runtime.checkpoint.CheckpointCoordinator.<init>(CheckpointCoordinator.java:247)
... 33 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop is not in the classpath/dependencies.
at org.apache.flink.core.fs.UnsupportedSchemeFactory.create(UnsupportedSchemeFactory.java:64)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
The execution environment is configured like this:
StreamExecutionEnvironment env = applicationContext.getBean(StreamExecutionEnvironment.class);
CheckpointConfig checkpointConfig = env.getCheckpointConfig();
// Checkpoint every 10 s, exactly-once, at most one checkpoint in flight
checkpointConfig.setFailOnCheckpointingErrors(false);
checkpointConfig.setCheckpointInterval(10000);
checkpointConfig.setMinPauseBetweenCheckpoints(5000);
checkpointConfig.setMaxConcurrentCheckpoints(1);
checkpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
// RocksDB state backend with incremental checkpoints written to GCS
RocksDBStateBackend rocksDBStateBackend = new RocksDBStateBackend(
String.format("gs://checkpoints/%s", jobClass.getSimpleName()), true);
env.setStateBackend((StateBackend) rocksDBStateBackend);
And this is my core-site.xml file:
<configuration>
<property>
<name>google.cloud.auth.service.account.enable</name>
<value>true</value>
</property>
<property>
<name>google.cloud.auth.service.account.json.keyfile</name>
<value>${user.dir}/key.json</value>
</property>
<property>
<name>fs.gs.impl</name>
<value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
<description>The FileSystem for gs: (GCS) uris.</description>
</property>
<property>
<name>fs.AbstractFileSystem.gs.impl</name>
<value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
<description>The AbstractFileSystem for gs: (GCS) uris.</description>
</property>
<property>
<name>fs.gs.application.name.suffix</name>
<value>-kube-flink</value>
<description>
Appended to the user-agent header for API requests to GCS to help identify
the traffic as coming from Dataproc.
</description>
</property>
</configuration>
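Note that core-site.xml is only picked up if Flink is told where to find it, typically via the fs.hdfs.hadoopconf option in flink-conf.yaml. A minimal sketch (the directory path is only an example; the deployment further down uses /opt/flink/etc-hadoop/):
# flink-conf.yaml (sketch): directory containing core-site.xml and the key file
fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/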
The dependency on gcs-connector:
<dependency>
<groupId>com.google.cloud.bigdataoss</groupId>
<artifactId>gcs-connector</artifactId>
<version>1.9.4-hadoop2</version>
</dependency>
Update:
After some juggling with the dependencies I am able to write checkpoints. My current setup is:
<dependency>
<groupId>com.google.cloud.bigdataoss</groupId>
<artifactId>gcs-connector</artifactId>
<version>hadoop2-1.9.5</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb_${scala.version}</artifactId>
<version>1.5.1</version>
</dependency>
I have also switched the Flink image to flink:1.5.2-hadoop28.
Unfortunately I still cannot read the checkpoint data, because my job always fails while restoring state with the following error:
java.lang.NoClassDefFoundError: com/google/cloud/hadoop/gcsio/GoogleCloudStorageImpl$6
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.open(GoogleCloudStorageImpl.java:666)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.open(GoogleCloudStorageFileSystem.java:323)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFSInputStream.<init>(GoogleHadoopFSInputStream.java:136)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.open(GoogleHadoopFileSystemBase.java:1102)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:787)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.open(HadoopFileSystem.java:119)
at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.open(HadoopFileSystem.java:36)
at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.open(SafetyNetWrapperFileSystem.java:80)
at org.apache.flink.runtime.state.filesystem.FileStateHandle.openInputStream(FileStateHandle.java:68)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.copyStateDataHandleData(RocksDBKeyedStateBackend.java:1005)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.transferAllDataFromStateHandles(RocksDBKeyedStateBackend.java:988)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.transferAllStateDataToDirectory(RocksDBKeyedStateBackend.java:974)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.restoreInstance(RocksDBKeyedStateBackend.java:758)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend$RocksDBIncrementalRestoreOperation.restore(RocksDBKeyedStateBackend.java:732)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:443)
at org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.restore(RocksDBKeyedStateBackend.java:149)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.attemptCreateAndRestore(BackendRestorerProcedure.java:151)
at org.apache.flink.streaming.api.operators.BackendRestorerProcedure.createAndRestore(BackendRestorerProcedure.java:123)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.keyedStatedBackend(StreamTaskStateInitializerImpl.java:276)
at org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:132)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:227)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:730)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:295)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
I believe this will be the last error...
I finally found the solution here.
You have to create your own image and put the gcs-connector into the lib directory. Otherwise you will always run into classloading problems (the connector ends up on the user-code class loader instead of the system class loader).
To create the custom Docker image, we create the following Dockerfile:
FROM registry.platform.data-artisans.net/trial/v1.0/flink:1.4.2-dap1-scala_2.11
RUN wget -O lib/gcs-connector-latest-hadoop2.jar https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar && \
    wget http://ftp.fau.de/apache/flink/flink-1.4.2/flink-1.4.2-bin-hadoop28-scala_2.11.tgz && \
    tar xf flink-1.4.2-bin-hadoop28-scala_2.11.tgz && \
    mv flink-1.4.2/lib/flink-shaded-hadoop2* lib/ && \
    rm -r flink-1.4.2*
RUN mkdir etc-hadoop
COPY <name of key file>.json etc-hadoop/
COPY core-site.xml etc-hadoop/
ENTRYPOINT ["/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["jobmanager"]
The Docker image is based on the Flink image provided with the dA Platform trial. We add the Google Cloud Storage connector, Flink's Hadoop package, and the key file together with the configuration file.
To build the custom image, the following files should be in the current directory: core-site.xml, the Dockerfile, and the key file (.json).
To finally trigger the build of the custom image, we run the following command:
$ docker build -t flink-1.4.2-gs .
Once the image is built, we upload it to Google's Container Registry. To configure Docker so it can properly access the registry, run the following command once:
$ gcloud auth configure-docker
Next, we tag and upload the container:
$ docker tag flink-1.4.2-gs:latest eu.gcr.io/<your project id>/flink-1.4.2-gs
$ docker push eu.gcr.io/<your project id>/flink-1.4.2-gs
Once the upload has finished, we need to set the custom image for the Application Manager deployment by sending the following PATCH request:
PATCH /api/v1/deployments/<your AppMgr deployment id>
spec:
  template:
    spec:
      flinkConfiguration:
        fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/
      artifact:
        flinkImageRegistry: eu.gcr.io
        flinkImageRepository: <your project id>/flink-1.4.2-gs
        flinkImageTag: latest
Alternatively, use the following curl command:
$ curl -X PATCH --header 'Content-Type: application/yaml' --header 'Accept: application/yaml' -d '
spec:
  template:
    spec:
      flinkConfiguration:
        fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/
      artifact:
        flinkImageRegistry: eu.gcr.io
        flinkImageRepository: <your project id>/flink-1.4.2-gs
        flinkImageTag: latest' 'http://localhost:8080/api/v1/deployments/<your AppMgr deployment id>'
With this change in place, you are able to checkpoint to Google Cloud Storage. Use the pattern gs://<your-bucket-name>/checkpoints when specifying the checkpoint directory. For savepoints, set the state.savepoints.dir Flink configuration option.
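As a rough sketch (the bucket name and paths below are placeholders, not values taken from this setup), the corresponding flink-conf.yaml entries could look like this:
# flink-conf.yaml (sketch)
fs.hdfs.hadoopconf: /opt/flink/etc-hadoop/
state.checkpoints.dir: gs://<your-bucket-name>/checkpoints
state.savepoints.dir: gs://<your-bucket-name>/savepoints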