I am trying to parse the HTTP response from a GET request, but it throws the following exception:
org.apache.http.ConnectionClosedException: Premature end of chunk coded message body: closing chunk expected
at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:266) ~[httpcore-4.4.10.jar!/:4.4.10]
at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:225) ~[httpcore-4.4.10.jar!/:4.4.10]
at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:184) ~[httpcore-4.4.10.jar!/:4.4.10]
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135) ~[httpclient-4.5.6.jar!/:4.5.6]
at java.base/sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) ~[na:na]
at java.base/sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) ~[na:na]
at java.base/sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) ~[na:na]
at java.base/java.io.InputStreamReader.read(InputStreamReader.java:185) ~[na:na]
at java.base/java.io.Reader.read(Reader.java:229) ~[na:na]
at org.apache.http.util.EntityUtils.toString(EntityUtils.java:227) ~[httpcore-4.4.10.jar!/:4.4.10]
at org.apache.http.util.EntityUtils.toString(EntityUtils.java:308) ~[httpcore-4.4.10.jar!/:4.4.10]
The code I use to parse the response is:
String parseResponse(HttpResponse resp) {
    try {
        // Read the entire (chunked) entity body into a String
        return org.apache.http.util.EntityUtils.toString(resp.getEntity());
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}
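For context, this is roughly how the parser gets driven; a minimal sketch with HttpClient 4.5, where the class name and URL are only placeholders, not the real endpoint:

import java.io.IOException;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class GetAndParse {
    public static void main(String[] args) throws IOException {
        try (CloseableHttpClient client = HttpClients.createDefault();
             CloseableHttpResponse resp = client.execute(new HttpGet("http://localhost:8080/list"))) {
            // EntityUtils.toString reads the chunked stream up to the terminating
            // zero-length chunk; the ConnectionClosedException above means the
            // server closed the connection before that final chunk arrived.
            System.out.println(EntityUtils.toString(resp.getEntity()));
        }
    }
}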
I am using org.apache.httpcomponents:httpclient:4.5.6 (with httpcore 4.4.10, as the stack trace shows).
The GET endpoint I am calling (a Spring Boot application) looks like this:
public ResponseEntity<org.springframework.data.domain.Page<JSONObject>> getList() { …

I want to create an Apache Spark DataFrame from an S3 resource. I have tried it with both AWS S3 and IBM Cloud Object Storage, and both attempts fail with
org.apache.spark.util.TaskCompletionListenerException: Premature end of Content-Length delimited message body (expected: 2,250,236; received: 16,360)
I am running pyspark with
./pyspark --packages com.amazonaws:aws-java-sdk-pom:1.11.828,org.apache.hadoop:hadoop-aws:2.7.0
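The same packages can also be pulled in from a standalone script rather than the pyspark shell; a minimal sketch, with the versions copied from the command above (the app name is arbitrary):

from pyspark.sql import SparkSession

# Equivalent to the --packages flag above, supplied as a config property
# before the session (and its JVM) is created.
spark = (SparkSession.builder
         .appName("s3a-csv-read")
         .config("spark.jars.packages",
                 "com.amazonaws:aws-java-sdk-pom:1.11.828,org.apache.hadoop:hadoop-aws:2.7.0")
         .getOrCreate())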
For IBM I set the S3 configuration with
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "xx")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "xx")
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.eu-de.cloud-object-storage.appdomain.cloud")
or for AWS with
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", "xx")
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "xx")
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "s3.us-west-2.amazonaws.com")
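The same S3A properties can equivalently be passed as spark.hadoop.* options when the session is built; a sketch reusing the placeholder keys and the AWS endpoint from above:

from pyspark.sql import SparkSession

# spark.hadoop.* properties are forwarded into the Hadoop configuration,
# so this mirrors the sc._jsc.hadoopConfiguration().set(...) calls above.
spark = (SparkSession.builder
         .config("spark.hadoop.fs.s3a.access.key", "xx")
         .config("spark.hadoop.fs.s3a.secret.key", "xx")
         .config("spark.hadoop.fs.s3a.endpoint", "s3.us-west-2.amazonaws.com")
         .getOrCreate())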
In both cases the code is:

df = spark.read.csv("s3a://drill-test/cases.csv")
It fails with the exception
org.apache.spark.util.TaskCompletionListenerException: Premature end of Content-Length delimited message body (expected: 2,250,236; received: 16,360)
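For reference, the whole read boils down to the snippet below, run in the same pyspark shell as above; Spark builds the DataFrame lazily, so the task-level exception only shows up once an action forces the object to be downloaded:

df = spark.read.csv("s3a://drill-test/cases.csv")

# count() is just the cheapest action to force the full S3 read,
# which is where the Content-Length mismatch above is raised.
print(df.count())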