I need to transfer big files (at least 14 MB) from a Cosmos instance on the FIWARE Lab to my backend.
I am using the Spring RestTemplate as a client interface for the Hadoop WebHDFS REST API described here, but I run into an IO exception:
Exception in thread "main" org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/<user.name>/<path>?op=open&user.name=<user.name>":Truncated chunk ( expected size: 14744230; actual size: 11285103); nested exception is org.apache.http.TruncatedChunkException: Truncated chunk ( expected size: 14744230; actual size: 11285103)
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:580)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:545)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:466)
This is the actual code that produces the exception:
// RestTemplate backed by Apache HttpClient, with a byte[] converter for the file body
RestTemplate restTemplate = new RestTemplate();
restTemplate.setRequestFactory(new HttpComponentsClientHttpRequestFactory());
restTemplate.getMessageConverters().add(new ByteArrayHttpMessageConverter());

// 'headers' (defined elsewhere) carries the X-Auth-Token for the Cosmos instance
HttpEntity<?> entity = new HttpEntity<>(headers);

// WebHDFS OPEN request: http://<host>:14000/webhdfs/v1/<path>?op=OPEN&user.name=<user>
UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(hdfs_path)
        .queryParam("op", "OPEN")
        .queryParam("user.name", user_name);

// The whole response body is buffered in memory as a byte[]
ResponseEntity<byte[]> response =
        restTemplate.exchange(builder.build().encode().toUri(), HttpMethod.GET, entity, byte[].class);

// Write the buffered bytes to the local file
FileOutputStream output = new FileOutputStream(new File(local_path));
IOUtils.write(response.getBody(), output);
output.close();
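As a side note, one way to avoid buffering the whole file in memory would be to stream the response body straight to disk with RestTemplate.execute() and a ResponseExtractor. This is only a rough sketch reusing restTemplate, builder, headers and local_path from above (StreamUtils is org.springframework.util.StreamUtils, and the lambdas need Java 8); it does not by itself explain or fix the truncated chunk:

// Sketch: stream the WebHDFS response to disk instead of buffering it as a byte[]
restTemplate.execute(
        builder.build().encode().toUri(),
        HttpMethod.GET,
        request -> request.getHeaders().putAll(headers),            // re-send the auth headers
        response -> {
            try (FileOutputStream out = new FileOutputStream(new File(local_path))) {
                return StreamUtils.copy(response.getBody(), out);   // copy the body as it arrives
            }
        });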
I suspected this was caused by a transfer timeout on the Cosmos instance, so I tried sending a curl request for the path with the offset, buffer and length parameters, but they seem to be ignored: I get the whole file back.
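For reference, this is roughly how those range parameters would be attached with the same builder (a sketch only; the WebHDFS parameter names are offset, length and buffersize, and on this instance they appear to be ignored):

// Sketch: WebHDFS OPEN with explicit range parameters
UriComponentsBuilder rangedBuilder = UriComponentsBuilder.fromHttpUrl(hdfs_path)
        .queryParam("op", "OPEN")
        .queryParam("user.name", user_name)
        .queryParam("offset", 0)                  // byte offset to start reading from
        .queryParam("length", 4 * 1024 * 1024)    // number of bytes to return (4 MB, arbitrary)
        .queryParam("buffersize", 65536);         // server-side read buffer size (optional)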
Thanks in advance.
OK, I found a solution. I don't understand why, but the transfer succeeds if I use a Jetty HttpClient instead of the RestTemplate (and Apache HttpClient). This now works:
// Jetty (v8) blocking exchange: buffer the response body, then write it to disk
ContentExchange exchange = new ContentExchange(true) {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();

    protected void onResponseContent(Buffer content) throws IOException {
        // Called for every chunk of the response body
        bos.write(content.asArray(), 0, content.length());
    }

    protected void onResponseComplete() throws IOException {
        if (getResponseStatus() == HttpStatus.OK_200) {
            FileOutputStream output = new FileOutputStream(new File(<local_path>));
            IOUtils.write(bos.toByteArray(), output);
            output.close();
        }
    }
};

// Same WebHDFS OPEN request as before
UriComponentsBuilder builder = UriComponentsBuilder.fromHttpUrl(<hdfs_path>)
        .queryParam("op", "OPEN")
        .queryParam("user.name", <user_name>);
exchange.setURL(builder.build().encode().toUriString());
exchange.setMethod("GET");
exchange.setRequestHeader("X-Auth-Token", <token>);

// Non-blocking NIO connector with a generous thread pool
HttpClient client = new HttpClient();
client.setConnectorType(HttpClient.CONNECTOR_SELECT_CHANNEL);
client.setMaxConnectionsPerAddress(200);
client.setThreadPool(new QueuedThreadPool(250));
client.start();
client.send(exchange);
exchange.waitForDone();
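One detail not shown above: waitForDone() returns the final exchange state, which can be checked before trusting the downloaded file, and the client can be stopped afterwards to release its threads (a sketch, assuming Jetty 8's HttpExchange status constants):

// Sketch: check how the exchange finished and release the client's resources
int state = exchange.waitForDone();
if (state != HttpExchange.STATUS_COMPLETED) {
    // e.g. STATUS_EXCEPTED or STATUS_EXPIRED: the transfer failed or timed out
    System.err.println("WebHDFS transfer did not complete, exchange state = " + state);
}
client.stop();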
Is there a known bug in the Apache HttpClient around chunked file transfers?
Or am I doing something wrong in my RestTemplate request?
After a few more tests I found that I had not actually solved my problem. The Hadoop version installed on the Cosmos instance turns out to be quite old, Hadoop 0.20.2-cdh3u6, and I read that WebHDFS does not support partial file transfer with the length parameter there (it was introduced in v0.23.3). These are the headers I received from the server when sending a GET request with curl:
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
server: Apache-Coyote/1.1
set-cookie: hadoop.auth="u=<user>&p=<user>&t=simple&e=1448999699735&s=rhxMPyR1teP/bIJLfjOLWvW2pIQ="; Version=1; Path=/
Content-Type: application/octet-stream; charset=utf-8
content-length: 172934567
date: Tue, 01 Dec 2015 09:54:59 GMT
connection: close
As you can see, the Connection header is set to close. In fact, the connection is usually closed whenever a GET request lasts longer than about 120 seconds, even though the file transfer has not finished.
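If the cluster ran a WebHDFS version that honors length (0.23.3 or later), one workaround would be to keep every GET well under that ~120-second window by pulling the file in fixed-size slices. A hypothetical sketch reusing the RestTemplate objects from the first snippet:

// Sketch for WebHDFS >= 0.23.3: download the file in slices so that no single
// GET outlives the server-side connection timeout.
long offset = 0;
long chunk = 4 * 1024 * 1024;                    // 4 MB per request (arbitrary)
try (FileOutputStream out = new FileOutputStream(new File(local_path))) {
    while (true) {
        URI sliceUri = UriComponentsBuilder.fromHttpUrl(hdfs_path)
                .queryParam("op", "OPEN")
                .queryParam("user.name", user_name)
                .queryParam("offset", offset)
                .queryParam("length", chunk)
                .build().encode().toUri();
        ResponseEntity<byte[]> slice =
                restTemplate.exchange(sliceUri, HttpMethod.GET, entity, byte[].class);
        byte[] body = slice.getBody();
        if (body == null || body.length == 0) {
            break;                               // past the end of the file
        }
        out.write(body);
        offset += body.length;
        if (body.length < chunk) {
            break;                               // last (short) slice
        }
    }
}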
All in all, I have to say that Cosmos is pretty much useless if it does not support large file transfers.
Please correct me if I'm wrong, or let me know if you know of a workaround.