Although "https://jsonplaceholder.typicode.com/posts/1" works in Postman, the following code is rejected with a 403 Forbidden:
import java.util.Arrays;

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

@ComponentScan
@EnableAutoConfiguration
public class Application {
    public static void main(String[] args) {
        RestTemplate rt = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        headers.setAccept(Arrays.asList(MediaType.APPLICATION_JSON));
        HttpEntity<String> entity = new HttpEntity<String>("parameters", headers);
        String url = "https://jsonplaceholder.typicode.com/posts/1";
        ResponseEntity<String> res = rt.exchange(url, HttpMethod.GET, entity, String.class);
        System.out.println(res);
    }
}
Error:
23:28:21.447 [main] DEBUG o.s.web.client.RestTemplate - Created GET request for "https://jsonplaceholder.typicode.com/posts/1"
23:28:21.452 [main] DEBUG o.s.web.client.RestTemplate - Setting request Accept header to [text/plain, application/json, application/*+json, */*]
23:28:21.452 [main] DEBUG o.s.web.client.RestTemplate - Writing [parameters] using [org.springframework.http.converter.StringHttpMessageConverter@3234e239] …

spark-shell: basically opens a scala> prompt, where queries have to be written like this:
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
spark-sql: appears to connect directly to the Hive metastore, so we can write queries in much the same way as in Hive and query the data that already exists in Hive.
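To illustrate, a hypothetical spark-sql session might look like the following (this assumes Spark is configured with a Hive metastore, e.g. via hive-site.xml, and that a Hive table named src already exists; the table name and query are only examples):

$ spark-sql
spark-sql> SELECT key, value FROM src LIMIT 10;
spark-sql> quit;

Here the same HiveQL is typed directly at the spark-sql> prompt, with no need to create a HiveContext or call collect() yourself as in the spark-shell snippet above.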
I would like to know the difference between the two. Also, is a query processed in spark-sql handled the same way as it is in spark-shell? In other words, can we take advantage of Spark's performance benefits in spark-sql as well?
This is Spark 1.5.2.