I am trying to create a partitioned table using dynamic partitioning, but I have run into a problem. I am running Hive 0.12 on the Hortonworks Sandbox 2.0.
set hive.exec.dynamic.partition=true;
INSERT OVERWRITE TABLE demo_tab PARTITION (land)
SELECT stadt, geograph_breite, id, t.country
FROM demo_stg t;
But it doesn't work; I get an error.
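One common cause on Hive 0.12, assuming the error complains about strict mode, is that dynamic-partition strict mode is still enabled; because the only partition column (land) is filled dynamically here, Hive rejects the insert unless nonstrict mode is set. A minimal sketch of the extra setting:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;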
Here is the query that creates the table demo_stg:
create table demo_stg
(
country STRING,
stadt STRING,
geograph_breite FLOAT,
id INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "\073";
And here is demo_tab:
CREATE TABLE demo_tab
(
stadt STRING,
geograph_breite FLOAT,
id INT
)
PARTITIONED BY (land STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY "\073";
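For reference, once the insert succeeds, the generated partitions can be checked with a simple statement; it should list one partition per distinct value of country:
SHOW PARTITIONS demo_tab;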
Thanks for any help :)
When I try to install the GitHub Mylyn Connector, I get this error:
Cannot complete the install because one or more required items could not be found.
Software being installed: Eclipse GitHub integration with task focused interface 3.3.0.201403021825-r (org.eclipse.mylyn.github.feature.feature.group 3.3.0.201403021825-r)
Missing requirement: Eclipse GitHub integration with task focused interface 3.3.0.201403021825-r (org.eclipse.mylyn.github.feature.feature.group 3.3.0.201403021825-r) requires 'org.eclipse.mylyn_feature.feature.group 3.7.0' but it could not be found
I couldn't find any other post with this error message :/
I already have EGit installed, and I also tried installing the GitHub Mylyn Connector via the Eclipse Marketplace, but that didn't change anything.
I am using Eclipse 4.3 on Mac OS X.
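One guess, assuming the missing org.eclipse.mylyn_feature 3.7.0 is simply not visible to the installer: adding the Mylyn releases repository under Help > Install New Software... > Add... before retrying, for example with this site URL:
http://download.eclipse.org/mylyn/releases/latest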
Thanks for your help.
I am having a problem with Hive on Spark. I installed a single-node HDP 2.1 (Hadoop 2.4) via Ambari on CentOS 6.5. To run Hive on Spark, I followed these instructions:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
I downloaded the "Prebuilt for Hadoop 2.4" build of Spark from the official Apache Spark website. Then I started the master:
./spark-class org.apache.spark.deploy.master.Master
Then the worker:
./spark-class org.apache.spark.deploy.worker.Worker spark://hadoop.hortonworks:7077
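At this point it may be worth checking that the worker actually registered with the master; assuming the default ports, the standalone master's web UI should list it:
http://hadoop.hortonworks:8080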
Then I started Hive with this command:
hive --auxpath /SharedFiles/spark-1.0.1-bin-hadoop2.4/lib/spark-assembly-1.1.0-hadoop2.4.0.jar
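As an aside, assuming the paths above are verbatim: the assembly jar version (1.1.0) does not match the downloaded Spark build (1.0.1), so it might matter to point --auxpath at the assembly jar that actually ships with that build, along these lines (hypothetical path):
hive --auxpath /SharedFiles/spark-1.0.1-bin-hadoop2.4/lib/spark-assembly-1.0.1-hadoop2.4.0.jar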
Then, following the instructions, I had to switch Hive's execution engine to Spark with this command:
set hive.execution.engine=spark;
The result was:
Query returned non-zero code: 1, cause: 'SET hive.execution.engine=spark' FAILED in validation : Invalid value.. expects one of [mr, tez].
And indeed, if I run a simple Hive query, I can see at hadoop.hortonworks:8088 that the job that gets launched is a MapReduce job.
Now to my question: how can I change Hive's execution engine so that Hive uses Spark instead of MapReduce? Is there another way to change it? (I have already tried changing it via Ambari and in hive-site.xml.)
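For what it's worth, the validation message ("expects one of [mr, tez]") suggests this Hive build (HDP 2.1 ships Hive 0.13) does not include the Spark execution engine at all, so no client-side setting can enable it; Hive on Spark first shipped with Hive 1.1.0. On a build that does support it, the linked wiki's steps boil down to this sketch, where the master URL is the one used above:
set hive.execution.engine=spark;
set spark.master=spark://hadoop.hortonworks:7077;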