I installed Hadoop 2.8.1 on Ubuntu and then installed spark-2.2.0-bin-hadoop2.7. I used spark-shell and created tables. Then I used beeline and created tables again. I noticed that three different folders named spark-warehouse were created:
1- spark-2.2.0-bin-hadoop2.7/spark-warehouse
2- spark-2.2.0-bin-hadoop2.7/bin/spark-warehouse
3- spark-2.2.0-bin-hadoop2.7/sbin/spark-warehouse
What is spark-warehouse, and why was it created multiple times? Sometimes my spark-shell and beeline show different databases and tables, and sometimes they show the same ones. I don't understand what is happening.
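From what I have read, the location seems to be controlled by the spark.sql.warehouse.dir setting, which by default points to a spark-warehouse folder under whatever directory the process was started from, which would explain why each launch directory got its own copy. A minimal sketch of pinning it to one place (the path is just an example):

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("Fixed warehouse location")
        // Pin the warehouse to one absolute path so spark-shell, beeline and jobs
        // all resolve to the same folder; the path below is illustrative
        .config("spark.sql.warehouse.dir", "/home/user/spark-warehouse")
        .enableHiveSupport()
        .getOrCreate();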
Also, I have not installed Hive, yet I can still use beeline, and I can also access the databases from a Java program. How did Hive get onto my machine? Please help me. I am new to Spark and installed it by following online tutorials.
Below is the Java code I use to connect to Apache Spark over JDBC:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SparkThriftClient {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws SQLException {
        // Load the Hive JDBC driver; the Spark Thrift Server speaks the HiveServer2 protocol
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Connect to the Thrift server on port 10000 with empty user/password, as in my setup
        Connection con = DriverManager.getConnection("jdbc:hive2://10.171.0.117:10000/default", "", "");
        Statement stmt = con.createStatement();
    }
}
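With the connection open, I then run SQL over it inside main; for example (the query text is just an illustration):

ResultSet rs = stmt.executeQuery("SHOW TABLES");   // any SQL the Thrift server accepts
while (rs.next()) {
    System.out.println(rs.getString(1));           // print the first column of each row
}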
I am using Spark 2.2.0. Below is the Java code snippet that I am submitting as a job to Spark:
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

// Build a session against the standalone master and ship the MySQL driver jar
SparkSession spark = SparkSession.builder()
        .appName("MySQL Connection")
        .master("spark://ip:7077")
        .config("spark.jars", "/path/mysql.jar")
        .getOrCreate();

// Read the `account` table from MySQL over JDBC into a DataFrame
Dataset<Row> dataset = spark.read().format("jdbc")
        .option("url", "jdbc:mysql://ip:3306/mysql")
        .option("user", "superadmin")
        .option("password", "****")
        .option("dbtable", "account")
        .load();
The above code works perfectly, but the problem is that if I need to submit two jars, I don't know how to submit them. The config() method accepts only one parameter in the key ('spark.jars') and one in …
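For what it's worth, the Spark configuration docs describe spark.jars as a comma-separated list of jars, so I suspect passing both jars in one value should work, though I am not sure this is the intended way (paths are illustrative):

SparkSession spark = SparkSession.builder()
        .appName("MySQL Connection")
        .master("spark://ip:7077")
        // spark.jars takes a comma-separated list, so several jars fit in one value
        .config("spark.jars", "/path/mysql.jar,/path/other.jar")
        .getOrCreate();

Alternatively, I believe spark-submit has a --jars flag that also accepts a comma-separated list.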