I am trying to build an assembly (fat) jar executable, but it fails with the following error:
Caused by: java.lang.ClassNotFoundException: csv.DefaultSource
The problem occurs when reading the CSV files; the same code runs fine in the IDE. Please help.
The Scala code is as follows:
package extendedtable
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkContext
import org.apache.spark.sql.{DataFrame, Row, SparkSession}
import scala.collection.mutable.ListBuffer
object mainObject {
// var read = new fileRead
def main(args: Array[String]): Unit = {
val spark: SparkSession = SparkSession.builder().appName("generationobj").master("local[*]").config("spark.sql.crossJoin.enabled", value = true).getOrCreate()
val sc: SparkContext = spark.sparkContext
import spark.implicits._
val atomData = spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("Resources/atom.csv")
val moleculeData = spark.read.format("csv")
.option("header", "true")
.option("inferSchema", "true")
.load("Resources/molecule.csv")
    val df = moleculeData.join(atomData, "molecule_id")
    val molecule_df = moleculeData
    val mid: List[Row] = molecule_df.select("molecule_id").collect.toList
    val listofmoleculeid: List[String] = mid.map(r => r.getString(0))
    // print(listofmoleculeid)
    df.createTempView("table")
    df.show()
  }
}
The build file (build.sbt) is as follows:
name := "ExtendedTable"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.3.0"
libraryDependencies += "org.apache.spark" %% "spark-mllib" % "2.3.0"
mainClass := Some("extendedtable.mainObject")
assemblyMergeStrategy in assembly := {
case PathList("META-INF", xs @ _*) => MergeStrategy.discard
case x => MergeStrategy.first
}
Change assemblyMergeStrategy as follows and rebuild the jar.
Your jar needs to include the file org.apache.spark.sql.sources.DataSourceRegister, which ships inside the spark-sql jar.
Its path is spark-sql_2.11-<version>.jar/META-INF/services/org.apache.spark.sql.sources.DataSourceRegister
That file contains the following list:
org.apache.spark.sql.execution.datasources.csv.CSVFileFormat
org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider
org.apache.spark.sql.execution.datasources.json.JsonFileFormat
org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat
org.apache.spark.sql.execution.datasources.text.TextFileFormat
org.apache.spark.sql.execution.streaming.ConsoleSinkProvider
org.apache.spark.sql.execution.streaming.TextSocketSourceProvider
org.apache.spark.sql.execution.streaming.RateSourceProvider
assemblyMergeStrategy in assembly := {
case PathList("META-INF","services",xs @ _*) => MergeStrategy.filterDistinctLines // Added this
case PathList("META-INF",xs @ _*) => MergeStrategy.discard
case _ => MergeStrategy.first
}
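As a rough sketch of what MergeStrategy.filterDistinctLines does when several jars carry a copy of the same META-INF/services file: the copies are concatenated and duplicate lines dropped, so every registered provider survives in the assembly (whereas MergeStrategy.discard would throw all of them away). The file names below are made up for illustration, and `sort -u` also reorders lines, which the real strategy does not:

```shell
# Two hypothetical jars each ship their own copy of the same service file.
printf 'a.CsvFormat\nb.JsonFormat\n' > register-from-jar1.txt
printf 'b.JsonFormat\nc.ParquetFormat\n' > register-from-jar2.txt

# filterDistinctLines keeps the union of distinct lines (simulated with sort -u).
sort -u register-from-jar1.txt register-from-jar2.txt
```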