Posts by Ale*_*dro

Java/Hibernate - Exception: The internal connection pool has reached its maximum size and no connection is currently available

I am using Hibernate for the first time for a university project, and I am a complete beginner. I think I followed all the instructions my professor gave and several tutorials I read, but I keep getting the exception in the title:

Exception in thread "main" org.hibernate.HibernateException: The internal connection pool has reached its maximum size and no connection is currently available!

All I am trying to do is store an object (AbitazioneDB) into a MySQL database I have already created. This is my configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
    "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
    <session-factory>
        <!-- Connection to the database -->
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="connection.url">jdbc:mysql://localhost:3306/AllarmiDomesticiDB</property>

        <!-- Credentials -->
        <property name="hibernate.connection.username">root</property>
        <property name="connection.password">password</property>

        <!-- JDBC connection pool (use the built-in) -->
        <property name="connection.pool_size">1</property>

        <!-- SQL dialect -->
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>

        <!-- Enable Hibernate's automatic session context management -->
        <property name="current_session_context_class">thread</property> …
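With connection.pool_size set to 1, this exception typically appears as soon as a single Session is left open, because the one pooled connection is never returned. For reference, here is a minimal sketch of persisting the object while always closing the Session; it assumes the file above is hibernate.cfg.xml on the classpath and that AbitazioneDB is an annotated entity with a no-argument constructor (both assumptions, since that code is not shown):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class SaveAbitazioneExample {
    public static void main(String[] args) {
        // Build the SessionFactory once per application from the configuration shown above
        SessionFactory factory = new Configuration()
                .configure("hibernate.cfg.xml")
                .addAnnotatedClass(AbitazioneDB.class)
                .buildSessionFactory();

        Session session = factory.openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.save(new AbitazioneDB()); // hypothetical: populate the entity fields as needed
            tx.commit();
        } catch (RuntimeException e) {
            if (tx != null) tx.rollback();
            throw e;
        } finally {
            session.close(); // returns the single pooled connection; skipping this exhausts the pool
        }
        factory.close();
    }
}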

java mysql sql hibernate

9 votes · 3 answers · 10k views

Spark Shell with Yarn - Error: Yarn application has already ended! It might have been killed or unable to launch application master

As a follow-up to this question, I am running into a new error when I try to use Spark 2.1.1 over Yarn (Hadoop 2.8.0) on my single-node machine. If I launch the Spark shell with

spark-shell

it starts without problems. After starting Hadoop with the usual start-dfs.sh and start-yarn.sh, if I use

spark-shell --master yarn

I get the following error:

17/06/10 12:00:07 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/10 12:00:12 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:85)
    at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:62)
    at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2320)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
    at …
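The message itself only says that the YARN ApplicationMaster never started; the real cause is in the container logs, which can be pulled with yarn logs -applicationId <application id> using the id printed earlier in the spark-shell output. On single-node setups a frequently reported culprit is the NodeManager killing the ApplicationMaster for exceeding the virtual-memory limit. A commonly suggested yarn-site.xml tweak to test that hypothesis (an assumption, not a confirmed fix for this particular case) is:

<!-- yarn-site.xml: relax the virtual-memory check that often kills the Spark AM on small single-node setups -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>

After changing it, restart YARN (stop-yarn.sh, then start-yarn.sh) before retrying spark-shell --master yarn.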

hadoop hadoop-yarn apache-spark

7 votes · 1 answer · 20k views

Tag statistics

apache-spark ×1

hadoop ×1

hadoop-yarn ×1

hibernate ×1

java ×1

mysql ×1

sql ×1