According to JDO, you can use PersistenceManager.getObjectsById to load several entity instances by their object IDs.
What kind of collection is needed here? Google datastore Keys cannot be used as object IDs.
I am trying to build my application for Google App Engine with Maven. I have added the following to my pom, which should "enhance" my classes after the build, as suggested in the DataNucleus documentation:
<plugin>
    <groupId>org.datanucleus</groupId>
    <artifactId>maven-datanucleus-plugin</artifactId>
    <version>1.1.4</version>
    <configuration>
        <log4jConfiguration>${basedir}/log4j.properties</log4jConfiguration>
        <verbose>true</verbose>
    </configuration>
    <executions>
        <execution>
            <phase>process-classes</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
</plugin>
According to the documentation on Google App Engine, you can choose between JDO and JPA; since I have used JPA in the past, that is what I picked. When I try to build my project (before uploading to GAE) with mvn clean package, I get the following output:
[ERROR] BUILD ERROR
[INFO] ------------------------------------------------------------------------
[INFO] Failed to resolve artifact.
Missing:
----------
1) javax.jdo:jdo2-api:jar:2.3-ec
  Try downloading the file manually from the project website.
  Then, install it using the command: 
      mvn install:install-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar -Dfile=/path/to/file
  Alternatively, if you host your own repository you can deploy the file there: 
    mvn deploy:deploy-file -DgroupId=javax.jdo -DartifactId=jdo2-api -Dversion=2.3-ec -Dpackaging=jar …

I am just trying to connect with a Java client to HBase, which is part of the Cloudera VM.
(192.168.56.102 is the VM's inet IP.)
I am using VirtualBox with host networking.
So I can reach the HBase master's web UI at http://192.168.56.102:60010/master.jsp.
My Java client (which runs fine on the VM itself) also establishes a connection to 192.168.56.102:2181,
but when it calls getMaster I get "connection refused"; see the log:
11/09/14 11:19:30 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.56.102:2181 sessionTimeout=180000 watcher=hconnection
11/09/14 11:19:30 INFO zookeeper.ClientCnxn: Opening socket connection to server /192.168.56.102:2181
11/09/14 11:19:30 INFO zookeeper.ClientCnxn: Socket connection established to cloudera-vm/192.168.56.102:2181, initiating session
11/09/14 11:19:30 INFO zookeeper.ClientCnxn: Session establishment complete on server cloudera-vm/192.168.56.102:2181, sessionid = 0x13267157f930009, negotiated timeout = 40000
11/09/14 11:19:32 INFO client.HConnectionManager$HConnectionImplementation: getMaster attempt 0 of 10 failed; retrying after sleep of 1000
java.net.ConnectException: Connection refused: …

Today is the first time I am using GWT and JDO. I am running it locally from Eclipse in debug mode.
I did the following:
public Collection<MyObject> add(MyObject o) {
    PersistenceManager pm = PMF.get().getPersistenceManager();
    try {
        pm.makePersistent(o);
        Query query = pm.newQuery(MyObject.class); // fetch all objects incl. o. But o only sometimes comes...
        List<MyObject> rs = (List<MyObject>) query.execute();
        ArrayList<MyObject> list = new ArrayList<MyObject>();
        for (MyObject r : rs) {
            list.add(r);
        }
        return list;
    } finally {
        pm.close();
    }
}
I have already put <property name="datanucleus.appengine.datastoreReadConsistency" value="STRONG" /> into my jdoconfig.xml. Do I have to configure some other transaction settings as well? Does anyone have a working jdoconfig.xml? Or is the problem somewhere else, some cache in between?
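For reference, this is the kind of minimal jdoconfig.xml I would expect to need; the factory class name assumes version 2.x of the DataNucleus App Engine plugin, so adjust it if you are on v1:

```xml
<?xml version="1.0" encoding="utf-8"?>
<jdoconfig xmlns="http://java.sun.com/xml/ns/jdo/jdoconfig">
    <persistence-manager-factory name="transactions-optional">
        <property name="javax.jdo.PersistenceManagerFactoryClass"
                  value="org.datanucleus.api.jdo.JDOPersistenceManagerFactory"/>
        <property name="javax.jdo.option.ConnectionURL" value="appengine"/>
        <property name="javax.jdo.option.NontransactionalRead" value="true"/>
        <property name="javax.jdo.option.NontransactionalWrite" value="true"/>
        <property name="javax.jdo.option.RetainValues" value="true"/>
        <!-- the setting in question -->
        <property name="datanucleus.appengine.datastoreReadConsistency" value="STRONG"/>
    </persistence-manager-factory>
</jdoconfig>
```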
Edit: things I have tried:
Getting a fresh PersistenceManager via PMF.get().getPersistenceManager() several times, calling flush on the PersistenceManager, and …

I am using an HBase-Hadoop combination for my application, with DataNucleus as the ORM.
When I try to access HBase from several threads at once, it throws an exception:
Exception in thread "Thread-26" javax.jdo.JDODataStoreException
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
I can provide the full stack trace if needed (the full stack trace would make things cluttered).
Please help me figure out how to handle this situation. Is any configuration needed to increase the connection pool?
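Going by the hint in the exception message itself, two things seem worth trying: share a single HBaseConfiguration instance across all threads, and raise the ZooKeeper connection limit. A sketch of the latter for hbase-site.xml (the value 300 is an arbitrary assumption; pick what fits your thread count):

```xml
<!-- hbase-site.xml: hbase.zookeeper.property.* settings are forwarded
     to the ZooKeeper server that HBase manages -->
<property>
    <name>hbase.zookeeper.property.maxClientCnxns</name>
    <value>300</value>
</property>
```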
I am working on a desktop application that uses DataNucleus with JDO for an embedded H2 database. Everything works fine when I run it from Eclipse, but it stops working when I try to build an executable jar from it. I get the following error:
org.datanucleus.exceptions.NucleusUserException: Persistence process has been specified to use a ClassLoaderResolver of name "jdo" yet this has not been found by the DataNucleus plugin mechanism. Please check your CLASSPATH and plugin specification.
Obviously this is telling me that I have not configured something correctly; what am I missing? If I had missed something big, it would not work at all, so I assume it is a flawed executable jar. I have seen this error reported for other applications, e.g. JPOX, where it was fixed, but no solution was given.
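One suspicion: when several DataNucleus jars are merged into a single executable jar, their plugin.xml and manifest files overwrite each other, so the plugin mechanism can no longer find the ClassLoaderResolver. If the jar is built with the maven-shade-plugin, a sketch of merging the plugin descriptors instead of keeping only one (whether XmlAppendingTransformer merges plugin.xml correctly for DataNucleus is an assumption on my part):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <transformers>
            <!-- append the plugin.xml files from all DataNucleus jars -->
            <transformer implementation="org.apache.maven.plugins.shade.resource.XmlAppendingTransformer">
                <resource>plugin.xml</resource>
            </transformer>
            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"/>
        </transformers>
    </configuration>
</plugin>
```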
The whole error stacktrace:
Exception in thread "main" javax.jdo.JDOFatalInternalException: Unexpected exception caught.
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1193)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at db.PersistenceManagerFilter.init(PersistenceManagerFilter.java:44)
        at Main.main(Main.java:26)
NestedThrowablesStackTrace:
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
        at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
        at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
        at db.PersistenceManagerFilter.init(PersistenceManagerFilter.java:44)
        at Main.main(Main.java:26)
Caused by: org.datanucleus.exceptions.NucleusUserException: Persistence process has been specified to use a ClassLoaderResolver of name "jdo" yet this has not been found by the DataNucleus plugin mechanism. …

When I create a class that defines "gae.encoded-pk" and "gae.pk-id" persistent fields, the encoded-pk gets updated on persist, but the id stays null. No exception is thrown, and the code is a direct copy-paste from the Google documentation, so I am at a loss as to what might be going on here.
The class definition:
@PersistenceCapable 
public class MyClass {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    @Extension(vendorName="datanucleus", key="gae.encoded-pk", value="true")
    private String encodedKey;
    @Persistent
    @Extension(vendorName="datanucleus", key="gae.pk-id", value="true")
    private Long keyId;
And I persist it like this:
PersistenceManager pm = PMF.get().getPersistenceManager();
try {
    pm.makePersistent(myInstance);
    // myInstance = pm.makePersistent(myInstance); - Produces the same result.
} finally {
    pm.close();
}
I am stepping through this code with the debugger, but keyId is still null even after the persistence manager has been closed.
I should also point out that this is running locally with the Google App Engine development kit. Any pointers on how I could debug this would be greatly appreciated!
I am trying to create a simple test using JDO with App Engine and a Maven configuration.
My compile and datanucleus enhancement steps succeed. But at runtime (mvn:test and appengine:devserver) I get:
1) Error in custom provider, javax.jdo.JDOFatalInternalException: 
Class "com.google.appengine.datanucleus.DatastoreManager" was not found in the CLASSPATH.
Please check your specification and your CLASSPATH.
However, my classpath (target/demo/WEB-INF/lib) does contain datanucleus-appengine-2.1.1.jar.
My dependencies are the same as those specified in the POM of the Google datanucleus project:
  <dependency>
    <groupId>javax.jdo</groupId>
    <artifactId>jdo-api</artifactId>
    <version>3.0.1</version>
  </dependency>
  <dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-core</artifactId>
    <version>[3.1.1, 3.2)</version>
    <scope>runtime</scope>
  </dependency>
  <dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-api-jdo</artifactId>
    <version>[3.1.1, 3.2)</version>
  </dependency>
  <dependency>
    <groupId>com.google.appengine.orm</groupId>
    <artifactId>datanucleus-appengine</artifactId>
    <version>2.1.1</version>
  </dependency>
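One guess: the version ranges on datanucleus-core and datanucleus-api-jdo can silently resolve to a newer release than datanucleus-appengine 2.1.1 was built against. Pinning them to fixed versions would rule that out (3.1.3 here is an assumption; check what datanucleus-appengine 2.1.1 actually declares):

```xml
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-core</artifactId>
    <version>3.1.3</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-api-jdo</artifactId>
    <version>3.1.3</version>
</dependency>
```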
Thanks for any advice.
RB
In my project (Spring Framework + Google App Engine + DataNucleus + JPA) I get the following exception at server startup:
WARNING: Nestedin org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'jpaMappingContext': Invocation of init method failed; 
    nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/spring/db.xml]: Invocation of init method failed; 
    nested exception is java.lang.NoSuchMethodError: org.datanucleus.metadata.MetaDataUtils.parsePersistenceFiles(Lorg/datanucleus/plugin/PluginManager;Ljava/lang/String;ZLorg/datanucleus/NucleusContext;)[Lorg/datanucleus/metadata/PersistenceFileMetaData;:
java.lang.NoSuchMethodError: org.datanucleus.metadata.MetaDataUtils.parsePersistenceFiles(Lorg/datanucleus/plugin/PluginManager;Ljava/lang/String;ZLorg/datanucleus/NucleusContext;)[Lorg/datanucleus/metadata/PersistenceFileMetaData;
    at org.datanucleus.api.jpa.JPAEntityManagerFactory.<init>(JPAEntityManagerFactory.java:342)
    at org.datanucleus.api.jpa.PersistenceProviderImpl.createEntityManagerFactory(PersistenceProviderImpl.java:91)
Apparently this exception is thrown while parsing persistence.xml. Spring tries to call the method MetaDataUtils#parsePersistenceFiles(PluginManager, String, boolean, NucleusContext), but it does not exist. This method is part of org.datanucleus:datanucleus-core. At first I thought I had a missing or duplicated dependency somewhere. I ran
gradle dependencies
and scanned the output carefully, finding nothing suspicious: only single versions of each dependency.
According to the documentation, MetaDataUtils has only one parsePersistenceFiles method:
public static PersistenceFileMetaData[] parsePersistenceFiles(
  PluginManager pluginMgr, String persistenceFilename, boolean validate, …
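To double-check which jar the class is really loaded from at runtime (the method may exist in the sources you read while an older datanucleus-core shadows it on the runtime classpath), a small generic diagnostic can help; WhichJar is a made-up helper name, and in the failing app you would pass org.datanucleus.metadata.MetaDataUtils.class:

```java
import java.security.CodeSource;

// Hypothetical diagnostic: report where the JVM actually loaded a class from.
public class WhichJar {
    static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // Classes defined by the bootstrap loader (java.lang.String etc.) have no CodeSource.
        return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        System.out.println(locationOf(String.class));
        // In the failing app:
        // System.out.println(locationOf(org.datanucleus.metadata.MetaDataUtils.class));
    }
}
```

If the printed location is not the jar your build resolved, something else on the classpath is winning.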
I am trying to access a Hive table in MapReduce through HCatalog and am facing the following exception. I googled it and tried to find the root cause, but without success, so I am posting my query here.
2016-12-01 15:48:35,855 INFO  [main] metastore.HiveMetaStore (HiveMetaStore.java:newRawStore(564)) - 0: Opening raw store with implementation class:org.apache.hadoop.hive.metastore.ObjectStore
2016-12-01 15:48:35,857 INFO  [main] metastore.ObjectStore (ObjectStore.java:initialize(325)) - ObjectStore, initialize called
2016-12-01 15:48:35,862 ERROR [main] DataNucleus.Persistence (Log4JLogger.java:error(115)) - Error : Could not find API definition for name "JDO". Perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?
Exception in thread "main" java.io.IOException: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.RuntimeException: Unable to instantiate org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:97)
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
    at hcatalog.DriverClass.run(DriverClass.java:30)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84) …
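The "Could not find API definition for name "JDO"" line suggests the DataNucleus jars that ship with Hive are not on the job's classpath. A sketch of making them visible before launching the job; the $HIVE_HOME paths and jar versions are assumptions, so match them to the actual layout on the cluster:

```shell
# Hypothetical paths/versions: adjust to the DataNucleus jars in your Hive lib dir.
export HADOOP_CLASSPATH="$HIVE_HOME/lib/datanucleus-api-jdo-3.2.6.jar:$HIVE_HOME/lib/datanucleus-core-3.2.10.jar:$HIVE_HOME/lib/datanucleus-rdbms-3.2.9.jar:$HADOOP_CLASSPATH"
echo "$HADOOP_CLASSPATH"
```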