I'm using Apache Ignite for a PoC. This is the scenario I'm testing:
Steps 6 and 7 are giving me trouble. If I wait long enough between them, everything works. But if I run 6 and 7 too close together, I get the error below on the client and a corresponding error on the node.
The error I see is IgniteClientDisconnectedException: Failed to wait for topology update, client disconnected. Is there a way to avoid this problem? Configuring a longer topology-update wait isn't really an option, because a client may attempt to connect at any time. Could it be related to my cluster configuration? I've seen documentation suggesting retrying the connection indefinitely, which seems like it would only mask the error.
We also need to be able to grow/shrink the cluster dynamically. Is that possible? Would in-memory backups fix this behavior?
Note: if I omit step 6, I don't see the failure.
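Since clients may connect at arbitrary times, one alternative to a single long topology-wait timeout is to retry the connect with capped exponential backoff. A minimal sketch of such a delay schedule (a hypothetical helper, not an Ignite API):

```java
import java.util.ArrayList;
import java.util.List;

public class Backoff {
    /** Wait (ms) before each of n reconnect attempts: baseMs * 2^i, capped at capMs. */
    public static List<Long> delays(int n, long baseMs, long capMs) {
        List<Long> out = new ArrayList<>();
        long d = baseMs;
        for (int i = 0; i < n; i++) {
            out.add(Math.min(d, capMs));
            if (d < capMs)
                d = d * 2; // double until the cap is reached
        }
        return out;
    }
}
```

Sleeping for `delays(5, 100, 1000)` between attempts spreads retries out as 100, 200, 400, 800, 1000 ms instead of hammering the cluster while the topology settles.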
Cluster node configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!--<import resource="./cache.xml"/>-->
    <bean id="grid.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Set a cache name. -->
                <property name="name" value="recordData"/>
                <!--<property name="rebalanceMode" value="SYNC"/>-->
                <!-- Set cache mode. -->
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="cacheStoreFactory">
                    <bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
                        <constructor-arg value="Application.RecordDataStore"/>
                    </bean>
                </property>
                <property name="readThrough" value="true"/>
                <property name="writeThrough" value="true"/>
            </bean>
        </property>
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <!-- Override local port. -->
                <property name="localPort" value="8000"/>
            </bean>
        </property>
        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <!-- Override local port. -->
                <property name="localPort" value="8100"/>
            </bean>
        </property>
    </bean>
</beans>
Client configuration:
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/util
        http://www.springframework.org/schema/util/spring-util.xsd">
    <bean abstract="true" id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Set to true to enable distributed class loading for examples, default is false. -->
        <property name="peerClassLoadingEnabled" value="true"/>
        <property name="clientMode" value="true"/>
        <property name="cacheConfiguration">
            <bean class="org.apache.ignite.configuration.CacheConfiguration">
                <!-- Set a cache name. -->
                <property name="name" value="recordData"/>
                <!--<property name="rebalanceMode" value="SYNC"/>-->
                <!-- Set cache mode. -->
                <property name="cacheMode" value="PARTITIONED"/>
                <property name="cacheStoreFactory">
                    <bean class="javax.cache.configuration.FactoryBuilder" factory-method="factoryOf">
                        <constructor-arg value="com.digitaslbi.idiom.util.RecordDataStore"/>
                    </bean>
                </property>
                <property name="readThrough" value="true"/>
                <property name="writeThrough" value="true"/>
            </bean>
        </property>
        <!-- Enable task execution events for examples. -->
        <property name="includeEventTypes">
            <list>
                <!-- Task execution events -->
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_STARTED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FINISHED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_FAILED"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_TIMEDOUT"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_SESSION_ATTR_SET"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_TASK_REDUCED"/>
                <!-- Cache events -->
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_PUT"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_READ"/>
                <util:constant static-field="org.apache.ignite.events.EventType.EVT_CACHE_OBJECT_REMOVED"/>
            </list>
        </property>
        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <!--
                        Ignite provides several options for automatic discovery that can be used
                        instead of static IP based discovery. For information on all options refer
                        to our documentation: http://apacheignite.readme.io/docs/cluster-config
                    -->
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- In distributed environment, replace with actual host IP address. -->
                                <value>localhost:8000..8099</value>
                                <!--<value>127.0.0.1:47500..47509</value>-->
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
Implemented methods of CacheStoreAdapter:
// Imports added for completeness (Guava's HashMultimap and the Ektorp CouchDB client).
import java.net.MalformedURLException;
import java.util.LinkedList;
import java.util.List;

import javax.cache.integration.CacheLoaderException;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;
import org.ektorp.CouchDbConnector;
import org.ektorp.CouchDbInstance;
import org.ektorp.ViewQuery;
import org.ektorp.http.HttpClient;
import org.ektorp.http.StdHttpClient;
import org.ektorp.impl.StdCouchDbConnector;
import org.ektorp.impl.StdCouchDbInstance;

import com.google.common.collect.HashMultimap;

public class RecordDataStore extends CacheStoreAdapter<Long, List<Record>> {

    // Called whenever a "get(...)" on IgniteCache misses (read-through).
    @Override public List<Record> load(Long key) {
        System.out.println("Load data for pel: " + key);
        try {
            CouchDbConnector db = RecordDataStore.getDb();
            ViewQuery viewQuery = new ViewQuery().designDocId("_design/docs").viewName("all");
            List<Record> list = db.queryView(viewQuery, Record.class);
            HashMultimap<Long, Record> multimap = HashMultimap.create();
            list.forEach(r -> multimap.put(r.getId(), r));
            return new LinkedList<>(multimap.get(key));
        } catch (MalformedURLException e) {
            throw new CacheLoaderException("Failed to load values from cache store.", e);
        }
    }

    // ...

    @Override public void loadCache(IgniteBiInClosure<Long, List<Record>> clo, Object... args) {
        if (args == null || args.length == 0 || args[0] == null)
            throw new CacheLoaderException("Expected entry count parameter is not provided.");

        System.out.println("Loading Cache...");
        final long entryCnt = (Long) args[0];
        try {
            CouchDbConnector db = RecordDataStore.getDb();
            ViewQuery viewQuery = new ViewQuery().designDocId("_design/docs").viewName("all");
            List<Record> list = db.queryView(viewQuery, Record.class);
            HashMultimap<Long, Record> multimap = HashMultimap.create();
            long count = 0;
            for (Record r : list) {
                multimap.put(r.getPel(), r);
                count++;
                if (count == entryCnt)
                    break;
            }
            multimap.keySet().forEach(key -> clo.apply(key, new LinkedList<>(multimap.get(key))));
        } catch (MalformedURLException e) {
            throw new CacheLoaderException("Failed to load values from cache store.", e);
        }
        System.out.println("Loaded Cache");
    }

    public static CouchDbConnector getDb() throws MalformedURLException {
        HttpClient httpClient = new StdHttpClient.Builder()
            .url("server:1111/")
            .build();
        CouchDbInstance dbInstance = new StdCouchDbInstance(httpClient);
        return new StdCouchDbConnector("ignite", dbInstance);
    }
}
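For context, the `loadCache` method in the store above is normally driven from `IgniteCache#loadCache`, which invokes it on every node holding the cache. A hedged usage sketch (the config file path is an assumption; the cache name and entry-count argument match the configuration above):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class WarmupExample {
    public static void main(String[] args) {
        // Start a server node with the cluster configuration shown above (path assumed).
        try (Ignite ignite = Ignition.start("cluster-config.xml")) {
            IgniteCache<Object, Object> cache = ignite.cache("recordData");
            // Runs RecordDataStore.loadCache on each node; the null predicate means
            // "keep everything", and 100L arrives there as args[0] (the entry count).
            cache.loadCache(null, 100L);
        }
    }
}
```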
This Ignite users thread (http://apache-ignite-users.70518.x6.nabble.com/Ignite-cluster-recovery-after-network-partition-td2775.html) points out that IgniteClientDisconnectedException exposes an IgniteFuture, which can be obtained by calling:
IgniteFuture f = myException.reconnectFuture();
This future has a get() method that waits for the node to reconnect:

"Synchronously waits for completion of the computation and returns computation result."

So the following call should complete once the client has reconnected:
f.get();
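Putting this together: cache operations on a disconnected client typically throw a CacheException whose cause is the IgniteClientDisconnectedException, so the usual pattern is to unwrap it, wait on the reconnect future, and retry. A sketch under those assumptions (the config path and key are placeholders):

```java
import javax.cache.CacheException;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;
import org.apache.ignite.Ignition;

public class ClientReconnectExample {
    public static void main(String[] args) {
        // Start a client node using the client configuration above (path assumed).
        Ignite ignite = Ignition.start("client-config.xml");
        IgniteCache<Long, Object> cache = ignite.cache("recordData");
        try {
            cache.get(1L);
        } catch (CacheException e) {
            if (e.getCause() instanceof IgniteClientDisconnectedException) {
                IgniteClientDisconnectedException cause =
                    (IgniteClientDisconnectedException) e.getCause();
                cause.reconnectFuture().get(); // blocks until the client rejoins
                cache.get(1L);                 // retry the operation after reconnect
            } else {
                throw e;
            }
        }
    }
}
```

This avoids the race between steps 6 and 7: instead of failing when the topology is still updating, the client parks on the future and resumes as soon as it is back in the cluster.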