Asked by cro*_*ked (score 15) · Tags: java, spring, multithreading, spring-batch
I have a problem creating an asynchronous processor in Spring Batch. My processor takes an ID from the reader and builds an object from the responses of SOAP calls. Sometimes one input (ID) requires e.g. 60-100 SOAP calls, sometimes only one. I tried to make the step multi-threaded, processing e.g. 50 inputs at a time, but it was useless: 49 threads finished their work within a second and then sat blocked, waiting for the one that was still making its 60-100 SOAP calls. Now I use AsyncItemProcessor + AsyncItemWriter, but this solution is too slow for me. Since my input is large (about 25k IDs read from the DB), I would like to have ~50-100 inputs in flight at a time.
Here is my configuration:
@Configuration
public class BatchConfig {

    @Autowired
    public JobBuilderFactory jobBuilderFactory;

    @Autowired
    public StepBuilderFactory stepBuilderFactory;

    @Autowired
    private DatabaseConfig databaseConfig;

    @Value(value = "classpath:Categories.txt")
    private Resource categories;

    @Bean
    public Job processJob() throws Exception {
        return jobBuilderFactory.get("processJob")
                .incrementer(new RunIdIncrementer())
                .listener(listener())
                .flow(orderStep1())
                .end()
                .build();
    }

    @Bean
    public Step orderStep1() throws Exception {
        return stepBuilderFactory.get("orderStep1")
                .<Category, CategoryDailyResult>chunk(1)
                .reader(reader())
                .processor(asyncItemProcessor())
                .writer(asyncItemWriter())
                .taskExecutor(taskExecutor())
                .build();
    }

    @Bean
    public JobExecutionListener listener() {
        return new JobCompletionListener();
    }

    @Bean
    public ItemWriter asyncItemWriter() {
        AsyncItemWriter<CategoryDailyResult> asyncItemWriter = new AsyncItemWriter<>();
        asyncItemWriter.setDelegate(itemWriter());
        return asyncItemWriter;
    }

    @Bean
    public ItemWriter<CategoryDailyResult> itemWriter() {
        return new Writer();
    }

    @Bean
    public ItemProcessor asyncItemProcessor() {
        AsyncItemProcessor<Category, CategoryDailyResult> asyncItemProcessor = new AsyncItemProcessor<>();
        asyncItemProcessor.setDelegate(itemProcessor());
        asyncItemProcessor.setTaskExecutor(taskExecutor());
        return asyncItemProcessor;
    }

    @Bean
    public ItemProcessor<Category, CategoryDailyResult> itemProcessor() {
        return new Processor();
    }

    @Bean
    public TaskExecutor taskExecutor() {
        SimpleAsyncTaskExecutor taskExecutor = new SimpleAsyncTaskExecutor();
        taskExecutor.setConcurrencyLimit(50);
        return taskExecutor;
    }

    @Bean(destroyMethod = "")
    public ItemReader<Category> reader() throws Exception {
        String query = "select c from Category c where not exists elements(c.children)";
        JpaPagingItemReader<Category> reader = new JpaPagingItemReader<>();
        reader.setSaveState(false);
        reader.setQueryString(query);
        reader.setEntityManagerFactory(databaseConfig.entityManagerFactory().getObject());
        reader.setPageSize(1);
        return reader;
    }
}
How can I speed up my application? Am I doing something wrong? Any feedback is welcome ;)
@Edit: For input IDs 1 to 100 I want, say, 50 threads executing the processor, and I want them not to block each other: while Thread1 processes input "1" for 2 minutes, Thread2 should meanwhile process inputs "2", "8", "64", which are small and finish within seconds.
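The scheduling described above can be simulated with a plain JDK thread pool (no Spring; the class name, task counts, and timings below are illustrative assumptions): one slow item ties up a single worker thread while the small items drain through the remaining workers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

// Plain-JDK simulation of the desired behavior: with a pool of 4 workers,
// one "slow" task (500 ms) occupies one thread while 12 "fast" tasks
// (5 ms each) complete on the other three without waiting for it.
public class PoolDemo {

    // Returns how many fast tasks finished while the slow one was still running.
    public static int fastDoneBeforeSlow() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CountDownLatch slowRunning = new CountDownLatch(1);
        Future<?> slow = pool.submit(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            slowRunning.countDown(); // slow task done
        });
        List<Future<?>> fast = new ArrayList<>();
        for (int i = 0; i < 12; i++) {
            fast.add(pool.submit(() -> {
                try { Thread.sleep(5); } catch (InterruptedException ignored) {}
            }));
        }
        int doneEarly = 0;
        for (Future<?> f : fast) {
            f.get();                                       // wait for this fast task
            if (slowRunning.getCount() == 1) doneEarly++;  // slow task still busy?
        }
        slow.get();
        pool.shutdown();
        return doneEarly;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fastDoneBeforeSlow() + " fast tasks finished before the slow one");
    }
}
```

With a fixed pool, the 12 fast tasks take roughly 4 rounds of 5 ms on the three free workers, so all of them complete long before the 500 ms task does.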
@Edit2:
My goal:
I have 25k IDs in the database. I read them with a JpaPagingItemReader, and each ID is processed by the processor. Each item is independent of the others. For each ID I make 0-100 SOAP calls in a loop, then I build an object that I pass to the Writer, which saves it in the database. How can I get the best performance for such a task?
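One detail in the configuration above that may limit throughput is chunk(1) combined with setPageSize(1): every item gets its own transaction and its own page query. A hedged sketch of a larger chunk (the value 100 is an illustrative assumption, not a tested recommendation; with AsyncItemProcessor the step-level taskExecutor can be dropped, since the processor already hands items to its own executor):

```java
// Sketch only: one transaction covers many items instead of one.
// chunk(100) and a matching reader.setPageSize(100) are assumed values.
@Bean
public Step orderStep1() throws Exception {
    return stepBuilderFactory.get("orderStep1")
            .<Category, CategoryDailyResult>chunk(100)
            .reader(reader())               // consider reader.setPageSize(100) as well
            .processor(asyncItemProcessor())
            .writer(asyncItemWriter())
            .build();
}
```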
@KCrookedHand: I have dealt with a similar scenario, where I had to read a few thousand items and call a SOAP service (which I injected into the itemReader) to match criteria.
My configuration looks like the one below. Basically, you have a couple of options to achieve parallel processing, two of which are partitioning and the client-server approach. I chose partitioning because it gives me better control over how many partitions I need based on my data.
Use a ThreadPoolTaskExecutor, as @MichaelMinella mentioned, for the step execution below, together with a tasklet where applicable.
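In Java config, the ThreadPoolTaskExecutor mentioned above might look like this (a sketch only; the pool sizes, queue capacity, and thread-name prefix are illustrative assumptions, not values from the answer):

```java
// Hedged sketch: a bounded ThreadPoolTaskExecutor in place of SimpleAsyncTaskExecutor.
@Bean
public TaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(50);        // ~50 SOAP calls in flight at once
    executor.setMaxPoolSize(100);        // allow bursts up to 100 workers
    executor.setQueueCapacity(500);      // bounded backlog of waiting items
    executor.setThreadNamePrefix("soap-worker-");
    executor.initialize();
    return executor;
}
```

Unlike SimpleAsyncTaskExecutor, which creates a new thread per task, this reuses a fixed pool and applies back-pressure once the queue is full.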
<batch:step id="notificationMapper">
    <batch:partition partitioner="partitioner" step="readXXXStep" />
</batch:step>
</batch:job>

<batch:step id="readXXXStep">
    <batch:job ref="jobRef" job-launcher="jobLauncher"
        job-parameters-extractor="jobParameterExtractor" />
</batch:step>

<batch:job id="jobRef">

    <batch:step id="dummyStep" next="skippedItemsDecision">
        <batch:tasklet ref="dummyTasklet" />
        <batch:listeners>
            <batch:listener ref="stepListener" />
        </batch:listeners>
    </batch:step>

    <batch:step id="xxx.readItems" next="xxx.then.finish">
        <batch:tasklet>
            <batch:chunk reader="xxxChunkReader" processor="chunkProcessor"
                writer="itemWriter" commit-interval="100">
            </batch:chunk>
        </batch:tasklet>
        <batch:listeners>
            <batch:listener ref="taskletListener" />
        </batch:listeners>
    </batch:step>
...
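The range arithmetic behind such a partitioner can be sketched without any Spring types (the class and method names below are hypothetical; a real Spring Batch Partitioner would put each min/max pair into an ExecutionContext and return the map from partition(gridSize)):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

// Pure-JDK sketch: split the ID space [minId, maxId] into gridSize
// contiguous ranges, one per partition/worker.
public class IdRanges {

    public static Map<String, long[]> split(long minId, long maxId, int gridSize) {
        Map<String, long[]> parts = new LinkedHashMap<>();
        long span = maxId - minId + 1;
        long size = (span + gridSize - 1) / gridSize; // ceiling division
        for (int i = 0; i < gridSize; i++) {
            long lo = minId + (long) i * size;
            if (lo > maxId) break;                    // fewer IDs than partitions
            long hi = Math.min(lo + size - 1, maxId);
            parts.put("partition" + i, new long[]{lo, hi});
        }
        return parts;
    }

    public static void main(String[] args) {
        // 25 000 IDs split across 50 partitions -> 500 IDs each
        Map<String, long[]> p = split(1, 25_000, 50);
        System.out.println(p.size() + " partitions, first=" + Arrays.toString(p.get("partition0")));
    }
}
```

Each reader in the partitioned step would then query only its own ID range, so one slow partition never blocks the others.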
Views: 4226