Solr index - master/slave replication: how to handle a huge index and high traffic?

Fan*_* H. 15 replication solr

I am currently facing a problem with Solr (more precisely, with slave replication), and after spending a lot of time reading on the web, I find myself having to ask for some enlightenment.

- Is there a limit on the size of a Solr index?

When working with a single master server, at what point do you decide to use multiple cores or multiple indexes? Is there any indication that, once a certain index size is reached, partitioning is recommended?

- Is there a maximum size for segments replicated from master to slave?

When replicating, is there a segment size limit beyond which the slave is unable to download the content and index it? What is the threshold at which a slave can no longer replicate, when there is heavy query traffic to serve and a large number of new documents to replicate?

To be more concrete, here is the context that led me to these questions: we want to index a fairly large number of documents, but once the count reaches more than a hundred thousand, the slaves can no longer handle it and start failing to replicate with a SnapPull error. The documents consist of a few text fields (name, type, description, ... and about 10 other fields of, say, up to 20 characters each).

We have one master server, and two slaves that replicate data from the master.

This is my first time working with Solr (I usually work on webapps using Spring and Hibernate... but not Solr), so I'm not sure how to tackle this problem.

Our idea is, for the time being, to add multiple cores to the master and have one slave replicate from each core. Is this the right way to go?
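For reference, a multi-core layout like the one described above would be declared in the master's `solr.xml` (a sketch for the Solr 1.4-era configuration format; the core names are just placeholders):

```xml
<!-- solr.xml: hypothetical multi-core layout on the master -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <!-- each core has its own instanceDir, schema, and index -->
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>
```

Each slave would then point its replication handler at the URL of one specific core on the master.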

If so, how can we determine the number of cores needed? Right now we're just going to try it, see how it behaves, and adjust if necessary, but I was wondering if there are any best practices or benchmarks that have been done on this specific topic.

Something like: for this number of documents of this average size, you need x cores or indexes...

Thanks for any help on how I could handle a large number of average-sized documents!

Here is a copy of the error that is thrown when a slave attempts to replicate:

ERROR [org.apache.solr.handler.ReplicationHandler] - <SnapPull failed >
org.apache.solr.common.SolrException: Index fetch failed :
        at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:329)
        at org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:264)
        at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:159)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:417)
        at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:280)
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:135)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:65)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:142)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:166)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:650)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:675)
        at java.lang.Thread.run(Thread.java:595)
Caused by: java.lang.RuntimeException: java.io.IOException: read past EOF
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1068)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:418)
        at org.apache.solr.handler.SnapPuller.doCommit(SnapPuller.java:467)
        at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:319)
        ... 11 more
Caused by: java.io.IOException: read past EOF
        at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
        at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
        at org.apache.lucene.store.IndexInput.readInt(IndexInput.java:70)
        at org.apache.lucene.index.SegmentInfos$2.doBody(SegmentInfos.java:410)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:704)
        at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:538)
        at org.apache.lucene.index.SegmentInfos.readCurrentVersion(SegmentInfos.java:402)
        at org.apache.lucene.index.DirectoryReader.isCurrent(DirectoryReader.java:791)
        at org.apache.lucene.index.DirectoryReader.doReopen(DirectoryReader.java:404)
        at org.apache.lucene.index.DirectoryReader.reopen(DirectoryReader.java:352)
        at org.apache.solr.search.SolrIndexReader.reopen(SolrIndexReader.java:413)
        at org.apache.solr.search.SolrIndexReader.reopen(SolrIndexReader.java:424)
        at org.apache.solr.search.SolrIndexReader.reopen(SolrIndexReader.java:35)
        at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1049)
        ... 14 more

EDIT: Following Mauricio's answer, the Solr libraries have been updated to 1.4.1, but this error was still raised. I increased commitReserveDuration, and even though the "SnapPull failed" error seems to have gone away, another one started being raised. I'm not sure why, as I can't seem to find many answers on the web:

ERROR [org.apache.solr.servlet.SolrDispatchFilter] - <ClientAbortException:  java.io.IOException
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:370)
        at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:323)
        at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:396)
        at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:385)
        at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:89)
        at org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:183)
        at org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:89)
        at org.apache.solr.request.BinaryResponseWriter.write(BinaryResponseWriter.java:48)
        at org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:322)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.jstripe.tomcat.probe.Tomcat55AgentValve.invoke(Tomcat55AgentValve.java:20)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
        at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:837)
        at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:640)
        at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1286)
        at java.lang.Thread.run(Thread.java:595)
Caused by: java.io.IOException
        at org.apache.coyote.http11.InternalAprOutputBuffer.flushBuffer(InternalAprOutputBuffer.java:703)
        at org.apache.coyote.http11.InternalAprOutputBuffer$SocketOutputBuffer.doWrite(InternalAprOutputBuffer.java:733)
        at org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:124)
        at org.apache.coyote.http11.InternalAprOutputBuffer.doWrite(InternalAprOutputBuffer.java:539)
        at org.apache.coyote.Response.doWrite(Response.java:560)
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:365)
        ... 22 more
>
ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[SolrServer]] - <Servlet.service() for servlet SolrServer threw exception>
java.lang.IllegalStateException
        at org.apache.catalina.connector.ResponseFacade.sendError(ResponseFacade.java:405)
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:362)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:272)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:215)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:188)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:213)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:172)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at org.jstripe.tomcat.probe.Tomcat55AgentValve.invoke(Tomcat55AgentValve.java:20)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:117)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:108)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:174)
        at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:837)
        at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:640)
        at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1286)
        at java.lang.Thread.run(Thread.java:595)
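For reference, the commitReserveDuration tweak mentioned above is made on the master's replication handler in `solrconfig.xml`; the value below is only an illustrative sketch, not the one we actually used:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <!-- how long the master keeps a commit point reserved so that
         slaves have time to finish downloading its files -->
    <str name="commitReserveDuration">00:05:00</str>
  </lst>
</requestHandler>
```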

I am still wondering what the best practices are for handling a big index (more than 20 GB) containing a large number of documents with Solr. Did I miss some obvious links somewhere? Tutorials, documentation?

Mau*_*fer 15

  • Cores are a tool mainly used to have different schemas in a single Solr instance. Also used as on-deck indexes. Sharding and replication are orthogonal issues.
  • You mention "lots of traffic". That's a highly subjective measure. Instead, try to determine how many QPS (queries per second) you need from Solr. Also, does a single Solr instance answer your queries fast enough? Only then can you determine whether you need to scale out. A single Solr instance can handle a lot of traffic; maybe you don't even need to scale.
  • Make sure you run Solr on a server with plenty of memory (and make sure Java has access to it). Solr is quite memory-hungry; if you put it on a memory-constrained server, performance will suffer.
  • As the Solr wiki explains, use sharding if a single query takes too long to run, and replication if a single Solr instance can't handle the traffic. "Too long" and "traffic" depend on your particular application. Measure them.
  • Solr has lots of settings that affect performance: cache auto-warming, stored fields, merge factor, etc. Check out SolrPerformanceFactors.
  • There are no hard rules here. Every application has different search needs. Simulate and measure for your particular scenario.
  • About the replication error, make sure you are running 1.4.1, since 1.4.0 had a bug with replication.
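As a sketch of the slave side of the master/slave setup discussed above (the host name and polling interval are placeholders, not recommendations), the replication handler in each slave's `solrconfig.xml` looks roughly like this in Solr 1.4:

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- placeholder URL: point at the replication handler of the
         master core this slave should mirror -->
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <!-- how often the slave polls the master for a newer index version -->
    <str name="pollInterval">00:00:60</str>
  </lst>
</requestHandler>
```

The master declares the matching `<lst name="master">` section, and each slave simply pulls changed index files after every commit it detects.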