Lucene 4.1: how do I split words containing a "." at index time?

Seb*_*nne · 1 · lucene, lexical-analysis

I'm trying to figure out what I should do to index keywords that contain a ".".

For example: this.name

I want the terms this and name to end up in my index.

I'm using StandardAnalyzer. I've tried extending WhitespaceTokenizer and extending TokenFilter, but I'm not sure I'm going in the right direction.

If I use StandardAnalyzer I get "this.name" as a single keyword, which is not what I want, although the analyzer handles everything else correctly for me.
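
A quick way to see what tokens an analyzer actually emits looks roughly like this (a rough sketch against the Lucene 4.1 API; the class name and the "content" field name are just placeholders):

import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.Version;

public class PrintTokens {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_41);
    // Feed "this.name" through the analyzer and print every token it produces.
    TokenStream ts = analyzer.tokenStream("content", new StringReader("this.name"));
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      System.out.println(term.toString()); // StandardAnalyzer keeps "this.name" as one token
    }
    ts.end();
    ts.close();
  }
}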

小智 · 5

You can put a CharFilter in front of StandardTokenizer that converts periods and underscores to spaces. MappingCharFilter will do the job.

Here is MappingCharFilter added to a stripped-down StandardAnalyzer (see the original 4.1 version here):

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.charfilter.MappingCharFilter;
import org.apache.lucene.analysis.charfilter.NormalizeCharMap;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.core.StopAnalyzer;
import org.apache.lucene.analysis.core.StopFilter;
import org.apache.lucene.analysis.standard.StandardFilter;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.apache.lucene.analysis.util.StopwordAnalyzerBase;
import org.apache.lucene.util.Version;

import java.io.IOException;
import java.io.Reader;

public final class MyAnalyzer extends StopwordAnalyzerBase {
  private int maxTokenLength = 255;
  public MyAnalyzer() {
    super(Version.LUCENE_41, StopAnalyzer.ENGLISH_STOP_WORDS_SET);
  }

  @Override
  protected TokenStreamComponents createComponents
      (final String fieldName, final Reader reader) {
    // Same chain as StandardAnalyzer: StandardTokenizer -> StandardFilter
    // -> LowerCaseFilter -> StopFilter (English stop words).
    final StandardTokenizer src = new StandardTokenizer(matchVersion, reader);
    src.setMaxTokenLength(maxTokenLength);
    TokenStream tok = new StandardFilter(matchVersion, src);
    tok = new LowerCaseFilter(matchVersion, tok);
    tok = new StopFilter(matchVersion, tok, stopwords);
    return new TokenStreamComponents(src, tok) {
      @Override
      protected void setReader(final Reader reader) throws IOException {
        src.setMaxTokenLength(MyAnalyzer.this.maxTokenLength);
        super.setReader(reader);
      }
    };
  }

  @Override
  protected Reader initReader(String fieldName, Reader reader) {
    // Map '.' and '_' to spaces before the tokenizer sees the text,
    // so "this.name" reaches the tokenizer as "this name".
    NormalizeCharMap.Builder builder = new NormalizeCharMap.Builder();
    builder.add(".", " ");
    builder.add("_", " ");
    NormalizeCharMap normMap = builder.build();
    return new MappingCharFilter(normMap, reader);
  }
}
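
To use the analyzer for indexing, you hand it to an IndexWriter as usual; a rough sketch (the RAMDirectory and the "content" field name are just placeholders, not part of the original answer):

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class IndexWithMyAnalyzer {
  public static void main(String[] args) throws Exception {
    Directory dir = new RAMDirectory(); // in-memory index, just for the demo
    IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_41, new MyAnalyzer());
    IndexWriter writer = new IndexWriter(dir, config);

    Document doc = new Document();
    // "this.name" ends up indexed as the term "name" ("this" is dropped as a stop word)
    doc.add(new TextField("content", "this.name", Field.Store.YES));
    writer.addDocument(doc);
    writer.close();
  }
}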

Here's a quick test to show that it works:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;

public class TestMyAnalyzer extends BaseTokenStreamTestCase {
  private Analyzer analyzer = new MyAnalyzer();

  public void testPeriods() throws Exception {
    BaseTokenStreamTestCase.assertAnalyzesTo
    (analyzer, 
     "this.name; here.i.am; sentences ... end with periods.",
     new String[] { "name", "here", "i", "am", "sentences", "end", "periods" } );
  }

  public void testUnderscores() throws Exception {
    BaseTokenStreamTestCase.assertAnalyzesTo
        (analyzer,
         "some_underscore_term _and____ stuff that is_not in it",
         new String[] { "some", "underscore", "term", "stuff" } );
  }
}
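
(BaseTokenStreamTestCase lives in Lucene's test-framework module, so the lucene-test-framework jar needs to be on the test classpath to run this.)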