mai*_*iky · 15 · tags: lucene, trigonometry, similarity, tf-idf
I have built an index in Lucene. I want to compute the score between two documents in the index (cosine similarity or some other distance?) without specifying a query.
For example, I fetch the documents with IDs 2 and 4 from a previously opened IndexReader: Document d1 = ir.document(2); Document d2 = ir.document(4);
How can I get the cosine similarity between these two documents?
Thanks
Mar*_*ler · 16
As Julia points out, Sujit Pal's example is very useful, but the Lucene 4 API changed substantially. Here is a version rewritten for Lucene 4.
import java.io.IOException;
import java.util.*;
import org.apache.commons.math3.linear.*;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.SimpleAnalyzer;
import org.apache.lucene.document.*;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.index.*;
import org.apache.lucene.store.*;
import org.apache.lucene.util.*;
public class CosineDocumentSimilarity {

    public static final String CONTENT = "Content";

    private final Set<String> terms = new HashSet<>();
    private final RealVector v1;
    private final RealVector v2;

    CosineDocumentSimilarity(String s1, String s2) throws IOException {
        Directory directory = createIndex(s1, s2);
        IndexReader reader = DirectoryReader.open(directory);
        Map<String, Integer> f1 = getTermFrequencies(reader, 0);
        Map<String, Integer> f2 = getTermFrequencies(reader, 1);
        reader.close();
        v1 = toRealVector(f1);
        v2 = toRealVector(f2);
    }

    Directory createIndex(String s1, String s2) throws IOException {
        Directory directory = new RAMDirectory();
        Analyzer analyzer = new SimpleAnalyzer(Version.LUCENE_CURRENT);
        IndexWriterConfig iwc = new IndexWriterConfig(Version.LUCENE_CURRENT,
                analyzer);
        IndexWriter writer = new IndexWriter(directory, iwc);
        addDocument(writer, s1);
        addDocument(writer, s2);
        writer.close();
        return directory;
    }

    /* Indexed, tokenized, stored. */
    public static final FieldType TYPE_STORED = new FieldType();

    static {
        TYPE_STORED.setIndexed(true);
        TYPE_STORED.setTokenized(true);
        TYPE_STORED.setStored(true);
        TYPE_STORED.setStoreTermVectors(true);
        TYPE_STORED.setStoreTermVectorPositions(true);
        TYPE_STORED.freeze();
    }

    void addDocument(IndexWriter writer, String content) throws IOException {
        Document doc = new Document();
        Field field = new Field(CONTENT, content, TYPE_STORED);
        doc.add(field);
        writer.addDocument(doc);
    }

    double getCosineSimilarity() {
        return (v1.dotProduct(v2)) / (v1.getNorm() * v2.getNorm());
    }

    public static double getCosineSimilarity(String s1, String s2)
            throws IOException {
        return new CosineDocumentSimilarity(s1, s2).getCosineSimilarity();
    }

    Map<String, Integer> getTermFrequencies(IndexReader reader, int docId)
            throws IOException {
        Terms vector = reader.getTermVector(docId, CONTENT);
        TermsEnum termsEnum = vector.iterator(null);
        Map<String, Integer> frequencies = new HashMap<>();
        BytesRef text;
        while ((text = termsEnum.next()) != null) {
            String term = text.utf8ToString();
            int freq = (int) termsEnum.totalTermFreq();
            frequencies.put(term, freq);
            terms.add(term);
        }
        return frequencies;
    }

    RealVector toRealVector(Map<String, Integer> map) {
        RealVector vector = new ArrayRealVector(terms.size());
        int i = 0;
        for (String term : terms) {
            int value = map.containsKey(term) ? map.get(term) : 0;
            vector.setEntry(i++, value);
        }
        return vector.mapDivide(vector.getL1Norm());
    }
}
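The cosine step itself is plain arithmetic, independent of Lucene. As a self-contained sketch of what getCosineSimilarity computes, here is the same formula applied to two term-frequency maps using only the standard library (the TfCosine class and its cosine helper are hypothetical names introduced for illustration, not part of the answer's code):

```java
import java.util.*;

public class TfCosine {

    // Cosine similarity between two raw term-frequency maps:
    // dot(v1, v2) / (|v1| * |v2|), with the vectors aligned over
    // the union of terms from both documents.
    public static double cosine(Map<String, Integer> f1, Map<String, Integer> f2) {
        Set<String> terms = new TreeSet<>(f1.keySet());
        terms.addAll(f2.keySet());
        double dot = 0, n1 = 0, n2 = 0;
        for (String t : terms) {
            int a = f1.getOrDefault(t, 0);
            int b = f2.getOrDefault(t, 0);
            dot += a * b;
            n1 += a * a;
            n2 += b * b;
        }
        return dot / (Math.sqrt(n1) * Math.sqrt(n2));
    }

    public static void main(String[] args) {
        Map<String, Integer> d1 = Map.of("lucene", 2, "cosine", 1);
        Map<String, Integer> d2 = Map.of("lucene", 1, "similarity", 1);
        System.out.println(cosine(d1, d2)); // approx. 0.632
    }
}
```

Note that the L1 normalization done in toRealVector above does not change the result: cosine similarity is invariant under scaling either vector.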
baj*_*ife · 13
At indexing time, you can choose to store term frequency vectors.
At runtime, look up the term frequency vectors for both documents with IndexReader.getTermFreqVector(), and look up each term's document frequency with IndexReader.docFreq(). That gives you all the components you need to compute the cosine similarity between the two documents.
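As a rough, Lucene-free sketch of that weighting step: each raw term frequency is scaled by an inverse document frequency before taking the cosine, so corpus-wide common terms contribute less. The class, the tfidf/cosine helpers, and the log(N/df) idf formula here are illustrative assumptions; in practice the df counts would come from IndexReader.docFreq().

```java
import java.util.*;

public class TfIdfCosine {

    // Weight each raw term frequency by idf = log(N / df).
    // df: per-term document frequency; numDocs: total documents in the index.
    public static Map<String, Double> tfidf(Map<String, Integer> tf,
                                            Map<String, Integer> df, int numDocs) {
        Map<String, Double> w = new HashMap<>();
        for (Map.Entry<String, Integer> e : tf.entrySet()) {
            int d = df.getOrDefault(e.getKey(), 1);
            w.put(e.getKey(), e.getValue() * Math.log((double) numDocs / d));
        }
        return w;
    }

    // Cosine similarity over the sparse weighted vectors.
    public static double cosine(Map<String, Double> a, Map<String, Double> b) {
        double dot = 0, na = 0, nb = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0.0);
            na += e.getValue() * e.getValue();
        }
        for (double v : b.values()) nb += v * v;
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        int numDocs = 10;
        Map<String, Integer> df = Map.of("the", 10, "lucene", 2);
        Map<String, Double> w1 = tfidf(Map.of("the", 5, "lucene", 2), df, numDocs);
        Map<String, Double> w2 = tfidf(Map.of("the", 3, "lucene", 1), df, numDocs);
        // "the" appears in every document, so its idf is 0 and it drops out.
        System.out.println(cosine(w1, w2));
    }
}
```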
An easier alternative might be to submit document A as a query (adding all of its words to the query as OR terms, boosting each by its term frequency) and then look for document B in the result set.
Archived · Viewed 22,705 times