Getting the same results using different similarities in Lucene

We are using Lucene in Java to search documents and determine whether they are relevant. We are searching in six different ways:

1. VSM (ClassicSimilarity, TF-IDF) with stop words and stemming
2. VSM with stemming, without stop words
3. VSM with stop words, without stemming
4. BM25 with stop words and stemming
5. BM25 with stemming, without stop words
6. BM25 with stop words, without stemming

Search configurations 3 and 6 return identical results, and configurations 1, 2, 4 and 5 also return identical results. This suggests that only changing the analyzer (stemming or not) changes anything.

We have tried debugging to check whether the objects are what we expect, and everything seems to be in order; the objects simply do not behave the way we would like. We have also made sure to use the same similarity at index time and at search time.

What are we doing wrong? Are we missing some code that 'applies' the configuration correctly?

    public IndexWriterConfig index(List<DocumentInCollection> docs) throws IOException 
    {

        Analyzer analyz;
        IndexWriterConfig config;

        if (analyzer.equals("vsm") && stopwords && stemmer) 
        {
            //VSM cosine similarity with TFIDF + stopwords + stemmer
            CharArraySet stopWords = EnglishAnalyzer.getDefaultStopSet();
            analyz = new EnglishAnalyzer(stopWords);
            config = new IndexWriterConfig(analyz);
            config.setSimilarity(new ClassicSimilarity());
        } 
        else if (analyzer.equals("vsm") && !stopwords && stemmer) 
        {
            //VSM cosine similarity with TFIDF - stopwords + stemmer
            analyz = new EnglishAnalyzer(CharArraySet.EMPTY_SET);
            config = new IndexWriterConfig(analyz);
            config.setSimilarity(new ClassicSimilarity());
        } 
        else if (analyzer.equals("vsm") && stopwords && !stemmer) 
        {
            //VSM cosine similarity with TFIDF - stopwords - stemmer
            CharArraySet stopWords = StandardAnalyzer.STOP_WORDS_SET;
            analyz = new StandardAnalyzer(stopWords);
            config = new IndexWriterConfig(analyz);
            config.setSimilarity(new ClassicSimilarity());
        } 
        else if (analyzer.equals("bm25") && stopwords && stemmer) 
        {
            //Analyzer + stopwords + stemmer
            CharArraySet stopWords = EnglishAnalyzer.getDefaultStopSet();
            analyz = new EnglishAnalyzer(stopWords);
            config = new IndexWriterConfig(analyz);
            //BM25 ranking method
            config.setSimilarity(new BM25Similarity());
        } 
        else if (analyzer.equals("bm25") && !stopwords && stemmer) 
        {
            //Analyzer - stopwords + stemmer
            analyz = new EnglishAnalyzer(CharArraySet.EMPTY_SET);
            config = new IndexWriterConfig(analyz);
            //BM25 ranking method
            config.setSimilarity(new BM25Similarity());
        } 
        else if (analyzer.equals("bm25") && stopwords && !stemmer) 
        {
            //Analyzer + stopwords - stemmer
            CharArraySet stopWords = StandardAnalyzer.STOP_WORDS_SET;
            analyz = new StandardAnalyzer(stopWords);
            config = new IndexWriterConfig(analyz);
            //BM25 ranking method
            config.setSimilarity(new BM25Similarity());
        }
        else 
        {
            //some default
            analyz = new StandardAnalyzer();
            config = new IndexWriterConfig(analyz);
            config.setSimilarity(new ClassicSimilarity());
        }


        IndexWriter w = new IndexWriter(corpus, config);

        //total 153 documents with group 5
        for (DocumentInCollection doc1 : docs) {
            if (doc1.getSearchTaskNumber() == 5) {
                Document doc = new Document();
                doc.add(new TextField("title", doc1.getTitle(), Field.Store.YES));
                doc.add(new TextField("abstract_text", doc1.getAbstractText(), Field.Store.YES));
                doc.add(new TextField("relevance", Boolean.toString(doc1.isRelevant()), Field.Store.YES));
                w.addDocument(doc);
                totalDocs++;
                if (doc1.isRelevant()) relevantDocs++;
            }
        }

        w.close();

        return config;
    }

    public List<String> search(String searchQuery, IndexWriterConfig cf) throws IOException {

        printQuery(searchQuery);

        List<String> results = new LinkedList<String>();


        //Constructing QueryParser to stem search query
        QueryParser qp = new QueryParser("abstract_text", cf.getAnalyzer());
        Query stemmedQuery = null;
        try {
            stemmedQuery = qp.parse(searchQuery);
        } catch (ParseException e) {
            e.printStackTrace();
        }



        // opening directory for search
        IndexReader reader = DirectoryReader.open(corpus);
        // implementing search over IndexReader
        IndexSearcher searcher = new IndexSearcher(reader);

        searcher.setSimilarity(cf.getSimilarity());

        // finding top totalDocs documents qualifying the search
        TopDocs docs = searcher.search(stemmedQuery, totalDocs);

        // representing array of hits from TopDocs
        ScoreDoc[] scored = docs.scoreDocs;

        // adding matched doc titles to results
        for (ScoreDoc aDoc : scored) {
            Document d = searcher.doc(aDoc.doc);
            retrieved++;
            //relevance and score are printed out for debug purposes
            if (d.get("relevance").equals("true")) {
                relevantRetrieved++;
                results.add("+ " + d.get("title") + " | relevant: " + d.get("relevance") + " | score: " + aDoc.score);
            } else {
                results.add("- " + d.get("title") + " | relevant: " + d.get("relevance") + " | score: " + aDoc.score);
            }

        }


        return results;
    }

First, you generally would not expect BM25 and Classic similarities to return a different *set* of results, just different scores (and therefore a different ordering). A Similarity determines how a score is calculated for documents that have already been found to match the query, so both will usually return the same documents, but with different scores and thus in a different order.
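For example, here is a minimal sketch of how to check that yourself (assuming a Lucene 6.x-style API, reusing the `corpus` Directory and `abstract_text` field from your code, and with the same imports as your snippets; the query string is just a placeholder): two searchers over the same index, one with ClassicSimilarity and one with BM25Similarity, should match the same documents while scoring them differently.

    // Sketch: run the same query with two different similarities over one index.
    // Assumes "corpus" is the Directory populated by index(...) above.
    public void compareSimilarities(String queryString) throws IOException, ParseException {
        IndexReader reader = DirectoryReader.open(corpus);

        IndexSearcher classic = new IndexSearcher(reader);
        classic.setSimilarity(new ClassicSimilarity());

        IndexSearcher bm25 = new IndexSearcher(reader);
        bm25.setSimilarity(new BM25Similarity());

        // Same analyzer for both, so the parsed query (and therefore the matching) is identical.
        Query q = new QueryParser("abstract_text", new EnglishAnalyzer()).parse(queryString);

        ScoreDoc[] classicHits = classic.search(q, 50).scoreDocs;
        ScoreDoc[] bm25Hits = bm25.search(q, 50).scoreDocs;

        // Expect the same number of hits (and the same doc ids overall), but different scores.
        System.out.println("classic hits: " + classicHits.length + ", bm25 hits: " + bm25Hits.length);
        for (int i = 0; i < Math.min(classicHits.length, bm25Hits.length); i++) {
            System.out.println("classic doc=" + classicHits[i].doc + " score=" + classicHits[i].score
                    + " | bm25 doc=" + bm25Hits[i].doc + " score=" + bm25Hits[i].score);
        }

        reader.close();
    }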

If you are seeing identical *scores* with the bm25 and vsm settings, then yes, something is wrong. However, based on my stripped-down, runnable test version of it, your code *looks* fine to me: https://gist.github.com/anonymous/baf279806702edb54fab23db6d8d19b9

The stop word filter generally does not make that big a difference either. It controls whether stop words, words like "the" and "this", get indexed. With the stop filter in place they are not indexed and cannot be searched. Unless your query actually contains stop words, the difference will usually not be noticeable. Again, based on my test version, that appears to be working correctly as well.
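If you want to see exactly what the stop filter changes, a small sketch (again assuming a Lucene 6.x-style API; the sample text is made up) is to run the same text through the analyzer with and without the default stop set and print the resulting tokens:

    // Sketch: tokenize the same text with and without stop-word filtering.
    // Only queries containing one of the removed terms ("the", "this", ...) will
    // behave differently between the two analyzer configurations.
    public void showStopWordEffect() throws IOException {
        Analyzer withStops = new EnglishAnalyzer(EnglishAnalyzer.getDefaultStopSet());
        Analyzer noStops = new EnglishAnalyzer(CharArraySet.EMPTY_SET);

        for (Analyzer a : Arrays.asList(withStops, noStops)) {
            TokenStream ts = a.tokenStream("abstract_text", "this is the game engine");
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            ts.reset();
            StringBuilder tokens = new StringBuilder();
            while (ts.incrementToken()) {
                tokens.append(term.toString()).append(' ');
            }
            ts.end();
            ts.close();
            // Expected output is roughly "game engin" with the default stop set,
            // and "this is the game engin" with the empty stop set.
            System.out.println(tokens);
        }
    }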