Searching for product codes, phone numbers in Lucene

I am looking for general advice on how to search for identifiers, product codes, or phone numbers in Apache Lucene 8.x. Say I am trying to search a list of product codes (e.g. an ISBN such as 978-3-86680-192-9). If somebody enters 9783, 978 3, or 978-3, then 978-3-86680-192-9 should be found. The same should work if an identifier uses an arbitrary combination of letters, whitespace, digits, and punctuation (examples: TS 123, 123.abc). How would I go about this?

I thought I could solve this with a custom analyzer that removes all punctuation and whitespace, but the results are mixed:

public class IdentifierAnalyzer extends Analyzer {
    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new KeywordTokenizer();
        TokenStream tokenStream = new LowerCaseFilter(tokenizer);
        tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("[^0-9a-z]"), "", true);
        tokenStream = new TrimFilter(tokenStream);
        return new TokenStreamComponents(tokenizer, tokenStream);
    }

    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        TokenStream tokenStream = new LowerCaseFilter(in);
        tokenStream = new PatternReplaceFilter(tokenStream, Pattern.compile("[^0-9a-z]"), "", true);
        tokenStream = new TrimFilter(tokenStream);
        return tokenStream;
    }
}
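The net effect of that filter chain can be illustrated without Lucene at all. The following sketch (a hypothetical helper, not part of the analyzer itself) mirrors what the lowercasing and the `PatternReplaceFilter` regex do to an identifier:

```java
import java.util.regex.Pattern;

// Hypothetical helper mirroring the analyzer's filter chain:
// lowercase the input, then strip every character that is not a digit
// or a lowercase ASCII letter (the same pattern used above).
public class IdentifierNormalizer {
    private static final Pattern NON_ALPHANUMERIC = Pattern.compile("[^0-9a-z]");

    public static String normalize(String input) {
        return NON_ALPHANUMERIC.matcher(input.toLowerCase()).replaceAll("");
    }

    public static void main(String[] args) {
        System.out.println(normalize("978-3-86680-192-9")); // 9783866801929
        System.out.println(normalize("TS 123"));            // ts123
    }
}
```

This shows why the single-token approach indexes well but searches poorly: the whole identifier collapses into one term, so any query that the query parser splits on whitespace can no longer match it as a prefix.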

So, while I get the results I want when performing a PrefixQuery with TS1*, TS 1* (with a whitespace) does not yield satisfactory results. When I look at the parsed query, I see that Lucene splits TS 1* into two queries: myField:TS myField:1*. The WordDelimiterGraphFilter looks interesting, but I could not figure out how to apply it here.

This is not a comprehensive answer, but I agree that the WordDelimiterGraphFilter is probably a good fit for this kind of data. That said, there may still be test cases which need additional handling.

Here is my custom analyzer, using a WordDelimiterGraphFilter:

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.KeywordTokenizer;
import org.apache.lucene.analysis.core.LowerCaseFilter;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilterFactory;
import java.util.Map;
import java.util.HashMap;

public class IdentifierAnalyzer extends Analyzer {

    private WordDelimiterGraphFilterFactory getWordDelimiter() {
        Map<String, String> settings = new HashMap<>();
        settings.put("generateWordParts", "1");   // e.g. "PowerShot" => "Power" "Shot"
        settings.put("generateNumberParts", "1"); // e.g. "500-42" => "500" "42"
        settings.put("catenateAll", "1");         // e.g. "wi-fi" => "wifi" and "500-42" => "50042"
        settings.put("preserveOriginal", "1");    // e.g. "500-42" => "500" "42" "500-42"
        settings.put("splitOnCaseChange", "1");   // e.g. "fooBar" => "foo" "Bar"
        return new WordDelimiterGraphFilterFactory(settings);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tokenizer = new KeywordTokenizer();
        TokenStream tokenStream = new LowerCaseFilter(tokenizer);
        tokenStream = getWordDelimiter().create(tokenStream);
        return new TokenStreamComponents(tokenizer, tokenStream);
    }
    
    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        TokenStream tokenStream = new LowerCaseFilter(in);
        return tokenStream;
    }

}

It uses the WordDelimiterGraphFilterFactory helper, together with a map of parameters, to control which settings are applied.

You can see the complete list of available settings in the WordDelimiterGraphFilterFactory JavaDoc. You may want to experiment with setting and unsetting different ones.
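To get a feel for what those settings produce, here is a rough, pure-JDK sketch (a hypothetical imitation, not the filter's actual implementation, and ignoring the token graph positions the real filter maintains) of the terms emitted for a lowercased identifier under generateWordParts/generateNumberParts, catenateAll, and preserveOriginal:

```java
import java.util.ArrayList;
import java.util.List;

// A simplified, hypothetical imitation of WordDelimiterGraphFilter's output
// for the settings used above. The real filter emits a token graph with
// position information; this sketch only lists the term texts.
public class DelimiterSketch {
    public static List<String> tokens(String identifier) {
        String lower = identifier.toLowerCase();
        List<String> result = new ArrayList<>();
        // generateWordParts / generateNumberParts: split on delimiters
        for (String part : lower.split("[^0-9a-z]+")) {
            if (!part.isEmpty()) result.add(part);
        }
        // catenateAll: all parts joined into one term
        result.add(lower.replaceAll("[^0-9a-z]", ""));
        // preserveOriginal: the (lowercased) input itself
        result.add(lower);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(tokens("978-3-86680-192-9"));
        // [978, 3, 86680, 192, 9, 9783866801929, 978-3-86680-192-9]
    }
}
```

The combination matters: the individual parts support partial matches like 978 3, the catenated term supports 9783*, and the preserved original supports exact lookups.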

Here is a test index builder for the following 3 input values:

978-3-86680-192-9
TS 123
123.abc
public static void buildIndex() throws IOException, FileNotFoundException, ParseException {
    final Directory dir = FSDirectory.open(Paths.get(INDEX_PATH));
    Analyzer analyzer = new IdentifierAnalyzer();
    IndexWriterConfig iwc = new IndexWriterConfig(analyzer);
    iwc.setOpenMode(OpenMode.CREATE);
    Document doc;

    List<String> identifiers = Arrays.asList("978-3-86680-192-9", "TS 123", "123.abc");

    try (IndexWriter writer = new IndexWriter(dir, iwc)) {
        for (String identifier : identifiers) {
            doc = new Document();
            doc.add(new TextField("identifiers", identifier, Field.Store.YES));
            writer.addDocument(doc);
        }
    }
}

Because of the settings above, each identifier is indexed as several tokens: the individual word and number parts, the fully catenated form, and the (lowercased) original.
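You can inspect those tokens yourself by consuming the analyzer's TokenStream directly; this is the standard Lucene token-inspection idiom, sketched here assuming lucene-core and lucene-analyzers-common are on the classpath:

```java
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class TokenDumper {
    // Prints each term the given analyzer produces for the given text.
    public static void dumpTokens(Analyzer analyzer, String text) throws IOException {
        try (TokenStream stream = analyzer.tokenStream("identifiers", text)) {
            CharTermAttribute term = stream.addAttribute(CharTermAttribute.class);
            stream.reset();
            while (stream.incrementToken()) {
                System.out.println(term.toString());
            }
            stream.end();
        }
    }
}
```

Calling `TokenDumper.dumpTokens(new IdentifierAnalyzer(), "978-3-86680-192-9")` prints one term per line, which is a handy way to verify the effect of each factory setting.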

To query the indexed data above, I used this:

public static void doSearch() throws IOException, ParseException {
    Analyzer analyzer = new IdentifierAnalyzer();
    QueryParser parser = new QueryParser("identifiers", analyzer);

    List<String> searches = Arrays.asList("9783", "9783*", "978 3", "978-3", "TS1*", "TS 1*");

    for (String search : searches) {
        Query query = parser.parse(search);
        printHits(query, search);
    }
}

private static void printHits(Query query, String search) throws IOException {
    System.out.println("search term: " + search);
    System.out.println("parsed query: " + query.toString());
    // try-with-resources ensures the reader is closed after each search
    try (IndexReader reader = DirectoryReader.open(FSDirectory.open(Paths.get(INDEX_PATH)))) {
        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs results = searcher.search(query, 100);
        ScoreDoc[] hits = results.scoreDocs;
        System.out.println("hits: " + hits.length);
        for (ScoreDoc hit : hits) {
            System.out.println("");
            System.out.println("  doc id: " + hit.doc + "; score: " + hit.score);
            Document doc = searcher.doc(hit.doc);
            System.out.println("  identifier: " + doc.get("identifiers"));
        }
    }
    System.out.println("-----------------------------------------");
}

This uses the following search terms, all of which I pass to the classic query parser (you could, of course, use more sophisticated query types via the API):

9783
9783*
978 3
978-3
TS1*
TS 1*

The only query that fails to find any matching documents is the first one:

search term: 9783
parsed query: identifiers:9783
hits: 0

This should come as no surprise, since it is a partial token with no wildcard. The second query (with the wildcard added) finds one document, as expected.

The final query I tested, TS 1*, finds three matches, but the match we want scores highest:

search term: TS 1*
parsed query: identifiers:ts identifiers:1*
hits: 3

  doc id: 1; score: 1.590861
  identifier: TS 123

  doc id: 0; score: 1.0
  identifier: 978-3-86680-192-9

  doc id: 2; score: 1.0
  identifier: 123.abc