Modifying tokens generated through the standard tokenizer

I am trying to understand how the standard tokenizer works. Below is the code from my TokenizerFactory file:

package pl.allegro.tech.elasticsearch.index.analysis.pl;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.standard.StandardTokenizer;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AbstractTokenizerFactory;

public class UrlTokenizerFactory extends AbstractTokenizerFactory {

    public UrlTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
        super(indexSettings, name, settings);
    }

    @Override
    public Tokenizer create() {
        // StandardTokenizer splits text on Unicode word boundaries (UAX #29);
        // it offers no hook for rewriting the tokens it emits.
        return new StandardTokenizer();
    }
}

I want to modify every token generated by the standard tokenizer. For example, just to verify that I can modify tokens at all, I want to append an "a" (or any other character) to the end of each token. I tried concatenating an "a" with the + operator in the return statement of the create() method above, but it had no effect. Does anyone know how to achieve this?
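(For context: concatenation in create() cannot work, because that method only constructs the Tokenizer object; the token text is produced lazily, one token at a time, inside incrementToken(). In Lucene, the hook for rewriting tokens is a TokenFilter wrapped around the tokenizer. A minimal sketch; the class name AppendSuffixFilter is made up for illustration:

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical filter that appends "_a" to every token of the wrapped stream.
public final class AppendSuffixFilter extends TokenFilter {

    private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

    public AppendSuffixFilter(TokenStream input) {
        super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!input.incrementToken()) {
            return false;         // upstream tokenizer is exhausted
        }
        termAtt.append("_a");     // mutate the current term in place
        return true;
    }
}

Note that create() is declared to return a Tokenizer, and a TokenFilter is not one, so this filter cannot simply be returned from the factory above; in an Elasticsearch plugin it would be registered separately as a token filter.)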

You can define a pattern replace char filter as part of a custom analyzer. It rewrites the input text before tokenization, so every resulting token ends in _a and no Java code changes are needed. You can try it with the _analyze API:

POST _analyze
{
  "text": [
    "Stack overflow"
  ],
  "tokenizer": "standard", 
  "char_filter": [
    {
      "type": "pattern_replace",
      "pattern": "(\S+)",
      "replacement": "[=10=]_a"
    }
  ]
}

Output:

{
  "tokens" : [
    {
      "token" : "Stack_a",
      "start_offset" : 0,
      "end_offset" : 5,
      "type" : "<ALPHANUM>",
      "position" : 0
    },
    {
      "token" : "overflow_a",
      "start_offset" : 6,
      "end_offset" : 14,
      "type" : "<ALPHANUM>",
      "position" : 1
    }
  ]
}
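
To use this outside of _analyze, register the char filter and a custom analyzer in the index settings. A sketch under assumed names (my-index, append_a, append_a_analyzer, and the url field are all made up for illustration):

PUT my-index
{
  "settings": {
    "analysis": {
      "char_filter": {
        "append_a": {
          "type": "pattern_replace",
          "pattern": "(\\S+)",
          "replacement": "$0_a"
        }
      },
      "analyzer": {
        "append_a_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": ["append_a"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "url": { "type": "text", "analyzer": "append_a_analyzer" }
    }
  }
}

Any document indexed into the url field then passes through the same char filter before the standard tokenizer runs.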