Same regex works differently in Java and Elasticsearch

I am using a pattern tokenizer in Elasticsearch, with the regex "\p{Punct}{1}".

I have also written a Java program that uses the same regex. However, when I compare the output of the Java program with that of the Elasticsearch analyzer using the same pattern, the results are different.

The code in my Java file is:

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.ArrayList;
import java.util.List;

public class characters {
    public static void main(String[] args) {

        String userInput = "HTTps://www.google.cOM/";
        userInput = userInput.toLowerCase();

        // Note the doubled backslash: in Java source the regex \p{Punct}{1}
        // has to be written as "\\p{Punct}{1}" or the literal will not compile.
        Pattern pattern = Pattern.compile("\\p{Punct}{1}");
        List<String> list = new ArrayList<String>();
        Matcher m = pattern.matcher(userInput);
        // Collect every substring that matches the pattern, i.e. each punctuation character.
        while (m.find()) {
            list.add(m.group());
        }
        System.out.println(list);
    }
}

The above program gives the following result:

[:, /, /, ., ., /]
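
For comparison, the same regex used to split the input rather than to collect matches produces word-like tokens instead of punctuation. A minimal sketch (the class name SplitDemo is just for illustration):

import java.util.Arrays;
import java.util.regex.Pattern;

public class SplitDemo {
    public static void main(String[] args) {
        String userInput = "HTTps://www.google.cOM/".toLowerCase();
        Pattern pattern = Pattern.compile("\\p{Punct}{1}");

        // Splitting keeps the text between the matches and discards the punctuation:
        // [https, , , www, google, com] (empty strings appear between adjacent delimiters).
        System.out.println(Arrays.asList(pattern.split(userInput)));
    }
}

Lucene's PatternTokenizer in split mode additionally drops the empty tokens, which is consistent with the four tokens in the Elasticsearch output shown further below.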

The code in my Elasticsearch pattern tokenizer factory is:

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.pattern.PatternTokenizer;
import org.elasticsearch.common.regex.Regex;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.env.Environment;
import org.elasticsearch.index.IndexSettings;
import org.elasticsearch.index.analysis.AbstractTokenizerFactory;

import java.util.regex.Pattern;

public class UrlTokenizerFactory extends AbstractTokenizerFactory {

    private final Pattern pattern;
    private final int group;

    public UrlTokenizerFactory(IndexSettings indexSettings, Environment environment, String name, Settings settings) {
        super(indexSettings, name, settings);

        // Again, the backslash must be escaped in Java source for the default pattern.
        String sPattern = settings.get("pattern", "\\p{Punct}{1}");

        if (sPattern == null) {
            throw new IllegalArgumentException("pattern is missing for [" + name + "] tokenizer of type 'pattern'");
        }

        this.pattern = Regex.compile(sPattern, settings.get("flags"));
        // group -1 makes PatternTokenizer use the pattern to split the input;
        // any other value returns that capture group of each match as a token.
        this.group = settings.getAsInt("group", -1);
    }

    @Override
    public Tokenizer create() {
        return new PatternTokenizer(pattern, group);
    }
}
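
For context, a factory like this is normally registered through an AnalysisPlugin. The sketch below is only an illustration and assumes an Elasticsearch 6.x-style plugin API; the plugin class UrlAnalysisPlugin and the tokenizer name "url_tokenizer" are made-up names:

import java.util.Collections;
import java.util.Map;

import org.elasticsearch.index.analysis.TokenizerFactory;
import org.elasticsearch.indices.analysis.AnalysisModule;
import org.elasticsearch.plugins.AnalysisPlugin;
import org.elasticsearch.plugins.Plugin;

public class UrlAnalysisPlugin extends Plugin implements AnalysisPlugin {

    @Override
    public Map<String, AnalysisModule.AnalysisProvider<TokenizerFactory>> getTokenizers() {
        // Register the factory under the tokenizer name "url_tokenizer"; the constructor
        // reference matches the (IndexSettings, Environment, String, Settings) signature above.
        return Collections.singletonMap("url_tokenizer", UrlTokenizerFactory::new);
    }
}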

This tokenizer produces the following result:

  "tokens" : [
    {
      "token" : "https",
      "start_offset" : 0,
      "end_offset" : 5,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "www",
      "start_offset" : 8,
      "end_offset" : 11,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "google",
      "start_offset" : 12,
      "end_offset" : 18,
      "type" : "word",
      "position" : 2
    },
    {
      "token" : "com",
      "start_offset" : 19,
      "end_offset" : 22,
      "type" : "word",
      "position" : 3
    }
  ]

The desired result is the one produced by the Java program; however, Elasticsearch gives a different result.

I was able to solve this by simply changing

this.group = settings.getAsInt("group", -1);

to:

this.group = settings.getAsInt("group", 0);

in my PatternTokenizer factory file.
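
The reason for the difference is how Lucene's PatternTokenizer interprets the group parameter: with group -1 the pattern is used to split the input, so the punctuation is discarded and the text between the matches becomes the tokens, whereas with group 0 each full match of the pattern itself is emitted as a token. A minimal standalone sketch of the two modes, assuming lucene-analyzers-common is on the classpath (the class name PatternTokenizerDemo is just for illustration):

import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.lucene.analysis.pattern.PatternTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class PatternTokenizerDemo {

    // Run the given text through PatternTokenizer with the given group setting
    // and collect the emitted tokens.
    static List<String> tokenize(String text, int group) throws Exception {
        Pattern pattern = Pattern.compile("\\p{Punct}{1}");
        List<String> tokens = new ArrayList<>();
        try (PatternTokenizer tokenizer = new PatternTokenizer(pattern, group)) {
            tokenizer.setReader(new StringReader(text));
            CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
            tokenizer.reset();
            while (tokenizer.incrementToken()) {
                tokens.add(term.toString());
            }
            tokenizer.end();
        }
        return tokens;
    }

    public static void main(String[] args) throws Exception {
        String text = "https://www.google.com/";
        System.out.println(tokenize(text, -1)); // split mode, e.g. [https, www, google, com]
        System.out.println(tokenize(text, 0));  // match mode, e.g. [:, /, /, ., ., /]
    }
}

So with group 0, the custom tokenizer emits the same punctuation tokens as the plain Java program above.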