Elasticsearch analyzer doesn't replace the apostrophes (')

Using Elasticsearch v7.0.
This is the analyzer I have implemented (http://phoenyx2:9200/search_dev/_settings?pretty=true):

{
    "search_dev": {
        "settings": {
            "index": {
                "refresh_interval": "30s",
                "number_of_shards": "1",
                "provided_name": "search_dev",
                "creation_date": "1558444846417",
                "analysis": {
                    "analyzer": {
                        "my_standard": {
                            "filter": [
                                "lowercase"
                            ],
                            "char_filter": [
                                "my_char_filter"
                            ],
                            "tokenizer": "standard"
                        }
                    },
                    "char_filter": {
                        "my_char_filter": {
                            "type": "mapping",
                            "mappings": [
                                "' => "
                            ]
                        }
                    }
                },
                "number_of_replicas": "1",
                "uuid": "hYz0ZlWFTDKearW1rpx8lw",
                "version": {
                    "created": "7000099"
                }
            }
        }
    }
}

I have already re-created the whole index, and the analysis still hasn't changed.
I also ran this against the URL (phoenyx2:9200/search_dev/_analyze):

{
    "analyzer":"my_standard",
    "field":"stakeholderName",
    "text": "test't"
}

The response is:

{
    "tokens": [
        {
            "token": "test't",
            "start_offset": 0,
            "end_offset": 6,
            "type": "<ALPHANUM>",
            "position": 0
        }
    ]
}

I was expecting the returned token to be "testt".

When you re-create the index, defining the new analyzer in the settings alone is not enough.

You also have to specify in the mappings which field uses which analyzer, e.g.:

   "mappings":{
       "properties":{
          "stakeholderName": {
             "type":"text",
             "analyzer":"my_analyzer", 
         },
      }
   }
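
For reference, here is a minimal sketch of what the full index-creation request could look like, combining the analysis settings from the question with the field-level analyzer (the index and field names are taken from the question; the exact shape of the request is an assumption, not the poster's actual call):

PUT search_dev
{
    "settings": {
        "analysis": {
            "char_filter": {
                "my_char_filter": {
                    "type": "mapping",
                    "mappings": [
                        "' => "
                    ]
                }
            },
            "analyzer": {
                "my_standard": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "char_filter": ["my_char_filter"],
                    "filter": ["lowercase"]
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "stakeholderName": {
                "type": "text",
                "analyzer": "my_standard"
            }
        }
    }
}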

Your mapping currently (probably) looks like this:

   "mappings":{
       "properties":{
          "stakeholderName": {
             "type":"text",
         },
      }
   }

Basically, if you run your "analyze" test again and remove the field:

{
    "analyzer":"my_standard",
    "text": "test't"
}

You will get:

{
    "tokens": [
        {
            "token": "testt",
            "start_offset": 0,
            "end_offset": 6,
            "type": "<ALPHANUM>",
            "position": 0
        }
    ]
}
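
Conversely, once the index has been re-created with my_standard assigned to stakeholderName in the mapping (as sketched above), the original field-based request from the question should return the same "testt" token, because _analyze with a "field" parameter resolves the analyzer from the mapping:

POST search_dev/_analyze
{
    "field": "stakeholderName",
    "text": "test't"
}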

As you might expect, this is the bad news, mate: you have to re-index all your data again, this time specifying in the mapping which analyzer you want for each field, otherwise Elasticsearch will default to its standard analyzer every time.
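
If you cannot simply delete and re-create the same index, one common pattern is to create a new index with the correct settings and mappings and copy the documents across with the _reindex API; the sketch below assumes a hypothetical new index name, search_dev_v2:

POST _reindex
{
    "source": {
        "index": "search_dev"
    },
    "dest": {
        "index": "search_dev_v2"
    }
}

You can then point your application (or an index alias) at the new index.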