Elasticsearch edge_ngram token_chars whitespace
Can someone explain how the Elasticsearch edge_ngram tokenizer's token_chars whitespace class works? token_chars is supposed to define which characters are kept inside a token. So what does it mean if I use token_chars: ['letter', 'digit', 'whitespace'] on the phrase "2 red foxes"? As I understand it, it will generate tokens like ['2', '2 ', '2 r', '2 re', '2 red']. Am I right that the tokens are not split on whitespace, but the whitespace is instead included in the tokens?
Thanks
In general you are right, but it also depends on the min_gram and max_gram parameter values. Because letter, digit, and whitespace are all listed in token_chars, nothing in "2 red foxes" acts as a split character, so every token is a prefix of the whole string whose length is bounded by min_gram and max_gram. See the examples below.
Example 1. min_gram = 2 and max_gram = 10
Mapping
PUT /so54090009
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit",
            "whitespace"
          ]
        }
      }
    }
  }
}
Analysis
POST /so54090009/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 red foxes"
}
Response
{
  "tokens": [
    {
      "token": "2 ",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 0
    },
    {
      "token": "2 r",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 1
    },
    {
      "token": "2 re",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 2
    },
    {
      "token": "2 red",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 3
    },
    {
      "token": "2 red ",
      "start_offset": 0,
      "end_offset": 6,
      "type": "word",
      "position": 4
    },
    {
      "token": "2 red f",
      "start_offset": 0,
      "end_offset": 7,
      "type": "word",
      "position": 5
    },
    {
      "token": "2 red fo",
      "start_offset": 0,
      "end_offset": 8,
      "type": "word",
      "position": 6
    },
    {
      "token": "2 red fox",
      "start_offset": 0,
      "end_offset": 9,
      "type": "word",
      "position": 7
    },
    {
      "token": "2 red foxe",
      "start_offset": 0,
      "end_offset": 10,
      "type": "word",
      "position": 8
    }
  ]
}
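Note that the longest token above is "2 red foxe", exactly 10 characters: once max_gram is reached, no longer edge n-grams are emitted, so the full 11-character phrase "2 red foxes" never becomes a token with these settings. As a quick, hypothetical sketch (the index name and the max_gram value below are my assumptions, not part of the original example), raising max_gram to 11 should add the whole phrase as the final token:
# Hypothetical index: identical to the mapping above, only max_gram raised from 10 to 11
PUT /so54090009_maxgram11
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 11,
          "token_chars": [
            "letter",
            "digit",
            "whitespace"
          ]
        }
      }
    }
  }
}

# Expected: the same tokens as above plus a final token "2 red foxes"
POST /so54090009_maxgram11/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 red foxes"
}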
Example 2. min_gram = 1 and max_gram = 5
Mapping
PUT /so54090009
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 5,
          "token_chars": [
            "letter",
            "digit",
            "whitespace"
          ]
        }
      }
    }
  }
}
Analysis
POST /so54090009/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 red foxes"
}
Response
{
  "tokens": [
    {
      "token": "2",
      "start_offset": 0,
      "end_offset": 1,
      "type": "word",
      "position": 0
    },
    {
      "token": "2 ",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 1
    },
    {
      "token": "2 r",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 2
    },
    {
      "token": "2 re",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 3
    },
    {
      "token": "2 red",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 4
    }
  ]
}
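For contrast, and to answer the original question directly: whitespace in token_chars does not split tokens on spaces, it keeps the spaces inside the edge n-grams. If whitespace is removed from token_chars, the space becomes a split character and edge n-grams are generated per word instead of across the whole phrase. A hypothetical sketch (the index name and settings below are my assumptions, not part of the original answer):
# Hypothetical index: same as Example 2 but without "whitespace" in token_chars
PUT /so54090009_nospace
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 5,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}

# Expected tokens: "2", "r", "re", "red", "f", "fo", "fox", "foxe", "foxes"
POST /so54090009_nospace/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 red foxes"
}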