How to create ngrams in only forward direction in Elasticsearch?
Is it possible to create ngrams like this:

homework -> ho, hom, home, homew, homewo, homewor, homework

i.e. only in the forward direction? Currently all possible combinations are being created.
The edge_ngram tokenizer first breaks text down into words whenever it
encounters one of a list of specified characters, then it emits
N-grams of each word where the start of the N-gram is anchored to the
beginning of the word.
Refer to the official documentation for a detailed explanation of edge n-grams.
Index mapping:
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}
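As a minimal, hedged sketch of how these settings might be put to use (the index name my_index, the field title, and the added lowercase filter are assumptions for illustration, not part of the question): the index is created with the analyzer above, and a text field uses it at index time while searching with the plain standard analyzer, so the query string is not edge-ngrammed again.

PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer",
          "filter": ["lowercase"]
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": ["letter", "digit"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}

Setting search_analyzer to standard is the usual pattern for search-as-you-type fields: only the indexed tokens are forward-anchored prefixes, while the query text is left whole.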
Analyze API
The following request generates these tokens:
{
  "analyzer": "my_analyzer",
  "text": "Homework"
}
tokens": [
{
"token": "Ho",
"start_offset": 0,
"end_offset": 2,
"type": "word",
"position": 0
},
{
"token": "Hom",
"start_offset": 0,
"end_offset": 3,
"type": "word",
"position": 1
},
{
"token": "Home",
"start_offset": 0,
"end_offset": 4,
"type": "word",
"position": 2
},
{
"token": "Homew",
"start_offset": 0,
"end_offset": 5,
"type": "word",
"position": 3
},
{
"token": "Homewo",
"start_offset": 0,
"end_offset": 6,
"type": "word",
"position": 4
},
{
"token": "Homewor",
"start_offset": 0,
"end_offset": 7,
"type": "word",
"position": 5
},
{
"token": "Homework",
"start_offset": 0,
"end_offset": 8,
"type": "word",
"position": 6
}
]
}
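Assuming an index built with the sketch shown earlier (again, my_index, the title field, and the sample document are illustrative assumptions), a partial forward input such as "homew" would match a document containing "Homework", because the stored prefixes ho, hom, ..., homework include it:

PUT my_index/_doc/1
{
  "title": "Homework"
}

GET my_index/_search
{
  "query": {
    "match": {
      "title": "homew"
    }
  }
}

A mid-word fragment such as "work" would not match, since the edge_ngram tokenizer only emits ngrams anchored to the beginning of each word.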