Higher weightage to Prefix
Is there any method or distance measure that assigns a higher weight to the prefix when computing similarity? I know of the Jaro-Winkler method, but its application is limited to characters. I am looking for similarity between words.
A <- data.frame(name = c(
"X-ray right leg arteries",
"Rgraphy left shoulder",
"x-ray leg arteries",
"x-ray leg with 20km distance"
), stringsAsFactors = F)
B <- data.frame(name = c(
"X-ray left leg arteries",
"Rgraphy right shoulder",
"X-ray left shoulder",
"Rgraphy right leg arteries"
), stringsAsFactors = F)
library(quanteda)
corp1 <- corpus(A, text_field = "name")
corp2 <- corpus(B, text_field = "name")
docnames(corp1) <- paste("A", seq_len(ndoc(corp1)), sep = ".")
docnames(corp2) <- paste("B", seq_len(ndoc(corp2)), sep = ".")
dtm3 <- rbind(dfm(corp1, ngrams = 1:2), dfm(corp2, ngrams = 1:2))
d2 <- textstat_simil(dtm3, method = "cosine", diag = TRUE)
as.matrix(d2)[docnames(corp1), docnames(corp2)]
I want "X-ray right leg arteries" in dataframe A to map to "X-ray left leg arteries" in dataframe B rather than to "Rgraphy right leg arteries". By that I mean the similarity score between "X-ray right leg arteries" and "X-ray left leg arteries" should be higher than the score between "X-ray right leg arteries" and "Rgraphy right leg arteries".
Similarly, I want "Rgraphy left shoulder" to map to "Rgraphy right shoulder" rather than to "X-ray left shoulder". The above is just an example; in practice I have a large list that is not limited to "X-ray" and "Rgraphy", so I do not want to filter on "X-ray" and "Rgraphy" before computing similarity. The approach should be algorithmic.
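One way to express "higher weight to the prefix" at the word level is a position-weighted cosine similarity, where each token's weight decays with its position, so a shared first word counts more than shared later words. Below is a minimal base-R sketch; the function name `prefix_weighted_cosine` and the decay factor of 0.5 are illustrative choices, not an established method.

```r
# Position-weighted cosine similarity between two strings, token by token.
# Tokens earlier in the string receive geometrically larger weights
# (decay^0, decay^1, ...), so matches in the prefix dominate the score.
prefix_weighted_cosine <- function(s1, s2, decay = 0.5) {
  w1 <- tolower(strsplit(s1, "\\s+")[[1]])
  w2 <- tolower(strsplit(s2, "\\s+")[[1]])
  vocab <- union(w1, w2)
  # build a weight vector over the shared vocabulary
  vec <- function(words) {
    v <- setNames(numeric(length(vocab)), vocab)
    for (i in seq_along(words)) v[words[i]] <- v[words[i]] + decay^(i - 1)
    v
  }
  v1 <- vec(w1)
  v2 <- vec(w2)
  sum(v1 * v2) / (sqrt(sum(v1^2)) * sqrt(sum(v2^2)))
}

# Shared prefix "x-ray" scores higher than shared later words:
prefix_weighted_cosine("X-ray right leg arteries", "X-ray left leg arteries")
prefix_weighted_cosine("X-ray right leg arteries", "Rgraphy right leg arteries")
```

With the default decay of 0.5, the first pair scores roughly 0.81 and the second roughly 0.25, which gives the ordering asked for. The answer below takes a different, dictionary-based route that does not depend on position at all.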
It sounds like you want to preserve certain diagnostic procedures as features, regardless of the exact wording used, so that these can form the basis for computing similarity between documents.
You can do this by defining the phrases in a dictionary and applying it before constructing the dfm. Here, I have expanded your texts a bit to include additional features.
A <- data.frame(text = c("Patient had X-ray right leg arteries.",
"Subject was administered Rgraphy left shoulder",
"Exam consisted of x-ray leg arteries",
"Patient administered x-ray leg with 20km distance."),
row.names = paste0("A", 1:4), stringsAsFactors = FALSE)
B <- data.frame(text = c("Patient had X-ray left leg arteries",
"Rgraphy right shoulder given to patient",
"X-ray left shoulder revealed nothing sinister",
"Rgraphy right leg arteries tested"),
row.names = paste0("A", 1:4), stringsAsFactors = FALSE)
Now we can define a dictionary whose entries match the phrases you want to treat as equivalent for computing similarity. In this example, it does not matter whether the X-ray was of the right leg or the left leg, or whether the side is left unspecified. Similarly, we do not care whether the "Rgraphy" procedure was specific to the left or the right shoulder. (Obviously, you will need to adjust and refine these based on what is actually in your texts and what you are willing to treat as equivalent.)
medicaldict <- dictionary(list(
xray_leg = c("X-ray right leg arteries", "x-ray left leg arteries",
"x-ray leg arteries"),
rgraphy_leg = c("Rgraphy right leg arteries", "Rgraphy left leg arteries"),
xray_shoulder = c("X-ray left shoulder", "X-ray right shoulder"),
rgraphy_shoulder = c("Rgraphy left shoulder", "Rgraphy right shoulder")
))
When we apply this to the tokens using tokens_lookup() in "non-exclusive" mode, the matched sequences are replaced by the dictionary keys. Note that because tokens_lookup() collapses the relevant token sequences into phrases, there is no longer any need to form token ngrams as in your question.
toks <- tokens(corpus(A) + corpus(B)) %>%
tokens_lookup(dictionary = medicaldict, exclusive = FALSE)
toks
# tokens from 8 documents.
# A1 :
# [1] "Patient" "had" "XRAY_LEG" "."
#
# A2 :
# [1] "Subject" "was" "administered" "RGRAPHY_SHOULDER"
#
# A3 :
# [1] "Exam" "consisted" "of" "XRAY_LEG"
#
# A4 :
# [1] "Patient" "administered" "x-ray" "leg" "with" "20km" "distance" "."
#
# A11 :
# [1] "Patient" "had" "XRAY_LEG"
#
# A21 :
# [1] "RGRAPHY_SHOULDER" "given" "to" "patient"
#
# A31 :
# [1] "XRAY_SHOULDER" "revealed" "nothing" "sinister"
#
# A41 :
# [1] "RGRAPHY_LEG" "tested"
Finally, we can compute document similarity based on the collapsed features rather than the original bag of words.
dfm(toks) %>%
textstat_simil(method = "cosine", diag = TRUE)
# A1 A2 A3 A4 A11 A21 A31
# A2 0.0000000
# A3 0.2500000 0.0000000
# A4 0.3535534 0.1767767 0.0000000
# A11 0.8660254 0.0000000 0.2886751 0.2041241
# A21 0.2500000 0.2500000 0.0000000 0.1767767 0.2886751
# A31 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
# A41 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000 0.0000000
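If the end goal is to map each entry of A to its single best match in B, you can take the A-by-B block of the similarity matrix and apply which.max() row-wise. A small base-R sketch on a toy block (the values below are made up for illustration, not taken from the output above):

```r
# Toy A-by-B similarity block: rows are documents of A, columns of B.
# The values are illustrative only.
sim <- matrix(c(0.81, 0.25, 0.10,
                0.05, 0.70, 0.40,
                0.30, 0.20, 0.90),
              nrow = 3, byrow = TRUE,
              dimnames = list(paste0("A", 1:3), paste0("B", 1:3)))

# For each A document, pick the name of the best-matching B document
best <- colnames(sim)[apply(sim, 1, which.max)]
names(best) <- rownames(sim)
best
#   A1   A2   A3
# "B1" "B2" "B3"
```

In the quanteda pipeline above, the same idea applies after subsetting `as.matrix(d2)` to the rows of A and the columns of B, as in the question's own code.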