TfidfVectorizer model loaded from joblib file only works when trained in same session
sklearn's TfidfVectorizer only works when it is applied immediately after training, when the analyzer returns a list of nltk.tree.Tree objects. This is a mystery, because the model is always loaded from a file before being applied. Debugging shows no errors and no apparent difference between loading the model at the start of its own session and training it within that same session. In both cases the analyzer is called and works correctly.
Here is a script that helps reproduce the mysterious behavior:
import joblib
import numpy as np
from nltk import Tree
from sklearn.feature_extraction.text import TfidfVectorizer

def lexicalized_production_analyzer(sentence_trees):
    productions_per_sentence = [tree.productions() for tree in sentence_trees]
    return np.concatenate(productions_per_sentence)

def train(corpus):
    model = TfidfVectorizer(analyzer=lexicalized_production_analyzer)
    model.fit(corpus)
    joblib.dump(model, "model.joblib")

def apply(corpus):
    model = joblib.load("model.joblib")
    result = model.transform(corpus)
    return result
# example data
trees = [Tree('ROOT', [Tree('FRAG', [Tree('S', [Tree('VP', [Tree('VBG', ['arkling']), Tree('NP', [Tree('NP', [Tree('NNS', ['dots'])]), Tree('VP', [Tree('VBG', ['nestling']), Tree('PP', [Tree('IN', ['in']), Tree('NP', [Tree('DT', ['the']), Tree('NN', ['grass'])])])])])])]), Tree(',', [',']), Tree('VP', [Tree('VBG', ['winking']), Tree('CC', ['and']), Tree('VP', [Tree('VBG', ['glimmering']), Tree('PP', [Tree('IN', ['like']), Tree('NP', [Tree('NNS', ['jewels'])])])])]), Tree('.', ['.'])])]),
Tree('ROOT', [Tree('FRAG', [Tree('NP', [Tree('NP', [Tree('NNP', ['Rose']), Tree('NNS', ['petals'])]), Tree('NP', [Tree('NP', [Tree('ADVP', [Tree('RB', ['perhaps'])]), Tree(',', [',']), Tree('CC', ['or']), Tree('NP', [Tree('DT', ['some'])]), Tree('NML', [Tree('NN', ['kind'])])]), Tree('PP', [Tree('IN', ['of']), Tree('NP', [Tree('NN', ['confetti'])])])])]), Tree('.', ['.'])])])]
corpus = [trees, trees, trees]
First, train the model and save the model.joblib file.
train(corpus)
result = apply(corpus)
print("number of elements in results: " + str(result.getnnz()))
print("shape of results: " + str(result.shape))
We print the number of elements in the result with .getnnz() to show that the model is processing 120 elements:
number of elements in results: 120
shape of results: (3, 40)
Then restart Python and re-apply the model to the same corpus, without training.
result = apply(corpus)
print("number of elements in results: " + str(result.getnnz()))
print("shape of results: " + str(result.shape))
You will see that zero elements are stored this time.
number of elements in results: 0
shape of results: (3, 40)
But the model is loaded from a file both times, and there are no global variables (that I know of), so we cannot figure out why it works in one case but not in the other.

Thanks for your help!
OK, I did some very deep digging. If you inspect the Production class that you are implicitly using through the Tree structure, it turns out that instances store a _hash value when the object is created. However, Python's hash function is non-deterministic across runs, which means this value may not be consistent from one run to the next. As a result, the hash is pickled along with the model by joblib instead of being re-calculated as it should be, so this appears to be a bug in nltk. When the model is reloaded, it cannot find the production rules because their hashes no longer match, just as if the productions had never been stored in the vocabulary.

Quite tricky!
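The across-run instability is easy to see without nltk at all. Here is a minimal sketch (the helper name hash_in_fresh_interpreter is mine, not from any library) that spawns fresh interpreters and compares hash('NP') under different hash seeds:

```python
import os
import subprocess
import sys

def hash_in_fresh_interpreter(seed):
    # Report hash('NP') from a brand-new Python process started
    # with the given PYTHONHASHSEED value.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('NP'))"], env=env, text=True
    )
    return int(out)

# With randomization left on (the default), two runs almost always disagree:
a = hash_in_fresh_interpreter("random")
b = hash_in_fresh_interpreter("random")
print(a, b, a == b)

# With a fixed seed, every run agrees:
assert hash_in_fresh_interpreter("0") == hash_in_fresh_interpreter("0")
```

Any cached hash('NP') written to disk in one "random" run is therefore meaningless in the next one.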
Until this particular nltk bug is fixed, setting PYTHONHASHSEED before running both the training and the inference scripts will force the hashes to be the same every time:
PYTHONHASHSEED=0 python script.py
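Why a fixed seed helps can also be sketched without nltk. The hypothetical CachedHash class below (my stand-in, not nltk code) mimics Production by caching hash() at construction time; once the cached value disagrees with the hash the current run would compute, dict lookups silently fail, which is exactly the vocabulary lookup failure above:

```python
import pickle

class CachedHash:
    """Minimal stand-in for nltk's Production: caches hash() at creation."""
    def __init__(self, symbol):
        self.symbol = symbol
        self._hash = hash(symbol)  # baked in, and pickled along with the object
    def __eq__(self, other):
        return isinstance(other, CachedHash) and self.symbol == other.symbol
    def __hash__(self):
        return self._hash  # never re-calculated after unpickling

key = CachedHash("NP -> DT NN")
key._hash ^= 0xDEADBEEF        # simulate a hash computed under another seed
vocab = {key: 0}               # dict buckets the entry under the stale hash
reloaded = pickle.loads(pickle.dumps(vocab))

# A lookup in *this* run computes the current hash, probes the wrong
# bucket, and never finds the stored entry:
assert CachedHash("NP -> DT NN") not in reloaded

# With consistent hashes (the PYTHONHASHSEED=0 situation) the lookup works:
assert CachedHash("NP -> DT NN") in {CachedHash("NP -> DT NN"): 0}
```

With PYTHONHASHSEED fixed, the cached _hash computed at training time equals the hash recomputed at inference time, so the vocabulary lookups succeed again.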