Save and reuse TfidfVectorizer in scikit learn

I am using TfidfVectorizer in scikit-learn to create a matrix from text data. Now I need to save this object so I can reuse it later. I tried to use pickle, but it gave the following error:

loc=open('vectorizer.obj','w')
pickle.dump(self.vectorizer,loc)
*** TypeError: can't pickle instancemethod objects

I also tried joblib from sklearn.externals, which again gave a similar error. Is there any way to save this object so that I can reuse it later?

Here is my full class:

import pickle
import nltk
import pandas as pd

class changeToMatrix(object):
    def __init__(self, ngram_range=(1,1), tokenizer=StemTokenizer()):  # StemTokenizer is a custom tokenizer class
        from sklearn.feature_extraction.text import TfidfVectorizer
        self.vectorizer = TfidfVectorizer(ngram_range=ngram_range, analyzer='word', lowercase=True,
                                          token_pattern='[a-zA-Z0-9]+', strip_accents='unicode', tokenizer=tokenizer)

    def load_ref_text(self, text_file):
        textfile = open(text_file, 'r')
        lines = textfile.readlines()
        textfile.close()
        lines = ' '.join(lines)
        sent_tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
        sentences = [sent_tokenizer.tokenize(lines.strip())]
        sentences1 = [item.strip().strip('.') for sublist in sentences for item in sublist]
        chk2 = pd.DataFrame(self.vectorizer.fit_transform(sentences1).toarray())  # vectorizer is fitted in this step
        return sentences1, [chk2]

    def get_processed_data(self, data_loc):
        ref_sentences, ref_dataframes = self.load_ref_text(data_loc)
        loc = open("indexedData/vectorizer.obj", "w")
        pickle.dump(self.vectorizer, loc)  # getting the error here
        loc.close()
        return ref_sentences, ref_dataframes

First, it's better to leave the imports at the top of your code instead of inside your class:

from sklearn.feature_extraction.text import TfidfVectorizer
class changeToMatrix(object):
  def __init__(self,ngram_range=(1,1),tokenizer=StemTokenizer()):
    ...

Next, StemTokenizer doesn't seem to be a canonical class. You may have got it from http://sahandsaba.com/visualizing-philosophers-and-scientists-by-the-words-they-used-with-d3js-and-python.html or somewhere else, so we'll assume it is a callable that returns a list of strings:

from nltk import word_tokenize
from nltk.corpus import wordnet as wn

class StemTokenizer(object):
    def __init__(self):
        self.ignore_set = {'footnote', 'nietzsche', 'plato', 'mr.'}

    def __call__(self, doc):
        words = []
        for word in word_tokenize(doc):
            word = word.lower()
            w = wn.morphy(word)  # look up the word's base form in WordNet
            if w and len(w) > 1 and w not in self.ignore_set:
                words.append(w)
        return words
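
As a quick sanity check (assuming the NLTK punkt and wordnet data are downloaded), this callable does map a string to a list of strings, which is what TfidfVectorizer's tokenizer parameter expects; it should give something like:

>>> StemTokenizer()('Philosophers discussed ideas.')
['philosopher', 'discuss', 'idea']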

Now to your actual question: you probably need to open the file in byte mode before dumping the pickle, i.e.:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from nltk import word_tokenize
>>> import cPickle as pickle
>>> vectorizer = TfidfVectorizer(ngram_range=(0,2),analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode',tokenizer=word_tokenize)
>>> vectorizer
TfidfVectorizer(analyzer='word', binary=False, decode_error=u'strict',
        dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content',
        lowercase=True, max_df=1.0, max_features=None, min_df=1,
        ngram_range=(0, 2), norm=u'l2', preprocessor=None, smooth_idf=True,
        stop_words=None, strip_accents='unicode', sublinear_tf=False,
        token_pattern='[a-zA-Z0-9]+',
        tokenizer=<function word_tokenize at 0x7f5ea68e88c0>, use_idf=True,
        vocabulary=None)
>>> with open('vectorizer.pk', 'wb') as fin:
...     pickle.dump(vectorizer, fin)
... 
>>> exit()
alvas@ubi:~$ ls -lah vectorizer.pk 
-rw-rw-r-- 1 alvas alvas 763 Jun 15 14:18 vectorizer.pk

Note: when you use the with idiom for file I/O, the file is automatically closed once you leave the with scope.

Regarding your issue with SnowballStemmer(), note that SnowballStemmer('english') is an object, while the stemming function is SnowballStemmer('english').stem.
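
For example, a quick check of the stemming function:

>>> from nltk.stem import SnowballStemmer
>>> stemmer = SnowballStemmer('english')
>>> stemmer.stem('running')
'run'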

Important:

  • TfidfVectorizer's tokenizer parameter expects a callable that takes a string and returns a list of strings
  • But the Snowball stemmer does not take a string as input and return a list of strings; it maps a single word to its stem.

So you will need to wrap it like this:

>>> from sklearn.feature_extraction.text import TfidfVectorizer
>>> from nltk.stem import SnowballStemmer
>>> from nltk import word_tokenize
>>> import cPickle as pickle
>>> stemmer = SnowballStemmer('english').stem
>>> def stem_tokenize(text):
...     return [stemmer(i) for i in word_tokenize(text)]
... 
>>> vectorizer = TfidfVectorizer(ngram_range=(0,2),analyzer='word',lowercase=True, token_pattern='[a-zA-Z0-9]+',strip_accents='unicode',tokenizer=stem_tokenize)
>>> with open('vectorizer.pk', 'wb') as fin:
...     pickle.dump(vectorizer, fin)
...
>>> exit()
alvas@ubi:~$ ls -lah vectorizer.pk 
-rw-rw-r-- 1 alvas alvas 758 Jun 15 15:55 vectorizer.pk
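
One caveat on reuse: pickling the vectorizer before it is fitted (as above) only saves its configuration, not a vocabulary or idf weights. To reuse a trained TF-IDF model, fit it first, dump it, then load it back and call transform. A minimal sketch (the file name and toy sentences here are made up for illustration):

import cPickle as pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk import word_tokenize

vectorizer = TfidfVectorizer(analyzer='word', lowercase=True, tokenizer=word_tokenize)
vectorizer.fit(['the cat sat on the mat', 'the dog sat on the log'])  # fit before saving

with open('fitted_vectorizer.pk', 'wb') as fout:
    pickle.dump(vectorizer, fout)  # save the fitted model

with open('fitted_vectorizer.pk', 'rb') as fin:
    loaded = pickle.load(fin)  # load it back later

print(loaded.transform(['the cat sat on the log']).toarray())  # reuses the saved vocabulary and idf weights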