Error while embedding: could not convert string to float: 'ng'
I am working with pre-trained word vectors using the GloVe approach. The data contains vectors trained on Wikipedia. While embedding the data I get an error saying it could not convert string to float: 'ng'.
I tried browsing through the data, but I could not find the token 'ng'.
import numpy as np

# load embedding as a dict
def load_embedding(filename):
    # load the embedding file into memory
    file = open(filename, 'r', errors='ignore')
    # create a map of words to vectors
    embedding = dict()
    for line in file:
        parts = line.split()
        # key is the string word, value is a numpy array for its vector
        embedding[parts[0]] = np.array(parts[1:], dtype='float32')
    file.close()
    return embedding
This is the error report. Please guide me further.
runfile('C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py', wdir='C:/Users/AKSHAY/Desktop/NLP')
C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Traceback (most recent call last):
File "<ipython-input-1-d91aa5ebf9f8>", line 1, in <module>
runfile('C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py', wdir='C:/Users/AKSHAY/Desktop/NLP')
File "C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
execfile(filename, namespace)
File "C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py", line 123, in <module>
raw_embedding = load_embedding('glove.6B.50d.txt')
File "C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py", line 67, in load_embedding
embedding[parts[0]] = np.array(parts[1:], dtype='float32')
ValueError: could not convert string to float: 'ng'
It looks like 'ng' is a word (token) in your file for which you are trying to get a word vector. The GloVe pre-trained vectors may simply not contain a vector for 'ng', which is what causes the error. So you need to check whether a word actually has a vector in the GloVe embedding before you use it. For an example of how to do this, see the section labelled 'Create a weight matrix for words in training docs' in this post - Text Classification Using CNN, LSTM and Pre-trained Glove Word Embeddings: Part-3
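A minimal sketch of that check (not the linked post's exact code): build_weight_matrix, word_index and dim are illustrative names here. word_index is assumed to be the dict from a fitted Keras Tokenizer, raw_embedding is the dict returned by load_embedding(), and dim=50 matches glove.6B.50d.txt. Words with no GloVe vector simply keep a zero row:

import numpy as np

def build_weight_matrix(word_index, raw_embedding, dim=50):
    # one row per word id (Keras Tokenizer ids start at 1), initialised to zeros
    weight_matrix = np.zeros((len(word_index) + 1, dim))
    for word, i in word_index.items():
        vector = raw_embedding.get(word)  # None when the word has no GloVe vector
        if vector is not None:
            weight_matrix[i] = vector
    return weight_matrix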
ValueError: could not convert string to float: 'ng'
For the error above, add encoding='utf8' to the open call in the function, like this:
file = open(filename, 'r', errors='ignore', encoding='utf8')
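With that change the loader looks like this (a minimal sketch, assuming a standard GloVe text file with one word followed by its float components per line):

import numpy as np

def load_embedding(filename):
    embedding = dict()
    # open with an explicit encoding so tokens are not split mid-character
    with open(filename, 'r', encoding='utf8', errors='ignore') as file:
        for line in file:
            parts = line.split()
            # key is the word, value is a numpy array for its vector
            embedding[parts[0]] = np.array(parts[1:], dtype='float32')
    return embedding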
This seems to work fine:
import numpy as np

embedding_model = {}
f = open(r'dataset/glove.840B.300d.txt', 'r', encoding='utf8')
for line in f:
    values = line.split()
    # everything before the last 300 fields is the word itself
    word = ''.join(values[:-300])
    # the last 300 fields are the float components of the vector
    coefs = np.asarray(values[-300:], dtype='float32')
    embedding_model[word] = coefs
f.close()
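As a quick check once loading finishes (a hedged usage sketch; 'king' is just an illustrative token that the 840B vocabulary should contain):

# look up a single word's vector and confirm its dimensionality
vector = embedding_model.get('king')
if vector is not None:
    print(vector.shape)  # (300,) for glove.840B.300d.txt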