How to stack LSTM layers to classify speech files
I have been trying to implement an LSTM-based classifier to classify discrete speech. I created feature vectors with 13 MFCCs, so for a given file the 2-D feature matrix is [99, 13]. Following the mnist_irnn example, I was able to set up a single-layer RNN to classify my speech files. Now I want to add more layers to the network, so I have been trying to build a network with two LSTM layers and a softmax layer as the output layer. After going through several posts here, I was able to set up the network as follows, and it does not throw any exceptions during model construction.
from __future__ import print_function
import numpy as np
from keras.optimizers import SGD
from keras.utils.visualize_util import plot
np.random.seed(1337) # for reproducibility
from keras.preprocessing import sequence
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, TimeDistributedDense
from keras.layers.recurrent import LSTM
from SpeechResearch import loadData
batch_size = 5
hidden_units = 100
nb_classes = 10
print('Loading data...')
(X_train, y_train), (X_test, y_test) = loadData.load_mfcc(10, 2)
print(len(X_train), 'train sequences')
print(len(X_test), 'test sequences')
print('X_train shape:', X_train.shape)
print('X_test shape:', X_test.shape)
print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)
print('Build model...')
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
print(batch_size, 99, X_train.shape[2])
print(X_train.shape[1:])
print(X_train.shape[2])
model = Sequential()
model.add(LSTM(output_dim=hidden_units, init='uniform', inner_init='uniform',
               forget_bias_init='one', activation='tanh', inner_activation='sigmoid', return_sequences=True,
               stateful=True, batch_input_shape=(batch_size, 99, X_train.shape[2])))
# model.add(Dropout(0.5))
model.add(LSTM(output_dim=hidden_units, init='uniform', inner_init='uniform',
               forget_bias_init='one', activation='tanh', inner_activation='sigmoid', return_sequences=True,
               stateful=True, input_length=X_train.shape[2]))
model.add(TimeDistributedDense(input_dim=hidden_units, output_dim=nb_classes))
model.add(Activation('softmax'))
# try using different optimizers and different optimizer configs
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
print("Train...")
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=3, validation_data=(X_test, Y_test), show_accuracy=True)
score, acc = model.evaluate(X_test, Y_test,
                            batch_size=batch_size,
                            show_accuracy=True)
print('Test score:', score)
print('Test accuracy:', acc)
I have been trying different values at different points. (For now I am experimenting with a small sample, so the values are quite small.) However, it now throws an exception during training: some dimensions do not match.
Using Theano backend.
Loading data...
100 train sequences
20 test sequences
X_train shape: (100, 99, 13)
X_test shape: (20, 99, 13)
y_train shape: (100,)
y_test shape: (20,)
Build model...
5 99 13
(99, 13)
13
Train...
Train on 100 samples, validate on 20 samples
Epoch 1/3
Traceback (most recent call last):
File "/home/udani/PycharmProjects/testResearch/SpeechResearch/lstmNetwork.py", line 54, in <module>
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=3, validation_data=(X_test, Y_test), show_accuracy=True)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 581, in fit
shuffle=shuffle, metrics=metrics)
File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 239, in _fit
outs = f(ins_batch)
File "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py", line 365, in __call__
return self.function(*inputs)
File "/home/udani/Documents/ResearchSW/Theano/theano/compile/function_module.py", line 786, in __call__
allow_downcast=s.allow_downcast)
File "/home/udani/Documents/ResearchSW/Theano/theano/tensor/type.py", line 177, in filter
data.shape))
TypeError: ('Bad input argument to theano function with name "/usr/local/lib/python2.7/dist-packages/keras/backend/theano_backend.py:362" at index 1(0-based)', 'Wrong number of dimensions: expected 3, got 2 with shape (5, 10).')
I would like to know what I am doing wrong here. I have been going through the code all day, but I still cannot figure out the reason for the dimension mismatch.
Also, I would greatly appreciate it if someone could explain what output_dim means. (Is it the shape of the vector output by a single node, when we have n nodes in a given layer? Should it be equal to the number of nodes in the next layer?)
There is a problem with your Y dimensions: the output should be something like (100, 99, 10), i.e. a batch of output sequences, just like the features, but with one output per timestep. Your Y vector is clearly different. The to_categorical method does not work on sequences; it expects a vector.
Alternatively, you can output a single vector by setting return_sequences=False on the last LSTM layer and feeding its output into a Dense layer.
You also do not need a stateful network.
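To illustrate the first option, here is a minimal numpy sketch (the helper to_categorical_seq is hypothetical, not part of Keras): it one-hot encodes the integer labels and then repeats each label across all 99 timesteps, producing targets of shape (100, 99, 10) that match a final layer with return_sequences=True.

```python
import numpy as np

def to_categorical_seq(y, nb_classes, timesteps):
    """One-hot encode integer labels, then repeat each label across
    every timestep so the targets match a return_sequences=True model.
    Result shape: (n_samples, timesteps, nb_classes)."""
    one_hot = np.eye(nb_classes)[y]                          # (n_samples, nb_classes)
    return np.repeat(one_hot[:, np.newaxis, :], timesteps, axis=1)

# Mimic the question's label vector: 100 samples, 10 classes, 99 timesteps
y_train = np.random.randint(0, 10, size=100)
Y_train_seq = to_categorical_seq(y_train, nb_classes=10, timesteps=99)
print(Y_train_seq.shape)  # (100, 99, 10)
```

With the second option (return_sequences=False plus a Dense layer), the plain (100, 10) output of to_categorical is already the right shape, so no conversion is needed.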