CNTK out of memory error when model.fit() is called a second time
I am using Keras with the CNTK backend.
My code looks like this:
def run_han(embeddings_index, fname, opt, class_weight):
    ...
    sentence_input = Input(shape=(MAX_SENT_LENGTH,), dtype='int32')
    embedded_sequences = embedding_layer(sentence_input)
    l_lstm = Bidirectional(GRU(GRU_UNITS, return_sequences=True, kernel_regularizer=l2_reg,
                               implementation=GPU_IMPL))(embedded_sequences)
    l_att = AttLayer(regularizer=l2_reg)(l_lstm)
    sentEncoder = Model(sentence_input, l_att)

    review_input = Input(shape=(MAX_SENTS, MAX_SENT_LENGTH), dtype='int32')
    review_encoder = TimeDistributed(sentEncoder)(review_input)
    l_lstm_sent = Bidirectional(GRU(GRU_UNITS, return_sequences=True, kernel_regularizer=l2_reg,
                                    implementation=GPU_IMPL))(review_encoder)
    l_att_sent = AttLayer(regularizer=l2_reg)(l_lstm_sent)
    preds = Dense(n_classes, activation='softmax', kernel_regularizer=l2_reg)(l_att_sent)
    model = Model(review_input, preds)
    model.compile(loss='categorical_crossentropy',
                  optimizer=opt,  # SGD(lr=0.1, nesterov=True),
                  metrics=['acc'])
    ...
    model.fit(x_train[ind, :, :], y_train[ind, :], epochs=NUM_EPOCHS, batch_size=BATCH_SIZE,
              shuffle=False, callbacks=[cr_result, history, csv_logger],
              verbose=2, validation_data=(x_test, y_test), class_weight=class_weight)
    ...
    %xdel model
    gc.collect()
I call the model above multiple times, changing the optimizer each time, like this:
opt = optimizers.RMSprop(lr=0.0001, rho=0.9, epsilon=1e-08, decay=0.0, clipvalue=0.5)
run_han(embeddings_index, 'w2v_100_all_rms_cw', opt, class_weight)
opt = optimizers.Adadelta(lr=1.0, rho=0.95, epsilon=1e-08, decay=0.0, clipvalue=0.5)
run_han(embeddings_index, 'w2v_100_all_adadelta_cw', opt, class_weight)
opt = optimizers.Adagrad(lr=0.01, epsilon=1e-08, decay=0.0, clipvalue=0.5)
run_han(embeddings_index, 'w2v_100_all_adagrad_cw', opt, class_weight)
When model.fit() is called the second time, it shows an out-of-memory error:
RuntimeError: CUDA failure 2: out of memory ; GPU=0 ; hostname=USER-PC ; expr=cudaMalloc((void**) &deviceBufferPtr, sizeof(AllocatedElemType) * AsMultipleOf(numElements, 2))
[CALL STACK]
> Microsoft::MSR::CNTK::CudaTimer:: Stop
- Microsoft::MSR::CNTK::CudaTimer:: Stop (x2)
- Microsoft::MSR::CNTK::GPUMatrix<float>:: Resize
- Microsoft::MSR::CNTK::Matrix<float>:: Resize
- Microsoft::MSR::CNTK::DataTransferer:: operator= (x4)
- CNTK::Internal:: UseSparseGradientAggregationInDataParallelSGD
- Microsoft::MSR::CNTK::DataTransferer:: operator=
- CNTK::Internal:: UseSparseGradientAggregationInDataParallelSGD
- CNTK::Function:: Forward
- CNTK:: CreateTrainer
- CNTK::Trainer:: TotalNumberOfSamplesSeen
- CNTK::Trainer:: TrainMinibatch
I thought the memory from the first run was not being released from the GPU, so I added this after model.fit():

%xdel model
gc.collect()

However, the error stays the same. I cannot figure out the cause: is it my Keras code or CNTK?
(GTX 1080 Ti, Windows 7, Python 2.7, CNTK 2.2, Jupyter)
This is a very annoying issue: for some reason the compiled model is not garbage collected properly, so even after you run the garbage collector, the compiled model still occupies the GPU. To get around this, you can try the suggested workaround (TL;DR: run the training in a separate process — when that process finishes, its memory is cleared).
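A minimal sketch of that workaround, assuming the training logic is moved into a worker function (the names `train_one` and `run_in_subprocess` are hypothetical, and the real body of `train_one` would contain the model building and `model.fit()` call from `run_han` above). All GPU memory CNTK allocates lives in the child process and is reclaimed by the OS when the process exits, regardless of Python's garbage collector:

```python
import multiprocessing as mp

def train_one(opt_name, queue):
    # Hypothetical worker: build, compile and fit the model here, exactly
    # as in run_han(). Any GPU allocations made by the CNTK backend belong
    # to this child process and are freed when the process terminates.
    # model = build_model(...); model.fit(...)
    result = 'finished %s' % opt_name  # stand-in for real training metrics
    queue.put(result)

def run_in_subprocess(opt_name):
    queue = mp.Queue()
    p = mp.Process(target=train_one, args=(opt_name, queue))
    p.start()
    result = queue.get()  # read results before joining to avoid a full-pipe deadlock
    p.join()              # process exit releases all of its GPU memory
    return result

if __name__ == '__main__':
    # One optimizer per process, mirroring the three run_han() calls above.
    for opt_name in ['rmsprop', 'adadelta', 'adagrad']:
        print(run_in_subprocess(opt_name))
```

Each call then starts from a clean GPU state, so the second and third runs no longer inherit the first run's allocations. Note that under Jupyter on Windows, `multiprocessing` requires the worker function to be importable (defined in a module, not only in a notebook cell).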