OOM error even after clearing GPU Session

I am applying a CNN to a dataset of 4684 images of size 2000*102, using 5-fold cross-validation in Keras to record performance metrics. At the end of each fold I call del Conf_model, del history and K.clear_session(), but after two runs it throws an OOM error. See the algorithm below. Running on a 1080Ti with 11 GB of GPU memory; the PC has 32 GB of RAM.

kf = KFold(n_splits=5, shuffle=True)
kf.get_n_splits(data_new)

AUC_SCORES = []
KAPPA_SCORES = []
MSE = []
Accuracy = []
for train, test in kf.split(data_new):
    Conf_model = Sequential()
    Conf_model.add(Conv2D(32, (20, 102), activation='relu', input_shape=(img_rows, img_cols, 1), padding='same', data_format='channels_last'))
    Conf_model.add(MaxPooling2D((2, 2), padding='same'))
    Conf_model.add(Dropout(0.2))
    Conf_model.add(Flatten())     
    Conf_model.add(Dense(64, activation='relu'))  
    Conf_model.add(Dropout(0.5))        
    Conf_model.add(Dense(num_classes, activation='softmax'))
    Conf_model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adam(), metrics=['accuracy'])

    data_train = data_new[train]
    labels_train = labels[train]

    data_test = data_new[test]
    data_test_Len = len(data_test)
    data_train = data_train.reshape(data_train.shape[0],img_rows,img_cols,1)
    data_test = data_test.reshape(data_test.shape[0],img_rows,img_cols,1)
    data_train = data_train.astype('float32')
    data_test = data_test.astype('float32')
    labels_test = labels[test]
    test_lab = list(labels_test)
    labels_train = to_categorical(labels_train,num_classes)
    labels_test_Shot = to_categorical(labels_test,num_classes)
    print("Running Fold")
    history = Conf_model.fit(data_train, labels_train, batch_size=batch_size,epochs=epochs,verbose=1)
    Conf_predicted_classes=Conf_model.predict(data_test)
    Conf_predict=Conf_model.predict_classes(data_test)
    Conf_Accuracy = accuracy_score(labels_test, Conf_predict)
    Conf_Mean_Square = mean_squared_error(labels_test, Conf_predict)
    Label_predict = list(Conf_predict)
    Conf_predicted_classes = np.argmax(np.round(Conf_predicted_classes),axis=1)
    Conf_Confusion = confusion_matrix(labels_test, Conf_predicted_classes)
    print(Conf_Confusion)
    Conf_AUC = roc_auc_score(labels_test, Conf_predict)
    print("AUC value for Conf Original Data: ", Conf_AUC)
    Conf_KAPPA = cohen_kappa_score(labels_test, Conf_predict)
    print("Kappa value for Conf Original Data: ", Conf_KAPPA)
    AUC_SCORES.append(Conf_AUC)
    KAPPA_SCORES.append(abs(Conf_KAPPA))
    MSE.append(Conf_Mean_Square)
    Accuracy.append(Conf_Accuracy)
    del history
    del Conf_model
    K.clear_session()

The following error is raised:

ResourceExhaustedError: OOM when allocating tensor with shape[1632000,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
     [[{{node training/Adam/gradients/dense_1/MatMul_grad/MatMul_1}} = MatMul[T=DT_FLOAT, transpose_a=true, transpose_b=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](flatten_1/Reshape, training/Adam/gradients/dense_1/Relu_grad/ReluGrad)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

I tried the code below, and it seems to work.

  def clear_mem():
      try:
          tf.sess.close()
      except Exception:
          pass
      sess = tf.InteractiveSession()
      K.set_session(sess)
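A common, fuller cleanup (a sketch, not verified against this exact setup) is to drop the Python references first, then clear the Keras session and force a garbage-collection pass, since K.clear_session() alone can leave graph objects alive while anything still references them:

```python
import gc

def clear_mem():
    """Clear the Keras/TF graph and force a GC pass.

    Call this after `del Conf_model` / `del history` at the end of each
    fold. Assumes a TensorFlow-backed Keras install; the clear_session
    call is skipped gracefully if Keras is not importable.
    """
    try:
        from keras import backend as K
        K.clear_session()
    except ImportError:
        pass
    gc.collect()  # sweep up graph objects that just lost their last reference
```

If per-fold memory still grows after this, the process-per-fold suggestions below are the more reliable route, since the driver returns all GPU memory when a process exits.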

Update with some suggestions based on the comments:

1) Create a bash script that launches the python scripts individually (once a process terminates, its memory is released) and have them write their results to separate files, which you can process and join together later. For example, have the bash script iterate and pass 1) a seed and 2) the current fold index to the python script. With the seed you can make sure the fold splits don't leak, and with the index you can grab just the relevant part.
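The crux of 1) is that every invocation must rebuild the same split from the seed. A sketch of that piece (pure stdlib here to keep it dependency-free; `KFold(n_splits, shuffle=True, random_state=seed)` from sklearn behaves analogously, and the hypothetical per-fold script names are illustrative):

```python
import random

def fold_indices(n_samples, n_splits, fold, seed):
    """Recreate the same shuffled K-fold split in every process:
    the seed fixes the permutation, the fold index picks the test slice."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # identical order for a fixed seed
    fold_size = n_samples // n_splits
    start = fold * fold_size
    # last fold absorbs the remainder so every sample lands in exactly one test set
    stop = start + fold_size if fold < n_splits - 1 else n_samples
    test = idx[start:stop]
    train = idx[:start] + idx[stop:]
    return train, test

# Each run of the (hypothetical) per-fold script would then do roughly:
#   train, test = fold_indices(4684, 5, fold=int(sys.argv[2]), seed=int(sys.argv[1]))
#   ...build the model, fit on data_new[train], evaluate on data_new[test]...
#   ...write the metrics to e.g. a per-fold results file...
```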

2) Use python multiprocessing and have the worker processes return the results.
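A minimal sketch of 2), running each fold in its own short-lived process so the GPU memory is released when the child exits (`train_one_fold` is a placeholder for the fit/evaluate code above, and in a real script keras would be imported inside the worker so the GPU context lives and dies with the child):

```python
import multiprocessing as mp

def train_one_fold(fold):
    """Placeholder for building, fitting and scoring the model on one fold.
    The real version would import keras here, rebuild the split from the
    seed, train, and return the fold's metrics."""
    return {"fold": fold, "accuracy": 0.9}  # stand-in metric

def run_all_folds(n_splits=5):
    results = []
    for fold in range(n_splits):
        # one process per fold; all GPU memory is returned to the
        # driver when each child exits
        with mp.Pool(processes=1) as pool:
            results.append(pool.apply(train_one_fold, (fold,)))
    return results
```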

I would recommend approach 1), having used tensorflow with python multiprocessing before: I ran into a lot of pitfalls while implementing it.

Do these approaches make sense?