How to add InstanceNormalization on Tensorflow/keras
I am new to TensorFlow and Keras. I have been building a dilated ResNet and want to add instance normalization on one layer, but I can't, because it keeps throwing errors.
I am using tensorflow 1.15 and keras 2.1. I commented out the BatchNormalization part, which works; I tried adding instance normalization instead, but it cannot find the module.
Any advice is much appreciated.
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Nadam, Adam
from keras.layers import Input, Dense, Reshape, Activation, Flatten, Embedding, Dropout, Lambda, add, concatenate, Concatenate, ConvLSTM2D, LSTM, average, MaxPooling2D, multiply, MaxPooling3D
from keras.layers import GlobalAveragePooling2D, Permute
from keras.layers.advanced_activations import LeakyReLU, PReLU
from keras.layers.convolutional import UpSampling2D, Conv2D, Conv1D
from keras.models import Sequential, Model
from keras.utils import multi_gpu_model
from keras.utils.generic_utils import Progbar
from keras.constraints import maxnorm
from keras.activations import tanh, softmax
from keras import metrics, initializers, utils, regularizers
import tensorflow as tf
import numpy as np
import math
import os
import sys
import random
import keras.backend as K
epsilon = K.epsilon()
def basic_block_conv2D_norm_elu(filters, kernel_size, kernel_regularizer=regularizers.l2(1e-4),
                                act_func="elu", normalize="Instance", dropout=0.15,
                                strides=1, use_bias=True, kernel_initializer="he_normal",
                                _dilation_rate=0):
    def f(input):
        if kernel_regularizer is None:
            if _dilation_rate == 0:
                conv = Conv2D(filters=filters, kernel_size=kernel_size, strides=strides,
                              padding="same", use_bias=use_bias)(input)
            else:
                conv = Conv2D(filters=filters, kernel_size=kernel_size, strides=strides,
                              padding="same", use_bias=use_bias, dilation_rate=_dilation_rate)(input)
        else:
            if _dilation_rate == 0:
                conv = Conv2D(filters=filters, kernel_size=kernel_size, strides=strides,
                              kernel_initializer=kernel_initializer, padding="same", use_bias=use_bias,
                              kernel_regularizer=kernel_regularizer)(input)
            else:
                conv = Conv2D(filters=filters, kernel_size=kernel_size, strides=strides,
                              kernel_initializer=kernel_initializer, padding="same", use_bias=use_bias,
                              kernel_regularizer=kernel_regularizer, dilation_rate=_dilation_rate)(input)
        if dropout is not None:
            dropout_layer = Dropout(dropout)(conv)
        if normalize is None and dropout is not None:
            norm_layer = dropout_layer  # no normalization requested
        else:
            norm_layer = InstanceNormalization()(dropout_layer)  # this is the line that fails
            # norm_layer = BatchNormalization()(dropout_layer)  # this works
        return Activation(act_func)(norm_layer)
    return f
There is no such thing as InstanceNormalization() here. Keras does not have a separate layer for InstanceNormalisation. (That does not mean you cannot apply InstanceNormalisation.)
In Keras, we have the tf.keras.layers.BatchNormalization layer, which can be used to apply any kind of normalization.
The layer has the following signature:
tf.keras.layers.BatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.001,
    center=True,
    scale=True,
    beta_initializer="zeros",
    gamma_initializer="ones",
    moving_mean_initializer="zeros",
    moving_variance_initializer="ones",
    beta_regularizer=None,
    gamma_regularizer=None,
    beta_constraint=None,
    gamma_constraint=None,
    **kwargs
)
Now you can change the axis argument to produce an Instance normalisation layer, or any other type of normalization.
The formulas for BatchNormalization and Instance Normalization are as follows.
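In the usual notation (a standard reconstruction), for a channels-first input x of shape [B, C, H, W], with \gamma, \beta, \epsilon corresponding to the layer's scale, center, and epsilon parameters:

BatchNormalization computes one mean and variance per channel c, over the batch and spatial axes:

\mu_c = \frac{1}{BHW}\sum_{b,h,w} x_{b,c,h,w}, \qquad \sigma_c^2 = \frac{1}{BHW}\sum_{b,h,w} \left(x_{b,c,h,w} - \mu_c\right)^2, \qquad y_{b,c,h,w} = \gamma\,\frac{x_{b,c,h,w} - \mu_c}{\sqrt{\sigma_c^2 + \epsilon}} + \beta

InstanceNormalization computes one mean and variance per sample b and channel c, over the spatial axes only:

\mu_{b,c} = \frac{1}{HW}\sum_{h,w} x_{b,c,h,w}, \qquad \sigma_{b,c}^2 = \frac{1}{HW}\sum_{h,w} \left(x_{b,c,h,w} - \mu_{b,c}\right)^2, \qquad y_{b,c,h,w} = \gamma\,\frac{x_{b,c,h,w} - \mu_{b,c}}{\sqrt{\sigma_{b,c}^2 + \epsilon}} + \beta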
Now, suppose you have a channels-first implementation, i.e. [B, C, H, W].
If you want to compute BatchNormalisation, you need to pass the channel axis as the axis in the BatchNormalization() layer. In that case it will compute C means and standard deviations.
BatchNormalisation layer: tf.keras.layers.BatchNormalization(axis=1)
If you want to compute InstanceNormalisation, then simply set the axis to both the batch and channel axes. In that case it will compute B*C means and standard deviations.
InstanceNormalisation layer: tf.keras.layers.BatchNormalization(axis=[0, 1])
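A minimal sketch of this trick (assuming a TF 2.x-style tf.keras, where axis accepts a list of integers; shapes are illustrative):

import tensorflow as tf

# Illustrative channels-first batch: [B, C, H, W]
x = tf.random.normal([4, 3, 32, 32])

# axis=[0, 1] keeps the batch and channel axes, so the layer reduces over
# H and W and computes B*C means and standard deviations, as described above.
# Note: because axis includes the batch axis, the batch size must be fixed.
instance_norm = tf.keras.layers.BatchNormalization(axis=[0, 1])

# training=True makes the layer use the statistics of the current batch
# rather than its moving averages (see Update 1 below).
y = instance_norm(x, training=True)
print(y.shape)  # (4, 3, 32, 32)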
Update 1
When using BatchNormalization this way, if you want it to act as InstanceNormalisation, you must keep training=1, so that the layer normalizes with the current batch statistics instead of its moving averages.
Update 2
Alternatively, you can directly use the built-in InstanceNormalization layer from TensorFlow Addons, documented here:
https://www.tensorflow.org/addons/api_docs/python/tfa/layers/InstanceNormalization
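A minimal usage sketch (assuming TensorFlow 2.x with the tensorflow-addons package installed; shapes are illustrative):

import tensorflow as tf
import tensorflow_addons as tfa

# Illustrative channels-last batch: [B, H, W, C]
x = tf.random.normal([4, 32, 32, 3])

# axis selects the channel axis; each (sample, channel) slice is
# normalized over its spatial positions.
norm = tfa.layers.InstanceNormalization(axis=-1)
y = norm(x)
print(y.shape)  # (4, 32, 32, 3)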