Preprocessing for TensorFlow Dataset 'cats_vs_dogs'
I'm trying to create a preprocessing function so that training_dataset can be fed directly into a Keras sequential neural network. The function needs to return features and labels.
def preprocessing_function(data):
    features = ...
    labels = ...
    return features, labels

dataset, info = tfds.load(name='cats_vs_dogs', split=tfds.Split.TRAIN, with_info=True)
training_dataset = dataset.map(preprocessing_function)
What should preprocessing_function look like? I've spent hours researching and trying to implement it, to no avail. I'm hoping someone can help.
Here are two preprocessing functions. The first is applied to both the training and validation data; it normalizes the images and resizes them to the size the network expects. The second, augmentation, is applied to the training set only. The kind of augmentation you want depends on your dataset and application, but I've included this one as an example.
# Fetching, pre-processing & preparing the data pipeline
def preprocess(ds):
    x = tf.image.resize_with_pad(ds['image'], IMG_SIZE_W, IMG_SIZE_H)
    x = tf.cast(x, tf.float32)
    x = (x - MEAN) / VARIANCE
    y = tf.one_hot(ds['label'], NUM_CLASSES)
    return x, y

def augmentation(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.resize_with_crop_or_pad(image, IMG_W + 4, IMG_H + 4)   # zero-pad each side with 4 pixels
    image = tf.image.random_crop(image, size=[BATCH_SIZE, IMG_W, IMG_H, 3]) # random crop back to IMG_W x IMG_H
    return image, label
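These functions rely on a few constants and imports that aren't shown. A minimal sketch of what they might look like for cats_vs_dogs, assuming 224x224 inputs and placeholder normalization statistics (adjust all of these to your own network and data):

import tensorflow as tf
import tensorflow_datasets as tfds

# Assumed values -- not part of the original answer, tune for your setup.
IMG_SIZE_W = IMG_SIZE_H = 224   # input size expected by the network
IMG_W = IMG_H = 224             # size used by the augmentation crop
NUM_CLASSES = 2                 # cats_vs_dogs has two labels
BATCH_SIZE = 32
MEAN = 127.5                    # placeholder normalization statistics
VARIANCE = 127.5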
To load the training and validation datasets:
def get_dataset(dataset_name, shuffle_buff_size=1024, batch_size=BATCH_SIZE, augmented=True):
    train, info_train = tfds.load(dataset_name, split='train[:80%]', with_info=True)
    val, info_val = tfds.load(dataset_name, split='train[80%:]', with_info=True)

    TRAIN_SIZE = int(info_train.splits['train'].num_examples * 0.8)
    VAL_SIZE = int(info_train.splits['train'].num_examples * 0.2)

    train = train.map(preprocess).cache().repeat().shuffle(shuffle_buff_size).batch(batch_size)
    if augmented:
        train = train.map(augmentation)  # applied after batching, hence BATCH_SIZE in random_crop
    train = train.prefetch(tf.data.experimental.AUTOTUNE)

    val = val.map(preprocess).cache().repeat().batch(batch_size)
    val = val.prefetch(tf.data.experimental.AUTOTUNE)

    return train, info_train, val, info_val, TRAIN_SIZE, VAL_SIZE
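A minimal usage sketch: since both returned datasets repeat indefinitely, pass steps_per_epoch and validation_steps to model.fit. Here build_model() is a hypothetical helper standing in for whatever compiled Keras sequential model you are training:

train_ds, info_train, val_ds, info_val, TRAIN_SIZE, VAL_SIZE = get_dataset('cats_vs_dogs')

model = build_model()  # hypothetical: returns a compiled tf.keras model
model.fit(train_ds,
          epochs=10,
          steps_per_epoch=TRAIN_SIZE // BATCH_SIZE,
          validation_data=val_ds,
          validation_steps=VAL_SIZE // BATCH_SIZE)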