Tensorflow dataset: crop/resize images per batch after dataset.batch()


Is it possible to crop/resize images on a per-batch basis?

I am using the Tensorflow dataset API as follows:

dataset = dataset.shuffle(buffer_size).repeat().batch(batch_size, drop_remainder=True)

I want all images within a batch to have the same size; across batches, however, the sizes may differ.

For example, the first batch contains images all of shape (batch_size, 300, 300, 3). The next batch might hold images of shape (batch_size, 224, 224, 3), and yet another batch images of shape (batch_size, 400, 400, 3).

Essentially, I want dynamically shaped batches, with a static shape for all images inside a given batch.

If we do:

dataset = dataset.shuffle(buffer_size).repeat().batch(batch_size, drop_remainder=True).map(lambda x, y: map_fn(x, y))

Is the .map() above applied to each batch separately, or to the whole dataset?

If the .map() above does not apply per batch, how can we achieve this? Can we define an iterator after dataset.batch(), apply tf.image.crop_and_resize() to every image in each batch, and then use dataset.concatenate() to combine the transformed batches?
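As a minimal sketch of the first point (using a toy scalar dataset, not the image pipeline above): a map() placed after batch() receives one whole batch per call, not individual elements.

import tensorflow as tf

def show(x):
    # x is an entire batch here, not a single element: shape (2,)
    tf.print('map() received:', x, '| shape:', tf.shape(x))
    return x

ds = tf.data.Dataset.range(6).batch(2).map(show)
for _ in ds:   # triggers the prints: [0 1], [2 3], [4 5]
    pass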

I am creating the dataset as follows:

# Dataset creation (read image data from files of COCO dataset)
dataset = tf.data.Dataset.list_files(self._file_pattern, shuffle=False)
dataset = dataset.shard(dataset_num_shards, dataset_shard_index)
dataset = dataset.shuffle(tf.cast(256 / dataset_num_shards, tf.int64))
dataset = dataset.interleave(map_func=lambda filename: tf.data.TFRecordDataset(filename).prefetch(1), cycle_length=32, block_length=1, num_parallel_calls=tf.data.experimental.AUTOTUNE)
dataset = dataset.map(tf_example_decoder.TfExampleDecoder().decode, num_parallel_calls=64)
dataset = dataset.shuffle(64).repeat()
# Parse each image for preprocessing
dataset = dataset.map(lambda data, _: _parse_example(data), num_parallel_calls=64)
dataset = dataset.batch(batch_size=batch_size, drop_remainder=True)

# Below code suggested by you to resize images to fixed shape in each batch
def resize_data(images, labels):
    tf.print('Original shape -->', tf.shape(images))
    SIZE = (300, 300)
    return tf.image.resize(images, SIZE), labels
dataset = dataset.map(resize_data)
dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE)

tf.estimator.Estimator(...).train(
        input_fn=lambda: dataset,  # input_fn must be a callable returning the dataset
        steps=steps,
        hooks=train_hooks)

Answer (Alo*_*her):

In general, you can try something like this:

import tensorflow as tf
import numpy as np

dataset1 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 300, 300, 3)))
dataset2 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 224, 224, 3)))
dataset3 = tf.data.Dataset.from_tensor_slices(np.random.random((32, 400, 400, 3)))
dataset = dataset1.concatenate(dataset2.concatenate(dataset3))
dataset = dataset.shuffle(1).repeat().batch(32, drop_remainder=True)  # buffer size 1 keeps order, so each batch of 32 holds images of a single size

def resize_data(images):
  tf.print('Original shape -->', tf.shape(images))
  SIZE = (180, 180)

  return tf.image.resize(images, SIZE)

dataset = dataset.map(resize_data)

for images in dataset.take(3):
  tf.print('New shape -->', tf.shape(images))
Original shape --> [32 300 300 3]
New shape --> [32 180 180 3]
Original shape --> [32 224 224 3]
New shape --> [32 180 180 3]
Original shape --> [32 400 400 3]
New shape --> [32 180 180 3]
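One observation worth adding here (mine, not part of the original answer): tf.image.resize accepts a full 4-D batch tensor, which is why mapping it over already-batched data needs no per-image loop, as long as all images in the batch share a shape.

import tensorflow as tf

batch = tf.zeros((32, 300, 300, 3))           # one whole batch of images
resized = tf.image.resize(batch, (180, 180))  # resizes all 32 images at once
print(resized.shape)                          # (32, 180, 180, 3)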

You can also use tf.image.resize_with_crop_or_pad if you prefer; with the same fixed target size, it produces the same output shapes:

def resize_data(images):
  tf.print('Original shape -->', tf.shape(images))
  SIZE = (180, 180)
  return tf.image.resize_with_crop_or_pad(images, SIZE[0], SIZE[1])

dataset = dataset.map(resize_data)

for images in dataset.take(3):
  tf.print('New shape -->', tf.shape(images))

Original shape --> [32 300 300 3]
New shape --> [32 180 180 3]
Original shape --> [32 224 224 3]
New shape --> [32 180 180 3]
Original shape --> [32 400 400 3]
New shape --> [32 180 180 3]

Note that using repeat() creates an infinite dataset, which is why the examples above bound the iteration with dataset.take(3).

Update 1

If you want a random size for each batch, try something like this (the target dimensions are sampled inside the mapped function, so a fresh size is drawn every time a batch passes through):

def resize_data(images):
  # Sample a random (height, width) in [300, 500) for this batch
  SIZE = tf.random.uniform((2,), minval=300, maxval=500, dtype=tf.int32)
  return tf.image.resize_with_crop_or_pad(images, SIZE[0], SIZE[1])

dataset = dataset.map(resize_data)

for images in dataset.take(3):
  tf.print('New shape -->', tf.shape(images))

New shape --> [32 392 385 3]
New shape --> [32 468 459 3]
New shape --> [32 466 461 3]

Update 2

A very flexible option, which works for any batch size, is to resize each image in the batch individually and then stack the results:

import tensorflow as tf
import numpy as np

dataset1 = tf.data.Dataset.from_tensor_slices(np.random.random((30, 300, 300, 3)))
dataset2 = tf.data.Dataset.from_tensor_slices(np.random.random((30, 224, 224, 3)))
dataset3 = tf.data.Dataset.from_tensor_slices(np.random.random((30, 400, 400, 3)))
dataset = dataset1.concatenate(dataset2.concatenate(dataset3))
# 30 images per source keeps every batch of 10 single-sized; tf.data cannot
# batch tensors whose shapes differ within one batch
dataset = dataset.batch(10, drop_remainder=True).shuffle(96)


def resize_data(images):
  batch_size = tf.shape(images)[0]
  # Accumulate the individually resized images in a dynamically sized TensorArray
  images_resized = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
  # One random (height, width) per batch, shared by all images in it
  SIZE = tf.random.uniform((2,), minval=300, maxval=500, dtype=tf.int32)
  for i in range(batch_size):
    images_resized = images_resized.write(images_resized.size(), tf.image.resize(images[i], SIZE))
  return images_resized.stack()

dataset = dataset.map(resize_data)

for images in dataset:
  print('New shape -->', images.shape)  # images.shape prints as TensorShape([...])
New shape --> TensorShape([10, 399, 348, 3])
New shape --> TensorShape([10, 356, 329, 3])
New shape --> TensorShape([10, 473, 373, 3])
New shape --> TensorShape([10, 489, 489, 3])
New shape --> TensorShape([10, 421, 335, 3])
New shape --> TensorShape([10, 447, 455, 3])
New shape --> TensorShape([10, 355, 382, 3])
New shape --> TensorShape([10, 310, 396, 3])
New shape --> TensorShape([10, 345, 356, 3])
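Since SIZE above is shared by every image in the batch, the per-image TensorArray loop can also be collapsed into a single vectorized call; a sketch of that simplification (my variant, equivalent only because all images in a batch get the same target size):

def resize_data_vectorized(images):
  # One random (height, width) per batch, applied to the whole 4-D batch at once
  SIZE = tf.random.uniform((2,), minval=300, maxval=500, dtype=tf.int32)
  return tf.image.resize(images, SIZE)

dataset = dataset.map(resize_data_vectorized)  # drop-in replacement for the map above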