Posted by shr*_*ath

Data loading with variable batch sizes?

I am currently working on patch-based super-resolution. Most papers divide the image into smaller patches and then feed those patches to the model as input. I was able to create the patches using a custom data loader. The code is shown below:

import torch.utils.data as data
from torchvision.transforms import CenterCrop, ToTensor, Compose, ToPILImage, Resize, RandomHorizontalFlip, RandomVerticalFlip
from os import listdir
from os.path import join
from PIL import Image
import random
import os
import numpy as np
import torch

def is_image_file(filename):
    return any(filename.endswith(extension) for extension in [".png", ".jpg", ".jpeg", ".bmp"])

class TrainDatasetFromFolder(data.Dataset):
    def __init__(self, dataset_dir, patch_size, is_gray, stride):
        super(TrainDatasetFromFolder, self).__init__()
        self.imageHrfilenames = []
        self.imageHrfilenames.extend(join(dataset_dir, x)
                                     for x in sorted(listdir(dataset_dir)) if is_image_file(x))
        self.is_gray = is_gray
        self.patchSize = patch_size
        self.stride = stride

    def _load_file(self, …
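Since each image can yield a different number of patches, one common way to get variable-sized batches is a custom `collate_fn` that concatenates the per-image patch stacks instead of stacking them. The sketch below is a minimal, self-contained illustration of that idea; `PatchDataset` and `collate_patches` are hypothetical names, not part of the original code.

```python
import torch
from torch.utils.data import Dataset, DataLoader

class PatchDataset(Dataset):
    """Toy dataset where each item yields a different number of patches."""
    def __init__(self, patch_counts, patch_size=8):
        self.patch_counts = patch_counts
        self.patch_size = patch_size

    def __len__(self):
        return len(self.patch_counts)

    def __getitem__(self, idx):
        n = self.patch_counts[idx]
        # Each "image" contributes an (n, 1, patch_size, patch_size) stack of patches.
        return torch.randn(n, 1, self.patch_size, self.patch_size)

def collate_patches(batch):
    # Concatenate along the patch dimension, so the effective batch size
    # varies with how many patches each image produced.
    return torch.cat(batch, dim=0)

loader = DataLoader(PatchDataset([3, 5, 2]), batch_size=3,
                    collate_fn=collate_patches)
for patches in loader:
    print(patches.shape)  # torch.Size([10, 1, 8, 8])
```

With the default collate function this would fail, because tensors of shapes `(3, ...)`, `(5, ...)`, and `(2, ...)` cannot be stacked; concatenating them instead produces one batch of 10 patches.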

image-processing python-3.x pytorch

2 votes · 1 answer · 6106 views
