I am looking for a way to append data to an existing dataset inside an HDF5 file using Python (h5py).
A quick summary of my project: I am trying to train a CNN on medical image data. Because of the large amount of data and the heavy memory usage during the conversion to NumPy arrays, I need to split the "conversion" into several blocks of data: load and preprocess the first 100 medical images and save the NumPy arrays to an HDF5 file, then load the next 100 datasets and append them to the existing HDF5 file.
At the moment I store the first 100 converted NumPy arrays as follows:
import h5py
from LoadIPV import LoadIPV

X_train_data, Y_train_data, X_test_data, Y_test_data = LoadIPV()

with h5py.File('.\\PreprocessedData.h5', 'w') as hf:
    hf.create_dataset("X_train", data=X_train_data, maxshape=(None, 512, 512, 9))
    hf.create_dataset("X_test", data=X_test_data, maxshape=(None, 512, 512, 9))
    hf.create_dataset("Y_train", data=Y_train_data, maxshape=(None, 512, 512, 1))
    hf.create_dataset("Y_test", data=Y_test_data, maxshape=(None, 512, 512, 1))
As can be seen, the converted NumPy arrays are split into four different "groups" that are stored in the four HDF5 datasets X_train, X_test, Y_train and Y_test. The LoadIPV() function performs the preprocessing of the medical image data.
My problem is that I would like to store the next 100 NumPy arrays in the same HDF5 file, inside the existing datasets: that is, I want to append the next 100 NumPy arrays to, for example, the existing X_train dataset of shape [100, 512, 512, 9], so that X_train ends up with shape [200, 512, 512, 9]. The same should work for the other three datasets X_test, Y_train and Y_test.
Thank you very much for your help!
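For what it's worth, since the datasets above were created with maxshape=(None, …), their first axis can be grown in place with Dataset.resize and the new block written into the extended region. A minimal sketch, using small hypothetical array shapes and a demo file name instead of the real (100, 512, 512, 9) data:

```python
import h5py
import numpy as np

# Hypothetical small chunks standing in for two batches of 100 preprocessed images.
first_chunk = np.zeros((100, 8, 8, 9), dtype=np.float32)
next_chunk = np.ones((100, 8, 8, 9), dtype=np.float32)

# First write: maxshape=(None, ...) makes axis 0 resizable later.
with h5py.File('PreprocessedData_demo.h5', 'w') as hf:
    hf.create_dataset('X_train', data=first_chunk,
                      maxshape=(None, 8, 8, 9), chunks=True)

# Append: open in 'a' mode, grow the dataset, write into the new region.
with h5py.File('PreprocessedData_demo.h5', 'a') as hf:
    ds = hf['X_train']
    old_len = ds.shape[0]
    ds.resize(old_len + next_chunk.shape[0], axis=0)  # grow along axis 0
    ds[old_len:] = next_chunk                         # fill the appended block

with h5py.File('PreprocessedData_demo.h5', 'r') as hf:
    print(hf['X_train'].shape)  # (200, 8, 8, 9)
```

The same resize-and-write pattern would be repeated for X_test, Y_train and Y_test.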
I am currently trying to recreate a U-Net. In the "upconvolution" part, where the outputs of two layers need to be merged, I get the error mentioned above (TypeError: __init__() got multiple values for argument 'axis').
Code snippet:
import gzip
import os
from six.moves import urllib
import tensorflow as tf
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Input, Dropout, Flatten, Concatenate
from keras.layers import Conv2D, MaxPool2D, Conv2DTranspose
from keras.utils import np_utils
import keras.callbacks
# Define model architecture
input1 = Input((X_train.shape[1], X_train.shape[2], 1))
conv1 = Conv2D(64,(3,3), activation='relu', padding='same')(input1)
conv1 = Dropout(0.2)(conv1)
conv1 = Conv2D(64,(3,3), activation='relu', padding='same')(conv1)
pool1 = MaxPool2D(pool_size=(2,2))(conv1)
conv2 = Conv2D(128,(3,3), activation='relu', padding='same')(pool1)
conv2 = Dropout(0.2)(conv2)
conv2 …
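The usual trigger for this TypeError is calling Concatenate([a, b], axis=3): the list of tensors binds to the constructor's first positional parameter, which is axis, so axis is then supplied twice. A minimal sketch with hypothetical small shapes (using tensorflow.keras here) showing the working form:

```python
from tensorflow.keras.layers import Input, Conv2D, Conv2DTranspose, Concatenate
from tensorflow.keras.models import Model

# Hypothetical tiny encoder/decoder pair standing in for the U-Net layers.
inp = Input((16, 16, 1))
down = Conv2D(8, (3, 3), activation='relu', padding='same')(inp)
up = Conv2DTranspose(8, (3, 3), strides=(1, 1), padding='same')(down)

# Correct: configure the layer with axis, then call it on a LIST of tensors.
# Wrong (raises the TypeError): Concatenate([down, up], axis=3)
merged = Concatenate(axis=3)([down, up])

model = Model(inputs=inp, outputs=merged)
print(model.output_shape)  # channel counts add up: 8 + 8 = 16
```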
I am relatively new to DL and Keras.
I am trying to implement perceptual loss with a pretrained VGG16 in Keras, but I am having some trouble. I already found a question on this topic, but I am still struggling :/
A short description of what my network is supposed to do:
I have a CNN (referred to as mainModel below) that takes grayscale images as input (#TrainData, 512, 512, 1) and outputs grayscale images of the same size. The network is supposed to reduce artefacts in the images, but I think that is not too important for this question. Instead of using e.g. MSE as the loss function, I would like to implement perceptual loss.
What I want to do (I hope I have understood the concept of perceptual loss correctly):
I would like to append a lossModel (a pretrained VGG16 with fixed parameters) to my mainModel and pass mainModel's output into the lossModel. In addition, I pass the label images (Y_train) into the lossModel. Finally, I compare the activations at a specific layer (e.g. block1_conv2) of the lossModel using e.g. MSE and use that as the loss function.
What I have done so far:
Load the data and create the mainModel:
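The wiring described above can be sketched as follows. To keep the example self-contained and runnable, two tiny hypothetical networks stand in for the real mainModel and the pretrained VGG16; the structure (chain a frozen feature extractor onto mainModel's output, train with MSE in feature space, and use the extractor's activations on Y_train as targets) is the point:

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D
from tensorflow.keras.models import Model

# Hypothetical stand-in for mainModel (grayscale in, grayscale out).
inp = Input((32, 32, 1))
out = Conv2D(1, (3, 3), padding='same')(inp)
mainModel = Model(inp, out)

# Hypothetical stand-in for the VGG16 feature extractor up to e.g. block1_conv2.
feat_in = Input((32, 32, 1))
feat = Conv2D(4, (3, 3), padding='same', name='feat_layer')(feat_in)
lossModel = Model(feat_in, feat)
lossModel.trainable = False
for layer in lossModel.layers:
    layer.trainable = False

# Chain the frozen extractor onto mainModel's output; MSE is now computed
# between feature maps, not raw pixels.
fullModel = Model(mainModel.input, lossModel(mainModel.output))
fullModel.compile(optimizer='adam', loss='mse')

# Targets are the loss network's activations on the ground-truth images.
Y_train = np.random.rand(4, 32, 32, 1).astype('float32')
Y_train_feat = lossModel.predict(Y_train, verbose=0)
# fullModel.fit(X_train, Y_train_feat, ...) would then train mainModel only.
```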
### Load data ###
with h5py.File('.\\train_test_val.h5', 'r') as hf:
    X_train = hf['X_train'][:]
    Y_train = hf['Y_train'][:]
    X_test = hf['X_test'][:]
    Y_test = hf['Y_test'][:]
    X_val = hf['X_val'][:]
    Y_val = hf['Y_val'][:]

### Create Main Model ###
input_1 = Input((512,512,9))
conv0 = Conv2D(64, (3,3), strides=(1,1), activation='relu', use_bias=True, padding='same')(input_1)
.
.
.
mainModel = Model(inputs=input_1, outputs=output)
Create the lossModel, append it to the mainModel and fix its parameters:
### Create Loss Model (VGG16) ###
lossModel = vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=mainModel.output, input_shape=(512,512, 1))
lossModel.trainable=False
for layer …
I am new to DL and Keras. Currently I am trying to implement a U-Net-like CNN, and I would now like to include batch normalization layers in my non-sequential model, but I have not managed to get it working so far.
This is how I currently try to include it:
input_1 = Input((X_train.shape[1],X_train.shape[2], X_train.shape[3]))
conv1 = Conv2D(16, (3,3), strides=(2,2), activation='relu', padding='same')(input_1)
batch1 = BatchNormalization(axis=3)(conv1)
conv2 = Conv2D(32, (3,3), strides=(2,2), activation='relu', padding='same')(batch1)
batch2 = BatchNormalization(axis=3)(conv2)
conv3 = Conv2D(64, (3,3), strides=(2,2), activation='relu', padding='same')(batch2)
batch3 = BatchNormalization(axis=3)(conv3)
conv4 = Conv2D(128, (3,3), strides=(2,2), activation='relu', padding='same')(batch3)
batch4 = BatchNormalization(axis=3)(conv4)
conv5 = Conv2D(256, (3,3), strides=(2,2), activation='relu', padding='same')(batch4)
batch5 = BatchNormalization(axis=3)(conv5)
conv6 = Conv2D(512, (3,3), strides=(2,2), activation='relu', padding='same')(batch5)
drop1 = Dropout(0.25)(conv6)
upconv1 = Conv2DTranspose(256, (3,3), strides=(1,1), padding='same')(drop1)
upconv2 = Conv2DTranspose(128, (3,3), strides=(2,2), padding='same')(upconv1)
upconv3 = …
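For comparison, a minimal self-contained functional-API block with batch normalization (hypothetical small shapes; in channels-last data, axis=3 — or the default axis=-1 — is the channel axis). A common ordering is Conv → BatchNorm → Activation rather than normalizing an already-activated output:

```python
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, Activation
from tensorflow.keras.models import Model

# Hypothetical tiny input standing in for the real image tensors.
inp = Input((32, 32, 1))
x = Conv2D(16, (3, 3), strides=(2, 2), padding='same')(inp)
x = BatchNormalization(axis=3)(x)  # axis=3: normalize over the channel axis
x = Activation('relu')(x)
model = Model(inp, x)
print(model.output_shape)
```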
I am new to DL and Keras, and currently I am trying to implement a Sobel-filter-based custom loss function in Keras.
The idea is to compute the mean squared loss between the Sobel-filtered prediction and the Sobel-filtered ground-truth image.
So far, my custom loss function looks like this:
from scipy import ndimage

def mse_sobel(y_true, y_pred):
    for i in range(0, y_true.shape[0]):
        dx_true = ndimage.sobel(y_true[i,:,:,:], 1)
        dy_true = ndimage.sobel(y_true[i,:,:,:], 2)
        mag_true[i,:,:,:] = np.hypot(dx_true, dy_true)
        mag_true[i,:,:,:] *= 1.0 / np.max(mag_true[i,:,:,:])
        dx_pred = ndimage.sobel(y_pred[i,:,:,:], 1)
        dy_pred = ndimage.sobel(y_pred[i,:,:,:], 2)
        mag_pred[i,:,:,:] = np.hypot(dx_pred, dy_pred)
        mag_pred[i,:,:,:] *= 1.0 / np.max(mag_pred[i,:,:,:])
    return K.mean(K.square(mag_pred - mag_true), axis=-1)
Using this loss function leads to this error:
in mse_sobel
for i in range (0, y_true.shape[0]):
TypeError: __index__ returned non-int (type NoneType)
Using the debugger, I found that y_true.shape only returns …
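The reason the loop fails is that inside a Keras loss, y_true is a symbolic tensor: its batch dimension (shape[0]) is None at graph-construction time, so it cannot drive a Python range, and scipy/numpy operations cannot be applied to it at all. A sketch of the same idea written entirely in tensor operations, using tf.image.sobel_edges instead of scipy (note: the per-image max normalization from the original is omitted here for brevity):

```python
import tensorflow as tf

def mse_sobel(y_true, y_pred):
    # sobel_edges returns shape (batch, h, w, channels, 2): dy and dx stacked
    # along the last axis.
    g_true = tf.image.sobel_edges(y_true)
    g_pred = tf.image.sobel_edges(y_pred)
    # Gradient magnitude per pixel; small epsilon keeps sqrt differentiable at 0.
    mag_true = tf.sqrt(tf.reduce_sum(tf.square(g_true), axis=-1) + 1e-8)
    mag_pred = tf.sqrt(tf.reduce_sum(tf.square(g_pred), axis=-1) + 1e-8)
    # Mean squared difference of the magnitudes, one value per sample.
    return tf.reduce_mean(tf.square(mag_pred - mag_true), axis=[1, 2, 3])

# usage: model.compile(optimizer='adam', loss=mse_sobel)
```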