nad*_*dia 6 python keras tensorflow
I am confused about Conv2D and conv2d in Keras. What is the difference between them? I think the first one is a layer and the second one is a backend function, but what does that mean? In Conv2D we pass the number of filters, the filter size, and the stride (Conv2D(64,(3,3),stride=(8,8))(input)), but in conv2d we use conv2d(input, kernel, stride=(8,8)), where the kernel is (64, 3, 3) — so the number of filters and their size are put together? Where should I put the kernel? Could you help me with this? Thank you.
The code in PyTorch:
def apply_conv(self, image, filter_type: str):
    if filter_type == 'dct':
        filters = self.dct_conv_weights
    elif filter_type == 'idct':
        filters = self.idct_conv_weights
    else:
        raise ValueError('Unknown filter_type value.')

    image_conv_channels = []
    for channel in range(image.shape[1]):
        image_yuv_ch = image[:, channel, :, :].unsqueeze_(1)
        image_conv = F.conv2d(image_yuv_ch, filters, stride=8)
        image_conv = image_conv.permute(0, 2, 3, 1)
        image_conv = image_conv.view(image_conv.shape[0], image_conv.shape[1], image_conv.shape[2], 8, 8)
        image_conv = image_conv.permute(0, 1, 3, 2, 4)
        image_conv = image_conv.contiguous().view(image_conv.shape[0],
                                                  image_conv.shape[1] * image_conv.shape[2],
                                                  image_conv.shape[3] * image_conv.shape[4])
        image_conv.unsqueeze_(1)
        # image_conv = F.conv2d()
        image_conv_channels.append(image_conv)

    image_conv_stacked = torch.cat(image_conv_channels, dim=1)
    return image_conv_stacked
The code changed to Keras:
def apply_conv(self, image, filter_type: str):
    if filter_type == 'dct':
        filters = self.dct_conv_weights
    elif filter_type == 'idct':
        filters = self.idct_conv_weights
    else:
        raise ValueError('Unknown filter_type value.')

    print(image.shape)
    image_conv_channels = []
    for channel in range(image.shape[1]):
        print(image.shape)
        print(channel)
        image_yuv_ch = K.expand_dims(image[:, channel, :, :], 1)
        print(image_yuv_ch.shape)
        print(filters.shape)
        image_conv = Kr.backend.conv2d(image_yuv_ch, filters, strides=(8, 8), data_format='channels_first')
        image_conv = Kr.backend.permute_dimensions(image_conv, (0, 2, 3, 1))
        image_conv = Kr.backend.reshape(image_conv, (image_conv.shape[0], image_conv.shape[1], image_conv.shape[2], 8, 8))
        image_conv = Kr.backend.permute_dimensions(image_conv, (0, 1, 3, 2, 4))
        image_conv = Kr.backend.reshape(image_conv, (image_conv.shape[0],
                                                     image_conv.shape[1] * image_conv.shape[2],
                                                     image_conv.shape[3] * image_conv.shape[4]))
        Kr.backend.expand_dims(image_conv, 1)
        # image_conv = F.conv2d()
        image_conv_channels.append(image_conv)

    image_conv_stacked = Kr.backend.concatenate(image_conv_channels, axis=1)
    return image_conv_stacked
But when I execute the code, it produces the following error:
Traceback (most recent call last):
  File "", line 383, in <module>
    decoded_noise = JpegCompression()(act11)  # 16
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "", line 169, in call
    image_dct = self.apply_conv(noised_image, 'dct')
  File "", line 132, in apply_conv
    image_conv = Kr.backend.conv2d(image_yuv_ch, filters, strides=(8, 8), data_format='channels_first')
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\backend\tensorflow_backend.py", line 3650, in conv2d
    data_format=tf_data_format)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 779, in convolution
    data_format=data_format)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 839, in __init__
    filter_shape[num_spatial_dims]))
ValueError: number of input channels does not match corresponding dimension of filter, 1 != 8
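A likely source of the 1 != 8 mismatch is the kernel layout: the Keras/TensorFlow backend expects a kernel of shape (height, width, in_channels, out_channels), while PyTorch's F.conv2d expects (out_channels, in_channels, height, width). If weights built for PyTorch (say (64, 1, 8, 8), which is an assumption based on the code above) are passed straight to K.conv2d, TensorFlow reads 8 in the in_channels slot and compares it against the single input channel. A minimal NumPy sketch of the re-layout:

```python
import numpy as np

# Hypothetical PyTorch-style filter bank: (out_channels, in_channels, height, width)
pytorch_kernel = np.zeros((64, 1, 8, 8))

# Keras backend conv2d expects (height, width, in_channels, out_channels),
# so permute the axes before handing the weights to K.conv2d
keras_kernel = np.transpose(pytorch_kernel, (2, 3, 1, 0))

print(keras_kernel.shape)  # (8, 8, 1, 64)
```

The same axis permutation can be done on a tensor with K.permute_dimensions if the weights already live in the graph.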
New code:
for channel in range(image.shape[1]):
    image_yuv_ch = K.expand_dims(image[:, channel, :, :], axis=1)
    image_yuv_ch = K.permute_dimensions(image_yuv_ch, (0, 2, 3, 1))
    image_conv = tf.keras.backend.conv2d(image_yuv_ch, kernel=filters, strides=(8, 8), padding='same')
    image_conv = tf.keras.backend.reshape(image_conv, (image_conv.shape[0], image_conv.shape[1], image_conv.shape[2], 8, 8))
Error:
Traceback (most recent call last):
  File "", line 263, in <module>
    decoded_noise = JpegCompression()(act11)  # 16
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "", line 166, in call
    image_dct = self.apply_conv(noised_image, 'dct')
  File "", line 128, in apply_conv
    image_conv = tf.keras.backend.reshape(image_conv, (image_conv.shape[0], image_conv.shape[1], image_conv.shape[2], 8, 8))
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\keras\backend.py", line 2281, in reshape
    return array_ops.reshape(x, shape)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6482, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 513, in _apply_op_helper
    raise err
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 510, in _apply_op_helper
    preferred_dtype=default_dtype)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1146, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 229, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\constant_op.py", line 208, in constant
    value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\tensor_util.py", line 531, in make_tensor_proto
    "supported type." % (type(values), values))
TypeError: Failed to convert object of type to Tensor. Contents: (Dimension(None), Dimension(4), Dimension(4), 8, 8). Consider casting elements to a supported type.
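The TypeError above occurs because image_conv.shape[0] is the symbolic batch size Dimension(None), and a shape tuple containing it cannot be converted to a Tensor. A common fix is to pass -1 for the unknown dimension and let reshape infer it. NumPy's reshape uses the same convention, as this small sketch shows (the shapes here are illustrative):

```python
import numpy as np

# Stand-in for the conv output: (batch, 4, 4, 64), where batch is unknown at graph time
image_conv = np.zeros((10, 4, 4, 64))

# Passing -1 lets reshape infer the batch dimension
# instead of feeding a symbolic Dimension(None) into the shape tuple
reshaped = image_conv.reshape((-1, 4, 4, 8, 8))

print(reshaped.shape)  # (10, 4, 4, 8, 8)
```

In the Keras code the equivalent would be reshaping with -1 in place of image_conv.shape[0].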
TensorFlow and Keras now use the channels_last convention by default. So you should first permute the channel dimension to the last position using K.permute_dimensions. You can try this code at colab.research.google.com to figure it out yourself.
conv2d is a function that performs 2D convolution (see the documentation), whereas keras.layers.Conv2D() returns an instance of the class Conv2D, which performs the convolution when called. For example:
import keras
conv_layer = keras.layers.Conv2D(filters=64, kernel_size=8, strides=(4, 4), padding='same')
Basically, they differ in how they are defined and how they are used. K.conv2d is used inside keras.layers.Conv2D when conv_layer applies convolution on some input x, such as conv_layer(x).
The example below may make it easier to understand the difference between say_hello and SayHello.
def say_hello(word, name):
    print(word, name)

class SayHello():
    def __init__(self, word='Hello'):
        self.word = word

    def __call__(self, name):
        say_hello(self.word, name)

say_hello('Hello', 'Nadia')  # Hello Nadia

sayhello = SayHello(word='Hello')  # you will get an instance `sayhello` of class SayHello
sayhello('Nadia')  # Hello Nadia
The kernel here is a tensor of shape (kernel_size, kernel_size, in_channels, out_channels). To get an image_conv of shape (8, 8, 64) from the 32×32 input, use strides=(4, 4).
import tensorflow as tf
import tensorflow.keras.backend as K
image = tf.random_normal((10, 3, 32, 32))
print(image.shape) # shape=(10, 3, 32, 32)
channel = 1
image_yuv_ch = K.expand_dims(image[:, channel,:,:], axis=1) # shape=(10, 1, 32, 32)
image_yuv_ch = K.permute_dimensions(image_yuv_ch, (0, 2, 3, 1)) # shape=(10, 32, 32, 1)
# The first K.conv2d
in_channels = 1
out_channels = 64 # same as filters
kernel = tf.random_normal((8, 8, in_channels, out_channels)) # shape=(8, 8, 1, 64)
image_conv = tf.keras.backend.conv2d(image_yuv_ch, kernel=kernel, strides=(4, 4), padding='same')
print(image_conv.shape) #shape=(10, 8, 8, 64)
# The second
import keras
conv_layer = keras.layers.Conv2D(filters=64, kernel_size=8, strides=(4, 4), padding='same')
image_conv = conv_layer(image_yuv_ch)
print(image_conv.shape) #shape=(10, 8, 8, 64)