dan*_*lay 21 python tensorflow
OK, I'd like to do 1-dimensional convolution of time series data in Tensorflow. tf.nn.conv2d apparently supports this, judging by these tickets and the manual. The only requirement is to set strides=[1,1,1,1]. Sounds simple enough!
But I can't figure out how to do it even in a very small test case. What am I doing wrong?
Let's set this up.
import tensorflow as tf
import numpy as np
print(tf.__version__)
>>> 0.9.0
OK, now let's generate a basic convolution test on two small arrays. I'll make it easy by using a batch size of 1, and since time series are 1-dimensional, I'll have an "image height" of 1. And since it's a univariate time series, clearly the number of "channels" is also 1, so this should be simple, right?
g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]",
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, -1, 1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, -1, 1, 1), name="phi")
    conv = tf.nn.conv2d(
        phi,
        x,
        strides=[1, 1, 1, 1],
        padding="SAME",
        name="conv")
Boom. Error.
ValueError: Dimensions 1 and 5 are not compatible
OK, first of all, I don't understand how this can happen with any dimensions at all, since I've specified the padding argument in the convolution op.
But fine, maybe there are limits to that. I must have gotten the documentation confused and set up this convolution on the wrong axes of the tensor. I'll try all possible permutations:
for i in range(4):
    for j in range(4):
        shape1 = [1, 1, 1, 1]
        shape1[i] = -1
        shape2 = [1, 1, 1, 1]
        shape2[j] = -1
        x_array = np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(*shape1)
        phi_array = np.array([0.0, 0.5, 1.0]).reshape(*shape2)
        try:
            g = tf.Graph()
            with g.as_default():
                x = tf.Variable(x_array, name="x")
                phi = tf.Variable(phi_array, name="phi")
                conv = tf.nn.conv2d(
                    x,
                    phi,
                    strides=[1, 1, 1, 1],
                    padding="SAME",
                    name="conv")
                init_op = tf.initialize_all_variables()
            sess = tf.Session(graph=g)
            sess.run(init_op)
            print("SUCCEEDED!", x_array.shape, phi_array.shape, conv.eval(session=sess))
            sess.close()
        except Exception as e:
            print("FAILED!", x_array.shape, phi_array.shape, type(e), e.args or e._message)
Result:
FAILED! (5, 1, 1, 1) (3, 1, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 1)',)
FAILED! (5, 1, 1, 1) (1, 3, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (1, 3) Input: (1, 1)',)
FAILED! (5, 1, 1, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (5, 1, 1, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
[[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 5, 1, 1) (3, 1, 1, 1) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
[[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 5, 1, 1) (1, 3, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (1, 3) Input: (5, 1)',)
FAILED! (1, 5, 1, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (1, 5, 1, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
[[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 5, 1) (3, 1, 1, 1) <class 'ValueError'> ('Filter must not be larger than the input: Filter: (3, 1) Input: (1, 5)',)
FAILED! (1, 1, 5, 1) (1, 3, 1, 1) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
[[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 5, 1) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 1 and 3 are not compatible',)
FAILED! (1, 1, 5, 1) (1, 1, 1, 3) <class 'tensorflow.python.framework.errors.InvalidArgumentError'> No OpKernel was registered to support Op 'Conv2D' with these attrs
[[Node: conv = Conv2D[T=DT_DOUBLE, data_format="NHWC", padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](x/read, phi/read)]]
FAILED! (1, 1, 1, 5) (3, 1, 1, 1) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 3, 1, 1) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 1, 3, 1) <class 'ValueError'> ('Dimensions 5 and 3 are not compatible',)
FAILED! (1, 1, 1, 5) (1, 1, 1, 3) <class 'ValueError'> ('Dimensions 5 and 1 are not compatible',)
Hmm. OK, now it looks like there are at least two problems. First, the ValueError is about applying the filter along the wrong axis, I suppose, though it comes in two forms.
But the axes along which I can apply the filter are confusing too. Notice that it actually constructs the graph with input shape (5, 1, 1, 1) and filter shape (1, 1, 1, 3). AFAICT from the documentation, this should be a filter that looks at one example from the batch, one "pixel" and one "channel", and outputs 3 "channels". Why does that one work at all, then, when the others don't?
Anyway, sometimes it doesn't fail while constructing the graph. Sometimes it does construct the graph; then we get the tensorflow.python.framework.errors.InvalidArgumentError. From some confusing github tickets I gather this might be due to the fact that I'm running on CPU instead of GPU, or vice versa, or the fact that the convolution op is only defined for 32-bit floats, not 64-bit floats. If anyone could shed some light on which axes I should be aligning what on, in order to convolve a time series with a kernel, I'd be very grateful.
Oli*_*rot 31
I am sorry to say it, but your first code was almost right. You just inverted x and phi in tf.nn.conv2d:
g = tf.Graph()
with g.as_default():
    # data shape is "[batch, in_height, in_width, in_channels]",
    x = tf.Variable(np.array([0.0, 0.0, 0.0, 0.0, 1.0]).reshape(1, 1, 5, 1), name="x")
    # filter shape is "[filter_height, filter_width, in_channels, out_channels]"
    phi = tf.Variable(np.array([0.0, 0.5, 1.0]).reshape(1, 3, 1, 1), name="phi")
    conv = tf.nn.conv2d(
        x,
        phi,
        strides=[1, 1, 1, 1],
        padding="SAME",
        name="conv")
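As a sanity check of what this graph should compute (my addition, not part of the original answer): tf.nn.conv2d actually performs cross-correlation, i.e. the filter is not flipped, so the equivalent NumPy computation convolves with the reversed kernel:

```python
import numpy as np

x = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
phi = np.array([0.0, 0.5, 1.0])

# conv2d is cross-correlation: equal to convolution with the kernel
# reversed; mode='same' mimics padding="SAME" with stride 1.
result = np.convolve(x, phi[::-1], mode='same')
print(result)  # [0.  0.  0.  1.  0.5]
```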
Update: TensorFlow now supports 1D convolution since version r0.11, via tf.nn.conv1d. I previously wrote a guide to using it in the (now defunct) Stack Overflow Documentation, which I'm pasting here:
Consider a basic example with an input of length 10 and dimension 16. The batch size is 32. We therefore have a placeholder with input shape [batch_size, 10, 16].
batch_size = 32
x = tf.placeholder(tf.float32, [batch_size, 10, 16])
We then create a filter of width 3, which takes 16 channels as input and also outputs 16 channels.
filter = tf.zeros([3, 16, 16]) # these should be real values, not 0
Finally we apply tf.nn.conv1d with a stride and a padding:
- stride: an integer s
- padding: this works like in 2D; you can choose between SAME and VALID. SAME will output the same length as the input, while VALID will not add zero padding.
For our example we take a stride of 2 and a VALID padding.
output = tf.nn.conv1d(x, filter, stride=2, padding="VALID")
The output shape should be [batch_size, 4, 16].
With padding="SAME", we would have an output shape of [batch_size, 5, 16].
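Those output lengths follow the usual shape formulas (my summary, not part of the original answer): with VALID padding the output length is ceil((L - k + 1) / s), and with SAME padding it is ceil(L / s), for input length L, filter width k, and stride s. A small sketch:

```python
import math

def conv1d_output_length(length, kernel, stride, padding):
    """Output length of a 1D convolution under TF's shape rules."""
    if padding == "VALID":
        return math.ceil((length - kernel + 1) / stride)
    elif padding == "SAME":
        return math.ceil(length / stride)
    raise ValueError("padding must be 'VALID' or 'SAME'")

print(conv1d_output_length(10, 3, 2, "VALID"))  # 4
print(conv1d_output_length(10, 3, 2, "SAME"))   # 5
```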
In newer versions of TF (starting with 0.11) you have conv1d, so there is no need to use a 2D convolution to do a 1D convolution. Here is a simple example of how to use conv1d:
import tensorflow as tf

i = tf.constant([1, 0, 2, 3, 0, 1, 1], dtype=tf.float32, name='i')
k = tf.constant([2, 1, 3], dtype=tf.float32, name='k')

# conv1d expects data as [batch, in_width, in_channels]
# and the kernel as [filter_width, in_channels, out_channels]
data = tf.reshape(i, [1, int(i.shape[0]), 1], name='data')
kernel = tf.reshape(k, [int(k.shape[0]), 1, 1], name='kernel')

res = tf.squeeze(tf.nn.conv1d(data, kernel, stride=1, padding='VALID'))
with tf.Session() as sess:
    print(sess.run(res))
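You can cross-check the result without TensorFlow (a sketch I'm adding, not from the original answer): conv1d with stride 1 and VALID padding does not flip the kernel, so it is exactly NumPy's 'valid'-mode cross-correlation:

```python
import numpy as np

i = np.array([1, 0, 2, 3, 0, 1, 1], dtype=np.float32)
k = np.array([2, 1, 3], dtype=np.float32)

# np.correlate slides the kernel without flipping it,
# matching what tf.nn.conv1d computes.
res = np.correlate(i, k, mode='valid')
print(res)  # [ 8. 11.  7.  9.  4.]
```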
To understand how conv1d is computed, have a look at the various examples.