After learning how to use einsum, I am now trying to understand how np.tensordot works.
However, I am a bit lost, especially regarding the various possibilities for the axes parameter.
To understand it, since I have never practiced tensor calculus, I use the following example:
A = np.random.randint(2, size=(2, 3, 5))
B = np.random.randint(2, size=(3, 2, 4))
In this case, what are the different possible np.tensordot computations, and how would you compute them manually?
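For instance, here is a sketch of one valid contraction for these shapes (using the A and B defined above): A's axes 0 and 1 (sizes 2 and 3) are paired with B's axes 1 and 0, leaving a (5, 4) result that can be cross-checked against the equivalent einsum call:

# Contract A's axis 0 with B's axis 1, and A's axis 1 with B's axis 0.
C = np.tensordot(A, B, axes=[(0, 1), (1, 0)])   # shape (5, 4)
C_check = np.einsum('ijk,jil->kl', A, B)        # same contraction written with einsum
print(np.allclose(C, C_check))                  # True
# axes=0 would instead form the outer product, of shape (2, 3, 5, 3, 2, 4)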
I have implemented a kind of neural network (a GAN: Generative Adversarial Network) with tensorflow.
It worked as expected until I decided to add the following batch normalization layer to the generator(z) method (see the full code below):
out = tf.contrib.layers.batch_norm(out, is_training=False)
because I now get the following error:
G_sample = generator(Z)
File "/Users/Florian/Documents/DeepLearning/tensorflow_stuff/tensorflow_stuff/DCGAN.py", line 84, in generator
out = tf.contrib.layers.batch_norm(out, is_training=False)
File "/Users/Florian/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 181, in func_with_args
return func(*args, **current_args)
File "/Users/Florian/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/layers/python/layers/layers.py", line 551, in batch_norm
outputs = layer.apply(inputs, training=is_training)
File "/Users/Florian/anaconda2/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 381, in apply
return self.__call__(inputs, **kwargs)
File "/Users/Florian/anaconda2/lib/python2.7/site-packages/tensorflow/python/layers/base.py", line 328, in __call__
self.build(input_shapes[0])
File "/Users/Florian/anaconda2/lib/python2.7/site-packages/tensorflow/python/layers/normalization.py", line 143, in build
input_shape)
ValueError: ('Input has undefined `axis` dimension. Input shape: ', TensorShape([Dimension(None), Dimension(None), Dimension(None), …

I would like to know how Keras computes metrics (whether custom or not).
For example, suppose I have the following metric, which produces the maximum error between the predictions and the ground truth:
def max_error(y_true, y_pred):
    import keras.backend as K
    return K.max(K.abs(y_true - y_pred))
Is the output scalar metric computed on each mini-batch and then averaged, or is it computed directly on the whole dataset (training or validation)?
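For reference, a minimal sketch of how such a metric is typically plugged in (the toy model below is a hypothetical placeholder, not taken from the question); for a stateless metric like this one, Keras evaluates it batch by batch, and the value displayed during training is a running average over batches rather than a single whole-dataset computation:

from keras.models import Sequential
from keras.layers import Dense

# Hypothetical toy model, only to show where the custom metric is registered.
model = Sequential([Dense(1, input_dim=10)])
model.compile(optimizer='adam', loss='mse', metrics=[max_error])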
I have an array A of shape (N, N, K) and I would like to compute another array B of the same shape, such that B[:, :, i] = np.linalg.inv(A[:, :, i]).
As solutions I see map and for loops, but I would like to know whether numpy provides a function to do this (I have already tried np.apply_over_axes, but it seems to handle only 1D arrays).
With a for loop:
B = np.zeros(shape=A.shape)
for i in range(A.shape[2]):
    B[:, :, i] = np.linalg.inv(A[:, :, i])
With map:
B = np.asarray(map(np.linalg.inv, np.squeeze(np.dsplit(A, A.shape[2])))).transpose(1, 2, 0)
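One possibility, as a sketch: np.linalg.inv also accepts a stack of matrices as long as the matrix axes are the last two, so the K axis can be moved to the front, the whole stack inverted in one call, and the original layout restored:

# Reorder (N, N, K) -> (K, N, N), invert the stack of matrices, then restore the layout.
B = np.moveaxis(np.linalg.inv(np.moveaxis(A, 2, 0)), 0, 2)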
I would like to split a numpy array into sub-arrays of unequal sizes along the first axis. I have looked at numpy.split, but it seems I can only pass indices, not sizes (the number of rows of each sub-array).
For example:
arr = np.array([[1,2], [3,4], [5,6], [7,8], [9,10]])
should produce:
arr.split([2,1,2]) = [array([[1,2], [3,4]]), array([5,6]), array([[7,8], [9,10]])]
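A sketch of a possible workaround: the sizes can be turned into the split indices that np.split expects by taking their cumulative sum (dropping the last value, which is just the total length):

sizes = [2, 1, 2]
# np.cumsum(sizes)[:-1] gives the indices [2, 3] at which to cut along axis 0.
parts = np.split(arr, np.cumsum(sizes)[:-1])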
I am working through the convolutional autoencoder tutorial written by the author of the Keras library: https://blog.keras.io/building-autoencoders-in-keras.html
However, when I run exactly the same code and analyze the network architecture with summary(), it seems that the output size is not compatible with the input size (which is required for an autoencoder). Here is the output of summary():
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 1, 28, 28) 0
____________________________________________________________________________________________________
convolution2d_1 (Convolution2D) (None, 16, 28, 28) 160 input_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D) (None, 16, 14, 14) 0 convolution2d_1[0][0]
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D) (None, 8, 14, 14) 1160 maxpooling2d_1[0][0]
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D) (None, 8, 7, 7) 0 convolution2d_2[0][0]
____________________________________________________________________________________________________
convolution2d_3 (Convolution2D) (None, 8, 7, 7) 584 maxpooling2d_2[0][0]
____________________________________________________________________________________________________
maxpooling2d_3 (MaxPooling2D) (None, 8, 3, 3) 0 convolution2d_3[0][0]
____________________________________________________________________________________________________
convolution2d_4 (Convolution2D) …

Suppose I have a function with the following prototype:
def my_func(fixed_param, *args)
I would like to run this function with multiple arguments (not necessarily the same number of arguments for each run), e.g.:
res = map(partial(my_func, fixed_param=3), [[1, 2, 3], [1, 2, 3, 4]])
where [1, 2, 3] and [1, 2, 3, 4] correspond respectively to the first and second sets of arguments for args.
But this line of code fails with the following error:
TypeError: my_func() got multiple values for keyword argument 'fixed_param'
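For context, a sketch of why the clash happens and one way around it: partial(my_func, fixed_param=3) fixes fixed_param by keyword, but map() then passes each list as a single positional argument, which also binds to fixed_param. Binding the fixed value positionally and unpacking each argument list avoids the conflict:

def my_func(fixed_param, *args):
    return fixed_param, args

# Each argument list is unpacked into *args; 3 plays the role of fixed_param.
res = list(map(lambda arg_list: my_func(3, *arg_list), [[1, 2, 3], [1, 2, 3, 4]]))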
In Keras, I would like to pass a different training dataset (X_train, y_train) every N epochs, where (X_train, y_train) is obtained through a Monte Carlo simulation.
In pseudo-code, this could be done as follows:
for i in range(nb_total_epochs):
    if i % N == 0:
        X_train, y_train = generate_new_dataset(simulation_parameters)
    train_model(X_train, y_train)
Is there any ready-made trick to achieve this through the fit() function?
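As a minimal sketch (reusing generate_new_dataset, simulation_parameters, nb_total_epochs and N from the pseudo-code above, and assuming model is an already compiled Keras model), fit() can simply be called in a loop, N epochs at a time:

# Regenerate the Monte Carlo dataset, then train for N more epochs on it.
for i in range(nb_total_epochs // N):
    X_train, y_train = generate_new_dataset(simulation_parameters)
    model.fit(X_train, y_train, epochs=N)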
Is there a function in Numpy that inverts 0s and 1s in a binary array? If
a = np.array([0, 1, 0, 1, 1])
I would like to get:
b = [1, 0, 1, 0, 0]
I use:
b = np.empty_like(a)  # b has to be allocated before the masked assignments
b[a == 0] = 1
b[a == 1] = 0
but perhaps something already exists in Numpy to do this.
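For what it's worth, two one-liners built from plain numpy operations also produce the inverted array:

b = 1 - a                              # arithmetic complement of a 0/1 array
b = np.logical_not(a).astype(a.dtype)  # explicit boolean negation, cast back to integers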
I have a map inMap of type map<double, pair<int, double>>.
I am trying to filter this map with copy_if, like this:
map<double, pair<int, double>> outMap;
copy_if(inMap.begin(), inMap.end(), outMap.begin(), [](pair<double, pair<int, double>> item) { return true; }); // I have simplified the predicate
However, at compile time, I get the following error:
error: use of deleted function 'std::pair<const double, std::pair<int, double>>& std::pair<const double, std::pair<int, double>>::operator=(const std::pair<const double, std::pair<int, double>>&)
In my application, I need to switch between two GLSL programs via glUseProgram(program). I would like to know: if I write
glUseProgram(program1)
buf1 = glGenBuffers(1)
glUseProgram(program2)
buf2 = glGenBuffers(1)
can buf1 and buf2 end up being the same value? In other words, does each program get its own buffers, or are buffers shared between programs?