Non-fully connected layers in TensorFlow

Tags: python, machine-learning, neural-network, keras, tensorflow

I want to create a network in which the nodes in the input layer are connected only to some of the nodes in the next layer. Here is a small example:

[Figure: a small example network in which input node i1 is not connected to hidden node h1]

My solution so far is to set the weight of the edge between i1 and h1 to zero, and after every optimization step to multiply the weights with a matrix (which I call the mask matrix) in which every entry is 1 except the entry for the weight of the edge between i1 and h1. (See the code below.)

Is this approach correct? Or does it interfere with GradientDescent? Is there another way to create this kind of network in TensorFlow?

import tensorflow as tf
import numpy as np

tf.enable_eager_execution()


model = tf.keras.Sequential([
  tf.keras.layers.Dense(2, activation=tf.sigmoid, input_shape=(2,)),  # input shape required
  tf.keras.layers.Dense(2, activation=tf.sigmoid)
])


#set the initial weights: [kernel1, bias1, kernel2, bias2]
weights = [np.array([[0, 0.25], [0.2, 0.3]]), np.array([0.35, 0.35]),
           np.array([[0.4, 0.5], [0.45, 0.55]]), np.array([0.6, 0.6])]

model.set_weights(weights)

print(model.get_weights())  # verify that the weights were set

features = tf.convert_to_tensor([[0.05, 0.10]])
labels = tf.convert_to_tensor([[0.01, 0.99]])


#mask with a 0 for the i1 -> h1 weight and 1 everywhere else
mask = np.array([[0, 1], [1, 1]])

#define the loss function
def loss(model, x, y):
  y_ = model(x)
  return tf.losses.mean_squared_error(labels=y, predictions=y_)

#define the gradient calculation
def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
  return loss_value, tape.gradient(loss_value, model.trainable_variables) 

#create optimizer and global step
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
global_step = tf.train.get_or_create_global_step()


#optimization step
loss_value, grads = grad(model, features, labels)
optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step)

#mask the optimized weights: get_weights() returns [kernel1, bias1, kernel2, bias2],
#so mask only the first kernel and write the full list back
weights = model.get_weights()
weights[0] = weights[0] * mask
model.set_weights(weights)
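A closely related variant is to mask the gradients before they are applied instead of re-masking the weights afterwards; the pruned weight then never leaves zero, so the result is the same here. A minimal sketch reusing grad, mask and optimizer from above (the float cast is an added assumption, so the integer mask can multiply float32 gradients):

# variant: zero the gradient of the pruned connection instead of resetting the weight
mask_f = tf.cast(mask, tf.float32)                     # int mask -> float32
loss_value, grads = grad(model, features, labels)
masked_grads = [grads[0] * mask_f] + list(grads[1:])   # grads[0] is the first kernel
optimizer.apply_gradients(zip(masked_grads, model.trainable_variables), global_step)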

Answer (by today):

If you are looking for a solution for the specific example you provided, you can simply use the tf.keras Functional API and define two Dense layers, one connected to both neurons of the previous layer and the other connected to only one of them:

from tensorflow.keras.layers import Input, Lambda, Dense, concatenate
from tensorflow.keras.models import Model

inp = Input(shape=(2,))
inp2 = Lambda(lambda x: x[:,1:2])(inp)   # get the second neuron 

h1_out = Dense(1, activation='sigmoid')(inp2)  # only connected to the second neuron
h2_out = Dense(1, activation='sigmoid')(inp)  # connected to both neurons
h_out = concatenate([h1_out, h2_out])

out = Dense(2, activation='sigmoid')(h_out)

model = Model(inp, out)

# simply train it using `fit`
model.fit(...)
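For completeness, a minimal usage sketch on the toy data from the question (the optimizer, loss and epoch count are assumptions, not prescribed by the answer):

import numpy as np

# compile and fit on the toy example; these settings are illustrative
model.compile(optimizer='sgd', loss='mse')

features = np.array([[0.05, 0.10]])
labels = np.array([[0.01, 0.99]])
model.fit(features, labels, epochs=100, verbose=0)

# h1_out has a (1, 1) kernel, so there is no trainable path from i1 to h1
print([w.shape for w in model.get_weights()])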

  • @LB As far as I know, unlike NumPy this is not possible, because the index has to be a slice or a scalar. However, you can use [`tf.gather`](https://www.tensorflow.org/api_docs/python/tf/gather) for this, e.g. `tf.gather(x, [1, 2, 5], axis=1)`, which is equivalent to `x[:, [1, 2, 5]]`.
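To make that concrete, here is a small sketch (the six-unit input and the layer sizes are made-up values) that uses tf.gather inside a Lambda layer to connect a Dense layer to an arbitrary subset of input neurons:

import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda, Dense
from tensorflow.keras.models import Model

inp = Input(shape=(6,))                                    # made-up input width
# select neurons 1, 2 and 5, as in the tf.gather example in the comment above
subset = Lambda(lambda x: tf.gather(x, [1, 2, 5], axis=1))(inp)
h = Dense(3, activation='sigmoid')(subset)                 # connected only to those three
out = Dense(2, activation='sigmoid')(h)

model = Model(inp, out)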