god_dot · python · deep-learning · keras · pytorch
How can I implement these two Keras models (inspired by the Datacamp course "Advanced Deep Learning with Keras in Python") in PyTorch?
A classification model with 1 input and 2 outputs:
from keras.layers import Input, Concatenate, Dense
from keras.models import Model
input_tensor = Input(shape=(1,))
output_tensor = Dense(2)(input_tensor)
model = Model(input_tensor, output_tensor)
model.compile(optimizer='adam', loss='categorical_crossentropy')
X = ... # e.g. a pandas series
y = ... # e.g. a pandas df with 2 columns
model.fit(X, y, epochs=100)
A model with both a classification and a regression output:
from keras.layers import Input, Dense
from keras.models import Model
input_tensor = Input(shape=(1,))
output_tensor_reg = Dense(1)(input_tensor)
output_tensor_class = Dense(1, activation='sigmoid')(output_tensor_reg)
model = Model(input_tensor, [output_tensor_reg, output_tensor_class])
model.compile(optimizer='adam', loss=['mean_absolute_error', 'binary_crossentropy'])
X = ...
y_reg = ...
y_class = ...
model.fit(X, [y_reg, y_class], epochs=100)
This resource was particularly helpful.
Basically, the idea is that, unlike in Keras, you must state explicitly in the forward function how each output is computed, and then how the global loss is computed from them.
For instance, for the first example:
def __init__(self, ...):
    ...  # define your model elements

def forward(self, x):
    # Do your stuff here
    ...
    x1 = torch.sigmoid(x)  # class probabilities (F.sigmoid is deprecated)
    x2 = torch.sigmoid(x)  # bounding box calculation
    return x1, x2
Then, to compute the loss:
out1, out2 = model(data)
loss1 = criterion1(out1, target1)
loss2 = criterion2(out2, target2)
alpha = ...  # define the weight of each sub-loss in the global loss
loss = alpha * loss1 + (1 - alpha) * loss2
loss.backward()
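Putting those pieces together, here is a minimal runnable sketch of the first pattern (one input, two outputs trained jointly). The class name, layer sizes, loss choices, and dummy data are my own assumptions, not from the course:

```python
import torch
import torch.nn as nn

class TwoOutputNet(nn.Module):
    # Hypothetical module: one scalar input, a shared trunk, two heads
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(1, 8)   # hidden size is an assumption
        self.head1 = nn.Linear(8, 2)    # e.g. class logits
        self.head2 = nn.Linear(8, 4)    # e.g. bounding box coordinates

    def forward(self, x):
        h = torch.relu(self.shared(x))
        return self.head1(h), self.head2(h)

model = TwoOutputNet()
criterion1 = nn.CrossEntropyLoss()  # takes raw logits, unlike a Keras softmax output
criterion2 = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters())

data = torch.randn(16, 1)             # dummy batch
target1 = torch.randint(0, 2, (16,))  # class labels
target2 = torch.randn(16, 4)          # box targets

out1, out2 = model(data)
alpha = 0.5  # sub-loss weight, an arbitrary choice here
loss = alpha * criterion1(out1, target1) + (1 - alpha) * criterion2(out2, target2)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Note that a single optimizer updates both heads at once, since the weighted sum makes the two sub-losses one scalar before `backward()` is called.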
For the second one it is almost the same, except that you take the outputs from different points of the forward pass:
def __init__(self, ...):
    ...  # define your model elements

def forward(self, main_input):
    aux = F.relu(self.dense_1(main_input))  # auxiliary (regression) output
    x = F.relu(self.input_layer(aux))
    x = torch.sigmoid(x)                    # classification output
    return x, aux
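For completeness, here is a self-contained sketch that mirrors the second Keras model exactly (the regression output also feeds the classification head). The module and variable names are my own assumptions:

```python
import torch
import torch.nn as nn

class RegThenClassNet(nn.Module):
    # Mirrors the Keras graph: the regression output is both returned
    # and used as the input of the classification head.
    def __init__(self):
        super().__init__()
        self.reg_head = nn.Linear(1, 1)    # regression output
        self.class_head = nn.Linear(1, 1)  # classification logit on top of it

    def forward(self, x):
        y_reg = self.reg_head(x)
        y_class = torch.sigmoid(self.class_head(y_reg))
        return y_reg, y_class

model = RegThenClassNet()
loss_reg = nn.L1Loss()     # matches Keras' mean_absolute_error
loss_class = nn.BCELoss()  # matches binary_crossentropy on a sigmoid output

X = torch.randn(8, 1)
y_reg_t = torch.randn(8, 1)
y_class_t = torch.randint(0, 2, (8, 1)).float()

out_reg, out_class = model(X)
loss = loss_reg(out_reg, y_reg_t) + loss_class(out_class, y_class_t)
loss.backward()
```

Because the classification head sits on top of the regression output, the classification loss also backpropagates through `reg_head`, just as in the Keras version.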