I am using transfer learning to train my model. The base model is EfficientNet; you can read more about it here.
from tensorflow import keras
from keras.models import Sequential,Model
from keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, Flatten, BatchNormalization, Activation
from keras.optimizers import RMSprop, Adam, SGD
from keras.backend import sigmoid
class SwishActivation(Activation):
    def __init__(self, activation, **kwargs):
        super(SwishActivation, self).__init__(activation, **kwargs)
        self.__name__ = 'swish_act'

def swish_act(x, beta=1):
    return (x * sigmoid(beta * x))
from keras.utils.generic_utils import get_custom_objects
from keras.layers import Activation
get_custom_objects().update({'swish_act': SwishActivation(swish_act)})
model = enet.EfficientNetB0(include_top=False, input_shape=(150,50,3), pooling='avg', weights='imagenet')
x = model.output
x = …
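The rest of the head is truncated above; a minimal sketch of what such a head might look like, assuming a softmax classifier on top of the pooled EfficientNetB0 features and the swish activation defined earlier (NUM_CLASSES is a placeholder, not from the original code):

x = model.output
x = BatchNormalization()(x)
x = Dense(128)(x)
x = Activation(swish_act)(x)            # custom swish activation from above
x = Dropout(0.3)(x)
predictions = Dense(NUM_CLASSES, activation='softmax')(x)   # NUM_CLASSES is hypothetical
model_final = Model(inputs=model.input, outputs=predictions)
model_final.compile(optimizer=Adam(1e-4),
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])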
I just upgraded to TensorFlow 2.3. I want to build my own data generator for training. With TensorFlow 1.x I did this:
def get_data_generator(test_flag):
    item_list = load_item_list(test_flag)
    print('data loaded')
    while True:
        X = []
        Y = []
        for _ in range(BATCH_SIZE):
            x, y = get_random_augmented_sample(item_list)
            X.append(x)
            Y.append(y)
        yield np.asarray(X), np.asarray(Y)

data_generator_train = get_data_generator(False)
data_generator_test = get_data_generator(True)
model.fit_generator(data_generator_train, validation_data=data_generator_test,
                    epochs=10000, verbose=2,
                    use_multiprocessing=True,
                    workers=8,
                    validation_steps=100,
                    steps_per_epoch=500,
                    )
This code worked fine on TensorFlow 1.x: eight processes were created, the CPU and GPU were fully utilized, and "data loaded" was printed 8 times.
With TensorFlow 2.3 I get this warning:
WARNING:tensorflow: multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.
"data loaded" is printed only once (it should be 8 times), the GPU is not fully utilized, and memory leaks every epoch, so training stops after a few epochs. The use_multiprocessing flag does not help.
How can I build a generator/iterator in TensorFlow (Keras) 2.x that can easily be parallelized across multiple CPU processes? Deadlocks and data order do not matter.
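One option that still supports multi-process loading in TF 2.x is keras.utils.Sequence, which model.fit accepts together with use_multiprocessing/workers. A minimal sketch under that assumption, reusing load_item_list, get_random_augmented_sample and BATCH_SIZE from the code above (the per-epoch step counts are placeholders):

import numpy as np
from tensorflow import keras

class RandomAugmentedSequence(keras.utils.Sequence):
    def __init__(self, test_flag, steps):
        self.item_list = load_item_list(test_flag)
        self.steps = steps                     # batches per epoch

    def __len__(self):
        return self.steps

    def __getitem__(self, idx):
        # each worker process builds one random batch, as in the original generator
        X, Y = [], []
        for _ in range(BATCH_SIZE):
            x, y = get_random_augmented_sample(self.item_list)
            X.append(x)
            Y.append(y)
        return np.asarray(X), np.asarray(Y)

train_seq = RandomAugmentedSequence(False, steps=500)
val_seq = RandomAugmentedSequence(True, steps=100)
model.fit(train_seq, validation_data=val_seq,
          epochs=10000, verbose=2,
          use_multiprocessing=True, workers=8)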
I want to train on my data. I joined my data as strings with the word2vec pretrained model https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.id.300.vec.gz and built the model successfully, but when I try to train on the dataset I get an error like this:
UnimplementedError Traceback (most recent call last)
<ipython-input-28-85ce60cd1ded> in <module>()
1 history = model.fit(X_train, y_train, epochs=6,
2 validation_data=(X_test, y_test),
----> 3 validation_steps=30)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
UnimplementedError: Graph execution error:
#skiping error
Node: 'binary_crossentropy/Cast'
Cast string to float is not supported
[[{{node binary_crossentropy/Cast}}]] [Op:__inference_train_function_21541]
Code: …
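The "Cast string to float is not supported" error in binary_crossentropy/Cast usually means the labels (or inputs) handed to fit are still strings after the concatenation step. A sketch of the kind of check and conversion typically needed before calling fit, assuming y_train/y_test are the label arrays from the omitted code and the label names are hypothetical:

import numpy as np

print(X_train.dtype, y_train.dtype)          # string/object dtypes here cause the Cast error

label_map = {'negative': 0, 'positive': 1}   # hypothetical string labels
y_train = np.asarray([label_map[v] for v in y_train], dtype=np.float32)
y_test = np.asarray([label_map[v] for v in y_test], dtype=np.float32)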
I am training a model with fit_generator() and want to generate a unique name for the weights saved at each epoch.
Already tried: see the code below.
Code:
model_path = '.\checkpoints\cp{}.ckpt'.format(time())
cp_callback = tf.keras.callbacks.ModelCheckpoint(model_path,
                                                 verbose=1,
                                                 period=2)
model.fit_generator(..........,callbacks=[cp_callback])
Expected: a unique checkpoint name is generated, e.g. epoch_4.ckpt or epoch_5.ckpt
Actual: the existing checkpoint is overwritten on every save
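ModelCheckpoint formats its filepath with the epoch number (and any logged metric), so putting {epoch} in the path is usually enough to get a unique file per save. A sketch along those lines, keeping the original period=2:

model_path = './checkpoints/cp_epoch_{epoch:04d}.ckpt'
cp_callback = tf.keras.callbacks.ModelCheckpoint(model_path,
                                                 verbose=1,
                                                 period=2)   # newer TF versions use save_freq instead
# model.fit_generator(..., callbacks=[cp_callback]) then writes cp_epoch_0002.ckpt, cp_epoch_0004.ckpt, ...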
When I run the code below, the error message TypeError: zip argument #2 must support iteration pops up.
theta = tf.Variable(tf.zeros(100), dtype=tf.float32, name='theta')

@tf.function
def p(x):
    N = tf.cast(tf.shape(x)[0], tf.int64)
    softmax = tf.ones([N, 1]) * tf.math.softmax(theta)
    idx_x = tf.stack([tf.range(N, dtype=tf.int64), x-1], axis=1)
    return tf.gather_nd(softmax, idx_x)

@tf.function
def softmaxLoss(x):
    return tf.reduce_mean(-tf.math.log(p(x)))

train_dset = tf.data.Dataset.from_tensor_slices(data_train).\
    repeat(1).batch(BATCH_SIZE)

# Create the metrics
loss_metric = tf.keras.metrics.Mean(name='train_loss')
val_loss_metric = tf.keras.metrics.Mean(name='val_loss')
optimizer = tf.keras.optimizers.Adam(0.001)

@tf.function
def train_step(inputs):
    with tf.GradientTape() as tape:
        log_loss = softmaxLoss(inputs)
    gradients = tape.gradient(log_loss, theta)
    optimizer.apply_gradients(zip(gradients, theta))
    # Update the metrics
    loss_metric.update_state(log_loss)
for epoch …
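The zip error comes from passing a single variable: tape.gradient(log_loss, theta) returns one tensor rather than a list, and theta itself (the second zip argument) is a single tf.Variable, which is not iterable here. A sketch of the usual fix, wrapping the variable in a list so both zip arguments are iterables:

@tf.function
def train_step(inputs):
    with tf.GradientTape() as tape:
        log_loss = softmaxLoss(inputs)
    gradients = tape.gradient(log_loss, [theta])          # list of variables in, list of gradients out
    optimizer.apply_gradients(zip(gradients, [theta]))
    loss_metric.update_state(log_loss)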
My model is compiled with this code:
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['sparse_categorical_accuracy'])
During training I get this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 5 which is outside the valid range of [0, 5).
My labels are 1, 2, 3, 4, 5, i.e. in [1, 5] rather than [0, 5). How should I set up the labels for this model?
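sparse_categorical_crossentropy expects integer labels in [0, num_classes), so with 5 output units the labels have to be 0..4. A sketch of the usual fix, assuming y_train/y_test hold the 1..5 labels:

import numpy as np

y_train = np.asarray(y_train) - 1   # map 1..5 -> 0..4
y_test = np.asarray(y_test) - 1
# (alternatively, keep the labels as 1..5 and give the final Dense layer 6 units,
#  at the cost of an unused class 0)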
ERROR: Failed to launch TensorBoard (exited with 1). Contents of stderr:
Traceback (most recent call last):
  File "/home/arshad/anaconda3/bin/tensorboard", line 10, in <module>
    sys.exit(run_main())
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/tensorboard/main.py", line 58, in run_main
    default.get_plugins() + default.get_dynamic_plugins(),
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/tensorboard/default.py", line 110, in get_dynamic_plugins
    for entry_point in pkg_resources.iter_entry_points('tensorboard_plugins')
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2442, in load
    self.require(*args, **kwargs)
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2465, in require
    items = working_set.resolve(reqs, env, installer, extras=self.extras)
  File "/home/arshad/anaconda3/lib/python3.7/site-packages/pkg_resources/__init__.py", line 791, in resolve
    raise VersionConflict(dist, req).with_context(dependent_req)
pkg_resources.VersionConflict: (grpcio 1.16.1 (/home/arshad/anaconda3/lib/python3.7/site-packages), Requirement.parse('grpcio>=1.24.3'))
Here is my code after trying to save and load the model:
model.save('path_to_my_model.h5')
del model
model = tf.keras.models.load_model('path_to_my_model.h5', custom_objects={'Wraparound2D': Wraparound2D})
import tensorflow.keras.backend as K
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functor = K.function(inp, outputs) # evaluation function
layer_outs = functor([X_test, 1.])
# Plot activations of different neurons in different layers
all_layer_activations = list()
min_max_scaler = lambda x : (x - np.min(x))/(np.max(x) - np.min(x))
# min_max_scaler = lambda x : (x - np.mean(x))
for j in range(1, 5):
    if j == 1:
        layer_im …
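In tf.keras 2.x the K.function([X_test, 1.]) pattern (with the old learning-phase flag) is often replaced by a small Model that exposes every layer's output. A sketch under that assumption, reusing the loaded model and X_test from above:

activation_model = tf.keras.Model(inputs=model.input,
                                  outputs=[layer.output for layer in model.layers])
layer_outs = activation_model.predict(X_test)   # list with one activation array per layer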
I have code that runs Keras with TensorFlow 1. The code modifies the loss function in order to do deep reinforcement learning:
import os
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
env = gym.make("CartPole-v0").env
env.reset()
n_actions = env.action_space.n
state_dim = env.observation_space.shape
from tensorflow import keras
import random
from tensorflow.keras import layers as L
import tensorflow as tf
from tensorflow.python.keras.backend import set_session
sess = tf.compat.v1.Session()
graph = tf.compat.v1.get_default_graph()
init = tf.global_variables_initializer()
sess.run(init)
network = keras.models.Sequential()
network.add(L.InputLayer(state_dim))
# let's create a network for approximate q-learning following guidelines above
network.add(L.Dense(5, activation='elu'))
network.add(L.Dense(5, …
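The part that actually modifies the loss is cut off above; a sketch of a usual TF1-style continuation for this kind of approximate Q-learning setup (my assumption, not the asker's exact code; under TF 2.x it would also need tf.compat.v1.disable_eager_execution() so the placeholders work):

# placeholders for one batch of transitions <s, a, r, s', done>
states_ph = tf.compat.v1.placeholder('float32', shape=(None,) + state_dim)
actions_ph = tf.compat.v1.placeholder('int32', shape=[None])
rewards_ph = tf.compat.v1.placeholder('float32', shape=[None])
next_states_ph = tf.compat.v1.placeholder('float32', shape=(None,) + state_dim)
is_done_ph = tf.compat.v1.placeholder('float32', shape=[None])

gamma = 0.99
predicted_qvalues = network(states_ph)
predicted_qvalues_for_actions = tf.reduce_sum(
    predicted_qvalues * tf.one_hot(actions_ph, n_actions), axis=1)
target_qvalues = rewards_ph + gamma * tf.reduce_max(network(next_states_ph), axis=1) * (1 - is_done_ph)

# the "modified" loss: mean squared TD error, minimized with a TF1 optimizer through the session
loss = tf.reduce_mean((predicted_qvalues_for_actions - tf.stop_gradient(target_qvalues)) ** 2)
train_op = tf.compat.v1.train.AdamOptimizer(1e-4).minimize(loss)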
生成给定数据范围之外的值是否正常?
我从model.predict()
. 模型中使用的目标预测列中的数据仅包含 1 或 0。我本来希望model.predict()
生成一个介于 0 和 1 之间的值。
当我将新的类似数据放入model.predict()
尝试进行分类时,我经常得到一个小于 0 或大于 1 的值。我是否应该认为这意味着所有大于 0.5 的值更有可能是 1 并且越高值更有可能是 1?
Here is my code:
epoch_count = 1
model = tf.keras.Sequential([
    feature_layer,
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(1)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_ds,
          validation_data=val_ds,
          epochs=epoch_count)
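Because the last layer is Dense(1) with no activation and the loss uses from_logits=True, predict() returns raw logits, which is why the values fall outside [0, 1]. A sketch of turning them into probabilities, where new_data stands in for the asker's new samples:

logits = model.predict(new_data)          # new_data is a placeholder name
probs = tf.sigmoid(logits).numpy()        # now in (0, 1); values above 0.5 lean towards class 1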
I am working on neural networks and image recognition in Python and following this guide. They use: from keras.datasets import cifar10
to get the images for testing. So my question is:
Thanks in advance!
Why does TensorFlow disable eager execution inside the predict_step function of a tf.keras.Model? Maybe I am mistaken, but here is an example:
from __future__ import annotations
from functools import wraps
import tensorflow as tf
def print_execution(func):
    @wraps(func)
    def wrapper(self: SimpleModel, data):
        print(tf.executing_eagerly())  # Prints False
        return func(self, data)
    return wrapper

class SimpleModel(tf.keras.Model):
    def __init__(self):
        super().__init__()

    def call(self, inputs, training=None, mask=None):
        return inputs

    @print_execution
    def predict_step(self, data):
        return super().predict_step(data)

if __name__ == "__main__":
    x = tf.random.uniform((2, 2))
    print(tf.executing_eagerly())  # Prints True
    model = SimpleModel()
    pred = model.predict(x)
Is this expected behavior? Is there a way to force predict_step to run in eager mode?
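Keras wraps the train/test/predict steps in a tf.function unless the model is set to run eagerly, so predict_step reports executing_eagerly() as False. A sketch of forcing eager execution with the run_eagerly flag:

model = SimpleModel()
model.compile(run_eagerly=True)   # or: model.run_eagerly = True
pred = model.predict(x)           # print_execution now prints True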