I have built a fairly large model in Keras, and I am writing an article about it in LaTeX. To describe the Keras model nicely in the LaTeX document, I would like to turn it into a LaTeX table. I could do this by hand, but I wonder whether there is any "better" way to achieve it.
I looked around and found posts such as "Is there a nice output of Keras model.summary()?", which solve this by plotting an image. However, I would like to have the result as text data (and yes, to have an MRE :)), since a table looks nicer and formats well. The best option would be something along the lines of "statsmodels Summary to LaTeX", if such a thing existed. However, I could not find any way to convert the output of model.summary() into a tabular representation.
I was thinking that there might be a way to convert it to a pandas DataFrame, which could then be exported with df.to_latex(). I tried to do this with model.to_json(), but that function does not return any information about the output shapes that model.summary() prints. Here is my attempt:
import json
import pandas as pd

# model.to_json() returns a JSON string, so parse it first
df = pd.DataFrame(json.loads(model.to_json()))
df2 = pd.DataFrame(df.loc["layers", "config"])
# for example select filters; has to be done like this as the key is not always present
filters = ["-" if "filters" not in x else x["filters"] for x in df2.loc[:, "config"]]
For my model, model.to_json() returns the following JSON:
{"class_name": "Model", "config": {"name": "Discriminator", "layers": [{"name": "input_3", "class_name": "InputLayer", "config": {"batch_input_shape": [null, 256, 256, 1], "dtype": "float32", "sparse": false, "name": "input_3"}, "inbound_nodes": []}, {"name": "input_4", "class_name": "InputLayer", "config": {"batch_input_shape": [null, 256, 256, 1], "dtype": "float32", "sparse": false, "name": "input_4"}, "inbound_nodes": []}, {"name": "concatenate_2", "class_name": "Concatenate", "config": {"name": "concatenate_2", "trainable": true, "dtype": "float32", "axis": -1}, "inbound_nodes": [[["input_3", 0, 0, {}], ["input_4", 0, 0, {}]]]}, {"name": "conv2d_6", "class_name": "Conv2D", "config": {"name": "conv2d_6", "trainable": true, "dtype": "float32", "filters": 8, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["concatenate_2", 0, 0, {}]]]}, {"name": "leaky_re_lu_5", "class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_5", "trainable": true, "dtype": "float32", "alpha": 0.20000000298023224}, "inbound_nodes": [[["conv2d_6", 0, 0, {}]]]}, {"name": "conv2d_7", "class_name": "Conv2D", "config": {"name": "conv2d_7", "trainable": true, "dtype": "float32", "filters": 16, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["leaky_re_lu_5", 0, 0, {}]]]}, {"name": "leaky_re_lu_6", "class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_6", "trainable": true, "dtype": "float32", "alpha": 0.20000000298023224}, "inbound_nodes": [[["conv2d_7", 0, 0, {}]]]}, {"name": "batch_normalization_4", "class_name": "BatchNormalization", "config": {"name": "batch_normalization_4", "trainable": true, "dtype": "float32", "axis": -1, "momentum": 0.8, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}, "inbound_nodes": [[["leaky_re_lu_6", 0, 0, {}]]]}, {"name": "conv2d_8", "class_name": "Conv2D", "config": {"name": "conv2d_8", "trainable": true, "dtype": "float32", "filters": 32, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, 
"kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["batch_normalization_4", 0, 0, {}]]]}, {"name": "leaky_re_lu_7", "class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_7", "trainable": true, "dtype": "float32", "alpha": 0.20000000298023224}, "inbound_nodes": [[["conv2d_8", 0, 0, {}]]]}, {"name": "batch_normalization_5", "class_name": "BatchNormalization", "config": {"name": "batch_normalization_5", "trainable": true, "dtype": "float32", "axis": -1, "momentum": 0.8, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}, "inbound_nodes": [[["leaky_re_lu_7", 0, 0, {}]]]}, {"name": "conv2d_9", "class_name": "Conv2D", "config": {"name": "conv2d_9", "trainable": true, "dtype": "float32", "filters": 64, "kernel_size": [4, 4], "strides": [2, 2], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["batch_normalization_5", 0, 0, {}]]]}, {"name": "leaky_re_lu_8", "class_name": "LeakyReLU", "config": {"name": "leaky_re_lu_8", "trainable": true, "dtype": "float32", "alpha": 0.20000000298023224}, "inbound_nodes": [[["conv2d_9", 0, 0, {}]]]}, {"name": "batch_normalization_6", "class_name": "BatchNormalization", "config": {"name": "batch_normalization_6", "trainable": true, "dtype": "float32", "axis": -1, "momentum": 0.8, "epsilon": 0.001, "center": true, "scale": true, "beta_initializer": {"class_name": "Zeros", "config": {}}, "gamma_initializer": {"class_name": "Ones", "config": {}}, "moving_mean_initializer": {"class_name": "Zeros", "config": {}}, "moving_variance_initializer": {"class_name": "Ones", "config": {}}, "beta_regularizer": null, "gamma_regularizer": null, "beta_constraint": null, "gamma_constraint": null}, "inbound_nodes": [[["leaky_re_lu_8", 0, 0, {}]]]}, {"name": "conv2d_10", "class_name": "Conv2D", "config": {"name": "conv2d_10", "trainable": true, "dtype": "float32", "filters": 1, "kernel_size": [4, 4], "strides": [1, 1], "padding": "same", "data_format": "channels_last", "dilation_rate": [1, 1], "activation": "linear", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1.0, "mode": "fan_avg", "distribution": "uniform", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}, "inbound_nodes": [[["batch_normalization_6", 0, 0, {}]]]}], "input_layers": [["input_3", 0, 0], ["input_4", 0, 0]], "output_layers": [["conv2d_10", 0, 0]]}, "keras_version": "2.3.1", "backend": "tensorflow"}
Whereas what I want is the kind of information model.summary() gives:
Model: "Discriminator"
__________________________________________________________________________________________________
Layer (type)                    Output Shape          Param #     Connected to
==================================================================================================
input_3 (InputLayer)            (None, 256, 256, 1)   0
__________________________________________________________________________________________________
input_4 (InputLayer)            (None, 256, 256, 1)   0
...
Maybe there is a good way to do this if I convert the summary output to a string (as in "Keras model.summary() object to string") and parse the string output?
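To make the idea more concrete, here is a rough sketch of the pandas route I have in mind. It skips model.to_json() and the summary string entirely and assumes the model is already built, so that layer.output_shape and layer.count_params() are available; the column names are just my own choice:
import pandas as pd

def summary_to_df(model):
    # one row per layer, taken from the layer objects instead of the printed summary
    rows = []
    for layer in model.layers:
        rows.append({
            "Layer (type)": "{} ({})".format(layer.name, layer.__class__.__name__),
            "Output Shape": str(layer.output_shape),
            "Param #": layer.count_params(),
        })
    return pd.DataFrame(rows)

# print(summary_to_df(model).to_latex(index=False))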
Answer (8 votes)
I wrote a method that modifies the model.summary() output so that it can be copied into LaTeX and looks just like the original (apart from slightly smaller spacing between the lines).
def m2tex(model):
    # capture model.summary() line by line instead of printing it
    stringlist = []
    model.summary(line_length=70, print_fn=lambda x: stringlist.append(x))
    # drop the separator lines and the trailing blank line
    del stringlist[1:-4:2]
    del stringlist[-1]
    # split each layer row into three LaTeX columns at the fixed column widths
    for ix in range(1, len(stringlist) - 3):
        tmp = stringlist[ix]
        stringlist[ix] = tmp[0:31] + "& " + tmp[31:59] + "& " + tmp[59:] + r"\\ \hline"
    stringlist[0] = r"Model: test \\ \hline"
    stringlist[1] = stringlist[1] + r" \hline"
    stringlist[-4] = stringlist[-4] + r" \hline"
    stringlist[-3] = stringlist[-3] + r" \\"
    stringlist[-2] = stringlist[-2] + r" \\"
    stringlist[-1] = stringlist[-1] + r" \\ \hline"
    # wrap everything in a table environment
    prefix = [r"\begin{table}[]", r"\begin{tabular}{lll}"]
    suffix = [r"\end{tabular}", r"\caption{Model summary for test.}",
              r"\label{tab:model-summary}", r"\end{table}"]
    stringlist = prefix + stringlist + suffix
    out_str = " \n".join(stringlist)
    # escape characters that are special in LaTeX
    out_str = out_str.replace("_", r"\_")
    out_str = out_str.replace("#", r"\#")
    print(out_str)
As you can see it is ugly, but it works for me. If your layer names are long, you may need to increase the line_length=70 argument and the slice indices (tmp[0:31] and tmp[31:59]) in the loop accordingly.
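For reference, a call could look like the sketch below. This assumes tf.keras (with standalone Keras the imports would come from keras instead), and the small Sequential model is only a placeholder to show the call, not the network that produced the table further down:
from tensorflow.keras import layers, models

# placeholder model, just to have something to summarise
model = models.Sequential([
    layers.Conv2D(8, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
m2tex(model)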
An example of the output looks like this:
\begin{table}[]
\begin{tabular}{lll}
Model: test \\ \hline
Layer (type) & Output Shape & Param \# \\ \hline \hline
input\_1 (InputLayer) & [(None, 28, 28, 1)] & 0 \\ \hline
dropout (Dropout) & (None, 28, 28, 1) & 0 \\ \hline
conv2d (Conv2D) & (None, 26, 26, 8) & 80 \\ \hline
dropout\_1 (Dropout) & (None, 26, 26, 8) & 0 \\ \hline
conv2d\_1 (Conv2D) & (None, 24, 24, 8) & 584 \\ \hline
dropout\_2 (Dropout) & (None, 24, 24, 8) & 0 \\ \hline
max\_pooling2d (MaxPooling2D) & (None, 12, 12, 8) & 0 \\ \hline
conv2d\_2 (Conv2D) & (None, 10, 10, 10) & 730 \\ \hline
dropout\_3 (Dropout) & (None, 10, 10, 10) & 0 \\ \hline
conv2d\_3 (Conv2D) & (None, 8, 8, 10) & 910 \\ \hline
dropout\_4 (Dropout) & (None, 8, 8, 10) & 0 \\ \hline
max\_pooling2d\_1 (MaxPooling2D) & (None, 4, 4, 10) & 0 \\ \hline
flatten (Flatten) & (None, 160) & 0 \\ \hline
dense (Dense) & (None, 16) & 2576 \\ \hline
dropout\_5 (Dropout) & (None, 16) & 0 \\ \hline
dense\_1 (Dense) & (None, 10) & 170 \\ \hline \hline
Total params: 5,050 \\
Trainable params: 5,050 \\
Non-trainable params: 0 \\ \hline
\end{tabular}
\caption{Model summary for test.}
\label{tab:model-summary}
\end{table}
Answer (6 votes)
Based on nico's answer, I modified some of his code and created a GitHub repository:
def m2tex(model, modelName):
    # capture model.summary() line by line instead of printing it
    stringlist = []
    model.summary(line_length=70, print_fn=lambda x: stringlist.append(x))
    # drop the separator lines and the trailing blank line
    del stringlist[1:-4:2]
    del stringlist[-1]
    # split each layer row into three LaTeX columns at the fixed column widths
    for ix in range(1, len(stringlist) - 3):
        tmp = stringlist[ix]
        stringlist[ix] = tmp[0:31] + "& " + tmp[31:59] + "& " + tmp[59:] + r"\\ \hline"
    stringlist[0] = r"Model: {} \\ \hline".format(modelName)
    stringlist[1] = stringlist[1] + r" \hline"
    stringlist[-4] += r" \hline"
    stringlist[-3] += r" \\"
    stringlist[-2] += r" \\"
    stringlist[-1] += r" \\ \hline"
    # wrap everything in a table environment
    prefix = [r"\begin{table}[]", r"\begin{tabular}{lll}"]
    suffix = [r"\end{tabular}",
              r"\caption{{Model summary for {}.}}".format(modelName),
              r"\label{tab:model-summary}", r"\end{table}"]
    stringlist = prefix + stringlist + suffix
    out_str = " \n".join(stringlist)
    # escape characters that are special in LaTeX
    out_str = out_str.replace("_", r"\_")
    out_str = out_str.replace("#", r"\#")
    print(out_str)
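A call then just passes the model together with the name that should appear in the table header and caption, e.g. for the model from the question:
m2tex(model, "Discriminator")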