In my model I have a normalization layer for a single-column feature array. I assumed this produces a 1-ndim output:
single_feature_model = keras.models.Sequential([
    single_feature_normalizer,
    layers.Dense(1)
])
The normalization step:
single_feature_normalizer = preprocessing.Normalization(axis=None)
single_feature_normalizer.adapt(single_feature)
The error I get is:
ValueError                                Traceback (most recent call last)
<ipython-input-98-22191285d676> in <module>()
      2 single_feature_model = keras.models.Sequential([
      3     single_feature_normalizer,
----> 4     layers.Dense(1) # Linear Model
      5 ])

/usr/local/lib/python3.7/dist-packages/keras/engine/input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    225       ndim = x.shape.rank
    226       if ndim is not None and ndim < spec.min_ndim:
--> 227         raise ValueError(f'Input {input_index} of layer "{layer_name}" '
    228                          'is incompatible with the layer: '
    229                          f'expected min_ndim={spec.min_ndim}, '

ValueError: Input 0 of layer "dense_27" is incompatible with the layer: expected min_ndim=2, found ndim=1. Full shape received: (None,)
It looks like the Dense layer expects an input with at least 2 dims, while the normalization layer is producing a 1-dim output. Is there a way to fix this and get the model to work?
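(For reference, the min_ndim requirement of Dense can be reproduced on its own; this is a minimal sketch with a made-up tensor, not code from the original post:)

import tensorflow as tf

dense = tf.keras.layers.Dense(1)
x = tf.random.normal((4,))            # ndim=1, analogous to the (None,) shape in the traceback
try:
    dense(x)                          # raises: expected min_ndim=2, found ndim=1
except ValueError as e:
    print(e)
print(dense(tf.reshape(x, (-1, 1))))  # shape (4, 1) satisfies min_ndim=2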
I think you need to define the input layer explicitly with an input shape, because your output layer cannot infer the shape of the tensor coming from the normalization layer:
import tensorflow as tf
single_feature_normalizer = tf.keras.layers.Normalization(axis=None)
feature = tf.random.normal((314, 1))
single_feature_normalizer.adapt(feature)
single_feature_model = tf.keras.models.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    single_feature_normalizer,
    tf.keras.layers.Dense(1)
])
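As a quick sanity check, the fixed model can be compiled and fit; the target y below is random and only for illustration:

y = tf.random.normal((314, 1))                       # dummy target, illustration only
single_feature_model.compile(optimizer='adam', loss='mse')
single_feature_model.summary()                       # Normalization -> Dense, output shape (None, 1)
single_feature_model.fit(feature, y, epochs=1, verbose=0)
print(single_feature_model.predict(feature[:3]))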
Or define the input shape directly on the normalization layer, without a separate Input layer:
single_feature_normalizer = tf.keras.layers.Normalization(input_shape=[1,], axis=None)
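Putting that second option together as a rough sketch, using the same kind of dummy feature tensor as above:

import tensorflow as tf

feature = tf.random.normal((314, 1))   # dummy single-column feature

# input_shape on the first layer takes the place of an explicit Input layer
single_feature_normalizer = tf.keras.layers.Normalization(input_shape=[1,], axis=None)
single_feature_normalizer.adapt(feature)

single_feature_model = tf.keras.models.Sequential([
    single_feature_normalizer,
    tf.keras.layers.Dense(1)
])
single_feature_model.summary()         # builds without the min_ndim error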