Tags: python, pycharm, deep-learning, tensorflow
I wrote the following code in PyCharm, which implements a fully connected layer (FCL) in TensorFlow. An invalid argument error occurs at the placeholder, so I specified the dtype, shape, and name of the placeholder, but I still get the invalid argument error.
I want to produce a new Signal (1, 222) through the FCL model.

Input Signal (1, 222) => Output Signal (1, 222)

- maxPredict: find the index with the highest value in the output signal.
- calculated Y: get the value of the frequency array corresponding to maxPredict.
- loss: use the difference between the true Y and the calculated Y as the loss: loss = tf.abs(trueY - calculatedY)

Code (where the error occurs):
x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')
Error:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'inputX' with dtype float and shape [1,222]
    [[{{node inputX}} = Placeholder[dtype=DT_FLOAT, shape=[1,222], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
During handling of the above exception, another exception occurred:
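For context, this error normally means a session ran an op that depends on inputX without feeding it, or fed a value whose shape or dtype does not match the placeholder. A minimal sketch of both cases (TF 1.x API; the names x, out, and signal are illustrative, not from the original script):

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[1, 222], name='inputX')
out = x * 2.0

with tf.Session() as sess:
    # sess.run(out)  # would raise InvalidArgumentError: nothing fed for 'inputX'
    signal = np.zeros((1, 222), dtype=np.float32)  # must match [1, 222] exactly
    print(sess.run(out, feed_dict={x: signal}).shape)  # (1, 222)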
I changed my code.
x = tf.placeholder(tf.float32, [None, 222], name='inputX')
Error case 1:
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - tf.Variable(newY))
ValueError: initial_value must have a shape specified: Tensor("mul:0", shape=(?,), dtype=float32)
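A side note on error case 1 (my reading, not stated in the original post): tf.Variable needs a fully defined static shape for its initial_value, and newY has shape (?,) because maxPredict depends on a placeholder with an unknown batch dimension. A minimal sketch (variable names are illustrative):

import tensorflow as tf

dyn = tf.placeholder(tf.float32, [None])            # shape (?,) like newY above
ok = tf.Variable(tf.zeros([3]))                     # static shape known: fine
# bad = tf.Variable(dyn)                            # ValueError: initial_value must have a shape specified
loose = tf.Variable(dyn, validate_shape=False)      # allowed, but a variable is not wanted here anyway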
Error case 2:
tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
newY = tf.gather(tensorFreq, maxPredict) * 60
loss = tf.abs(y - newY)
Traceback (most recent call last):
  File "D:/PycharmProject/DetectionSignal/TEST_FCL_StackOverflow.py", line 127, in
    trainStep = opt.minimize(loss)
  File "C:\Users\Heewony\Anaconda3\envs\TSFW_pycharm\lib\site-packages\tensorflow\python\training\optimizer.py", line 407, in minimize
    ([str(v) for _, v in grads_and_vars], loss))
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [tf.Variable 'Variable:0' shape=(222, 1024) dtype=float32_ref, tf.Variable 'Variable_1:0' shape=(1024,) dtype=float32_ref, ... tf.Variable 'Variable_5:0' shape=(222,) dtype=float32_ref] and loss Tensor("Abs:0", dtype=float32).
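For what it's worth, error case 2 is the optimizer reporting that no differentiable path connects the loss to the model's variables: tf.argmax has no gradient, and tf.gather does not propagate gradients into its integer indices. A small demonstration of this, with made-up names and shapes:

import tensorflow as tf

xs = tf.placeholder(tf.float32, [1, 222])
w = tf.Variable(tf.ones([222, 222]))
scores = tf.matmul(xs, w)
idx = tf.argmax(scores, 1)                          # integer output: the gradient stops here
picked = tf.gather(tf.range(222, dtype=tf.float32), idx)
print(tf.gradients(picked, w))                      # [None] -> "No gradients provided for any variable"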
Full code:

import datetime
import numpy as np
import scipy.io as sio
import tensorflow as tf

def Model_FCL(inputX):
    data = inputX  # input Signals

    # Fully Connected Layer 1
    flatConvh1 = tf.reshape(data, [-1, 222])
    fcW1 = tf.Variable(tf.truncated_normal(shape=[222, 1024], stddev=0.05))
    fcb1 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch1 = tf.nn.relu(tf.matmul(flatConvh1, fcW1) + fcb1)

    # Fully Connected Layer 2
    flatConvh2 = tf.reshape(fch1, [-1, 1024])
    fcW2 = tf.Variable(tf.truncated_normal(shape=[1024, 1024], stddev=0.05))
    fcb2 = tf.Variable(tf.constant(0.1, shape=[1024]))
    fch2 = tf.nn.relu(tf.matmul(flatConvh2, fcW2) + fcb2)

    # Output Layer
    fcW3 = tf.Variable(tf.truncated_normal(shape=[1024, 222], stddev=0.05))
    fcb3 = tf.Variable(tf.constant(0.1, shape=[222]))

    logits = tf.add(tf.matmul(fch2, fcW3), fcb3)
    predictY = tf.nn.softmax(logits)
    return predictY, logits

def loadMatlabData(fileName):
    contentsMat = sio.loadmat(fileName)
    dataInput = contentsMat['dataInput']
    dataLabel = contentsMat['dataLabel']

    dataSize = dataInput.shape
    dataSize = dataSize[0]
    return dataInput, dataLabel, dataSize

def getNextSignal(num, data, labels, WINDOW_SIZE, OUTPUT_SIZE):
    shuffleSignal = data[num]
    shuffleLabels = labels[num]

    # shuffleSignal = shuffleSignal.reshape(1, WINDOW_SIZE)
    # shuffleSignal = np.asarray(shuffleSignal, np.float32)
    return shuffleSignal, shuffleLabels

def getBasicFrequency():
    # basicFreq => shape(222)
    basicFreq = np.array([0.598436736688, 0.610649731314, ... 3.297508549096])
    return basicFreq

basicFreq = getBasicFrequency()
myGraph = tf.Graph()
with myGraph.as_default():
    # define input & output placeholders for feeding data
    x = tf.placeholder(dtype=tf.float32, shape=[1, 222], name='inputX')  # Signal size = [1, 222]
    y = tf.placeholder(tf.float32, name='trueY')  # Float value size = [1]

    print('inputzz ', x, y)
    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())

    predictY, logits = Model_FCL(x)  # Predict Signal, size = [1, 222]
    maxPredict = tf.argmax(predictY, 1, name='maxPredict')  # Find max index of Predict Signal

    tensorFreq = tf.convert_to_tensor(basicFreq, tf.float32)
    newY = tf.gather(tensorFreq, maxPredict) * 60  # Find the value that corresponds to the Freq array index
    loss = tf.abs(y - tf.Variable(newY))  # Calculate absolute (true Y - predict Y)
    opt = tf.train.AdamOptimizer(learning_rate=0.0001)
    trainStep = opt.minimize(loss)

    print('Graph ', myGraph.get_operations())
    print('TrainVariable ', tf.trainable_variables())

with tf.Session(graph=myGraph) as sess:
    sess.run(tf.global_variables_initializer())

    dataFolder = './'
    writer = tf.summary.FileWriter('./logMyGraph', sess.graph)
    startTime = datetime.datetime.now()

    numberSummary = 0
    accuracyTotalTrain = []
    for trainEpoch in range(1, 25 + 1):
        arrayTrain = []

        dataPPG, dataLabel, dataSize = loadMatlabData(dataFolder + "TestValues.mat")

        for i in range(dataSize):
            batchSignal, valueTrue = getNextSignal(i, dataPPG, dataLabel, 222, 222)
            _, lossPrint, valuePredict = sess.run([trainStep, loss, newY], feed_dict={x: batchSignal, y: valueTrue})
            print('Train ', i, ' ', valueTrue, ' - ', valuePredict, ' Loss ', lossPrint)

            arrayTrain.append(lossPrint)
            writer.add_summary(tf.Summary(value=[tf.Summary.Value(tag='Loss', simple_value=float(lossPrint))]),
                               numberSummary)
            numberSummary += 1
        accuracyTotalTrain.append(np.mean(arrayTrain))
    print('Final Train : ', accuracyTotalTrain)

    sess.close()
The variable batchSignal seems to have the wrong type or shape. It must be a numpy array of exactly shape [1, 222]. If you want to use batches of examples of size n × 222, the placeholder x should have shape [None, 222] and the placeholder y shape [None].
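A minimal sketch of the shape check described above (signal stands in for one row of dataPPG; the values are made up):

import numpy as np

signal = np.arange(222)                              # stand-in for one row of dataPPG
batch = np.asarray(signal, np.float32).reshape(1, 222)  # the shape the placeholder expects
print(batch.shape, batch.dtype)                      # (1, 222) float32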
By the way, consider using tf.layers.dense instead of explicitly initializing the variables and implementing the layers yourself.
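A sketch of that suggested rewrite, keeping the question's layer sizes (the function name model_fcl_dense is illustrative, and this is untested against the original data):

import tensorflow as tf

def model_fcl_dense(inputX):
    # Same architecture as Model_FCL; tf.layers.dense creates and tracks the variables
    h1 = tf.layers.dense(inputX, 1024, activation=tf.nn.relu)
    h2 = tf.layers.dense(h1, 1024, activation=tf.nn.relu)
    logits = tf.layers.dense(h2, 222)                # output layer, no activation
    return tf.nn.softmax(logits), logits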