TensorFlow: InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got empty file

Cwa*_*ang 5 python python-3.x tensorflow

I am a beginner. While working through TensorFlow's Programmer's Guide, I tried to define a dataset_input_fn function to use with an Estimator, and I ran into a strange error:

INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_model_dir': '/model', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_service': None, '_cluster_spec': <...>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2018-03-12 10:22:14.699465: I C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
2018-03-12 10:22:15.913858: W C:\tf_jenkins\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1202] OP_REQUIRES failed at iterator_ops.cc:870 : Invalid argument: Expected image (JPEG, PNG, or GIF), got empty file
	 [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=0, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]

Traceback (most recent call last):
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1361, in _do_call
    return fn(*args)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1340, in _run_fn
    target_list, status, run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got empty file
	 [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=0, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]
	 [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,28,28,1], [?]], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\pydev\pydev_run_in_console.py", line 53, in run_file
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "F:\Program Files\JetBrains\PyCharm 2017.3.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents + "\n", file, 'exec'), glob, loc)
  File "E:/Learning_process/semester2018_spring/deep_learning/meituan/MNIST/demo_cnn_mnist_meituan.py", line 201, in <module>
    tf.app.run(main)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "E:/Learning_process/semester2018_spring/deep_learning/meituan/MNIST/demo_cnn_mnist_meituan.py", line 195, in main
    steps=50)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 352, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 891, in _train_model
    _, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 546, in run
    run_metadata=run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1022, in run
    run_metadata=run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1113, in run
    raise six.reraise(*original_exc_info)
  File "F:\Anaconda3\lib\site-packages\six.py", line 693, in reraise
    raise value
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1098, in run
    return self._sess.run(*args, **kwargs)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1170, in run
    run_metadata=run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\training\monitored_session.py", line 950, in run
    return self._sess.run(*args, **kwargs)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
    run_metadata_ptr)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1137, in _run
    feed_dict_tensor, options, run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1355, in _do_run
    options, run_metadata)
  File "F:\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1374, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Expected image (JPEG, PNG, or GIF), got empty file
	 [[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=0, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false]]]
	 [[Node: IteratorGetNext = IteratorGetNext[output_shapes=[[?,28,28,1], [?]], output_types=[DT_FLOAT, DT_INT32], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

The code is as follows:

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Imports
import numpy as np
import os
import tensorflow as tf
import argparse

parser = argparse.ArgumentParser()
# parser.add_argument("--batch_size", default=100, type=int, help='batch_size')
# parser.add_argument("--train_steps", default=1000, type=int, help="train_steps")
parser.add_argument("--model_dir", default='/model', type=str, help='model_dir')
parser.add_argument("--data_dir", default='', type=str, help="data_dir")


def cnn_model(features, labels, mode):
    """

    :param features:
    :param labels:
    :param mode:
    :return:
    """

    # input
    input_layer = tf.reshape(features['image'], [-1, 28, 28, 1])

    conv1 = tf.layers.conv2d(inputs=input_layer,
                             filters=32,
                             kernel_size=[5, 5],
                             padding='same',
                             activation=tf.nn.relu)

    pool1 = tf.layers.max_pooling2d(inputs=conv1,
                                    pool_size=[2, 2],
                                    strides=2)

    conv2 = tf.layers.conv2d(inputs=pool1,
                             filters=64,
                             kernel_size=[5, 5],
                             padding='same',
                             activation=tf.nn.relu)

    pool2 = tf.layers.max_pooling2d(inputs=conv2,
                                    pool_size=[2, 2],
                                    strides=2)

    pool_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])

    dense = tf.layers.dense(inputs=pool_flat,
                            units=1024,
                            activation=tf.nn.relu)

    dropout = tf.layers.dropout(inputs=dense,
                                rate=0.4,
                                training=mode == tf.estimator.ModeKeys.TRAIN)

    logits = tf.layers.dense(inputs=dropout,
                             units=10,
                             activation=None)

    predictions = {
        'class_ids': tf.argmax(logits, 1),
        'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
    }
    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode,
                                          predictions=predictions)

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

    if mode == tf.estimator.ModeKeys.EVAL:
        eval_metric_ops = {
            'accuracy': tf.metrics.accuracy(labels=labels,
                                            predictions=tf.argmax(logits, 1))
        }
        return tf.estimator.EstimatorSpec(mode,
                                          loss=loss,
                                          eval_metric_ops=eval_metric_ops)

    # train
    assert mode == tf.estimator.ModeKeys.TRAIN
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss=loss,
                                  global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode,
                                      loss=loss,
                                      train_op=train_op)


def dataset_input_fn(filenames):
    """

    :param filenames: tfrecord file's path
    :return:
    """
    # filenames = ['train.tfrecords', 'test.tfrecords']
    dataset = tf.data.TFRecordDataset(filenames)

    def _parse(record):
        features = {"image": tf.FixedLenFeature((), tf.string, default_value=""),
                    "label": tf.FixedLenFeature((), tf.int64, default_value=0)}
        parsed = tf.parse_single_example(record, features)

        image = tf.image.decode_jpeg(parsed["image"])
        image = tf.cast(image, tf.float32)
        # image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.reshape(image, [28, 28, 1])
        # image = tf.cast(image, tf.float32)
        # image = tf.decode_raw(features['image'], tf.float64)
        label = tf.cast(parsed['label'], tf.int32)
        return {'image': image}, label

    dataset = dataset.map(_parse)
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.batch(100)
    dataset = dataset.repeat(1)

    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    # features = tf.cast(features, tf.float32)
    return features, labels


def main(argv):
    """

    :param argv:
    :return:
    """
    args = parser.parse_args(argv[1:])
    train_path = ['train.tfrecords']
    test_path = ['test.tfrecords']

    print("\ndata has been loaded as 'train_x' and 'train_y'\n")

    classifier = tf.estimator.Estimator(model_fn=cnn_model,
                                        model_dir=args.model_dir)

    classifier.train(
        input_fn=lambda: dataset_input_fn(train_path),
        steps=50)

    print("\ntraining process is done\n")


if __name__ == '__main__':
    tf.app.run(main)

iga*_*iga 2

The error seems to be that some of your examples do not contain an actual image.

Basically, when you call image = tf.image.decode_jpeg(parsed["image"]), parsed["image"] is an empty tensor.
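
You can verify this by walking the TFRecord file directly. Below is a minimal diagnostic sketch, assuming TensorFlow 1.x and that each example stores its bytes under the "image" key, as in the question; the file name 'train.tfrecords' also just mirrors the question and may need adjusting. It reports examples whose image field is empty or does not start with a JPEG/PNG/GIF signature:

import tensorflow as tf

for i, record in enumerate(tf.python_io.tf_record_iterator('train.tfrecords')):
    example = tf.train.Example()
    example.ParseFromString(record)
    # The feature map returns a default (empty) value for missing keys,
    # so guard against an empty bytes_list as well.
    values = example.features.feature['image'].bytes_list.value
    image_bytes = values[0] if values else b''
    if not image_bytes:
        print('example %d: "image" feature is empty' % i)
    elif not image_bytes.startswith((b'\xff\xd8', b'\x89PNG', b'GIF8')):
        print('example %d: %d bytes, but not JPEG/PNG/GIF encoded' % (i, len(image_bytes)))

If the records turn out to hold raw pixel bytes (for example, MNIST pixels written out with NumPy) rather than an encoded image file, then decoding them with tf.decode_raw inside _parse, instead of tf.image.decode_jpeg, is the usual fix; the commented-out tf.decode_raw line in the question hints at this. A hypothetical variant, assuming the pixels were written as uint8 (the dtype must match whatever was used when the file was created):

image = tf.decode_raw(parsed["image"], tf.uint8)  # raw bytes, not an encoded JPEG
image = tf.cast(image, tf.float32)
image = tf.reshape(image, [28, 28, 1])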