I am using Google Colaboratory to train on my dataset. I uploaded the dataset to Google Drive and access it from Google Colab, but running the train.py script produces the following error. More precisely, I run:
!python3 /content/drive/tensorflow1/models/research/object_detection/train.py --logtostderr --train_dir=/content/drive/tensorflow1/models/research/object_detection/training/ --pipeline_config_path=/content/drive/tensorflow1/models/research/object_detection/training/faster_rcnn_inception_v2_pets.config
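For completeness, the Drive mount itself was done in the usual way. That cell is not shown above, so this is only a minimal sketch of the standard setup, assuming the stock google.colab API:

from google.colab import drive

# Mount Google Drive at /content/drive; after authenticating, files
# normally appear under /content/drive/My Drive/ (the exact dataset
# path used in the command above depends on how the files were placed).
drive.mount('/content/drive')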
I get these errors:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/usr/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/usr/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
During handling of the above exception, …
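The ImportError shows that this TensorFlow build is looking for libcublas.so.9.0, i.e. the CUDA 9.0 cuBLAS library, so the installed TensorFlow was compiled against CUDA 9.0 while the Colab runtime apparently ships a different CUDA toolkit. As a quick sanity check (standard commands, not taken from the logs above), the versions the runtime actually provides can be printed in a notebook cell:

import tensorflow as tf

# Which TensorFlow build the runtime currently has installed:
print(tf.__version__)

# Installed CUDA toolkits show up as /usr/local/cuda-* on the Colab VM:
!ls /usr/local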