_audio_microfrontend_op.so not found

Nam*_*Kim 5 python tensorflow

My Python version is 3.7 (Windows 10, 64-bit, TensorFlow 2.0).

I implemented an eye-blink detection program with OpenCV. It runs normally when I start it from PyCharm (F5). I built an exe with PyInstaller, but running the generated exe produces this error:

........................
tensorflow.python.framework.errors_impl.NotFoundError: C:\Users\username\AppData\Local\Temp\_MEI401122\tensorflow\lite\experimental\microfrontend\python\ops\_audio_microfrontend_op.so not found

[40068] Failed to execute script test

I tried several approaches and solutions that I found, but none of them worked.

The path in the error message does not exist on my computer (there is no _MEI401122 folder).

On my computer, the _audio_microfrontend_op.so file is located at the path shown in the image below.

I am out of ideas. Please help me.

[screenshot: location of _audio_microfrontend_op.so inside the TensorFlow installation]

I am attaching my source code and the error output.

My project folder path is as follows.

Path: C:\Users\username\Downloads\eye_blink_detector-master

[screenshot: my error output]

[My source code]
# -*- coding: utf-8 -*-
import cv2, dlib
import numpy as np
from imutils import face_utils
from keras.models import load_model
from time import localtime, strftime
from datetime import datetime
import time
from tkinter import *
import tkinter.messagebox


root = Tk()

WELCOME_MSG = '''Welcome to this event.'''
WELCOME_DURATION = 2000

def welcome():
    top = tkinter.Toplevel()
    top.title('Welcome')
    Message(top, text="카운트", padx=20, pady=20).pack()  # "카운트" is Korean for "count"
    top.after(WELCOME_DURATION, top.destroy)

IMG_SIZE = (34, 26)

#root.mainloop()

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

model = load_model('models/2018_12_17_22_58_35.h5')
model.summary()

count = 0
count_eye_open = 0

f = open("d:/새파일.txt", 'a')  # "새파일" is Korean for "new file"


def crop_eye(img, eye_points):
  # crop a fixed-aspect-ratio patch around the given eye landmarks
  x1, y1 = np.amin(eye_points, axis=0)
  x2, y2 = np.amax(eye_points, axis=0)
  cx, cy = (x1 + x2) / 2, (y1 + y2) / 2

  w = (x2 - x1) * 1.2
  h = w * IMG_SIZE[1] / IMG_SIZE[0]

  margin_x, margin_y = w / 2, h / 2

  min_x, min_y = int(cx - margin_x), int(cy - margin_y)
  max_x, max_y = int(cx + margin_x), int(cy + margin_y)

  eye_rect = np.rint([min_x, min_y, max_x, max_y]).astype(int)

  eye_img = img[eye_rect[1]:eye_rect[3], eye_rect[0]:eye_rect[2]]

  return eye_img, eye_rect

# main

cap = cv2.VideoCapture(0)  # 'videos/2.mp4')

while cap.isOpened():
  ret, img_ori = cap.read()

  if not ret:
    break

  # window size
  img_ori = cv2.resize(img_ori, dsize=(0, 0), fx=1.0, fy=1.0)

  img = img_ori.copy()
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

  faces = detector(gray)

  dt = datetime.now()

  for face in faces:

    shapes = predictor(gray, face)
    shapes = face_utils.shape_to_np(shapes)

    eye_img_l, eye_rect_l = crop_eye(gray, eye_points=shapes[36:42])  # left-eye landmarks 36-41
    eye_img_r, eye_rect_r = crop_eye(gray, eye_points=shapes[42:48])  # right-eye landmarks 42-47

    eye_img_l = cv2.resize(eye_img_l, dsize=IMG_SIZE)
    eye_img_r = cv2.resize(eye_img_r, dsize=IMG_SIZE)
    eye_img_r = cv2.flip(eye_img_r, flipCode=1)

    cv2.imshow('l', eye_img_l)
    cv2.imshow('r', eye_img_r)

    eye_input_l = eye_img_l.copy().reshape((1, IMG_SIZE[1], IMG_SIZE[0], 1)).astype(np.float32) / 255.
    eye_input_r = eye_img_r.copy().reshape((1, IMG_SIZE[1], IMG_SIZE[0], 1)).astype(np.float32) / 255.

    pred_l = model.predict(eye_input_l)
    pred_r = model.predict(eye_input_r)

    # visualize
    state_l = '%.2f' if pred_l > 0.1 else '-%.1f'
    state_r = '%.2f' if pred_r > 0.1 else '-%.1f'

    state_l = state_l % pred_l
    state_r = state_r % pred_r

    # Blink Count
    if pred_l <= 0.1 and pred_r <= 0.1:
        count_eye_open += 1
        print("blinking, " + str(dt.strftime('%Y-%m-%d %H:%M:%S.%f')))

    cv2.rectangle(img, pt1=tuple(eye_rect_l[0:2]), pt2=tuple(eye_rect_l[2:4]), color=(255,255,255), thickness=2)
    cv2.rectangle(img, pt1=tuple(eye_rect_r[0:2]), pt2=tuple(eye_rect_r[2:4]), color=(255,255,255), thickness=2)

    cv2.putText(img, state_l, tuple(eye_rect_l[0:2]), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 2)
    cv2.putText(img, state_r, tuple(eye_rect_r[0:2]), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255,255,255), 2)

    cv2.putText(img, "eye blink: " + str(count_eye_open), (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))
    cv2.putText(img, "Time: " + str(strftime("%S", localtime())), (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255))

    if str(strftime("%S", localtime())) == "00":
        count += 1
        if count == 1 and count_eye_open > 0:
            print("Transfer Data...:" + str(count_eye_open))
            f.write("Transfer Data...:" + str(count_eye_open) + "\n")
            count_eye_open = 0
            count = 0
    else:
      count = 0

    time.sleep(0.12)

  cv2.imshow('result', img)
  if cv2.waitKey(1) == ord('q'):
    f.close()
    break

小智 4

I was able to fix this error by manually specifying in the spec file that the _audio_microfrontend_op.so file should be copied into the bundle.

# -*- mode: python ; coding: utf-8 -*-

import os
import importlib


a = Analysis(
    (...)
    # copy _audio_microfrontend_op.so out of the installed tensorflow
    # package, placing it at the same relative path inside the bundle
    datas=[(os.path.join(os.path.dirname(importlib.import_module('tensorflow').__file__),
                         "lite/experimental/microfrontend/python/ops/_audio_microfrontend_op.so"),
            "tensorflow/lite/experimental/microfrontend/python/ops/")],
    (...)
)
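Two notes that may help if the rebuilt exe still fails. First, a one-file PyInstaller executable unpacks its bundled files into a temporary _MEIxxxxxx folder at startup and deletes that folder when the process exits, which is why the _MEI401122 path from the error message no longer exists on disk; the error means the .so was never bundled in the first place. Second, you can confirm where the library actually lives in your installed tensorflow package before building. A minimal sketch (check_op.py is a hypothetical name; the relative path is taken from the error message above):

# check_op.py (hypothetical helper): print the expected location of
# _audio_microfrontend_op.so inside the installed tensorflow package
import importlib
import os

tf_dir = os.path.dirname(importlib.import_module('tensorflow').__file__)
op_path = os.path.join(tf_dir, 'lite', 'experimental', 'microfrontend',
                       'python', 'ops', '_audio_microfrontend_op.so')

print(op_path)
print('exists:', os.path.exists(op_path))

If the file exists there, rebuild from the edited spec file (e.g. pyinstaller test.spec, substituting whatever your spec file is actually named); running PyInstaller against the .py script again would regenerate the spec and discard the manual datas entry.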