I wrote a Python script to detect corrupted images and count them. The problem with my script is that it reports every image as bad instead of only the corrupted ones. How do I solve this? I referred to:
How do I check if a file is a valid image file? for my code.
My code:
import os
from PIL import Image

folder = '/Users/ajinkyabobade/Desktop/2'
count = 0
for filename in os.listdir(folder):
    if filename.endswith('.JPG'):
        try:
            # os.path.join supplies the '/' between folder and filename;
            # the bare string concatenation made every open() fail, so
            # every image was reported as bad.
            img = Image.open(os.path.join(folder, filename))
            img.verify()
        except (IOError, SyntaxError) as e:
            print('Bad file : ' + filename)
            count = count + 1
print(count)
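A minimal sketch of a more thorough checker (the helper name `find_bad_images` is mine, not from the original): `verify()` only inspects the file header, so reopening the file and calling `load()` as well also catches images that are truncated mid-stream.

```python
import os
from PIL import Image


def find_bad_images(folder):
    """Return filenames that fail either header verification or a full decode."""
    bad = []
    for filename in os.listdir(folder):
        if not filename.lower().endswith(('.jpg', '.jpeg')):
            continue
        path = os.path.join(folder, filename)
        try:
            with Image.open(path) as img:
                img.verify()   # checks the header only
            with Image.open(path) as img:
                img.load()     # forces a full decode; catches truncated files
        except (IOError, SyntaxError):
            bad.append(filename)
    return bad
```

Note that a file must be reopened after `verify()`, since Pillow leaves the file object in an unusable state once verification has run.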
I am referring to Google's TensorFlow Object Detection API. I have successfully trained and tested on my objects. My problem is that after testing I get output images with boxes drawn around the detected objects; how do I get the CSV coordinates of these boxes? The test code can be found at (https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb).
Looking at the helper code, it loads the image into a numpy array:
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
In the detection step, it takes this image array and produces the output boxes, as follows:
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        # Definite input and output Tensors for detection_graph
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        # Each box represents a part of the image where a particular object was detected.
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the level of confidence for each of the objects.
        # The score is shown on the result image, together with the class label.
        detection_scores = detection_graph.get_tensor_by_name('detection_scores:0') …
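The API returns each box in normalized `[ymin, xmin, ymax, xmax]` form, so the coordinates have to be scaled by the image size before writing them out. A hedged sketch (the helper name `boxes_to_csv` and the column layout are my choices, not part of the API):

```python
import csv


def boxes_to_csv(boxes, scores, im_width, im_height, csv_path, min_score=0.5):
    """Write pixel-coordinate boxes, one row per detection above min_score.

    `boxes` is expected in the API's normalized [ymin, xmin, ymax, xmax] form.
    """
    with open(csv_path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['xmin', 'ymin', 'xmax', 'ymax', 'score'])
        for box, score in zip(boxes, scores):
            if score < min_score:
                continue
            ymin, xmin, ymax, xmax = box
            writer.writerow([xmin * im_width, ymin * im_height,
                             xmax * im_width, ymax * im_height, float(score)])
```

The `boxes` and `scores` arguments would come from `sess.run([detection_boxes, detection_scores], ...)`, squeezed to 2-D and 1-D respectively.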
I have the following faster_rcnn_resnet101_coco.config (attached here). In this config file, I replaced the momentum optimizer with the Adam optimizer, as follows:
train_config: {
  batch_size: 1
  optimizer {
    # momentum_optimizer: {
    adam_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.00001
          schedule {
            step: 4500
            learning_rate: .00001
          }
          schedule {
            step: 10000
            learning_rate: .000001
          }
        }
      }
      # momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "faster_rcnn_resnet101_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
I referred to TensorFlow object detection: using Adam instead of RMSProp for this change. My intent is to configure the faster_rcnn_resnet101.config file (attached here) to match that file:
My goal is for my .config file to have all the parameters mentioned in the .yaml file. So far I have only managed this for one parameter, the learning rate. How can I integrate parameters such as rpn_batch_size, stride, etc. into the config file?
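A sketch of where such parameters might go in the pipeline config. I am assuming the field names from the API's Faster R-CNN proto (`first_stage_minibatch_size` for the RPN batch size, `first_stage_features_stride` for the feature stride, `second_stage_batch_size` for the box classifier); check them against `faster_rcnn.proto` in the object_detection/protos directory before use:

```
model {
  faster_rcnn {
    first_stage_features_stride: 16    # feature-map stride
    first_stage_minibatch_size: 256    # RPN minibatch size
    first_stage_max_proposals: 300
    second_stage_batch_size: 64        # box-classifier batch size
  }
}
```

These fields sit under `model { faster_rcnn { ... } }`, not under `train_config`, which is why adding them next to the optimizer block has no effect.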
I have an image similar to:
I am referring to: How to fill in the gaps of letters after Canny edge detection
I want to paint over the black pixels in this image. The solution suggested at the URL above is to first find all the black pixels using
import matplotlib.pyplot as pp
import numpy as np
image = pp.imread(r'/home/cris/tmp/Zuv3p.jpg')
bin = np.all(image<100, axis=2)
My question is how I can draw these black pixels (the data stored in bin) onto an image while ignoring all the other color channels.
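One way to do this, sketched with a helper of my own (`paint_mask` is not from the linked answer): the boolean mask from `np.all(image < 100, axis=2)` can be used directly as a fancy index to paint those pixels onto a blank canvas, without touching the channels individually.

```python
import numpy as np


def paint_mask(image, mask, color=(0, 0, 0)):
    """Return a white canvas with `color` painted wherever `mask` is True."""
    canvas = np.full_like(image, 255)  # all-white image of the same shape
    canvas[mask] = color               # boolean indexing hits all 3 channels at once
    return canvas
```

To draw onto the original instead of a fresh canvas, replace `np.full_like(image, 255)` with `image.copy()`.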
I have the following code. I am trying to get those coordinates using a ROI, but I am not sure how to get them.
import cv2
import numpy as np
large = cv2.imread('1.jpg')
small = cv2.cvtColor(large, cv2.COLOR_BGR2GRAY)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
grad = cv2.morphologyEx(small, cv2.MORPH_GRADIENT, kernel)
_, bw = cv2.threshold(grad, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
mask = np.zeros(bw.shape, dtype=np.uint8)
for idx in range(len(contours)):
    x, y, w, h = cv2.boundingRect(contours[idx])
    mask[y:y+h, x:x+w] = 0
    cv2.drawContours(mask, contours, idx, (255, 255, 255), -1)
    r = float(cv2.countNonZero(mask[y:y+h, …

I have the following input image:
My goal is to draw contours around the red regions. For this, I have the following code:
import cv2
# Read image
src = cv2.imread("images.jpg", cv2.IMREAD_GRAYSCALE)
# Set threshold and maxValue
thresh = 150
maxValue = 200
# Basic threshold example
th, dst = cv2.threshold(src, thresh, maxValue, cv2.THRESH_BINARY);
# Find Contours
countours,hierarchy=cv2.findContours(dst,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
for c in countours:
    rect = cv2.boundingRect(c)
    if rect[2] < 10 or rect[3] < 10: continue
    x, y, w, h = rect
    cv2.rectangle(src, (x, y), (x+w, y+h), (255, 255, 255), 2)
# Draw Contour
#cv2.drawContours(dst,countours,-1,(255,255,255),3)
cv2.imshow("Contour",src)
cv2.imwrite("contour.jpg",src)
cv2.waitKey(0)
I get the following output:
My goal is to remove all rectangles that fall inside a bigger rectangle and to connect the bigger rectangles, for example:
How do I do that?
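Both steps can be done on the `(x, y, w, h)` tuples from `cv2.boundingRect` with plain Python, no OpenCV required. A sketch under my own helper names (`filter_nested`, `merge_overlapping`): first drop every rectangle fully contained in another, then greedily merge any overlapping pair into its bounding box until nothing overlaps.

```python
def contains(outer, inner):
    """True if `inner` lies entirely within `outer`; both are (x, y, w, h)."""
    ox, oy, ow, oh = outer
    ix, iy, iw, ih = inner
    return ox <= ix and oy <= iy and ix + iw <= ox + ow and iy + ih <= oy + oh


def filter_nested(rects):
    """Drop every rectangle fully contained in another one."""
    return [r for r in rects
            if not any(r != o and contains(o, r) for o in rects)]


def merge_overlapping(rects):
    """Repeatedly merge intersecting rectangles into their common bounding box."""
    rects = list(rects)
    changed = True
    while changed:
        changed = False
        out = []
        while rects:
            x, y, w, h = rects.pop()
            i = 0
            while i < len(rects):
                x2, y2, w2, h2 = rects[i]
                # axis-aligned overlap test
                if x < x2 + w2 and x2 < x + w and y < y2 + h2 and y2 < y + h:
                    nx, ny = min(x, x2), min(y, y2)
                    w = max(x + w, x2 + w2) - nx
                    h = max(y + h, y2 + h2) - ny
                    x, y = nx, ny
                    rects.pop(i)
                    changed = True
                else:
                    i += 1
            out.append((x, y, w, h))
        rects = out
    return rects
```

Running `merge_overlapping(filter_nested(rects))` on the boxes collected in the contour loop should leave only the connected outer rectangles.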
I am referring to the TensorFlow Object Detection API (https://github.com/tensorflow/models/tree/master/research/object_detection). Here is the IPython notebook of the detection code I am using (https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb). In this file, the output is set so that boxes are drawn when the probability is greater than 50%. Detection code:
with detection_graph.as_default():
    with tf.Session(graph=detection_graph) as sess:
        # Definite input and output Tensors for detection_graph
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        # Each box represents a part of the image where a particular object was detected.
        detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
        # Each score represents the level of confidence for each of the objects.
        # The score is shown on the result image, together with the class label.
        detection_scores …
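The 50% cutoff comes from the `min_score_thresh` argument of the visualization step in the tutorial, not from the detection itself; the graph returns every box with its score. If I understand the notebook correctly, the same filtering can be done directly on the arrays. A sketch (the helper name `filter_detections` is mine):

```python
import numpy as np


def filter_detections(boxes, scores, classes, min_score_thresh=0.5):
    """Keep only the detections whose score clears the threshold.

    The inputs are the (1, N, ...) arrays returned by sess.run.
    """
    keep = np.squeeze(scores) >= min_score_thresh
    return (np.squeeze(boxes)[keep],
            np.squeeze(scores)[keep],
            np.squeeze(classes)[keep])
```

Raising or lowering `min_score_thresh` here (or in the call to the API's box-drawing utility) changes which boxes appear in the output.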