Is there a way to use pretrained models that were trained on RGB images in the TensorFlow Object Detection API for single-channel grayscale (depth) images?
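A minimal preprocessing sketch of the usual workaround (an assumption, not something stated in the question): replicate the single depth channel three times so the input shape matches what the RGB-trained checkpoint expects. The helper name preprocess_depth_frame is illustrative.

import numpy as np

def preprocess_depth_frame(depth):
    """Turn an HxW depth map into an HxWx3 uint8 image for an RGB-trained detector."""
    depth = depth.astype(np.float32)
    rng = float(depth.max() - depth.min())
    depth = (depth - depth.min()) / (rng if rng > 0 else 1.0)   # normalize to [0, 1]
    rgb_like = np.repeat(depth[..., np.newaxis], 3, axis=-1)    # HxW -> HxWx3
    return (rgb_like * 255).astype(np.uint8)

The alternative, changing the first convolution to accept one channel, means the pretrained weights for that layer no longer apply directly.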
I am trying to read the source code of TensorFlow's non-maximum suppression method at this line. It is imported from the gen_image_ops file, but I cannot find that file anywhere in the TensorFlow source code.
Is there any source where I can find the code for this method?
object-detection non-maximum-suppression tensorflow object-detection-api
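For reference, gen_image_ops.py is a Python wrapper module generated at build time from the registered C++ ops, which is why it does not appear in the repository; the public wrapper tf.image.non_max_suppression calls into the same op. A minimal usage sketch (the box and score values are made up):

import tensorflow as tf

boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.1, 0.1, 0.9, 0.9],
                     [0.5, 0.5, 1.0, 1.0]])   # [y1, x1, y2, x2]
scores = tf.constant([0.9, 0.8, 0.3])

# Keep at most 2 boxes, suppressing any box whose IoU with a kept box exceeds 0.5.
selected = tf.image.non_max_suppression(boxes, scores, max_output_size=2, iou_threshold=0.5)
print(selected.numpy())   # indices of the surviving boxes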
I have been using the TensorFlow Object Detection API on my own dataset. While training, I would like to know how well the network is learning from the training set. So I want to run evaluation on both the training set and the evaluation set during training and get the accuracy (mAP) for each.
My config file:
model {
  faster_rcnn {
    num_classes: 50
    image_resizer {
      fixed_shape_resizer {
        height: 960
        width: 960
      }
    }
    number_of_stages: 3
    feature_extractor {
      type: 'faster_rcnn_resnet101'
      first_stage_features_stride: 8
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 8
        width_stride: 8
      }
    }
    first_stage_atrous_rate: 2
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.00999999977648
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.699999988079
    first_stage_max_proposals: …

python machine-learning object-detection tensorflow object-detection-api
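One hedged way to get a separate mAP curve for the training split (an assumption about intent, since the config is truncated): eval_input_reader is a repeated field in the pipeline config, so a second reader pointing at the training TFRecord can be added and evaluated alongside the normal one. All paths below are placeholders.

eval_input_reader {
  label_map_path: "path/to/label_map.pbtxt"      # placeholder
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "path/to/val.record"             # placeholder: validation split
  }
}
eval_input_reader {
  label_map_path: "path/to/label_map.pbtxt"      # placeholder
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "path/to/train.record"           # placeholder: training split, for mAP on training data
  }
}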
I am training a TensorFlow Object Detection API model on a custom dataset (a license plate dataset). My goal is to deploy the model to an edge device with TensorFlow Lite, so I cannot use any model from the R-CNN family, because R-CNN-family detection models cannot be converted to TensorFlow Lite (a limitation of the TensorFlow Object Detection API). I am therefore using the ssd_mobilenet_v2_coco model to train on the custom dataset. Below is a snippet of my config file:
model {
  ssd {
    num_classes: 1
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: …

object-detection deep-learning tensorflow tensorboard object-detection-api
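Since the stated goal is TensorFlow Lite deployment, here is a hedged conversion sketch, assuming a TF2 workflow where the trained SSD has already been exported as a TFLite-compatible SavedModel (for example with the Object Detection API's export_tflite_graph_tf2.py); the paths and file names are placeholders.

import tensorflow as tf

# Placeholder path to the exported SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model('exported_model/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # optional post-training quantization
tflite_model = converter.convert()

with open('ssd_mobilenet_v2_license_plate.tflite', 'wb') as f:   # placeholder file name
    f.write(tflite_model)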
I am trying to install the TensorFlow Object Detection API by following the steps in this link, which is the official installation documentation for TensorFlow 2.
git clone https://github.com/tensorflow/models.git
> everything is ok
cd models/research/
> everything is ok
protoc object_detection/protos/*.proto --python_out=.
> everything is ok
cp object_detection/packages/tf2/setup.py .
> everything is ok
python -m pip install --use-feature=2020-resolver .
> Usage:
> /opt/anaconda3/envs/ml/bin/python -m pip install [options] <requirement specifier> [package-> index-options] ...
> /opt/anaconda3/envs/ml/bin/python -m pip install [options] -r <requirements file> [package-index-options] ...
> /opt/anaconda3/envs/ml/bin/python -m pip install [options] [-e] <vcs project url> ...
> /opt/anaconda3/envs/ml/bin/python …

I am trying to compile the proto files with the following command:
protoc/bin/protoc models/research/object_detection/protos/*.proto --python_out=.
But I get this output in cmd:
object_detection/protos/flexible_grid_anchor_generator.proto: File not found.
object_detection/protos/grid_anchor_generator.proto: File not found.
object_detection/protos/multiscale_anchor_generator.proto: File not found.
object_detection/protos/ssd_anchor_generator.proto: File not found.
models/research/object_detection/protos/anchor_generator.proto:5:1: Import "object_detection/protos/flexible_grid_anchor_generator.proto" was not found or had errors.
models/research/object_detection/protos/anchor_generator.proto:6:1: Import "object_detection/protos/grid_anchor_generator.proto" was not found or had errors.
models/research/object_detection/protos/anchor_generator.proto:7:1: Import "object_detection/protos/multiscale_anchor_generator.proto" was not found or had errors.
models/research/object_detection/protos/anchor_generator.proto:8:1: Import "object_detection/protos/ssd_anchor_generator.proto" was not found or had errors.
models/research/object_detection/protos/anchor_generator.proto:14:5: "GridAnchorGenerator" is not defined.
models/research/object_detection/protos/anchor_generator.proto:15:5: "SsdAnchorGenerator" is not defined.
models/research/object_detection/protos/anchor_generator.proto:16:5: "MultiscaleAnchorGenerator" is not defined.
models/research/object_detection/protos/anchor_generator.proto:17:5: "FlexibleGridAnchorGenerator" is not …

I have about 50,000 images and annotation files for training a YOLOv5 object detection model. I had no problem training the model on another computer using only the CPU, but that takes far too long, so I need GPU training. My problem is that when I try to train with the GPU, I keep getting this error:
OSError: [WinError 1455] The paging file is too small for this operation to complete
This is the command I am running:
train.py --img 640 --batch 4 --epochs 100 --data myyaml.yaml --weights yolov5l.pt
CUDA and PyTorch are installed successfully and available. The following command installed without errors:
pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
I found online that other people have had similar problems and fixed them by changing num_workers = 8 to num_workers = 1. When I tried this, training started and seemed to get past the point where the paging file is too small error appears, but it crashed after a few hours. I also increased the available virtual memory on the GPU following this video (https://www.youtube.com/watch?v=Oh6dga-Oy10), but that did not work either. I think it is a memory problem, because sometimes when it crashes I get a low-memory warning from the computer.
Any help would be greatly appreciated.
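A hedged mitigation sketch (assuming a recent YOLOv5 train.py, which exposes a --workers flag for the dataloader): WinError 1455 typically appears because every dataloader worker process maps the large CUDA and PyTorch DLLs, exhausting the Windows commit limit, so lowering the worker count, and if needed the batch size, often helps in addition to enlarging the page file.

python train.py --img 640 --batch 4 --epochs 100 --data myyaml.yaml --weights yolov5l.pt --workers 2

Enlarging the Windows page file itself (the virtual-memory setting shown in the linked video) addresses the same commit limit without touching the training command.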
I am trying to train my own object detection model with PyTorch, but I keep running into this error. I tried changing the torch version, but that did not help.
My packages: torchvision-0.11.1 and torch-1.10.0
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-10-9e52b782b448> in <module>()
4 for epoch in range(num_epochs):
5 # training for one epoch
----> 6 train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
7 # update the learning rate
8 lr_scheduler.step()
/content/engine.py in train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq)
21 warmup_iters = min(1000, len(data_loader) - 1)
22
---> 23 lr_scheduler = torch.optim.lr_scheduler.LinearLR(
24 optimizer, start_factor=warmup_factor, total_iters=warmup_iters
25 )
AttributeError: module 'torch.optim.lr_scheduler' has no attribute 'LinearLR'
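A hedged fallback sketch, assuming the runtime is actually importing an older torch build than 1.10 (worth checking with torch.__version__, since LinearLR first appeared in 1.10): the same linear warmup can be built on LambdaLR, which exists in earlier releases. The helper name linear_warmup_scheduler is illustrative.

import torch

print(torch.__version__)   # confirm which build the notebook actually imports

def linear_warmup_scheduler(optimizer, warmup_factor, warmup_iters):
    """LambdaLR-based stand-in for the LinearLR warmup used in engine.py."""
    def f(step):
        if step >= warmup_iters:
            return 1.0
        alpha = float(step) / warmup_iters
        return warmup_factor * (1 - alpha) + alpha
    return torch.optim.lr_scheduler.LambdaLR(optimizer, f)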
I followed this tutorial for object detection: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html
and its GitHub repository, which contains the following train_one_epoch and evaluate functions:
https://github.com/pytorch/vision/blob/main/references/detection/engine.py
However, I would like to compute the loss during validation as well. I implemented the following for the evaluation loss; essentially, to obtain the losses, model.train() needs to be active:
import torch
import utils  # utils.py from pytorch/vision references/detection

@torch.no_grad()
def evaluate_loss(model, data_loader, device):
    val_loss = 0
    model.train()
    for images, targets in data_loader:
        images = list(image.to(device) for image in images)
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)
        losses = sum(loss for loss in loss_dict.values())
        # reduce losses over all GPUs for logging purposes
        loss_dict_reduced = utils.reduce_dict(loss_dict)
        losses_reduced = sum(loss for loss in loss_dict_reduced.values())
        val_loss += losses_reduced …

python object-detection computer-vision deep-learning pytorch
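A hedged usage sketch (assumptions: the truncated function ends by returning the accumulated val_loss, and the validation loader is named val_data_loader; neither is shown in the question):

# Once per epoch, alongside train_one_epoch(...) and evaluate(...):
val_loss = evaluate_loss(model, val_data_loader, device)
print(f"epoch {epoch}: validation loss {val_loss / len(val_data_loader):.4f}")   # mean loss per batch

One caveat: with model.train() the BatchNorm layers keep updating their running statistics even under torch.no_grad(); this matters less for the torchvision detection models, whose backbones use frozen batch norm, but it is worth keeping in mind.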
How do I load a custom YOLOv7 model?
This is how I know to load a YOLOv5 model:
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/runs/train/exp15/weights/last.pt', force_reload=True)
I have seen videos online that suggest using this:
!python detect.py --weights runs/train/yolov7x-custom/weights/best.pt --conf 0.5 --img-size 640 --source final_test_v1.mp4
But I want to load it like a normal model and get the bounding-box coordinates of the objects it detects.
This is how I did it with YOLOv5:
from models.experimental import attempt_load
yolov5_weight_file = r'weights/rider_helmet_number_medium.pt' # ... may need full path
model = attempt_load(yolov5_weight_file, map_location=device)
def object_detection(frame):
    img = torch.from_numpy(frame)
    img = img.permute(2, 0, 1).float().to(device)  # convert to required shape based on index
    img /= 255.0
    if img.ndimension() == 3:
        img = img.unsqueeze(0)
    pred = model(img, augment=False)[0]
    pred = …
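A hedged sketch for YOLOv7, assuming the script runs from a checkout of the WongKinYiu/yolov7 repository, which mirrors the YOLOv5 layout and ships the same models.experimental.attempt_load helper; the weight path is the one from the detect.py command above and may need adjusting:

import torch
from models.experimental import attempt_load   # from the yolov7 repository checkout

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = attempt_load('runs/train/yolov7x-custom/weights/best.pt', map_location=device)
model.eval()

img = torch.zeros(1, 3, 640, 640, device=device)   # dummy input; replace with a preprocessed frame
with torch.no_grad():
    pred = model(img, augment=False)[0]             # raw predictions, same layout as in YOLOv5

If the repository's hubconf.py is present, torch.hub.load('WongKinYiu/yolov7', 'custom', 'runs/train/yolov7x-custom/weights/best.pt') is the closer analogue of the YOLOv5 call in the question.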
python ×7
tensorflow ×6
pytorch ×2
yolov5 ×2
depth ×1
gpu ×1
protoc ×1
tensorboard ×1
torch ×1
torchvision ×1
yolo ×1
yolov4 ×1