Detectron2 Trainer on a custom keypoint dataset: ValueError stops it before training on the data

Problem description

I am currently working on a project with a custom keypoint dataset. Before processing the dataset any further, I want to check that the dataset and the training pipeline are set up correctly.

I ran into the error below, and although I searched Google and StackOverflow for similar issues, I found it hard to pin down the problem just by browsing.

I am only using 5 images out of the full dataset to check that everything works. Each instance has 18 keypoints annotated in COCO format.
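For reference, a COCO-style keypoint annotation stores a flat "keypoints" list of num_keypoints × 3 values (x, y, visibility) per instance, plus a "num_keypoints" count. A minimal sketch of the shape one of my annotations should have (the values here are illustrative, not from my actual JSON):

# illustrative shape of a single COCO keypoint annotation with 18 keypoints
annotation = {
    "id": 1,
    "image_id": 1,
    "category_id": 1,                    # single category: 'points'
    "bbox": [10.0, 20.0, 100.0, 150.0],  # x, y, width, height
    "num_keypoints": 18,
    # 18 keypoints -> 18 * 3 = 54 numbers: x1, y1, v1, x2, y2, v2, ...
    # visibility v: 0 = not labeled, 1 = labeled but hidden, 2 = labeled and visible
    "keypoints": [12.0, 34.0, 2] * 18,
}
assert len(annotation["keypoints"]) == 18 * 3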

from detectron2.data import MetadataCatalog, DatasetCatalog
from detectron2.data.datasets import register_coco_instances

# register the same COCO JSON for train and val (only 5 images, just a pipeline check)
register_coco_instances("train_t", {}, "/content/drive/MyDrive/thesis/test/point_test-2.json", "/content/drive/MyDrive/thesis/test/train")
register_coco_instances("val_t", {}, "/content/drive/MyDrive/thesis/test/point_test-2.json", "/content/drive/MyDrive/thesis/test/val")

sample_metadata = MetadataCatalog.get("train_t")
dataset_dicts = DatasetCatalog.get("train_t")

# 18 keypoints, named after their indices 0..17
keypoint_names = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
                  '10', '11', '12', '13', '14', '15', '16', '17']

# keypoint pairs that swap under horizontal-flip augmentation
keypoint_flip_map = [('0', '1'), ('2', '15'), ('3', '4'), ('5', '6'), ('7', '8'), ('9', '10'),
                     ('11', '12'), ('13', '14'), ('16', '17')]

MetadataCatalog.get("train_t").thing_classes = ['points']
MetadataCatalog.get("train_t").thing_dataset_id_to_contiguous_id = {1: 0}
MetadataCatalog.get("train_t").keypoint_names = keypoint_names
MetadataCatalog.get("train_t").keypoint_flip_map = keypoint_flip_map
MetadataCatalog.get("train_t").evaluator_type = "coco"
print(MetadataCatalog.get("train_t").thing_classes)

Here is the error I get:

---------------------------------------------------------------------------

ValueError                                Traceback (most recent call last)

<ipython-input-13-cb06f7a198f0> in <module>()
     29 trainer = DefaultTrainer(cfg)    #CocoTrainer(cfg)
     30 trainer.resume_or_load(resume=False)
---> 31 trainer.train()


/usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py in train(self)
    482             OrderedDict of results, if evaluation is enabled. Otherwise None.
    483         """
--> 484         super().train(self.start_iter, self.max_iter)
    485         if len(self.cfg.TEST.EXPECTED_RESULTS) and comm.is_main_process():
    486             assert hasattr(

/usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py in train(self, start_iter, max_iter)
    147                 for self.iter in range(start_iter, max_iter):
    148                     self.before_step()
--> 149                     self.run_step()
    150                     self.after_step()
    151                 # self.iter == max_iter can be used by `after_train` to

/usr/local/lib/python3.7/dist-packages/detectron2/engine/defaults.py in run_step(self)
    492     def run_step(self):
    493         self._trainer.iter = self.iter
--> 494         self._trainer.run_step()
    495 
    496     def state_dict(self):

/usr/local/lib/python3.7/dist-packages/detectron2/engine/train_loop.py in run_step(self)
    265         If you want to do something with the data, you can wrap the dataloader.
    266         """
--> 267         data = next(self._data_loader_iter)
    268         data_time = time.perf_counter() - start
    269 

/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py in __iter__(self)
    232 
    233     def __iter__(self):
--> 234         for d in self.dataset:
    235             w, h = d["width"], d["height"]
    236             bucket_id = 0 if w > h else 1

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    519             if self._sampler_iter is None:
    520                 self._reset()
--> 521             data = self._next_data()
    522             self._num_yielded += 1
    523             if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
   1201             else:
   1202                 del self._task_info[idx]
-> 1203                 return self._process_data(data)
   1204 
   1205     def _try_put_index(self):

/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py in _process_data(self, data)
   1227         self._try_put_index()
   1228         if isinstance(data, ExceptionWrapper):
-> 1229             data.reraise()
   1230         return data
   1231 

/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
    432             # instantiate since we don't know how to
    433             raise RuntimeError(msg) from None
--> 434         raise exception
    435 
    436 

ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
    data.append(next(self.dataset_iter))
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py", line 201, in __iter__
    yield self.dataset[idx]
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/common.py", line 90, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "/usr/local/lib/python3.7/dist-packages/detectron2/utils/serialize.py", line 26, in __call__
    return self._obj(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 189, in __call__
    self._transform_annotations(dataset_dict, transforms, image_shape)
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 128, in _transform_annotations
    for obj in dataset_dict.pop("annotations")
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/dataset_mapper.py", line 129, in <listcomp>
    if obj.get("iscrowd", 0) == 0
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/detection_utils.py", line 314, in transform_instance_annotations
    annotation["keypoints"], transforms, image_size, keypoint_hflip_indices
  File "/usr/local/lib/python3.7/dist-packages/detectron2/data/detection_utils.py", line 360, in transform_keypoint_annotations
    "contains {} points!".format(len(keypoints), len(keypoint_hflip_indices))
ValueError: Keypoint data has 1 points, but metadata contains 18 points!
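The final ValueError means at least one annotation carries only 1 × 3 keypoint values while the metadata declares 18 keypoints. A quick sketch to locate the offending annotations directly in the COCO JSON (path taken from the registration code above):

import json

with open("/content/drive/MyDrive/thesis/test/point_test-2.json") as f:
    coco = json.load(f)

expected = 18 * 3  # 18 keypoints, (x, y, visibility) each
for ann in coco["annotations"]:
    kpts = ann.get("keypoints", [])
    if len(kpts) != expected:
        print(f"annotation {ann['id']} on image {ann['image_id']}: "
              f"{len(kpts) // 3} keypoints instead of 18")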

I checked other people's Colab notebooks and Git repos, but they don't seem to have any problem loading keypoints annotated under a single category, which is exactly what I did with my dataset.

If you have any suggestions for solving this problem I ran into during training, please feel free to share some of your knowledge here... thank you.

Tags: keypoint, detectron, dataset
1 Answer

Basically, the mistake I made here was:

I had put down 18 (because I assumed Detectron2 starts counting from 1) instead of 17 (the keypoint indices run from 0 to 17).

Lesson learned... count properly. The config lines involved are:

import numpy as np

# number of keypoints predicted by the keypoint head (18 annotated keypoints)
cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 18
# per-keypoint OKS sigmas used for COCO-style keypoint evaluation
cfg.TEST.KEYPOINT_OKS_SIGMAS = np.ones((18, 1), dtype=float).tolist()
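A cheap guard against this kind of off-by-one is to derive the count from the registered metadata instead of typing it twice. A small sketch, assuming the same cfg object as above:

import numpy as np
from detectron2.data import MetadataCatalog

# take the keypoint count from the metadata registered earlier (18 here)
num_kps = len(MetadataCatalog.get("train_t").keypoint_names)

cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = num_kps
cfg.TEST.KEYPOINT_OKS_SIGMAS = np.ones((num_kps, 1), dtype=float).tolist()

# every annotation must then carry num_kps * 3 keypoint values,
# which is exactly what the ValueError above was complaining about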