
TrackEval CLEAR Metrics Explained




Data format

- Note: the train/test data downloaded from https://motchallenge.net/ is not in the (default) format required by TrackEval

MOT Challenge train/val/test

det.txt

3,-1,1433,512,60,100,0,-1,-1,-1
3,-1,1048,437,49,124,0,-1,-1,-1
3,-1,1087,552,78,177,0,-1,-1,-1
3,-1,1504,514,51,101,0,-1,-1,-1
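
The columns follow the `<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>` layout described below; detections carry -1 for the (unknown) id and for the world coordinates x, y, z.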

gt.txt

48,1,335,811,128,270,1,1,0.85675
49,1,335,809,130,272,1,1,0.85326
50,1,335,808,132,272,1,1,0.85579

img1/

f"{:06d}.jpg"

000001.jpg
000002.jpg
000003.jpg

seqinfo.ini

[Sequence]
name=MOT20-01
imDir=img1
frameRate=25
seqLength=429
imWidth=1920
imHeight=1080
imExt=.jpg
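
Since seqinfo.ini is a standard INI file, it can be read with Python's configparser; a minimal sketch (the path is hypothetical):

import configparser

# Minimal sketch: read a sequence's metadata from seqinfo.ini (hypothetical path).
cfg = configparser.ConfigParser()
cfg.read('data/gt/mot_challenge/MOT20-train/MOT20-01/seqinfo.ini')
seq = cfg['Sequence']
print(seq['name'], int(seq['seqLength']), int(seq['imWidth']), int(seq['imHeight']))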

TrackEval

<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>

All frame numbers, target IDs and bounding boxes are 1-based.

Here is an example from the sample data provided by TrackEval:

1,6.0,343.8669738769531,828.7033081054688,124.1097412109375,248.24200439453125,1,-1,-1,-1
1,7.0,1023.822265625,606.1856689453125,83.2244873046875,195.18109130859372,1,-1,-1,-1
1,8.0,1067.532958984375,513.0377197265625,52.221435546875,142.29217529296875,1,-1,-1,-1
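
Such a file can be loaded in one call; a minimal sketch with NumPy (the filename is hypothetical):

import numpy as np

# Minimal sketch: load a tracker result file (hypothetical filename).
data = np.loadtxt('MOT20-01.txt', delimiter=',')
frames = data[:, 0].astype(int)  # 1-based frame numbers
ids = data[:, 1].astype(int)     # 1-based target ids
boxes = data[:, 2:6]             # bb_left, bb_top, bb_width, bb_height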

Folder Hierarchy

  • gt

    • gt.txt

      11,1,227,812,140,269,1,1,0.83704
      12,1,230,811,137,270,1,1,0.83764
      13,1,233,810,135,271,1,1,0.83456
      14,1,236,809,133,272,1,1,0.8315
      15,1,239,808,131,273,1,1,0.82847
      
• seqinfo.ini (the same as in the MOT17 training data)

      [Sequence]
      name=MOT20-01
      imDir=img1
      frameRate=25
      seqLength=429
      imWidth=1920
      imHeight=1080
      imExt=.jpg
      
  • trackers

Refer to `default_dataset_config = trackeval.datasets.MotChallenge2DBox.get_default_dataset_config()` in run_mot_challenge.py:

"GT_FOLDER": "/path/to/TrackEval/data/gt/mot_challenge/",
"TRACKERS_FOLDER": "/path/to/TrackEval/data/trackers/mot_challenge/",
"OUTPUT_FOLDER": None,
"TRACKERS_TO_EVAL": [
    "MPNTrack"
],
"TRACKER_SUB_FOLDER": "data",
"OUTPUT_SUB_FOLDER": "",
curr_file = os.path.join(self.tracker_fol, tracker,
    self.tracker_sub_fol, seq + '.txt')

Refer to mot_challenge_2d_box.py to see how it builds the txt path.
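
For example, with the config above, the tracker file for one sequence resolves to something like the following (assuming, as mot_challenge_2d_box.py does, that the benchmark-split name is appended to TRACKERS_FOLDER):

import os

# Hypothetical resolution of curr_file for tracker 'MPNTrack' and sequence 'MOT17-02-DPM'.
tracker_fol = '/path/to/TrackEval/data/trackers/mot_challenge/MOT17-train'
curr_file = os.path.join(tracker_fol, 'MPNTrack', 'data', 'MOT17-02-DPM.txt')
# -> /path/to/TrackEval/data/trackers/mot_challenge/MOT17-train/MPNTrack/data/MOT17-02-DPM.txt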

Pipeline

Data Preparation & Arguments

  • GT_FOLDER: sequence with ground truth
    • in the format of MOT17
    • refer to mot_challenge_2d_box.py
`default_dataset_config` details
code_path = utils.get_code_path()
'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
`dataset_config` details
{
    "PRINT_CONFIG": True,
    "GT_FOLDER": "/path/to/TrackEval/data/gt/mot_challenge/",
    "TRACKERS_FOLDER": "/path/to/TrackEval/data/trackers/mot_challenge/",
    "OUTPUT_FOLDER": None,
    "TRACKERS_TO_EVAL": [
        "MPNTrack"
    ],
    "CLASSES_TO_EVAL": [
        "pedestrian"
    ],
    "BENCHMARK": "MOT17",
    "SPLIT_TO_EVAL": "train",
    "INPUT_AS_ZIP": False,
    "DO_PREPROC": True,
    "TRACKER_SUB_FOLDER": "data",
    "OUTPUT_SUB_FOLDER": "",
    "TRACKER_DISPLAY_NAMES": None,
    "SEQMAP_FOLDER": None, ...
}
`eval_config` details
{
    "USE_PARALLEL": False,
    "NUM_PARALLEL_CORES": 1,
    "BREAK_ON_ERROR": True,
    "RETURN_ON_ERROR": False,
    "LOG_ON_ERROR": "/path/to/TrackEval/error_log.txt",
    "PRINT_RESULTS": True,
    "PRINT_ONLY_COMBINED": False,
    "PRINT_CONFIG": True,
    "TIME_PROGRESS": True,
    "DISPLAY_LESS_PROGRESS": False,
    "OUTPUT_SUMMARY": True,
    "OUTPUT_EMPTY_CLASSES": True,
    "OUTPUT_DETAILED": True,
    "PLOT_CURVES": True
}
`metrics_config` details
{
    "METRICS": [
        "HOTA",
        "CLEAR",
        "Identity",
        "VACE"
    ],
    "THRESHOLD": 0.5
}

TrackEval has its own parser for list-valued arguments (it maps command-line options onto these config dicts, much like Python's argparse).
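
Putting the three config dicts together, evaluation is wired up the way run_mot_challenge.py does it; a condensed sketch:

import trackeval

# Condensed sketch of run_mot_challenge.py's wiring, using the configs shown above.
evaluator = trackeval.Evaluator(eval_config)
dataset_list = [trackeval.datasets.MotChallenge2DBox(dataset_config)]
metrics_list = [metric(metrics_config)
                for metric in (trackeval.metrics.HOTA, trackeval.metrics.CLEAR,
                               trackeval.metrics.Identity, trackeval.metrics.VACE)
                if metric.get_name() in metrics_config['METRICS']]
evaluator.evaluate(dataset_list, metrics_list)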

Init Dataset

_get_seq_info

gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
self.gt_set = gt_set
if self.config["SEQMAP_FOLDER"] is None:
  seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')

seqmap

name
TUD-Stadtmitte
TUD-Campus
PETS09-S2L1
ETH-Bahnhof
ETH-Sunnyday
ETH-Pedcross2
ADL-Rundle-6
ADL-Rundle-8
KITTI-13
KITTI-17
Venice-2
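
The first line of the seqmap file is the header `name`; the remaining lines are sequence names. A minimal sketch of parsing it:

# Minimal sketch: read the sequence list from a seqmap file, skipping the 'name' header.
with open(seqmap_file) as f:
    seq_list = [line.strip() for line in f.readlines()[1:] if line.strip()]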

Init Evaluator

  • Init the evaluator: evaluator = Evaluator(eval_config) in eval.py
    • then call evaluator.evaluate()
  • evaluate_sequence in eval.py does the per-sequence work:
raw_data = dataset.get_raw_seq_data(tracker, seq)
seq_res = {}
for cls in class_list:
    seq_res[cls] = {}
    data = dataset.get_preprocessed_seq_data(raw_data, cls)
    for metric, met_name in zip(metrics_list, metric_names):
        seq_res[cls][met_name] = metric.eval_sequence(data)
return seq_res

Load Data

Data are loaded into a dict: the key is the frame id (timestep) and the value is the list of split rows for that frame, e.g.

['21', '1', '912', '484', '97', '109', '0', '7', '1']

Convert to an ndarray:

time_data = np.asarray(read_data[time_key], dtype=float)  # np.float was removed in NumPy 1.24+
raw_data['dets'][t] = np.atleast_2d(time_data[:, 2:6])
raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)
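
A minimal sketch of this loading step, assuming a comma-separated gt.txt like the one above:

import csv
from collections import defaultdict
import numpy as np

# Minimal sketch: group rows by frame id (timestep), mirroring TrackEval's loader.
read_data = defaultdict(list)
with open('gt.txt') as f:
    for row in csv.reader(f):
        read_data[row[0]].append(row)

time_key = '21'  # frame keys are strings, as in the example row above
time_data = np.asarray(read_data[time_key], dtype=float)
dets = np.atleast_2d(time_data[:, 2:6])
ids = np.atleast_1d(time_data[:, 1]).astype(int)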

Calculate IoU Similarity

MotChallenge2DBox._calculate_similarities -> MotChallenge2DBox._calculate_box_ious

similarity_scores = []
for t, (gt_dets_t, tracker_dets_t) in enumerate(zip(raw_data['gt_dets'], raw_data['tracker_dets'])):
    ious = self._calculate_similarities(gt_dets_t, tracker_dets_t)
    similarity_scores.append(ious)
raw_data['similarity_scores'] = similarity_scores

How to Calculate IoU?

# layout: (x0, y0, x1, y1)
# Broadcast to an (N, M, 4) grid: element-wise min/max of every box pair.
min_ = np.minimum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
max_ = np.maximum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])

# Overlap width = min(x1, x1') - max(x0, x0'), clamped at 0; same for height.
intersection = np.maximum(min_[..., 2] - max_[..., 0], 0) * np.maximum(min_[..., 3] - max_[..., 1], 0)

area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (bboxes1[..., 3] - bboxes1[..., 1])
area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (bboxes2[..., 3] - bboxes2[..., 1])

union = area1[:, np.newaxis] + area2[np.newaxis, :] - intersection
# Guard against degenerate boxes and empty unions before dividing.
intersection[area1 <= 0 + np.finfo('float').eps, :] = 0
intersection[:, area2 <= 0 + np.finfo('float').eps] = 0
intersection[union <= 0 + np.finfo('float').eps] = 0
union[union <= 0 + np.finfo('float').eps] = 1
ious = intersection / union
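
A quick sanity check of the formula with two hand-made boxes in (x0, y0, x1, y1) layout:

import numpy as np

# Two overlapping boxes: intersection 5*5 = 25, union 100 + 100 - 25 = 175.
bboxes1 = np.array([[0., 0., 10., 10.]])
bboxes2 = np.array([[5., 5., 15., 15.]])
min_ = np.minimum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
max_ = np.maximum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
inter = np.maximum(min_[..., 2] - max_[..., 0], 0) * np.maximum(min_[..., 3] - max_[..., 1], 0)
union = 100.0 + 100.0 - inter
print(inter / union)  # [[0.14285714]]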

get_preprocessed_seq_data in mot_challenge_2d_box.py

Remove the distractors: ground-truth boxes of distractor classes are matched against the tracker detections, and the matched detections are removed before scoring:

from scipy.optimize import linear_sum_assignment

matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
match_rows, match_cols = linear_sum_assignment(-matching_scores)  # maximise total similarity
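
A toy example of the thresholded matching (the similarity values are made up):

import numpy as np
from scipy.optimize import linear_sum_assignment

# Similarities below 0.5 are zeroed, then the assignment maximises total similarity
# (negated, because linear_sum_assignment minimises cost).
matching_scores = np.array([[0.9, 0.2],
                            [0.4, 0.7]])
matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
rows, cols = linear_sum_assignment(-matching_scores)
print(rows, cols)  # [0 1] [0 1] -> gt 0 matched to det 0, gt 1 to det 1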
