zhengthomastang committed
Commit 6b88ca2 · Parent: a3341ee

Include evaluation tools for 2025 edition

Files changed (29)
  1. MTMC_Tracking_2025/eval/3rdParty_Licenses.md +442 -0
  2. MTMC_Tracking_2025/eval/README.md +38 -0
  3. MTMC_Tracking_2025/eval/environment.yml +23 -0
  4. MTMC_Tracking_2025/eval/main.py +132 -0
  5. MTMC_Tracking_2025/eval/sample_data/ground_truth_test_full.txt +3 -0
  6. MTMC_Tracking_2025/eval/sample_data/pred.txt +3 -0
  7. MTMC_Tracking_2025/eval/sample_data/scene_id_2_scene_name_full.json +3 -0
  8. MTMC_Tracking_2025/eval/utils/__init__.py +0 -0
  9. MTMC_Tracking_2025/eval/utils/classes.py +53 -0
  10. MTMC_Tracking_2025/eval/utils/io_utils.py +352 -0
  11. MTMC_Tracking_2025/eval/utils/trackeval/__init__.py +6 -0
  12. MTMC_Tracking_2025/eval/utils/trackeval/_timing.py +81 -0
  13. MTMC_Tracking_2025/eval/utils/trackeval/datasets/__init__.py +5 -0
  14. MTMC_Tracking_2025/eval/utils/trackeval/datasets/_base_dataset.py +485 -0
  15. MTMC_Tracking_2025/eval/utils/trackeval/datasets/mot_challenge_2d_box.py +471 -0
  16. MTMC_Tracking_2025/eval/utils/trackeval/datasets/mot_challenge_3d_location.py +475 -0
  17. MTMC_Tracking_2025/eval/utils/trackeval/datasets/mtmc_challenge_3d_bbox.py +474 -0
  18. MTMC_Tracking_2025/eval/utils/trackeval/datasets/mtmc_challenge_3d_location.py +473 -0
  19. MTMC_Tracking_2025/eval/utils/trackeval/datasets/test_mot.py +475 -0
  20. MTMC_Tracking_2025/eval/utils/trackeval/eval.py +230 -0
  21. MTMC_Tracking_2025/eval/utils/trackeval/metrics/__init__.py +5 -0
  22. MTMC_Tracking_2025/eval/utils/trackeval/metrics/_base_metric.py +198 -0
  23. MTMC_Tracking_2025/eval/utils/trackeval/metrics/clear.py +223 -0
  24. MTMC_Tracking_2025/eval/utils/trackeval/metrics/count.py +76 -0
  25. MTMC_Tracking_2025/eval/utils/trackeval/metrics/hota.py +245 -0
  26. MTMC_Tracking_2025/eval/utils/trackeval/metrics/identity.py +172 -0
  27. MTMC_Tracking_2025/eval/utils/trackeval/plotting.py +322 -0
  28. MTMC_Tracking_2025/eval/utils/trackeval/trackeval_utils.py +316 -0
  29. MTMC_Tracking_2025/eval/utils/trackeval/utils.py +204 -0
MTMC_Tracking_2025/eval/3rdParty_Licenses.md ADDED
@@ -0,0 +1,442 @@
1
+ # Third-Party Licenses
2
+
3
+ This project incorporates components from the following open-source software. We have provided links to the licenses for each component below.
4
+
5
+ | Package / Component Name | Version | License | Link to Component's License |
6
+ |---|---|---|---|
7
+ | annotated-types | 0.7.0 | MIT | [link](https://github.com/annotated-types/annotated-types/blob/main/LICENSE) |
8
+ | boto3 | 1.36.2 | Apache 2.0 | [link](https://github.com/boto/boto3/blob/develop/LICENSE) |
9
+ | certifi | 2025.4.26 | MPL | [link](https://github.com/certifi/python-certifi/blob/master/LICENSE) |
10
+ | charset-normalizer | 3.4.2 | MIT | [link](https://github.com/jawah/charset_normalizer/blob/master/LICENSE) |
11
+ | contourpy | 1.3.0 | BSD (any variant) | [link](https://github.com/contourpy/contourpy/blob/main/LICENSE) |
12
+ | cycler | 0.12.1 | BSD (any variant) | [link](https://github.com/matplotlib/cycler/blob/main/LICENSE) |
13
+ | fonttools | 4.55.3 | MIT | [link](https://github.com/fonttools/fonttools/blob/main/LICENSE) |
14
+ | fvcore | 0.1.5.post20221221 | Apache 2.0 | [link](https://github.com/facebookresearch/fvcore/blob/main/LICENSE) |
15
+ | iopath | 0.1.9 | MIT | [link](https://github.com/facebookresearch/iopath/blob/main/LICENSE) |
16
+ | idna | 3.1 | BSD (any variant) | [link](https://github.com/kjd/idna/blob/master/LICENSE.md) |
17
+ | kiwisolver | 1.4.7 | BSD (any variant) | [link](https://github.com/nucleic/kiwi/blob/main/pyproject.toml) |
18
+ | matplotlib | 3.5.3 | Other (see linked license) | [link](https://github.com/matplotlib/matplotlib/blob/main/LICENSE/LICENSE) |
19
+ | numpy | 1.26.0 | Other (see linked license) | [link](https://github.com/numpy/numpy/blob/main/LICENSE.txt) |
20
+ | packaging | 24.2 | Apache 2.0 | [link](https://github.com/pypa/packaging/blob/main/LICENSE.APACHE) |
21
+ | pillow | 11.1.0 | Other (see linked license) | [link](https://github.com/python-pillow/Pillow/blob/main/LICENSE) |
22
+ | pip | 23.3.1 | MIT | [link](https://github.com/pypa/pip/blob/main/LICENSE.txt) |
23
+ | pydantic | 2.10.5 | MIT | [link](https://github.com/pydantic/pydantic/blob/main/LICENSE) |
24
+ | pydantic_core | 2.27.2 | MIT | [link](https://github.com/pydantic/pydantic-core/blob/main/LICENSE) |
25
+ | pyparsing | 3.2.1 | MIT | [link](https://github.com/pyparsing/pyparsing/blob/master/LICENSE) |
26
+ | python-dateutil | 2.9.0.post0 | Apache 2.0 | [link](https://github.com/dateutil/dateutil/blob/master/LICENSE) |
27
+ | pytz | 2024.2 | MIT | [link](https://pythonhosted.org/pytz/#license) |
28
+ | PyYAML | 6.0.2 | MIT | [link](https://github.com/yaml/pyyaml/blob/main/LICENSE) |
29
+ | requests | 2.32.3 | Apache 2.0 | [link](https://github.com/psf/requests/blob/main/LICENSE) |
30
+ | scipy | 1.15.1 | BSD (any variant) | [link](https://github.com/scipy/scipy/blob/main/LICENSE.txt) |
31
+ | six | 1.17.0 | MIT | [link](https://github.com/benjaminp/six/blob/main/LICENSE) |
32
+ | sympy | 1.14.0 | Other (see linked license) | [link](https://github.com/sympy/sympy/blob/master/LICENSE) |
33
+ | tabulate | 0.9.0 | MIT | [link](https://github.com/astanin/python-tabulate/blob/master/LICENSE) |
34
+ | torch | 2.5.1 | BSD (any variant) | [link](https://github.com/pytorch/pytorch/blob/main/LICENSE) |
35
+ | typing_extensions | 4.12.2 | Other (see linked license) | [link](https://github.com/python/typing_extensions/blob/main/LICENSE) |
36
+ | urllib3 | 2.4.0 | MIT | [link](https://github.com/urllib3/urllib3/blob/main/LICENSE.txt) |
37
+ | yacs | 0.1.8 | Apache 2.0 | [link](https://github.com/rbgirshick/yacs/blob/master/LICENSE) |
38
+
39
+
40
+
41
+ ### Apache Software License 2.0
42
+ ```
43
+ Apache License
44
+ Version 2.0, January 2004
45
+ http://www.apache.org/licenses/
46
+
47
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
48
+
49
+ 1. Definitions.
50
+
51
+ "License" shall mean the terms and conditions for use, reproduction,
52
+ and distribution as defined by Sections 1 through 9 of this document.
53
+
54
+ "Licensor" shall mean the copyright owner or entity authorized by
55
+ the copyright owner that is granting the License.
56
+
57
+ "Legal Entity" shall mean the union of the acting entity and all
58
+ other entities that control, are controlled by, or are under common
59
+ control with that entity. For the purposes of this definition,
60
+ "control" means (i) the power, direct or indirect, to cause the
61
+ direction or management of such entity, whether by contract or
62
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
63
+ outstanding shares, or (iii) beneficial ownership of such entity.
64
+
65
+ "You" (or "Your") shall mean an individual or Legal Entity
66
+ exercising permissions granted by this License.
67
+
68
+ "Source" form shall mean the preferred form for making modifications,
69
+ including but not limited to software source code, documentation
70
+ source, and configuration files.
71
+
72
+ "Object" form shall mean any form resulting from mechanical
73
+ transformation or translation of a Source form, including but
74
+ not limited to compiled object code, generated documentation,
75
+ and conversions to other media types.
76
+
77
+ "Work" shall mean the work of authorship, whether in Source or
78
+ Object form, made available under the License, as indicated by a
79
+ copyright notice that is included in or attached to the work
80
+ (an example is provided in the Appendix below).
81
+
82
+ "Derivative Works" shall mean any work, whether in Source or Object
83
+ form, that is based on (or derived from) the Work and for which the
84
+ editorial revisions, annotations, elaborations, or other modifications
85
+ represent, as a whole, an original work of authorship. For the purposes
86
+ of this License, Derivative Works shall not include works that remain
87
+ separable from, or merely link (or bind by name) to the interfaces of,
88
+ the Work and Derivative Works thereof.
89
+
90
+ "Contribution" shall mean any work of authorship, including
91
+ the original version of the Work and any modifications or additions
92
+ to that Work or Derivative Works thereof, that is intentionally
93
+ submitted to Licensor for inclusion in the Work by the copyright owner
94
+ or by an individual or Legal Entity authorized to submit on behalf of
95
+ the copyright owner. For the purposes of this definition, "submitted"
96
+ means any form of electronic, verbal, or written communication sent
97
+ to the Licensor or its representatives, including but not limited to
98
+ communication on electronic mailing lists, source code control systems,
99
+ and issue tracking systems that are managed by, or on behalf of, the
100
+ Licensor for the purpose of discussing and improving the Work, but
101
+ excluding communication that is conspicuously marked or otherwise
102
+ designated in writing by the copyright owner as "Not a Contribution."
103
+
104
+ "Contributor" shall mean Licensor and any individual or Legal Entity
105
+ on behalf of whom a Contribution has been received by Licensor and
106
+ subsequently incorporated within the Work.
107
+
108
+ 2. Grant of Copyright License. Subject to the terms and conditions of
109
+ this License, each Contributor hereby grants to You a perpetual,
110
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
111
+ copyright license to reproduce, prepare Derivative Works of,
112
+ publicly display, publicly perform, sublicense, and distribute the
113
+ Work and such Derivative Works in Source or Object form.
114
+
115
+ 3. Grant of Patent License. Subject to the terms and conditions of
116
+ this License, each Contributor hereby grants to You a perpetual,
117
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
118
+ (except as stated in this section) patent license to make, have made,
119
+ use, offer to sell, sell, import, and otherwise transfer the Work,
120
+ where such license applies only to those patent claims licensable
121
+ by such Contributor that are necessarily infringed by their
122
+ Contribution(s) alone or by combination of their Contribution(s)
123
+ with the Work to which such Contribution(s) was submitted. If You
124
+ institute patent litigation against any entity (including a
125
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
126
+ or a Contribution incorporated within the Work constitutes direct
127
+ or contributory patent infringement, then any patent licenses
128
+ granted to You under this License for that Work shall terminate
129
+ as of the date such litigation is filed.
130
+
131
+ 4. Redistribution. You may reproduce and distribute copies of the
132
+ Work or Derivative Works thereof in any medium, with or without
133
+ modifications, and in Source or Object form, provided that You
134
+ meet the following conditions:
135
+
136
+ (a) You must give any other recipients of the Work or
137
+ Derivative Works a copy of this License; and
138
+
139
+ (b) You must cause any modified files to carry prominent notices
140
+ stating that You changed the files; and
141
+
142
+ (c) You must retain, in the Source form of any Derivative Works
143
+ that You distribute, all copyright, patent, trademark, and
144
+ attribution notices from the Source form of the Work,
145
+ excluding those notices that do not pertain to any part of
146
+ the Derivative Works; and
147
+
148
+ (d) If the Work includes a "NOTICE" text file as part of its
149
+ distribution, then any Derivative Works that You distribute must
150
+ include a readable copy of the attribution notices contained
151
+ within such NOTICE file, excluding those notices that do not
152
+ pertain to any part of the Derivative Works, in at least one
153
+ of the following places: within a NOTICE text file distributed
154
+ as part of the Derivative Works; within the Source form or
155
+ documentation, if provided along with the Derivative Works; or,
156
+ within a display generated by the Derivative Works, if and
157
+ wherever such third-party notices normally appear. The contents
158
+ of the NOTICE file are for informational purposes only and
159
+ do not modify the License. You may add Your own attribution
160
+ notices within Derivative Works that You distribute, alongside
161
+ or as an addendum to the NOTICE text from the Work, provided
162
+ that such additional attribution notices cannot be construed
163
+ as modifying the License.
164
+
165
+ You may add Your own copyright statement to Your modifications and
166
+ may provide additional or different license terms and conditions
167
+ for use, reproduction, or distribution of Your modifications, or
168
+ for any such Derivative Works as a whole, provided Your use,
169
+ reproduction, and distribution of the Work otherwise complies with
170
+ the conditions stated in this License.
171
+
172
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
173
+ any Contribution intentionally submitted for inclusion in the Work
174
+ by You to the Licensor shall be under the terms and conditions of
175
+ this License, without any additional terms or conditions.
176
+ Notwithstanding the above, nothing herein shall supersede or modify
177
+ the terms of any separate license agreement you may have executed
178
+ with Licensor regarding such Contributions.
179
+
180
+ 6. Trademarks. This License does not grant permission to use the trade
181
+ names, trademarks, service marks, or product names of the Licensor,
182
+ except as required for reasonable and customary use in describing the
183
+ origin of the Work and reproducing the content of the NOTICE file.
184
+
185
+ 7. Disclaimer of Warranty. Unless required by applicable law or
186
+ agreed to in writing, Licensor provides the Work (and each
187
+ Contributor provides its Contributions) on an "AS IS" BASIS,
188
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
189
+ implied, including, without limitation, any warranties or conditions
190
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
191
+ PARTICULAR PURPOSE. You are solely responsible for determining the
192
+ appropriateness of using or redistributing the Work and assume any
193
+ risks associated with Your exercise of permissions under this License.
194
+
195
+ 8. Limitation of Liability. In no event and under no legal theory,
196
+ whether in tort (including negligence), contract, or otherwise,
197
+ unless required by applicable law (such as deliberate and grossly
198
+ negligent acts) or agreed to in writing, shall any Contributor be
199
+ liable to You for damages, including any direct, indirect, special,
200
+ incidental, or consequential damages of any character arising as a
201
+ result of this License or out of the use or inability to use the
202
+ Work (including but not limited to damages for loss of goodwill,
203
+ work stoppage, computer failure or malfunction, or any and all
204
+ other commercial damages or losses), even if such Contributor
205
+ has been advised of the possibility of such damages.
206
+
207
+ 9. Accepting Warranty or Additional Liability. While redistributing
208
+ the Work or Derivative Works thereof, You may choose to offer,
209
+ and charge a fee for, acceptance of support, warranty, indemnity,
210
+ or other liability obligations and/or rights consistent with this
211
+ License. However, in accepting such obligations, You may act only
212
+ on Your own behalf and on Your sole responsibility, not on behalf
213
+ of any other Contributor, and only if You agree to indemnify,
214
+ defend, and hold each Contributor harmless for any liability
215
+ incurred by, or claims asserted against, such Contributor by reason
216
+ of your accepting any such warranty or additional liability.
217
+
218
+ END OF TERMS AND CONDITIONS
219
+ ```
220
+
221
+ ### MIT License
222
+ ```
223
+ MIT License
224
+
225
+ Copyright (c) [year] [fullname]
226
+
227
+ Permission is hereby granted, free of charge, to any person obtaining a copy
228
+ of this software and associated documentation files (the "Software"), to deal
229
+ in the Software without restriction, including without limitation the rights
230
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
231
+ copies of the Software, and to permit persons to whom the Software is
232
+ furnished to do so, subject to the following conditions:
233
+
234
+ The above copyright notice and this permission notice shall be included in all
235
+ copies or substantial portions of the Software.
236
+
237
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
238
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
239
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
240
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
241
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
242
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
243
+ SOFTWARE.
244
+ ```
245
+
246
+ ### BSD License
247
+ ```
248
+ BSD License
249
+
250
+ Copyright (c) [year] [fullname]
251
+ All rights reserved.
252
+
253
+ Redistribution and use in source and binary forms, with or without
254
+ modification, are permitted provided that the following conditions are met:
255
+
256
+ * Redistributions of source code must retain the above copyright
257
+ notice, this list of conditions and the following disclaimer.
258
+ * Redistributions in binary form must reproduce the above copyright
259
+ notice, this list of conditions and the following disclaimer in the
260
+ documentation and/or other materials provided with the distribution.
261
+ * Neither the name of the copyright holder nor the names of its
262
+ contributors may be used to endorse or promote products derived from
263
+ this software without specific prior written permission.
264
+
265
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
266
+ AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
267
+ IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
268
+ ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
269
+ LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
270
+ CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
271
+ SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
272
+ INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
273
+ CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
274
+ ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
275
+ POSSIBILITY OF SUCH DAMAGE.
276
+ ```
277
+
278
+ ### ISC License
279
+
280
+ ```
281
+ Copyright (c) [year] [fullname]
282
+
283
+ Permission to use, copy, modify, and distribute this software for any
284
+ purpose with or without fee is hereby granted, provided that the above
285
+ copyright notice and this permission notice appear in all copies.
286
+
287
+ THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
288
+ WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
289
+ MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
290
+ ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
291
+ WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
292
+ ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
293
+ OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
294
+ ```
295
+
296
+ ### The Unlicense
297
+
298
+ ```
299
+ This is free and unencumbered software released into the public domain.
300
+
301
+ Anyone is free to copy, modify, publish, use, compile, sell, or
302
+ distribute this software, either in source code form or as a compiled
303
+ binary, for any purpose, commercial or non-commercial, and by any
304
+ means.
305
+
306
+ In jurisdictions that recognize copyright laws, the author or authors
307
+ of this software dedicate any and all copyright interest in the
308
+ software to the public domain. We make this dedication for the benefit
309
+ of the public at large and to the detriment of our heirs and
310
+ successors. We intend this dedication to be an overt act of
311
+ relinquishment in perpetuity of all present and future rights to this
312
+ software under copyright law.
313
+
314
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
315
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
316
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
317
+ IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
318
+ OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
319
+ ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
320
+ OTHER DEALINGS IN THE SOFTWARE.
321
+
322
+ For more information, please refer to <http://unlicense.org/>
323
+ ```
324
+
325
+ ### Python Software Foundation License Version 2 (PSF License)
326
+
327
+ ```
328
+ PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
329
+ --------------------------------------------
330
+
331
+ 1. This LICENSE AGREEMENT is between the Python Software Foundation
332
+ ("PSF"), and the Individual or Organization ("Licensee") accessing and
333
+ otherwise using this software ("Python") in source or binary form and
334
+ its associated documentation.
335
+
336
+ 2. Subject to the terms and conditions of this License Agreement, PSF
337
+ hereby grants Licensee a nonexclusive, royalty-free, world-wide
338
+ license to reproduce, analyze, test, perform and/or display publicly,
339
+ prepare derivative works, distribute, and otherwise use Python
340
+ alone or in any derivative version, provided, however, that PSF's
341
+ License Agreement and PSF's notice of copyright, i.e., "Copyright (c)
342
+ 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008 Python Software Foundation;
343
+ All Rights Reserved" are retained in Python alone or in any derivative
344
+ version prepared by Licensee.
345
+
346
+ 3. In the event Licensee prepares a derivative work that is based on
347
+ or incorporates Python or any part thereof, and wants to make
348
+ the derivative work available to others as provided herein, then
349
+ Licensee hereby agrees to include in any such work a brief summary of
350
+ the changes made to Python.
351
+
352
+ 4. PSF is making Python available to Licensee on an "AS IS"
353
+ basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
354
+ IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
355
+ DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
356
+ FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
357
+ INFRINGE ANY THIRD PARTY RIGHTS.
358
+
359
+ 5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
360
+ FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
361
+ A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
362
+ OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
363
+
364
+ 6. This License Agreement will automatically terminate upon a material
365
+ breach of its terms and conditions.
366
+
367
+ 7. Nothing in this License Agreement shall be deemed to create any
368
+ relationship of agency, partnership, or joint venture between PSF and
369
+ Licensee. This License Agreement does not grant permission to use PSF
370
+ trademarks or trade name in a trademark sense to endorse or promote
371
+ products or services of Licensee, or any third party.
372
+
373
+ 8. By copying, installing or otherwise using Python, Licensee
374
+ agrees to be bound by the terms and conditions of this License
375
+ Agreement.
376
+ ```
377
+
378
+
379
+
380
+ ### Other
381
+ NumPy license
382
+ ```
383
+ Copyright (c) 2005-2023, NumPy Developers.
384
+ All rights reserved.
385
+
386
+ Redistribution and use in source and binary forms, with or without
387
+ modification, are permitted provided that the following conditions are
388
+ met:
389
+ * Redistributions of source code must retain the above copyright
390
+ notice, this list of conditions and the following disclaimer.
391
+ * Redistributions in binary form must reproduce the above
392
+ copyright notice, this list of conditions and the following
393
+ disclaimer in the documentation and/or other materials provided
394
+ with the distribution.
395
+ * Neither the name of the NumPy Developers nor the names of any
396
+ contributors may be used to endorse or promote products derived
397
+ from this software without specific prior written permission.
398
+
399
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
400
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
401
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
402
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
403
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
404
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
405
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
406
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
407
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
408
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
409
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
410
+ ```
+
+ Matplotlib license (PSF-based; excerpt beginning at clause 3)
+ ```
411
+ 3. In the event Licensee prepares a derivative work that is based on or
412
+ incorporates matplotlib or any part thereof, and wants to
413
+ make the derivative work available to others as provided herein, then
414
+ Licensee hereby agrees to include in any such work a brief summary of
415
+ the changes made to matplotlib.
416
+
417
+ 4. JDH is making matplotlib available to Licensee on an "AS
418
+ IS" basis. JDH MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
419
+ IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, JDH MAKES NO AND
420
+ DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
421
+ FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF MATPLOTLIB
422
+ WILL NOT INFRINGE ANY THIRD PARTY RIGHTS.
423
+
424
+ 5. JDH SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF MATPLOTLIB
425
+ FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR
426
+ LOSS AS A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING
427
+ MATPLOTLIB , OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF
428
+ THE POSSIBILITY THEREOF.
429
+
430
+ 6. This License Agreement will automatically terminate upon a material
431
+ breach of its terms and conditions.
432
+
433
+ 7. Nothing in this License Agreement shall be deemed to create any
434
+ relationship of agency, partnership, or joint venture between JDH and
435
+ Licensee. This License Agreement does not grant permission to use JDH
436
+ trademarks or trade name in a trademark sense to endorse or promote
437
+ products or services of Licensee, or any third party.
438
+
439
+ 8. By copying, installing or otherwise using matplotlib,
440
+ Licensee agrees to be bound by the terms and conditions of this License
441
+ Agreement.
442
+ ```
MTMC_Tracking_2025/eval/README.md ADDED
@@ -0,0 +1,38 @@
1
+ # Evaluation Code for the MTMC Tracking 2025 Dataset
2
+
3
+ Evaluation code for the Multi-Target Multi-Camera (MTMC) Tracking 2025 dataset.
4
+
5
+ The evaluation uses the Higher Order Tracking Accuracy (HOTA) score, a multi-object tracking (MOT) metric that addresses the limitations of earlier metrics such as MOTA and IDF1. HOTA unifies three key aspects of MOT in a single score: detecting each object (detection), keeping object identities consistent across frames (association), and positioning objects accurately in each frame (localization). It can also be decomposed into simpler components, allowing detailed analysis of each aspect of tracking behavior.
6
+
7
+ Here, HOTA scores are computed using the 3D IoU of the predicted and ground truth bounding boxes in a multi-camera setting.
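+
+ As a rough sketch of how these components combine (illustrative only; the actual computation lives in the bundled TrackEval code under `utils/trackeval`): at each localization threshold, HOTA is the geometric mean of detection accuracy (DetA) and association accuracy (AssA), and the final score averages over the thresholds.
+
+ ```python
+ import numpy as np
+
+ def hota_from_components(det_a, ass_a):
+     """Illustrative sketch: det_a and ass_a are per-threshold DetA/AssA arrays."""
+     det_a, ass_a = np.asarray(det_a), np.asarray(ass_a)
+     # Geometric mean per threshold, then average over thresholds.
+     return float(np.mean(np.sqrt(det_a * ass_a)))
+ ```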
8
+
9
+ ## Environment setup:
10
+
11
+ ```
12
+ conda env create -f environment.yml
13
+ conda activate mtmc_eval_2025
14
+ ```
15
+
16
+ ## Usage:
17
+
18
+ ```
19
+ # The prediction file is a copy of the ground truth file used for testing
20
+
21
+ python3 main.py --ground_truth_file ./sample_data/ground_truth_test_full.txt --input_file sample_data/pred.txt --scene_id_2_scene_name_file sample_data/scene_id_2_scene_name_full.json --num_cores 16
22
+
23
+ Sample Result
24
+ --------------------------------------------------------------
25
+ 25/07/11 21:01:47 - Final HOTA: 100.0
26
+ 25/07/11 21:01:47 - Final DetA: 100.0
27
+ 25/07/11 21:01:47 - Final AssA: 100.0
28
+ 25/07/11 21:01:47 - Final LocA: 99.99993990354818
29
+ 25/07/11 21:01:47 - Total time taken: 522.6667423248291 seconds
30
+
31
+ ```
32
+
33
+ Note: A sample ground truth file is provided in `sample_data/ground_truth_test_full.txt` for demonstration purposes. Please replace this with your own ground truth file corresponding to the dataset split you are evaluating against.
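+
+ For reference, the evaluation code expects each line of the ground truth and prediction files to contain 11 space-separated fields: `scene_id class_id object_id frame_id x y z width length height yaw`. A hypothetical example line (all values illustrative):
+
+ ```
+ 1 0 5 120 3.21 -1.05 0.92 0.60 0.60 1.80 1.57
+ ```
+
+ The `scene_id_2_scene_name` file is a JSON object mapping scene ids to scene names, e.g. `{"1": "Warehouse_000"}` (hypothetical values).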
34
+
35
+
36
+ ## Acknowledgements
37
+
38
+ This project utilizes a portion of code from [TrackEval](https://github.com/JonathonLuiten/TrackEval), an open-source project by Jonathon Luiten for evaluating multi-object tracking results. TrackEval is licensed under the MIT License, which you can find in full [here](https://github.com/JonathonLuiten/TrackEval/blob/master/LICENSE).
MTMC_Tracking_2025/eval/environment.yml ADDED
@@ -0,0 +1,23 @@
1
+ name: mtmc_eval_2025
2
+ channels:
3
+ - pytorch
4
+ - conda-forge
5
+ - defaults
6
+ dependencies:
7
+ - python=3.10.15
8
+ - pip=23.3.1
9
+ - pytorch=2.5.1
10
+ - torchvision=0.20.1
11
+ - cpuonly
12
+ - packaging=24.2
13
+ - omegaconf=2.3.0
14
+ - pydantic=2.10.5
15
+ - scipy=1.15.1
16
+ - tabulate=0.9.0
17
+ - matplotlib=3.5.3
18
+ - boto3=1.36.2
19
+ - pytz=2024.2
20
+ - requests=2.32.3
21
+ - numpy=1.26.0
22
+ - pip:
23
+ - git+https://github.com/facebookresearch/pytorch3d.git@75ebeea
MTMC_Tracking_2025/eval/main.py ADDED
@@ -0,0 +1,132 @@
1
+ import os
2
+ import json
3
+ import logging
4
+ import argparse
5
+ from datetime import datetime
6
+ from typing import List, Dict, Set, Tuple, Any
7
+ import tempfile
8
+ import time
9
+ import numpy as np
10
+ from utils.io_utils import ValidateFile, validate_file_path, load_json_from_file, split_files_per_class, split_files_per_scene, get_no_of_objects_per_scene
11
+ from utils.trackeval.trackeval_utils import _evaluate_tracking_for_all_BEV_sensors
12
+
13
+
14
+ logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S", level=logging.INFO)
15
+
16
+ def evaluate_tracking_for_all_BEV_sensors(ground_truth_file, prediction_file, output_root_dir, num_cores, scene_id, num_frames_to_eval):
17
+ logging.info(f"Computing tracking results for scene id: {scene_id}...")
18
+ output_directory = output_root_dir
19
+ os.makedirs(output_directory, exist_ok=True)
20
+
21
+ split_files_per_class(ground_truth_file, prediction_file, output_directory, 0.0, num_frames_to_eval, 0.0, fps=30)
22
+ all_class_results = _evaluate_tracking_for_all_BEV_sensors(ground_truth_file, prediction_file, output_directory, num_cores, 30)
23
+ return all_class_results
24
+
25
+
26
+ def get_weighted_avg(weights, values):
27
+ common = weights.keys() & values.keys()
28
+ numerator = sum(weights[k] * values[k] for k in common)
29
+ denominator = sum(weights[k] for k in common)
30
+ return numerator / denominator if denominator else 0.0
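+
+ # Illustrative example with hypothetical numbers:
+ # get_weighted_avg({"sceneA": 100, "sceneB": 50}, {"sceneA": 0.8, "sceneB": 0.6})
+ # -> (100 * 0.8 + 50 * 0.6) / (100 + 50) = 0.7333...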
31
+
32
+
33
+ def run_evaluation(ground_truth_file, input_file, output_dir, num_cores, num_frames_to_eval, scene_id_2_scene_name_file):
34
+
35
+ is_temp_dir = False
36
+ if output_dir is None:
37
+ temp_dir = tempfile.TemporaryDirectory()
38
+ is_temp_dir = True
39
+ output_dir = temp_dir.name
40
+ logging.info(f"Temp files will be created here: {output_dir}")
41
+
42
+ scene_id_2_scene_name = load_json_from_file(scene_id_2_scene_name_file)
43
+ logging.info(f"Evaluating scenes: {list(scene_id_2_scene_name.keys())}")
44
+ split_files_per_scene(ground_truth_file, input_file, output_dir, scene_id_2_scene_name, num_frames_to_eval)
45
+ objects_per_scene = get_no_of_objects_per_scene(ground_truth_file, scene_id_2_scene_name)
46
+
47
+ hota_per_scene = dict()
48
+ detA_per_scene = dict()
49
+ assA_per_scene = dict()
50
+ locA_per_scene = dict()
51
+ detailed_results = dict()
52
+
53
+ for scene_id in scene_id_2_scene_name.keys():
54
+ logging.info(f"Evaluating scene: {scene_id}")
55
+ output_directory = os.path.join(output_dir, f"scene_{scene_id}")
56
+ ground_truth_file = os.path.join(output_directory, "gt.txt")
57
+ input_file = os.path.join(output_directory, "pred.txt")
58
+ # check if both input & ground truth files exist
59
+ if not os.path.exists(ground_truth_file) or not os.path.exists(input_file):
60
+ logging.info(f"Skipping scene {scene_id} because input or ground truth file does not exist")
61
+ continue
62
+ results = evaluate_tracking_for_all_BEV_sensors(ground_truth_file, input_file, output_directory, num_cores, scene_id, num_frames_to_eval)
63
+ hota_per_class = []
64
+ detA_per_class = []
65
+ assA_per_class = []
66
+ locA_per_class = []
67
+ class_results = dict()  # one dict per scene, filled with one entry per class
+ for class_name, scene_results in results.items():
69
+ result = scene_results[0]["MTMCChallenge3DBBox"]["data"]["MTMC"]["class"]["HOTA"]
70
+
71
+ # Avg. results across all thresholds
72
+ hota_per_class.append(np.mean(result["HOTA"]))
73
+ detA_per_class.append(np.mean(result["DetA"]))
74
+ assA_per_class.append(np.mean(result["AssA"]))
75
+ locA_per_class.append(np.mean(result["LocA"]))
76
+
77
+ # single class results
78
+ class_results[class_name] = {
79
+ "hota": np.mean(result["HOTA"]),
80
+ "detA": np.mean(result["DetA"]),
81
+ "assA": np.mean(result["AssA"]),
82
+ "locA": np.mean(result["LocA"])
83
+ }
84
+ scene_name = scene_id_2_scene_name[scene_id]
85
+ detailed_results[scene_name] = class_results
86
+ avg_hota_all_classes = np.mean(hota_per_class)
87
+ avg_detA_all_classes = np.mean(detA_per_class)
88
+ avg_assA_all_classes = np.mean(assA_per_class)
89
+ avg_locA_all_classes = np.mean(locA_per_class)
90
+
91
+
92
+ hota_per_scene[scene_name] = avg_hota_all_classes
93
+ detA_per_scene[scene_name] = avg_detA_all_classes
94
+ assA_per_scene[scene_name] = avg_assA_all_classes
95
+ locA_per_scene[scene_name] = avg_locA_all_classes
96
+
97
+ # match the keys: & then compute weighted avg
98
+ final_hota = get_weighted_avg(objects_per_scene, hota_per_scene) * 100
99
+ final_detA = get_weighted_avg(objects_per_scene, detA_per_scene) * 100
100
+ final_assA = get_weighted_avg(objects_per_scene, assA_per_scene) * 100
101
+ final_locA = get_weighted_avg(objects_per_scene, locA_per_scene) * 100
102
+
103
+ logging.info(f"Final HOTA: {final_hota}")
104
+ logging.info(f"Final DetA: {final_detA}")
105
+ logging.info(f"Final AssA: {final_assA}")
106
+ logging.info(f"Final LocA: {final_locA}")
107
+
108
+
109
+ if __name__ == "__main__":
110
+ start_time = time.time()
111
+ parser = argparse.ArgumentParser()
112
+ parser.add_argument("--ground_truth_file", type=validate_file_path,
113
+ action=ValidateFile, help="Input ground truth file", required=True)
114
+ parser.add_argument("--input_file", type=validate_file_path,
115
+ action=ValidateFile, help="Input prediction file", required=True)
116
+ parser.add_argument("--output_dir", type=str, help="Optional Output directory")
117
+ parser.add_argument("--scene_id_2_scene_name_file", type=validate_file_path,
118
+ action=ValidateFile, help="Input scene id to scene name file in json format", required=True)
119
+ parser.add_argument("--num_cores", type=int, help="Number of cores to use")
120
+ parser.add_argument("--num_frames_to_eval", type=int, help="Number of frames to evaluate", default=9000)
121
+
122
+ # Parse arguments
123
+ args = parser.parse_args()
124
+ ground_truth_file = validate_file_path(args.ground_truth_file)
125
+ input_file = validate_file_path(args.input_file)
126
+
127
+ # Run evaluation
128
+ run_evaluation(ground_truth_file, input_file, args.output_dir, args.num_cores, args.num_frames_to_eval, args.scene_id_2_scene_name_file)
129
+
130
+ # Log processing time
131
+ end_time = time.time()
132
+ logging.info(f"Total time taken: {end_time - start_time} seconds")
MTMC_Tracking_2025/eval/sample_data/ground_truth_test_full.txt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6721a7a17a67a32def5db2d9fd0deaffef42c729a40b853178710f77787c5da0
3
+ size 23415655
MTMC_Tracking_2025/eval/sample_data/pred.txt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dbe95763a81df292df77acfddbd32d6e01010139fb7a596d0d7cb6778d96884d
3
+ size 24551491
MTMC_Tracking_2025/eval/sample_data/scene_id_2_scene_name_full.json ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ac47b6a9b84a510712a8b692145ef3357f7612ed5a2fc6b483f4fa360aa5c3e2
3
+ size 203
MTMC_Tracking_2025/eval/utils/__init__.py ADDED
File without changes
MTMC_Tracking_2025/eval/utils/classes.py ADDED
@@ -0,0 +1,53 @@
1
+ CLASS_LIST = [
2
+ 'Person',
3
+ 'NovaCarter',
4
+ 'Transporter',
5
+ 'Forklift',
6
+ 'Box',
7
+ 'Pallet',
8
+ 'Crate',
9
+ 'Basket',
10
+ 'KLTBin',
11
+ 'Cone',
12
+ 'Rack',
13
+ 'FourierGR1T2',
14
+ 'AgilityDigit',
15
+ ]
16
+
17
+
18
+ map_class_id_to_class_name = {
19
+ 0: "Person",
20
+ 1: "Forklift",
21
+ 2: "NovaCarter",
22
+ 3: "Transporter",
23
+ 4: "FourierGR1T2",
24
+ 5: "AgilityDigit",
25
+ 6: "Crate",
26
+ 7: "Basket",
27
+ 8: "KLTBin",
28
+ 9: "Cone",
29
+ 10: "Rack"
30
+ }
31
+ map_sub_class_to_primary_class = {
32
+ "person": "Person",
33
+ "transporter": "Transporter",
34
+ "nova_carter": "NovaCarter",
35
+ "novacarter": "NovaCarter",
36
+ "forklift": "Forklift",
37
+ "box": "Box",
38
+ "cardbox": "Box",
39
+ "flatbox": "Box",
40
+ "multidepthbox": "Box",
41
+ "printersbox": "Box",
42
+ "cubebox": "Box",
43
+ "whitecorrugatedbox": "Box",
44
+ "longbox": "Box",
45
+ "basket": "Basket",
46
+ "exportpallet": "Pallet",
47
+ "blockpallet": "Pallet",
48
+ "pallet": "Pallet",
49
+ "woodencrate": "Crate",
50
+ "klt_bin": "KLTBin",
51
+ "cone": "Cone",
52
+ "rack": "Rack"
53
+ }
MTMC_Tracking_2025/eval/utils/io_utils.py ADDED
@@ -0,0 +1,352 @@
1
+ import os
2
+ import re
3
+ import json
4
+ import argparse
5
+ import logging
6
+ from typing import Any, Dict
7
+ from utils.classes import CLASS_LIST, map_sub_class_to_primary_class, map_class_id_to_class_name
8
+
9
+
10
+ logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S", level=logging.INFO)
11
+
12
+
13
+
14
+ class ValidateFile(argparse.Action):
15
+ """
16
+ Custom argparse action to validate file paths.
17
+ """
18
+
19
+ def __call__(self, parser, namespace, values, option_string=None):
20
+ # Validate the file path format
21
+ file_path_pattern = r"^[a-zA-Z0-9_\-\/.#+]+$"
22
+ if not re.match(file_path_pattern, values):
23
+ parser.error(f"Invalid file path: {values}")
24
+
25
+ # Check if the file exists
26
+ if not os.path.exists(values):
27
+ parser.error(f"File {values} does NOT exist.")
28
+
29
+ # Check if the file is readable
30
+ if not os.access(values, os.R_OK):
31
+ parser.error(f"File {values} is NOT readable.")
32
+
33
+ # Set the validated file path in the namespace
34
+ setattr(namespace, self.dest, values)
35
+
36
+
37
+ def validate_file_path(input_string: str) -> str:
38
+ """
39
+ Validates whether the input string matches a file path pattern
40
+
41
+ :param str input_string: input string
42
+ :return: validated file path
43
+ :rtype: str
44
+ ::
45
+
46
+ file_path = validate_file_path(input_string)
47
+ """
48
+ file_path_pattern = r"^[a-zA-Z0-9_\-\/.#+]+$"
49
+ if re.match(file_path_pattern, input_string):
50
+ return input_string
51
+ else:
52
+ raise ValueError(f"Invalid file path: {input_string}")
53
+
54
+
55
+ def sanitize_string(input_string: str) -> str:
56
+ """
57
+ Sanitizes an input string
58
+
59
+ :param str input_string: input string
60
+ :return: sanitized string
61
+ :rtype: str
62
+ ::
63
+
64
+ sanitized_string = sanitize_string(input_string)
65
+ """
66
+ # Allow alphanumeric characters, dots, slashes, underscores, hashes, and dashes
67
+ return re.sub(r"[^a-zA-Z0-9\._/#-]", "_", input_string)
68
+
69
+
70
+ def make_dir(dir_path: str) -> None:
71
+ """
72
+ Safely create a directory.
73
+ """
74
+ valid_dir_path = validate_file_path(dir_path)
75
+ if os.path.islink(valid_dir_path):
76
+ raise ValueError(f"Directory path {dir_path} must not be a symbolic link.")
77
+
78
+ try:
79
+ if not os.path.isdir(valid_dir_path):
80
+ os.makedirs(valid_dir_path)
81
+ except OSError as e:
82
+ raise ValueError(f"Failed to create directory {dir_path}: {e}")
83
+
84
+ def load_json_from_file(file_path: str) -> Any:
85
+ """
86
+ Safely loads JSON data from a file.
87
+ """
88
+ valid_file_path = validate_file_path(file_path)
89
+ try:
90
+ with open(valid_file_path, "r") as f:
91
+ return json.load(f)
92
+ except json.JSONDecodeError as e:
93
+ raise ValueError(f"Invalid JSON format in file {file_path}: {e}")
94
+ except Exception as e:
95
+ raise ValueError(f"An error occurred while loading file {file_path}: {e}")
96
+
97
+
98
+ def split_files_per_scene(gt_path: str, pred_path: str, output_base_dir: str, scene_id_2_scene_name: Dict[int, str], num_frames_to_eval: int = 9000):
99
+ """
100
+ Splits GT and Pred files per scene, saving them into separate directories.
101
+
102
+ :param gt_path: Path to the ground truth text file.
103
+ :param pred_path: Path to the predictions text file.
104
+ :param output_base_dir: Base directory to save split files.
105
+ """
106
+ # Create output base directory
107
+ os.makedirs(output_base_dir, exist_ok=True)
108
+
109
+ gt_scenes = set()
110
+ pred_scenes = set()
111
+ # convert to int
112
+ valid_scene_ids = set(int(scene_id) for scene_id in scene_id_2_scene_name.keys())
113
+
114
+
115
+ # Process GT data
116
+ scene_gt_writers = {}
117
+ with open(gt_path, "r") as gt_file:
118
+ for line in gt_file:
119
+ line_split = line.split(" ")
120
+ scene_id = int(line_split[0])
121
+ gt_scenes.add(scene_id)
122
+ if scene_id not in scene_gt_writers:
123
+ os.makedirs(os.path.join(output_base_dir, f"scene_{scene_id}"), exist_ok=True)
124
+ scene_gt_writers[scene_id] = open(os.path.join(output_base_dir, f"scene_{scene_id}", "gt.txt"), "w")
125
+ scene_gt_writers[scene_id].write(line)
126
+
127
+ # Close all GT writers
128
+ for writer in scene_gt_writers.values():
129
+ writer.close()
130
+
131
+ # convert gt_scenes to a list and sort it
132
+ gt_scenes = list(gt_scenes)
133
+ gt_scenes.sort()
134
+ logging.info(f"Found scenes {gt_scenes} in ground truth.")
135
+
136
+ # Process Pred data
137
+ scene_pred_writers = {}
138
+ with open(pred_path, "r") as pred_file:
139
+ for line in pred_file:
140
+ line_split = line.split(" ")
141
+
142
+ # Validate line length
143
+ if len(line_split) != 11:
144
+ raise ValueError(f"Found incorrect entry in predictions. Each entry should have 11 elements: (scene_id class_id object_id frame_id x y z width length height yaw)")
145
+
146
+ # Validate scene id
147
+ scene_id = int(line_split[0])
148
+ if scene_id not in valid_scene_ids:
149
+ raise ValueError(f"Found incorrect scene id in predictions: {scene_id}. Valid scene ids are: {valid_scene_ids}, defined by the scene_id_2_scene_name json file")
150
+
151
+ # Validate class id
152
+ class_id = int(line_split[1])
153
+ if class_id not in map_class_id_to_class_name:
154
+ raise ValueError(f"Found incorrect class id in predictions: {class_id}. Valid class ids are: {map_class_id_to_class_name.keys()}")
155
+
156
+ # Validate object id
157
+ object_id = int(line_split[2])
158
+ if object_id < 0:
159
+ raise ValueError(f"Found incorrect object id in predictions: {object_id}. Object id should be positive.")
160
+
161
+ # Validate frame id
162
+ frame_id = int(line_split[3])
163
+ if frame_id < 0:
164
+ raise ValueError(f"Found incorrect frame id in predictions: {frame_id}. Frame id should be 0 or positive.")
165
+ if frame_id >= num_frames_to_eval:
166
+ continue
167
+
168
+ pred_scenes.add(scene_id)
169
+ if scene_id not in scene_pred_writers:
170
+ os.makedirs(os.path.join(output_base_dir, f"scene_{scene_id}"), exist_ok=True)
171
+ scene_pred_writers[scene_id] = open(os.path.join(output_base_dir, f"scene_{scene_id}", "pred.txt"), "w")
172
+ scene_pred_writers[scene_id].write(line)
173
+
174
+ # Close all Pred writers
175
+ for writer in scene_pred_writers.values():
176
+ writer.close()
177
+
178
+ # convert gt_scenes to a list and sort it
179
+ pred_scenes = list(pred_scenes)
180
+ pred_scenes.sort()
181
+ logging.info(f"Found scenes {pred_scenes} in predictions.")
182
+
183
+
184
+ def split_files_per_class(gt_path: str, pred_path: str, output_base_dir: str, confidence_threshold: float = 0.0, num_frames_to_eval:int = 20000, ground_truth_frame_offset_secs: float = 0.0, fps: float = 30.0):
185
+ """
186
+ Splits GT and Pred files per class, saving them into separate directories.
187
+
188
+ :param gt_path: Path to the ground truth text file.
189
+ :param pred_path: Path to the predictions text file.
190
+ :param output_base_dir: Base directory to save split files.
191
+ """
192
+ # Create output base directory
193
+ os.makedirs(output_base_dir, exist_ok=True)
194
+
195
+ gt_classes = set()
196
+ pred_classes = set()
197
+
198
+ # Process GT data
199
+ class_gt_writers = {}
200
+ with open(gt_path, "r") as gt_file:
201
+ for line in gt_file:
202
+ line_split = line.split(" ")
203
+ class_id = int(line_split[1])
204
+ class_name = map_class_id_to_class_name[class_id]
205
+ gt_classes.add(class_name)
206
+ if class_name not in class_gt_writers:
207
+ os.makedirs(os.path.join(output_base_dir, class_name), exist_ok=True)
208
+ class_gt_writers[class_name] = open(os.path.join(output_base_dir, class_name, "gt.txt"), "w")
209
+ class_gt_writers[class_name].write(line)
210
+
211
+ # Close all GT writers
212
+ for writer in class_gt_writers.values():
213
+ writer.close()
214
+
215
+ # convert gt_classes to a list and sort it
216
+ gt_classes = list(gt_classes)
217
+ gt_classes.sort()
218
+ logging.info(f"Found classes {gt_classes} in ground truth.")
219
+
220
+ # Process Pred data
221
+ class_pred_writers = {}
222
+ with open(pred_path, "r") as pred_file:
223
+ for line in pred_file:
224
+ line_split = line.split(" ")
225
+ class_id = int(line_split[1])
226
+ class_name = map_class_id_to_class_name[class_id]
227
+ pred_classes.add(class_name)
228
+ if class_name not in class_pred_writers:
229
+ os.makedirs(os.path.join(output_base_dir, class_name), exist_ok=True)
230
+ class_pred_writers[class_name] = open(os.path.join(output_base_dir, class_name, "pred.txt"), "w")
231
+ class_pred_writers[class_name].write(line)
232
+
233
+ # Close all Pred writers
234
+ for writer in class_pred_writers.values():
235
+ writer.close()
236
+
237
+ # convert gt_classes to a list and sort it
238
+ pred_classes = list(pred_classes)
239
+ pred_classes.sort()
240
+ logging.info(f"Found classes {pred_classes} in predictions.")
241
+
242
+
243
+ def get_no_of_objects_per_scene(gt_path: str, scene_id_2_scene_name: Dict[int, str]):
244
+ """
245
+ Get the number of objects per scene in the ground truth file.
246
+ """
247
+ no_of_objects_per_scene = {}
248
+ with open(gt_path, "r") as gt_file:
249
+ for line in gt_file:
250
+ line_split = line.split(" ")
251
+ scene_id = line_split[0]
252
+ if scene_id not in scene_id_2_scene_name:
253
+ continue
254
+ scene_name = scene_id_2_scene_name[scene_id]
255
+ if scene_name not in no_of_objects_per_scene:
256
+ no_of_objects_per_scene[scene_name] = 0
257
+ no_of_objects_per_scene[scene_name] += 1
258
+ return no_of_objects_per_scene
259
+
260
+
261
+ def split_files_by_sensor(gt_path: str, pred_path: str, output_base_dir: str, map_camera_name_to_bev_name, confidence_threshold, num_frames_to_eval):
262
+ """
263
+ Splits GT and Pred files by sensor and saves them into separate directories.
264
+ :param gt_path: Path to the ground truth JSON file.
265
+ :param pred_path: Path to the predictions JSON file.
266
+ :param output_base_dir: Base directory to save split files.
267
+ """
268
+ # Create output base directory
269
+ os.makedirs(output_base_dir, exist_ok=True)
270
+
271
+ # Set to keep track of unique sensor IDs
272
+ gt_sensors = set()
273
+ pred_sensors = set()
274
+
275
+ # Create writers for GT data
276
+ sensor_gt_writers = {}
277
+ with open(gt_path, "r") as gt_file:
278
+ for line in gt_file:
279
+
280
+ if '"' not in line and "'" in line:
281
+ line = line.replace("'", '"')
282
+
283
+ data = json.loads(line)
284
+
285
+ # Only eval frames below num_frames_to_eval
286
+ if int(data['id']) >= num_frames_to_eval:
287
+ continue
288
+
289
+ cam_sensor_name = data['sensorId']
290
+
291
+ # Convert camera id to BEV sensor id
292
+ bev_sensor_names = map_camera_name_to_bev_name[cam_sensor_name]
293
+ for bev_sensor_name in bev_sensor_names:
294
+
295
+ gt_sensors.add(bev_sensor_name)
296
+ sensor_dir = os.path.join(output_base_dir, bev_sensor_name)
297
+ os.makedirs(sensor_dir, exist_ok=True)
298
+ gt_file_path = os.path.join(sensor_dir, "gt.json")
299
+
300
+ if bev_sensor_name not in sensor_gt_writers:
301
+ sensor_gt_writers[bev_sensor_name] = open(gt_file_path, "w")
302
+
303
+ sensor_gt_writers[bev_sensor_name].write(json.dumps(data) + "\n")
304
+
305
+ # Close all GT writers
306
+ for writer in sensor_gt_writers.values():
307
+ writer.close()
308
+
309
+ # Log found BEV sensors in GT
310
+ logging.info(f"Found BEV sensors: {', '.join(sorted(gt_sensors))} in ground truth file.")
311
+
312
+ # Create writers for Pred data
313
+ sensor_pred_writers = {}
314
+ with open(pred_path, "r") as pred_file:
315
+ for line in pred_file:
316
+
317
+ if '"' not in line and "'" in line:
318
+ line = line.replace("'", '"')
319
+ data = json.loads(line)
320
+
321
+ # Only eval frames below num_frames_to_eval
322
+ if int(data['id']) >= num_frames_to_eval:
323
+ continue
324
+
325
+ sensor_name = data['sensorId']
326
+ pred_sensors.add(sensor_name)
327
+ sensor_dir = os.path.join(output_base_dir, sensor_name)
328
+ os.makedirs(sensor_dir, exist_ok=True)
329
+
330
+ if sensor_name not in sensor_pred_writers:
331
+ pred_file_path = os.path.join(sensor_dir, "pred.json")
332
+ sensor_pred_writers[sensor_name] = open(pred_file_path, "w")
333
+
334
+ filtered_objects = []
335
+ for obj in data["objects"]:
336
+ # Get the confidence value from bbox3d.
337
+ confidence = obj["bbox3d"]["confidence"]
338
+ if confidence >= confidence_threshold:
339
+ filtered_objects.append(obj)
340
+
341
+ # Replace the "objects" list with the filtered version.
342
+ data["objects"] = filtered_objects
343
+
344
+ sensor_pred_writers[sensor_name].write(json.dumps(data) + "\n")
345
+
346
+ # Close all Pred writers
347
+ for writer in sensor_pred_writers.values():
348
+ writer.close()
349
+
350
+ # Log found BEV sensors in Prediction
351
+ logging.info(f"Found BEV sensors: {', '.join(sorted(pred_sensors))} in prediction file.")
352
+ print("")
MTMC_Tracking_2025/eval/utils/trackeval/__init__.py ADDED
@@ -0,0 +1,6 @@
1
+ """MTMC analytics trackeval modules"""
2
+ from .eval import Evaluator
3
+ from . import datasets
4
+ from . import metrics
5
+ from . import plotting
6
+ from . import utils
MTMC_Tracking_2025/eval/utils/trackeval/_timing.py ADDED
@@ -0,0 +1,81 @@
1
+ from functools import wraps
2
+ from time import perf_counter
3
+ import inspect
4
+
5
+ DO_TIMING = False
6
+ DISPLAY_LESS_PROGRESS = False
7
+ timer_dict = {}
8
+ counter = 0
9
+
10
+
11
+ def time(f):
12
+ """
13
+ Decorator function for timing the execution of a function.
14
+
15
+ :param f: The function to be timed.
16
+ :type f: function
17
+ :return: A wrapped function that measures the execution time of the original function.
18
+ :rtype: function
19
+
20
+ The wrapped function measures the execution time of the original function `f`. If the `DO_TIMING` flag is set to
21
+ `True`, the wrapped function records the accumulated time for each function and provides timing analysis when the
22
+ code is finished. If the flag is set to `False` or certain conditions are met, the wrapped function runs the
23
+ original function without timing.
24
+
25
+ Note that the timing analysis is printed to the console. Modify the implementation to save the timing information
26
+ in a different format or location if desired.
27
+ """
28
+ @wraps(f)
29
+ def wrap(*args, **kw):
30
+ if DO_TIMING:
31
+ # Run function with timing
32
+ ts = perf_counter()
33
+ result = f(*args, **kw)
34
+ te = perf_counter()
35
+ tt = te-ts
36
+
37
+ # Get function name
38
+ arg_names = inspect.getfullargspec(f)[0]
39
+ if arg_names[0] == 'self' and DISPLAY_LESS_PROGRESS:
40
+ return result
41
+ elif arg_names[0] == 'self':
42
+ method_name = type(args[0]).__name__ + '.' + f.__name__
43
+ else:
44
+ method_name = f.__name__
45
+
46
+ # Record accumulative time in each function for analysis
47
+ if method_name in timer_dict.keys():
48
+ timer_dict[method_name] += tt
49
+ else:
50
+ timer_dict[method_name] = tt
51
+
52
+ # If code is finished, display timing summary
53
+ if method_name == "Evaluator.evaluate":
54
+ print("")
55
+ print("Timing analysis:")
56
+ for key, value in timer_dict.items():
57
+ print('%-70s %2.4f sec' % (key, value))
58
+ else:
59
+ # Get function argument values for printing special arguments of interest
60
+ arg_titles = ['tracker', 'seq', 'cls']
61
+ arg_vals = []
62
+ for i, a in enumerate(arg_names):
63
+ if a in arg_titles:
64
+ arg_vals.append(args[i])
65
+ arg_text = '(' + ', '.join(arg_vals) + ')'
66
+
67
+ # Display methods and functions with different indentation.
68
+ if arg_names[0] == 'self':
69
+ print('%-74s %2.4f sec' % (' '*4 + method_name + arg_text, tt))
70
+ elif arg_names[0] == 'test':
71
+ pass
72
+ else:
73
+ global counter
74
+ counter += 1
75
+ print('%i %-70s %2.4f sec' % (counter, method_name + arg_text, tt))
76
+
77
+ return result
78
+ else:
79
+ # If config["TIME_PROGRESS"] is false, or config["USE_PARALLEL"] is true, run functions normally without timing.
80
+ return f(*args, **kw)
81
+ return wrap
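A small sketch of how the `@_timing.time` decorator above could be exercised. `load_sequence` is a hypothetical function; `DO_TIMING` is flipped on the module because the wrapper reads that global at call time:

```python
from utils.trackeval import _timing

_timing.DO_TIMING = True  # the wrapper checks this flag on every call

@_timing.time
def load_sequence(seq):  # hypothetical function; 'seq' is one of the printed arg titles
    return seq

load_sequence("scene_001")
# Prints a numbered timing line in the style of:
# 1 load_sequence(scene_001)    0.0000 sec
```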
MTMC_Tracking_2025/eval/utils/trackeval/datasets/__init__.py ADDED
@@ -0,0 +1,5 @@
1
+ """MTMC analytics datasets modules"""
2
+ from .mot_challenge_2d_box import MotChallenge2DBox
3
+ from .mot_challenge_3d_location import MotChallenge3DLocation
4
+ from .mtmc_challenge_3d_bbox import MTMCChallenge3DBBox
5
+ from .mtmc_challenge_3d_location import MTMCChallenge3DLocation
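The four dataset classes exported above share one evaluation pipeline and differ in input format and similarity measure. A hypothetical helper for selecting one from a config string (the `EVAL_MODE` key is an illustration, not a documented option):

```python
from utils.trackeval import datasets

DATASET_BY_MODE = {
    "2d_box": datasets.MotChallenge2DBox,
    "3d_location": datasets.MotChallenge3DLocation,
    "mtmc_3d_bbox": datasets.MTMCChallenge3DBBox,
    "mtmc_3d_location": datasets.MTMCChallenge3DLocation,
}

def make_dataset(config):
    # Instantiation validates that GT and tracker files exist on disk.
    return DATASET_BY_MODE[config["EVAL_MODE"]](config)
```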
MTMC_Tracking_2025/eval/utils/trackeval/datasets/_base_dataset.py ADDED
@@ -0,0 +1,485 @@
1
+ import csv
2
+ import io
3
+ import zipfile
4
+ import os
5
+ import traceback
6
+ import numpy as np
7
+ from copy import deepcopy
8
+ from abc import ABC, abstractmethod
9
+ import sys
10
+ import torch
11
+ import math
12
+ sys.path.append('..')
13
+ sys.path.append('../..')
14
+ sys.path.append('../../..')
15
+ from pytorch3d.ops import box3d_overlap
16
+
17
+ from utils.trackeval import _timing
18
+ from utils.trackeval.utils import TrackEvalException
19
+
20
+
21
+ class _BaseDataset(ABC):
22
+ """
23
+ Module to create a skeleton of dataset formats
24
+ """
25
+ @abstractmethod
26
+ def __init__(self):
27
+ self.tracker_list = None
28
+ self.seq_list = None
29
+ self.class_list = None
30
+ self.output_fol = None
31
+ self.output_sub_fol = None
32
+ self.should_classes_combine = True
33
+ self.use_super_categories = False
34
+
35
+ @staticmethod
36
+ @abstractmethod
37
+ def get_default_dataset_config():
38
+ ...
39
+
40
+ @abstractmethod
41
+ def _load_raw_file(self, tracker, seq, is_gt):
42
+ ...
43
+
44
+ @_timing.time
45
+ @abstractmethod
46
+ def get_preprocessed_seq_data(self, raw_data, cls):
47
+ ...
48
+
49
+ @abstractmethod
50
+ def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
51
+ ...
52
+
53
+ @classmethod
54
+ def get_class_name(cls):
55
+ return cls.__name__
56
+
57
+ def get_name(self):
58
+ return self.get_class_name()
59
+
60
+ def get_output_fol(self, tracker):
61
+ return os.path.join(self.output_fol, tracker, self.output_sub_fol)
62
+
63
+ def get_display_name(self, tracker):
64
+ """
65
+ Can be overwritten if the tracker's name (in files) is different from how it should be displayed.
66
+ By default this method just returns the tracker's name as is.
67
+
68
+ :param tracker: name of tracker
69
+ :return: the tracker name unchanged
70
+ """
71
+ return tracker
72
+
73
+ def get_eval_info(self):
74
+ """Return info about the dataset needed for the Evaluator
75
+
76
+ :return: List[str] tracker_list: list of all trackers
77
+ :return: List[str] seq_list: list of all sequences
78
+ :return: List[str] class_list: list of all classes
79
+ """
80
+ return self.tracker_list, self.seq_list, self.class_list
81
+
82
+ @_timing.time
83
+ def get_raw_seq_data(self, tracker, seq):
84
+ """ Loads raw data (tracker and ground-truth) for a single tracker on a single sequence.
85
+ Raw data includes all of the information needed for both preprocessing and evaluation, for all classes.
86
+ A later function (get_preprocessed_seq_data) will perform such preprocessing and extract relevant information for
87
+ the evaluation of each class.
88
+
89
+ This returns a dict which contains the fields:
90
+ [num_timesteps]: integer
91
+ [gt_ids, tracker_ids, gt_classes, tracker_classes, tracker_confidences]:
92
+ list (for each timestep) of 1D NDArrays (for each det).
93
+ [gt_dets, tracker_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.
94
+ [similarity_scores]: list (for each timestep) of 2D NDArrays.
95
+ [gt_extras]: dict (for each extra) of lists (for each timestep) of 1D NDArrays (for each det).
96
+
97
+ gt_extras contains dataset specific information used for preprocessing such as occlusion and truncation levels.
98
+
99
+ Note that similarities are extracted as part of the dataset and not the metric, because almost all metrics are
100
+ independent of the exact method of calculating the similarity. However datasets are not (e.g. segmentation
101
+ masks vs 2D boxes vs 3D boxes).
102
+ We calculate the similarity before preprocessing because often both preprocessing and evaluation require it and
103
+ we don't wish to calculate this twice.
104
+ We calculate similarity between all gt and tracker classes (not just each class individually) to allow for
105
+ calculation of metrics such as class confusion matrices. Typically the impact of this on performance is low.
106
+
107
+ :param str tracker: name of tracker
108
+ :param str seq: name of sequence
109
+ :return: raw_data: dict of loaded data, including similarity scores among all gt & tracker classes
110
+ """
111
+ # Load raw data.
112
+ raw_gt_data = self._load_raw_file(tracker, seq, is_gt=True)
113
+ raw_tracker_data = self._load_raw_file(tracker, seq, is_gt=False)
114
+ raw_data = {**raw_tracker_data, **raw_gt_data} # Merges dictionaries
115
+
116
+ # Calculate similarities for each timestep.
117
+ similarity_scores = []
118
+ for t, (gt_dets_t, tracker_dets_t) in enumerate(zip(raw_data['gt_dets'], raw_data['tracker_dets'])):
119
+ ious = self._calculate_similarities(gt_dets_t, tracker_dets_t)
120
+ similarity_scores.append(ious)
121
+ raw_data['similarity_scores'] = similarity_scores
122
+ return raw_data
123
+
124
+ @staticmethod
125
+ def _load_simple_text_file(file, time_col=0, id_col=None, remove_negative_ids=False, valid_filter=None,
126
+ crowd_ignore_filter=None, convert_filter=None, is_zipped=False, zip_file=None,
127
+ force_delimiters=None):
128
+ """ Function that loads data which is in a commonly used text file format.
129
+ Assumes each det is given by one row of a text file.
130
+ There is no limit to the number or meaning of each column,
131
+ however one column needs to give the timestep of each det (time_col) which is default col 0.
132
+
133
+ The file dialect (delimiter, num cols, etc.) is determined automatically.
134
+ This function automatically separates dets by timestep,
135
+ and is much faster than alternatives such as np.loadtxt or pandas.
136
+
137
+ If remove_negative_ids is True and id_col is not None, dets with negative values in id_col are excluded.
138
+ These are not excluded from ignore data.
139
+
140
+ valid_filter can be used to only include certain classes.
141
+ It is a dict with ints as keys, and lists as values,
142
+ such that a row is included if "row[key].lower() is in value" for all key/value pairs in the dict.
143
+ If None, all classes are included.
144
+
145
+ crowd_ignore_filter can be used to read crowd_ignore regions separately. It has the same format as valid filter.
146
+
147
+ convert_filter can be used to convert value read to another format.
148
+ This is used most commonly to convert classes given as string to a class id.
149
+ This is a dict such that the key is the column to convert, and the value is another dict giving the mapping.
150
+
151
+ Optionally, input files could be a zip of multiple text files for storage efficiency.
152
+
153
+ Returns read_data and ignore_data.
154
+ Each is a dict (with timesteps as string keys) of lists (over dets) of lists (over column values).
155
+ Note that all data is returned as strings, and must be converted to float/int later if needed.
156
+ Note that timesteps will not be present in the returned dict keys if there are no dets for them.
157
+
158
+ :param str file: Path to the input text file or the name of the file within the zip file (if is_zipped is True).
159
+ :param int time_col: Index of the column containing the timestep of each detection, defaults to 0.
160
+ :param int id_col: Index of the column containing the ID of each detection, defaults to None.
161
+ :param bool remove_negative_ids: Whether to exclude dets with negative IDs, defaults to False.
162
+ :param dict valid_filter: Dictionary to include only certain classes, defaults to None.
163
+ :param dict crowd_ignore_filter: Dictionary to read crowd_ignore regions separately, defaults to None.
164
+ :param dict convert_filter: Dictionary to convert values read to another format, defaults to None.
165
+ :param bool is_zipped: Whether the input file is a zip file, defaults to False.
166
+ :param str zip_file: Path to the zip file (if is_zipped is True), defaults to None.
167
+ :param list force_delimiters: List of potential delimiters to override the automatic delimiter detection, defaults to None.
168
+ :raises TrackEvalException: If remove_negative_ids is True but id_col is not given, or if there's an error reading the file.
169
+ :return: A tuple containing read_data and crowd_ignore_data dictionaries.
170
+ read_data: dictionary with timesteps as keys (strings) and lists (over detections) of lists (over column values).
171
+ crowd_ignore_data: dictionary with timesteps as keys (strings) and lists (over detections) of lists (over column values).
172
+ :rtype: tuple
173
+ """
174
+
175
+ if remove_negative_ids and id_col is None:
176
+ raise TrackEvalException('remove_negative_ids is True, but id_col is not given.')
177
+ if crowd_ignore_filter is None:
178
+ crowd_ignore_filter = {}
179
+ if convert_filter is None:
180
+ convert_filter = {}
181
+ try:
182
+ if is_zipped: # Either open file directly or within a zip.
183
+ if zip_file is None:
184
+ raise TrackEvalException('is_zipped set to True, but no zip_file is given.')
185
+ archive = zipfile.ZipFile(os.path.join(zip_file), 'r')
186
+ fp = io.TextIOWrapper(archive.open(file, 'r'))
187
+ else:
188
+ fp = open(file)
189
+ read_data = {}
190
+ crowd_ignore_data = {}
191
+ fp.seek(0, os.SEEK_END)
192
+ # check if file is empty
193
+ if fp.tell():
194
+ fp.seek(0)
195
+ dialect = csv.Sniffer().sniff(fp.readline(), delimiters=force_delimiters) # Auto determine structure.
196
+ dialect.skipinitialspace = True # Deal with extra spaces between columns
197
+ fp.seek(0)
198
+ reader = csv.reader(fp, dialect)
199
+ for row in reader:
200
+ try:
201
+ # Deal with extra trailing spaces at the end of rows
202
+ if row[-1] == '':
203
+ row = row[:-1]
204
+ timestep = str(int(float(row[time_col])))
205
+ # Read ignore regions separately.
206
+ is_ignored = False
207
+ for ignore_key, ignore_value in crowd_ignore_filter.items():
208
+ if row[ignore_key].lower() in ignore_value:
209
+ # Convert values in one column (e.g. string to id)
210
+ for convert_key, convert_value in convert_filter.items():
211
+ row[convert_key] = convert_value[row[convert_key].lower()]
212
+ # Save data separated by timestep.
213
+ if timestep in crowd_ignore_data.keys():
214
+ crowd_ignore_data[timestep].append(row)
215
+ else:
216
+ crowd_ignore_data[timestep] = [row]
217
+ is_ignored = True
218
+ if is_ignored: # if det is an ignore region, it cannot be a normal det.
219
+ continue
220
+ # Exclude some dets if not valid.
221
+ if valid_filter is not None:
222
+ for key, value in valid_filter.items():
223
+ if row[key].lower() not in value:
224
+ continue
225
+ if remove_negative_ids:
226
+ if int(float(row[id_col])) < 0:
227
+ continue
228
+ # Convert values in one column (e.g. string to id)
229
+ for convert_key, convert_value in convert_filter.items():
230
+ row[convert_key] = convert_value[row[convert_key].lower()]
231
+ # Save data separated by timestep.
232
+ if timestep in read_data.keys():
233
+ read_data[timestep].append(row)
234
+ else:
235
+ read_data[timestep] = [row]
236
+ except Exception:
237
+ exc_str_init = 'In file %s the following line cannot be read correctly: \n' % os.path.basename(
238
+ file)
239
+ exc_str = ' '.join([exc_str_init]+row)
240
+ raise TrackEvalException(exc_str)
241
+ fp.close()
242
+ except Exception:
243
+ print('Error loading file: %s, printing traceback.' % file)
244
+ traceback.print_exc()
245
+ raise TrackEvalException(
246
+ 'File %s cannot be read because it is either not present or invalidly formatted' % os.path.basename(
247
+ file))
248
+ return read_data, crowd_ignore_data
249
+
250
+ @staticmethod
251
+ def _calculate_mask_ious(masks1, masks2, is_encoded=False, do_ioa=False):
252
+ """ Calculates the IOU (intersection over union) between two arrays of segmentation masks.
253
+ If is_encoded, a run length encoding with pycocotools is assumed as the input format; otherwise an input of numpy
254
+ arrays of the shape (num_masks, height, width) is assumed and the encoding is performed.
255
+ If do_ioa (intersection over area), this calculates the intersection over the area of masks1 - this is commonly
256
+ used to determine if detections are within crowd ignore region.
257
+ :param masks1: first set of masks (numpy array of shape (num_masks, height, width) if not encoded,
258
+ else pycocotools rle encoded format)
259
+ :param masks2: second set of masks (numpy array of shape (num_masks, height, width) if not encoded,
260
+ else pycocotools rle encoded format)
261
+ :param is_encoded: whether the input is in pycocotools rle encoded format
262
+ :param do_ioa: whether to perform IoA computation
263
+ :return: the IoU/IoA scores
264
+ """
265
+
266
+ # Only loaded when run to reduce minimum requirements
267
+ from pycocotools import mask as mask_utils
268
+
269
+ # use pycocotools for run length encoding of masks
270
+ if not is_encoded:
271
+ masks1 = mask_utils.encode(np.array(np.transpose(masks1, (1, 2, 0)), order='F'))
272
+ masks2 = mask_utils.encode(np.array(np.transpose(masks2, (1, 2, 0)), order='F'))
273
+
274
+ # use pycocotools for iou computation of rle encoded masks
275
+ ious = mask_utils.iou(masks1, masks2, [do_ioa]*len(masks2))
276
+ if len(masks1) == 0 or len(masks2) == 0:
277
+ ious = np.asarray(ious).reshape(len(masks1), len(masks2))
278
+ assert (ious >= 0 - np.finfo('float').eps).all()
279
+ assert (ious <= 1 + np.finfo('float').eps).all()
280
+
281
+ return ious
282
+
283
+ @staticmethod
284
+ def _calculate_box_ious(bboxes1, bboxes2, box_format='xywh', do_ioa=False):
285
+ """ Calculates the IOU (intersection over union) between two arrays of boxes.
286
+ Allows variable box formats ('xywh' and 'x0y0x1y1').
287
+ If do_ioa (intersection over area), this calculates the intersection over the area of boxes1 - this is commonly
288
+ used to determine if detections are within crowd ignore region.
289
+
290
+ :param bboxes1: first list of bounding boxes
291
+ :param bboxes2: second list of bounding boxes
292
+ :return: ious: the IoU/IoA scores
293
+ """
294
+ if box_format == 'xywh':
295
+ # layout: (x0, y0, w, h)
296
+ bboxes1 = deepcopy(bboxes1)
297
+ bboxes2 = deepcopy(bboxes2)
298
+
299
+ bboxes1[:, 2] = bboxes1[:, 0] + bboxes1[:, 2]
300
+ bboxes1[:, 3] = bboxes1[:, 1] + bboxes1[:, 3]
301
+ bboxes2[:, 2] = bboxes2[:, 0] + bboxes2[:, 2]
302
+ bboxes2[:, 3] = bboxes2[:, 1] + bboxes2[:, 3]
303
+ elif box_format != 'x0y0x1y1':
304
+ raise (TrackEvalException('box_format %s is not implemented' % box_format))
305
+
306
+ # layout: (x0, y0, x1, y1)
307
+ min_ = np.minimum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
308
+ max_ = np.maximum(bboxes1[:, np.newaxis, :], bboxes2[np.newaxis, :, :])
309
+ intersection = np.maximum(min_[..., 2] - max_[..., 0], 0) * np.maximum(min_[..., 3] - max_[..., 1], 0)
310
+ area1 = (bboxes1[..., 2] - bboxes1[..., 0]) * (bboxes1[..., 3] - bboxes1[..., 1])
311
+
312
+ if do_ioa:
313
+ ioas = np.zeros_like(intersection)
314
+ valid_mask = area1 > 0 + np.finfo('float').eps
315
+ ioas[valid_mask, :] = intersection[valid_mask, :] / area1[valid_mask][:, np.newaxis]
316
+
317
+ return ioas
318
+ else:
319
+ area2 = (bboxes2[..., 2] - bboxes2[..., 0]) * (bboxes2[..., 3] - bboxes2[..., 1])
320
+ union = area1[:, np.newaxis] + area2[np.newaxis, :] - intersection
321
+ intersection[area1 <= 0 + np.finfo('float').eps, :] = 0
322
+ intersection[:, area2 <= 0 + np.finfo('float').eps] = 0
323
+ intersection[union <= 0 + np.finfo('float').eps] = 0
324
+ union[union <= 0 + np.finfo('float').eps] = 1
325
+ ious = intersection / union
326
+ return ious
327
+
328
+ @staticmethod
329
+ def _calculate_3DBBox_ious(bboxes1, bboxes2):
330
+ """ Calculates the IOU (intersection over union) between two arrays of boxes.
331
+ Box format supported: x, y, z, width, length, height, pitch, roll, yaw
332
+
333
+ :param bboxes1: first list of 3D bounding boxes
334
+ :param bboxes2: second list of 3D bounding boxes
335
+ :return: ious: the IoU scores
336
+ """
337
+
338
+ def euler_angles_to_rotation_matrix(pitch, roll, yaw):
339
+ """
340
+ Compute rotation matrix R for 3D rotation with:
341
+ - pitch about X
342
+ - roll about Y
343
+ - yaw about Z
344
+
345
+ Angles are in radians.
346
+ The final rotation is Rz(yaw) * Ry(roll) * Rx(pitch).
347
+ """
348
+ # Use NumPy trig functions
349
+ cx, sx = np.cos(pitch), np.sin(pitch)
350
+ cy, sy = np.cos(roll), np.sin(roll)
351
+ cz, sz = np.cos(yaw), np.sin(yaw)
352
+
353
+ # Rotation about X (pitch)
354
+ Rx = np.array([
355
+ [1, 0, 0],
356
+ [0, cx, -sx],
357
+ [0, sx, cx],
358
+ ], dtype=np.float64)
359
+
360
+ # Rotation about Y (roll)
361
+ Ry = np.array([
362
+ [ cy, 0, sy],
363
+ [ 0, 1, 0],
364
+ [-sy, 0, cy],
365
+ ], dtype=np.float64)
366
+
367
+ # Rotation about Z (yaw)
368
+ Rz = np.array([
369
+ [ cz, -sz, 0],
370
+ [ sz, cz, 0],
371
+ [ 0, 0, 1],
372
+ ], dtype=np.float64)
373
+
374
+ # Final rotation = Rz * Ry * Rx
375
+ return Rz @ Ry @ Rx # (3 x 3)
376
+
377
+ def _obb_to_corners(box_params):
378
+ """
379
+ Convert boxes in parametric form (B, 9):
380
+ [x, y, z, width, length, height, pitch, roll, yaw]
381
+ to corners of shape (B, 8, 3).
382
+ """
383
+ B = box_params.shape[0]
384
+ unit_corners = np.array([
385
+ [0, 0, 0], # (0)
386
+ [1, 0, 0], # (1)
387
+ [1, 1, 0], # (2)
388
+ [0, 1, 0], # (3)
389
+ [0, 0, 1], # (4)
390
+ [1, 0, 1], # (5)
391
+ [1, 1, 1], # (6)
392
+ [0, 1, 1], # (7)
393
+ ], dtype=np.float64) # (8, 3)
394
+
395
+ # Prepare an output tensor for corners
396
+ corners_out = np.zeros((B, 8, 3), dtype=np.float64)
397
+
398
+ for i in range(B):
399
+ x, y, z = box_params[i, 0:3]
400
+ w, l, h = box_params[i, 3:6]
401
+ pitch, roll, yaw = box_params[i, 6], box_params[i, 7], box_params[i, 8]
402
+ local_corners = unit_corners.copy()
403
+ local_corners[:, 0] *= w
404
+ local_corners[:, 1] *= l
405
+ local_corners[:, 2] *= h
406
+
407
+ # Shift so the center is at (0,0,0):
408
+ local_corners[:, 0] -= w / 2.0
409
+ local_corners[:, 1] -= l / 2.0
410
+ local_corners[:, 2] -= h / 2.0
411
+
412
+ # Build rotation matrix
413
+ R = euler_angles_to_rotation_matrix(pitch, roll, yaw) # (3,3)
414
+
415
+ # Rotate
416
+ local_corners = local_corners @ R.T # (8, 3)
417
+
418
+ # Translate to world coords
419
+ local_corners[:, 0] += x
420
+ local_corners[:, 1] += y
421
+ local_corners[:, 2] += z
422
+
423
+ corners_out[i] = local_corners
424
+
425
+ return corners_out
426
+
427
+ M = bboxes1.shape[0]
428
+ N = bboxes2.shape[0]
429
+ if M == 0 or N == 0:
430
+ return np.zeros((M, N), dtype=np.float64)
431
+
432
+ corners1 = _obb_to_corners(bboxes1) # (M, 8, 3)
433
+ corners2 = _obb_to_corners(bboxes2) # (N, 8, 3)
434
+
435
+ corners1 = torch.from_numpy(corners1).float()
436
+ corners2 = torch.from_numpy(corners2).float()
437
+
438
+ intersection_vol, iou_3d = box3d_overlap(corners1, corners2)
439
+
440
+ return iou_3d.cpu().detach().numpy()
441
+
442
+
443
+ @staticmethod
444
+ def _calculate_euclidean_similarity(dets1, dets2, zero_distance):
445
+ """ Calculates the euclidean distance between two sets of detections, and then converts this into a similarity
446
+ measure with values between 0 and 1 using the following formula: sim = max(0, 1 - dist/zero_distance).
447
+ The default zero_distance of 2.0 corresponds to the default used in MOT15_3D, such that a 0.5 similarity
448
+ threshold corresponds to a 1m distance threshold for TPs.
449
+
450
+ :param dets1: first list of detections
451
+ :param dets2: second list of detections
452
+ :return: sim: the similarity score
453
+ """
454
+ dist = np.linalg.norm(dets1[:, np.newaxis]-dets2[np.newaxis, :], axis=2)
455
+ sim = np.maximum(0, 1 - dist/zero_distance)
456
+ return sim
457
+
458
+ @staticmethod
459
+ def _check_unique_ids(data, after_preproc=False):
460
+ """Check the requirement that the tracker_ids and gt_ids are unique per timestep"""
461
+ gt_ids = data['gt_ids']
462
+ tracker_ids = data['tracker_ids']
463
+ for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(gt_ids, tracker_ids)):
464
+ if len(tracker_ids_t) > 0:
465
+ unique_ids, counts = np.unique(tracker_ids_t, return_counts=True)
466
+ if np.max(counts) != 1:
467
+ duplicate_ids = unique_ids[counts > 1]
468
+ exc_str_init = 'Tracker predicts the same ID more than once in a single timestep ' \
469
+ '(seq: %s, frame: %i, ids:' % (data['seq'], t+1)
470
+ exc_str = ' '.join([exc_str_init] + [str(d) for d in duplicate_ids]) + ')'
471
+ if after_preproc:
472
+ exc_str_init += '\n Note that this error occurred after preprocessing (but not before), ' \
473
+ 'so ids may not be as in file, and something seems wrong with preproc.'
474
+ raise TrackEvalException(exc_str)
475
+ if len(gt_ids_t) > 0:
476
+ unique_ids, counts = np.unique(gt_ids_t, return_counts=True)
477
+ if np.max(counts) != 1:
478
+ duplicate_ids = unique_ids[counts > 1]
479
+ exc_str_init = 'Ground-truth has the same ID more than once in a single timestep ' \
480
+ '(seq: %s, frame: %i, ids:' % (data['seq'], t+1)
481
+ exc_str = ' '.join([exc_str_init] + [str(d) for d in duplicate_ids]) + ')'
482
+ if after_preproc:
483
+ exc_str_init += '\n Note that this error occurred after preprocessing (but not before), ' \
484
+ 'so ids may not be as in file, and something seems wrong with preproc.'
485
+ raise TrackEvalException(exc_str)
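As a numeric sanity check of `_calculate_euclidean_similarity` above, here is the `sim = max(0, 1 - dist / zero_distance)` formula evaluated on toy points (a standalone sketch, not part of the module):

```python
import numpy as np

dets1 = np.array([[0.0, 0.0], [3.0, 4.0]])  # two ground-truth (x, y) locations
dets2 = np.array([[0.0, 1.0]])              # one predicted location
zero_distance = 2.0                         # MOT15_3D default

dist = np.linalg.norm(dets1[:, np.newaxis] - dets2[np.newaxis, :], axis=2)
sim = np.maximum(0, 1 - dist / zero_distance)
print(sim)  # [[0.5], [0.0]]: a 1 m error scores 0.5; ~4.1 m clips to 0
```

At the default `zero_distance` of 2.0, a 0.5 similarity threshold therefore corresponds to a 1 m distance threshold for true positives.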
MTMC_Tracking_2025/eval/utils/trackeval/datasets/mot_challenge_2d_box.py ADDED
@@ -0,0 +1,471 @@
1
+ import os
2
+ import csv
3
+ import configparser
4
+ import numpy as np
5
+ from scipy.optimize import linear_sum_assignment
6
+ from utils.trackeval import utils
7
+ from utils.trackeval import _timing
8
+ from utils.trackeval.utils import TrackEvalException
9
+ from utils.trackeval.datasets._base_dataset import _BaseDataset
10
+
11
+
12
+ class MotChallenge2DBox(_BaseDataset):
13
+ """
14
+ Dataset class for MOT Challenge 2D bounding box tracking
15
+
16
+ :param dict config: configuration for the app
17
+ ::
18
+
19
+ default_dataset = trackeval.datasets.MotChallenge2DBox(config)
20
+ """
21
+ @staticmethod
22
+ def get_default_dataset_config():
23
+ """Default class config values"""
24
+ code_path = utils.get_code_path()
25
+ default_config = {
26
+ 'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'), # Location of GT data
27
+ 'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'), # Trackers location
28
+ 'OUTPUT_FOLDER': None, # Where to save eval results (if None, same as TRACKERS_FOLDER)
29
+ 'TRACKERS_TO_EVAL': None, # Filenames of trackers to eval (if None, all in folder)
30
+ 'CLASSES_TO_EVAL': ['class'], # Valid: ['class']
31
+ 'BENCHMARK': 'MOT17', # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
32
+ 'SPLIT_TO_EVAL': 'train', # Valid: 'train', 'test', 'all'
33
+ 'INPUT_AS_ZIP': False, # Whether tracker input files are zipped
34
+ 'PRINT_CONFIG': True, # Whether to print current config
35
+ 'DO_PREPROC': True, # Whether to perform preprocessing (never done for MOT15)
36
+ 'TRACKER_SUB_FOLDER': 'data', # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
37
+ 'OUTPUT_SUB_FOLDER': '', # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
38
+ 'TRACKER_DISPLAY_NAMES': None, # Names of trackers to display, if None: TRACKERS_TO_EVAL
39
+ 'SEQMAP_FOLDER': None, # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
40
+ 'SEQMAP_FILE': None, # Directly specify seqmap file (if none use seqmap_folder/benchmark-split_to_eval)
41
+ 'SEQ_INFO': None, # If not None, directly specify sequences to eval and their number of timesteps
42
+ 'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt', # '{gt_folder}/{seq}/gt/gt.txt'
43
+ 'SKIP_SPLIT_FOL': False, # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in
44
+ # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/
45
+ # If True, then the middle 'benchmark-split' folder is skipped for both.
46
+ }
47
+ return default_config
48
+
49
+ def __init__(self, config=None):
50
+ """Initialise dataset, checking that all required files are present"""
51
+ super().__init__()
52
+ # Fill non-given config values with defaults
53
+ self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())
54
+
55
+ self.benchmark = self.config['BENCHMARK']
56
+ gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
57
+ self.gt_set = gt_set
58
+ if not self.config['SKIP_SPLIT_FOL']:
59
+ split_fol = gt_set
60
+ else:
61
+ split_fol = ''
62
+ self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)
63
+ self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)
64
+ self.should_classes_combine = False
65
+ self.use_super_categories = False
66
+ self.data_is_zipped = self.config['INPUT_AS_ZIP']
67
+ self.do_preproc = self.config['DO_PREPROC']
68
+
69
+ self.output_fol = self.config['OUTPUT_FOLDER']
70
+ if self.output_fol is None:
71
+ self.output_fol = self.tracker_fol
72
+
73
+ self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']
74
+ self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']
75
+
76
+ # Get classes to eval
77
+ self.valid_classes = ['class']
78
+ self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
79
+ for cls in self.config['CLASSES_TO_EVAL']]
80
+ if not all(self.class_list):
81
+ raise TrackEvalException('Attempted to evaluate an invalid class. Only class class is valid.')
82
+ self.class_name_to_class_id = {'class': 1, 'box': 2, 'car': 3, 'bicycle': 4, 'motorbike': 5,
83
+ 'non_mot_vehicle': 6, 'static_person': 7, 'distractor': 8, 'occluder': 9,
84
+ 'occluder_on_ground': 10, 'occluder_full': 11, 'reflection': 12, 'crowd': 13}
85
+ self.valid_class_numbers = list(self.class_name_to_class_id.values())
86
+
87
+ # Get sequences to eval and check gt files exist
88
+ self.seq_list, self.seq_lengths = self._get_seq_info()
89
+ if len(self.seq_list) < 1:
90
+ raise TrackEvalException('No sequences are selected to be evaluated.')
91
+
92
+ # Check gt files exist
93
+ for seq in self.seq_list:
94
+ if not self.data_is_zipped:
95
+ curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
96
+ if not os.path.isfile(curr_file):
97
+ print('GT file not found ' + curr_file)
98
+ raise TrackEvalException('GT file not found for sequence: ' + seq)
99
+ if self.data_is_zipped:
100
+ curr_file = os.path.join(self.gt_fol, 'data.zip')
101
+ if not os.path.isfile(curr_file):
102
+ print('GT file not found ' + curr_file)
103
+ raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))
104
+
105
+ # Get trackers to eval
106
+ if self.config['TRACKERS_TO_EVAL'] is None:
107
+ self.tracker_list = os.listdir(self.tracker_fol)
108
+ else:
109
+ self.tracker_list = self.config['TRACKERS_TO_EVAL']
110
+
111
+ if self.config['TRACKER_DISPLAY_NAMES'] is None:
112
+ self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))
113
+ elif (self.config['TRACKERS_TO_EVAL'] is not None) and (
114
+ len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):
115
+ self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))
116
+ else:
117
+ raise TrackEvalException('List of tracker files and tracker display names do not match.')
118
+
119
+ for tracker in self.tracker_list:
120
+ if self.data_is_zipped:
121
+ curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
122
+ if not os.path.isfile(curr_file):
123
+ print('Tracker file not found: ' + curr_file)
124
+ raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))
125
+ else:
126
+ for seq in self.seq_list:
127
+ curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
128
+ if not os.path.isfile(curr_file):
129
+ print('Tracker file not found: ' + curr_file)
130
+ raise TrackEvalException(
131
+ 'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(
132
+ curr_file))
133
+
134
+ def get_display_name(self, tracker):
135
+ """
136
+ Gets the display name of the tracker
137
+
138
+ :param str tracker: Name of the tracker
139
+ :return: str
140
+ ::
141
+
142
+ dataset.get_display_name(tracker)
143
+ """
144
+
145
+ return self.tracker_to_disp[tracker]
146
+
147
+ def _get_seq_info(self):
148
+ seq_list = []
149
+ seq_lengths = {}
150
+ if self.config["SEQ_INFO"]:
151
+ seq_list = list(self.config["SEQ_INFO"].keys())
152
+ seq_lengths = self.config["SEQ_INFO"]
153
+
154
+ # If sequence length is 'None' tries to read sequence length from .ini files.
155
+ for seq, seq_length in seq_lengths.items():
156
+ if seq_length is None:
157
+ ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
158
+ if not os.path.isfile(ini_file):
159
+ raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
160
+ ini_data = configparser.ConfigParser()
161
+ ini_data.read(ini_file)
162
+ seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
163
+
164
+ else:
165
+ if self.config["SEQMAP_FILE"]:
166
+ seqmap_file = self.config["SEQMAP_FILE"]
167
+ else:
168
+ if self.config["SEQMAP_FOLDER"] is None:
169
+ seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')
170
+ else:
171
+ seqmap_file = os.path.join(self.config["SEQMAP_FOLDER"], self.gt_set + '.txt')
172
+ if not os.path.isfile(seqmap_file):
173
+ print('no seqmap found: ' + seqmap_file)
174
+ raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))
175
+ with open(seqmap_file) as fp:
176
+ reader = csv.reader(fp)
177
+ for i, row in enumerate(reader):
178
+ if i == 0 or row[0] == '':
179
+ continue
180
+ seq = row[0]
181
+ seq_list.append(seq)
182
+ ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
183
+ if not os.path.isfile(ini_file):
184
+ raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
185
+ ini_data = configparser.ConfigParser()
186
+ ini_data.read(ini_file)
187
+ seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
188
+ return seq_list, seq_lengths
189
+
190
+ def _load_raw_file(self, tracker, seq, is_gt):
191
+ """Load a file (gt or tracker) in the MOT Challenge 2D box format
192
+
193
+ If is_gt, this returns a dict which contains the fields:
194
+ [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).
195
+ [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.
196
+ [gt_extras] : list (for each timestep) of dicts (for each extra) of 1D NDArrays (for each det).
197
+
198
+ if not is_gt, this returns a dict which contains the fields:
199
+ [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).
200
+ [tracker_dets]: list (for each timestep) of lists of detections.
201
+
202
+ :param str tracker: Name of the tracker.
203
+ :param str seq: Sequence identifier.
204
+ :param bool is_gt: Indicates whether the file is ground truth or from a tracker.
205
+ :raises TrackEvalException: If there's an error loading the file or if the data is corrupted.
206
+ :return: dictionary containing the loaded data.
207
+ :rtype: dict
208
+ """
209
+ # File location
210
+ if self.data_is_zipped:
211
+ if is_gt:
212
+ zip_file = os.path.join(self.gt_fol, 'data.zip')
213
+ else:
214
+ zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
215
+ file = seq + '.txt'
216
+ else:
217
+ zip_file = None
218
+ if is_gt:
219
+ file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
220
+ else:
221
+ file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
222
+
223
+ # Load raw data from text file
224
+ read_data, ignore_data = self._load_simple_text_file(file, is_zipped=self.data_is_zipped, zip_file=zip_file)
225
+
226
+ # Convert data to required format
227
+ num_timesteps = self.seq_lengths[seq]
228
+ data_keys = ['ids', 'classes', 'dets']
229
+ if is_gt:
230
+ data_keys += ['gt_crowd_ignore_regions', 'gt_extras']
231
+ else:
232
+ data_keys += ['tracker_confidences']
233
+ raw_data = {key: [None] * num_timesteps for key in data_keys}
234
+
235
+ # Check for any extra time keys
236
+ current_time_keys = [str(t + 1) for t in range(num_timesteps)]
237
+ extra_time_keys = [x for x in read_data.keys() if x not in current_time_keys]
238
+ if len(extra_time_keys) > 0:
239
+ if is_gt:
240
+ text = 'Ground-truth'
241
+ else:
242
+ text = 'Tracking'
243
+ raise TrackEvalException(
244
+ text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(
245
+ [str(x) for x in extra_time_keys]))
246
+
247
+ for t in range(num_timesteps):
248
+ time_key = str(t+1)
249
+ if time_key in read_data.keys():
250
+ try:
251
+ time_data = np.asarray(read_data[time_key], dtype=float)
252
+ except ValueError:
253
+ if is_gt:
254
+ raise TrackEvalException(
255
+ 'Cannot convert gt data for sequence %s to float. Is data corrupted?' % seq)
256
+ else:
257
+ raise TrackEvalException(
258
+ 'Cannot convert tracking data from tracker %s, sequence %s to float. Is data corrupted?' % (
259
+ tracker, seq))
260
+ try:
261
+ raw_data['dets'][t] = np.atleast_2d(time_data[:, 2:6])
262
+ raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)
263
+ except IndexError:
264
+ if is_gt:
265
+ err = 'Cannot load gt data from sequence %s, because there are not enough ' \
266
+ 'columns in the data.' % seq
267
+ raise TrackEvalException(err)
268
+ else:
269
+ err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \
270
+ 'columns in the data.' % (tracker, seq)
271
+ raise TrackEvalException(err)
272
+ if time_data.shape[1] >= 8:
273
+ raw_data['classes'][t] = np.atleast_1d(time_data[:, 7]).astype(int)
274
+ else:
275
+ if not is_gt:
276
+ raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
277
+ else:
278
+ raise TrackEvalException(
279
+ 'GT data is not in a valid format, there are not enough rows in seq %s, timestep %i.' % (
280
+ seq, t))
281
+ if is_gt:
282
+ gt_extras_dict = {'zero_marked': np.atleast_1d(time_data[:, 6].astype(int))}
283
+ raw_data['gt_extras'][t] = gt_extras_dict
284
+ else:
285
+ raw_data['tracker_confidences'][t] = np.atleast_1d(time_data[:, 6])
286
+ else:
287
+ raw_data['dets'][t] = np.empty((0, 4))
288
+ raw_data['ids'][t] = np.empty(0).astype(int)
289
+ raw_data['classes'][t] = np.empty(0).astype(int)
290
+ if is_gt:
291
+ gt_extras_dict = {'zero_marked': np.empty(0)}
292
+ raw_data['gt_extras'][t] = gt_extras_dict
293
+ else:
294
+ raw_data['tracker_confidences'][t] = np.empty(0)
295
+ if is_gt:
296
+ raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 4))
297
+
298
+ if is_gt:
299
+ key_map = {'ids': 'gt_ids',
300
+ 'classes': 'gt_classes',
301
+ 'dets': 'gt_dets'}
302
+ else:
303
+ key_map = {'ids': 'tracker_ids',
304
+ 'classes': 'tracker_classes',
305
+ 'dets': 'tracker_dets'}
306
+ for k, v in key_map.items():
307
+ raw_data[v] = raw_data.pop(k)
308
+ raw_data['num_timesteps'] = num_timesteps
309
+ raw_data['seq'] = seq
310
+ return raw_data
311
+
312
+ @_timing.time
313
+ def get_preprocessed_seq_data(self, raw_data, cls):
314
+ """ Preprocess data for a single sequence for a single class ready for evaluation.
315
+ Inputs:
316
+ - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().
317
+ - cls is the class to be evaluated.
318
+ Outputs:
319
+ - data is a dict containing all of the information that metrics need to perform evaluation.
320
+ It contains the following fields:
321
+ [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.
322
+ [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).
323
+ [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.
324
+ [similarity_scores]: list (for each timestep) of 2D NDArrays.
325
+ Notes:
326
+ General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.
327
+ 1) Extract only detections relevant for the class to be evaluated (including distractor detections).
328
+ 2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a
329
+ distractor class, or otherwise marked as to be removed.
330
+ 3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain
331
+ other criteria (e.g. are too small).
332
+ 4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.
333
+ After the above preprocessing steps, this function also calculates the number of gt and tracker detections
334
+ and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are
335
+ unique within each timestep.
336
+
337
+ MOT Challenge:
338
+ In MOT Challenge, the 4 preproc steps are as follows:
339
+ 1) There is only one class (class) to be evaluated, but all other classes are used for preproc.
340
+ 2) Predictions are matched against all gt boxes (regardless of class), those matching with distractor
341
+ objects are removed.
342
+ 3) There are no crowd ignore regions.
344
+ 4) All gt dets except the evaluated class are removed; class gt dets marked as zero_marked are also removed.
344
+
345
+ :param raw_data: A dict containing the data for the sequence already read in by `get_raw_seq_data()`.
346
+ :param cls: The class to be evaluated.
347
+
348
+ :return: A dict containing all of the information that metrics need to perform evaluation.
349
+ It contains the following fields:
350
+ - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets]: Integers.
351
+ - [gt_ids, tracker_ids, tracker_confidences]: List (for each timestep) of 1D NDArrays (for each detection).
352
+ - [gt_dets, tracker_dets]: List (for each timestep) of lists of detections.
353
+ - [similarity_scores]: List (for each timestep) of 2D NDArrays.
354
+
355
+ """
356
+ # Check that input data has unique ids
357
+ self._check_unique_ids(raw_data)
358
+
359
+ distractor_class_names = ['box', 'static_person', 'distractor', 'reflection']
360
+ if self.benchmark == 'MOT20':
361
+ distractor_class_names.append('non_mot_vehicle')
362
+ distractor_classes = [self.class_name_to_class_id[x] for x in distractor_class_names]
363
+ cls_id = self.class_name_to_class_id[cls]
364
+
365
+ data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']
366
+ data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}
367
+ unique_gt_ids = []
368
+ unique_tracker_ids = []
369
+ num_gt_dets = 0
370
+ num_tracker_dets = 0
371
+ for t in range(raw_data['num_timesteps']):
372
+
373
+ # Get all data
374
+ gt_ids = raw_data['gt_ids'][t]
375
+ gt_dets = raw_data['gt_dets'][t]
376
+ gt_classes = raw_data['gt_classes'][t]
377
+ gt_zero_marked = raw_data['gt_extras'][t]['zero_marked']
378
+
379
+ tracker_ids = raw_data['tracker_ids'][t]
380
+ tracker_dets = raw_data['tracker_dets'][t]
381
+ tracker_classes = raw_data['tracker_classes'][t]
382
+ tracker_confidences = raw_data['tracker_confidences'][t]
383
+ similarity_scores = raw_data['similarity_scores'][t]
384
+
385
+ # Evaluation is ONLY valid for class class
386
+ if len(tracker_classes) > 0 and np.max(tracker_classes) > 1:
387
+ raise TrackEvalException(
388
+ 'Evaluation is only valid for class class. Non class class (%i) found in sequence %s at '
389
+ 'timestep %i.' % (np.max(tracker_classes), raw_data['seq'], t))
390
+
391
+ # Match tracker and gt dets (with hungarian algorithm) and remove tracker dets which match with gt dets
392
+ # which are labeled as belonging to a distractor class.
393
+ to_remove_tracker = np.array([], int)
394
+ if self.do_preproc and self.benchmark != 'MOT15' and gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:
395
+
396
+ # Check all classes are valid:
397
+ invalid_classes = np.setdiff1d(np.unique(gt_classes), self.valid_class_numbers)
398
+ if len(invalid_classes) > 0:
399
+ print(' '.join([str(x) for x in invalid_classes]))
400
+ raise(TrackEvalException('Attempting to evaluate using invalid gt classes. '
401
+ 'This warning only triggers if preprocessing is performed, '
402
+ 'e.g. not for MOT15 or where preprocessing is explicitly disabled. '
403
+ 'Please either check your gt data, or disable preprocessing. '
404
+ 'The following invalid classes were found in timestep ' + str(t) + ': ' +
405
+ ' '.join([str(x) for x in invalid_classes])))
406
+
407
+ matching_scores = similarity_scores.copy()
408
+ matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
409
+ match_rows, match_cols = linear_sum_assignment(-matching_scores)
410
+ actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps
411
+ match_rows = match_rows[actually_matched_mask]
412
+ match_cols = match_cols[actually_matched_mask]
413
+
414
+ is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)
415
+ to_remove_tracker = match_cols[is_distractor_class]
416
+
417
+ # Apply preprocessing to remove all unwanted tracker dets.
418
+ data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)
419
+ data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)
420
+ data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)
421
+ similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)
422
+
423
+ # Remove gt detections marked as to remove (zero marked), and also remove gt detections not in class
424
+ # class (not applicable for MOT15)
425
+ if self.do_preproc and self.benchmark != 'MOT15':
426
+ gt_to_keep_mask = (np.not_equal(gt_zero_marked, 0)) & \
427
+ (np.equal(gt_classes, cls_id))
428
+ else:
429
+ # There are no classes for MOT15
430
+ gt_to_keep_mask = np.not_equal(gt_zero_marked, 0)
431
+ data['gt_ids'][t] = gt_ids[gt_to_keep_mask]
432
+ data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]
433
+ data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]
434
+
435
+ unique_gt_ids += list(np.unique(data['gt_ids'][t]))
436
+ unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))
437
+ num_tracker_dets += len(data['tracker_ids'][t])
438
+ num_gt_dets += len(data['gt_ids'][t])
439
+
440
+ # Re-label IDs such that there are no empty IDs
441
+ if len(unique_gt_ids) > 0:
442
+ unique_gt_ids = np.unique(unique_gt_ids)
443
+ gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))
444
+ gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))
445
+ for t in range(raw_data['num_timesteps']):
446
+ if len(data['gt_ids'][t]) > 0:
447
+ data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)
448
+ if len(unique_tracker_ids) > 0:
449
+ unique_tracker_ids = np.unique(unique_tracker_ids)
450
+ tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))
451
+ tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))
452
+ for t in range(raw_data['num_timesteps']):
453
+ if len(data['tracker_ids'][t]) > 0:
454
+ data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)
455
+
456
+ # Record overview statistics.
457
+ data['num_tracker_dets'] = num_tracker_dets
458
+ data['num_gt_dets'] = num_gt_dets
459
+ data['num_tracker_ids'] = len(unique_tracker_ids)
460
+ data['num_gt_ids'] = len(unique_gt_ids)
461
+ data['num_timesteps'] = raw_data['num_timesteps']
462
+ data['seq'] = raw_data['seq']
463
+
464
+ # Ensure again that ids are unique per timestep after preproc.
465
+ self._check_unique_ids(data, after_preproc=True)
466
+
467
+ return data
468
+
469
+ def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
470
+ similarity_scores = self._calculate_box_ious(gt_dets_t, tracker_dets_t, box_format='xywh')
471
+ return similarity_scores
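`_calculate_similarities` above reduces to a plain 'xywh' box IoU. A trimmed standalone sketch of that computation (the `do_ioa=False` path only), handy for spot-checking a tracker file against GT boxes:

```python
import numpy as np

def box_iou_xywh(b1, b2):
    """IoU matrix for boxes given as (x0, y0, w, h); mirrors _calculate_box_ious."""
    b1, b2 = b1.copy(), b2.copy()
    b1[:, 2:] += b1[:, :2]  # convert (x0, y0, w, h) -> (x0, y0, x1, y1)
    b2[:, 2:] += b2[:, :2]
    mn = np.minimum(b1[:, None, :], b2[None, :, :])
    mx = np.maximum(b1[:, None, :], b2[None, :, :])
    inter = np.maximum(mn[..., 2] - mx[..., 0], 0) * np.maximum(mn[..., 3] - mx[..., 1], 0)
    area1 = (b1[:, 2] - b1[:, 0]) * (b1[:, 3] - b1[:, 1])
    area2 = (b2[:, 2] - b2[:, 0]) * (b2[:, 3] - b2[:, 1])
    union = area1[:, None] + area2[None, :] - inter
    return inter / np.maximum(union, np.finfo(float).eps)

print(box_iou_xywh(np.array([[0., 0., 2., 2.]]),
                   np.array([[1., 1., 2., 2.]])))  # [[0.14285714]] = 1/7
```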
MTMC_Tracking_2025/eval/utils/trackeval/datasets/mot_challenge_3d_location.py ADDED
@@ -0,0 +1,475 @@
1
+ import os
2
+ import csv
3
+ import configparser
4
+ import numpy as np
5
+ from scipy.optimize import linear_sum_assignment
6
+ from utils.trackeval import utils
7
+ from utils.trackeval import _timing
8
+ from utils.trackeval.utils import TrackEvalException
9
+ from utils.trackeval.datasets._base_dataset import _BaseDataset
10
+
11
+
12
+ class MotChallenge3DLocation(_BaseDataset):
13
+ """
14
+ Dataset class for MOT Challenge 3D tracking
15
+
16
+ :param dict config: configuration for the app
17
+ ::
18
+
19
+ default_dataset = trackeval.datasets.MotChallenge3DLocation(config)
20
+ """
21
+ @staticmethod
22
+ def get_default_dataset_config():
23
+ """Default class config values"""
24
+ code_path = utils.get_code_path()
25
+ default_config = {
26
+ 'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'), # Location of GT data
27
+ 'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'), # Trackers location
28
+ 'OUTPUT_FOLDER': None, # Where to save eval results (if None, same as TRACKERS_FOLDER)
29
+ 'TRACKERS_TO_EVAL': None, # Filenames of trackers to eval (if None, all in folder)
30
+ 'CLASSES_TO_EVAL': ['class'], # Valid: ['class']
31
+ 'BENCHMARK': 'MOT17', # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
32
+ 'SPLIT_TO_EVAL': 'train', # Valid: 'train', 'test', 'all'
33
+ 'INPUT_AS_ZIP': False, # Whether tracker input files are zipped
34
+ 'PRINT_CONFIG': True, # Whether to print current config
35
+ 'DO_PREPROC': True, # Whether to perform preprocessing (never done for MOT15)
36
+ 'TRACKER_SUB_FOLDER': 'data', # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
37
+ 'OUTPUT_SUB_FOLDER': '', # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
38
+ 'TRACKER_DISPLAY_NAMES': None, # Names of trackers to display, if None: TRACKERS_TO_EVAL
39
+ 'SEQMAP_FOLDER': None, # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
40
+ 'SEQMAP_FILE': None, # Directly specify seqmap file (if none use seqmap_folder/benchmark-split_to_eval)
41
+ 'SEQ_INFO': None, # If not None, directly specify sequences to eval and their number of timesteps
42
+ 'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt', # '{gt_folder}/{seq}/gt/gt.txt'
43
+ 'SKIP_SPLIT_FOL': False, # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in
44
+ # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/
45
+ # If True, then the middle 'benchmark-split' folder is skipped for both.
46
+ }
47
+ return default_config
48
+
49
+ def __init__(self, config=None, zd=2.0):
50
+ """Initialise dataset, checking that all required files are present"""
51
+ super().__init__()
52
+ # Fill non-given config values with defaults
53
+ self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())
54
+ self.zero_distance = zd
55
+ self.benchmark = self.config['BENCHMARK']
56
+ gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
57
+ self.gt_set = gt_set
58
+ if not self.config['SKIP_SPLIT_FOL']:
59
+ split_fol = gt_set
60
+ else:
61
+ split_fol = ''
62
+ self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)
63
+ self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)
64
+ self.should_classes_combine = False
65
+ self.use_super_categories = False
66
+ self.data_is_zipped = self.config['INPUT_AS_ZIP']
67
+ self.do_preproc = self.config['DO_PREPROC']
68
+
69
+ self.output_fol = self.config['OUTPUT_FOLDER']
70
+ if self.output_fol is None:
71
+ self.output_fol = self.tracker_fol
72
+
73
+ self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']
74
+ self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']
75
+
76
+ # Get classes to eval
77
+ self.valid_classes = ['class']
78
+ self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
79
+ for cls in self.config['CLASSES_TO_EVAL']]
80
+ if not all(self.class_list):
81
+ raise TrackEvalException('Attempted to evaluate an invalid class. Only class class is valid.')
82
+ self.class_name_to_class_id = {'class': 1, 'box': 2, 'car': 3, 'bicycle': 4, 'motorbike': 5,
83
+ 'non_mot_vehicle': 6, 'static_person': 7, 'distractor': 8, 'occluder': 9,
84
+ 'occluder_on_ground': 10, 'occluder_full': 11, 'reflection': 12, 'crowd': 13}
85
+ self.valid_class_numbers = list(self.class_name_to_class_id.values())
86
+
87
+ # Get sequences to eval and check gt files exist
88
+ self.seq_list, self.seq_lengths = self._get_seq_info()
89
+ if len(self.seq_list) < 1:
90
+ raise TrackEvalException('No sequences are selected to be evaluated.')
91
+
92
+ # Check gt files exist
93
+ for seq in self.seq_list:
94
+ if not self.data_is_zipped:
95
+ curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
96
+ if not os.path.isfile(curr_file):
97
+ print('GT file not found ' + curr_file)
98
+ raise TrackEvalException('GT file not found for sequence: ' + seq)
99
+ if self.data_is_zipped:
100
+ curr_file = os.path.join(self.gt_fol, 'data.zip')
101
+ if not os.path.isfile(curr_file):
102
+ print('GT file not found ' + curr_file)
103
+ raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))
104
+
105
+ # Get trackers to eval
106
+ if self.config['TRACKERS_TO_EVAL'] is None:
107
+ self.tracker_list = os.listdir(self.tracker_fol)
108
+ else:
109
+ self.tracker_list = self.config['TRACKERS_TO_EVAL']
110
+
111
+ if self.config['TRACKER_DISPLAY_NAMES'] is None:
112
+ self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))
113
+ elif (self.config['TRACKERS_TO_EVAL'] is not None) and (
114
+ len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):
115
+ self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))
116
+ else:
117
+ raise TrackEvalException('List of tracker files and tracker display names do not match.')
118
+
119
+ for tracker in self.tracker_list:
120
+ if self.data_is_zipped:
121
+ curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
122
+ if not os.path.isfile(curr_file):
123
+ print('Tracker file not found: ' + curr_file)
124
+ raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))
125
+ else:
126
+ for seq in self.seq_list:
127
+ curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
128
+ if not os.path.isfile(curr_file):
129
+ print('Tracker file not found: ' + curr_file)
130
+ raise TrackEvalException(
131
+ 'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(
132
+ curr_file))
133
+
134
+ def get_display_name(self, tracker):
135
+ """
136
+ Gets the display name of the tracker
137
+
138
+ :param str tracker: Name of the tracker
139
+ :return: str
140
+ ::
141
+
142
+ dataset.get_display_name(tracker)
143
+ """
144
+
145
+ return self.tracker_to_disp[tracker]
146
+
147
+ def _get_seq_info(self):
148
+ seq_list = []
149
+ seq_lengths = {}
150
+ if self.config["SEQ_INFO"]:
151
+ seq_list = list(self.config["SEQ_INFO"].keys())
152
+ seq_lengths = self.config["SEQ_INFO"]
153
+
154
+ # If sequence length is 'None' tries to read sequence length from .ini files.
155
+ for seq, seq_length in seq_lengths.items():
156
+ if seq_length is None:
157
+ ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
158
+ if not os.path.isfile(ini_file):
159
+ raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
160
+ ini_data = configparser.ConfigParser()
161
+ ini_data.read(ini_file)
162
+ seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
163
+
164
+ else:
165
+ if self.config["SEQMAP_FILE"]:
166
+ seqmap_file = self.config["SEQMAP_FILE"]
167
+ else:
168
+ if self.config["SEQMAP_FOLDER"] is None:
169
+ seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')
170
+ else:
171
+ seqmap_file = os.path.join(self.config["SEQMAP_FOLDER"], self.gt_set + '.txt')
172
+ if not os.path.isfile(seqmap_file):
173
+ print('no seqmap found: ' + seqmap_file)
174
+ raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))
175
+ with open(seqmap_file) as fp:
176
+ reader = csv.reader(fp)
177
+ for i, row in enumerate(reader):
178
+ if i == 0 or row[0] == '':
179
+ continue
180
+ seq = row[0]
181
+ seq_list.append(seq)
182
+ ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
183
+ if not os.path.isfile(ini_file):
184
+ raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
185
+ ini_data = configparser.ConfigParser()
186
+ ini_data.read(ini_file)
187
+ seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
188
+ return seq_list, seq_lengths
189
+
190
+ def _load_raw_file(self, tracker, seq, is_gt):
191
+ """Load a file (gt or tracker) in the MOT Challenge 3D location format
192
+
193
+ If is_gt, this returns a dict which contains the fields:
194
+ [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).
195
+ [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.
196
+ [gt_extras] : list (for each timestep) of dicts (for each extra) of 1D NDArrays (for each det).
197
+
198
+ if not is_gt, this returns a dict which contains the fields:
199
+ [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).
200
+ [tracker_dets]: list (for each timestep) of lists of detections.
201
+
202
+ :param str tracker: Name of the tracker.
203
+ :param str seq: Sequence identifier.
204
+ :param bool is_gt: Indicates whether the file is ground truth or from a tracker.
205
+ :raises TrackEvalException: If there's an error loading the file or if the data is corrupted.
206
+ :return: dictionary containing the loaded data.
207
+ :rtype: dict
208
+ """
209
+ # File location
210
+ if self.data_is_zipped:
211
+ if is_gt:
212
+ zip_file = os.path.join(self.gt_fol, 'data.zip')
213
+ else:
214
+ zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
215
+ file = seq + '.txt'
216
+ else:
217
+ zip_file = None
218
+ if is_gt:
219
+ file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
220
+ else:
221
+ file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
222
+
223
+ # Load raw data from text file
224
+ read_data, ignore_data = self._load_simple_text_file(file, is_zipped=self.data_is_zipped, zip_file=zip_file)
225
+
226
+ # Convert data to required format
227
+ num_timesteps = self.seq_lengths[seq]
228
+ data_keys = ['ids', 'classes', 'dets']
229
+ if is_gt:
230
+ data_keys += ['gt_crowd_ignore_regions', 'gt_extras']
231
+ else:
232
+ data_keys += ['tracker_confidences']
233
+ raw_data = {key: [None] * num_timesteps for key in data_keys}
234
+
235
+ # Check for any extra time keys
236
+ current_time_keys = [str( t+ 1) for t in range(num_timesteps)]
237
+ extra_time_keys = [x for x in read_data.keys() if x not in current_time_keys]
238
+ if len(extra_time_keys) > 0:
239
+ if is_gt:
240
+ text = 'Ground-truth'
241
+ else:
242
+ text = 'Tracking'
243
+ raise TrackEvalException(
244
+ text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(
245
+ [str(x) + ', ' for x in extra_time_keys]))
246
+
247
+ for t in range(num_timesteps):
248
+ time_key = str(t+1)
249
+ if time_key in read_data.keys():
250
+ try:
251
+ time_data = np.asarray(read_data[time_key], dtype=float)
252
+ except ValueError:
253
+ if is_gt:
254
+ raise TrackEvalException(
255
+ 'Cannot convert gt data for sequence %s to float. Is data corrupted?' % seq)
256
+ else:
257
+ raise TrackEvalException(
258
+ 'Cannot convert tracking data from tracker %s, sequence %s to float. Is data corrupted?' % (
259
+ tracker, seq))
260
+ try:
261
+ if is_gt:
262
+ raw_data['dets'][t] = np.atleast_2d(time_data[:, 7:9])
263
+ else:
264
+ raw_data['dets'][t] = np.atleast_2d(time_data[:, 7:9])
265
+ raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)
266
+ except IndexError:
267
+ if is_gt:
268
+ err = 'Cannot load gt data from sequence %s, because there is not enough ' \
269
+ 'columns in the data.' % seq
270
+ raise TrackEvalException(err)
271
+ else:
272
+ err = 'Cannot load tracker data from tracker %s, sequence %s, because there is not enough ' \
273
+ 'columns in the data.' % (tracker, seq)
274
+ raise TrackEvalException(err)
275
+ if time_data.shape[1] >= 8:
276
+ raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
277
+ # raw_data['classes'][t] = np.atleast_1d(time_data[:, 7]).astype(int)
278
+ else:
279
+ if not is_gt:
280
+ raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
281
+ else:
282
+ raise TrackEvalException(
283
+ 'GT data is not in a valid format, there is not enough rows in seq %s, timestep %i.' % (
284
+ seq, t))
285
+ if is_gt:
286
+ gt_extras_dict = {'zero_marked': np.atleast_1d(time_data[:, 6].astype(int))}
287
+ raw_data['gt_extras'][t] = gt_extras_dict
288
+ else:
289
+ raw_data['tracker_confidences'][t] = np.atleast_1d(time_data[:, 6])
290
+ else:
291
+ raw_data['dets'][t] = np.empty((0, 2))
292
+ raw_data['ids'][t] = np.empty(0).astype(int)
293
+ raw_data['classes'][t] = np.empty(0).astype(int)
294
+ if is_gt:
295
+ gt_extras_dict = {'zero_marked': np.empty(0)}
296
+ raw_data['gt_extras'][t] = gt_extras_dict
297
+ else:
298
+ raw_data['tracker_confidences'][t] = np.empty(0)
299
+ if is_gt:
300
+ raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 2))
301
+
302
+ if is_gt:
303
+ key_map = {'ids': 'gt_ids',
304
+ 'classes': 'gt_classes',
305
+ 'dets': 'gt_dets'}
306
+ else:
307
+ key_map = {'ids': 'tracker_ids',
308
+ 'classes': 'tracker_classes',
309
+ 'dets': 'tracker_dets'}
310
+ for k, v in key_map.items():
311
+ raw_data[v] = raw_data.pop(k)
312
+ raw_data['num_timesteps'] = num_timesteps
313
+ raw_data['seq'] = seq
314
+ return raw_data
315
+
316
+ @_timing.time
317
+ def get_preprocessed_seq_data(self, raw_data, cls):
318
+ """ Preprocess data for a single sequence for a single class ready for evaluation.
319
+ Inputs:
320
+ - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().
321
+ - cls is the class to be evaluated.
322
+ Outputs:
323
+ - data is a dict containing all of the information that metrics need to perform evaluation.
324
+ It contains the following fields:
325
+ [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.
326
+ [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).
327
+ [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.
328
+ [similarity_scores]: list (for each timestep) of 2D NDArrays.
329
+ Notes:
330
+ General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.
331
+ 1) Extract only detections relevant for the class to be evaluated (including distractor detections).
332
+ 2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a
333
+ distractor class, or otherwise marked as to be removed.
334
+ 3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain
335
+ other criteria (e.g. are too small).
336
+ 4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.
337
+ After the above preprocessing steps, this function also calculates the number of gt and tracker detections
338
+ and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are
339
+ unique within each timestep.
340
+
341
+ MOT Challenge:
342
+ In MOT Challenge, the 4 preproc steps are as follow:
343
+ 1) There is only one class (class) to be evaluated, but all other classes are used for preproc.
344
+ 2) Predictions are matched against all gt boxes (regardless of class), those matching with distractor
345
+ objects are removed.
346
+ 3) There is no crowd ignore regions.
347
+ 4) All gt dets except class are removed, also removes class gt dets marked with zero_marked.
348
+
349
+ :param raw_data: A dict containing the data for the sequence already read in by `get_raw_seq_data()`.
350
+ :param cls: The class to be evaluated.
351
+
352
+ :return: A dict containing all of the information that metrics need to perform evaluation.
353
+ It contains the following fields:
354
+ - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets]: Integers.
355
+ - [gt_ids, tracker_ids, tracker_confidences]: List (for each timestep) of 1D NDArrays (for each detection).
356
+ - [gt_dets, tracker_dets]: List (for each timestep) of lists of detections.
357
+ - [similarity_scores]: List (for each timestep) of 2D NDArrays.
358
+
359
+ """
360
+ # Check that input data has unique ids
361
+ self._check_unique_ids(raw_data)
362
+
363
+ distractor_class_names = ['box', 'static_person', 'distractor', 'reflection']
364
+ if self.benchmark == 'MOT20':
365
+ distractor_class_names.append('non_mot_vehicle')
366
+ distractor_classes = [self.class_name_to_class_id[x] for x in distractor_class_names]
367
+ cls_id = self.class_name_to_class_id[cls]
368
+
369
+ data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']
370
+ data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}
371
+ unique_gt_ids = []
372
+ unique_tracker_ids = []
373
+ num_gt_dets = 0
374
+ num_tracker_dets = 0
375
+ for t in range(raw_data['num_timesteps']):
376
+
377
+ # Get all data
378
+ gt_ids = raw_data['gt_ids'][t]
379
+ gt_dets = raw_data['gt_dets'][t]
380
+ gt_classes = raw_data['gt_classes'][t]
381
+ gt_zero_marked = raw_data['gt_extras'][t]['zero_marked']
382
+
383
+ tracker_ids = raw_data['tracker_ids'][t]
384
+ tracker_dets = raw_data['tracker_dets'][t]
385
+ tracker_classes = raw_data['tracker_classes'][t]
386
+ tracker_confidences = raw_data['tracker_confidences'][t]
387
+ similarity_scores = raw_data['similarity_scores'][t]
388
+
389
+ # Evaluation is ONLY valid for class class
390
+ if len(tracker_classes) > 0 and np.max(tracker_classes) > 1:
391
+ raise TrackEvalException(
392
+ 'Evaluation is only valid for class class. Non class class (%i) found in sequence %s at '
393
+ 'timestep %i.' % (np.max(tracker_classes), raw_data['seq'], t))
394
+
395
+ # Match tracker and gt dets (with hungarian algorithm) and remove tracker dets which match with gt dets
396
+ # which are labeled as belonging to a distractor class.
397
+ to_remove_tracker = np.array([], int)
398
+ if self.do_preproc and self.benchmark != 'MOT15' and gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:
399
+
400
+ # Check all classes are valid:
401
+ invalid_classes = np.setdiff1d(np.unique(gt_classes), self.valid_class_numbers)
402
+ if len(invalid_classes) > 0:
403
+ print(' '.join([str(x) for x in invalid_classes]))
404
+ raise(TrackEvalException('Attempting to evaluate using invalid gt classes. '
405
+ 'This warning only triggers if preprocessing is performed, '
406
+ 'e.g. not for MOT15 or where prepropressing is explicitly disabled. '
407
+ 'Please either check your gt data, or disable preprocessing. '
408
+ 'The following invalid classes were found in timestep ' + str(t) + ': ' +
409
+ ' '.join([str(x) for x in invalid_classes])))
410
+
411
+ matching_scores = similarity_scores.copy()
412
+ matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
413
+ match_rows, match_cols = linear_sum_assignment(-matching_scores)
414
+ actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps
415
+ match_rows = match_rows[actually_matched_mask]
416
+ match_cols = match_cols[actually_matched_mask]
417
+
418
+ is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)
419
+ to_remove_tracker = match_cols[is_distractor_class]
420
+
421
+ # Apply preprocessing to remove all unwanted tracker dets.
422
+ data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)
423
+ data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)
424
+ data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)
425
+ similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)
426
+
427
+ # Remove gt detections marked as to remove (zero marked), and also remove gt detections not in class
428
+ # class (not applicable for MOT15)
429
+ if self.do_preproc and self.benchmark != 'MOT15':
430
+ gt_to_keep_mask = (np.not_equal(gt_zero_marked, 0)) & \
431
+ (np.equal(gt_classes, cls_id))
432
+ else:
433
+ # There are no classes for MOT15
434
+ gt_to_keep_mask = np.not_equal(gt_zero_marked, 0)
435
+ data['gt_ids'][t] = gt_ids[gt_to_keep_mask]
436
+ data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]
437
+ data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]
438
+
439
+ unique_gt_ids += list(np.unique(data['gt_ids'][t]))
440
+ unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))
441
+ num_tracker_dets += len(data['tracker_ids'][t])
442
+ num_gt_dets += len(data['gt_ids'][t])
443
+
444
+ # Re-label IDs such that there are no empty IDs
445
+ if len(unique_gt_ids) > 0:
446
+ unique_gt_ids = np.unique(unique_gt_ids)
447
+ gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))
448
+ gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))
449
+ for t in range(raw_data['num_timesteps']):
450
+ if len(data['gt_ids'][t]) > 0:
451
+ data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)
452
+ if len(unique_tracker_ids) > 0:
453
+ unique_tracker_ids = np.unique(unique_tracker_ids)
454
+ tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))
455
+ tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))
456
+ for t in range(raw_data['num_timesteps']):
457
+ if len(data['tracker_ids'][t]) > 0:
458
+ data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)
459
+
460
+ # Record overview statistics.
461
+ data['num_tracker_dets'] = num_tracker_dets
462
+ data['num_gt_dets'] = num_gt_dets
463
+ data['num_tracker_ids'] = len(unique_tracker_ids)
464
+ data['num_gt_ids'] = len(unique_gt_ids)
465
+ data['num_timesteps'] = raw_data['num_timesteps']
466
+ data['seq'] = raw_data['seq']
467
+
468
+ # Ensure again that ids are unique per timestep after preproc.
469
+ self._check_unique_ids(data, after_preproc=True)
470
+
471
+ return data
472
+
473
+ def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
474
+ similarity_scores = self._calculate_euclidean_similarity(gt_dets_t, tracker_dets_t, zero_distance=self.zero_distance)
475
+ return similarity_scores
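Note: `_calculate_euclidean_similarity` is defined in `_base_dataset.py` and is not part of this diff. A minimal sketch of the similarity it computes, assuming the usual TrackEval form (pairwise distances mapped linearly to [0, 1], reaching 0 at `zero_distance`, which matches the `zd=2.0` constructor default):

```python
import numpy as np

def euclidean_similarity(gt_dets, tracker_dets, zero_distance=2.0):
    """Pairwise similarity in [0, 1]: 1 at distance 0, 0 at >= zero_distance."""
    # (M, 1, D) - (1, N, D) broadcasts to an (M, N) distance matrix.
    dists = np.linalg.norm(gt_dets[:, None, :] - tracker_dets[None, :, :], axis=2)
    return np.clip(1 - dists / zero_distance, 0, 1)

# Two GT locations vs. three predictions, in the same world coordinates
# that _load_raw_file reads into raw_data['dets'].
gt = np.array([[0.0, 0.0], [5.0, 5.0]])
pred = np.array([[0.5, 0.0], [5.0, 6.5], [100.0, 100.0]])
print(euclidean_similarity(gt, pred))
# [[0.75 0.   0.  ]
#  [0.   0.25 0.  ]]
```

With this mapping, the 0.5 threshold applied during preprocessing corresponds to predictions within half of `zero_distance` of a ground-truth location.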
MTMC_Tracking_2025/eval/utils/trackeval/datasets/mtmc_challenge_3d_bbox.py ADDED
@@ -0,0 +1,474 @@
+import os
+import csv
+import configparser
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+from utils.trackeval import utils
+from utils.trackeval import _timing
+from utils.trackeval.utils import TrackEvalException
+from utils.trackeval.datasets._base_dataset import _BaseDataset
+from pytorch3d.ops import box3d_overlap
+
+
+class MTMCChallenge3DBBox(_BaseDataset):
+    """
+    Dataset class for MOT Challenge 3D tracking
+    :param dict config: configuration for the app
+    ::
+
+        default_dataset = trackeval.datasets.MTMCChallenge3DBBox(config)
+    """
+    @staticmethod
+    def get_default_dataset_config():
+        """Default class config values"""
+        code_path = utils.get_code_path()
+        default_config = {
+            'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
+            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
+            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
+            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
+            'CLASSES_TO_EVAL': ['class'],  # Valid: ['class']
+            'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
+            'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'
+            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
+            'PRINT_CONFIG': True,  # Whether to print current config
+            'DO_PREPROC': True,  # Whether to perform preprocessing (never done for MOT15)
+            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
+            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
+            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
+            'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
+            'SEQMAP_FILE': None,  # Directly specify seqmap file (if none use seqmap_folder/benchmark-split_to_eval)
+            'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps
+            'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt',  # '{gt_folder}/{seq}/gt/gt.txt'
+            'SKIP_SPLIT_FOL': False,  # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in
+                                      # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/
+                                      # If True, then the middle 'benchmark-split' folder is skipped for both.
+        }
+        return default_config
+
+    def __init__(self, config=None, zd=2.0):
+        """Initialise dataset, checking that all required files are present"""
+        super().__init__()
+        # Fill non-given config values with defaults
+        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())
+        self.zero_distance = zd
+        self.benchmark = self.config['BENCHMARK']
+        gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
+        self.gt_set = gt_set
+        if not self.config['SKIP_SPLIT_FOL']:
+            split_fol = gt_set
+        else:
+            split_fol = ''
+        self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)
+        self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)
+        self.should_classes_combine = False
+        self.use_super_categories = False
+        self.data_is_zipped = self.config['INPUT_AS_ZIP']
+        self.do_preproc = self.config['DO_PREPROC']
+
+        self.output_fol = self.config['OUTPUT_FOLDER']
+        if self.output_fol is None:
+            self.output_fol = self.tracker_fol
+
+        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']
+        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']
+
+        # Get classes to eval
+        self.valid_classes = ['class']
+        self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
+                           for cls in self.config['CLASSES_TO_EVAL']]
+        if not all(self.class_list):
+            raise TrackEvalException('Attempted to evaluate an invalid class. Only class class is valid.')
+        self.class_name_to_class_id = {'class': 1, 'box': 2, 'car': 3, 'bicycle': 4, 'motorbike': 5,
+                                       'non_mot_vehicle': 6, 'static_person': 7, 'distractor': 8, 'occluder': 9,
+                                       'occluder_on_ground': 10, 'occluder_full': 11, 'reflection': 12, 'crowd': 13}
+        self.valid_class_numbers = list(self.class_name_to_class_id.values())
+
+        # Get sequences to eval and check gt files exist
+        self.seq_list, self.seq_lengths = self._get_seq_info()
+        if len(self.seq_list) < 1:
+            raise TrackEvalException('No sequences are selected to be evaluated.')
+
+        # Check gt files exist
+        for seq in self.seq_list:
+            if not self.data_is_zipped:
+                curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
+                if not os.path.isfile(curr_file):
+                    print('GT file not found ' + curr_file)
+                    raise TrackEvalException('GT file not found for sequence: ' + seq)
+        if self.data_is_zipped:
+            curr_file = os.path.join(self.gt_fol, 'data.zip')
+            if not os.path.isfile(curr_file):
+                print('GT file not found ' + curr_file)
+                raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))
+
+        # Get trackers to eval
+        if self.config['TRACKERS_TO_EVAL'] is None:
+            self.tracker_list = os.listdir(self.tracker_fol)
+        else:
+            self.tracker_list = self.config['TRACKERS_TO_EVAL']
+
+        if self.config['TRACKER_DISPLAY_NAMES'] is None:
+            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))
+        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (
+                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):
+            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))
+        else:
+            raise TrackEvalException('List of tracker files and tracker display names do not match.')
+
+        for tracker in self.tracker_list:
+            if self.data_is_zipped:
+                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
+                if not os.path.isfile(curr_file):
+                    print('Tracker file not found: ' + curr_file)
+                    raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))
+            else:
+                for seq in self.seq_list:
+                    curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
+                    if not os.path.isfile(curr_file):
+                        print('Tracker file not found: ' + curr_file)
+                        raise TrackEvalException(
+                            'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(
+                                curr_file))
+
+    def get_display_name(self, tracker):
+        """
+        Gets the display name of the tracker
+
+        :param str tracker: Name of the tracker
+        :return: str
+        ::
+
+            dataset.get_display_name(tracker)
+        """
+
+        return self.tracker_to_disp[tracker]
+
+    def _get_seq_info(self):
+        seq_list = []
+        seq_lengths = {}
+        if self.config["SEQ_INFO"]:
+            seq_list = list(self.config["SEQ_INFO"].keys())
+            seq_lengths = self.config["SEQ_INFO"]
+
+            # If a sequence length is 'None', try to read the sequence length from the .ini files.
+            for seq, seq_length in seq_lengths.items():
+                if seq_length is None:
+                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
+                    if not os.path.isfile(ini_file):
+                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
+                    ini_data = configparser.ConfigParser()
+                    ini_data.read(ini_file)
+                    seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
+
+        else:
+            if self.config["SEQMAP_FILE"]:
+                seqmap_file = self.config["SEQMAP_FILE"]
+            else:
+                if self.config["SEQMAP_FOLDER"] is None:
+                    seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')
+                else:
+                    seqmap_file = os.path.join(self.config["SEQMAP_FOLDER"], self.gt_set + '.txt')
+            if not os.path.isfile(seqmap_file):
+                print('no seqmap found: ' + seqmap_file)
+                raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))
+            with open(seqmap_file) as fp:
+                reader = csv.reader(fp)
+                for i, row in enumerate(reader):
+                    if i == 0 or row[0] == '':
+                        continue
+                    seq = row[0]
+                    seq_list.append(seq)
+                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
+                    if not os.path.isfile(ini_file):
+                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
+                    ini_data = configparser.ConfigParser()
+                    ini_data.read(ini_file)
+                    seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
+        return seq_list, seq_lengths
+
+    def _load_raw_file(self, tracker, seq, is_gt):
+        """Load a file (gt or tracker) in the MOT Challenge 3D location format
+
+        If is_gt, this returns a dict which contains the fields:
+        [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).
+        [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.
+        [gt_extras] : list (for each timestep) of dicts (for each extra) of 1D NDArrays (for each det).
+
+        if not is_gt, this returns a dict which contains the fields:
+        [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).
+        [tracker_dets]: list (for each timestep) of lists of detections.
+
+        :param str tracker: Name of the tracker.
+        :param str seq: Sequence identifier.
+        :param bool is_gt: Indicates whether the file is ground truth or from a tracker.
+        :raises TrackEvalException: If there is an error loading the file or if the data is corrupted.
+        :return: Dictionary containing the loaded data.
+        :rtype: dict
+        """
+        # File location
+        if self.data_is_zipped:
+            if is_gt:
+                zip_file = os.path.join(self.gt_fol, 'data.zip')
+            else:
+                zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
+            file = seq + '.txt'
+        else:
+            zip_file = None
+            if is_gt:
+                file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
+            else:
+                file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
+
+        # Load raw data from text file
+        read_data, ignore_data = self._load_simple_text_file(file, is_zipped=self.data_is_zipped, zip_file=zip_file)
+
+        # Convert data to required format
+        num_timesteps = self.seq_lengths[seq]
+        data_keys = ['ids', 'classes', 'dets']
+        if is_gt:
+            data_keys += ['gt_crowd_ignore_regions', 'gt_extras']
+        else:
+            data_keys += ['tracker_confidences']
+        raw_data = {key: [None] * num_timesteps for key in data_keys}
+
+        # Check for any extra time keys
+        current_time_keys = [str(t + 1) for t in range(num_timesteps)]
+        extra_time_keys = [x for x in read_data.keys() if x not in current_time_keys]
+        if len(extra_time_keys) > 0:
+            if is_gt:
+                text = 'Ground-truth'
+            else:
+                text = 'Tracking'
+            raise TrackEvalException(
+                text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(
+                    [str(x) + ', ' for x in extra_time_keys]))
+
+        for t in range(num_timesteps):
+            time_key = str(t + 1)
+            if time_key in read_data.keys():
+                try:
+                    time_data = np.asarray(read_data[time_key], dtype=float)
+                except ValueError:
+                    if is_gt:
+                        raise TrackEvalException(
+                            'Cannot convert gt data for sequence %s to float. Is data corrupted?' % seq)
+                    else:
+                        raise TrackEvalException(
+                            'Cannot convert tracking data from tracker %s, sequence %s to float. Is data corrupted?' % (
+                                tracker, seq))
+                try:
+                    if is_gt:
+                        raw_data['dets'][t] = np.atleast_2d(time_data[:, 3:12])
+                    else:
+                        raw_data['dets'][t] = np.atleast_2d(time_data[:, 3:12])
+                    raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)
+                except IndexError:
+                    if is_gt:
+                        err = 'Cannot load gt data from sequence %s, because there are not enough ' \
+                              'columns in the data.' % seq
+                        raise TrackEvalException(err)
+                    else:
+                        err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \
+                              'columns in the data.' % (tracker, seq)
+                        raise TrackEvalException(err)
+                if time_data.shape[1] >= 12:
+                    raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
+                    # raw_data['classes'][t] = np.atleast_1d(time_data[:, 7]).astype(int)
+                else:
+                    if not is_gt:
+                        raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
+                    else:
+                        raise TrackEvalException(
+                            'GT data is not in a valid format, there are not enough columns in seq %s, timestep %i.' % (
+                                seq, t))
+                if is_gt:
+                    gt_extras_dict = {'zero_marked': np.ones_like(time_data[:, 1], dtype=int)}
+                    raw_data['gt_extras'][t] = gt_extras_dict
+                else:
+                    raw_data['tracker_confidences'][t] = np.ones_like(time_data[:, 1])
+            else:
+                raw_data['dets'][t] = np.empty((0, 9))
+                raw_data['ids'][t] = np.empty(0).astype(int)
+                raw_data['classes'][t] = np.empty(0).astype(int)
+                if is_gt:
+                    gt_extras_dict = {'zero_marked': np.empty(0)}
+                    raw_data['gt_extras'][t] = gt_extras_dict
+                else:
+                    raw_data['tracker_confidences'][t] = np.empty(0)
+            if is_gt:
+                raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 9))
+
+        if is_gt:
+            key_map = {'ids': 'gt_ids',
+                       'classes': 'gt_classes',
+                       'dets': 'gt_dets'}
+        else:
+            key_map = {'ids': 'tracker_ids',
+                       'classes': 'tracker_classes',
+                       'dets': 'tracker_dets'}
+        for k, v in key_map.items():
+            raw_data[v] = raw_data.pop(k)
+        raw_data['num_timesteps'] = num_timesteps
+        raw_data['seq'] = seq
+        return raw_data
+
+    @_timing.time
+    def get_preprocessed_seq_data(self, raw_data, cls):
+        """ Preprocess data for a single sequence for a single class ready for evaluation.
+        Inputs:
+            - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().
+            - cls is the class to be evaluated.
+        Outputs:
+            - data is a dict containing all of the information that metrics need to perform evaluation.
+              It contains the following fields:
+                [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.
+                [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).
+                [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.
+                [similarity_scores]: list (for each timestep) of 2D NDArrays.
+        Notes:
+            General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.
+                1) Extract only detections relevant for the class to be evaluated (including distractor detections).
+                2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a
+                    distractor class, or otherwise marked as to be removed.
+                3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet certain
+                    other criteria (e.g. are too small).
+                4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.
+            After the above preprocessing steps, this function also calculates the number of gt and tracker detections
+                and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are
+                unique within each timestep.
+
+        MOT Challenge:
+            In MOT Challenge, the 4 preproc steps are as follows:
+                1) There is only one class (class) to be evaluated, but all other classes are used for preproc.
+                2) Predictions are matched against all gt boxes (regardless of class), those matching with distractor
+                    objects are removed.
+                3) There are no crowd ignore regions.
+                4) All gt dets except class are removed, also removes class gt dets marked with zero_marked.
+
+        :param raw_data: A dict containing the data for the sequence already read in by `get_raw_seq_data()`.
+        :param cls: The class to be evaluated.
+
+        :return: A dict containing all of the information that metrics need to perform evaluation.
+            It contains the following fields:
+            - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets]: Integers.
+            - [gt_ids, tracker_ids, tracker_confidences]: List (for each timestep) of 1D NDArrays (for each detection).
+            - [gt_dets, tracker_dets]: List (for each timestep) of lists of detections.
+            - [similarity_scores]: List (for each timestep) of 2D NDArrays.
+
+        """
+        # Check that input data has unique ids
+        self._check_unique_ids(raw_data)
+
+        distractor_class_names = ['box', 'static_person', 'distractor', 'reflection']
+        if self.benchmark == 'MOT20':
+            distractor_class_names.append('non_mot_vehicle')
+        distractor_classes = [self.class_name_to_class_id[x] for x in distractor_class_names]
+        cls_id = self.class_name_to_class_id[cls]
+
+        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']
+        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}
+        unique_gt_ids = []
+        unique_tracker_ids = []
+        num_gt_dets = 0
+        num_tracker_dets = 0
+        for t in range(raw_data['num_timesteps']):
+
+            # Get all data
+            gt_ids = raw_data['gt_ids'][t]
+            gt_dets = raw_data['gt_dets'][t]
+            gt_classes = raw_data['gt_classes'][t]
+            gt_zero_marked = raw_data['gt_extras'][t]['zero_marked']
+
+            tracker_ids = raw_data['tracker_ids'][t]
+            tracker_dets = raw_data['tracker_dets'][t]
+            tracker_classes = raw_data['tracker_classes'][t]
+            tracker_confidences = raw_data['tracker_confidences'][t]
+            similarity_scores = raw_data['similarity_scores'][t]
+
+            # Evaluation is ONLY valid for class class
+            if len(tracker_classes) > 0 and np.max(tracker_classes) > 1:
+                raise TrackEvalException(
+                    'Evaluation is only valid for class class. Non class class (%i) found in sequence %s at '
+                    'timestep %i.' % (np.max(tracker_classes), raw_data['seq'], t))
+
+            # Match tracker and gt dets (with hungarian algorithm) and remove tracker dets which match with gt dets
+            # which are labeled as belonging to a distractor class.
+            to_remove_tracker = np.array([], int)
+            if self.do_preproc and self.benchmark != 'MOT15' and gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:
+
+                # Check all classes are valid:
+                invalid_classes = np.setdiff1d(np.unique(gt_classes), self.valid_class_numbers)
+                if len(invalid_classes) > 0:
+                    print(' '.join([str(x) for x in invalid_classes]))
+                    raise(TrackEvalException('Attempting to evaluate using invalid gt classes. '
+                                             'This warning only triggers if preprocessing is performed, '
+                                             'e.g. not for MOT15 or where preprocessing is explicitly disabled. '
+                                             'Please either check your gt data, or disable preprocessing. '
+                                             'The following invalid classes were found in timestep ' + str(t) + ': ' +
+                                             ' '.join([str(x) for x in invalid_classes])))
+
+                matching_scores = similarity_scores.copy()
+                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
+                match_rows, match_cols = linear_sum_assignment(-matching_scores)
+                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps
+                match_rows = match_rows[actually_matched_mask]
+                match_cols = match_cols[actually_matched_mask]
+
+                is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)
+                to_remove_tracker = match_cols[is_distractor_class]
+
+            # Apply preprocessing to remove all unwanted tracker dets.
+            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)
+            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)
+            data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)
+            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)
+
+            # Remove gt detections marked as to remove (zero marked), and also remove gt detections not in class
+            # class (not applicable for MOT15)
+            if self.do_preproc and self.benchmark != 'MOT15':
+                gt_to_keep_mask = (np.not_equal(gt_zero_marked, 0)) & \
+                                  (np.equal(gt_classes, cls_id))
+            else:
+                # There are no classes for MOT15
+                gt_to_keep_mask = np.not_equal(gt_zero_marked, 0)
+            data['gt_ids'][t] = gt_ids[gt_to_keep_mask]
+            data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]
+            data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]
+
+            unique_gt_ids += list(np.unique(data['gt_ids'][t]))
+            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))
+            num_tracker_dets += len(data['tracker_ids'][t])
+            num_gt_dets += len(data['gt_ids'][t])
+
+        # Re-label IDs such that there are no empty IDs
+        if len(unique_gt_ids) > 0:
+            unique_gt_ids = np.unique(unique_gt_ids)
+            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))
+            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))
+            for t in range(raw_data['num_timesteps']):
+                if len(data['gt_ids'][t]) > 0:
+                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)
+        if len(unique_tracker_ids) > 0:
+            unique_tracker_ids = np.unique(unique_tracker_ids)
+            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))
+            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))
+            for t in range(raw_data['num_timesteps']):
+                if len(data['tracker_ids'][t]) > 0:
+                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)
+
+        # Record overview statistics.
+        data['num_tracker_dets'] = num_tracker_dets
+        data['num_gt_dets'] = num_gt_dets
+        data['num_tracker_ids'] = len(unique_tracker_ids)
+        data['num_gt_ids'] = len(unique_gt_ids)
+        data['num_timesteps'] = raw_data['num_timesteps']
+        data['seq'] = raw_data['seq']
+
+        # Ensure again that ids are unique per timestep after preproc.
+        self._check_unique_ids(data, after_preproc=True)
+        return data
+
+    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
+        similarity_scores = self._calculate_3DBBox_ious(gt_dets_t, tracker_dets_t)
+        return similarity_scores
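Note: `_calculate_3DBBox_ious` is defined in `_base_dataset.py` and ultimately relies on `pytorch3d.ops.box3d_overlap`, which consumes boxes as `(N, 8, 3)` corner tensors. The 9 values stored per detection (assumed here to be 3D center, size, and rotation) must therefore be converted to corners first. A hedged sketch with a hypothetical `boxes_to_corners` helper that handles only a yaw rotation:

```python
import torch
from pytorch3d.ops import box3d_overlap

def boxes_to_corners(centers, sizes, yaws):
    """(N, 3) centers, (N, 3) sizes and (N,) yaws -> (N, 8, 3) corners,
    in the vertex order pytorch3d documents for box3d_overlap."""
    unit = torch.tensor([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.],
                         [0., 0., 1.], [1., 0., 1.], [1., 1., 1.], [0., 1., 1.]]) - 0.5
    corners = unit[None, :, :] * sizes[:, None, :]          # scale each box
    cos, sin = torch.cos(yaws), torch.sin(yaws)
    zero, one = torch.zeros_like(cos), torch.ones_like(cos)
    rot = torch.stack([cos, -sin, zero,
                       sin, cos, zero,
                       zero, zero, one], dim=-1).reshape(-1, 3, 3)
    corners = torch.einsum('nij,nkj->nki', rot, corners)    # rotate about z
    return corners + centers[:, None, :]                    # translate

# Two unit cubes offset by 0.5 along x: intersection 0.5, union 1.5, IoU 1/3.
corners = boxes_to_corners(torch.tensor([[0., 0., 0.], [0.5, 0., 0.]]),
                           torch.ones(2, 3), torch.zeros(2))
inter_vol, iou = box3d_overlap(corners[:1], corners[1:])
print(iou)  # tensor([[0.3333]])
```

Unlike the location-based variant, the similarity here is a genuine volumetric IoU, so `zero_distance` plays no role even though the constructor still accepts `zd`.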
MTMC_Tracking_2025/eval/utils/trackeval/datasets/mtmc_challenge_3d_location.py ADDED
@@ -0,0 +1,473 @@
1
+ import os
2
+ import csv
3
+ import configparser
4
+ import numpy as np
5
+ from scipy.optimize import linear_sum_assignment
6
+ from utils.trackeval import utils
7
+ from utils.trackeval import _timing
8
+ from utils.trackeval.utils import TrackEvalException
9
+ from utils.trackeval.datasets._base_dataset import _BaseDataset
10
+
11
+
12
+ class MTMCChallenge3DLocation(_BaseDataset):
13
+ """
14
+ Dataset class for MOT Challenge 3D tracking
15
+ :param dict config: configuration for the app
16
+ ::
17
+
18
+ default_dataset = trackeeval.datasets.MTMCChallenge3DLocation(config)
19
+ """
20
+ @staticmethod
21
+ def get_default_dataset_config():
22
+ """Default class config values"""
23
+ code_path = utils.get_code_path()
24
+ default_config = {
25
+ 'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'), # Location of GT data
26
+ 'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'), # Trackers location
27
+ 'OUTPUT_FOLDER': None, # Where to save eval results (if None, same as TRACKERS_FOLDER)
28
+ 'TRACKERS_TO_EVAL': None, # Filenames of trackers to eval (if None, all in folder)
29
+ 'CLASSES_TO_EVAL': ['class'], # Valid: ['class']
30
+ 'BENCHMARK': 'MOT17', # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
31
+ 'SPLIT_TO_EVAL': 'train', # Valid: 'train', 'test', 'all'
32
+ 'INPUT_AS_ZIP': False, # Whether tracker input files are zipped
33
+ 'PRINT_CONFIG': True, # Whether to print current config
34
+ 'DO_PREPROC': True, # Whether to perform preprocessing (never done for MOT15)
35
+ 'TRACKER_SUB_FOLDER': 'data', # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
36
+ 'OUTPUT_SUB_FOLDER': '', # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
37
+ 'TRACKER_DISPLAY_NAMES': None, # Names of trackers to display, if None: TRACKERS_TO_EVAL
38
+ 'SEQMAP_FOLDER': None, # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
39
+ 'SEQMAP_FILE': None, # Directly specify seqmap file (if none use seqmap_folder/benchmark-split_to_eval)
40
+ 'SEQ_INFO': None, # If not None, directly specify sequences to eval and their number of timesteps
41
+ 'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt', # '{gt_folder}/{seq}/gt/gt.txt'
42
+ 'SKIP_SPLIT_FOL': False, # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in
43
+ # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/
44
+ # If True, then the middle 'benchmark-split' folder is skipped for both.
45
+ }
46
+ return default_config
47
+
48
+ def __init__(self, config=None, zd=2.0):
49
+ """Initialise dataset, checking that all required files are present"""
50
+ super().__init__()
51
+ # Fill non-given config values with defaults
52
+ self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())
53
+ self.zero_distance = zd
54
+ self.benchmark = self.config['BENCHMARK']
55
+ gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
56
+ self.gt_set = gt_set
57
+ if not self.config['SKIP_SPLIT_FOL']:
58
+ split_fol = gt_set
59
+ else:
60
+ split_fol = ''
61
+ self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)
62
+ self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)
63
+ self.should_classes_combine = False
64
+ self.use_super_categories = False
65
+ self.data_is_zipped = self.config['INPUT_AS_ZIP']
66
+ self.do_preproc = self.config['DO_PREPROC']
67
+
68
+ self.output_fol = self.config['OUTPUT_FOLDER']
69
+ if self.output_fol is None:
70
+ self.output_fol = self.tracker_fol
71
+
72
+ self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']
73
+ self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']
74
+
75
+ # Get classes to eval
76
+ self.valid_classes = ['class']
77
+ self.class_list = [cls.lower() if cls.lower() in self.valid_classes else None
78
+ for cls in self.config['CLASSES_TO_EVAL']]
79
+ if not all(self.class_list):
80
+ raise TrackEvalException('Attempted to evaluate an invalid class. Only class class is valid.')
81
+ self.class_name_to_class_id = {'class': 1, 'box': 2, 'car': 3, 'bicycle': 4, 'motorbike': 5,
82
+ 'non_mot_vehicle': 6, 'static_person': 7, 'distractor': 8, 'occluder': 9,
83
+ 'occluder_on_ground': 10, 'occluder_full': 11, 'reflection': 12, 'crowd': 13}
84
+ self.valid_class_numbers = list(self.class_name_to_class_id.values())
85
+
86
+ # Get sequences to eval and check gt files exist
87
+ self.seq_list, self.seq_lengths = self._get_seq_info()
88
+ if len(self.seq_list) < 1:
89
+ raise TrackEvalException('No sequences are selected to be evaluated.')
90
+
91
+ # Check gt files exist
92
+ for seq in self.seq_list:
93
+ if not self.data_is_zipped:
94
+ curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
95
+ if not os.path.isfile(curr_file):
96
+ print('GT file not found ' + curr_file)
97
+ raise TrackEvalException('GT file not found for sequence: ' + seq)
98
+ if self.data_is_zipped:
99
+ curr_file = os.path.join(self.gt_fol, 'data.zip')
100
+ if not os.path.isfile(curr_file):
101
+ print('GT file not found ' + curr_file)
102
+ raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))
103
+
104
+ # Get trackers to eval
105
+ if self.config['TRACKERS_TO_EVAL'] is None:
106
+ self.tracker_list = os.listdir(self.tracker_fol)
107
+ else:
108
+ self.tracker_list = self.config['TRACKERS_TO_EVAL']
109
+
110
+ if self.config['TRACKER_DISPLAY_NAMES'] is None:
111
+ self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))
112
+ elif (self.config['TRACKERS_TO_EVAL'] is not None) and (
113
+ len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):
114
+ self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))
115
+ else:
116
+ raise TrackEvalException('List of tracker files and tracker display names do not match.')
117
+
118
+ for tracker in self.tracker_list:
119
+ if self.data_is_zipped:
120
+ curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
121
+ if not os.path.isfile(curr_file):
122
+ print('Tracker file not found: ' + curr_file)
123
+ raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))
124
+ else:
125
+ for seq in self.seq_list:
126
+ curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
127
+ if not os.path.isfile(curr_file):
128
+ print('Tracker file not found: ' + curr_file)
129
+ raise TrackEvalException(
130
+ 'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(
131
+ curr_file))
132
+
133
+ def get_display_name(self, tracker):
134
+ """
135
+ Gets the display name of the tracker
136
+
137
+ :param str tracker: Class of tracker
138
+ :return: str
139
+ ::
140
+
141
+ dataset.get_display_name(tracker)
142
+ """
143
+
144
+ return self.tracker_to_disp[tracker]
145
+
146
+ def _get_seq_info(self):
147
+ seq_list = []
148
+ seq_lengths = {}
149
+ if self.config["SEQ_INFO"]:
150
+ seq_list = list(self.config["SEQ_INFO"].keys())
151
+ seq_lengths = self.config["SEQ_INFO"]
152
+
153
+ # If sequence length is 'None' tries to read sequence length from .ini files.
154
+ for seq, seq_length in seq_lengths.items():
155
+ if seq_length is None:
156
+ ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
157
+ if not os.path.isfile(ini_file):
158
+ raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
159
+ ini_data = configparser.ConfigParser()
160
+ ini_data.read(ini_file)
161
+ seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
162
+
163
+ else:
164
+ if self.config["SEQMAP_FILE"]:
165
+ seqmap_file = self.config["SEQMAP_FILE"]
166
+ else:
167
+ if self.config["SEQMAP_FOLDER"] is None:
168
+ seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')
169
+ else:
170
+ seqmap_file = os.path.join(self.config["SEQMAP_FOLDER"], self.gt_set + '.txt')
171
+ if not os.path.isfile(seqmap_file):
172
+ print('no seqmap found: ' + seqmap_file)
173
+ raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))
174
+ with open(seqmap_file) as fp:
175
+ reader = csv.reader(fp)
176
+ for i, row in enumerate(reader):
177
+ if i == 0 or row[0] == '':
178
+ continue
179
+ seq = row[0]
180
+ seq_list.append(seq)
181
+ ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
182
+ if not os.path.isfile(ini_file):
183
+ raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
184
+ ini_data = configparser.ConfigParser()
185
+ ini_data.read(ini_file)
186
+ seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
187
+ return seq_list, seq_lengths
188
+
189
+ def _load_raw_file(self, tracker, seq, is_gt):
190
+ """Load a file (gt or tracker) in the MOT Challenge 3D location format
191
+
192
+ If is_gt, this returns a dict which contains the fields:
193
+ [gt_ids, gt_classes] : list (for each timestep) of 1D NDArrays (for each det).
194
+ [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.
195
+ [gt_extras] : list (for each timestep) of dicts (for each extra) of 1D NDArrays (for each det).
196
+
197
+ if not is_gt, this returns a dict which contains the fields:
198
+ [tracker_ids, tracker_classes, tracker_confidences] : list (for each timestep) of 1D NDArrays (for each det).
199
+ [tracker_dets]: list (for each timestep) of lists of detections.
200
+
201
+ :param str tracker: Name of the tracker.
202
+ :param str seq: Sequence identifier.
203
+ :param bool is_gt: Indicates whether the file is ground truth or from a tracker.
204
+ :raises TrackEvalException: If there's an error loading the file or if the data is corrupted.
205
+ :return: dictionary containing the loaded data.
206
+ :rtype: dict
207
+ """
208
+ # File location
209
+ if self.data_is_zipped:
210
+ if is_gt:
211
+ zip_file = os.path.join(self.gt_fol, 'data.zip')
212
+ else:
213
+ zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
214
+ file = seq + '.txt'
215
+ else:
216
+ zip_file = None
217
+ if is_gt:
218
+ file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
219
+ else:
220
+ file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
221
+
222
+ # Load raw data from text file
223
+ read_data, ignore_data = self._load_simple_text_file(file, is_zipped=self.data_is_zipped, zip_file=zip_file)
224
+
225
+ # Convert data to required format
226
+ num_timesteps = self.seq_lengths[seq]
227
+ data_keys = ['ids', 'classes', 'dets']
228
+ if is_gt:
229
+ data_keys += ['gt_crowd_ignore_regions', 'gt_extras']
230
+ else:
231
+ data_keys += ['tracker_confidences']
232
+ raw_data = {key: [None] * num_timesteps for key in data_keys}
233
+
234
+ # Check for any extra time keys
235
+ current_time_keys = [str( t+ 1) for t in range(num_timesteps)]
236
+ extra_time_keys = [x for x in read_data.keys() if x not in current_time_keys]
237
+ if len(extra_time_keys) > 0:
238
+ if is_gt:
239
+ text = 'Ground-truth'
240
+ else:
241
+ text = 'Tracking'
242
+ raise TrackEvalException(
243
+ text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(
244
+ [str(x) + ', ' for x in extra_time_keys]))
245
+
246
+ for t in range(num_timesteps):
247
+ time_key = str(t+1)
248
+ if time_key in read_data.keys():
249
+ try:
250
+ time_data = np.asarray(read_data[time_key], dtype=float)
251
+ except ValueError:
252
+ if is_gt:
253
+ raise TrackEvalException(
254
+ 'Cannot convert gt data for sequence %s to float. Is data corrupted?' % seq)
255
+ else:
256
+ raise TrackEvalException(
257
+ 'Cannot convert tracking data from tracker %s, sequence %s to float. Is data corrupted?' % (
258
+ tracker, seq))
259
+ try:
260
+ if is_gt:
261
+ raw_data['dets'][t] = np.atleast_2d(time_data[:, 3:6])
262
+ else:
263
+ raw_data['dets'][t] = np.atleast_2d(time_data[:, 3:6])
264
+ raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)
265
+ except IndexError:
266
+ if is_gt:
267
+ err = 'Cannot load gt data from sequence %s, because there is not enough ' \
268
+ 'columns in the data.' % seq
269
+ raise TrackEvalException(err)
270
+ else:
271
+ err = 'Cannot load tracker data from tracker %s, sequence %s, because there is not enough ' \
272
+ 'columns in the data.' % (tracker, seq)
273
+ raise TrackEvalException(err)
274
+ if time_data.shape[1] >= 8:
275
+ raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
276
+ # raw_data['classes'][t] = np.atleast_1d(time_data[:, 7]).astype(int)
277
+ else:
278
+ if not is_gt:
279
+ raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
280
+ else:
281
+ raise TrackEvalException(
282
+ 'GT data is not in a valid format, there is not enough rows in seq %s, timestep %i.' % (
283
+ seq, t))
284
+ if is_gt:
285
+ gt_extras_dict = {'zero_marked': np.ones_like(time_data[:, 1], dtype=int)}
286
+ raw_data['gt_extras'][t] = gt_extras_dict
287
+ else:
288
+ raw_data['tracker_confidences'][t] = np.ones_like(time_data[:, 1])
289
+ else:
290
+ raw_data['dets'][t] = np.empty((0, 3))
291
+ raw_data['ids'][t] = np.empty(0).astype(int)
292
+ raw_data['classes'][t] = np.empty(0).astype(int)
293
+ if is_gt:
294
+ gt_extras_dict = {'zero_marked': np.empty(0)}
295
+ raw_data['gt_extras'][t] = gt_extras_dict
296
+ else:
297
+ raw_data['tracker_confidences'][t] = np.empty(0)
298
+ if is_gt:
299
+ raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 3))
300
+
301
+ if is_gt:
302
+ key_map = {'ids': 'gt_ids',
303
+ 'classes': 'gt_classes',
304
+ 'dets': 'gt_dets'}
305
+ else:
306
+ key_map = {'ids': 'tracker_ids',
307
+ 'classes': 'tracker_classes',
308
+ 'dets': 'tracker_dets'}
309
+ for k, v in key_map.items():
310
+ raw_data[v] = raw_data.pop(k)
311
+ raw_data['num_timesteps'] = num_timesteps
312
+ raw_data['seq'] = seq
313
+ return raw_data
314
+
315
+ @_timing.time
316
+ def get_preprocessed_seq_data(self, raw_data, cls):
317
+ """ Preprocess data for a single sequence for a single class ready for evaluation.
318
+ Inputs:
319
+ - raw_data is a dict containing the data for the sequence already read in by get_raw_seq_data().
320
+ - cls is the class to be evaluated.
321
+ Outputs:
322
+ - data is a dict containing all of the information that metrics need to perform evaluation.
323
+ It contains the following fields:
324
+ [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets] : integers.
325
+ [gt_ids, tracker_ids, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).
326
+ [gt_dets, tracker_dets]: list (for each timestep) of lists of detections.
327
+ [similarity_scores]: list (for each timestep) of 2D NDArrays.
328
+ Notes:
329
+ General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.
330
+ 1) Extract only detections relevant for the class to be evaluated (including distractor detections).
331
+ 2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a
332
+ distractor class, or otherwise marked as to be removed.
333
+ 3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet a certain
334
+ other criteria (e.g. are too small).
335
+ 4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.
336
+ After the above preprocessing steps, this function also calculates the number of gt and tracker detections
337
+ and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are
338
+ unique within each timestep.
339
+
340
+ MOT Challenge:
341
+ In MOT Challenge, the 4 preproc steps are as follow:
342
+ 1) There is only one class (class) to be evaluated, but all other classes are used for preproc.
343
+ 2) Predictions are matched against all gt boxes (regardless of class), those matching with distractor
344
+ objects are removed.
345
+ 3) There is no crowd ignore regions.
346
+ 4) All gt dets except class are removed, also removes class gt dets marked with zero_marked.
347
+
348
+        :param raw_data: A dict containing the data for the sequence already read in by `get_raw_seq_data()`.
+        :param cls: The class to be evaluated.
+
+        :return: A dict containing all of the information that metrics need to perform evaluation.
+            It contains the following fields:
+            - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets]: Integers.
+            - [gt_ids, tracker_ids, tracker_confidences]: List (for each timestep) of 1D NDArrays (for each detection).
+            - [gt_dets, tracker_dets]: List (for each timestep) of lists of detections.
+            - [similarity_scores]: List (for each timestep) of 2D NDArrays.
+        """
+        # Check that input data has unique ids
+        self._check_unique_ids(raw_data)
+
+        distractor_class_names = ['box', 'static_person', 'distractor', 'reflection']
+        if self.benchmark == 'MOT20':
+            distractor_class_names.append('non_mot_vehicle')
+        distractor_classes = [self.class_name_to_class_id[x] for x in distractor_class_names]
+        cls_id = self.class_name_to_class_id[cls]
+
+        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']
+        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}
+        unique_gt_ids = []
+        unique_tracker_ids = []
+        num_gt_dets = 0
+        num_tracker_dets = 0
+        for t in range(raw_data['num_timesteps']):
+
+            # Get all data
+            gt_ids = raw_data['gt_ids'][t]
+            gt_dets = raw_data['gt_dets'][t]
+            gt_classes = raw_data['gt_classes'][t]
+            gt_zero_marked = raw_data['gt_extras'][t]['zero_marked']
+
+            tracker_ids = raw_data['tracker_ids'][t]
+            tracker_dets = raw_data['tracker_dets'][t]
+            tracker_classes = raw_data['tracker_classes'][t]
+            tracker_confidences = raw_data['tracker_confidences'][t]
+            similarity_scores = raw_data['similarity_scores'][t]
+
+            # Evaluation is ONLY valid for the Person class
+            if len(tracker_classes) > 0 and np.max(tracker_classes) > 1:
+                raise TrackEvalException(
+                    'Evaluation is only valid for the Person class. Non-Person class (%i) found in sequence %s at '
+                    'timestep %i.' % (np.max(tracker_classes), raw_data['seq'], t))
+
+            # Match tracker and gt dets (with the Hungarian algorithm) and remove tracker dets which match with gt dets
+            # which are labeled as belonging to a distractor class.
+            to_remove_tracker = np.array([], int)
+            if self.do_preproc and self.benchmark != 'MOT15' and gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:
+
+                # Check all classes are valid:
+                invalid_classes = np.setdiff1d(np.unique(gt_classes), self.valid_class_numbers)
+                if len(invalid_classes) > 0:
+                    print(' '.join([str(x) for x in invalid_classes]))
+                    raise TrackEvalException('Attempting to evaluate using invalid gt classes. '
+                                             'This warning only triggers if preprocessing is performed, '
+                                             'e.g. not for MOT15 or where preprocessing is explicitly disabled. '
+                                             'Please either check your gt data, or disable preprocessing. '
+                                             'The following invalid classes were found in timestep ' + str(t) + ': ' +
+                                             ' '.join([str(x) for x in invalid_classes]))
+
+                matching_scores = similarity_scores.copy()
+                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
+                match_rows, match_cols = linear_sum_assignment(-matching_scores)
+                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps
+                match_rows = match_rows[actually_matched_mask]
+                match_cols = match_cols[actually_matched_mask]
+
+                is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)
+                to_remove_tracker = match_cols[is_distractor_class]
+
+            # Apply preprocessing to remove all unwanted tracker dets.
+            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)
+            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)
+            data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)
+            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)
+
+            # Remove gt detections marked as to remove (zero marked), and also remove gt detections not in the Person
+            # class (not applicable for MOT15)
+            if self.do_preproc and self.benchmark != 'MOT15':
+                gt_to_keep_mask = (np.not_equal(gt_zero_marked, 0)) & \
+                                  (np.equal(gt_classes, cls_id))
+            else:
+                # There are no classes for MOT15
+                gt_to_keep_mask = np.not_equal(gt_zero_marked, 0)
+            data['gt_ids'][t] = gt_ids[gt_to_keep_mask]
+            data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]
+            data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]
+
+            unique_gt_ids += list(np.unique(data['gt_ids'][t]))
+            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))
+            num_tracker_dets += len(data['tracker_ids'][t])
+            num_gt_dets += len(data['gt_ids'][t])
+
+        # Re-label IDs such that there are no empty IDs
+        if len(unique_gt_ids) > 0:
+            unique_gt_ids = np.unique(unique_gt_ids)
+            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))
+            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))
+            for t in range(raw_data['num_timesteps']):
+                if len(data['gt_ids'][t]) > 0:
+                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)
+        if len(unique_tracker_ids) > 0:
+            unique_tracker_ids = np.unique(unique_tracker_ids)
+            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))
+            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))
+            for t in range(raw_data['num_timesteps']):
+                if len(data['tracker_ids'][t]) > 0:
+                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)
+
+        # Record overview statistics.
+        data['num_tracker_dets'] = num_tracker_dets
+        data['num_gt_dets'] = num_gt_dets
+        data['num_tracker_ids'] = len(unique_tracker_ids)
+        data['num_gt_ids'] = len(unique_gt_ids)
+        data['num_timesteps'] = raw_data['num_timesteps']
+        data['seq'] = raw_data['seq']
+
+        # Ensure again that ids are unique per timestep after preproc.
+        self._check_unique_ids(data, after_preproc=True)
+        return data
+
+    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
+        similarity_scores = self._calculate_euclidean_similarity(gt_dets_t, tracker_dets_t, zero_distance=self.zero_distance)
+        return similarity_scores
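`_calculate_euclidean_similarity` is inherited from `_BaseDataset` (added in this commit but not shown in this hunk). As a rough sketch of the convention it is expected to follow, pairwise distances are mapped linearly into [0, 1], with `zero_distance` as the distance at which similarity reaches zero; the helper below is illustrative, not the committed implementation:

```python
import numpy as np

def euclidean_similarity(dets1, dets2, zero_distance=2.0):
    # dets1: (M, D) gt locations; dets2: (N, D) tracker locations.
    # A pair at distance 0 scores 1.0; pairs at or beyond zero_distance score 0.0.
    dists = np.linalg.norm(dets1[:, np.newaxis] - dets2[np.newaxis, :], axis=2)
    return np.maximum(0.0, 1.0 - dists / zero_distance)
```

Under this convention, the 0.5 cut-off used in `get_preprocessed_seq_data` corresponds to a matching radius of half of `zero_distance` in world units.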
MTMC_Tracking_2025/eval/utils/trackeval/datasets/test_mot.py ADDED
@@ -0,0 +1,475 @@
+import os
+import csv
+import configparser
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+from utils.trackeval import utils
+from utils.trackeval import _timing
+from utils.trackeval.utils import TrackEvalException
+from utils.trackeval.datasets._base_dataset import _BaseDataset
+
+
+class MotChallenge2DLocation(_BaseDataset):
+    """
+    Dataset class for MOT Challenge 2D location tracking
+
+    :param dict config: configuration for the app
+    ::
+
+        default_dataset = trackeval.datasets.MotChallenge2DLocation(config)
+    """
+    @staticmethod
+    def get_default_dataset_config():
+        """Default class config values"""
+        code_path = utils.get_code_path()
+        default_config = {
+            'GT_FOLDER': os.path.join(code_path, 'data/gt/mot_challenge/'),  # Location of GT data
+            'TRACKERS_FOLDER': os.path.join(code_path, 'data/trackers/mot_challenge/'),  # Trackers location
+            'OUTPUT_FOLDER': None,  # Where to save eval results (if None, same as TRACKERS_FOLDER)
+            'TRACKERS_TO_EVAL': None,  # Filenames of trackers to eval (if None, all in folder)
+            'CLASSES_TO_EVAL': ['Person'],  # Valid: ['Person']
+            'BENCHMARK': 'MOT17',  # Valid: 'MOT17', 'MOT16', 'MOT20', 'MOT15'
+            'SPLIT_TO_EVAL': 'train',  # Valid: 'train', 'test', 'all'
+            'INPUT_AS_ZIP': False,  # Whether tracker input files are zipped
+            'PRINT_CONFIG': True,  # Whether to print current config
+            'DO_PREPROC': True,  # Whether to perform preprocessing (never done for MOT15)
+            'TRACKER_SUB_FOLDER': 'data',  # Tracker files are in TRACKER_FOLDER/tracker_name/TRACKER_SUB_FOLDER
+            'OUTPUT_SUB_FOLDER': '',  # Output files are saved in OUTPUT_FOLDER/tracker_name/OUTPUT_SUB_FOLDER
+            'TRACKER_DISPLAY_NAMES': None,  # Names of trackers to display, if None: TRACKERS_TO_EVAL
+            'SEQMAP_FOLDER': None,  # Where seqmaps are found (if None, GT_FOLDER/seqmaps)
+            'SEQMAP_FILE': None,  # Directly specify seqmap file (if None, use seqmap_folder/benchmark-split_to_eval)
+            'SEQ_INFO': None,  # If not None, directly specify sequences to eval and their number of timesteps
+            'GT_LOC_FORMAT': '{gt_folder}/{seq}/gt/gt.txt',  # '{gt_folder}/{seq}/gt/gt.txt'
+            'SKIP_SPLIT_FOL': False,  # If False, data is in GT_FOLDER/BENCHMARK-SPLIT_TO_EVAL/ and in
+                                      # TRACKERS_FOLDER/BENCHMARK-SPLIT_TO_EVAL/tracker/
+                                      # If True, then the middle 'benchmark-split' folder is skipped for both.
+        }
+        return default_config
+
+    def __init__(self, config=None):
+        """Initialise dataset, checking that all required files are present"""
+        super().__init__()
+        # Fill non-given config values with defaults
+        self.config = utils.init_config(config, self.get_default_dataset_config(), self.get_name())
+
+        self.benchmark = self.config['BENCHMARK']
+        gt_set = self.config['BENCHMARK'] + '-' + self.config['SPLIT_TO_EVAL']
+        self.gt_set = gt_set
+        if not self.config['SKIP_SPLIT_FOL']:
+            split_fol = gt_set
+        else:
+            split_fol = ''
+        self.gt_fol = os.path.join(self.config['GT_FOLDER'], split_fol)
+        self.tracker_fol = os.path.join(self.config['TRACKERS_FOLDER'], split_fol)
+        self.should_classes_combine = False
+        self.use_super_categories = False
+        self.data_is_zipped = self.config['INPUT_AS_ZIP']
+        self.do_preproc = self.config['DO_PREPROC']
+
+        self.output_fol = self.config['OUTPUT_FOLDER']
+        if self.output_fol is None:
+            self.output_fol = self.tracker_fol
+
+        self.tracker_sub_fol = self.config['TRACKER_SUB_FOLDER']
+        self.output_sub_fol = self.config['OUTPUT_SUB_FOLDER']
+
+        # Get classes to eval
+        self.valid_classes = ['Person']
+        self.class_list = [cls if cls in self.valid_classes else None
+                           for cls in self.config['CLASSES_TO_EVAL']]
+        if not all(self.class_list):
+            raise TrackEvalException('Attempted to evaluate an invalid class. Only Person class is valid.')
+        self.class_name_to_class_id = {'Person': 1, 'box': 2, 'car': 3, 'bicycle': 4, 'motorbike': 5,
+                                       'non_mot_vehicle': 6, 'static_person': 7, 'distractor': 8, 'occluder': 9,
+                                       'occluder_on_ground': 10, 'occluder_full': 11, 'reflection': 12, 'crowd': 13}
+        self.valid_class_numbers = list(self.class_name_to_class_id.values())
+
+        # Get sequences to eval and check gt files exist
+        self.seq_list, self.seq_lengths = self._get_seq_info()
+        if len(self.seq_list) < 1:
+            raise TrackEvalException('No sequences are selected to be evaluated.')
+
+        # Check gt files exist
+        for seq in self.seq_list:
+            if not self.data_is_zipped:
+                curr_file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
+                if not os.path.isfile(curr_file):
+                    print('GT file not found ' + curr_file)
+                    raise TrackEvalException('GT file not found for sequence: ' + seq)
+        if self.data_is_zipped:
+            curr_file = os.path.join(self.gt_fol, 'data.zip')
+            if not os.path.isfile(curr_file):
+                print('GT file not found ' + curr_file)
+                raise TrackEvalException('GT file not found: ' + os.path.basename(curr_file))
+
+        # Get trackers to eval
+        if self.config['TRACKERS_TO_EVAL'] is None:
+            self.tracker_list = os.listdir(self.tracker_fol)
+        else:
+            self.tracker_list = self.config['TRACKERS_TO_EVAL']
+
+        if self.config['TRACKER_DISPLAY_NAMES'] is None:
+            self.tracker_to_disp = dict(zip(self.tracker_list, self.tracker_list))
+        elif (self.config['TRACKERS_TO_EVAL'] is not None) and (
+                len(self.config['TRACKER_DISPLAY_NAMES']) == len(self.tracker_list)):
+            self.tracker_to_disp = dict(zip(self.tracker_list, self.config['TRACKER_DISPLAY_NAMES']))
+        else:
+            raise TrackEvalException('List of tracker files and tracker display names do not match.')
+
+        for tracker in self.tracker_list:
+            if self.data_is_zipped:
+                curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
+                if not os.path.isfile(curr_file):
+                    print('Tracker file not found: ' + curr_file)
+                    raise TrackEvalException('Tracker file not found: ' + tracker + '/' + os.path.basename(curr_file))
+            else:
+                for seq in self.seq_list:
+                    curr_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
+                    if not os.path.isfile(curr_file):
+                        print('Tracker file not found: ' + curr_file)
+                        raise TrackEvalException(
+                            'Tracker file not found: ' + tracker + '/' + self.tracker_sub_fol + '/' + os.path.basename(
+                                curr_file))
+
+    def get_display_name(self, tracker):
+        """
+        Gets the display name of the tracker
+
+        :param str tracker: name of the tracker
+        :return: str
+        ::
+
+            dataset.get_display_name(tracker)
+        """
+        return self.tracker_to_disp[tracker]
+
+    def _get_seq_info(self):
+        seq_list = []
+        seq_lengths = {}
+        if self.config["SEQ_INFO"]:
+            seq_list = list(self.config["SEQ_INFO"].keys())
+            seq_lengths = self.config["SEQ_INFO"]
+
+            # If sequence length is 'None', try to read the sequence length from .ini files.
+            for seq, seq_length in seq_lengths.items():
+                if seq_length is None:
+                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
+                    if not os.path.isfile(ini_file):
+                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
+                    ini_data = configparser.ConfigParser()
+                    ini_data.read(ini_file)
+                    seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
+
+        else:
+            if self.config["SEQMAP_FILE"]:
+                seqmap_file = self.config["SEQMAP_FILE"]
+            else:
+                if self.config["SEQMAP_FOLDER"] is None:
+                    seqmap_file = os.path.join(self.config['GT_FOLDER'], 'seqmaps', self.gt_set + '.txt')
+                else:
+                    seqmap_file = os.path.join(self.config["SEQMAP_FOLDER"], self.gt_set + '.txt')
+            if not os.path.isfile(seqmap_file):
+                print('no seqmap found: ' + seqmap_file)
+                raise TrackEvalException('no seqmap found: ' + os.path.basename(seqmap_file))
+            with open(seqmap_file) as fp:
+                reader = csv.reader(fp)
+                for i, row in enumerate(reader):
+                    if i == 0 or row[0] == '':
+                        continue
+                    seq = row[0]
+                    seq_list.append(seq)
+                    ini_file = os.path.join(self.gt_fol, seq, 'seqinfo.ini')
+                    if not os.path.isfile(ini_file):
+                        raise TrackEvalException('ini file does not exist: ' + seq + '/' + os.path.basename(ini_file))
+                    ini_data = configparser.ConfigParser()
+                    ini_data.read(ini_file)
+                    seq_lengths[seq] = int(float(ini_data['Sequence']['seqLength']))
+        return seq_list, seq_lengths
+
+    def _load_raw_file(self, tracker, seq, is_gt):
+        """Load a file (gt or tracker) in the MOT Challenge text format
+
+        If is_gt, this returns a dict which contains the fields:
+            [gt_ids, gt_classes]: list (for each timestep) of 1D NDArrays (for each det).
+            [gt_dets, gt_crowd_ignore_regions]: list (for each timestep) of lists of detections.
+            [gt_extras]: list (for each timestep) of dicts (for each extra) of 1D NDArrays (for each det).
+
+        If not is_gt, this returns a dict which contains the fields:
+            [tracker_ids, tracker_classes, tracker_confidences]: list (for each timestep) of 1D NDArrays (for each det).
+            [tracker_dets]: list (for each timestep) of lists of detections.
+
+        :param str tracker: Name of the tracker.
+        :param str seq: Sequence identifier.
+        :param bool is_gt: Indicates whether the file is ground truth or from a tracker.
+        :raises TrackEvalException: If there is an error loading the file or if the data is corrupted.
+        :return: Dictionary containing the loaded data.
+        :rtype: dict
+        """
+        # File location
+        if self.data_is_zipped:
+            if is_gt:
+                zip_file = os.path.join(self.gt_fol, 'data.zip')
+            else:
+                zip_file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol + '.zip')
+            file = seq + '.txt'
+        else:
+            zip_file = None
+            if is_gt:
+                file = self.config["GT_LOC_FORMAT"].format(gt_folder=self.gt_fol, seq=seq)
+            else:
+                file = os.path.join(self.tracker_fol, tracker, self.tracker_sub_fol, seq + '.txt')
+
+        # Load raw data from text file
+        read_data, ignore_data = self._load_simple_text_file(file, is_zipped=self.data_is_zipped, zip_file=zip_file)
+
+        # Convert data to required format
+        num_timesteps = self.seq_lengths[seq]
+        data_keys = ['ids', 'classes', 'dets']
+        if is_gt:
+            data_keys += ['gt_crowd_ignore_regions', 'gt_extras']
+        else:
+            data_keys += ['tracker_confidences']
+        raw_data = {key: [None] * num_timesteps for key in data_keys}
+
+        # Check for any extra time keys
+        current_time_keys = [str(t + 1) for t in range(num_timesteps)]
+        extra_time_keys = [x for x in read_data.keys() if x not in current_time_keys]
+        if len(extra_time_keys) > 0:
+            if is_gt:
+                text = 'Ground-truth'
+            else:
+                text = 'Tracking'
+            raise TrackEvalException(
+                text + ' data contains the following invalid timesteps in seq %s: ' % seq + ', '.join(
+                    [str(x) for x in extra_time_keys]))
+
+        for t in range(num_timesteps):
+            time_key = str(t + 1)
+            if time_key in read_data.keys():
+                try:
+                    time_data = np.asarray(read_data[time_key], dtype=float)
+                except ValueError:
+                    if is_gt:
+                        raise TrackEvalException(
+                            'Cannot convert gt data for sequence %s to float. Is data corrupted?' % seq)
+                    else:
+                        raise TrackEvalException(
+                            'Cannot convert tracking data from tracker %s, sequence %s to float. Is data corrupted?' % (
+                                tracker, seq))
+                try:
+                    # Columns 7:9 hold the 2D location for both gt and tracker files.
+                    raw_data['dets'][t] = np.atleast_2d(time_data[:, 7:9])
+                    raw_data['ids'][t] = np.atleast_1d(time_data[:, 1]).astype(int)
+                except IndexError:
+                    if is_gt:
+                        err = 'Cannot load gt data from sequence %s, because there are not enough ' \
+                              'columns in the data.' % seq
+                        raise TrackEvalException(err)
+                    else:
+                        err = 'Cannot load tracker data from tracker %s, sequence %s, because there are not enough ' \
+                              'columns in the data.' % (tracker, seq)
+                        raise TrackEvalException(err)
+                if time_data.shape[1] >= 8:
+                    raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
+                    # raw_data['classes'][t] = np.atleast_1d(time_data[:, 7]).astype(int)
+                else:
+                    if not is_gt:
+                        raw_data['classes'][t] = np.ones_like(raw_data['ids'][t])
+                    else:
+                        raise TrackEvalException(
+                            'GT data is not in a valid format, there are not enough columns in seq %s, timestep %i.' % (
+                                seq, t))
+                if is_gt:
+                    gt_extras_dict = {'zero_marked': np.atleast_1d(time_data[:, 6].astype(int))}
+                    raw_data['gt_extras'][t] = gt_extras_dict
+                else:
+                    raw_data['tracker_confidences'][t] = np.atleast_1d(time_data[:, 6])
+            else:
+                raw_data['dets'][t] = np.empty((0, 4))
+                raw_data['ids'][t] = np.empty(0).astype(int)
+                raw_data['classes'][t] = np.empty(0).astype(int)
+                if is_gt:
+                    gt_extras_dict = {'zero_marked': np.empty(0)}
+                    raw_data['gt_extras'][t] = gt_extras_dict
+                else:
+                    raw_data['tracker_confidences'][t] = np.empty(0)
+            if is_gt:
+                raw_data['gt_crowd_ignore_regions'][t] = np.empty((0, 4))
+
+        if is_gt:
+            key_map = {'ids': 'gt_ids',
+                       'classes': 'gt_classes',
+                       'dets': 'gt_dets'}
+        else:
+            key_map = {'ids': 'tracker_ids',
+                       'classes': 'tracker_classes',
+                       'dets': 'tracker_dets'}
+        for k, v in key_map.items():
+            raw_data[v] = raw_data.pop(k)
+        raw_data['num_timesteps'] = num_timesteps
+        raw_data['seq'] = seq
+        return raw_data
+
+    @_timing.time
+    def get_preprocessed_seq_data(self, raw_data, cls):
+        """ Preprocess data for a single sequence for a single class ready for evaluation.
+
+        General preprocessing (preproc) occurs in 4 steps. Some datasets may not use all of these steps.
+            1) Extract only detections relevant for the class to be evaluated (including distractor detections).
+            2) Match gt dets and tracker dets. Remove tracker dets that are matched to a gt det that is of a
+               distractor class, or otherwise marked as to be removed.
+            3) Remove unmatched tracker dets if they fall within a crowd ignore region or don't meet certain
+               other criteria (e.g. are too small).
+            4) Remove gt dets that were only useful for preprocessing and not for actual evaluation.
+        After the above preprocessing steps, this function also calculates the number of gt and tracker detections
+        and unique track ids. It also relabels gt and tracker ids to be contiguous and checks that ids are
+        unique within each timestep.
+
+        In MOT Challenge, the 4 preproc steps are as follows:
+            1) There is only one class (Person) to be evaluated, but all other classes are used for preproc.
+            2) Predictions are matched against all gt boxes (regardless of class), and those matching with distractor
+               objects are removed.
+            3) There are no crowd ignore regions.
+            4) All gt dets except Person are removed, and Person gt dets marked with zero_marked are also removed.
+
+        :param raw_data: A dict containing the data for the sequence already read in by `get_raw_seq_data()`.
+        :param cls: The class to be evaluated.
+
+        :return: A dict containing all of the information that metrics need to perform evaluation.
+            It contains the following fields:
+            - [num_timesteps, num_gt_ids, num_tracker_ids, num_gt_dets, num_tracker_dets]: Integers.
+            - [gt_ids, tracker_ids, tracker_confidences]: List (for each timestep) of 1D NDArrays (for each detection).
+            - [gt_dets, tracker_dets]: List (for each timestep) of lists of detections.
+            - [similarity_scores]: List (for each timestep) of 2D NDArrays.
+        """
+        # Check that input data has unique ids
+        self._check_unique_ids(raw_data)
+
+        distractor_class_names = ['box', 'static_person', 'distractor', 'reflection']
+        if self.benchmark == 'MOT20':
+            distractor_class_names.append('non_mot_vehicle')
+        distractor_classes = [self.class_name_to_class_id[x] for x in distractor_class_names]
+        cls_id = self.class_name_to_class_id[cls]
+
+        data_keys = ['gt_ids', 'tracker_ids', 'gt_dets', 'tracker_dets', 'tracker_confidences', 'similarity_scores']
+        data = {key: [None] * raw_data['num_timesteps'] for key in data_keys}
+        unique_gt_ids = []
+        unique_tracker_ids = []
+        num_gt_dets = 0
+        num_tracker_dets = 0
+        for t in range(raw_data['num_timesteps']):
+
+            # Get all data
+            gt_ids = raw_data['gt_ids'][t]
+            gt_dets = raw_data['gt_dets'][t]
+            gt_classes = raw_data['gt_classes'][t]
+            gt_zero_marked = raw_data['gt_extras'][t]['zero_marked']
+
+            tracker_ids = raw_data['tracker_ids'][t]
+            tracker_dets = raw_data['tracker_dets'][t]
+            tracker_classes = raw_data['tracker_classes'][t]
+            tracker_confidences = raw_data['tracker_confidences'][t]
+            similarity_scores = raw_data['similarity_scores'][t]
+
+            # Evaluation is ONLY valid for the Person class
+            if len(tracker_classes) > 0 and np.max(tracker_classes) > 1:
+                raise TrackEvalException(
+                    'Evaluation is only valid for the Person class. Non-Person class (%i) found in sequence %s at '
+                    'timestep %i.' % (np.max(tracker_classes), raw_data['seq'], t))
+
+            # Match tracker and gt dets (with the Hungarian algorithm) and remove tracker dets which match with gt dets
+            # which are labeled as belonging to a distractor class.
+            to_remove_tracker = np.array([], int)
+            if self.do_preproc and self.benchmark != 'MOT15' and gt_ids.shape[0] > 0 and tracker_ids.shape[0] > 0:
+
+                # Check all classes are valid:
+                invalid_classes = np.setdiff1d(np.unique(gt_classes), self.valid_class_numbers)
+                if len(invalid_classes) > 0:
+                    print(' '.join([str(x) for x in invalid_classes]))
+                    raise TrackEvalException('Attempting to evaluate using invalid gt classes. '
+                                             'This warning only triggers if preprocessing is performed, '
+                                             'e.g. not for MOT15 or where preprocessing is explicitly disabled. '
+                                             'Please either check your gt data, or disable preprocessing. '
+                                             'The following invalid classes were found in timestep ' + str(t) + ': ' +
+                                             ' '.join([str(x) for x in invalid_classes]))
+
+                matching_scores = similarity_scores.copy()
+                matching_scores[matching_scores < 0.5 - np.finfo('float').eps] = 0
+                match_rows, match_cols = linear_sum_assignment(-matching_scores)
+                actually_matched_mask = matching_scores[match_rows, match_cols] > 0 + np.finfo('float').eps
+                match_rows = match_rows[actually_matched_mask]
+                match_cols = match_cols[actually_matched_mask]
+
+                is_distractor_class = np.isin(gt_classes[match_rows], distractor_classes)
+                to_remove_tracker = match_cols[is_distractor_class]
+
+            # Apply preprocessing to remove all unwanted tracker dets.
+            data['tracker_ids'][t] = np.delete(tracker_ids, to_remove_tracker, axis=0)
+            data['tracker_dets'][t] = np.delete(tracker_dets, to_remove_tracker, axis=0)
+            data['tracker_confidences'][t] = np.delete(tracker_confidences, to_remove_tracker, axis=0)
+            similarity_scores = np.delete(similarity_scores, to_remove_tracker, axis=1)
+
+            # Remove gt detections marked as to remove (zero marked), and also remove gt detections not in the Person
+            # class (not applicable for MOT15)
+            if self.do_preproc and self.benchmark != 'MOT15':
+                gt_to_keep_mask = (np.not_equal(gt_zero_marked, 0)) & \
+                                  (np.equal(gt_classes, cls_id))
+            else:
+                # There are no classes for MOT15
+                gt_to_keep_mask = np.not_equal(gt_zero_marked, 0)
+            data['gt_ids'][t] = gt_ids[gt_to_keep_mask]
+            data['gt_dets'][t] = gt_dets[gt_to_keep_mask, :]
+            data['similarity_scores'][t] = similarity_scores[gt_to_keep_mask]
+
+            unique_gt_ids += list(np.unique(data['gt_ids'][t]))
+            unique_tracker_ids += list(np.unique(data['tracker_ids'][t]))
+            num_tracker_dets += len(data['tracker_ids'][t])
+            num_gt_dets += len(data['gt_ids'][t])
+
+        # Re-label IDs such that there are no empty IDs
+        if len(unique_gt_ids) > 0:
+            unique_gt_ids = np.unique(unique_gt_ids)
+            gt_id_map = np.nan * np.ones((np.max(unique_gt_ids) + 1))
+            gt_id_map[unique_gt_ids] = np.arange(len(unique_gt_ids))
+            for t in range(raw_data['num_timesteps']):
+                if len(data['gt_ids'][t]) > 0:
+                    data['gt_ids'][t] = gt_id_map[data['gt_ids'][t]].astype(int)
+        if len(unique_tracker_ids) > 0:
+            unique_tracker_ids = np.unique(unique_tracker_ids)
+            tracker_id_map = np.nan * np.ones((np.max(unique_tracker_ids) + 1))
+            tracker_id_map[unique_tracker_ids] = np.arange(len(unique_tracker_ids))
+            for t in range(raw_data['num_timesteps']):
+                if len(data['tracker_ids'][t]) > 0:
+                    data['tracker_ids'][t] = tracker_id_map[data['tracker_ids'][t]].astype(int)
+
+        # Record overview statistics.
+        data['num_tracker_dets'] = num_tracker_dets
+        data['num_gt_dets'] = num_gt_dets
+        data['num_tracker_ids'] = len(unique_tracker_ids)
+        data['num_gt_ids'] = len(unique_gt_ids)
+        data['num_timesteps'] = raw_data['num_timesteps']
+        data['seq'] = raw_data['seq']
+
+        # Ensure again that ids are unique per timestep after preproc.
+        self._check_unique_ids(data, after_preproc=True)
+
+        return data
+
+    def _calculate_similarities(self, gt_dets_t, tracker_dets_t):
+        similarity_scores = self._calculate_euclidean_similarity(gt_dets_t, tracker_dets_t)
+        return similarity_scores
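For reference, a hypothetical way to instantiate the dataset above without seqmap files is to pass sequence lengths directly through `SEQ_INFO`. The folder layout and sequence name below are placeholders, not values from this commit, and the corresponding gt/tracker files must exist on disk or `__init__` will raise:

```python
from utils.trackeval.datasets.test_mot import MotChallenge2DLocation

# 'Warehouse_000' and the folders are illustrative placeholders; SEQ_INFO maps
# each sequence name to its frame count, skipping the seqinfo.ini lookups.
dataset = MotChallenge2DLocation({
    'GT_FOLDER': 'data/gt/mot_challenge/',
    'TRACKERS_FOLDER': 'data/trackers/mot_challenge/',
    'TRACKERS_TO_EVAL': ['my_tracker'],
    'SEQ_INFO': {'Warehouse_000': 9000},
    'SKIP_SPLIT_FOL': True,
})
```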
MTMC_Tracking_2025/eval/utils/trackeval/eval.py ADDED
@@ -0,0 +1,230 @@
+import time
+import logging
+import traceback
+from multiprocessing.pool import Pool
+from functools import partial
+import os
+from . import utils
+from .utils import TrackEvalException
+from . import _timing
+from .metrics import Count
+
+
+class Evaluator:
+    """
+    Evaluator class for evaluating different metrics for different datasets
+
+    :param dict config: configuration for the app
+    ::
+
+        evaluator = Evaluator(config)
+    """
+
+    @staticmethod
+    def get_default_eval_config():
+        """Returns the default config values for evaluation"""
+        code_path = utils.get_code_path()
+        default_config = {
+            'USE_PARALLEL': False,
+            'NUM_PARALLEL_CORES': 8,
+            'BREAK_ON_ERROR': True,  # Raises exception and exits with error
+            'RETURN_ON_ERROR': False,  # If not BREAK_ON_ERROR, then returns from function on error
+            'LOG_ON_ERROR': os.path.join(code_path, 'error_log.txt'),  # If not None, save any errors into a log file.
+
+            'PRINT_RESULTS': True,
+            'PRINT_ONLY_COMBINED': False,
+            'PRINT_CONFIG': True,
+            'TIME_PROGRESS': True,
+            'DISPLAY_LESS_PROGRESS': True,
+
+            'OUTPUT_SUMMARY': True,
+            'OUTPUT_EMPTY_CLASSES': True,  # If False, summary files are not output for classes with no detections
+            'OUTPUT_DETAILED': True,
+            'PLOT_CURVES': True,
+        }
+        return default_config
+
+    def __init__(self, config=None):
+        self.config = utils.init_config(config, self.get_default_eval_config(), 'Eval')
+        # Only run timing analysis if not run in parallel.
+        if self.config['TIME_PROGRESS'] and not self.config['USE_PARALLEL']:
+            _timing.DO_TIMING = True
+            if self.config['DISPLAY_LESS_PROGRESS']:
+                _timing.DISPLAY_LESS_PROGRESS = True
+
+    @_timing.time
+    def evaluate(self, dataset_list, metrics_list):
+        """
+        Evaluate a list of datasets with a list of metrics
+
+        :param list dataset_list: list of dataset objects to evaluate
+        :param list metrics_list: list of metric objects to compute
+
+        :return: dict output_res: results of the evaluation
+        :return: dict output_msg: status of the evaluation
+
+        ::
+
+            trackeval.eval.evaluate(dataset_list, metrics_list)
+        """
+        config = self.config
+        metrics_list = metrics_list + [Count()]  # Count metrics are always run
+        metric_names = utils.validate_metrics_list(metrics_list)
+        dataset_names = [dataset.get_name() for dataset in dataset_list]
+        output_res = {}
+        output_msg = {}
+
+        for dataset, dataset_name in zip(dataset_list, dataset_names):
+            # Get dataset info about what to evaluate
+            output_res[dataset_name] = {}
+            output_msg[dataset_name] = {}
+            tracker_list, seq_list, class_list = dataset.get_eval_info()
+            logging.info('Evaluating %i tracker(s) on %i sequence(s) for %i class(es) on %s dataset using the following '
+                         'metrics: %s\n' % (len(tracker_list), len(seq_list), len(class_list), dataset_name,
+                                            ', '.join(metric_names)))
+
+            # Evaluate each tracker
+            for tracker in tracker_list:
+                # If not config['BREAK_ON_ERROR'], then go to the next tracker without breaking
+                try:
+                    # Evaluate each sequence in parallel or in series.
+                    # Returns a nested dict (res), indexed like: res[seq][class][metric_name][sub_metric field]
+                    # e.g. res[seq_0001][class][hota][DetA]
+                    logging.info('Evaluating %s\n' % tracker)
+                    time_start = time.time()
+                    if config['USE_PARALLEL']:
+                        with Pool(config['NUM_PARALLEL_CORES']) as pool:
+                            _eval_sequence = partial(eval_sequence, dataset=dataset, tracker=tracker,
+                                                     class_list=class_list, metrics_list=metrics_list,
+                                                     metric_names=metric_names)
+                            results = pool.map(_eval_sequence, seq_list)
+                            res = dict(zip(seq_list, results))
+                    else:
+                        res = {}
+                        for curr_seq in sorted(seq_list):
+                            res[curr_seq] = eval_sequence(curr_seq, dataset, tracker, class_list, metrics_list,
+                                                          metric_names)
+
+                    # Combine results over all sequences and then over all classes
+
+                    # Collecting combined cls keys (cls averaged, det averaged, super classes)
+                    combined_cls_keys = []
+                    res['COMBINED_SEQ'] = {}
+                    # Combine sequences for each class
+                    for c_cls in class_list:
+                        res['COMBINED_SEQ'][c_cls] = {}
+                        for metric, metric_name in zip(metrics_list, metric_names):
+                            curr_res = {seq_key: seq_value[c_cls][metric_name] for seq_key, seq_value in res.items() if
+                                        seq_key != 'COMBINED_SEQ'}
+                            res['COMBINED_SEQ'][c_cls][metric_name] = metric.combine_sequences(curr_res)
+                    # Combine classes
+                    if dataset.should_classes_combine:
+                        combined_cls_keys += ['cls_comb_cls_av', 'cls_comb_det_av', 'all']
+                        res['COMBINED_SEQ']['cls_comb_cls_av'] = {}
+                        res['COMBINED_SEQ']['cls_comb_det_av'] = {}
+                        for metric, metric_name in zip(metrics_list, metric_names):
+                            cls_res = {cls_key: cls_value[metric_name] for cls_key, cls_value in
+                                       res['COMBINED_SEQ'].items() if cls_key not in combined_cls_keys}
+                            res['COMBINED_SEQ']['cls_comb_cls_av'][metric_name] = \
+                                metric.combine_classes_class_averaged(cls_res)
+                            res['COMBINED_SEQ']['cls_comb_det_av'][metric_name] = \
+                                metric.combine_classes_det_averaged(cls_res)
+                    # Combine classes to super classes
+                    if dataset.use_super_categories:
+                        for cat, sub_cats in dataset.super_categories.items():
+                            combined_cls_keys.append(cat)
+                            res['COMBINED_SEQ'][cat] = {}
+                            for metric, metric_name in zip(metrics_list, metric_names):
+                                cat_res = {cls_key: cls_value[metric_name] for cls_key, cls_value in
+                                           res['COMBINED_SEQ'].items() if cls_key in sub_cats}
+                                res['COMBINED_SEQ'][cat][metric_name] = metric.combine_classes_det_averaged(cat_res)
+
+                    # Print and output results in various formats
+                    if config['TIME_PROGRESS']:
+                        logging.info('All sequences for %s finished in %.2f seconds' % (tracker, time.time() - time_start))
+                    output_fol = dataset.get_output_fol(tracker)
+                    tracker_display_name = dataset.get_display_name(tracker)
+                    for c_cls in res['COMBINED_SEQ'].keys():  # class_list + combined classes if calculated
+                        summaries = []
+                        details = []
+                        num_dets = res['COMBINED_SEQ'][c_cls]['Count']['Dets']
+                        if config['OUTPUT_EMPTY_CLASSES'] or num_dets > 0:
+                            for metric, metric_name in zip(metrics_list, metric_names):
+                                # For combined classes there is no per-sequence evaluation
+                                if c_cls in combined_cls_keys:
+                                    table_res = {'COMBINED_SEQ': res['COMBINED_SEQ'][c_cls][metric_name]}
+                                else:
+                                    table_res = {seq_key: seq_value[c_cls][metric_name] for seq_key, seq_value
+                                                 in res.items()}
+
+                                if config['PRINT_RESULTS'] and config['PRINT_ONLY_COMBINED']:
+                                    dont_print = dataset.should_classes_combine and c_cls not in combined_cls_keys
+                                    if not dont_print:
+                                        metric.print_table({'COMBINED_SEQ': table_res['COMBINED_SEQ']},
+                                                           tracker_display_name, c_cls)
+                                elif config['PRINT_RESULTS']:
+                                    metric.print_table(table_res, tracker_display_name, c_cls)
+                                if config['OUTPUT_SUMMARY']:
+                                    summaries.append(metric.summary_results(table_res))
+                                if config['OUTPUT_DETAILED']:
+                                    details.append(metric.detailed_results(table_res))
+                                if config['PLOT_CURVES']:
+                                    metric.plot_single_tracker_results(table_res, tracker_display_name, c_cls,
+                                                                       output_fol)
+                            if config['OUTPUT_SUMMARY']:
+                                utils.write_summary_results(summaries, c_cls, output_fol)
+                            if config['OUTPUT_DETAILED']:
+                                utils.write_detailed_results(details, c_cls, output_fol)
+
+                    # Output for returning from function
+                    output_res[dataset_name][tracker] = res
+                    output_msg[dataset_name][tracker] = 'Success'
+
+                except Exception as err:
+                    output_res[dataset_name][tracker] = None
+                    if isinstance(err, TrackEvalException):
+                        output_msg[dataset_name][tracker] = str(err)
+                    else:
+                        output_msg[dataset_name][tracker] = 'Unknown error occurred.'
+                    logging.info('Tracker %s was unable to be evaluated.' % tracker)
+                    logging.error(err)
+                    traceback.print_exc()
+                    if config['LOG_ON_ERROR'] is not None:
+                        # logging functions do not accept a file argument, so write the error log directly.
+                        with open(config['LOG_ON_ERROR'], 'a') as f:
+                            print(dataset_name, file=f)
+                            print(tracker, file=f)
+                            print(traceback.format_exc(), file=f)
+                            print('\n\n\n', file=f)
+                    if config['BREAK_ON_ERROR']:
+                        raise err
+                    elif config['RETURN_ON_ERROR']:
+                        return output_res, output_msg
+
+        return output_res, output_msg
+
+
+@_timing.time
+def eval_sequence(seq, dataset, tracker, class_list, metrics_list, metric_names):
+    """
+    Function for evaluating a single sequence
+
+    :param str seq: name of the sequence
+    :param dataset: dataset object that loads the gt and tracker data
+    :param str tracker: name of the tracker
+    :param List[str] class_list: list of all classes to be evaluated
+    :param list metrics_list: list of all metric objects
+    :param List[str] metric_names: list of all metric names
+
+    :return: Dict[str] seq_res: results of the eval sequence
+    ::
+
+        trackeval.eval.eval_sequence(seq, dataset, tracker, class_list, metrics_list, metric_names)
+    """
+    raw_data = dataset.get_raw_seq_data(tracker, seq)
+    seq_res = {}
+    for cls in class_list:
+        seq_res[cls] = {}
+        data = dataset.get_preprocessed_seq_data(raw_data, cls)
+        for metric, met_name in zip(metrics_list, metric_names):
+            seq_res[cls][met_name] = metric.eval_sequence(data)
+    return seq_res
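A sketch of how `Evaluator` ties datasets and metrics together; the dataset configuration is an illustrative placeholder, and `evaluate` returns results nested as `output_res[dataset][tracker][seq][class][metric][field]`:

```python
from utils.trackeval.eval import Evaluator
from utils.trackeval.metrics import HOTA, CLEAR, Identity
from utils.trackeval.datasets.test_mot import MotChallenge2DLocation

evaluator = Evaluator({'USE_PARALLEL': False, 'PRINT_RESULTS': True})
# Placeholder dataset config; requires the gt/tracker folder layout to exist on disk.
dataset_list = [MotChallenge2DLocation({'SEQ_INFO': {'Warehouse_000': 9000}})]
metrics_list = [HOTA(), CLEAR(), Identity()]  # Count() is appended automatically by evaluate()

output_res, output_msg = evaluator.evaluate(dataset_list, metrics_list)
```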
MTMC_Tracking_2025/eval/utils/trackeval/metrics/__init__.py ADDED
@@ -0,0 +1,5 @@
+"""MTMC analytics hota-metrics submodules"""
+from .hota import HOTA
+from .clear import CLEAR
+from .identity import Identity
+from .count import Count
MTMC_Tracking_2025/eval/utils/trackeval/metrics/_base_metric.py ADDED
@@ -0,0 +1,198 @@
+import numpy as np
+from abc import ABC, abstractmethod
+from utils.trackeval import _timing
+from utils.trackeval.utils import TrackEvalException
+
+
+class _BaseMetric(ABC):
+    @abstractmethod
+    def __init__(self):
+        self.plottable = False
+        self.integer_fields = []
+        self.float_fields = []
+        self.array_labels = []
+        self.integer_array_fields = []
+        self.float_array_fields = []
+        self.fields = []
+        self.summary_fields = []
+        self.registered = False
+
+    #####################################################################
+    # Abstract functions for subclasses to implement
+
+    @_timing.time
+    @abstractmethod
+    def eval_sequence(self, data):
+        ...
+
+    @abstractmethod
+    def combine_sequences(self, all_res):
+        ...
+
+    @abstractmethod
+    def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False):
+        ...
+
+    @abstractmethod
+    def combine_classes_det_averaged(self, all_res):
+        ...
+
+    def plot_single_tracker_results(self, all_res, tracker, cls, output_folder):
+        """
+        Plot results of metrics, only valid for metrics with self.plottable
+
+        :param Dict all_res: dictionary containing all results
+        :param str tracker: the tracker to plot results for
+        :param str cls: the class to plot results for
+        :param str output_folder: the output folder for saving the plots
+
+        :raises NotImplementedError: if the metric is plottable but does not override this method
+        """
+        if self.plottable:
+            raise NotImplementedError('plot_results is not implemented for metric %s' % self.get_name())
+        else:
+            pass
+
+    #####################################################################
+    # Helper functions which are useful for all metrics:
+
+    @classmethod
+    def get_name(cls):
+        return cls.__name__
+
+    @staticmethod
+    def _combine_sum(all_res, field):
+        """
+        Combine sequence results via sum
+
+        :param Dict all_res: dictionary containing sequence results
+        :param str field: the field to be combined
+        :return: the sum of the combined results
+        :rtype: float
+        """
+        return sum([all_res[k][field] for k in all_res.keys()])
+
+    @staticmethod
+    def _combine_weighted_av(all_res, field, comb_res, weight_field):
+        """
+        Combine sequence results via weighted average
+
+        :param Dict all_res: dictionary containing sequence results
+        :param str field: the field to be combined
+        :param Dict comb_res: dictionary containing combined results
+        :param str weight_field: the field representing the weight
+        :return: the weighted average of the combined results
+        :rtype: float
+        """
+        return sum([all_res[k][field] * all_res[k][weight_field] for k in all_res.keys()]) / np.maximum(1.0, comb_res[
+            weight_field])
+
+    def print_table(self, table_res, tracker, cls):
+        """
+        Prints table of results for all sequences
+
+        :param Dict table_res: dictionary containing the results for each sequence.
+        :param str tracker: the name of the tracker.
+        :param str cls: the name of the class.
+        :return: None
+        """
+        print('')
+        metric_name = self.get_name()
+        self._row_print([metric_name + ': ' + tracker + '-' + cls] + self.summary_fields)
+        for seq, results in sorted(table_res.items()):
+            if seq == 'COMBINED_SEQ':
+                continue
+            summary_res = self._summary_row(results)
+            self._row_print([seq] + summary_res)
+        summary_res = self._summary_row(table_res['COMBINED_SEQ'])
+        # self._row_print(['COMBINED'] + summary_res)
+
+    def _summary_row(self, results_):
+        """
+        Generate a summary row of values based on the provided results.
+
+        :param Dict results_: dictionary containing the metric results.
+        :return: a list of formatted values for the summary row.
+        :rtype: list
+        :raises NotImplementedError: if the summary function is not implemented for a field type.
+        """
+        vals = []
+        for h in self.summary_fields:
+            if h in self.float_array_fields:
+                vals.append("{0:1.5g}".format(100 * np.mean(results_[h])))
+            elif h in self.float_fields:
+                vals.append("{0:1.5g}".format(100 * float(results_[h])))
+            elif h in self.integer_fields:
+                vals.append("{0:d}".format(int(results_[h])))
+            else:
+                raise NotImplementedError("Summary function not implemented for this field type.")
+        return vals
+
+    @staticmethod
+    def _row_print(*argv):
+        """
+        Prints results in evenly spaced rows, with more space in the first column
+
+        :param argv: the values to be printed in each column of the row.
+        :type argv: tuple or list
+        """
+        if len(argv) == 1:
+            argv = argv[0]
+        to_print = '%-35s' % argv[0]
+        for v in argv[1:]:
+            to_print += '%-10s' % str(v)
+        print(to_print)
+
+    def summary_results(self, table_res):
+        """
+        Returns a simple summary of final results for a tracker
+
+        :param Dict table_res: the table of results containing per-sequence and combined sequence results.
+        :return: dictionary representing the summary of final results.
+        :rtype: Dict
+        """
+        return dict(zip(self.summary_fields, self._summary_row(table_res['COMBINED_SEQ'])))
+
+    def detailed_results(self, table_res):
+        """
+        Returns detailed final results for a tracker
+
+        :param Dict table_res: the table of results containing per-sequence and combined sequence results.
+        :return: detailed results for each sequence as a dictionary of dictionaries.
+        :rtype: Dict
+        :raises TrackEvalException: if the field names and data have different sizes.
+        """
+        # Get detailed field information
+        detailed_fields = self.float_fields + self.integer_fields
+        for h in self.float_array_fields + self.integer_array_fields:
+            for alpha in [int(100 * x) for x in self.array_labels]:
+                detailed_fields.append(h + '___' + str(alpha))
+            detailed_fields.append(h + '___AUC')
+
+        # Get detailed results
+        detailed_results = {}
+        for seq, res in table_res.items():
+            detailed_row = self._detailed_row(res)
+            if len(detailed_row) != len(detailed_fields):
+                raise TrackEvalException(
+                    'Field names and data have different sizes (%i and %i)' % (len(detailed_row), len(detailed_fields)))
+            detailed_results[seq] = dict(zip(detailed_fields, detailed_row))
+        return detailed_results
+
+    def _detailed_row(self, res):
+        """
+        Calculates a detailed row of results for a given set of metrics.
+
+        :param Dict res: the results containing the metrics.
+        :return: detailed row of results.
+        :rtype: list
+        """
+        detailed_row = []
+        for h in self.float_fields + self.integer_fields:
+            detailed_row.append(res[h])
+        for h in self.float_array_fields + self.integer_array_fields:
+            for i, alpha in enumerate([int(100 * x) for x in self.array_labels]):
+                detailed_row.append(res[h][i])
+            detailed_row.append(np.mean(res[h]))
+        return detailed_row
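To illustrate the contract `_BaseMetric` imposes, here is a minimal hypothetical metric that implements all four abstract methods and reuses the `_combine_sum` helper; it is not part of the commit:

```python
from utils.trackeval import _timing
from utils.trackeval.metrics._base_metric import _BaseMetric

class DetCount(_BaseMetric):
    """Illustrative metric: counts tracker detections per sequence."""
    def __init__(self, config=None):
        super().__init__()
        self.integer_fields = ['MyDets']
        self.fields = self.integer_fields
        self.summary_fields = self.fields

    @_timing.time
    def eval_sequence(self, data):
        # 'data' is the dict produced by get_preprocessed_seq_data().
        return {'MyDets': data['num_tracker_dets']}

    def combine_sequences(self, all_res):
        return {'MyDets': self._combine_sum(all_res, 'MyDets')}

    def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False):
        return {'MyDets': self._combine_sum(all_res, 'MyDets')}

    def combine_classes_det_averaged(self, all_res):
        return {'MyDets': self._combine_sum(all_res, 'MyDets')}
```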
MTMC_Tracking_2025/eval/utils/trackeval/metrics/clear.py ADDED
@@ -0,0 +1,223 @@
+import numpy as np
+from scipy.optimize import linear_sum_assignment
+from ._base_metric import _BaseMetric
+from utils.trackeval import _timing
+from utils.trackeval import utils
+
+
+class CLEAR(_BaseMetric):
+    """
+    Class which implements the CLEAR metrics
+
+    :param Dict config: configuration for the app
+    ::
+
+        clear = trackeval.metrics.CLEAR(config)
+    """
+
+    @staticmethod
+    def get_default_config():
+        """Default class config values"""
+        default_config = {
+            'THRESHOLD': 0.5,  # Similarity score threshold required for a TP match. Default: 0.5.
+            'PRINT_CONFIG': True,  # Whether to print the config information on init. Default: True.
+        }
+        return default_config
+
+    def __init__(self, config=None):
+        super().__init__()
+        main_integer_fields = ['CLR_TP', 'CLR_FN', 'CLR_FP', 'IDSW', 'MT', 'PT', 'ML', 'Frag']
+        extra_integer_fields = ['CLR_Frames']
+        self.integer_fields = main_integer_fields + extra_integer_fields
+        main_float_fields = ['MOTA', 'MOTP', 'MODA', 'CLR_Re', 'CLR_Pr', 'MTR', 'PTR', 'MLR', 'sMOTA']
+        extra_float_fields = ['CLR_F1', 'FP_per_frame', 'MOTAL', 'MOTP_sum']
+        self.float_fields = main_float_fields + extra_float_fields
+        self.fields = self.float_fields + self.integer_fields
+        self.summed_fields = self.integer_fields + ['MOTP_sum']
+        self.summary_fields = main_float_fields + main_integer_fields
+
+        # Configuration options:
+        self.config = utils.init_config(config, self.get_default_config(), self.get_name())
+        self.threshold = float(self.config['THRESHOLD'])
+
+    @_timing.time
+    def eval_sequence(self, data):
+        """
+        Calculates CLEAR metrics for one sequence
+
+        :param Dict[str, float] data: dictionary containing the data for the sequence
+        :return: dictionary containing the calculated CLEAR metrics
+        :rtype: Dict[str, float]
+        """
+        # Initialise results
+        res = {}
+        for field in self.fields:
+            res[field] = 0
+
+        # Return result quickly if tracker or gt sequence is empty
+        if data['num_tracker_dets'] == 0:
+            res['CLR_FN'] = data['num_gt_dets']
+            res['ML'] = data['num_gt_ids']
+            res['MLR'] = 1.0
+            return res
+        if data['num_gt_dets'] == 0:
+            res['CLR_FP'] = data['num_tracker_dets']
+            res['MLR'] = 1.0
+            return res
+
+        # Variables counting global association
+        num_gt_ids = data['num_gt_ids']
+        gt_id_count = np.zeros(num_gt_ids)  # For MT/ML/PT
+        gt_matched_count = np.zeros(num_gt_ids)  # For MT/ML/PT
+        gt_frag_count = np.zeros(num_gt_ids)  # For Frag
+
+        # Note that IDSWs are counted based on the last time each gt_id was present (any number of frames previously),
+        # but are only used in matching to continue current tracks based on the gt_id in the single previous timestep.
+        prev_tracker_id = np.nan * np.zeros(num_gt_ids)  # For scoring IDSW
+        prev_timestep_tracker_id = np.nan * np.zeros(num_gt_ids)  # For matching IDSW
+
+        # Calculate scores for each timestep
+        for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):
+            # Deal with the case that there are no gt_det/tracker_det in a timestep.
+            if len(gt_ids_t) == 0:
+                res['CLR_FP'] += len(tracker_ids_t)
+                continue
+            if len(tracker_ids_t) == 0:
+                res['CLR_FN'] += len(gt_ids_t)
+                gt_id_count[gt_ids_t] += 1
+                continue
+
+            # Calc score matrix to first minimise IDSWs from previous frame, and then maximise MOTP secondarily
+            similarity = data['similarity_scores'][t]
+            score_mat = (tracker_ids_t[np.newaxis, :] == prev_timestep_tracker_id[gt_ids_t[:, np.newaxis]])
+            score_mat = 1000 * score_mat + similarity
+            score_mat[similarity < self.threshold - np.finfo('float').eps] = 0
+
+            # Hungarian algorithm to find best matches
+            match_rows, match_cols = linear_sum_assignment(-score_mat)
+            actually_matched_mask = score_mat[match_rows, match_cols] > 0 + np.finfo('float').eps
+            match_rows = match_rows[actually_matched_mask]
+            match_cols = match_cols[actually_matched_mask]
+
+            matched_gt_ids = gt_ids_t[match_rows]
+            matched_tracker_ids = tracker_ids_t[match_cols]
+
+            # Calc IDSW for MOTA
+            prev_matched_tracker_ids = prev_tracker_id[matched_gt_ids]
+            is_idsw = (np.logical_not(np.isnan(prev_matched_tracker_ids))) & (
+                np.not_equal(matched_tracker_ids, prev_matched_tracker_ids))
+            res['IDSW'] += np.sum(is_idsw)
+
+            # Update counters for MT/ML/PT/Frag and record for IDSW/Frag for next timestep
+            gt_id_count[gt_ids_t] += 1
+            gt_matched_count[matched_gt_ids] += 1
+            not_previously_tracked = np.isnan(prev_timestep_tracker_id)
+            prev_tracker_id[matched_gt_ids] = matched_tracker_ids
+            prev_timestep_tracker_id[:] = np.nan
+            prev_timestep_tracker_id[matched_gt_ids] = matched_tracker_ids
+            currently_tracked = np.logical_not(np.isnan(prev_timestep_tracker_id))
+            gt_frag_count += np.logical_and(not_previously_tracked, currently_tracked)
+
+            # Calculate and accumulate basic statistics
+            num_matches = len(matched_gt_ids)
+            res['CLR_TP'] += num_matches
+            res['CLR_FN'] += len(gt_ids_t) - num_matches
+            res['CLR_FP'] += len(tracker_ids_t) - num_matches
+            if num_matches > 0:
+                res['MOTP_sum'] += sum(similarity[match_rows, match_cols])
+
+        # Calculate MT/ML/PT/Frag/MOTP
+        tracked_ratio = gt_matched_count[gt_id_count > 0] / gt_id_count[gt_id_count > 0]
+        res['MT'] = np.sum(np.greater(tracked_ratio, 0.8))
+        res['PT'] = np.sum(np.greater_equal(tracked_ratio, 0.2)) - res['MT']
+        res['ML'] = num_gt_ids - res['MT'] - res['PT']
+        res['Frag'] = np.sum(np.subtract(gt_frag_count[gt_frag_count > 0], 1))
+        res['MOTP'] = res['MOTP_sum'] / np.maximum(1.0, res['CLR_TP'])
+
+        res['CLR_Frames'] = data['num_timesteps']
+
+        # Calculate final CLEAR scores
+        res = self._compute_final_fields(res)
+        return res
+
+    def combine_sequences(self, all_res):
+        """
+        Combines metrics across all sequences
+
+        :param Dict[str, float] all_res: dictionary containing the metrics for each sequence
+        :return: dictionary containing the combined metrics across sequences
+        :rtype: Dict[str, float]
+        """
+        res = {}
+        for field in self.summed_fields:
+            res[field] = self._combine_sum(all_res, field)
+        res = self._compute_final_fields(res)
+        return res
+
+    def combine_classes_det_averaged(self, all_res):
+        """
+        Combines metrics across all classes by averaging over the detection values
+
+        :param Dict[str, float] all_res: dictionary containing the metrics for each class
+        :return: dictionary containing the combined metrics averaged over detections
+        :rtype: Dict[str, float]
+        """
+        res = {}
+        for field in self.summed_fields:
+            res[field] = self._combine_sum(all_res, field)
+        res = self._compute_final_fields(res)
+        return res
+
+    def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False):
+        """
+        Combines metrics across all classes by averaging over the class values.
+        If 'ignore_empty_classes' is True, then it only sums over classes with at least one gt or predicted detection.
+
+        :param Dict[str, float] all_res: dictionary containing the CLEAR metrics for each class
+        :param bool ignore_empty_classes: flag to ignore empty classes, defaults to False
+        :return: dictionary containing the combined metrics averaged over classes
+        :rtype: Dict[str, float]
+        """
+        res = {}
+        for field in self.integer_fields:
+            if ignore_empty_classes:
+                res[field] = self._combine_sum(
+                    {k: v for k, v in all_res.items() if v['CLR_TP'] + v['CLR_FN'] + v['CLR_FP'] > 0}, field)
+            else:
+                res[field] = self._combine_sum({k: v for k, v in all_res.items()}, field)
+        for field in self.float_fields:
+            if ignore_empty_classes:
+                res[field] = np.mean(
+                    [v[field] for v in all_res.values() if v['CLR_TP'] + v['CLR_FN'] + v['CLR_FP'] > 0], axis=0)
+            else:
+                res[field] = np.mean([v[field] for v in all_res.values()], axis=0)
+        return res
+
+    @staticmethod
+    def _compute_final_fields(res):
+        """
+        Calculate sub-metric ('field') values which only depend on other sub-metric values.
+        This function is used both for per-sequence calculation, and in combining values across sequences.
+
+        :param Dict[str, float] res: dictionary containing the sub-metric values
+        :return: dictionary containing the updated sub-metric values
+        :rtype: Dict[str, float]
+        """
+        num_gt_ids = res['MT'] + res['ML'] + res['PT']
+        res['MTR'] = res['MT'] / np.maximum(1.0, num_gt_ids)
+        res['MLR'] = res['ML'] / np.maximum(1.0, num_gt_ids)
+        res['PTR'] = res['PT'] / np.maximum(1.0, num_gt_ids)
+        res['CLR_Re'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])
+        res['CLR_Pr'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + res['CLR_FP'])
+        res['MODA'] = (res['CLR_TP'] - res['CLR_FP']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])
+        res['MOTA'] = (res['CLR_TP'] - res['CLR_FP'] - res['IDSW']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])
+        res['MOTP'] = res['MOTP_sum'] / np.maximum(1.0, res['CLR_TP'])
+        res['sMOTA'] = (res['MOTP_sum'] - res['CLR_FP'] - res['IDSW']) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])
+
+        res['CLR_F1'] = res['CLR_TP'] / np.maximum(1.0, res['CLR_TP'] + 0.5 * res['CLR_FN'] + 0.5 * res['CLR_FP'])
+        res['FP_per_frame'] = res['CLR_FP'] / np.maximum(1.0, res['CLR_Frames'])
+        safe_log_idsw = np.log10(res['IDSW']) if res['IDSW'] > 0 else res['IDSW']
+        res['MOTAL'] = (res['CLR_TP'] - res['CLR_FP'] - safe_log_idsw) / np.maximum(1.0, res['CLR_TP'] + res['CLR_FN'])
+        return res
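A worked example of the `_compute_final_fields` arithmetic with invented counts, to make the MODA/MOTA relationship concrete:

```python
# Invented counts: 100 gt dets (TP + FN), 90 tracker dets (TP + FP).
CLR_TP, CLR_FN, CLR_FP, IDSW = 80, 20, 10, 5

CLR_Re = CLR_TP / (CLR_TP + CLR_FN)                   # 80 / 100 = 0.80
CLR_Pr = CLR_TP / (CLR_TP + CLR_FP)                   # 80 / 90  ~ 0.889
MODA = (CLR_TP - CLR_FP) / (CLR_TP + CLR_FN)          # 70 / 100 = 0.70
MOTA = (CLR_TP - CLR_FP - IDSW) / (CLR_TP + CLR_FN)   # 65 / 100 = 0.65
```

ID switches are penalised only by MOTA, which is why it comes out below MODA here.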
MTMC_Tracking_2025/eval/utils/trackeval/metrics/count.py ADDED
@@ -0,0 +1,76 @@
1
+ from ._base_metric import _BaseMetric
2
+ from utils.trackeval import _timing
3
+
4
+
5
+ class Count(_BaseMetric):
6
+ """
7
+ Class which simply counts the number of tracker and gt detections and ids.
8
+
9
+ :param Dict config: configuration for the app
10
+ ::
11
+
12
+ identity = trackeval.metrics.Count(config)
13
+ """
14
+ def __init__(self, config=None):
15
+ super().__init__()
16
+ self.integer_fields = ['Dets', 'GT_Dets', 'IDs', 'GT_IDs']
17
+ self.fields = self.integer_fields
18
+ self.summary_fields = self.fields
19
+
20
+ @_timing.time
21
+ def eval_sequence(self, data):
22
+ """
23
+ Returns counts for one sequence
24
+
25
+ :param Dict data: dictionary containing the data for the sequence
26
+
27
+ :return: dictionary containing the calculated count metrics
28
+ :rtype: Dict[str, Dict[str]]
29
+ """
30
+ # Get results
31
+ res = {'Dets': data['num_tracker_dets'],
32
+ 'GT_Dets': data['num_gt_dets'],
33
+ 'IDs': data['num_tracker_ids'],
34
+ 'GT_IDs': data['num_gt_ids'],
35
+ 'Frames': data['num_timesteps']}
36
+ return res
37
+
38
+ def combine_sequences(self, all_res):
39
+ """
40
+ Combines metrics across all sequences
41
+
42
+ :param Dict[str, float] all_res: dictionary containing the metrics for each sequence
43
+ :return: dictionary containing the combined metrics across sequences
44
+ :rtype: Dict[str, float]
45
+ """
46
+ res = {}
47
+ for field in self.integer_fields:
48
+ res[field] = self._combine_sum(all_res, field)
49
+ return res
50
+
51
+ def combine_classes_class_averaged(self, all_res, ignore_empty_classes=None):
52
+ """
53
+ Combines metrics across all classes by averaging over the class values
54
+
55
+ :param Dict[str, float] all_res: dictionary containing the ID metrics for each class
56
+ :param bool ignore_empty_classes: Flag to ignore empty classes, defaults to False
57
+ :return: dictionary containing the combined metrics averaged over classes
58
+ :rtype: Dict[str, float]
59
+ """
60
+ res = {}
61
+ for field in self.integer_fields:
62
+ res[field] = self._combine_sum(all_res, field)
63
+ return res
64
+
65
+ def combine_classes_det_averaged(self, all_res):
66
+ """
67
+ Combines metrics across all classes by averaging over the detection values
68
+
69
+ :param Dict[str, float] all_res: dictionary containing the metrics for each class
70
+ :return: dictionary containing the combined metrics averaged over detections
71
+ :rtype: Dict[str, float]
72
+ """
73
+ res = {}
74
+ for field in self.integer_fields:
75
+ res[field] = self._combine_sum(all_res, field)
76
+ return res
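
Assuming `_BaseMetric._combine_sum` simply sums a field across the per-sequence results (as its name suggests; it is defined in `_base_metric.py`), the combination step above reduces to:

```python
# Hypothetical per-sequence count results, keyed by sequence name (illustration only).
all_res = {
    'Warehouse_000': {'Dets': 1200, 'GT_Dets': 1150, 'IDs': 14, 'GT_IDs': 12},
    'Warehouse_001': {'Dets': 800,  'GT_Dets': 790,  'IDs': 9,  'GT_IDs': 8},
}

# Equivalent of Count.combine_sequences with a summing _combine_sum.
combined = {field: sum(seq[field] for seq in all_res.values())
            for field in ['Dets', 'GT_Dets', 'IDs', 'GT_IDs']}
print(combined)  # {'Dets': 2000, 'GT_Dets': 1940, 'IDs': 23, 'GT_IDs': 20}
```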
MTMC_Tracking_2025/eval/utils/trackeval/metrics/hota.py ADDED
@@ -0,0 +1,245 @@
+ import os
+ import numpy as np
+ from utils.trackeval import _timing
+ from scipy.optimize import linear_sum_assignment
+ from utils.trackeval.metrics._base_metric import _BaseMetric
+
+
+ class HOTA(_BaseMetric):
+     """
+     Class which implements the HOTA metrics.
+     See: https://link.springer.com/article/10.1007/s11263-020-01375-2
+
+     :param Dict config: configuration for the app
+     ::
+
+         hota = trackeval.metrics.HOTA(config)
+     """
+
+     def __init__(self, config=None):
+         super().__init__()
+         self.plottable = True
+         self.array_labels = np.arange(0.05, 0.99, 0.05)
+         self.integer_array_fields = ['HOTA_TP', 'HOTA_FN', 'HOTA_FP']
+         self.float_array_fields = ['HOTA', 'DetA', 'AssA', 'DetRe', 'DetPr', 'AssRe', 'AssPr', 'LocA', 'OWTA']
+         self.float_fields = ['HOTA(0)', 'LocA(0)', 'HOTALocA(0)']
+         self.fields = self.float_array_fields + self.integer_array_fields + self.float_fields
+         self.summary_fields = self.float_array_fields + self.float_fields
+
+     @_timing.time
+     def eval_sequence(self, data):
+         """
+         Calculates the HOTA metrics for one sequence
+
+         :param Dict data: dictionary containing the data for the sequence
+         :return: dictionary containing the calculated HOTA metrics
+         :rtype: Dict
+         """
+
+         # Initialise results
+         res = {}
+         for field in self.float_array_fields + self.integer_array_fields:
+             res[field] = np.zeros((len(self.array_labels)), dtype=float)
+         for field in self.float_fields:
+             res[field] = 0
+
+         # Return result quickly if tracker or gt sequence is empty
+         if data['num_tracker_dets'] == 0:
+             res['HOTA_FN'] = data['num_gt_dets'] * np.ones((len(self.array_labels)), dtype=float)
+             res['LocA'] = np.ones((len(self.array_labels)), dtype=float)
+             res['LocA(0)'] = 1.0
+             return res
+         if data['num_gt_dets'] == 0:
+             res['HOTA_FP'] = data['num_tracker_dets'] * np.ones((len(self.array_labels)), dtype=float)
+             res['LocA'] = np.ones((len(self.array_labels)), dtype=float)
+             res['LocA(0)'] = 1.0
+             return res
+
+         # Variables counting global association
+         potential_matches_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids']))
+         gt_id_count = np.zeros((data['num_gt_ids'], 1))
+         tracker_id_count = np.zeros((1, data['num_tracker_ids']))
+
+         # First loop through each timestep and accumulate global track information.
+         for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):
+             # Count the potential matches between ids in each timestep
+             # These are normalised, weighted by the match similarity.
+             similarity = data['similarity_scores'][t]
+             sim_iou_denom = similarity.sum(0)[np.newaxis, :] + similarity.sum(1)[:, np.newaxis] - similarity
+             sim_iou = np.zeros_like(similarity)
+             sim_iou_mask = sim_iou_denom > 0 + np.finfo('float').eps
+             sim_iou[sim_iou_mask] = similarity[sim_iou_mask] / sim_iou_denom[sim_iou_mask]
+             potential_matches_count[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] += sim_iou
+
+             # Calculate the total number of dets for each gt_id and tracker_id.
+             gt_id_count[gt_ids_t] += 1
+             tracker_id_count[0, tracker_ids_t] += 1
+
+         # Calculate overall jaccard alignment score (before unique matching) between IDs
+         global_alignment_score = potential_matches_count / (gt_id_count + tracker_id_count - potential_matches_count)
+         matches_counts = [np.zeros_like(potential_matches_count) for _ in self.array_labels]
+
+         # Calculate scores for each timestep
+         for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):
+             # Deal with the case that there are no gt_det/tracker_det in a timestep.
+             if len(gt_ids_t) == 0:
+                 for a, alpha in enumerate(self.array_labels):
+                     res['HOTA_FP'][a] += len(tracker_ids_t)
+                 continue
+             if len(tracker_ids_t) == 0:
+                 for a, alpha in enumerate(self.array_labels):
+                     res['HOTA_FN'][a] += len(gt_ids_t)
+                 continue
+
+             # Get matching scores between pairs of dets for optimizing HOTA
+             similarity = data['similarity_scores'][t]
+             score_mat = global_alignment_score[gt_ids_t[:, np.newaxis], tracker_ids_t[np.newaxis, :]] * similarity
+
+             # Hungarian algorithm to find best matches
+             match_rows, match_cols = linear_sum_assignment(-score_mat)
+
+             # Calculate and accumulate basic statistics
+             for a, alpha in enumerate(self.array_labels):
+                 actually_matched_mask = similarity[match_rows, match_cols] >= alpha - np.finfo('float').eps
+                 alpha_match_rows = match_rows[actually_matched_mask]
+                 alpha_match_cols = match_cols[actually_matched_mask]
+                 num_matches = len(alpha_match_rows)
+                 res['HOTA_TP'][a] += num_matches
+                 res['HOTA_FN'][a] += len(gt_ids_t) - num_matches
+                 res['HOTA_FP'][a] += len(tracker_ids_t) - num_matches
+                 if num_matches > 0:
+                     res['LocA'][a] += sum(similarity[alpha_match_rows, alpha_match_cols])
+                     matches_counts[a][gt_ids_t[alpha_match_rows], tracker_ids_t[alpha_match_cols]] += 1
+
+         # Calculate association scores (AssA, AssRe, AssPr) for each alpha value.
+         # First calculate scores per gt_id/tracker_id combo and then average over the number of detections.
+         for a, alpha in enumerate(self.array_labels):
+             matches_count = matches_counts[a]
+             ass_a = matches_count / np.maximum(1, gt_id_count + tracker_id_count - matches_count)
+             res['AssA'][a] = np.sum(matches_count * ass_a) / np.maximum(1, res['HOTA_TP'][a])
+             ass_re = matches_count / np.maximum(1, gt_id_count)
+             res['AssRe'][a] = np.sum(matches_count * ass_re) / np.maximum(1, res['HOTA_TP'][a])
+             ass_pr = matches_count / np.maximum(1, tracker_id_count)
+             res['AssPr'][a] = np.sum(matches_count * ass_pr) / np.maximum(1, res['HOTA_TP'][a])
+
+         # Calculate final scores
+         res['LocA'] = np.maximum(0, res['LocA']) / np.maximum(1e-10, res['HOTA_TP'])
+         res = self._compute_final_fields(res)
+         return res
+
+     def combine_sequences(self, all_res):
+         """
+         Combines metrics across all sequences
+
+         :param Dict[str, float] all_res: dictionary containing the metrics for each sequence
+         :return: dictionary containing the combined metrics across sequences
+         :rtype: Dict[str, float]
+         """
+         res = {}
+         for field in self.integer_array_fields:
+             res[field] = self._combine_sum(all_res, field)
+         for field in ['AssRe', 'AssPr', 'AssA']:
+             res[field] = self._combine_weighted_av(all_res, field, res, weight_field='HOTA_TP')
+         loca_weighted_sum = sum([all_res[k]['LocA'] * all_res[k]['HOTA_TP'] for k in all_res.keys()])
+         res['LocA'] = np.maximum(1e-10, loca_weighted_sum) / np.maximum(1e-10, res['HOTA_TP'])
+         res = self._compute_final_fields(res)
+         return res
+
+     def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False):
+         """
+         Combines metrics across all classes by averaging over the class values.
+         If 'ignore_empty_classes' is True, then it only sums over classes with at least one gt or predicted detection.
+
+         :param Dict[str, float] all_res: dictionary containing the HOTA metrics for each class
+         :param bool ignore_empty_classes: flag to ignore empty classes, defaults to False
+         :return: dictionary containing the combined metrics averaged over classes
+         :rtype: Dict[str, float]
+         """
+         res = {}
+         for field in self.integer_array_fields:
+             if ignore_empty_classes:
+                 res[field] = self._combine_sum(
+                     {k: v for k, v in all_res.items()
+                      if (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()}, field)
+             else:
+                 res[field] = self._combine_sum({k: v for k, v in all_res.items()}, field)
+
+         for field in self.float_fields + self.float_array_fields:
+             if ignore_empty_classes:
+                 res[field] = np.mean([v[field] for v in all_res.values() if
+                                       (v['HOTA_TP'] + v['HOTA_FN'] + v['HOTA_FP'] > 0 + np.finfo('float').eps).any()],
+                                      axis=0)
+             else:
+                 res[field] = np.mean([v[field] for v in all_res.values()], axis=0)
+         return res
+
+     def combine_classes_det_averaged(self, all_res):
+         """
+         Combines metrics across all classes by averaging over the detection values
+
+         :param Dict[str, float] all_res: dictionary containing the metrics for each class
+         :return: dictionary containing the combined metrics averaged over detections
+         :rtype: Dict[str, float]
+         """
+         res = {}
+         for field in self.integer_array_fields:
+             res[field] = self._combine_sum(all_res, field)
+         for field in ['AssRe', 'AssPr', 'AssA']:
+             res[field] = self._combine_weighted_av(all_res, field, res, weight_field='HOTA_TP')
+         loca_weighted_sum = sum([all_res[k]['LocA'] * all_res[k]['HOTA_TP'] for k in all_res.keys()])
+         res['LocA'] = np.maximum(1e-10, loca_weighted_sum) / np.maximum(1e-10, res['HOTA_TP'])
+         res = self._compute_final_fields(res)
+         return res
+
+     @staticmethod
+     def _compute_final_fields(res):
+         """
+         Calculate sub-metric ('field') values which only depend on other sub-metric values.
+         This function is used both for per-sequence calculation and for combining values across sequences.
+
+         :param Dict[str, float] res: dictionary containing the sub-metric values
+         :return: dictionary containing the updated sub-metric values
+         :rtype: Dict[str, float]
+         """
+         res['DetRe'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FN'])
+         res['DetPr'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FP'])
+         res['DetA'] = res['HOTA_TP'] / np.maximum(1, res['HOTA_TP'] + res['HOTA_FN'] + res['HOTA_FP'])
+         res['HOTA'] = np.sqrt(res['DetA'] * res['AssA'])
+         res['OWTA'] = np.sqrt(res['DetRe'] * res['AssA'])
+
+         res['HOTA(0)'] = res['HOTA'][0]
+         res['LocA(0)'] = res['LocA'][0]
+         res['HOTALocA(0)'] = res['HOTA(0)'] * res['LocA(0)']
+         return res
+
+     def plot_single_tracker_results(self, table_res, tracker, cls, output_folder):
+         """
+         Create plot of results
+
+         :param Dict table_res: dictionary containing the evaluation results
+         :param str tracker: the name of the tracker
+         :param str cls: the class name
+         :param str output_folder: the output folder path for saving the plot
+         """
+
+         # Only loaded when run to reduce minimum requirements
+         from matplotlib import pyplot as plt
+
+         res = table_res['COMBINED_SEQ']
+         styles_to_plot = ['r', 'b', 'g', 'b--', 'b:', 'g--', 'g:', 'm']
+         for name, style in zip(self.float_array_fields, styles_to_plot):
+             plt.plot(self.array_labels, res[name], style)
+         plt.xlabel('alpha')
+         plt.ylabel('score')
+         plt.title(tracker + ' - ' + cls)
+         plt.axis([0, 1, 0, 1])
+         legend = []
+         for name in self.float_array_fields:
+             legend += [name + ' (' + str(np.round(np.mean(res[name]), 2)) + ')']
+         plt.legend(legend, loc='lower left')
+         out_file = os.path.join(output_folder, cls + '_plot.pdf')
+         os.makedirs(os.path.dirname(out_file), exist_ok=True)
+         plt.savefig(out_file)
+         plt.savefig(out_file.replace('.pdf', '.png'))
+         plt.clf()
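
As a worked example of `_compute_final_fields`, here is how DetA and HOTA fall out of the accumulated counts at a single alpha threshold (hypothetical numbers; the reported HOTA then averages the per-alpha values over `array_labels`):

```python
import numpy as np

# Hypothetical counts at one alpha threshold (illustration only).
hota_tp, hota_fn, hota_fp = 80.0, 20.0, 10.0
ass_a = 0.75  # association accuracy, already averaged over TP matches

det_a = hota_tp / max(1.0, hota_tp + hota_fn + hota_fp)  # 80 / 110 ≈ 0.727
hota = np.sqrt(det_a * ass_a)                            # sqrt(0.545) ≈ 0.739
print(f"DetA={det_a:.3f}, HOTA={hota:.3f}")
```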
MTMC_Tracking_2025/eval/utils/trackeval/metrics/identity.py ADDED
@@ -0,0 +1,172 @@
+ import numpy as np
+ from scipy.optimize import linear_sum_assignment
+ from utils.trackeval import _timing
+ from utils.trackeval import utils
+ from utils.trackeval.metrics._base_metric import _BaseMetric
+
+
+ class Identity(_BaseMetric):
+     """
+     Class which implements the Identity metrics
+
+     :param Dict config: configuration for the app
+     ::
+
+         identity = trackeval.metrics.Identity(config)
+     """
+
+     @staticmethod
+     def get_default_config():
+         """Default class config values"""
+         default_config = {
+             'THRESHOLD': 0.5,  # Similarity score threshold required for an IDTP match. Default: 0.5.
+             'PRINT_CONFIG': True,  # Whether to print the config information on init. Default: True.
+         }
+         return default_config
+
+     def __init__(self, config=None):
+         super().__init__()
+         self.integer_fields = ['IDTP', 'IDFN', 'IDFP']
+         self.float_fields = ['IDF1', 'IDR', 'IDP']
+         self.fields = self.float_fields + self.integer_fields
+         self.summary_fields = self.fields
+
+         # Configuration options:
+         self.config = utils.init_config(config, self.get_default_config(), self.get_name())
+         self.threshold = float(self.config['THRESHOLD'])
+
+     @_timing.time
+     def eval_sequence(self, data):
+         """
+         Calculates ID metrics for one sequence
+
+         :param Dict data: dictionary containing the data for the sequence
+         :return: dictionary containing the calculated ID metrics
+         :rtype: Dict[str, float]
+         """
+         # Initialise results
+         res = {}
+         for field in self.fields:
+             res[field] = 0
+
+         # Return result quickly if tracker or gt sequence is empty
+         if data['num_tracker_dets'] == 0:
+             res['IDFN'] = data['num_gt_dets']
+             return res
+         if data['num_gt_dets'] == 0:
+             res['IDFP'] = data['num_tracker_dets']
+             return res
+
+         # Variables counting global association
+         potential_matches_count = np.zeros((data['num_gt_ids'], data['num_tracker_ids']))
+         gt_id_count = np.zeros(data['num_gt_ids'])
+         tracker_id_count = np.zeros(data['num_tracker_ids'])
+
+         # First loop through each timestep and accumulate global track information.
+         for t, (gt_ids_t, tracker_ids_t) in enumerate(zip(data['gt_ids'], data['tracker_ids'])):
+             # Count the potential matches between ids in each timestep
+             matches_mask = np.greater_equal(data['similarity_scores'][t], self.threshold)
+             match_idx_gt, match_idx_tracker = np.nonzero(matches_mask)
+             potential_matches_count[gt_ids_t[match_idx_gt], tracker_ids_t[match_idx_tracker]] += 1
+
+             # Calculate the total number of dets for each gt_id and tracker_id.
+             gt_id_count[gt_ids_t] += 1
+             tracker_id_count[tracker_ids_t] += 1
+
+         # Calculate optimal assignment cost matrix for ID metrics
+         num_gt_ids = data['num_gt_ids']
+         num_tracker_ids = data['num_tracker_ids']
+         fp_mat = np.zeros((num_gt_ids + num_tracker_ids, num_gt_ids + num_tracker_ids))
+         fn_mat = np.zeros((num_gt_ids + num_tracker_ids, num_gt_ids + num_tracker_ids))
+         fp_mat[num_gt_ids:, :num_tracker_ids] = 1e10
+         fn_mat[:num_gt_ids, num_tracker_ids:] = 1e10
+         for gt_id in range(num_gt_ids):
+             fn_mat[gt_id, :num_tracker_ids] = gt_id_count[gt_id]
+             fn_mat[gt_id, num_tracker_ids + gt_id] = gt_id_count[gt_id]
+         for tracker_id in range(num_tracker_ids):
+             fp_mat[:num_gt_ids, tracker_id] = tracker_id_count[tracker_id]
+             fp_mat[tracker_id + num_gt_ids, tracker_id] = tracker_id_count[tracker_id]
+         fn_mat[:num_gt_ids, :num_tracker_ids] -= potential_matches_count
+         fp_mat[:num_gt_ids, :num_tracker_ids] -= potential_matches_count
+
+         # Hungarian algorithm
+         match_rows, match_cols = linear_sum_assignment(fn_mat + fp_mat)
+
+         # Accumulate basic statistics
+         res['IDFN'] = fn_mat[match_rows, match_cols].sum().astype(int)
+         res['IDFP'] = fp_mat[match_rows, match_cols].sum().astype(int)
+         res['IDTP'] = (gt_id_count.sum() - res['IDFN']).astype(int)
+
+         # Calculate final ID scores
+         res = self._compute_final_fields(res)
+         return res
+
+     def combine_classes_class_averaged(self, all_res, ignore_empty_classes=False):
+         """
+         Combines metrics across all classes by averaging over the class values.
+         If 'ignore_empty_classes' is True, then it only sums over classes with at least one gt or predicted detection.
+
+         :param Dict[str, float] all_res: dictionary containing the ID metrics for each class
+         :param bool ignore_empty_classes: flag to ignore empty classes, defaults to False
+         :return: dictionary containing the combined metrics averaged over classes
+         :rtype: Dict[str, float]
+         """
+         res = {}
+         for field in self.integer_fields:
+             if ignore_empty_classes:
+                 res[field] = self._combine_sum({k: v for k, v in all_res.items()
+                                                 if v['IDTP'] + v['IDFN'] + v['IDFP'] > 0 + np.finfo('float').eps},
+                                                field)
+             else:
+                 res[field] = self._combine_sum({k: v for k, v in all_res.items()}, field)
+         for field in self.float_fields:
+             if ignore_empty_classes:
+                 res[field] = np.mean([v[field] for v in all_res.values()
+                                       if v['IDTP'] + v['IDFN'] + v['IDFP'] > 0 + np.finfo('float').eps], axis=0)
+             else:
+                 res[field] = np.mean([v[field] for v in all_res.values()], axis=0)
+         return res
+
+     def combine_classes_det_averaged(self, all_res):
+         """
+         Combines metrics across all classes by averaging over the detection values
+
+         :param Dict[str, float] all_res: dictionary containing the metrics for each class
+         :return: dictionary containing the combined metrics averaged over detections
+         :rtype: Dict[str, float]
+         """
+         res = {}
+         for field in self.integer_fields:
+             res[field] = self._combine_sum(all_res, field)
+         res = self._compute_final_fields(res)
+         return res
+
+     def combine_sequences(self, all_res):
+         """
+         Combines metrics across all sequences
+
+         :param Dict[str, float] all_res: dictionary containing the metrics for each sequence
+         :return: dictionary containing the combined metrics across sequences
+         :rtype: Dict[str, float]
+         """
+         res = {}
+         for field in self.integer_fields:
+             res[field] = self._combine_sum(all_res, field)
+         res = self._compute_final_fields(res)
+         return res
+
+     @staticmethod
+     def _compute_final_fields(res):
+         """
+         Calculate sub-metric ('field') values which only depend on other sub-metric values.
+         This function is used both for per-sequence calculation and for combining values across sequences.
+
+         :param Dict[str, float] res: dictionary containing the sub-metric values
+         :return: dictionary containing the updated sub-metric values
+         :rtype: Dict[str, float]
+         """
+         res['IDR'] = res['IDTP'] / np.maximum(1.0, res['IDTP'] + res['IDFN'])
+         res['IDP'] = res['IDTP'] / np.maximum(1.0, res['IDTP'] + res['IDFP'])
+         res['IDF1'] = res['IDTP'] / np.maximum(1.0, res['IDTP'] + 0.5 * res['IDFP'] + 0.5 * res['IDFN'])
+         return res
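
Once the Hungarian assignment above fixes IDTP, IDFN, and IDFP, the final fields are simple ratios; a small sketch with hypothetical counts:

```python
# Hypothetical optimal-assignment outcome (illustration only).
idtp, idfn, idfp = 700, 300, 150

idr = idtp / max(1.0, idtp + idfn)                      # 0.700
idp = idtp / max(1.0, idtp + idfp)                      # ≈ 0.824
idf1 = idtp / max(1.0, idtp + 0.5 * idfn + 0.5 * idfp)  # ≈ 0.757
print(f"IDR={idr:.3f}, IDP={idp:.3f}, IDF1={idf1:.3f}")
```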
MTMC_Tracking_2025/eval/utils/trackeval/plotting.py ADDED
@@ -0,0 +1,322 @@
+ import os
+ import logging
+ import numpy as np
+ from .utils import TrackEvalException
+
+
+ def plot_compare_trackers(tracker_folder, tracker_list, cls, output_folder, plots_list=None):
+     """
+     Create plots which compare metrics across different trackers
+
+     :param str tracker_folder: root tracker folder
+     :param List[str] tracker_list: names of all trackers
+     :param str cls: name of the class to plot
+     :param str output_folder: root folder to save the plots in
+     :param List[str] plots_list: list of all plots to generate
+     :return: None
+     ::
+
+         plotting.plot_compare_trackers(tracker_folder, tracker_list, cls, output_folder, plots_list)
+     """
+     if plots_list is None:
+         plots_list = get_default_plots_list()
+
+     # Load data
+     data = load_multiple_tracker_summaries(tracker_folder, tracker_list, cls)
+     out_loc = os.path.join(output_folder, cls)
+
+     # Plot
+     print("\n")
+     for args in plots_list:
+         create_comparison_plot(data, out_loc, *args)
+
+
+ def get_default_plots_list():
+     """
+     Create an intermediate config which defines the type of plots.
+     Each entry lists, in order: y_label, x_label, sort_label, bg_label, bg_function.
+
+     :return: List[List[str]] plots_list: detailed description of the plots
+     ::
+
+         plots_list = plotting.get_default_plots_list()
+     """
+     plots_list = [
+         ['AssA', 'DetA', 'HOTA', 'HOTA', 'geometric_mean'],
+         ['AssPr', 'AssRe', 'HOTA', 'AssA', 'jaccard'],
+         ['DetPr', 'DetRe', 'HOTA', 'DetA', 'jaccard'],
+         ['HOTA(0)', 'LocA(0)', 'HOTA', 'HOTALocA(0)', 'multiplication'],
+         ['HOTA', 'LocA', 'HOTA', None, None],
+
+         ['HOTA', 'MOTA', 'HOTA', None, None],
+         ['HOTA', 'IDF1', 'HOTA', None, None],
+         ['IDF1', 'MOTA', 'HOTA', None, None],
+     ]
+     return plots_list
+
+
+ def load_multiple_tracker_summaries(tracker_folder, tracker_list, cls):
+     """
+     Loads summary data for multiple trackers
+
+     :param str tracker_folder: directory of the tracker folder
+     :param List[str] tracker_list: names of the trackers
+     :param str cls: name of the class
+     :return: Dict[str, Dict[str, float]] data: summarized data of the trackers
+     ::
+
+         data = plotting.load_multiple_tracker_summaries(tracker_folder, tracker_list, cls)
+     """
+     data = {}
+     for tracker in tracker_list:
+         with open(os.path.join(tracker_folder, tracker, cls + '_summary.txt')) as f:
+             keys = next(f).split(' ')
+             done = False
+             while not done:
+                 values = next(f).split(' ')
+                 if len(values) == len(keys):
+                     done = True
+             data[tracker] = dict(zip(keys, map(float, values)))
+     return data
+
+
+ def create_comparison_plot(data, out_loc, y_label, x_label, sort_label, bg_label=None, bg_function=None, settings=None):
+     """
+     Creates a scatter plot comparing multiple trackers between two metric fields, with one on the x-axis and the
+     other on the y-axis. Adds pareto optimal lines and (optionally) a background contour.
+
+     :param Dict data: dict of dicts such that data[tracker_name][metric_field_name] = float
+     :param str out_loc: directory where the plot is saved
+     :param str y_label: the metric_field_name to be plotted on the y-axis
+     :param str x_label: the metric_field_name to be plotted on the x-axis
+     :param str sort_label: the metric_field_name by which trackers are ordered and ranked
+     :param str bg_label: the metric_field_name by which (optional) background contours are plotted
+     :param str bg_function: the (optional) function bg_function(x, y) which converts the x_label / y_label values into bg_label
+     :param Dict[str] settings: dict of plot settings with keys:
+         'gap_val': gap between axis ticks and bg curves.
+         'num_to_plot': maximum number of trackers to plot
+     :return: None
+     ::
+
+         plotting.create_comparison_plot(data, out_loc, y_label, x_label, sort_label)
+     """
+
+     # Only loaded when run to reduce minimum requirements
+     from matplotlib import pyplot as plt
+
+     # Get plot settings
+     if settings is None:
+         gap_val = 2
+         num_to_plot = 20
+     else:
+         gap_val = settings['gap_val']
+         num_to_plot = settings['num_to_plot']
+
+     if (bg_label is None) != (bg_function is None):
+         raise TrackEvalException('bg_function and bg_label must either be both given or neither given.')
+
+     # Extract data
+     tracker_names = np.array(list(data.keys()))
+     sort_index = np.array([data[t][sort_label] for t in tracker_names]).argsort()[::-1]
+     x_values = np.array([data[t][x_label] for t in tracker_names])[sort_index][:num_to_plot]
+     y_values = np.array([data[t][y_label] for t in tracker_names])[sort_index][:num_to_plot]
+
+     # Print info on what is being plotted
+     tracker_names = tracker_names[sort_index][:num_to_plot]
+     logging.info('Plotting %s vs %s...' % (y_label, x_label))
+     # for i, name in enumerate(tracker_names):
+     #     print('%i: %s' % (i + 1, name))
+
+     # Find best fitting boundaries for data
+     boundaries = _get_boundaries(x_values, y_values, round_val=gap_val / 2)
+
+     fig = plt.figure()
+
+     # Plot background contour
+     if bg_function is not None:
+         _plot_bg_contour(bg_function, boundaries, gap_val)
+
+     # Plot pareto optimal lines
+     _plot_pareto_optimal_lines(x_values, y_values)
+
+     # Plot data points with number labels
+     labels = np.arange(len(y_values)) + 1
+     plt.plot(x_values, y_values, 'b.', markersize=15)
+     for xx, yy, l in zip(x_values, y_values, labels):
+         plt.text(xx, yy, str(l), color="red", fontsize=15)
+
+     # Add extra explanatory text to plots
+     plt.text(0, -0.11, 'label order:\nHOTA', horizontalalignment='left', verticalalignment='center',
+              transform=fig.axes[0].transAxes, color="red", fontsize=12)
+     if bg_label is not None:
+         plt.text(1, -0.11, 'curve values:\n' + bg_label, horizontalalignment='right', verticalalignment='center',
+                  transform=fig.axes[0].transAxes, color="grey", fontsize=12)
+
+     plt.xlabel(x_label, fontsize=15)
+     plt.ylabel(y_label, fontsize=15)
+     title = y_label + ' vs ' + x_label
+     if bg_label is not None:
+         title += ' (' + bg_label + ')'
+     plt.title(title, fontsize=17)
+     plt.xticks(np.arange(0, 100, gap_val))
+     plt.yticks(np.arange(0, 100, gap_val))
+     min_x, max_x, min_y, max_y = boundaries
+     plt.xlim(min_x, max_x)
+     plt.ylim(min_y, max_y)
+     plt.gca().set_aspect('equal', adjustable='box')
+     plt.tight_layout()
+
+     os.makedirs(out_loc, exist_ok=True)
+     filename = os.path.join(out_loc, title.replace(' ', '_'))
+     plt.savefig(filename + '.pdf', bbox_inches='tight', pad_inches=0.05)
+     plt.savefig(filename + '.png', bbox_inches='tight', pad_inches=0.05)
+
+
+ def _get_boundaries(x_values, y_values, round_val):
+     """
+     Computes boundaries of a plot
+
+     :param List[float] x_values: x values
+     :param List[float] y_values: y values
+     :param float round_val: interval to round the boundaries to
+     :return: float, float, float, float: boundaries of the plot (min_x, max_x, min_y, max_y)
+     ::
+
+         boundaries = plotting._get_boundaries(x_values, y_values, round_val)
+     """
+     x1 = np.min(np.floor((x_values - 0.5) / round_val) * round_val)
+     x2 = np.max(np.ceil((x_values + 0.5) / round_val) * round_val)
+     y1 = np.min(np.floor((y_values - 0.5) / round_val) * round_val)
+     y2 = np.max(np.ceil((y_values + 0.5) / round_val) * round_val)
+     x_range = x2 - x1
+     y_range = y2 - y1
+     max_range = max(x_range, y_range)
+     x_center = (x1 + x2) / 2
+     y_center = (y1 + y2) / 2
+     min_x = max(x_center - max_range / 2, 0)
+     max_x = min(x_center + max_range / 2, 100)
+     min_y = max(y_center - max_range / 2, 0)
+     max_y = min(y_center + max_range / 2, 100)
+     return min_x, max_x, min_y, max_y
+
+
+ def geometric_mean(x, y):
+     """
+     Computes the geometric mean
+
+     :param float x: x values
+     :param float y: y values
+     :return: float: geometric mean value
+     ::
+
+         plotting.geometric_mean(x_values, y_values)
+     """
+     return np.sqrt(x * y)
+
+
+ def jaccard(x, y):
+     """
+     Computes the Jaccard index of two percentage values
+
+     :param float x: x values
+     :param float y: y values
+     :return: float: Jaccard index value
+     ::
+
+         plotting.jaccard(x_values, y_values)
+     """
+     x = x / 100
+     y = y / 100
+     return 100 * (x * y) / (x + y - x * y)
+
+
+ def multiplication(x, y):
+     """
+     Computes multiplication for plots
+
+     :param float x: x values
+     :param float y: y values
+     :return: float: multiplied value
+     ::
+
+         plotting.multiplication(x_values, y_values)
+     """
+     return x * y / 100
+
+
+ bg_function_dict = {
+     "geometric_mean": geometric_mean,
+     "jaccard": jaccard,
+     "multiplication": multiplication,
+ }
+
+
+ def _plot_bg_contour(bg_function, plot_boundaries, gap_val):
+     """
+     Plot background contour
+
+     :param str bg_function: name of the background contour function (a key of bg_function_dict)
+     :param List[float] plot_boundaries: limit values for the plot
+     :param int gap_val: interval value between contour levels
+     :return: None
+     ::
+
+         plotting._plot_bg_contour(bg_function, plot_boundaries, gap_val)
+     """
+     # Only loaded when run to reduce minimum requirements
+     from matplotlib import pyplot as plt
+
+     # Plot background contour
+     min_x, max_x, min_y, max_y = plot_boundaries
+     x = np.arange(min_x, max_x, 0.1)
+     y = np.arange(min_y, max_y, 0.1)
+     x_grid, y_grid = np.meshgrid(x, y)
+     if bg_function in bg_function_dict.keys():
+         z_grid = bg_function_dict[bg_function](x_grid, y_grid)
+     else:
+         raise TrackEvalException("background plotting function '%s' is not defined." % bg_function)
+     levels = np.arange(0, 100, gap_val)
+     con = plt.contour(x_grid, y_grid, z_grid, levels, colors='grey')
+
+     def bg_format(val):
+         s = '{:1f}'.format(val)
+         return '{:.0f}'.format(val) if s[-1] == '0' else s
+
+     con.levels = [bg_format(val) for val in con.levels]
+     plt.clabel(con, con.levels, inline=True, fmt='%r', fontsize=8)
+
+
+ def _plot_pareto_optimal_lines(x_values, y_values):
+     """
+     Plot pareto optimal lines
+
+     :param List[float] x_values: values to plot on x axis
+     :param List[float] y_values: values to plot on y axis
+     :return: None
+     ::
+
+         plotting._plot_pareto_optimal_lines(x_values, y_values)
+     """
+
+     # Only loaded when run to reduce minimum requirements
+     from matplotlib import pyplot as plt
+
+     # Plot pareto optimal lines
+     cxs = x_values
+     cys = y_values
+     best_y = np.argmax(cys)
+     x_pareto = [0, cxs[best_y]]
+     y_pareto = [cys[best_y], cys[best_y]]
+     t = 2
+     remaining = cxs > x_pareto[t - 1]
+     cys = cys[remaining]
+     cxs = cxs[remaining]
+     while len(cxs) > 0 and len(cys) > 0:
+         best_y = np.argmax(cys)
+         x_pareto += [x_pareto[t - 1], cxs[best_y]]
+         y_pareto += [cys[best_y], cys[best_y]]
+         t += 2
+         remaining = cxs > x_pareto[t - 1]
+         cys = cys[remaining]
+         cxs = cxs[remaining]
+     x_pareto.append(x_pareto[t - 1])
+     y_pareto.append(0)
+     plt.plot(np.array(x_pareto), np.array(y_pareto), '--r')
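
As a quick sanity check of the background contour functions above, the `jaccard` curve reproduces DetA from DetPr/DetRe pairs (hypothetical values):

```python
def jaccard(x, y):
    x, y = x / 100, y / 100
    return 100 * (x * y) / (x + y - x * y)

# With DetPr = DetRe = 80, the contour value is ≈ 66.7, which matches
# DetA = TP / (TP + FN + FP) when precision and recall are equal.
print(round(jaccard(80.0, 80.0), 1))  # 66.7
```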
MTMC_Tracking_2025/eval/utils/trackeval/trackeval_utils.py ADDED
@@ -0,0 +1,316 @@
+ import os
+ import logging
+ import numpy as np
+ from tabulate import tabulate
+ import utils.trackeval as trackeval
+ from typing import List, Dict, Set, Tuple, Any
+
+ from utils.io_utils import make_dir, validate_file_path, load_json_from_file
+ from utils.classes import CLASS_LIST
+
+ logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%y/%m/%d %H:%M:%S", level=logging.INFO)
+
+
+ def prepare_ground_truth_file(input_file_path: str, output_file_path: str, fps: int) -> None:
+     """
+     Converts the ground truth file into a MOT (Multiple Object Tracking) format for evaluation.
+
+     :param str input_file_path: The path to the input ground truth file.
+     :param str output_file_path: The path where the output MOT file will be saved.
+     :param int fps: The frame rate (FPS) of the videos (currently unused).
+     :return: None
+     :rtype: None
+     ::
+
+         prepare_ground_truth_file(input_file_path, output_file_path, fps)
+     """
+
+     output_file = open(output_file_path, "w")
+
+     with open(input_file_path) as f:
+         for line_number, line in enumerate(f):
+
+             line = line.split(" ")
+             object_id = int(line[2])
+             frame_id = int(line[3]) + 1
+             x = float(line[4])
+             y = float(line[5])
+             z = float(line[6])
+             width = float(line[7])
+             length = float(line[8])
+             height = float(line[9])
+             yaw = float(line[10])
+             pitch = 0
+             roll = 0
+
+             result_str = (
+                 f"{frame_id} {object_id} 1 "
+                 f"{x:.5f} {y:.5f} {z:.5f} "
+                 f"{width:.5f} {length:.5f} {height:.5f} {pitch:.5f} {roll:.5f} {yaw:.5f}\n"
+             )
+             output_file.write(result_str)
+
+     output_file.close()
+
+
+ def prepare_prediction_file(input_file_path: str, output_file_path: str, fps: float) -> None:
+     """
+     Converts the prediction file into a MOT (Multiple Object Tracking) format for evaluation.
+
+     :param str input_file_path: The path to the input prediction file.
+     :param str output_file_path: The path where the output MOT file will be saved.
+     :param float fps: The frame rate (FPS) of the videos (currently unused).
+     :return: None
+     :rtype: None
+     ::
+
+         prepare_prediction_file(input_file_path, output_file_path, fps)
+     """
+
+     output_file = open(output_file_path, "w")
+     with open(input_file_path) as f:
+         for line_number, line in enumerate(f):
+
+             line = line.split(" ")
+
+             object_id = int(line[2])
+             frame_id = int(line[3]) + 1
+             x = float(line[4])
+             y = float(line[5])
+             z = float(line[6])
+             width = float(line[7])
+             length = float(line[8])
+             height = float(line[9])
+             yaw = float(line[10])
+             pitch = 0
+             roll = 0
+             result_str = (
+                 f"{frame_id} {object_id} 1 "
+                 f"{x:.5f} {y:.5f} {z:.5f} "
+                 f"{width:.5f} {length:.5f} {height:.5f} {pitch:.5f} {roll:.5f} {yaw:.5f}\n"
+             )
+             output_file.write(result_str)
+     output_file.close()
+
+
+ def make_seq_maps_file(seq_maps_dir_path: str, sensor_ids: List[str], benchmark: str, split_to_eval: str) -> None:
+     """
+     Creates a sequence-maps file used by the TrackEval library.
+
+     :param str seq_maps_dir_path: The directory path where the sequence-maps file will be saved.
+     :param List[str] sensor_ids: A list of sensor IDs to include in the sequence-maps file.
+     :param str benchmark: The name of the benchmark.
+     :param str split_to_eval: The name of the split for evaluation.
+     :return: None
+     :rtype: None
+     ::
+
+         make_seq_maps_file(seq_maps_dir_path, sensor_ids, benchmark, split_to_eval)
+     """
+     make_dir(seq_maps_dir_path)
+     seq_maps_file_name = benchmark + "-" + split_to_eval + ".txt"
+     seq_maps_file_path = os.path.join(seq_maps_dir_path, seq_maps_file_name)
+     f = open(seq_maps_file_path, "w")
+     f.write("name\n")
+
+     for sensor_id in sensor_ids:
+         f.write(sensor_id + "\n")
+     f.close()
+
+
+ def setup_evaluation_configs(results_dir_path: str, eval_type: str, num_cores: int) -> Tuple[Dict[str, Any], Dict[str, Any]]:
+     """
+     Sets up the evaluation configurations for TrackEval.
+
+     :param str results_dir_path: The path to the folder that stores the results.
+     :param str eval_type: The type of evaluation to perform ("bbox" or "location").
+     :param int num_cores: The number of parallel cores used by the evaluator.
+     :return: A tuple containing the dataset configuration and evaluation configuration.
+     :rtype: Tuple[Dict[str, Any], Dict[str, Any]]
+     ::
+
+         dataset_config, eval_config = setup_evaluation_configs(results_dir_path, eval_type, num_cores)
+     """
+     eval_config = trackeval.eval.Evaluator.get_default_eval_config()
+     eval_config["PRINT_CONFIG"] = False
+     eval_config["USE_PARALLEL"] = True
+     eval_config["NUM_PARALLEL_CORES"] = num_cores
+
+     # Create dataset configs for TrackEval library
+     if eval_type == "bbox":
+         dataset_config = trackeval.datasets.MTMCChallenge3DBBox.get_default_dataset_config()
+     elif eval_type == "location":
+         dataset_config = trackeval.datasets.MTMCChallenge3DLocation.get_default_dataset_config()
+     dataset_config["DO_PREPROC"] = False
+     dataset_config["SPLIT_TO_EVAL"] = "all"
+     evaluation_dir_path = os.path.join(results_dir_path, "evaluation")
+     make_dir(evaluation_dir_path)
+     dataset_config["GT_FOLDER"] = os.path.join(evaluation_dir_path, "gt")
+     dataset_config["TRACKERS_FOLDER"] = os.path.join(evaluation_dir_path, "scores")
+     dataset_config["PRINT_CONFIG"] = False
+
+     return dataset_config, eval_config
+
+
+ def make_seq_ini_file(gt_dir: str, camera: str, seq_length: int) -> None:
+     """
+     Creates a sequence-ini file used by the TrackEval library.
+
+     :param str gt_dir: The directory path where the sequence-ini file will be saved.
+     :param str camera: The name of a single sensor.
+     :param int seq_length: The number of frames in the sequence.
+     :return: None
+     :rtype: None
+     ::
+
+         make_seq_ini_file(gt_dir, camera, seq_length)
+     """
+     ini_file_name = gt_dir + "/seqinfo.ini"
+     f = open(ini_file_name, "w")
+     f.write("[Sequence]\n")
+     name = "name=" + str(camera) + "\n"
+     f.write(name)
+     f.write("imDir=img1\n")
+     f.write("frameRate=30\n")
+     seq = "seqLength=" + str(seq_length) + "\n"
+     f.write(seq)
+     f.write("imWidth=1920\n")
+     f.write("imHeight=1080\n")
+     f.write("imExt=.jpg\n")
+     f.close()
+
+
+ def prepare_evaluation_folder(dataset_config: Dict[str, Any], input_file_type: str) -> Tuple[str, str]:
+     """
+     Prepares the evaluation folder structure required for TrackEval.
+
+     :param Dict[str, Any] dataset_config: The dataset configuration dictionary.
+     :param str input_file_type: The name used for the single pseudo-sensor/sequence (e.g., "MTMC").
+     :return: A tuple containing the prediction file path and ground truth file path.
+     :rtype: Tuple[str, str]
+     ::
+
+         pred_file_path, gt_file_path = prepare_evaluation_folder(dataset_config, input_file_type)
+     """
+     # Create evaluation configs for TrackEval library
+     sensor_ids: Set[str] = set()
+     sensor_ids.add(input_file_type)
+     sensor_ids = sorted(list(sensor_ids))
+
+     # Create sequence maps file for evaluation
+     seq_maps_dir_path = os.path.join(dataset_config["GT_FOLDER"], "seqmaps")
+     make_seq_maps_file(seq_maps_dir_path, sensor_ids, dataset_config["BENCHMARK"], dataset_config["SPLIT_TO_EVAL"])
+
+     # Create ground truth directory
+     mot_version = dataset_config["BENCHMARK"] + "-" + dataset_config["SPLIT_TO_EVAL"]
+     gt_root_dir_path = os.path.join(dataset_config["GT_FOLDER"], mot_version)
+     gt_dir_path = os.path.join(gt_root_dir_path, input_file_type)
+     make_dir(gt_dir_path)
+     gt_output_dir_path = os.path.join(gt_dir_path, "gt")
+     make_dir(gt_output_dir_path)
+     gt_file_path = os.path.join(gt_output_dir_path, "gt.txt")
+
+     # Generate sequence file required for TrackEval library
+     make_seq_ini_file(gt_dir_path, camera=input_file_type, seq_length=20000)
+
+     # Create prediction directory
+     pred_dir_path = os.path.join(dataset_config["TRACKERS_FOLDER"], mot_version, "data", "data")
+     make_dir(pred_dir_path)
+     pred_file_path = os.path.join(pred_dir_path, f"{input_file_type}.txt")
+
+     return pred_file_path, gt_file_path
+
+
+ def run_evaluation(gt_file, prediction_file, fps, app_config, dataset_config, eval_config, eval_type):
+     """
+     Executes the evaluation process using TrackEval based on the provided configurations.
+
+     :param str gt_file: The ground truth file path (currently unused; the location is taken from dataset_config).
+     :param str prediction_file: The prediction file path (currently unused; the location is taken from dataset_config).
+     :param float fps: The frames per second rate (currently unused).
+     :param Any app_config: The application configuration object (currently unused).
+     :param Dict[str, Any] dataset_config: The dataset configuration dictionary.
+     :param Dict[str, Any] eval_config: The evaluation configuration dictionary.
+     :param str eval_type: The type of evaluation to perform ("bbox" or "location").
+     :return: The evaluation results.
+     :rtype: Any
+     ::
+
+         results = run_evaluation(gt_file, prediction_file, fps, app_config, dataset_config, eval_config, eval_type)
+     """
+
+     # Define the metrics to calculate
+     metrics_config = {"METRICS": ["HOTA"]}
+     metrics_config["PRINT_CONFIG"] = False
+     config = {**eval_config, **dataset_config, **metrics_config}  # Merge configs
+     eval_config = {k: v for k, v in config.items() if k in eval_config.keys()}
+     dataset_config = {k: v for k, v in config.items() if k in dataset_config.keys()}
+     metrics_config = {k: v for k, v in config.items() if k in metrics_config.keys()}
+
+     # Run the Evaluator
+     evaluator = trackeval.eval.Evaluator(eval_config)
+     if eval_type == "bbox":
+         dataset_list = [trackeval.datasets.MTMCChallenge3DBBox(dataset_config)]
+     elif eval_type == "location":
+         dataset_list = [trackeval.datasets.MTMCChallenge3DLocation(dataset_config)]
+
+     metrics_list: List[Any] = list()
+     for metric in [trackeval.metrics.HOTA, trackeval.metrics.CLEAR, trackeval.metrics.Identity]:
+         if metric.get_name() in metrics_config["METRICS"]:
+             metrics_list.append(metric(metrics_config))
+     if len(metrics_list) == 0:
+         raise Exception("No metric selected for evaluation.")
+     results = evaluator.evaluate(dataset_list, metrics_list)
+
+     return results
+
+
+ def _evaluate_tracking_for_all_BEV_sensors(ground_truth_file: str, prediction_file: str, output_directory, num_cores, fps):
+     """
+     Evaluates tracking performance for all BEV sensors, one object class at a time.
+
+     :param str ground_truth_file: The path to the ground truth file (overridden per class below).
+     :param str prediction_file: The path to the prediction file (overridden per class below).
+     :param str output_directory: The directory that contains one folder per class and stores the output files.
+     :param int num_cores: The number of parallel cores used by the evaluator.
+     :param float fps: The frame rate (FPS) of the videos.
+     :return: The evaluation results per class.
+     :rtype: Dict[str, Any]
+     ::
+
+         all_results = _evaluate_tracking_for_all_BEV_sensors(ground_truth_file, prediction_file, output_directory, num_cores, fps)
+     """
+
+     print("")
+     all_results = {}
+     for class_name in CLASS_LIST:
+         class_dir = os.path.join(output_directory, class_name)
+
+         if not os.path.isdir(class_dir):
+             logging.warning(f"Skipping class folder '{class_name}' as it was not found.")
+             print("--------------------------------")
+             continue
+
+         logging.info(f"Evaluating all BEV sensors on class {class_name}.")
+
+         ground_truth_file = os.path.join(class_dir, "gt.txt")
+         prediction_file = os.path.join(class_dir, "pred.txt")
+         output_dir = os.path.join(class_dir, "output")
+
+         if not os.path.exists(ground_truth_file) or not os.path.exists(prediction_file):
+             logging.info(f"Skipping class '{class_name}' as its gt.txt or pred.txt was not found.")
+             print("--------------------------------")
+             continue
+
+         # Setup evaluation library & folders
+         dataset_config, eval_config = setup_evaluation_configs(output_directory, "bbox", num_cores)
+         output_pred_file_name, output_gt_file_name = prepare_evaluation_folder(dataset_config, "MTMC")
+         logging.info("Completed setup for evaluation library.")
+
+         # Prepare ground truth
+         prepare_ground_truth_file(ground_truth_file, output_gt_file_name, fps)
+         logging.info(f"Completed parsing ground-truth file {ground_truth_file}.")
+
+         # Prepare prediction results
+         prepare_prediction_file(prediction_file, output_pred_file_name, fps)
+         logging.info(f"Completed parsing prediction file {prediction_file}.")
+
+         # Run evaluation
+         results = run_evaluation(output_gt_file_name, output_pred_file_name, fps, None, dataset_config, eval_config, "bbox")
+         all_results[class_name] = results
+         print("--------------------------------------------------------------")
+     return all_results
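
For orientation, both converters above assume the same space-separated input layout and only read tokens 2 through 10. A sketch of one line's round trip; the first two tokens are presumably class and scene identifiers, since the converters skip them (values are hypothetical):

```python
# Hypothetical input line: <class> <scene> <object_id> <frame_id> x y z w l h yaw
line = "Person Warehouse_000 7 41 1.2 -3.4 0.9 0.6 0.5 1.8 1.57".split(" ")

object_id, frame_id = int(line[2]), int(line[3]) + 1  # frames become 1-based
x, y, z = (float(v) for v in line[4:7])
width, length, height, yaw = (float(v) for v in line[7:11])
pitch = roll = 0

print(f"{frame_id} {object_id} 1 {x:.5f} {y:.5f} {z:.5f} "
      f"{width:.5f} {length:.5f} {height:.5f} {pitch:.5f} {roll:.5f} {yaw:.5f}")
# 42 7 1 1.20000 -3.40000 0.90000 0.60000 0.50000 1.80000 0.00000 0.00000 1.57000
```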
MTMC_Tracking_2025/eval/utils/trackeval/utils.py ADDED
@@ -0,0 +1,204 @@
+ import os
+ import csv
+ import argparse
+ from collections import OrderedDict
+
+
+ def init_config(config, default_config, name=None):
+     """
+     Initialise non-given config values with defaults
+
+     :param Dict config: config
+     :param Dict default_config: default config
+     :param str name: name of dataset/metric
+     :return: the merged config
+     ::
+
+         config = trackeval.utils.init_config(config, default_config, name)
+     """
+     if config is None:
+         config = default_config
+     else:
+         for k in default_config.keys():
+             if k not in config.keys():
+                 config[k] = default_config[k]
+     if name and config['PRINT_CONFIG']:
+         print('\n%s Config:' % name)
+         for c in config.keys():
+             print('%-20s : %-30s' % (c, config[c]))
+     return config
+
+
+ def update_config(config):
+     """
+     Parse the arguments of a script and update the config values for a given key if specified in the arguments.
+
+     :param Dict config: the config to update
+     :return: the updated config
+     ::
+
+         config = trackeval.utils.update_config(config)
+     """
+     parser = argparse.ArgumentParser()
+     for setting in config.keys():
+         if type(config[setting]) == list or type(config[setting]) == type(None):
+             parser.add_argument("--" + setting, nargs='+')
+         else:
+             parser.add_argument("--" + setting)
+     args = parser.parse_args().__dict__
+     for setting in args.keys():
+         if args[setting] is not None:
+             if type(config[setting]) == type(True):
+                 if args[setting] == 'True':
+                     x = True
+                 elif args[setting] == 'False':
+                     x = False
+                 else:
+                     raise Exception('Command line parameter ' + setting + ' must be True or False')
+             elif type(config[setting]) == type(1):
+                 x = int(args[setting])
+             elif type(args[setting]) == type(None):
+                 x = None
+             else:
+                 x = args[setting]
+             config[setting] = x
+     return config
+
+
+ def get_code_path():
+     """
+     Get base path where the trackeval library is located
+
+     :return: str: base path of the trackeval library
+     ::
+
+         code_path = trackeval.utils.get_code_path()
+     """
+     return os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
+
+
+ def validate_metrics_list(metrics_list):
+     """
+     Gets the names of the metric classes and ensures they are unique; further checks that the fields within each
+     metric class do not have overlapping names.
+
+     :param List metrics_list: list of metric objects to validate
+     :return: List[str] metric_names: validated list of metric names
+     ::
+
+         metric_names = trackeval.utils.validate_metrics_list(metrics_list)
+     """
+     metric_names = [metric.get_name() for metric in metrics_list]
+     # check metric names are unique
+     if len(metric_names) != len(set(metric_names)):
+         raise TrackEvalException('Code being run with multiple metrics of the same name')
+     fields = []
+     for m in metrics_list:
+         fields += m.fields
+     # check metric fields are unique
+     if len(fields) != len(set(fields)):
+         raise TrackEvalException('Code being run with multiple metrics with fields of the same name')
+     return metric_names
+
+
+ def write_summary_results(summaries, cls, output_folder):
+     """
+     Write summary results to file
+
+     :param List[Dict] summaries: list of all summaries
+     :param str cls: name of the class
+     :param str output_folder: directory to store the summary results
+     :return: None
+     ::
+
+         trackeval.utils.write_summary_results(summaries, cls, output_folder)
+     """
+     fields = sum([list(s.keys()) for s in summaries], [])
+     values = sum([list(s.values()) for s in summaries], [])
+
+     # In order to remain consistent upon new fields being added, for each of the following fields if they are present
+     # they will be output in the summary first in the order below. Any further fields will be output in the order each
+     # metric family is called, and within each family either in the order they were added to the dict (python >= 3.6)
+     # or randomly (python < 3.6).
+     default_order = ['HOTA', 'DetA', 'AssA', 'DetRe', 'DetPr', 'AssRe', 'AssPr', 'LocA', 'OWTA', 'HOTA(0)', 'LocA(0)',
+                      'HOTALocA(0)', 'MOTA', 'MOTP', 'MODA', 'CLR_Re', 'CLR_Pr', 'MTR', 'PTR', 'MLR', 'CLR_TP', 'CLR_FN',
+                      'CLR_FP', 'IDSW', 'MT', 'PT', 'ML', 'Frag', 'sMOTA', 'IDF1', 'IDR', 'IDP', 'IDTP', 'IDFN', 'IDFP',
+                      'Dets', 'GT_Dets', 'IDs', 'GT_IDs']
+     default_ordered_dict = OrderedDict(zip(default_order, [None for _ in default_order]))
+     for f, v in zip(fields, values):
+         default_ordered_dict[f] = v
+     for df in default_order:
+         if default_ordered_dict[df] is None:
+             del default_ordered_dict[df]
+     fields = list(default_ordered_dict.keys())
+     values = list(default_ordered_dict.values())
+
+     out_file = os.path.join(output_folder, cls + '_summary.txt')
+     os.makedirs(os.path.dirname(out_file), exist_ok=True)
+     with open(out_file, 'w', newline='') as f:
+         writer = csv.writer(f, delimiter=' ')
+         writer.writerow(fields)
+         writer.writerow(values)
+
+
+ def write_detailed_results(details, cls, output_folder):
+     """
+     Write detailed results to file
+
+     :param List[Dict] details: list of detailed results, one per metric
+     :param str cls: name of the class
+     :param str output_folder: directory to store the detailed results
+     :return: None
+     ::
+
+         trackeval.utils.write_detailed_results(details, cls, output_folder)
+     """
+     sequences = details[0].keys()
+     fields = ['seq'] + sum([list(s['COMBINED_SEQ'].keys()) for s in details], [])
+     out_file = os.path.join(output_folder, cls + '_detailed.csv')
+     os.makedirs(os.path.dirname(out_file), exist_ok=True)
+     with open(out_file, 'w', newline='') as f:
+         writer = csv.writer(f)
+         writer.writerow(fields)
+         for seq in sorted(sequences):
+             if seq == 'COMBINED_SEQ':
+                 continue
+             writer.writerow([seq] + sum([list(s[seq].values()) for s in details], []))
+         writer.writerow(['COMBINED'] + sum([list(s['COMBINED_SEQ'].values()) for s in details], []))
+
+
+ def load_detail(file):
+     """
+     Loads detailed data for a tracker.
+
+     :param str file: file to load the detailed results from
+     :return: Dict[str, Dict[str, float]] data: detailed results keyed by sequence
+     ::
+
+         data = trackeval.utils.load_detail(file)
+     """
+     data = {}
+     with open(file) as f:
+         for i, row_text in enumerate(f):
+             row = row_text.replace('\r', '').replace('\n', '').split(',')
+             if i == 0:
+                 keys = row[1:]
+                 continue
+             current_values = row[1:]
+             seq = row[0]
+             if seq == 'COMBINED':
+                 seq = 'COMBINED_SEQ'
+             if (len(current_values) == len(keys)) and seq != '':
+                 data[seq] = {}
+                 for key, value in zip(keys, current_values):
+                     data[seq][key] = float(value)
+     return data
+
+
+ class TrackEvalException(Exception):
+     """Custom exception for catching expected errors."""
+     ...
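
A minimal usage sketch of `init_config` above (hypothetical config values; with `PRINT_CONFIG` False nothing is printed):

```python
default_config = {'THRESHOLD': 0.5, 'PRINT_CONFIG': False}
user_config = {'THRESHOLD': 0.75}

# Unspecified keys fall back to the defaults; given keys win.
merged = init_config(user_config, default_config, name='Identity')
print(merged)  # {'THRESHOLD': 0.75, 'PRINT_CONFIG': False}
```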