# YOLO: Official Implementation of YOLOv9, YOLOv7



[Real-Time Object Detection on COCO (Papers with Code)](https://paperswithcode.com/sota/real-time-object-detection-on-coco)
[Hugging Face Demo](https://huggingface.co/spaces/henry000/YOLO)

Welcome to the official implementation of YOLOv7 and YOLOv9. This repository contains the complete codebase, pre-trained models, and detailed instructions for training and deploying YOLOv9.
## TL;DR
- This is the official YOLO model implementation with an MIT License.
- For quick deployment, you can install directly via pip+git:
```shell
pip install git+https://github.com/WongKinYiu/YOLO.git
yolo task.data.source=0 # source can be a single file, a video, an image folder, or a webcam ID
```
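The same Hydra-style overrides shown in the sections below also work with the pip-installed `yolo` entry point. A quick sketch (the file paths here are placeholders, not assets shipped with the package):
```shell
# Detect objects in a local video file with a specific model and device
yolo task=inference task.data.source=./my_video.mp4 model=v9-c device=cuda

# Run on a folder of images with a lower NMS confidence threshold
yolo task=inference task.data.source=./my_images/ task.nms.min_confidence=0.1
```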
## Introduction
- [**YOLOv9**: Learning What You Want to Learn Using Programmable Gradient Information](https://arxiv.org/abs/2402.13616)
- [**YOLOv7**: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors](https://arxiv.org/abs/2207.02696)
## Installation
To get started with YOLOv9's developer mode, we recommend cloning this repository and installing the required dependencies:
```shell
git clone git@github.com:WongKinYiu/YOLO.git
cd YOLO
pip install -r requirements.txt
```
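Once the dependencies are installed, a quick way to confirm the setup works is to run the default inference task against the toy images referenced later in this README (assuming `data/toy/images/train` is present in your checkout):
```shell
# Smoke test: small model, CPU only, bundled toy images
python yolo/lazy.py task=inference model=v9-s device=cpu task.data.source=data/toy/images/train
```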
## Features
| Tools         | pip 🐍 | HuggingFace 🤗 | Docker 🐳 |
| ------------- | :----: | :------------: | :-------: |
| Compatibility | ✅     | ✅             | 🧪        |

| Phase     | Training | Validation | Inference |
| --------- | :------: | :--------: | :-------: |
| Supported | ✅       | ✅         | ✅        |

| Device   | CUDA  | CPU   | MPS   |
| -------- | :---: | :---: | :---: |
| PyTorch  | v1.12 | v2.3+ | v1.12 |
| ONNX     | ✅    | ✅    | -     |
| TensorRT | ✅    | -     | -     |
| OpenVINO | -     | 🧪    | ❌    |

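The devices in the last table are selected with the `device` override used throughout this README. For example (a sketch, assuming the corresponding hardware is available):
```shell
# PyTorch backend on CUDA, CPU, or Apple Silicon (MPS)
python yolo/lazy.py task=inference device=cuda
python yolo/lazy.py task=inference device=cpu
python yolo/lazy.py task=inference device=mps
```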
## Task
These are simple examples. For more customization details, please refer to the [Notebooks](examples) and, for lower-level modifications, the **[HOWTO](docs/HOWTO.md)** guide.
### Training
To train YOLO on your machine/dataset:
1. Modify the dataset configuration file `yolo/config/dataset/**.yaml` to point to your dataset (a sketch of such a file follows the commands below).
2. Run the training script:
```shell
python yolo/lazy.py task=train dataset=** use_wandb=True
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c weight=False # or more args
```
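For step 1, the dataset configuration is a small YAML file under `yolo/config/dataset/`. Below is a rough sketch written from the shell; the field names are assumptions for illustration, so compare against an existing config (for example the toy dataset) for the exact schema:
```shell
# Hypothetical dataset config -- verify field names against an existing file
# in yolo/config/dataset/ before training.
cat > yolo/config/dataset/my_dataset.yaml <<'EOF'
path: data/my_dataset              # dataset root directory
train: train                       # training split
validation: val                    # validation split
class_num: 3                       # number of classes
class_list: ["cat", "dog", "bird"] # class names
EOF

# Then train against it by name:
python yolo/lazy.py task=train dataset=my_dataset model=v9-c task.data.batch_size=8
```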
### Transfer Learning
To perform transfer learning with YOLOv9:
```shell
python yolo/lazy.py task=train task.data.batch_size=8 model=v9-c dataset={dataset_config} device={cpu, mps, cuda}
```
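As a concrete example, the command below fine-tunes `v9-c` on the bundled toy dataset on a single GPU; the run name is arbitrary, and `weight` is left at its default (which, judging from the `weight=False` example above, loads pretrained weights):
```shell
# Fine-tune v9-c on the toy dataset with a small batch size
python yolo/lazy.py task=train model=v9-c dataset=toy device=cuda \
    task.data.batch_size=4 name=finetune-toy
```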
### Inference
To run object detection with a trained model:
```shell
# If cloned from GitHub (the default task is inference):
python yolo/lazy.py
# name: run name; device: cuda, cpu, or mps; model: v9-c, v9-m, v9-s
# task.nms.min_confidence: NMS confidence threshold; task.fast_inference: onnx, trt, or deploy
# task.data.source: file, directory, or webcam; +quite: quiet output
python yolo/lazy.py \
    task=inference \
    name=AnyNameYouWant \
    device=cpu \
    model=v9-s \
    task.nms.min_confidence=0.1 \
    task.fast_inference=onnx \
    task.data.source=data/toy/images/train \
    +quite=True

# If installed via pip:
yolo task.data.source={Any Source}
yolo task=inference task.data.source={Any}
```
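To pair inference with the accelerated backends from the Features table (a sketch, assuming ONNX Runtime or TensorRT is installed for the chosen device):
```shell
# ONNX Runtime on CPU
python yolo/lazy.py task=inference task.fast_inference=onnx device=cpu model=v9-s \
    task.data.source=data/toy/images/train

# TensorRT on an NVIDIA GPU, reading from a webcam
python yolo/lazy.py task=inference task.fast_inference=trt device=cuda model=v9-c \
    task.data.source=0
```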
### Validation
To validate model performance or to generate a JSON file in COCO format:
```shell
python yolo/lazy.py task=validation
python yolo/lazy.py task=validation dataset=toy
```
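The model, dataset, and device can be pinned the same way as for the other tasks, for example (a sketch using the overrides documented above):
```shell
# Validate a specific variant on the toy dataset using the GPU
python yolo/lazy.py task=validation model=v9-s dataset=toy device=cuda
```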
## Contributing
Contributions to the YOLO project are welcome! See [CONTRIBUTING](docs/CONTRIBUTING.md) for guidelines on how to contribute.
### TODO Diagrams
```mermaid
flowchart TB
subgraph Features
Taskv7-->Segmentation["#35 Segmentation"]
Taskv7-->Classification["#34 Classification"]
Taskv9-->Segmentation
Taskv9-->Classification
Trainv7
end
subgraph Model
MODELv7-->v7-X
MODELv7-->v7-E6
MODELv7-->v7-E6E
MODELv9-->v9-T
MODELv9-->v9-S
MODELv9-->v9-E
end
subgraph Bugs
Fix-->Fix1["#12 mAP > 1"]
Fix-->Fix2["v9 Gradient Bump"]
Reply-->Reply1["#39"]
Reply-->Reply2["#36"]
end
```
## Star History
[Star History Chart](https://star-history.com/#WongKinYiu/YOLO&Date)
## Citations
```
@misc{wang2022yolov7,
      title={YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors},
      author={Chien-Yao Wang and Alexey Bochkovskiy and Hong-Yuan Mark Liao},
      year={2022},
      eprint={2207.02696},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
@misc{wang2024yolov9,
      title={YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information},
      author={Chien-Yao Wang and I-Hau Yeh and Hong-Yuan Mark Liao},
      year={2024},
      eprint={2402.13616},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```