---
title: Mars Vision Leaderboard
emoji: πŸš€
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: "4.19.2"
app_file: run.py
pinned: false
---

# Mars Vision Leaderboard

A comprehensive leaderboard for evaluating computer vision models on Mars-specific datasets. It tracks performance across three tasks: classification, object detection, and segmentation.

## Overview

The leaderboard provides a standardized evaluation framework for comparing computer vision models on Mars-specific datasets, grouped by task.

## Tasks

1. **Classification**
   - DoMars16k - Surface Types
   - Mars Image - Content Analysis
   - Deep Mars - Deep Learning
   - Dusty vs Non-dusty - Dust Analysis

2. **Object Detection**
   - Robins & Hynek - Craters
   - Lagain - Surface Features
   - SPOC - Surface Properties
   - AI4MARS - Surface Analysis
   - MarsData - General Surface

3. **Segmentation**
   - S5Mars - Surface
   - Mars-Seg - Features
   - Martian Landslide
   - Martian Frost

## Getting Started

1. Clone the repository:
   ```bash
   git clone https://huggingface.co/spaces/gremlin97/mars-vision-leaderboard
   cd mars-vision-leaderboard
   ```

2. Install dependencies using Poetry:
   ```bash
   poetry install
   ```

3. Run the leaderboard:
   ```bash
   # From the project root directory
   poetry run python run.py
   ```

The leaderboard will be accessible at `http://localhost:7860` when running locally.
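
For reference, a minimal `run.py` for a Gradio Space might look like the sketch below; it assumes `app/app.py` exposes a Gradio `demo` object, which is an assumption about this repository rather than a confirmed detail.

```python
# Hypothetical run.py: import the Gradio interface built in app/app.py and
# serve it on Gradio's default port, 7860. The `demo` name is an assumption.
from app.app import demo

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860)
```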

## Features

- Interactive Gradio interface (see the sketch after this list)
- Filter models by task
- Compare performance across datasets
- Visualize results with plots
- Track best performing models
- Detailed results table
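
As a rough illustration of how the task filter and results table could be wired together in Gradio, here is a minimal, self-contained sketch. It is not the repository's actual `app/app.py`; the data dictionary, model names, and scores are placeholders.

```python
import gradio as gr
import pandas as pd

# Placeholder stand-in for the dictionaries defined in app/data.py
TASKS = {
    "Classification": {
        "Model": ["ExampleModelA", "ExampleModelB"],
        "Dataset": ["DoMars16k", "DoMars16k"],
        "Accuracy": [0.0, 0.0],  # placeholder values
    },
}

def show_task(task_name: str) -> pd.DataFrame:
    """Return the results table for the selected task."""
    return pd.DataFrame(TASKS[task_name])

with gr.Blocks() as demo:
    task = gr.Dropdown(choices=list(TASKS), value="Classification", label="Task")
    table = gr.Dataframe(value=show_task("Classification"), label="Results")
    task.change(show_task, inputs=task, outputs=table)

if __name__ == "__main__":
    demo.launch()
```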

## Contributing

To add your model's results to the leaderboard:

1. Fork this repository
2. Add your results to the appropriate data dictionary in `app/data.py`
3. Submit a pull request with your changes

### Results Format

For each task, results should be added in the following format:

```python
TASK_DATA = {
    "Model": ["Model1", "Model2", ...],
    "Dataset": ["Dataset1", "Dataset1", ...],
    "Metric1": [value1, value2, ...],
    "Metric2": [value1, value2, ...],
}
```
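
For example, adding a new model's results to a classification dictionary could look like the sketch below. The dictionary name `CLASSIFICATION_DATA`, the metric column, and all values are illustrative placeholders; use the actual names defined in `app/data.py` and your measured scores.

```python
# Illustrative only: each column is a parallel list, so every list must be
# extended by the same number of entries when a new model is added.
CLASSIFICATION_DATA = {
    "Model":    ["ExistingModel", "YourModel"],   # append your model name
    "Dataset":  ["DoMars16k",     "DoMars16k"],   # the dataset it was evaluated on
    "Accuracy": [0.0,             0.0],           # placeholder scores
}
```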

## Project Structure

```
mars-vision-leaderboard/
β”œβ”€β”€ app/
β”‚   β”œβ”€β”€ __init__.py
β”‚   β”œβ”€β”€ app.py          # Main Gradio interface
β”‚   β”œβ”€β”€ data.py         # Dataset and model data
β”‚   └── leaderboard.py  # Visualization functions
β”œβ”€β”€ run.py              # Application entry point
β”œβ”€β”€ pyproject.toml      # Poetry dependencies
└── README.md
```