This model has been pushed to the Hub using the `PyTorchModelHubMixin` integration:
- Hugging Face Space (available now): https://huggingface.co/spaces/Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image
- Code: https://github.com/Tournesol-Saturday/RAIL
- Paper: RAIL: Region-Aware Instructive Learning for Semi-Supervised Tooth Segmentation in CBCT
Steps to use our model in this repository:
- Clone this repository with the following commands:

  ```shell
  git clone https://huggingface.co/Tournesol-Saturday/railNet-tooth-segmentation-in-CBCT-image
  cd railNet-tooth-segmentation-in-CBCT-image
  ```
- Create a virtual environment, install the dependencies, and launch the demo:

  ```shell
  conda create -n railnet python=3.10
  conda activate railnet
  pip install -r requirements.txt
  python gradio_app.py
  ```
- In the current working directory, find the `example_input_file` folder. Select an arbitrary `.h5` file in this folder and drag it into the Gradio interface for model inference.
- After about 1 min to 2 min 30 s, inference completes, and the segmentation result and a 3D rendering visualization are produced. Both the original image and the segmentation result are saved in `.nii.gz` format in the `output` folder of the same directory.
- Since Gradio performs 1/2 downsampling on the 3D segmentation visualization, the displayed segmentation accuracy is degraded. Users can drag the `.nii.gz` files from the `output` folder into the ITK-SNAP software to view the full-resolution segmentation.
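If you want to inspect an input volume outside the Gradio interface, the `.h5` files can be opened with `h5py`. The snippet below builds a small dummy file and reads it back; the `"image"` dataset name is an assumption for illustration (check `f.keys()` on a real file from `example_input_file` to see its actual layout):

```python
import h5py
import numpy as np

# Create a small dummy volume standing in for an example input file.
# NOTE: the "image" key is assumed for illustration; the repository's
# files may use different dataset names -- list them with f.keys().
volume = np.random.rand(16, 16, 16).astype(np.float32)
with h5py.File("dummy_input.h5", "w") as f:
    f.create_dataset("image", data=volume)

# Inspect the file the way you would one from example_input_file/.
with h5py.File("dummy_input.h5", "r") as f:
    keys = list(f.keys())        # dataset names stored in the file
    shape = f["image"].shape     # volume dimensions
print(keys, shape)
```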