danhtran2mind committed
Commit 12e1d9e · verified · 1 Parent(s): aafa12c

Update README.md

Files changed (1)
  1. README.md +7 -19
README.md CHANGED
@@ -25,11 +25,8 @@ Transform grayscale landscape images into vibrant, full-color visuals with this
 
 ## Notebook
 Explore the implementation in our Jupyter notebook:
-[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
-[![Open in SageMaker](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
-[![Open in Deepnote](https://deepnote.com/buttons/launch-in-deepnote-small.svg)](https://deepnote.com/launch?url=https://github.com/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
-[![JupyterLab](https://img.shields.io/badge/Launch-JupyterLab-orange?logo=Jupyter)](https://mybinder.org/v2/gh/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/main?filepath=autoencoder-grayscale-to-color-landscape.ipynb)
-[![View on GitHub](https://img.shields.io/badge/View%20on-GitHub-181717?logo=github)](https://github.com/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
+[![View on HuggingFace](https://img.shields.io/badge/View%20on-HuggingFace-181717?logo=huggingface)](https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
 
 ## Dataset
 Details about the dataset are available in the [README Dataset](./dataset/README.md). 📂
@@ -48,8 +45,9 @@ Experience the brilliance of our cutting-edge technology! Transform grayscale la
 
 ### Step 1: Clone the Repository
 ```bash
-git clone https://github.com/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch
-cd /autoencoder-grayscale2color-landscape-from-scratch
+git clone https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape
+cd ./autoencoder-grayscale2color-landscape
+git lfs pull
 ```
 
 ### Step 2: Install Dependencies
@@ -82,16 +80,6 @@ Download and load the autoencoder model from a remote source if it's not alrea
 load_model_path = "./ckpts/best_model.h5"
 os.makedirs(os.path.dirname(load_model_path), exist_ok=True)
 
-if not os.path.exists(load_model_path):
-    url = "https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/resolve/main/ckpts/best_model.h5"
-    print(f"Downloading model from {url}...")
-    with requests.get(url, stream=True) as response:
-        response.raise_for_status()
-        with open(load_model_path, "wb") as f:
-            for chunk in response.iter_content(chunk_size=8192):
-                f.write(chunk)
-    print("Model downloaded successfully.")
-
 print(f"Loading model from {load_model_path}...")
 loaded_autoencoder = tf.keras.models.load_model(
     load_model_path, custom_objects={"SpatialAttention": SpatialAttention}
@@ -185,7 +173,7 @@ The output will be a side-by-side comparison of the input grayscale image and th
 - **Early Stopping**: Monitors `val_loss`, patience of 20 epochs, restores best weights.
 - **ReduceLROnPlateau**: Monitors `val_loss`, reduces learning rate by 50% after 5 epochs, minimum learning rate of 1e-6.
 - **BackupAndRestore**: Saves checkpoints to `./ckpts/backup`.
--
+
 ## Metrics
 - **PSNR (Validation)**: 21.70 📈
 
@@ -202,4 +190,4 @@ The output will be a side-by-side comparison of the input grayscale image and th
 ```
 
 ## Contact
-For questions or issues, reach out via the [GitHub Issues](https://github.com/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/issues) tab. 🚀
+For questions or issues, reach out via the [HuggingFace Community](https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/discussions) tab. 🚀
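
This commit removes the in-README download snippet in favor of `git lfs pull`. For readers who still want an on-demand fetch of the checkpoint, the removed chunked-download pattern can be sketched with only the standard library; this uses `urllib` instead of `requests`, and the helper name `download_if_missing` is illustrative, not part of the repository:

```python
import os
import urllib.request


def download_if_missing(url: str, path: str, chunk_size: int = 8192) -> bool:
    """Fetch `url` into `path` in fixed-size chunks; skip if the file exists."""
    if os.path.exists(path):
        return False  # checkpoint already present, nothing to do
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with urllib.request.urlopen(url) as response, open(path, "wb") as f:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            f.write(chunk)
    return True
```

Chunked writes keep memory flat regardless of checkpoint size, which is why the original snippet streamed with `chunk_size=8192` rather than reading the whole response at once.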
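
The `ReduceLROnPlateau` behavior noted in the diff (halve the learning rate after 5 epochs without `val_loss` improvement, floor at 1e-6) can be sketched framework-free; this is an illustrative re-implementation of the schedule's logic, not Keras code:

```python
class PlateauHalver:
    """Halve the LR after `patience` epochs without val_loss improvement."""

    def __init__(self, lr=1e-3, factor=0.5, patience=5, min_lr=1e-6):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_lr = min_lr
        self.best = float("inf")  # best val_loss seen so far
        self.wait = 0             # epochs since last improvement

    def on_epoch_end(self, val_loss):
        if val_loss < self.best:
            self.best = val_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # Reduce, but never below the configured floor.
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr
```

The `max(..., min_lr)` clamp is what guarantees the 1e-6 floor no matter how long training stagnates.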
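
The validation PSNR reported in the Metrics section (21.70) follows the standard definition, 10 · log10(MAX² / MSE); a minimal sketch, assuming pixel values scaled to [0, 1] and flat sequences of pixels (the project's actual evaluation code may differ):

```python
import math


def psnr(original, reconstructed, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)
```

A uniform error of 0.1 per pixel gives MSE = 0.01 and hence 20 dB, so the reported 21.70 dB corresponds to a slightly smaller average reconstruction error.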