Update README.md
README.md
CHANGED
@@ -25,11 +25,8 @@ Transform grayscale landscape images into vibrant, full-color visuals with this
 
 ## Notebook
 Explore the implementation in our Jupyter notebook:
-[](https://colab.research.google.com/
-[](https://deepnote.com/launch?url=https://github.com/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
-[](https://mybinder.org/v2/gh/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/main?filepath=autoencoder-grayscale-to-color-landscape.ipynb)
-[](https://github.com/danhtran2mind/autoencoder-grayscale2color-landscape-from-scratch/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
 
 ## Dataset
 Details about the dataset are available in the [README Dataset](./dataset/README.md).
@@ -48,8 +45,9 @@ Experience the brilliance of our cutting-edge technology! Transform grayscale la
 
 ### Step 1: Clone the Repository
 ```bash
-git clone https://
-cd
 ```
 
 ### Step 2: Install Dependencies
@@ -82,16 +80,6 @@ Download and load the autoencoder model from a remote source if it's not alrea
 
 load_model_path = "./ckpts/best_model.h5"
 os.makedirs(os.path.dirname(load_model_path), exist_ok=True)
 
-if not os.path.exists(load_model_path):
-    url = "https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/resolve/main/ckpts/best_model.h5"
-    print(f"Downloading model from {url}...")
-    with requests.get(url, stream=True) as response:
-        response.raise_for_status()
-        with open(load_model_path, "wb") as f:
-            for chunk in response.iter_content(chunk_size=8192):
-                f.write(chunk)
-    print("Model downloaded successfully.")
-
 print(f"Loading model from {load_model_path}...")
 loaded_autoencoder = tf.keras.models.load_model(
     load_model_path, custom_objects={"SpatialAttention": SpatialAttention}
@@ -185,7 +173,7 @@ The output will be a side-by-side comparison of the input grayscale image and th
 - **Early Stopping**: Monitors `val_loss`, patience of 20 epochs, restores best weights.
 - **ReduceLROnPlateau**: Monitors `val_loss`, reduces learning rate by 50% after 5 epochs, minimum learning rate of 1e-6.
 - **BackupAndRestore**: Saves checkpoints to `./ckpts/backup`.
-
 ## Metrics
 - **PSNR (Validation)**: 21.70
 
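The three callbacks listed above map directly onto the standard Keras API; this is a sketch under that assumption, with illustrative variable names:

```python
import tensorflow as tf

# Stop after 20 epochs without val_loss improvement, keeping the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=20, restore_best_weights=True
)
# Halve the learning rate after 5 stagnant epochs, never going below 1e-6.
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6
)
# Checkpoint training state so interrupted runs can resume.
backup = tf.keras.callbacks.BackupAndRestore(backup_dir="./ckpts/backup")

callbacks = [early_stop, reduce_lr, backup]
```

The list would then be passed as `model.fit(..., callbacks=callbacks)`.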
@@ -202,4 +190,4 @@ The output will be a side-by-side comparison of the input grayscale image and th
 ```
 
 ## Contact
-For questions or issues, reach out via the [
 
 ## Notebook
 Explore the implementation in our Jupyter notebook:
+[](https://colab.research.google.com/#fileId=https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
+[](https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/blob/main/notebooks/autoencoder-grayscale-to-color-landscape.ipynb)
 
 ## Dataset
 Details about the dataset are available in the [README Dataset](./dataset/README.md).
 
 ### Step 1: Clone the Repository
 ```bash
+git clone https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape
+cd ./autoencoder-grayscale2color-landscape
+git lfs pull
 ```
 
 ### Step 2: Install Dependencies
 load_model_path = "./ckpts/best_model.h5"
 os.makedirs(os.path.dirname(load_model_path), exist_ok=True)
 
 print(f"Loading model from {load_model_path}...")
 loaded_autoencoder = tf.keras.models.load_model(
     load_model_path, custom_objects={"SpatialAttention": SpatialAttention}
 - **Early Stopping**: Monitors `val_loss`, patience of 20 epochs, restores best weights.
 - **ReduceLROnPlateau**: Monitors `val_loss`, reduces learning rate by 50% after 5 epochs, minimum learning rate of 1e-6.
 - **BackupAndRestore**: Saves checkpoints to `./ckpts/backup`.
+
 ## Metrics
 - **PSNR (Validation)**: 21.70
 
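The validation PSNR quoted above follows the standard peak signal-to-noise definition, 10·log10(MAX²/MSE). A minimal NumPy sketch (the function name and signature are illustrative):

```python
import numpy as np


def psnr(reference: np.ndarray, output: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images scaled to [0, max_val]."""
    mse = np.mean((reference.astype(np.float64) - output.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val**2 / mse))
```

Higher is better; a value around 21.7 dB corresponds to a mean squared error of roughly 0.0068 on images normalized to [0, 1].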
 ```
 
 ## Contact
+For questions or issues, reach out via the [HuggingFace Community](https://huggingface.co/danhtran2mind/autoencoder-grayscale2color-landscape/discussions) tab.