import tensorflow as tf
import tensorflow_datasets as tfds

train_ds, validation_ds, test_ds = tfds.load(
    "cats_vs_dogs",
    # Reserve 10% for validation and 10% for test
    split=["train[:40%]", "train[40%:50%]", "train[50%:60%]"],
    as_supervised=True,  # Include labels
)

print("Number of training samples: %d" % tf.data.experimental.cardinality(train_ds))
print(
    "Number of validation samples: %d" % tf.data.experimental.cardinality(validation_ds)
)
print("Number of test samples: %d" % tf.data.experimental.cardinality(test_ds))
Number of training samples: 9305
Number of validation samples: 2326
Number of test samples: 2326
These are the first 9 images in the training dataset -- as you can see, they're all different sizes.
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(train_ds.take(9)):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image)
    plt.title(int(label))
    plt.axis("off")
[figure: the first 9 training images, shown at their original sizes]
We can also see that label 1 is "dog" and label 0 is "cat".
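If you'd rather confirm the label mapping programmatically than by eyeballing the images, TFDS exposes the class names through the dataset metadata. This is an optional check added here for illustration, not part of the original pipeline:

# Optional check: recover the label names from the dataset metadata.
ds_info = tfds.builder("cats_vs_dogs").info
print(ds_info.features["label"].names)       # expected: ['cat', 'dog']
print(ds_info.features["label"].int2str(1))  # expected: 'dog'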
Standardizing the data
Our raw images come in a variety of sizes. In addition, each pixel consists of 3 integer values between 0 and 255 (RGB level values). This isn't a great fit for feeding a neural network. We need to do 2 things:
- Standardize to a fixed image size. We pick 150x150.
- Normalize pixel values between -1 and 1. We'll do this using a Normalization layer as part of the model itself.
In general, it's a good practice to develop models that take raw data as input, as opposed to models that take already-preprocessed data. The reason is that if your model expects preprocessed data, then any time you export it to use elsewhere (in a web browser, in a mobile app), you'll need to reimplement the exact same preprocessing pipeline. This gets very tricky very quickly. So we should do the least possible amount of preprocessing before hitting the model.
Here, we'll do the image resizing in the data pipeline (because a deep neural network can only process contiguous batches of data), and we'll do the input value scaling as part of the model, when we create it.
Let's resize images to 150x150:
size = (150, 150)

train_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))
validation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))
test_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))
Let's also batch the data and use caching & prefetching to optimize loading speed.
batch_size = 32

train_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)
validation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)
test_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)
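At this point each element of the datasets is a batch of fixed-size float tensors rather than a single variable-size image. As a quick, optional sanity check (not part of the original pipeline), we can peek at one training batch and print its shape:

# Optional check: after resizing and batching, a training batch should be 32 images of 150x150x3.
for images, labels in train_ds.take(1):
    print(images.shape)  # expected: (32, 150, 150, 3)
    print(labels.shape)  # expected: (32,)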
Using random data augmentation
When you don't have a large image dataset, it's a good practice to artificially introduce sample diversity by applying random yet realistic transformations to the training images, such as random horizontal flipping or small random rotations. This helps expose the model to different aspects of the training data while slowing down overfitting.
from tensorflow import keras
from tensorflow.keras import layers

data_augmentation = keras.Sequential(
    [
        layers.experimental.preprocessing.RandomFlip("horizontal"),
        layers.experimental.preprocessing.RandomRotation(0.1),
    ]
)
Let's visualize what the first image of the first batch looks like after various random transformations:
import numpy as np

for images, labels in train_ds.take(1):
    plt.figure(figsize=(10, 10))
    first_image = images[0]
    for i in range(9):
        ax = plt.subplot(3, 3, i + 1)
        augmented_image = data_augmentation(
            tf.expand_dims(first_image, 0), training=True
        )
        plt.imshow(augmented_image[0].numpy().astype("int32"))
        plt.title(int(labels[0]))  # Every panel shows the same image, so use its own label.
        plt.axis("off")
[figure: nine random augmentations of the first training image]
Build a model
Now let's build a model that follows the blueprint we've explained earlier.
Note that:
- We add a Normalization layer to scale input values (initially in the [0, 255] range) to the [-1, 1] range.
- We add a Dropout layer before the classification layer, for regularization.
- We make sure to pass training=False when calling the base model, so that it runs in inference mode and its batchnorm statistics don't get updated even after we unfreeze the base model for fine-tuning.
base_model = keras.applications.Xception(
    weights="imagenet",  # Load weights pre-trained on ImageNet.
    input_shape=(150, 150, 3),
    include_top=False,
)  # Do not include the ImageNet classifier at the top.

# Freeze the base_model
base_model.trainable = False

# Create new model on top
inputs = keras.Input(shape=(150, 150, 3))
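The notes above describe the rest of the model head: augmentation, input scaling to [-1, 1], the frozen base model called in inference mode, pooling, dropout, and a single-unit classifier. Here is a minimal sketch of how that head could be assembled; it is an illustration under assumptions rather than the guide's verbatim code (in particular, it uses a Rescaling layer for the [0, 255] -> [-1, 1] step, GlobalAveragePooling2D over the base model's features, and a dropout rate of 0.2 picked for illustration):

# Sketch (see assumptions above): attach a new classification head to the frozen base model.
x = data_augmentation(inputs)  # Apply random data augmentation.
x = layers.experimental.preprocessing.Rescaling(scale=1 / 127.5, offset=-1)(x)  # [0, 255] -> [-1, 1]
x = base_model(x, training=False)  # Keep the base model's batchnorm layers in inference mode.
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)  # Regularize with dropout.
outputs = layers.Dense(1)(x)  # One logit for the binary cat/dog label.
model = keras.Model(inputs, outputs)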