inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs)  # Apply random data augmentation

# Pre-trained Xception weights require that the input be normalized
# from (0, 255) to the range (-1., +1.). The normalization layer
# does the following: outputs = (inputs - mean) / sqrt(var)
norm_layer = keras.layers.experimental.preprocessing.Normalization()
mean = np.array([127.5] * 3)
var = mean ** 2
# Scale inputs to [-1, +1]
x = norm_layer(x)
norm_layer.set_weights([mean, var])
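# Sanity check of the scaling: with mean = 127.5 and sqrt(var) = 127.5 per
# channel, a pixel value of 0 maps to (0 - 127.5) / 127.5 = -1.0 and a pixel
# value of 255 maps to (255 - 127.5) / 127.5 = +1.0.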
# The base model contains batchnorm layers. We want to keep them in inference mode
# when we unfreeze the base model for fine-tuning, so we make sure that the
# base_model is running in inference mode here.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)  # Regularize with dropout
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)
model.summary()
Model: "model" |
_________________________________________________________________ |
Layer (type) Output Shape Param # |
================================================================= |
input_5 (InputLayer) [(None, 150, 150, 3)] 0 |
_________________________________________________________________ |
sequential_3 (Sequential) (None, 150, 150, 3) 0 |
_________________________________________________________________ |
normalization (Normalization (None, 150, 150, 3) 7 |
_________________________________________________________________ |
xception (Model) (None, 5, 5, 2048) 20861480 |
_________________________________________________________________ |
global_average_pooling2d (Gl (None, 2048) 0 |
_________________________________________________________________ |
dropout (Dropout) (None, 2048) 0 |
_________________________________________________________________ |
dense_7 (Dense) (None, 1) 2049 |
================================================================= |
Total params: 20,863,536 |
Trainable params: 2,049 |
Non-trainable params: 20,861,487 |
_________________________________________________________________ |
Train the top layer
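Note that we compile with from_logits=True because the final Dense(1) layer has no sigmoid activation and therefore outputs raw logits.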
model.compile(
    optimizer=keras.optimizers.Adam(),
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 20
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
Epoch 1/20
291/291 [==============================] - 24s 83ms/step - loss: 0.1639 - binary_accuracy: 0.9276 - val_loss: 0.0883 - val_binary_accuracy: 0.9652
Epoch 2/20
291/291 [==============================] - 22s 76ms/step - loss: 0.1202 - binary_accuracy: 0.9491 - val_loss: 0.0855 - val_binary_accuracy: 0.9686
Epoch 3/20
291/291 [==============================] - 23s 80ms/step - loss: 0.1076 - binary_accuracy: 0.9546 - val_loss: 0.0802 - val_binary_accuracy: 0.9682
Epoch 4/20
291/291 [==============================] - 23s 80ms/step - loss: 0.1127 - binary_accuracy: 0.9539 - val_loss: 0.0798 - val_binary_accuracy: 0.9682
Epoch 5/20
291/291 [==============================] - 23s 78ms/step - loss: 0.1072 - binary_accuracy: 0.9558 - val_loss: 0.0807 - val_binary_accuracy: 0.9695
Epoch 6/20
291/291 [==============================] - 23s 79ms/step - loss: 0.1073 - binary_accuracy: 0.9565 - val_loss: 0.0746 - val_binary_accuracy: 0.9733
Epoch 7/20
291/291 [==============================] - 23s 79ms/step - loss: 0.1037 - binary_accuracy: 0.9562 - val_loss: 0.0738 - val_binary_accuracy: 0.9712
Epoch 8/20
291/291 [==============================] - 23s 79ms/step - loss: 0.1061 - binary_accuracy: 0.9580 - val_loss: 0.0764 - val_binary_accuracy: 0.9738
Epoch 9/20
291/291 [==============================] - 23s 78ms/step - loss: 0.0959 - binary_accuracy: 0.9612 - val_loss: 0.0823 - val_binary_accuracy: 0.9673
Epoch 10/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0956 - binary_accuracy: 0.9600 - val_loss: 0.0736 - val_binary_accuracy: 0.9725
Epoch 11/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0944 - binary_accuracy: 0.9603 - val_loss: 0.0781 - val_binary_accuracy: 0.9716
Epoch 12/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0960 - binary_accuracy: 0.9615 - val_loss: 0.0720 - val_binary_accuracy: 0.9725
Epoch 13/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0987 - binary_accuracy: 0.9614 - val_loss: 0.0791 - val_binary_accuracy: 0.9708
Epoch 14/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0930 - binary_accuracy: 0.9636 - val_loss: 0.0780 - val_binary_accuracy: 0.9690
Epoch 15/20
291/291 [==============================] - 23s 78ms/step - loss: 0.0954 - binary_accuracy: 0.9624 - val_loss: 0.0772 - val_binary_accuracy: 0.9678
Epoch 16/20
291/291 [==============================] - 23s 78ms/step - loss: 0.0963 - binary_accuracy: 0.9598 - val_loss: 0.0781 - val_binary_accuracy: 0.9695
Epoch 17/20
291/291 [==============================] - 23s 78ms/step - loss: 0.1006 - binary_accuracy: 0.9585 - val_loss: 0.0832 - val_binary_accuracy: 0.9699
Epoch 18/20
291/291 [==============================] - 23s 78ms/step - loss: 0.0942 - binary_accuracy: 0.9615 - val_loss: 0.0761 - val_binary_accuracy: 0.9703
Epoch 19/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0950 - binary_accuracy: 0.9613 - val_loss: 0.0817 - val_binary_accuracy: 0.9690
Epoch 20/20
291/291 [==============================] - 23s 79ms/step - loss: 0.0906 - binary_accuracy: 0.9624 - val_loss: 0.0755 - val_binary_accuracy: 0.9712
<tensorflow.python.keras.callbacks.History at 0x7f3fa4cdab00>
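The frozen-base model converges quickly: training accuracy climbs to about 96%, and validation accuracy hovers around 97% for most of the run.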
Do a round of fine-tuning of the entire model
Finally, let's unfreeze the base model and train the entire model end-to-end with a low learning rate.
Importantly, although the base model becomes trainable, it is still running in inference mode since we passed training=False when we called it while building the model. This means that the batch normalization layers inside won't update their batch statistics. If they did, they would wreak havoc on the representations the model has learned so far.
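As a minimal sketch of what this fine-tuning round could look like, reusing the names defined above (the specific learning rate of 1e-5 and the epoch count are assumptions, not values given in the text):

# Unfreeze the base model. Note that it keeps running in inference mode
# since we passed training=False when calling it, so the batchnorm layers
# will not update their batch statistics.
base_model.trainable = True
model.summary()

model.compile(
    optimizer=keras.optimizers.Adam(1e-5),  # Assumed low learning rate, to avoid destroying the pre-trained features
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 10  # Assumed epoch count for the fine-tuning round
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)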