        loss_value = loss_fn(y, logits)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value
Let's do the same with the evaluation step:
@tf.function
def test_step(x, y):
    val_logits = model(x, training=False)
    val_acc_metric.update_state(y, val_logits)
Now, let's re-run our training loop with this compiled training step:
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        loss_value = train_step(x_batch_train, y_batch_train)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * 64))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        test_step(x_batch_val, y_batch_val)

    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    print("Time taken: %.2fs" % (time.time() - start_time))
Start of epoch 0
Training loss (for one batch) at step 0: 0.6483
Seen so far: 64 samples
Training loss (for one batch) at step 200: 0.5966
Seen so far: 12864 samples
Training loss (for one batch) at step 400: 0.5951
Seen so far: 25664 samples
Training loss (for one batch) at step 600: 1.3830
Seen so far: 38464 samples
Training loss (for one batch) at step 800: 0.2758
Seen so far: 51264 samples
Training acc over epoch: 0.8756
Validation acc: 0.8955
Time taken: 1.18s

Start of epoch 1
Training loss (for one batch) at step 0: 0.4447
Seen so far: 64 samples
Training loss (for one batch) at step 200: 0.3794
Seen so far: 12864 samples
Training loss (for one batch) at step 400: 0.4636
Seen so far: 25664 samples
Training loss (for one batch) at step 600: 0.3694
Seen so far: 38464 samples
Training loss (for one batch) at step 800: 0.2763
Seen so far: 51264 samples
Training acc over epoch: 0.8926
Validation acc: 0.9078
Time taken: 0.71s

Much faster, isn't it?
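If you want to isolate how much of that speedup comes from compilation itself, a rough timing sketch like the one below can help. It reuses the last x_batch_train / y_batch_train from the loop above, and it refers to eager_train_step, a hypothetical copy of the training step defined without the @tf.function decorator (not part of this guide); treat it as an illustration rather than part of the pipeline.
import time

def time_step(step_fn, n=100):
    # Warm up once so one-time tf.function tracing isn't counted.
    step_fn(x_batch_train, y_batch_train)
    start = time.time()
    for _ in range(n):
        step_fn(x_batch_train, y_batch_train)
    return (time.time() - start) / n

# Note: these extra calls also update train_acc_metric; reset it if you keep training.
print("Compiled step: %.5fs per batch" % time_step(train_step))
# print("Eager step:    %.5fs per batch" % time_step(eager_train_step))  # hypothetical undecorated variant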
Low-level handling of losses tracked by the model
Layers & models recursively track any losses created during the forward pass by layers that call self.add_loss(value). The resulting list of scalar loss values is available via the property model.losses at the end of the forward pass.
If you want to use these loss components, you should sum them and add them to the main loss in your training step.
Consider this layer, which creates an activity regularization loss:
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(1e-2 * tf.reduce_sum(inputs))
        return inputs
Let's build a really simple model that uses it:
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
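Since the model now routes activations through ActivityRegularizationLayer, a single forward pass is enough to populate model.losses. As a quick sanity check (the dummy batch below is only an illustration, not part of the guide's data pipeline), you could run:
dummy_batch = tf.ones((2, 784))
_ = model(dummy_batch, training=True)
# A list with one scalar tensor collected via add_loss() during the last forward pass.
print(model.losses)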
Here's what our training step should look like now:
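Compared to the earlier compiled version, it only needs one extra line: after computing the main loss, sum the losses collected during the forward pass and add them in. The sketch below assumes the same loss_fn, optimizer, and metric objects used in the previous steps.
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
        # Add any extra losses created during the forward pass.
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value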