```python
import numpy as np
import tensorflow as tf
import keras
from keras import layers

# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Prepare the training dataset.
batch_size = 64
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = np.reshape(x_train, (-1, 784))
x_test = np.reshape(x_test, (-1, 784))
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
```
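The metrics section later in this guide evaluates on validation data at the end of each epoch. One way to prepare that data, mirroring the training pipeline above, is sketched here; the `val_dataset` name is our own choice for illustration, not part of the original snippet:

```python
# Prepare a validation dataset from the held-out test split,
# batched the same way as the training data (name is illustrative).
val_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
val_dataset = val_dataset.batch(batch_size)
```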
Here's our training loop:

- We open a `for` loop that iterates over epochs
- For each epoch, we open a `for` loop that iterates over the dataset, in batches
- For each batch, we open a `GradientTape()` scope
- Inside this scope, we call the model (forward pass) and compute the loss
- Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss
- Finally, we use the optimizer to update the weights of the model based on the gradients
```python
epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):

        # Open a GradientTape to record the operations run
        # during the forward pass, which enables auto-differentiation.
        with tf.GradientTape() as tape:
            # Run the forward pass of the layer.
            # The operations that the layer applies
            # to its inputs are going to be recorded
            # on the GradientTape.
            logits = model(x_batch_train, training=True)  # Logits for this minibatch

            # Compute the loss value for this minibatch.
            loss_value = loss_fn(y_batch_train, logits)

        # Use the gradient tape to automatically retrieve
        # the gradients of the trainable variables with respect to the loss.
        grads = tape.gradient(loss_value, model.trainable_weights)

        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %s samples" % ((step + 1) * 64))
```
```
Start of epoch 0
Training loss (for one batch) at step 0: 76.3562
Seen so far: 64 samples
Training loss (for one batch) at step 200: 1.3921
Seen so far: 12864 samples
Training loss (for one batch) at step 400: 1.0018
Seen so far: 25664 samples
Training loss (for one batch) at step 600: 0.8904
Seen so far: 38464 samples
Training loss (for one batch) at step 800: 0.8393
Seen so far: 51264 samples

Start of epoch 1
Training loss (for one batch) at step 0: 0.8572
Seen so far: 64 samples
Training loss (for one batch) at step 200: 0.7616
Seen so far: 12864 samples
Training loss (for one batch) at step 400: 0.8453
Seen so far: 25664 samples
Training loss (for one batch) at step 600: 0.4959
Seen so far: 38464 samples
Training loss (for one batch) at step 800: 0.9363
Seen so far: 51264 samples
```
## Low-level handling of metrics
Let's add metrics monitoring to this basic loop. |
You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here's the flow: |
- Instantiate the metric at the start of the loop
- Call `metric.update_state()` after each batch
- Call `metric.result()` when you need to display the current value of the metric
- Call `metric.reset_states()` when you need to clear the state of the metric (typically at the end of an epoch)
Let's use this knowledge to compute SparseCategoricalAccuracy on validation data at the end of each epoch: |
```python
# Get model
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
```
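To complete the picture, here's a minimal sketch of how the four metric calls listed above could be wired into the loop. It assumes a batched `val_dataset` built from `(x_test, y_test)` as sketched earlier, and the variable names `train_acc_metric` and `val_acc_metric` are our own, chosen for illustration:

```python
# Prepare the metrics (names are illustrative).
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()

epochs = 2
for epoch in range(epochs):
    # Iterate over the batches of the training dataset.
    for x_batch_train, y_batch_train in train_dataset:
        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Update the training metric after each batch.
        train_acc_metric.update_state(y_batch_train, logits)

    # Display the training metric at the end of the epoch, then reset it.
    print("Training acc over epoch: %.4f" % float(train_acc_metric.result()))
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        val_logits = model(x_batch_val, training=False)
        val_acc_metric.update_state(y_batch_val, val_logits)
    print("Validation acc: %.4f" % float(val_acc_metric.result()))
    val_acc_metric.reset_states()
```

The accuracy metric works directly on logits here because `SparseCategoricalAccuracy` compares the label against the argmax of the predictions, so no softmax is needed for this purpose.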