```python
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()

# Reserve 10,000 samples for validation. Do this before building the
# training dataset, so the validation samples are excluded from it.
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]

# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(batch_size)
```
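Note that `from_tensor_slices` snapshots the arrays at dataset-creation time, so re-slicing a NumPy array afterwards does not change an already-built dataset. A minimal illustration (a standalone sketch, not part of the MNIST example above):

```python
import numpy as np
import tensorflow as tf

data = np.arange(6)
ds = tf.data.Dataset.from_tensor_slices(data)
data = data[:3]  # rebinding/slicing the array does not affect the dataset

print(list(ds.as_numpy_iterator()))  # still yields all 6 elements
```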
Here's our training & evaluation loop: |
```python
import time

epochs = 2
for epoch in range(epochs):
    print("\nStart of epoch %d" % (epoch,))
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Update training metric.
        train_acc_metric.update_state(y_batch_train, logits)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * batch_size))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch.
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        val_logits = model(x_batch_val, training=False)
        # Update val metrics.
        val_acc_metric.update_state(y_batch_val, val_logits)
    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    print("Time taken: %.2fs" % (time.time() - start_time))
```
```
Start of epoch 0
Training loss (for one batch) at step 0: 134.3001
Seen so far: 64 samples
Training loss (for one batch) at step 200: 1.3430
Seen so far: 12864 samples
Training loss (for one batch) at step 400: 1.3557
Seen so far: 25664 samples
Training loss (for one batch) at step 600: 0.8682
Seen so far: 38464 samples
Training loss (for one batch) at step 800: 0.5862
Seen so far: 51264 samples
Training acc over epoch: 0.7176
Validation acc: 0.8403
Time taken: 4.65s

Start of epoch 1
Training loss (for one batch) at step 0: 0.4264
Seen so far: 64 samples
Training loss (for one batch) at step 200: 0.4168
Seen so far: 12864 samples
Training loss (for one batch) at step 400: 0.6106
Seen so far: 25664 samples
Training loss (for one batch) at step 600: 0.4762
Seen so far: 38464 samples
Training loss (for one batch) at step 800: 0.4031
Seen so far: 51264 samples
Training acc over epoch: 0.8429
Validation acc: 0.8774
Time taken: 5.07s
```
## Speeding up your training step with `tf.function`
The default runtime in TensorFlow 2.0 is eager execution. As such, our training loop above executes eagerly.

This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedily execute one operation after another, with no knowledge of what comes next.
You can compile any function that takes tensors as input into a static graph. Just add a `@tf.function` decorator on it, like this:
```python
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)
        loss_value = loss_fn(y, logits)
    grads = tape.gradient(loss_value, model.trainable_weights)
    optimizer.apply_gradients(zip(grads, model.trainable_weights))
    train_acc_metric.update_state(y, logits)
    return loss_value
```
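The effect of graph compilation is easy to observe even on a toy computation. The snippet below (a standalone sketch, not part of the MNIST example; timings will vary by machine and op mix) runs the same function eagerly and compiled with `tf.function`:

```python
import time

import tensorflow as tf

# A toy computation: repeated normalized matrix products.
def step(x):
    for _ in range(10):
        x = tf.matmul(x, x) / tf.norm(x)
    return x

# The same function, compiled into a static graph.
compiled_step = tf.function(step)

x = tf.random.normal((64, 64))
compiled_step(x)  # the first call traces and compiles the graph

for name, fn in [("eager", step), ("compiled", compiled_step)]:
    start = time.time()
    for _ in range(100):
        fn(x)
    print("%s: %.3fs" % (name, time.time() - start))
```

Both variants compute the same result; the compiled version typically runs faster because the whole chain of ops is dispatched as one graph instead of op by op.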