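The snippet below continues an example whose setup is elided here: it assumes that inputs, targets, and predictions come from a model that computes its own loss (and metric) internally via add_loss() and add_metric(), which is why compile() takes no loss argument. A minimal sketch of one way to set this up (the LogisticEndpoint layer name and the shapes are illustrative, not prescriptive):

import numpy as np
import tensorflow as tf
from tensorflow import keras


class LogisticEndpoint(keras.layers.Layer):
    """Endpoint layer that registers its own loss and metric at training time."""

    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
        self.accuracy_fn = keras.metrics.BinaryAccuracy()

    def call(self, logits, targets):
        # Register the training loss on the layer; fit() will pick it up.
        self.add_loss(self.loss_fn(targets, logits))
        # Register a metric the same way.
        self.add_metric(self.accuracy_fn(targets, logits), name="binary_accuracy")
        # Return the inference-time prediction.
        return tf.nn.softmax(logits)


inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)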
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
    "inputs": np.random.random((3, 3)),
    "targets": np.random.random((3, 10)),
}
model.fit(data)
1/1 [==============================] - 0s 241ms/step - loss: 0.9990 - binary_accuracy: 0.0000e+00
<tensorflow.python.keras.callbacks.History at 0x14da49f10>
For more information about training multi-input models, see the section Passing data to multi-input, multi-output models.
Automatically setting apart a validation holdout set
In the first end-to-end example you saw, we used the validation_data argument to pass a tuple of NumPy arrays (x_val, y_val) to the model for evaluating a validation loss and validation metrics at the end of each epoch.
Here's another option: the argument validation_split allows you to automatically reserve part of your training data for validation. The argument value represents the fraction of the data to be reserved for validation, so it should be set to a number higher than 0 and lower than 1. For instance, validation_split=0.2 means "use 20% of the data for validation", and validation_split=0.6 means "use 60% of the data for validation".
The validation holdout is computed by taking the last x% of the samples in the arrays received by the fit() call, before any shuffling.
Note that you can only use validation_split when training with NumPy data.
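To make the "last x% of the samples" rule concrete, this is roughly the split that validation_split=0.2 performs for you (an illustrative sketch, not the actual Keras implementation):

# Hold out the last 20% of the samples, in the order they were passed,
# mimicking what `validation_split=0.2` does (sketch only).
split_at = int(len(x_train) * (1 - 0.2))
x_fit, x_val = x_train[:split_at], x_train[split_at:]
y_fit, y_val = y_train[:split_at], y_train[split_at:]
# fit() trains on the first 80% and reports val_loss / val_* metrics
# on the held-out 20% at the end of each epoch.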
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
625/625 [==============================] - 1s 1ms/step - loss: 0.6263 - sparse_categorical_accuracy: 0.8288 - val_loss: 0.2548 - val_sparse_categorical_accuracy: 0.9206
<tensorflow.python.keras.callbacks.History at 0x14db21a90>
Training & evaluation from tf.data Datasets
In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers, and you've seen how to use the validation_data and validation_split arguments in fit(), when your data is passed as NumPy arrays.
Let's now take a look at the case where your data comes in the form of a tf.data.Dataset object.
The tf.data API is a set of utilities in TensorFlow 2.0 for loading and preprocessing data in a way that's fast and scalable.
For a complete guide about creating Datasets, see the tf.data documentation.
You can pass a Dataset instance directly to the methods fit(), evaluate(), and predict():
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and batch the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
Epoch 1/3
782/782 [==============================] - 1s 1ms/step - loss: 0.5631 - sparse_categorical_accuracy: 0.8472
Epoch 2/3
782/782 [==============================] - 1s 1ms/step - loss: 0.1676 - sparse_categorical_accuracy: 0.9497
Epoch 3/3
782/782 [==============================] - 1s 1ms/step - loss: 0.1211 - sparse_categorical_accuracy: 0.9638
Evaluate
157/157 [==============================] - 0s 790us/step - loss: 0.1231 - sparse_categorical_accuracy: 0.9625
{'loss': 0.1230572983622551, 'sparse_categorical_accuracy': 0.9624999761581421}
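The comment in the snippet above also mentions predict(); calling it on the same Dataset looks like this (a brief sketch; the targets yielded by test_dataset are simply ignored):

print("Predict")
predictions = model.predict(test_dataset)
# One row of 10 class scores per test sample.
print(predictions.shape)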
Note that the Dataset is reset at the end of each epoch, so it can be reused for the next epoch.
If you want to run training only on a specific number of batches from this Dataset, you can pass the steps_per_epoch argument, which specifies how many training steps the model should run using this Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch; instead, we just keep drawing the next batches. The dataset will eventually run out of data (unless it is an infinitely-looping dataset; a repeat()-based sketch follows the example below).
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
Epoch 1/3
100/100 [==============================] - 0s 1ms/step - loss: 1.2444 - sparse_categorical_accuracy: 0.6461
Epoch 2/3
100/100 [==============================] - 0s 1ms/step - loss: 0.3783 - sparse_categorical_accuracy: 0.8929
Epoch 3/3
100/100 [==============================] - 0s 1ms/step - loss: 0.3543 - sparse_categorical_accuracy: 0.8988
<tensorflow.python.keras.callbacks.History at 0x14de00790>
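If you would rather the dataset never run out at all, one option (not shown in the example above) is to make it loop forever with repeat() and let steps_per_epoch bound each epoch:

# Sketch: an infinitely-looping variant of the training dataset.
infinite_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=1024)
    .batch(64)
    .repeat()  # never exhausts; each epoch stops after `steps_per_epoch` batches
)
model.fit(infinite_dataset, epochs=3, steps_per_epoch=100)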
Using a validation dataset
You can pass a Dataset instance as the validation_data argument in fit():
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset