val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
782/782 [==============================] - 1s 1ms/step - loss: 0.5465 - sparse_categorical_accuracy: 0.8459 - val_loss: 0.1943 - val_sparse_categorical_accuracy: 0.9435
<tensorflow.python.keras.callbacks.History at 0x14df31d10>
At the end of each epoch, the model will iterate over the validation dataset and compute the validation loss and validation metrics.
If you want to run validation only on a specific number of batches from this dataset, you can pass the validation_steps argument, which specifies how many validation steps the model should run with the validation dataset before interrupting validation and moving on to the next epoch:
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
    train_dataset,
    epochs=1,
    # Only run validation using the first 10 batches of the dataset
    # using the `validation_steps` argument
    validation_data=val_dataset,
    validation_steps=10,
)
782/782 [==============================] - 1s 1ms/step - loss: 0.5561 - sparse_categorical_accuracy: 0.8464 - val_loss: 0.2889 - val_sparse_categorical_accuracy: 0.9156
<tensorflow.python.keras.callbacks.History at 0x14df31810>
Note that the validation dataset will be reset after each use (so that you will always be evaluating on the same samples from epoch to epoch).
The argument validation_split (generating a holdout set from the training data) is not supported when training from Dataset objects, since this feature requires the ability to index the samples of the datasets, which is not possible in general with the Dataset API.
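If you do need a holdout set when working with Dataset objects, one workaround is to carve it out yourself with Dataset.take() and Dataset.skip(). The sketch below is not part of the example code above; it reuses x_train, y_train, and get_compiled_model() from earlier and assumes you are happy to reserve the first 50 batches for validation:
model = get_compiled_model()

# Build one batched dataset from the full training data. Shuffling with
# reshuffle_each_iteration=False keeps the split stable across epochs.
full_dataset = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(buffer_size=1024, reshuffle_each_iteration=False)
    .batch(64)
)

val_dataset = full_dataset.take(50)    # first 50 batches as the holdout set
train_dataset = full_dataset.skip(50)  # remaining batches for training

model.fit(train_dataset, epochs=1, validation_data=val_dataset)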
Other input formats supported
Besides NumPy arrays, eager tensors, and TensorFlow Datasets, it's possible to train a Keras model using Pandas dataframes, or Python generators that yield batches of data & labels.
In particular, the keras.utils.Sequence class offers a simple interface to build Python data generators that are multiprocessing-aware and can be shuffled.
In general, we recommend that you use:
NumPy input data if your data is small and fits in memory
Dataset objects if you have large datasets and you need to do distributed training
Sequence objects if you have large datasets and you need to do a lot of custom Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries for data loading or preprocessing).
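To illustrate the Pandas case mentioned above, here is a minimal sketch (not part of the original example code) that wraps the x_train array from earlier in a DataFrame and converts it back to NumPy before calling fit():
import pandas as pd

model = get_compiled_model()

# Wrap the existing 2D NumPy training data in a DataFrame,
# just to illustrate the Pandas input path.
df = pd.DataFrame(x_train)

# Converting back to NumPy with .to_numpy() is always accepted by fit().
model.fit(df.to_numpy(), y_train, batch_size=64, epochs=1)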
Using a keras.utils.Sequence object as input
keras.utils.Sequence is a utility that you can subclass to obtain a Python generator with two important properties:
It works well with multiprocessing.
It can be shuffled (e.g. when passing shuffle=True in fit()).
A Sequence must implement two methods:
__getitem__
__len__
The method __getitem__ should return a complete batch. If you want to modify your dataset between epochs, you may implement on_epoch_end.
Here's a quick example:
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
import numpy as np

# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.

class CIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames, self.labels = filenames, labels
        self.batch_size = batch_size

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
        batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            resize(imread(filename), (200, 200))
            for filename in batch_x]), np.array(batch_y)
sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
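The example above keeps the sample order fixed from epoch to epoch. As a sketch of the on_epoch_end hook mentioned earlier (a variant of the class above, not part of the original example), here is one way to reshuffle the data between epochs:
from skimage.io import imread
from skimage.transform import resize
from tensorflow.keras.utils import Sequence
import numpy as np

class ShuffledCIFAR10Sequence(Sequence):
    def __init__(self, filenames, labels, batch_size):
        self.filenames = np.array(filenames)
        self.labels = np.array(labels)
        self.batch_size = batch_size
        self.indices = np.arange(len(self.filenames))

    def __len__(self):
        return int(np.ceil(len(self.filenames) / float(self.batch_size)))

    def __getitem__(self, idx):
        # Look up this batch through the (possibly shuffled) index array.
        batch_idx = self.indices[idx * self.batch_size:(idx + 1) * self.batch_size]
        return np.array([
            resize(imread(filename), (200, 200))
            for filename in self.filenames[batch_idx]]), self.labels[batch_idx]

    def on_epoch_end(self):
        # Called by Keras at the end of each epoch.
        np.random.shuffle(self.indices)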
Using sample weighting and class weighting
With the default settings, the weight of a sample is decided by its frequency in the dataset. There are two methods to weight the data, independent of sample frequency:
Class weights
Sample weights
Class weights
This is set by passing a dictionary to the class_weight argument to Model.fit(). This dictionary maps class indices to the weight that should be used for samples belonging to this class.
This can be used to balance classes without resampling, or to train a model that gives more importance to a particular class.
For instance, if class "0" is half as represented as class "1" in your data, you could use Model.fit(..., class_weight={0: 1., 1: 0.5}).
Here's a NumPy example where we use class weights or sample weights to give more importance to the correct classification of class #5 (which is the digit "5" in the MNIST dataset).
import numpy as np
class_weight = {