# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
782/782 [==============================] - 1s 756us/step - loss: 0.0279
<tensorflow.python.keras.callbacks.History at 0x14d2534d0>
If you need a loss function that takes in parameters besides y_true and y_pred, you can subclass the tf.keras.losses.Loss class and implement the following two methods:
__init__(self): accept parameters to pass during the call of your loss function
call(self, y_true, y_pred): use the targets (y_true) and the model predictions (y_pred) to compute the model's loss
Let's say you want to use mean squared error, but with an added term that discourages prediction values far from 0.5 (we assume that the categorical targets are one-hot encoded and take values between 0 and 1). This creates an incentive for the model not to be too confident, which may help reduce overfitting (we won't know if it works until we try!).
Here's how you would do it:
class CustomMSE(keras.losses.Loss):
    def __init__(self, regularization_factor=0.1, name="custom_mse"):
        super().__init__(name=name)
        self.regularization_factor = regularization_factor

    def call(self, y_true, y_pred):
        mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
        reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
        return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())

y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
782/782 [==============================] - 1s 787us/step - loss: 0.0484
<tensorflow.python.keras.callbacks.History at 0x14d43edd0>
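Because the regularization strength is a constructor argument, you can also tune it at compile time. The value 0.2 below is just an illustrative choice, not a recommendation:

# Same model as above, but with a stronger (illustrative) regularization factor.
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE(regularization_factor=0.2))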
Custom metrics
If you need a metric that isn't part of the API, you can easily create custom metrics by subclassing the tf.keras.metrics.Metric class. You will need to implement 4 methods:
__init__(self), in which you will create state variables for your metric.
update_state(self, y_true, y_pred, sample_weight=None), which uses the targets y_true and the model predictions y_pred to update the state variables.
result(self), which uses the state variables to compute the final results.
reset_states(self), which reinitializes the state of the metric.
State update and results computation are kept separate (in update_state() and result(), respectively) because in some cases, the results computation might be very expensive and would only be done periodically.
Here's a simple example showing how to implement a CategoricalTruePositives metric that counts how many samples were correctly classified as belonging to a given class:
class CategoricalTruePositives(keras.metrics.Metric):
    def __init__(self, name="categorical_true_positives", **kwargs):
        super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name="ctp", initializer="zeros")

    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
        values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
        values = tf.cast(values, "float32")
        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, "float32")
            values = tf.multiply(values, sample_weight)
        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return self.true_positives

    def reset_states(self):
        # The state of the metric will be reset at the start of each epoch.
        self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
Epoch 1/3
782/782 [==============================] - 1s 871us/step - loss: 0.5631 - categorical_true_positives: 22107.3525
Epoch 2/3
782/782 [==============================] - 1s 826us/step - loss: 0.1679 - categorical_true_positives: 23860.3078
Epoch 3/3
782/782 [==============================] - 1s 823us/step - loss: 0.1102 - categorical_true_positives: 24231.2771
<tensorflow.python.keras.callbacks.History at 0x14d578bd0>
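Because update_state() and result() are kept separate, you can also drive the metric by hand outside of fit(). The tiny batch below is made-up data, purely to illustrate the call pattern:

# Standalone use of the metric defined above (toy inputs, not real data).
m = CategoricalTruePositives()
m.update_state(
    y_true=tf.constant([[0], [1], [2]]),
    y_pred=tf.constant([[0.9, 0.05, 0.05],
                        [0.1, 0.8, 0.1],
                        [0.2, 0.2, 0.6]]),
)
print(m.result().numpy())  # 3.0 -- all three samples are classified correctly
m.reset_states()  # clear the accumulated count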
Handling losses and metrics that don't fit the standard signature
The overwhelming majority of losses and metrics can be computed from y_true and y_pred, where y_pred is an output of your model -- but not all of them. For instance, a regularization loss may only require the activation of a layer (there are no targets in this case), and this activation may not be a model output.
In such cases, you can call self.add_loss(loss_value) from inside the call method of a custom layer. Losses added in this way get added to the "main" loss during training (the one passed to compile()). Here's a simple example that adds activity regularization (note that activity regularization is built into all Keras layers -- this layer is just for the sake of providing a concrete example):
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(tf.reduce_sum(inputs) * 0.1)
        return inputs  # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)

# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)

x = layers.Dense(64, activation="relu", name="dense_2")(x)
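The snippet above stops before the model is assembled. A plausible completion, following the same pattern as the earlier examples (the output layer name and hyperparameters here are assumptions consistent with the code above, not text from the source):

# Assumed completion: output head, model construction, and a short training run.
outputs = layers.Dense(10, name="predictions")(x)

model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
    optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# The displayed loss will be higher than in the earlier runs because the
# custom layer adds the activity-regularization term to the main loss.
model.fit(x_train, y_train, batch_size=64, epochs=1)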