For a model with multiple outputs, you can pass a list of losses and a list of per-output metrics:
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
    metrics=[
        [
            keras.metrics.MeanAbsolutePercentageError(),
            keras.metrics.MeanAbsoluteError(),
        ],
        [keras.metrics.CategoricalAccuracy()],
    ],
)
Since we gave names to our output layers, we could also specify per-output losses and metrics via a dict:
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "score_output": keras.losses.MeanSquaredError(),
        "class_output": keras.losses.CategoricalCrossentropy(),
    },
    metrics={
        "score_output": [
            keras.metrics.MeanAbsolutePercentageError(),
            keras.metrics.MeanAbsoluteError(),
        ],
        "class_output": [keras.metrics.CategoricalAccuracy()],
    },
)
We recommend the use of explicit names and dicts if you have more than two outputs.
It's possible to give different weights to different output-specific losses (for instance, one might wish to privilege the "score" loss in our example, by giving it 2x the importance of the class loss), using the loss_weights argument:
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "score_output": keras.losses.MeanSquaredError(),
        "class_output": keras.losses.CategoricalCrossentropy(),
    },
    metrics={
        "score_output": [
            keras.metrics.MeanAbsolutePercentageError(),
            keras.metrics.MeanAbsoluteError(),
        ],
        "class_output": [keras.metrics.CategoricalAccuracy()],
    },
    loss_weights={"score_output": 2.0, "class_output": 1.0},
)
You could also choose not to compute a loss for certain outputs, if these outputs are meant for prediction but not for training:
# List loss version
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[None, keras.losses.CategoricalCrossentropy()],
)

# Or dict loss version
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
Passing data to a multi-input or multi-output model in fit() works similarly to specifying a loss function in compile(): you can pass lists of NumPy arrays (with 1:1 mapping to the outputs that received a loss function) or dicts mapping output names to NumPy arrays.
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)

# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))

# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)

# Alternatively, fit on dicts
model.fit(
    {"img_input": img_data, "ts_input": ts_data},
    {"score_output": score_targets, "class_output": class_targets},
    batch_size=32,
    epochs=1,
)

4/4 [==============================] - 1s 5ms/step - loss: 13.0462 - score_output_loss: 2.7483 - class_output_loss: 10.2979
4/4 [==============================] - 0s 4ms/step - loss: 11.9004 - score_output_loss: 1.7583 - class_output_loss: 10.1420
<tensorflow.python.keras.callbacks.History at 0x14e67af50>
Here's the Dataset use case: similarly to what we did for NumPy arrays, the Dataset should return a tuple of dicts.
train_dataset = tf.data.Dataset.from_tensor_slices(
    (
        {"img_input": img_data, "ts_input": ts_data},
        {"score_output": score_targets, "class_output": class_targets},
    )
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

model.fit(train_dataset, epochs=1)

2/2 [==============================] - 0s 6ms/step - loss: 11.5102 - score_output_loss: 1.3747 - class_output_loss: 10.1355
<tensorflow.python.keras.callbacks.History at 0x14dc5ce90>
Using callbacks
Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, etc.). They can be used to implement certain behaviors (a brief usage sketch follows this list), such as:
Doing validation at different points during training (beyond the built-in per-epoch validation)
Checkpointing the model at regular intervals or when it exceeds a certain accuracy threshold
Changing the learning rate of the model when training seems to be plateauing
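Callbacks are passed as a list via the callbacks argument of fit(). As a minimal sketch, reusing the multi-input model and the dummy NumPy arrays defined above, and picking EarlyStopping purely as an illustration:
callbacks = [
    keras.callbacks.EarlyStopping(
        # Stop training when `loss` is no longer improving
        monitor="loss",
        # "No longer improving" means an improvement smaller than 1e-2...
        min_delta=1e-2,
        # ...sustained for at least 2 consecutive epochs
        patience=2,
        verbose=1,
    )
]
model.fit(
    {"img_input": img_data, "ts_input": ts_data},
    {"score_output": score_targets, "class_output": class_targets},
    batch_size=32,
    epochs=20,
    callbacks=callbacks,
)
Here EarlyStopping monitors the training loss because no validation data is passed; with a validation set, monitoring "val_loss" would be the more common choice.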