>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> kl = tf.keras.losses.KLDivergence()
>>> kl(y_true, y_pred).numpy()
0.458
>>> # Calling with 'sample_weight'.
>>> kl(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.366
>>> # Using 'sum' reduction type.
>>> kl = tf.keras.losses.KLDivergence(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> kl(y_true, y_pred).numpy()
0.916
>>> # Using 'none' reduction type.
>>> kl = tf.keras.losses.KLDivergence(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> kl(y_true, y_pred).numpy()
array([0.916, -3.08e-06], dtype=float32)
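For reference, the 'none' reduction exposes the per-sample losses that the other reductions combine: the default 'auto'/'sum_over_batch_size' reduction averages them ((0.916 + -3.08e-06) / 2 ≈ 0.458), while 'sum' adds them (≈ 0.916).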
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tf.keras.losses.KLDivergence())
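As a minimal end-to-end sketch (the toy model and random data below are illustrative assumptions, not part of the Keras docs), the loss can be used with fit() on targets that are normalized to probability distributions:

import numpy as np
import tensorflow as tf

# Hypothetical toy data: each row of `y` is a probability
# distribution over 4 classes (non-negative, sums to 1).
x = np.random.random((32, 8)).astype("float32")
y = np.random.random((32, 4)).astype("float32")
y /= y.sum(axis=1, keepdims=True)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="sgd", loss=tf.keras.losses.KLDivergence())
model.fit(x, y, epochs=1, verbose=0)

A softmax output layer keeps y_pred a valid probability distribution, which keeps the log(y_true / y_pred) term well-behaved.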
kl_divergence function
tf.keras.losses.kl_divergence(y_true, y_pred)
Computes Kullback-Leibler divergence loss between y_true and y_pred.
loss = y_true * log(y_true / y_pred)
See: https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3)).astype(np.float64)
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.kullback_leibler_divergence(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_true = tf.keras.backend.clip(y_true, 1e-7, 1)
>>> y_pred = tf.keras.backend.clip(y_pred, 1e-7, 1)
>>> assert np.array_equal(
...     loss.numpy(), np.sum(y_true * np.log(y_true / y_pred), axis=-1))
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
Returns
A Tensor with loss.
Raises
TypeError: If y_true cannot be cast to the dtype of y_pred.
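As an illustration of the casting behavior (a hedged sketch; the tensors below are made up for this example), integer targets are cast to y_pred's float dtype before the loss is computed:

import tensorflow as tf

# Integer y_true is cast to y_pred.dtype (float32) internally.
y_true = tf.constant([[0, 1]], dtype=tf.int64)
y_pred = tf.constant([[0.4, 0.6]], dtype=tf.float32)
print(tf.keras.losses.kl_divergence(y_true, y_pred).numpy())
# ~0.51, i.e. 1 * log(1 / 0.6), up to the internal 1e-7 clipping.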
Backend utilities
clear_session function
tf.keras.backend.clear_session()
Resets all state generated by Keras.
Keras manages a global state, which it uses to implement the Functional model-building API and to uniquify autogenerated layer names.
If you are creating many models in a loop, this global state will consume an increasing amount of memory over time, and you may want to clear it. Calling clear_session() releases the global state: this helps avoid clutter from old models and layers, especially when memory is limited.
Example 1: calling clear_session() when creating models in a loop

for _ in range(100):
  # Without `clear_session()`, each iteration of this loop will
  # slightly increase the size of the global state managed by Keras
  model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])

for _ in range(100):
  # With `clear_session()` called at the beginning,
  # Keras starts with a blank state at each iteration
  # and memory consumption is constant over time.
  tf.keras.backend.clear_session()
  model = tf.keras.Sequential([tf.keras.layers.Dense(10) for _ in range(10)])
Example 2: resetting the layer name generation counter

>>> import tensorflow as tf
>>> layers = [tf.keras.layers.Dense(10) for _ in range(10)]
>>> new_layer = tf.keras.layers.Dense(10)
>>> print(new_layer.name)
dense_10
>>> tf.keras.backend.set_learning_phase(1)
>>> print(tf.keras.backend.learning_phase())
1
>>> tf.keras.backend.clear_session()
>>> new_layer = tf.keras.layers.Dense(10)
>>> print(new_layer.name)
dense
floatx function
tf.keras.backend.floatx()
Returns the default float type, as a string.
E.g. 'float16', 'float32', 'float64'.
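A minimal usage sketch (assuming the default has not been changed with tf.keras.backend.set_floatx):

>>> tf.keras.backend.floatx()
'float32'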