Standalone usage:
>>> y_true = [[0., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1)
>>> # l2_norm(y_true) = [[0., 1.], [1./1.414, 1./1.414]]
>>> # l2_norm(y_pred) = [[1., 0.], [1./1.414, 1./1.414]]
>>> # l2_norm(y_true) . l2_norm(y_pred) = [[0., 0.], [0.5, 0.5]]
>>> # loss = -mean(sum(l2_norm(y_true) . l2_norm(y_pred), axis=1))
>>> #      = -((0. + 0.) + (0.5 + 0.5)) / 2
>>> cosine_loss(y_true, y_pred).numpy()
-0.5
>>> # Calling with 'sample_weight'.
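>>> # Weighted loss: -((0. * 0.8) + (0.999 * 0.2)) / 2 = -0.0999.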
>>> cosine_loss(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
-0.0999
>>> # Using 'sum' reduction type.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,
...     reduction=tf.keras.losses.Reduction.SUM)
>>> cosine_loss(y_true, y_pred).numpy()
-0.999
>>> # Using 'none' reduction type.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=1,
...     reduction=tf.keras.losses.Reduction.NONE)
>>> cosine_loss(y_true, y_pred).numpy()
array([-0., -0.999], dtype=float32)
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tf.keras.losses.CosineSimilarity(axis=1))
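For context, a minimal end-to-end sketch of the same usage (the model architecture, data, and shapes below are illustrative assumptions, not part of the API reference):
import numpy as np
import tensorflow as tf

# Hypothetical toy model: 4 input features mapped to 2-dimensional outputs.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
model.compile(optimizer='sgd',
              loss=tf.keras.losses.CosineSimilarity(axis=1))
# Random placeholder data, just to show the fit() call.
x = np.random.random((8, 4)).astype('float32')
y = np.random.random((8, 2)).astype('float32')
model.fit(x, y, epochs=1, verbose=0)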
Arguments
axis: (Optional) Defaults to -1. The dimension along which the cosine similarity is computed.
reduction: (Optional) Type of tf.keras.losses.Reduction to apply to the loss. Default value is AUTO. AUTO indicates that the reduction option will be determined by the usage context. For almost all cases this defaults to SUM_OVER_BATCH_SIZE. When used with tf.distribute.Strategy, outside of built-in training loops such as tf.keras compile and fit, using AUTO or SUM_OVER_BATCH_SIZE will raise an error. Please see this custom training tutorial (https://www.tensorflow.org/tutorials/distribute/custom_training) for more details.
name: Optional name for the op.
mean_squared_error function
tf.keras.losses.mean_squared_error(y_true, y_pred)
Computes the mean squared error between labels and predictions.
After computing the squared distance between the inputs, the mean value over the last dimension is returned.
loss = mean(square(y_true - y_pred), axis=-1)
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_squared_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(), np.mean(np.square(y_true - y_pred), axis=-1))
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
Returns
Mean squared error values. shape = [batch_size, d0, .. dN-1].
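A deterministic sketch of the same computation, with inputs chosen here (they are not from the reference) so the result can be checked by hand against the formula above:
>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # mean(square([[-1., 0.], [-1., 0.]]), axis=-1) = [0.5, 0.5]
>>> tf.keras.losses.mean_squared_error(y_true, y_pred).numpy()
array([0.5, 0.5], dtype=float32)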
mean_absolute_error function
tf.keras.losses.mean_absolute_error(y_true, y_pred)
Computes the mean absolute error between labels and predictions.
loss = mean(abs(y_true - y_pred), axis=-1)
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_absolute_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(), np.mean(np.abs(y_true - y_pred), axis=-1))
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
Returns
Mean absolute error values. shape = [batch_size, d0, .. dN-1].
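As with mean_squared_error above, a deterministic sketch with hand-checkable inputs (chosen here for illustration, not from the reference):
>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # mean(abs([[-1., 0.], [-1., 0.]]), axis=-1) = [0.5, 0.5]
>>> tf.keras.losses.mean_absolute_error(y_true, y_pred).numpy()
array([0.5, 0.5], dtype=float32)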
mean_absolute_percentage_error function
tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)
Computes the mean absolute percentage error between y_true and y_pred.
loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)
Standalone usage:
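The example itself is missing from this extract; a sketch in the same pattern as the mean_squared_error and mean_absolute_error examples above (the np.maximum clamp is an assumption, added to keep y_true away from zero in the division):
>>> y_true = np.random.random(size=(2, 3))
>>> y_true = np.maximum(y_true, 1e-7)  # Prevent division by zero.
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(),
...     100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))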