>>> y_true = np.random.random(size=(2, 3))
>>> y_true = np.maximum(y_true, 1e-7)  # Prevent division by zero
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(),
...     100. * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1))
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
Returns
Mean absolute percentage error values. shape = [batch_size, d0, .. dN-1].
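The example above clamps y_true away from zero because the percentage error divides by the targets. As a side note (not part of the reference example, just a sketch using the public API), a single target near zero dominates the loss, which is what the clamp avoids:
>>> import numpy as np
>>> import tensorflow as tf
>>> # A near-zero target inflates the percentage error for the whole row,
>>> # which is why the example above applies np.maximum(y_true, 1e-7).
>>> y_true = np.array([[1e-7, 1.0, 2.0]])
>>> y_pred = np.array([[0.1, 1.0, 2.0]])
>>> loss = tf.keras.losses.mean_absolute_percentage_error(y_true, y_pred)
>>> assert float(loss.numpy()[0]) > 1e6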
mean_squared_logarithmic_error function
tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
Computes the mean squared logarithmic error between y_true and y_pred.
loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)
Standalone usage:
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> y_true = np.maximum(y_true, 1e-7)
>>> y_pred = np.maximum(y_pred, 1e-7)
>>> assert np.allclose(
...     loss.numpy(),
...     np.mean(
...         np.square(np.log(y_true + 1.) - np.log(y_pred + 1.)), axis=-1))
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
Returns
Mean squared logarithmic error values. shape = [batch_size, d0, .. dN-1].
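Because the loss is computed on log(1 + x), it measures relative rather than absolute error. The following is a small illustrative sketch (not part of the reference example) showing that the same absolute miss is penalized far more heavily on a small target than on a large one:
>>> import numpy as np
>>> import tensorflow as tf
>>> # Same absolute error (5.0) at two different scales.
>>> y_true = np.array([[10.], [1000.]])
>>> y_pred = np.array([[15.], [1005.]])
>>> loss = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
>>> assert loss.numpy()[0] > loss.numpy()[1]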
cosine_similarity function
tf.keras.losses.cosine_similarity(y_true, y_pred, axis=-1)
Computes the cosine similarity between labels and predictions.
Note that the result is a number between -1 and 1: 0 indicates orthogonality, values closer to -1 indicate greater similarity, and values closer to 1 indicate greater dissimilarity. This sign convention makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, the cosine similarity will be 0 regardless of the proximity between predictions and targets.
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
Standalone usage:
>>> y_true = [[0., 1.], [1., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]
>>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)
>>> loss.numpy()
array([-0., -0.999, 0.999], dtype=float32)
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
axis: Axis along which to determine similarity.
Returns
Cosine similarity tensor.
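The formula above can be reproduced directly in NumPy. The l2_norm helper below is a hypothetical stand-in for the pseudocode in the loss definition (it is not a TensorFlow API); the sketch simply confirms the sign convention and the values shown in the standalone usage:
>>> import numpy as np
>>> import tensorflow as tf
>>> def l2_norm(x, axis=-1):
...     # Hypothetical helper mirroring the pseudocode: scale rows to unit length,
...     # leaving zero vectors unchanged (so they contribute a similarity of 0).
...     norm = np.linalg.norm(x, axis=axis, keepdims=True)
...     return np.divide(x, norm, out=np.zeros_like(x), where=norm != 0)
>>> y_true = np.array([[0., 1.], [1., 1.], [1., 1.]])
>>> y_pred = np.array([[1., 0.], [1., 1.], [-1., -1.]])
>>> manual = -np.sum(l2_norm(y_true) * l2_norm(y_pred), axis=-1)
>>> loss = tf.keras.losses.cosine_similarity(y_true, y_pred, axis=1)
>>> assert np.allclose(loss.numpy(), manual, atol=1e-5)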
Huber class
tf.keras.losses.Huber(delta=1.0, reduction="auto", name="huber_loss")
Computes the Huber loss between y_true and y_pred.
For each value x in error = y_true - y_pred:
loss = 0.5 * x^2 if |x| <= d
loss = 0.5 * d^2 + d * (|x| - d) if |x| > d
where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
Standalone usage:
>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> h = tf.keras.losses.Huber()
>>> h(y_true, y_pred).numpy()
0.155
>>> # Calling with 'sample_weight'.
>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()
0.09
>>> # Using 'sum' reduction type.
>>> h = tf.keras.losses.Huber(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> h(y_true, y_pred).numpy()
0.31
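For reference, the piecewise definition above can be checked by hand. This is only a sketch (not part of the reference example) reproducing the 0.155 value from the default reduction; with delta=1.0 all errors here fall in the quadratic branch:
>>> import numpy as np
>>> y_true = np.array([[0., 1.], [0., 0.]])
>>> y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
>>> d = 1.0
>>> x = y_true - y_pred
>>> # Quadratic branch for |x| <= d, linear branch otherwise.
>>> per_element = np.where(np.abs(x) <= d,
...                        0.5 * x ** 2,
...                        0.5 * d ** 2 + d * (np.abs(x) - d))
>>> assert np.isclose(per_element.mean(), 0.155)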