>>> # Using 'sum' reduction type.
>>> h = tf.keras.losses.Huber(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> h(y_true, y_pred).numpy()
0.31
>>> # Using 'none' reduction type.
>>> h = tf.keras.losses.Huber(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> h(y_true, y_pred).numpy()
array([0.18, 0.13], dtype=float32)
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tf.keras.losses.Huber())
huber function
tf.keras.losses.huber(y_true, y_pred, delta=1.0)
Computes Huber loss value.
For each value x in error = y_true - y_pred:
loss = 0.5 * x^2 if |x| <= d
loss = d * |x| - 0.5 * d^2 if |x| > d
where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
delta: A float, the point where the Huber loss function changes from quadratic to linear.
Returns
Tensor with one scalar loss entry per sample.
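The piecewise definition above can be reproduced with plain NumPy. This is a minimal sketch for illustration only (huber_numpy is not part of the API); the input values are chosen to be consistent with the Huber class outputs shown earlier:
import numpy as np
import tensorflow as tf

def huber_numpy(y_true, y_pred, delta=1.0):
    # Element-wise error, then the quadratic/linear piecewise form.
    x = np.asarray(y_true, dtype=np.float32) - np.asarray(y_pred, dtype=np.float32)
    quadratic = 0.5 * np.square(x)
    linear = delta * np.abs(x) - 0.5 * delta ** 2
    per_element = np.where(np.abs(x) <= delta, quadratic, linear)
    # One scalar loss entry per sample: mean over the last axis.
    return per_element.mean(axis=-1)

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
np.testing.assert_allclose(
    huber_numpy(y_true, y_pred),                    # [0.18, 0.13]
    tf.keras.losses.huber(y_true, y_pred).numpy(),
    atol=1e-5)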
LogCosh class
tf.keras.losses.LogCosh(reduction="auto", name="log_cosh")
Computes the logarithm of the hyperbolic cosine of the prediction error.
logcosh = log((exp(x) + exp(-x))/2), where x is the error y_pred - y_true.
Standalone usage:
>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [0., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> l = tf.keras.losses.LogCosh()
>>> l(y_true, y_pred).numpy()
0.108
>>> # Calling with 'sample_weight'.
>>> l(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.087
>>> # Using 'sum' reduction type.
>>> l = tf.keras.losses.LogCosh(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> l(y_true, y_pred).numpy()
0.217
>>> # Using 'none' reduction type.
>>> l = tf.keras.losses.LogCosh(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> l(y_true, y_pred).numpy()
array([0.217, 0.], dtype=float32)
Usage with the compile() API:
model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh())
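A minimal end-to-end sketch of the compile() usage follows; the model architecture, data, and hyperparameters here are illustrative placeholders, not part of the documented API:
import numpy as np
import tensorflow as tf

# Toy regression model; any Keras model works the same way.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='sgd', loss=tf.keras.losses.LogCosh())

x = np.random.random((32, 4)).astype('float32')
y = np.random.random((32, 1)).astype('float32')
model.fit(x, y, epochs=2, batch_size=8, verbose=0)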
log_cosh function
tf.keras.losses.log_cosh(y_true, y_pred)
Logarithm of the hyperbolic cosine of the prediction error.
log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
Standalone usage:
>>> y_true = np.random.random(size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.log_cosh(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> x = y_pred - y_true
>>> assert np.allclose(
...     loss.numpy(),
...     np.mean(x + np.log(np.exp(-2. * x) + 1.) - np.log(2.), axis=-1),
...     atol=1e-5)
Arguments
y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
Returns
Logcosh error values. shape = [batch_size, d0, .. dN-1].
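As a quick sanity check of the approximations mentioned above (a standalone sketch, not part of the API), log(cosh(x)) is close to x ** 2 / 2 for small x and to abs(x) - log(2) for large x:
import numpy as np

def log_cosh_ref(x):
    # Numerically stable evaluation: log(cosh(x)) = |x| + log(1 + exp(-2|x|)) - log(2).
    x = np.abs(x)
    return x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0)

# Small errors: quadratic behaviour, like mean squared error.
assert np.isclose(log_cosh_ref(1e-3), (1e-3) ** 2 / 2, rtol=1e-3)
# Large errors: grows only linearly, so outliers are penalized less harshly.
assert np.isclose(log_cosh_ref(20.0), 20.0 - np.log(2.0), rtol=1e-6)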
Hinge losses for "maximum-margin" classification
Hinge class