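The setup for this first example is not shown in this excerpt; a plausible pair of inputs, consistent with the array([0.5, 0.5], ...) output below (an assumption, mirroring the inputs used in the MeanSquaredLogarithmicError examples), is:

>>> # Hypothetical inputs; each sample's mean absolute error is 0.5.
>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]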
>>> mae = tf.keras.losses.MeanAbsoluteError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> mae(y_true, y_pred).numpy()
array([0.5, 0.5], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.MeanAbsoluteError())
MeanAbsolutePercentageError class

tf.keras.losses.MeanAbsolutePercentageError(
    reduction="auto", name="mean_absolute_percentage_error"
)

Computes the mean absolute percentage error between y_true and y_pred.

loss = 100 * abs((y_true - y_pred) / y_true)
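The per-sample values used in the examples below can be reproduced with plain NumPy (a minimal sketch of the formula above, not the library implementation, which additionally guards the denominator against division by zero):

import numpy as np

y_true = np.array([[2., 1.], [2., 3.]])
y_pred = np.array([[1., 1.], [1., 0.]])

# Element-wise percentage error, averaged over the last axis, yields
# one value per sample.
per_sample = 100 * np.mean(np.abs((y_true - y_pred) / y_true), axis=-1)
print(per_sample)         # [25. 75.] ('none' reduction)
print(per_sample.mean())  # 50.0      ('auto'/'sum_over_batch_size')
print(per_sample.sum())   # 100.0     ('sum')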
Standalone usage:

>>> y_true = [[2., 1.], [2., 3.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> mape = tf.keras.losses.MeanAbsolutePercentageError()
>>> mape(y_true, y_pred).numpy()
50.

>>> # Calling with 'sample_weight'.
>>> mape(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
20.

>>> # Using 'sum' reduction type.
>>> mape = tf.keras.losses.MeanAbsolutePercentageError(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> mape(y_true, y_pred).numpy()
100.

>>> # Using 'none' reduction type.
>>> mape = tf.keras.losses.MeanAbsolutePercentageError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> mape(y_true, y_pred).numpy()
array([25., 75.], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.MeanAbsolutePercentageError())
MeanSquaredLogarithmicError class

tf.keras.losses.MeanSquaredLogarithmicError(
    reduction="auto", name="mean_squared_logarithmic_error"
)

Computes the mean squared logarithmic error between y_true and y_pred.

loss = square(log(y_true + 1.) - log(y_pred + 1.))
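The 0.240 that appears throughout the examples below can be checked with plain NumPy (a minimal sketch of the formula above, not the library implementation):

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [1., 0.]])

# Squared difference of log(1 + x), averaged over the last axis, yields
# one value per sample; here both samples give (log 2)^2 / 2 ~= 0.240.
per_sample = np.mean(np.square(np.log1p(y_true) - np.log1p(y_pred)), axis=-1)
print(per_sample)         # [0.2402... 0.2402...] ('none' reduction)
print(per_sample.mean())  # ~0.240 ('auto'/'sum_over_batch_size')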
Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [1., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> msle = tf.keras.losses.MeanSquaredLogarithmicError()
>>> msle(y_true, y_pred).numpy()
0.240

>>> # Calling with 'sample_weight'.
>>> msle(y_true, y_pred, sample_weight=[0.7, 0.3]).numpy()
0.120

>>> # Using 'sum' reduction type.
>>> msle = tf.keras.losses.MeanSquaredLogarithmicError(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> msle(y_true, y_pred).numpy()
0.480

>>> # Using 'none' reduction type.
>>> msle = tf.keras.losses.MeanSquaredLogarithmicError(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> msle(y_true, y_pred).numpy()
array([0.240, 0.240], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.MeanSquaredLogarithmicError())
CosineSimilarity class

tf.keras.losses.CosineSimilarity(
    axis=-1, reduction="auto", name="cosine_similarity"
)

Computes the cosine similarity between labels and predictions.

Note that the loss value is a number between -1 and 1. When it is a negative number between -1 and 0, 0 indicates orthogonality and values closer to -1 indicate greater similarity; values closer to 1 indicate greater dissimilarity. This makes it usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either y_true or y_pred is a zero vector, the cosine similarity is 0 regardless of the proximity between predictions and targets.

loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
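A minimal standalone sketch in the style of the examples above (the inputs here are assumed, not taken from the original page):

>>> y_true = [[0., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.]]
>>> # The first pair is orthogonal (similarity 0); the second is identical
>>> # (similarity -1 as a loss). The mean over the batch is -0.5.
>>> cosine_loss = tf.keras.losses.CosineSimilarity(axis=-1)
>>> cosine_loss(y_true, y_pred).numpy()
-0.5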