SparseCategoricalCrossentropy class

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False, reduction="auto", name="sparse_categorical_crossentropy"
)
Computes the crossentropy loss between the labels and predictions.

Use this crossentropy loss function when there are two or more label classes. Labels are expected to be provided as integers. If you want to provide labels using a one-hot representation, please use the CategoricalCrossentropy loss. There should be num_classes floating point values per feature for y_pred and a single floating point value per feature for y_true.

In the snippet below, there is a single floating point value per example for y_true and num_classes floating point values per example for y_pred. The shape of y_true is [batch_size] and the shape of y_pred is [batch_size, num_classes].
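To make the shape convention concrete, each example's loss is simply the negative log-probability that y_pred assigns to the integer label. A minimal NumPy sketch of that per-example computation (the variable names are illustrative, not part of the API):

import numpy as np

y_true = np.array([1, 2])                   # shape [batch_size]
y_pred = np.array([[0.05, 0.95, 0.0],
                   [0.10, 0.80, 0.10]])     # shape [batch_size, num_classes]

# Each example contributes -log(probability of its true class).
per_example = -np.log(y_pred[np.arange(len(y_true)), y_true])
print(per_example)          # [0.0513, 2.3026] -- matches the 'none' reduction below
print(per_example.mean())   # 1.177 -- matches the default reduction

(Keras additionally clips probabilities away from 0 and 1 for numerical stability, which does not change these values.)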
Standalone usage:

>>> y_true = [1, 2]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy()
>>> scce(y_true, y_pred).numpy()
1.177

>>> # Calling with 'sample_weight'.
>>> scce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814

>>> # Using 'sum' reduction type.
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> scce(y_true, y_pred).numpy()
2.354

>>> # Using 'none' reduction type.
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> scce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)
Usage with the compile() API:

model.compile(optimizer='sgd',
              loss=tf.keras.losses.SparseCategoricalCrossentropy())
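For a complete training loop, integer labels can be passed straight to fit(). A minimal end-to-end sketch, assuming toy data and layer sizes invented for illustration:

import numpy as np
import tensorflow as tf

# Toy data: 100 examples, 8 features, 3 classes (shapes invented for illustration).
x = np.random.rand(100, 8).astype("float32")
y = np.random.randint(0, 3, size=(100,))      # integer class labels, not one-hot

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3),                 # raw logits, no softmax layer
])
# from_logits=True lets the loss apply the softmax internally, which is
# numerically more stable than adding a separate softmax activation.
model.compile(optimizer="sgd",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(x, y, epochs=1, batch_size=32)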
Poisson class

tf.keras.losses.Poisson(reduction="auto", name="poisson")
Computes the Poisson loss between y_true and y_pred.

loss = y_pred - y_true * log(y_pred)
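The formula is applied element-wise and averaged over the last axis to give one loss value per example; a small epsilon is added inside the log so that log(0) stays finite. A NumPy sketch of that computation (the epsilon value is assumed to match the Keras backend default of 1e-7):

import numpy as np

eps = 1e-7  # assumed to equal tf.keras.backend.epsilon()
y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[1., 1.], [0., 0.]])

# Element-wise Poisson loss, averaged over the last axis per example.
per_example = np.mean(y_pred - y_true * np.log(y_pred + eps), axis=-1)
print(per_example)          # [~0.999, 0.] -- matches the 'none' reduction below
print(per_example.mean())   # ~0.5 -- matches the default reduction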
Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [0., 0.]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> p = tf.keras.losses.Poisson()
>>> p(y_true, y_pred).numpy()
0.5

>>> # Calling with 'sample_weight'.
>>> p(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.4

>>> # Using 'sum' reduction type.
>>> p = tf.keras.losses.Poisson(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> p(y_true, y_pred).numpy()
0.999

>>> # Using 'none' reduction type.
>>> p = tf.keras.losses.Poisson(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> p(y_true, y_pred).numpy()
array([0.999, 0.], dtype=float32)
Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.Poisson())
binary_crossentropy function

tf.keras.losses.binary_crossentropy(
    y_true, y_pred, from_logits=False, label_smoothing=0
)

Computes the binary crossentropy loss.
Standalone usage:

>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> loss = tf.keras.losses.binary_crossentropy(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss.numpy()
array([0.916 , 0.714], dtype=float32)
Arguments

y_true: Ground truth values. shape = [batch_size, d0, .. dN].
y_pred: The predicted values. shape = [batch_size, d0, .. dN].
from_logits: Whether y_pred is expected to be a logits tensor. By default, we assume that y_pred encodes a probability distribution.
label_smoothing: Float in [0, 1]. If > 0, smooth the labels by squeezing them towards 0.5; that is, use 1. - 0.5 * label_smoothing for the target class and 0.5 * label_smoothing for the non-target class.
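The function returns one loss value per example: the element-wise binary crossentropy -(y * log(p) + (1 - y) * log(1 - p)) is averaged over the last axis. A NumPy sketch reproducing the standalone-usage values above (the internal clipping of p away from 0 and 1 is omitted since no value here is at the boundary); the final lines illustrate the label-smoothing rule described in the argument list:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])

# Element-wise binary crossentropy, averaged over the last axis per example.
bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce.mean(axis=-1))    # [0.916, 0.714] -- matches loss.numpy() above

# With label_smoothing > 0, targets are first squeezed towards 0.5,
# then fed through the same formula (smoothing rule per the argument above).
ls = 0.1
y_smooth = y_true * (1.0 - ls) + 0.5 * ls   # [[0.05, 0.95], [0.05, 0.05]]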