BinaryCrossentropy class

tf.keras.losses.BinaryCrossentropy(
    from_logits=False,
    label_smoothing=0,
    reduction="auto",
    name="binary_crossentropy",
)

Computes the cross-entropy loss between true labels and predicted labels.
Use this cross-entropy loss for binary (0 or 1) classification applications. The loss function requires the following inputs:

y_true (true label): This is either 0 or 1.

y_pred (predicted value): This is the model's prediction, i.e., a single floating-point value which represents either a logit (i.e., a value in [-inf, inf] when from_logits=True) or a probability (i.e., a value in [0., 1.] when from_logits=False).
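The two conventions are linked by the sigmoid function, which maps a logit in [-inf, inf] to a probability in [0., 1.]. As a minimal illustration (not part of the original example set), assuming TensorFlow is imported as tf:

import tensorflow as tf

logits = tf.constant([-18.6, 0.51, 2.94, -12.8])
probs = tf.sigmoid(logits)  # squashes each logit into [0., 1.]
print(probs.numpy())        # approx. [8.3e-09, 0.625, 0.950, 2.8e-06]

Passing raw logits with from_logits=True is generally preferred, since it lets the loss use a numerically stable fused formulation internally.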
Recommended Usage: (set from_logits=True)

With tf.keras API:

model.compile(
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    ...
)
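Filled out with a concrete (hypothetical) model, the compile call might look like the sketch below; note that the final Dense layer has no sigmoid activation, since from_logits=True tells the loss to apply it internally:

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1),  # no activation: outputs are logits
])
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=['accuracy'],
)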
As a standalone function:

>>> # Example 1: (batch_size = 1, number of samples = 4)
>>> y_true = [0, 1, 0, 0]
>>> y_pred = [-18.6, 0.51, 2.94, -12.8]
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
>>> bce(y_true, y_pred).numpy()
0.865
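The value 0.865 can be reproduced by hand with the numerically stable logits form of binary cross-entropy, max(x, 0) - x*y + log(1 + exp(-|x|)). The sketch below is an illustration of the arithmetic, not the library's internal code:

import numpy as np

y_true = np.array([0., 1., 0., 0.])
logits = np.array([-18.6, 0.51, 2.94, -12.8])

# Stable rewrite of -[y*log(sigmoid(x)) + (1 - y)*log(1 - sigmoid(x))]
per_sample = np.maximum(logits, 0) - logits * y_true + np.log1p(np.exp(-np.abs(logits)))
print(per_sample.round(3))  # approx. [0.    0.47  2.991 0.   ]
print(per_sample.mean())    # approx. 0.865, matching bce(y_true, y_pred)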
>>> # Example 2: (batch_size = 2, number of samples = 4)
>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[-18.6, 0.51], [2.94, -12.8]]
>>> # Using default 'auto'/'sum_over_batch_size' reduction type.
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
>>> bce(y_true, y_pred).numpy()
0.865
>>> # Using 'sample_weight' attribute
>>> bce(y_true, y_pred, sample_weight=[0.8, 0.2]).numpy()
0.243
>>> # Using 'sum' reduction type.
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,
...     reduction=tf.keras.losses.Reduction.SUM)
>>> bce(y_true, y_pred).numpy()
1.730
>>> # Using 'none' reduction type.
>>> bce = tf.keras.losses.BinaryCrossentropy(from_logits=True,
...     reduction=tf.keras.losses.Reduction.NONE)
>>> bce(y_true, y_pred).numpy()
array([0.235, 1.496], dtype=float32)
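All three reductions derive from the same per-sample losses shown by the NONE output. A quick arithmetic check (values approximate):

import numpy as np

per_sample = np.array([0.235, 1.496])       # the 'none' reduction output
print(per_sample.sum())                     # approx. 1.730 -> 'sum' reduction
print(per_sample.mean())                    # approx. 0.865 -> default 'sum_over_batch_size'
print((per_sample * [0.8, 0.2]).sum() / 2)  # approx. 0.243 -> sample_weight=[0.8, 0.2]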
Default Usage: (set from_logits=False)

>>> # Make the following updates to the above "Recommended Usage" section
>>> # 1. Set `from_logits=False`
>>> tf.keras.losses.BinaryCrossentropy()  # OR ...(from_logits=False)
>>> # 2. Update `y_pred` to use probabilities instead of logits
>>> y_pred = [0.6, 0.3, 0.2, 0.8]  # OR [[0.6, 0.3], [0.2, 0.8]]
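Putting both changes together, a complete probability-based call might look like this sketch; the expected output is what the standard binary cross-entropy formula yields for these inputs (approximately):

>>> y_true = [0, 1, 0, 0]
>>> y_pred = [0.6, 0.3, 0.2, 0.8]  # probabilities, not logits
>>> bce = tf.keras.losses.BinaryCrossentropy()  # from_logits=False by default
>>> bce(y_true, y_pred).numpy()
0.988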
CategoricalCrossentropy class

tf.keras.losses.CategoricalCrossentropy(
    from_logits=False,
    label_smoothing=0,
    reduction="auto",
    name="categorical_crossentropy",
)
Computes the crossentropy loss between the labels and predictions.

Use this crossentropy loss function when there are two or more label classes. Labels are expected in a one-hot representation; if you want to provide labels as integers, use the SparseCategoricalCrossentropy loss instead. There should be num_classes floating-point values per feature.

In the snippet below, there are num_classes floating-point values per example. The shape of both y_pred and y_true is [batch_size, num_classes].
Standalone usage:

>>> y_true = [[0, 1, 0], [0, 0, 1]]
>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> cce = tf.keras.losses.CategoricalCrossentropy()
>>> cce(y_true, y_pred).numpy()
1.177
>>> # Calling with 'sample_weight'.
>>> cce(y_true, y_pred, sample_weight=tf.constant([0.3, 0.7])).numpy()
0.814
>>> # Using 'sum' reduction type.
>>> cce = tf.keras.losses.CategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> cce(y_true, y_pred).numpy()
2.354
>>> # Using 'none' reduction type.
>>> cce = tf.keras.losses.CategoricalCrossentropy(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> cce(y_true, y_pred).numpy()
array([0.0513, 2.303], dtype=float32)
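With one-hot labels, the per-sample crossentropy reduces to the negative log of the probability assigned to the true class, which is exactly where the NONE output comes from. A hand check (an illustration, not the library internals; Keras additionally clips probabilities to avoid log(0)):

import numpy as np

y_pred = np.array([[0.05, 0.95, 0.0], [0.1, 0.8, 0.1]])
true_class = [1, 2]  # index of the 1 in each one-hot row of y_true

per_sample = -np.log(y_pred[[0, 1], true_class] + 1e-7)  # epsilon guards log(0)
print(per_sample.round(4))  # approx. [0.0513 2.3026]
print(per_sample.mean())    # approx. 1.177, the default-reduction value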
Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalCrossentropy())
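Because CategoricalCrossentropy expects one-hot labels while SparseCategoricalCrossentropy (next) expects integer class indices, the two losses agree on equivalent inputs. A minimal sketch of that equivalence using the y_pred from above:

>>> y_pred = [[0.05, 0.95, 0], [0.1, 0.8, 0.1]]
>>> cce = tf.keras.losses.CategoricalCrossentropy()
>>> scce = tf.keras.losses.SparseCategoricalCrossentropy()
>>> cce([[0, 1, 0], [0, 0, 1]], y_pred).numpy()  # one-hot labels
1.177
>>> scce([1, 2], y_pred).numpy()  # the same labels as class indices
1.177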
SparseCategoricalCrossentropy class

tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    reduction="auto",
    name="sparse_categorical_crossentropy",
)