Customizing what happens in fit()

Introduction

When you need to customize what fit() does, you should override the training step function of the Model class. This is the function that is called by fit() for every batch of data. You will then be able to call fit() as usual, and it will be running your own learning algorithm.

Note that this pattern does not prevent you from building models with the Functional API. You can do this whether you're building Sequential models, Functional API models, or subclassed models.

Let's see how that works.
Setup

Requires TensorFlow 2.2 or later.

import tensorflow as tf
from tensorflow import keras
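If you want to confirm that your runtime meets this requirement, a quick check (purely illustrative):

print("TensorFlow version:", tf.__version__)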
A first simple example

Let's start from a simple example:

- We create a new class that subclasses keras.Model.
- We just override the method train_step(self, data).
- We return a dictionary mapping metric names (including the loss) to their current value.
The input argument data is what gets passed to fit as training data:

- If you pass Numpy arrays, by calling fit(x, y, ...), then data will be the tuple (x, y).
- If you pass a tf.data.Dataset, by calling fit(dataset, ...), then data will be what gets yielded by dataset at each batch (see the sketch after this list).
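For instance, here is a minimal sketch of the Dataset case. The shapes are illustrative, and the fit() call assumes a compiled model like the one we build below:

import numpy as np
import tensorflow as tf

x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

# Inside train_step(), `data` will be one (x_batch, y_batch) tuple
# as yielded by the dataset:
# model.fit(dataset, epochs=3)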
In the body of the train_step method, we implement a regular training update, similar to what you are already familiar with. Importantly, we compute the loss via self.compiled_loss, which wraps the loss function(s) that were passed to compile().

Similarly, we call self.compiled_metrics.update_state(y, y_pred) to update the state of the metrics that were passed in compile(), and we query results from self.metrics at the end to retrieve their current value.
class CustomModel(keras.Model):
    def train_step(self, data):
        # Unpack the data. Its structure depends on your model and
        # on what you pass to `fit()`.
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute the loss value
            # (the loss function is configured in `compile()`)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        # Update metrics (includes the metric that tracks the loss)
        self.compiled_metrics.update_state(y, y_pred)
        # Return a dict mapping metric names to current value
        return {m.name: m.result() for m in self.metrics}
Let's try this out:

import numpy as np

# Construct and compile an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Just use `fit` as usual
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=3)
Epoch 1/3
32/32 [==============================] - 0s 721us/step - loss: 0.5791 - mae: 0.6232
Epoch 2/3
32/32 [==============================] - 0s 601us/step - loss: 0.2739 - mae: 0.4296
Epoch 3/3
32/32 [==============================] - 0s 576us/step - loss: 0.2547 - mae: 0.4078
<tensorflow.python.keras.callbacks.History at 0x1423856d0>
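As an aside, fit() returns a History object whose history attribute maps each metric name to its list of per-epoch values. A minimal usage sketch, reusing the model and data from above:

history = model.fit(x, y, epochs=3)
print(history.history["loss"])  # one entry per epoch
print(history.history["mae"])   # likewise for each compiled metric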
Going lower-level

Naturally, you could just skip passing a loss function in compile(), and instead do everything manually in train_step. Likewise for metrics.

Here's a lower-level example that only uses compile() to configure the optimizer:

- We start by creating Metric instances to track our loss and an MAE score.
- We implement a custom train_step() that updates the state of these metrics (by calling update_state() on them), then queries them (via result()) to return their current average value, to be displayed by the progress bar and passed to any callbacks.

Note that we would need to call reset_states() on our metrics between each epoch! Otherwise calling result() would return an average since the start of training, whereas we usually work with per-epoch averages. Thankfully, the framework can do that for us: just list any metric you want to reset in the metrics property of the model. The model will call reset_states() on any object listed here at the beginning of each fit() epoch or at the beginning of a call to evaluate().
loss_tracker = keras.metrics.Mean(name="loss")
mae_metric = keras.metrics.MeanAbsoluteError(name="mae")


class CustomModel(keras.Model):
    def train_step(self, data):
        x, y = data

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass
            # Compute our own loss
            loss = keras.losses.mean_squared_error(y, y_pred)

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Compute our own metrics
        loss_tracker.update_state(loss)
        mae_metric.update_state(y, y_pred)
        return {"loss": loss_tracker.result(), "mae": mae_metric.result()}

    @property
    def metrics(self):
        # We list our `Metric` objects here so that `reset_states()` can be
        # called automatically at the start of each epoch or at the start of
        # a call to `evaluate()`.
        return [loss_tracker, mae_metric]
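We can then build and train this model exactly as before. Since the loss and metrics are handled inside train_step(), compile() only needs the optimizer; a sketch following the pattern of the first example:

# Construct an instance of CustomModel
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = CustomModel(inputs, outputs)

# We don't pass a loss or metrics here.
model.compile(optimizer="adam")

# Just use `fit` as usual -- you can use callbacks, etc.
x = np.random.random((1000, 32))
y = np.random.random((1000, 1))
model.fit(x, y, epochs=5)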