However, model subclassing provides greater flexibility when building models that are not easily expressible as directed acyclic graphs of layers. For example, you could not implement a Tree-RNN with the functional API and would have to subclass Model directly.

For an in-depth look at the differences between the functional API and model subclassing, read What are Symbolic and Imperative APIs in TensorFlow 2.0?.

Functional API strengths:

The following properties are also true for Sequential models (which are also data structures), but are not true for subclassed models (which are Python bytecode, not data structures).

Less verbose

There is no super(MyClass, self).__init__(...), no def call(self, ...):, etc.

Compare:
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
mlp = keras.Model(inputs, outputs)

With the subclassed version:

class MLP(keras.Model):

    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.dense_1 = layers.Dense(64, activation='relu')
        self.dense_2 = layers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

# Instantiate the model.
mlp = MLP()

# Necessary to create the model's state.
# The model doesn't have a state until it's called at least once.
_ = mlp(tf.zeros((1, 32)))
Model validation while defining its connectivity graph

In the functional API, the input specification (shape and dtype) is created in advance (using Input). Every time you call a layer, the layer checks that the specification passed to it matches its assumptions, and it will raise a helpful error message if not.

This guarantees that any model you can build with the functional API will run. All debugging -- other than convergence-related debugging -- happens statically during the model construction and not at execution time. This is similar to type checking in a compiler.
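As a minimal sketch of this behavior (the exact exception message, and arguably its type, vary across TensorFlow versions):

dense = layers.Dense(64, activation='relu')
dense(keras.Input(shape=(32,)))  # Builds the layer for inputs of shape (None, 32).

try:
    # A 16-dimensional input violates the layer's input spec, so this
    # fails during model construction, before any data is involved.
    dense(keras.Input(shape=(16,)))
except ValueError as e:  # Assumed exception type; message varies by version.
    print('Caught expected error:', e)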
A functional model is plottable and inspectable

You can plot the model as a graph, and you can easily access intermediate nodes in this graph. For example, to extract and reuse the activations of intermediate layers (as seen in a previous example):

features_list = [layer.output for layer in vgg19.layers]
feat_extraction_model = keras.Model(inputs=vgg19.input, outputs=features_list)
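To plot the graph itself, a minimal sketch using keras.utils.plot_model (this assumes pydot and graphviz are installed; the filename is illustrative):

keras.utils.plot_model(feat_extraction_model, 'feature_extractor.png', show_shapes=True)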
A functional model can be serialized or cloned

Because a functional model is a data structure rather than a piece of code, it is safely serializable and can be saved as a single file that allows you to recreate the exact same model without having access to any of the original code. See the serialization & saving guide.
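A minimal sketch of that round trip (the filename and HDF5 format are illustrative choices; the guide covers the available formats):

# Rebuild the small functional MLP from earlier, save it to a single
# file, and reload it without access to the code that built it.
inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
outputs = layers.Dense(10)(x)
functional_mlp = keras.Model(inputs, outputs)

functional_mlp.save('functional_mlp.h5')  # Illustrative filename/format.
reloaded = keras.models.load_model('functional_mlp.h5')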
To serialize a subclassed model, the implementer must instead specify get_config() and from_config() methods at the model level, as sketched below.
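A minimal sketch of what that looks like (ConfigurableMLP is a hypothetical variant of the MLP above; its constructor arguments are recorded in the config so the model can be re-instantiated from it):

class ConfigurableMLP(keras.Model):

    def __init__(self, hidden_units=64, num_classes=10, **kwargs):
        super(ConfigurableMLP, self).__init__(**kwargs)
        self.hidden_units = hidden_units
        self.num_classes = num_classes
        self.dense_1 = layers.Dense(hidden_units, activation='relu')
        self.dense_2 = layers.Dense(num_classes)

    def call(self, inputs):
        return self.dense_2(self.dense_1(inputs))

    def get_config(self):
        # Record the constructor arguments needed to rebuild the model.
        return {'hidden_units': self.hidden_units,
                'num_classes': self.num_classes}

    @classmethod
    def from_config(cls, config):
        return cls(**config)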
Functional API weakness:

It does not support dynamic architectures

The functional API treats models as DAGs of layers. This is true for most deep learning architectures, but not all -- for example, recursive networks or Tree RNNs do not follow this assumption and cannot be implemented in the functional API.
Mix-and-match API styles

Choosing between the functional API or Model subclassing isn't a binary decision that restricts you to one category of models. All models in the tf.keras API can interact with each other, whether they're Sequential models, functional models, or subclassed models written from scratch.

You can always use a functional model or Sequential model as part of a subclassed model or layer:
units = 32
timesteps = 10
input_dim = 5

# Define a Functional model
inputs = keras.Input((None, units))
x = layers.GlobalAveragePooling1D()(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

class CustomRNN(layers.Layer):

    def __init__(self):
        super(CustomRNN, self).__init__()
        self.units = units
        self.projection_1 = layers.Dense(units=units, activation="tanh")
        self.projection_2 = layers.Dense(units=units, activation="tanh")
        # Our previously-defined Functional model
        self.classifier = model

    def call(self, inputs):
        outputs = []
        state = tf.zeros(shape=(inputs.shape[0], self.units))
        for t in range(inputs.shape[1]):
            x = inputs[:, t, :]
            h = self.projection_1(x)
            y = h + self.projection_2(state)
            state = y
            outputs.append(y)
        features = tf.stack(outputs, axis=1)
        print(features.shape)
        return self.classifier(features)

rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, timesteps, input_dim)))

(1, 10, 32)
You can use any subclassed layer or model in the functional API, as long as it implements a call method that follows one of the following patterns (a minimal sketch follows the list):

call(self, inputs, **kwargs) -- Where inputs is a tensor or a nested structure of tensors (e.g. a list of tensors), and where **kwargs are non-tensor arguments (non-inputs).

call(self, inputs, training=None, **kwargs) -- Where training is a boolean indicating whether the layer should behave in training mode or in inference mode.

call(self, inputs, mask=None, **kwargs) -- Where mask is a boolean mask tensor (useful for RNNs, for instance).
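As a minimal sketch of the second pattern (CustomDropout is a hypothetical layer written for illustration):

class CustomDropout(layers.Layer):

    def __init__(self, rate, **kwargs):
        super(CustomDropout, self).__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        # Only drop units in training mode; pass inputs through at inference.
        if training:
            return tf.nn.dropout(inputs, rate=self.rate)
        return inputs

inputs = keras.Input(shape=(32,))
x = layers.Dense(64, activation='relu')(inputs)
x = CustomDropout(0.5)(x)
outputs = layers.Dense(10)(x)
model_with_dropout = keras.Model(inputs, outputs)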