Use the same graph of layers to define multiple models
In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means that a single graph of layers can be used to generate multiple models.
In the example below, you use the same stack of layers to instantiate two models: an encoder model that turns image inputs into 16-dimensional vectors, and an end-to-end autoencoder model for training.
```python
# Imports assumed by this guide (TF-era Keras, matching the summaries below).
from tensorflow import keras
from tensorflow.keras import layers

encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()

x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)

autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()
```
```
Model: "encoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
img (InputLayer)             [(None, 28, 28, 1)]      0
_________________________________________________________________
conv2d (Conv2D)              (None, 26, 26, 16)       160
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 32)       4640
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 8, 8, 32)         0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 6, 6, 32)         9248
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 4, 4, 16)         4624
_________________________________________________________________
global_max_pooling2d (Global (None, 16)               0
=================================================================
Total params: 18,672
Trainable params: 18,672
Non-trainable params: 0
_________________________________________________________________

Model: "autoencoder"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
img (InputLayer)             [(None, 28, 28, 1)]      0
_________________________________________________________________
conv2d (Conv2D)              (None, 26, 26, 16)       160
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 32)       4640
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 8, 8, 32)         0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 6, 6, 32)         9248
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 4, 4, 16)         4624
_________________________________________________________________
global_max_pooling2d (Global (None, 16)               0
_________________________________________________________________
reshape (Reshape)            (None, 4, 4, 1)          0
_________________________________________________________________
conv2d_transpose (Conv2DTran (None, 6, 6, 16)         160
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr (None, 8, 8, 32)         4640
_________________________________________________________________
up_sampling2d (UpSampling2D) (None, 24, 24, 32)       0
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr (None, 26, 26, 16)       4624
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr (None, 28, 28, 1)        145
=================================================================
Total params: 28,241
Trainable params: 28,241
Non-trainable params: 0
_________________________________________________________________
```
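Since the autoencoder is described as the end-to-end model for training, here is a minimal training sketch. It is an assumed setup, not part of the guide: it uses an MSE reconstruction loss and random stand-in images, and rebuilds the model so the snippet runs on its own.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Rebuild the autoencoder from above so this sketch is self-contained.
encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")

# The target is the input itself: the model learns to reconstruct images.
autoencoder.compile(optimizer="adam", loss="mse")
x_train = np.random.random((8, 28, 28, 1)).astype("float32")  # stand-in data
history = autoencoder.fit(x_train, x_train, epochs=1, batch_size=4, verbose=0)
```

In real use, `x_train` would be actual image data (e.g. MNIST digits scaled to [0, 1]), passed as both the inputs and the targets.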
Here, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape (28, 28, 1).
The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse of a MaxPooling2D layer is an UpSampling2D layer.
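The size arithmetic behind this symmetry can be checked directly. This is a plain-Python sketch, not Keras API; `conv_out` and `conv_transpose_out` are hypothetical helpers encoding the rules for `padding="valid"` and stride 1.

```python
def conv_out(n, k):
    # Conv2D, padding="valid", stride 1: each spatial side shrinks by k - 1
    return n - k + 1

def conv_transpose_out(n, k):
    # Conv2DTranspose, padding="valid", stride 1: each side grows by k - 1
    return n + k - 1

# Encoder side: 28 -> 26 -> 24 -> 8 -> 6 -> 4, matching the summary above.
n = 28
n = conv_out(n, 3)  # 26
n = conv_out(n, 3)  # 24
n = n // 3          # MaxPooling2D(3): 8
n = conv_out(n, 3)  # 6
n = conv_out(n, 3)  # 4

# Decoder side mirrors it back: 4 -> 6 -> 8 -> 24 -> 26 -> 28.
m = 4
m = conv_transpose_out(m, 3)  # 6
m = conv_transpose_out(m, 3)  # 8
m = m * 3                     # UpSampling2D(3): 24
m = conv_transpose_out(m, 3)  # 26
m = conv_transpose_out(m, 3)  # 28
```

Note that the pooling/upsampling pair is only an exact inverse when the size is divisible by the pool factor, which holds at every step here.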
All models are callable, just like layers
You can treat any model as if it were a layer by invoking it on an Input or on the output of another layer. By calling a model you aren't just reusing the architecture of the model, you're also reusing its weights.
To see this in action, here's a different take on the autoencoder example that creates an encoder model, a decoder model, and chains them in two calls to obtain the autoencoder model:
```python
encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()
```
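The paragraph above describes building a separate decoder model and then chaining the two models. A sketch of those remaining steps, with the encoder rebuilt so the snippet runs on its own (input names such as `encoded_img` are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Encoder model, as above.
encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)
encoder = keras.Model(encoder_input, encoder_output, name="encoder")

# A standalone decoder model with its own 16-dimensional input.
decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")

# Chain the two models: calling a model reuses its architecture and weights.
autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
```

Because the encoder and decoder are full models here, each can also be saved, inspected, or used on its own.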