# Unfreeze the base_model. Note that it keeps running in inference mode
# since we passed `training=False` when calling it. This means that
# the batchnorm layers will not update their batch statistics.
# This prevents the batchnorm layers from undoing all the training
# we've done so far.
base_model.trainable = True
model.summary()

model.compile(
    optimizer=keras.optimizers.Adam(1e-5),  # Low learning rate
    loss=keras.losses.BinaryCrossentropy(from_logits=True),
    metrics=[keras.metrics.BinaryAccuracy()],
)

epochs = 10
model.fit(train_ds, epochs=epochs, validation_data=validation_ds)
Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_5 (InputLayer)         [(None, 150, 150, 3)]     0
_________________________________________________________________
sequential_3 (Sequential)    (None, 150, 150, 3)       0
_________________________________________________________________
normalization (Normalization (None, 150, 150, 3)       7
_________________________________________________________________
xception (Model)             (None, 5, 5, 2048)        20861480
_________________________________________________________________
global_average_pooling2d (Gl (None, 2048)              0
_________________________________________________________________
dropout (Dropout)            (None, 2048)              0
_________________________________________________________________
dense_7 (Dense)              (None, 1)                 2049
=================================================================
Total params: 20,863,536
Trainable params: 20,809,001
Non-trainable params: 54,535
_________________________________________________________________
Epoch 1/10
291/291 [==============================] - 92s 318ms/step - loss: 0.0766 - binary_accuracy: 0.9710 - val_loss: 0.0571 - val_binary_accuracy: 0.9772
Epoch 2/10
291/291 [==============================] - 90s 308ms/step - loss: 0.0534 - binary_accuracy: 0.9800 - val_loss: 0.0471 - val_binary_accuracy: 0.9807
Epoch 3/10
291/291 [==============================] - 90s 308ms/step - loss: 0.0491 - binary_accuracy: 0.9799 - val_loss: 0.0411 - val_binary_accuracy: 0.9815
Epoch 4/10
291/291 [==============================] - 90s 308ms/step - loss: 0.0349 - binary_accuracy: 0.9868 - val_loss: 0.0438 - val_binary_accuracy: 0.9832
Epoch 5/10
291/291 [==============================] - 89s 307ms/step - loss: 0.0302 - binary_accuracy: 0.9881 - val_loss: 0.0440 - val_binary_accuracy: 0.9837
Epoch 6/10
291/291 [==============================] - 90s 308ms/step - loss: 0.0290 - binary_accuracy: 0.9890 - val_loss: 0.0445 - val_binary_accuracy: 0.9832
Epoch 7/10
291/291 [==============================] - 90s 310ms/step - loss: 0.0209 - binary_accuracy: 0.9920 - val_loss: 0.0527 - val_binary_accuracy: 0.9811
Epoch 8/10
291/291 [==============================] - 91s 311ms/step - loss: 0.0162 - binary_accuracy: 0.9940 - val_loss: 0.0510 - val_binary_accuracy: 0.9828
Epoch 9/10
291/291 [==============================] - 91s 311ms/step - loss: 0.0199 - binary_accuracy: 0.9933 - val_loss: 0.0470 - val_binary_accuracy: 0.9867
Epoch 10/10
291/291 [==============================] - 90s 308ms/step - loss: 0.0128 - binary_accuracy: 0.9953 - val_loss: 0.0471 - val_binary_accuracy: 0.9845

<tensorflow.python.keras.callbacks.History at 0x7f3c0ca6d0f0>
After 10 epochs, fine-tuning gains us a nice improvement here.
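For reference, the `training=False` behavior discussed above depends on how the model was originally assembled. The following is a minimal sketch of a construction consistent with the summary printed earlier; the augmentation layers, dropout rate, and normalization details are assumptions reconstructed from that summary, not code from this section.

import tensorflow as tf
from tensorflow import keras

# Assumed augmentation pipeline (the `sequential_3` layer in the summary).
data_augmentation = keras.Sequential(
    [
        keras.layers.experimental.preprocessing.RandomFlip("horizontal"),
        keras.layers.experimental.preprocessing.RandomRotation(0.1),
    ]
)

base_model = keras.applications.Xception(
    weights="imagenet",       # pre-trained on ImageNet
    input_shape=(150, 150, 3),
    include_top=False,        # drop the ImageNet classifier head
)
base_model.trainable = False  # frozen for the initial training round

inputs = keras.Input(shape=(150, 150, 3))
x = data_augmentation(inputs)
# Assumed input scaling (the `normalization` layer in the summary); in
# practice you would `adapt()` this layer or set its weights explicitly.
x = keras.layers.experimental.preprocessing.Normalization()(x)
# `training=False` keeps batchnorm in inference mode, even after the
# base model is later unfrozen for fine-tuning.
x = base_model(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
x = keras.layers.Dropout(0.2)(x)    # dropout rate is an assumption
outputs = keras.layers.Dense(1)(x)  # a single logit, hence from_logits=True
model = keras.Model(inputs, outputs)

Because `training=False` is baked into the call, setting `base_model.trainable = True` unfreezes the weights without switching the batchnorm layers back to training-mode statistics.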
Making new layers and models via subclassing

Author: fchollet
Date created: 2019/03/01
Last modified: 2020/04/13
Description: Complete guide to writing Layer and Model objects from scratch.
Setup

import tensorflow as tf
from tensorflow import keras
The Layer class: the combination of state (weights) and some computation

One of the central abstractions in Keras is the Layer class. A layer encapsulates both a state (the layer's "weights") and a transformation from inputs to outputs (a "call", the layer's forward pass).

Here's a densely-connected layer. It has a state: the variables w and b.
class Linear(keras.layers.Layer):
    def __init__(self, units=32, input_dim=32):
        super(Linear, self).__init__()
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_dim, units), dtype="float32"),
            trainable=True,
        )
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(
            initial_value=b_init(shape=(units,), dtype="float32"), trainable=True
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
You would use a layer by calling it on some tensor input(s), much like a Python function.

x = tf.ones((2, 2))
linear_layer = Linear(4, 2)
y = linear_layer(x)
print(y)

tf.Tensor(
[[ 0.01013444 -0.01070027 -0.01888977  0.05208318]
 [ 0.01013444 -0.01070027 -0.01888977  0.05208318]], shape=(2, 4), dtype=float32)
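Note that because w and b were set as attributes of the layer, they are automatically tracked and exposed through the layer's weights property. A quick check, using the linear_layer instance created above:

# The tracked weights are exactly the variables we assigned in __init__,
# in assignment order.
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
print(len(linear_layer.weights))  # 2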