call(self, inputs, training=None, mask=None, **kwargs) -- Of course, you can have both masking and training-specific behavior at the same time.
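A minimal sketch of a call() that uses both arguments; the masking and training-time noise behaviors below are assumptions chosen purely for illustration, not something prescribed by this guide:

import tensorflow as tf
from tensorflow.keras import layers

class NoisyMaskedLayer(layers.Layer):
    def call(self, inputs, training=None, mask=None):
        if mask is not None:
            # Zero out masked timesteps (mask has shape (batch, timesteps)).
            inputs = inputs * tf.cast(tf.expand_dims(mask, -1), inputs.dtype)
        if training:
            # Add noise only during training.
            inputs = inputs + tf.random.normal(tf.shape(inputs), stddev=0.1)
        return inputs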
Additionally, if you implement the get_config method on your custom Layer or model, the functional models you create will still be serializable and cloneable.
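Here is a minimal sketch of what implementing get_config could look like; the Linear layer below is made up for illustration:

from tensorflow.keras import layers

class Linear(layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.dense = layers.Dense(units)

    def call(self, inputs):
        return self.dense(inputs)

    def get_config(self):
        # Return the constructor arguments needed to rebuild this layer.
        config = super().get_config()
        config.update({"units": self.units})
        return config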
Here's a quick example of a custom RNN, written from scratch, being used in a functional model:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

units = 32
timesteps = 10
input_dim = 5
batch_size = 16


class CustomRNN(layers.Layer):
    def __init__(self):
        super(CustomRNN, self).__init__()
        self.units = units
        self.projection_1 = layers.Dense(units=units, activation="tanh")
        self.projection_2 = layers.Dense(units=units, activation="tanh")
        self.classifier = layers.Dense(1)

    def call(self, inputs):
        outputs = []
        state = tf.zeros(shape=(inputs.shape[0], self.units))
        for t in range(inputs.shape[1]):
            x = inputs[:, t, :]
            h = self.projection_1(x)
            y = h + self.projection_2(state)
            state = y
            outputs.append(y)
        features = tf.stack(outputs, axis=1)
        return self.classifier(features)


# Note that you specify a static batch size for the inputs with the `batch_shape`
# arg, because the inner computation of `CustomRNN` requires a static batch size
# (when you create the `state` zeros tensor).
inputs = keras.Input(batch_shape=(batch_size, timesteps, input_dim))
x = layers.Conv1D(32, 3)(inputs)
outputs = CustomRNN()(x)

model = keras.Model(inputs, outputs)

rnn_model = CustomRNN()
_ = rnn_model(tf.zeros((1, 10, 5)))

Working with preprocessing layers
Authors: Francois Chollet, Mark Omernick

Date created: 2020/07/25

Last modified: 2021/04/23

Description: Overview of how to leverage preprocessing layers to create end-to-end models.
Keras preprocessing layers
The Keras preprocessing layers API allows developers to build Keras-native input processing pipelines. These input processing pipelines can be used as independent preprocessing code in non-Keras workflows, combined directly with Keras models, and exported as part of a Keras SavedModel.

With Keras preprocessing layers, you can build and export models that are truly end-to-end: models that accept raw images or raw structured data as input; models that handle feature normalization or feature value indexing on their own.
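For example, a preprocessing layer can live inside the model itself, so the exported model accepts raw images in the [0, 255] range. This is only a minimal sketch; the input shape and the layers after the rescaling step are arbitrary choices for illustration:

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

inputs = keras.Input(shape=(64, 64, 3))
x = preprocessing.Rescaling(1.0 / 255)(inputs)      # normalization happens inside the model
x = layers.Conv2D(16, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)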
Available preprocessing layers
Core preprocessing layers

TextVectorization layer: turns raw strings into an encoded representation that can be read by an Embedding layer or Dense layer.

Normalization layer: performs feature-wise normalization of input features.
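For instance, a minimal TextVectorization sketch; the tiny corpus used here is a made-up assumption for illustration:

import numpy as np
from tensorflow.keras.layers.experimental import preprocessing

# A tiny, made-up corpus of raw strings; adapt() builds the vocabulary from it.
corpus = np.array(["the cat sat", "the dog ran", "the cat ran"])
vectorizer = preprocessing.TextVectorization(output_mode="int")
vectorizer.adapt(corpus)
# Raw strings in, integer token ids out (ready for an Embedding layer).
print(vectorizer(np.array(["the cat ran"])))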
Structured data preprocessing layers
These layers are for structured data encoding and feature engineering; a short usage sketch follows the list.

CategoryEncoding layer: turns integer categorical features into one-hot, multi-hot, or count dense representations.

Hashing layer: performs categorical feature hashing, also known as the "hashing trick".

Discretization layer: turns continuous numerical features into integer categorical features.

StringLookup layer: turns string categorical values into an encoded representation that can be read by an Embedding layer or Dense layer.

IntegerLookup layer: turns integer categorical values into an encoded representation that can be read by an Embedding layer or Dense layer.

CategoryCrossing layer: combines categorical features into co-occurrence features. E.g. if you have feature values "a" and "b", it can provide the combination feature "a and b are present at the same time".
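A hedged sketch of two of these layers on a made-up string feature; the feature values and the bin count are assumptions chosen for illustration:

import numpy as np
from tensorflow.keras.layers.experimental import preprocessing

# Hypothetical categorical string feature.
colors = np.array([["red"], ["blue"], ["green"], ["blue"]])

# StringLookup learns a string -> integer index mapping from the data.
lookup = preprocessing.StringLookup()
lookup.adapt(colors)
print(lookup(colors))  # integer indices (low indices are reserved for mask/OOV tokens)

# Hashing needs no adapt() step: it buckets values with a fixed hash function.
hasher = preprocessing.Hashing(num_bins=8)
print(hasher(colors))  # bucket ids in [0, 8)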
Image preprocessing layers
These layers are for standardizing the inputs of an image model; see the sketch after the list.

Resizing layer: resizes a batch of images to a target size.

Rescaling layer: rescales and offsets the values of a batch of images (e.g. going from inputs in the [0, 255] range to inputs in the [0, 1] range).

CenterCrop layer: returns a center crop of a batch of images.
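A minimal sketch chaining the three layers on a dummy image batch; the image and crop sizes are arbitrary choices for illustration:

import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

# Dummy batch of 4 RGB images with values in [0, 255].
images = tf.random.uniform((4, 300, 200, 3), minval=0, maxval=255)

resize = preprocessing.Resizing(height=150, width=150)
rescale = preprocessing.Rescaling(scale=1.0 / 255)   # [0, 255] -> [0, 1]
crop = preprocessing.CenterCrop(height=128, width=128)

x = crop(rescale(resize(images)))
print(x.shape)  # (4, 128, 128, 3)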
Image data augmentation layers
These layers apply random augmentation transforms to a batch of images. They are only active during training; a sketch of a small augmentation pipeline follows the list.

RandomCrop layer

RandomFlip layer

RandomTranslation layer

RandomRotation layer

RandomZoom layer

RandomHeight layer

RandomWidth layer
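A minimal sketch of such a pipeline; the specific layers and factors chosen here are illustrative assumptions:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers.experimental import preprocessing

data_augmentation = keras.Sequential([
    preprocessing.RandomFlip("horizontal"),
    preprocessing.RandomRotation(0.1),  # up to +/-10% of a full circle
    preprocessing.RandomZoom(0.2),
])

images = tf.random.uniform((4, 64, 64, 3))
augmented = data_augmentation(images, training=True)     # random transforms applied
passthrough = data_augmentation(images, training=False)  # inactive outside training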
The adapt() method
Some preprocessing layers have an internal state that must be computed based on a sample of the training data. The list of stateful preprocessing layers is:

TextVectorization: holds a mapping between string tokens and integer indices.

StringLookup and IntegerLookup: hold a mapping between input values and integer indices.

Normalization: holds the mean and standard deviation of the features.

Discretization: holds information about value bucket boundaries.

Crucially, these layers are non-trainable. Their state is not set during training; it must be set before training, a step called "adaptation".

You set the state of a preprocessing layer by exposing it to training data, via the adapt() method:
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing

data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7],])

# Normalization is one of the stateful layers listed above; adapt() computes
# its per-feature mean and variance from the sample data.
layer = preprocessing.Normalization()
layer.adapt(data)
normalized_data = layer(data)
print("Features mean: %.2f" % normalized_data.numpy().mean())