)(x)
model = keras.Model(inputs, outputs)
You can see a similar setup in action in the example image classification from scratch.
Normalizing numerical features
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10

# Create a Normalization layer and set its internal state using the training data
normalizer = preprocessing.Normalization()
normalizer.adapt(x_train)

# Create a model that includes the normalization layer
inputs = keras.Input(shape=input_shape)
x = normalizer(inputs)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)

# Train the model
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x_train, y_train)
1563/1563 [==============================] - 3s 2ms/step - loss: 2.1828
<tensorflow.python.keras.callbacks.History at 0x7f049093f130>
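As a quick sanity check (a minimal sketch, not part of the original example; it reuses the adapted normalizer and x_train from above), the adapted layer should map the training features to roughly zero mean and unit variance:

# Sanity check (reuses the adapted normalizer and x_train from above): the
# normalized features should have roughly zero mean and unit variance.
normalized = normalizer(x_train)
print("mean:", float(tf.reduce_mean(normalized)))
print("variance:", float(tf.math.reduce_variance(normalized)))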
Encoding string categorical features via one-hot encoding
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])

# Use StringLookup to build an index of the feature values and encode output.
lookup = preprocessing.StringLookup(output_mode="binary")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.constant([["a"], ["b"], ["c"], ["d"], ["e"], [""]])
encoded_data = lookup(test_data)
print(encoded_data)
tf.Tensor(
[[0. 0. 0. 1.]
 [0. 0. 1. 0.]
 [0. 1. 0. 0.]
 [1. 0. 0. 0.]
 [1. 0. 0. 0.]
 [0. 0. 0. 0.]], shape=(6, 4), dtype=float32)
Note that index 0 is reserved for missing values (which you should specify as the empty string ""), and index 1 is reserved for out-of-vocabulary values (values that were not seen during adapt()). You can configure this by using the mask_token and oov_token constructor arguments of StringLookup.
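If you want different sentinel tokens, the same layer can be constructed with them set explicitly (a minimal sketch, not part of the original example; the "[MASK]" and "[OOV]" strings are arbitrary placeholders, and the data tensor is the one defined above):

# Hedged sketch: setting the reserved tokens explicitly.
# "[MASK]" and "[OOV]" are arbitrary placeholder values.
lookup = preprocessing.StringLookup(
    mask_token="[MASK]", oov_token="[OOV]", output_mode="binary"
)
lookup.adapt(data)
print(lookup.get_vocabulary())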
You can see the StringLookup in action in the Structured data classification from scratch example.
Encoding integer categorical features via one-hot encoding
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])

# Use IntegerLookup to build an index of the feature values and encode output.
lookup = preprocessing.IntegerLookup(output_mode="binary")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.constant([[10], [10], [20], [50], [60], [0]])
encoded_data = lookup(test_data)
print(encoded_data)
tf.Tensor(
[[0. 0. 1. 0.]
 [0. 0. 1. 0.]
 [0. 1. 0. 0.]
 [1. 0. 0. 0.]
 [1. 0. 0. 0.]
 [0. 0. 0. 0.]], shape=(6, 4), dtype=float32)
Note that index 0 is reserved for missing values (which you should specify as the value 0), and index 1 is reserved for out-of-vocabulary values (values that were not seen during adapt()). You can configure this by using the mask_token and oov_token constructor arguments of IntegerLookup.
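The same kind of configuration applies here (a minimal sketch, not part of the original example; note that some releases of the experimental API name these arguments mask_value and oov_value instead):

# Hedged sketch: setting the reserved values explicitly.
# Some releases name these arguments mask_value / oov_value.
lookup = preprocessing.IntegerLookup(mask_token=0, oov_token=-1, output_mode="binary")
lookup.adapt(data)
print(lookup.get_vocabulary())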
You can see the IntegerLookup in action in the example structured data classification from scratch.
Applying the hashing trick to an integer categorical feature
If you have a categorical feature that can take many different values (on the order of 10e3 or higher), where each value only appears a few times in the data, it becomes impractical and ineffective to index and one-hot encode the feature values. Instead, it can be a good idea to apply the "hashing trick": hash the values to a vector of fixed size. This keeps the size of the feature space manageable, and removes the need for explicit indexing.
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))

# Use the Hashing layer to hash the values to the range [0, 64)
hasher = preprocessing.Hashing(num_bins=64, salt=1337)

# Use the CategoryEncoding layer to one-hot encode the hashed values
encoder = preprocessing.CategoryEncoding(num_tokens=64, output_mode="binary")
encoded_data = encoder(hasher(data))
print(encoded_data.shape)
(10000, 64)
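Like the other preprocessing layers, Hashing and CategoryEncoding can be placed directly inside a model (a minimal sketch, not part of the original example; it reuses the hasher and encoder defined above, and the single Dense output layer is an arbitrary placeholder head):

# Hedged sketch: hashing and encoding as part of an end-to-end model.
# The Dense output layer is an arbitrary placeholder head.
inputs = keras.Input(shape=(1,), dtype="int64")
x = hasher(inputs)
x = encoder(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
hashed_model = keras.Model(inputs, outputs)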
Encoding text as a sequence of token indices
This is how you should preprocess text to be passed to an Embedding layer.
# Define some text data to adapt the layer
data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Instantiate TextVectorization with "int" output_mode
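Here is a minimal sketch of that step (not part of the original example; the Embedding size below is an arbitrary assumption): instantiate the layer in "int" mode, adapt it on the data above, and pass the resulting token indices to an Embedding layer.

# Hedged sketch (the configuration below is an assumption, not the original code):
# vectorize to integer token indices, then embed them as dense vectors.
text_vectorizer = preprocessing.TextVectorization(output_mode="int")
text_vectorizer.adapt(data)
vocab_size = len(text_vectorizer.get_vocabulary())
integer_sequences = text_vectorizer(data)
embedded = layers.Embedding(input_dim=vocab_size, output_dim=16)(integer_sequences)
print(integer_sequences.shape, embedded.shape)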