text_vectorizer = preprocessing.TextVectorization(output_mode="int")
|
# Index the vocabulary via `adapt()`
|
text_vectorizer.adapt(data)
|
# You can retrieve the vocabulary we indexed via get_vocabulary()
|
vocab = text_vectorizer.get_vocabulary()
|
print("Vocabulary:", vocab)
|
# Create an Embedding + LSTM model
|
inputs = keras.Input(shape=(1,), dtype="string")
|
x = text_vectorizer(inputs)
|
x = layers.Embedding(input_dim=len(vocab), output_dim=64)(x)
|
outputs = layers.LSTM(1)(x)
|
model = keras.Model(inputs, outputs)
|
# Call the model on test data (which includes unknown tokens)
|
test_data = tf.constant(["The Brain is deeper than the sea"])
|
test_output = model(test_data)
|
Vocabulary: ['', '[UNK]', 'the', 'side', 'you', 'with', 'will', 'wider', 'them', 'than', 'sky', 'put', 'other', 'one', 'is', 'for', 'ease', 'contain', 'by', 'brain', 'beside', 'and']
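
As a quick illustrative check, you can also call the adapted layer directly on the test sentence: "deeper" and "sea" were not seen during adapt(), so they should map to the OOV index (1, i.e. '[UNK]' in the vocabulary above):

# Unknown tokens ("deeper", "sea") fall back to the OOV index (1, '[UNK]')
print("Token indices:", text_vectorizer(["The Brain is deeper than the sea"]).numpy())
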
You can see the TextVectorization layer in action, combined with an Embedding layer, in the example Text classification from scratch.

Note that when training such a model, for best performance, you should use the TextVectorization layer as part of the input pipeline (which is what we do in the text classification example above).
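
A minimal sketch of that pattern, assuming a tf.data pipeline of (string, label) pairs (the dataset contents and the int_model below are illustrative):

# Hypothetical raw dataset of (string, label) pairs
string_dataset = tf.data.Dataset.from_tensor_slices(
    (["The Brain is wider than the Sky", "For put them side by side"], [0, 1])
).batch(2)

# Apply the adapted TextVectorization layer inside the tf.data pipeline, so the
# string-to-integer lookup happens in the input pipeline rather than in the model
int_dataset = string_dataset.map(lambda x, y: (text_vectorizer(x), y))

# The training model then starts from integer token indices
int_inputs = keras.Input(shape=(None,), dtype="int64")
embedded = layers.Embedding(input_dim=len(vocab), output_dim=64)(int_inputs)
int_outputs = layers.LSTM(1)(embedded)
int_model = keras.Model(int_inputs, int_outputs)
# e.g. int_model.compile(...) followed by int_model.fit(int_dataset)
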
Encoding text as a dense matrix of ngrams with multi-hot encoding

This is how you should preprocess text to be passed to a Dense layer.
# Define some text data to adapt the layer
data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Instantiate TextVectorization with "binary" output_mode (multi-hot)
# and ngrams=2 (index all bigrams)
text_vectorizer = preprocessing.TextVectorization(output_mode="binary", ngrams=2)

# Index the bigrams via `adapt()`
text_vectorizer.adapt(data)

print(
    "Encoded text:\n",
    text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
    "\n",
)

# Create a Dense model
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

# Call the model on test data (which includes unknown tokens)
test_data = tf.constant(["The Brain is deeper than the sea"])
test_output = model(test_data)
print("Model output:", test_output)

Encoded text:
 [[1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 1. 1. 0. 0. 0. 0. 0.
   0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 1. 1. 0. 0. 0.]]

Model output: tf.Tensor([[0.53373265]], shape=(1, 1), dtype=float32)
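
As an illustrative aside, the width of the multi-hot vector should equal the number of entries indexed by adapt() (unigrams, bigrams, plus the OOV bucket), which is also what fixes the input size of the Dense layer:

# Each column of the multi-hot vector corresponds to one entry indexed by `adapt()`
ngram_vocab = text_vectorizer.get_vocabulary()
print("Number of indexed ngrams:", len(ngram_vocab))  # should match the 41 columns above
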
Encoding text as a dense matrix of ngrams with TF-IDF weighting

This is an alternative way of preprocessing text before passing it to a Dense layer.
# Define some text data to adapt the layer
data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Instantiate TextVectorization with "tf-idf" output_mode
# (multi-hot with TF-IDF weighting) and ngrams=2 (index all bigrams)
text_vectorizer = preprocessing.TextVectorization(output_mode="tf-idf", ngrams=2)

# Index the bigrams and learn the TF-IDF weights via `adapt()`
text_vectorizer.adapt(data)

print(
    "Encoded text:\n",
    text_vectorizer(["The Brain is deeper than the sea"]).numpy(),
    "\n",
)

# Create a Dense model
inputs = keras.Input(shape=(1,), dtype="string")
x = text_vectorizer(inputs)
outputs = layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

# Call the model on test data (which includes unknown tokens)
test_data = tf.constant(["The Brain is deeper than the sea"])
test_output = model(test_data)
print("Model output:", test_output)

Encoded text:
 [[5.461647  1.6945957 0.        0.        0.        0.        0.
   0.        0.        0.        0.        0.        0.        0.
   0.        0.        1.0986123 1.0986123 1.0986123 0.        0.
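
As a rough sanity check, assuming TextVectorization uses the IDF formula idf = log(1 + num_documents / (1 + document_count)): a bigram that appears in exactly one of the four adapted sentences would get idf = log(1 + 4 / 2) = log(3) ≈ 1.0986, which matches the 1.0986123 entries in the encoded output.

import math

# Assumed IDF formula: idf = log(1 + num_documents / (1 + document_count))
# For a term present in 1 of the 4 adapted sentences:
print(math.log(1 + 4 / (1 + 1)))  # ~1.0986123, as in the encoded output above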