code (stringlengths 66-870k) | docstring (stringlengths 19-26.7k) | func_name (stringlengths 1-138) | language (stringclasses 1 value) | repo (stringlengths 7-68) | path (stringlengths 5-324) | url (stringlengths 46-389) | license (stringclasses 7 values) |
---|---|---|---|---|---|---|---|
def forward(self, inputs, states, src_seq_lengths=None):
"""Sample by beam search.
Parameters
----------
inputs : mx.np.ndarray
The initial input of the decoder. Shape is (batch_size,).
states : Object that contains mx.np.ndarrays
The initial states of the decoder.
src_seq_lengths : mx.np.ndarray
The source sequence lengths. Shape is (batch_size,).
Returns
-------
samples : mx.np.ndarray
Samples drawn by beam search. Shape (batch_size, beam_size, length).
DType is int32.
scores : mx.np.ndarray
Scores of the samples. Shape (batch_size, beam_size).
We make sure that scores[i, :] are in descending order.
valid_length : mx.np.ndarray
The valid length of the samples. Shape (batch_size, beam_size).
DType is int32.
"""
ctx = inputs.ctx
batch_size = inputs.shape[self._data_batch_axis]
beam_size = self._beam_size
if src_seq_lengths is not None:
max_src_sequence_length = int(src_seq_lengths.asnumpy().max())
max_length = max(self._min_length, max_src_sequence_length * self._max_length_a
+ self._max_length_b)
else:
if self._max_length_a != 0:
raise ValueError('If src_seq_lengths is not given, max_length_a must be 0!'
' Received {}'
.format(self._max_length_a))
max_length = max(self._min_length, self._max_length_b)
# Tile the states and inputs to have shape (batch_size * beam_size, ...)
states = _expand_to_beam_size(states, beam_size=beam_size, batch_size=batch_size,
state_batch_axis=self._state_batch_axis)
step_input = _expand_to_beam_size(inputs, beam_size=beam_size,
batch_size=batch_size,
state_batch_axis=self._data_batch_axis).astype(mx.np.int32)
# All beams are initialized to alive
# Generated samples are initialized to be the inputs
# Except the first beam where the scores are set to be zero, all beams have -inf scores.
# Valid length is initialized to be 1
beam_alive_mask = mx.np.ones(shape=(batch_size, beam_size), ctx=ctx, dtype=mx.np.float32)
valid_length = mx.np.ones(shape=(batch_size, beam_size), ctx=ctx, dtype=mx.np.int32)
scores = mx.np.zeros(shape=(batch_size, beam_size), ctx=ctx)
if beam_size > 1:
scores[:, 1:beam_size] = LARGE_NEGATIVE_FLOAT
samples = step_input.reshape((batch_size, beam_size, -1))
batch_shift = mx.np.arange(0, batch_size * beam_size, beam_size, ctx=ctx, dtype=mx.np.int32)
step = mx.np.array(0, ctx=ctx, dtype=mx.np.float32)
for i in range(max_length):
log_probs, new_states = self._decoder(step_input, states)
assert log_probs.shape[1] == self._vocab_size
step = step + 1
samples, valid_length, scores, chosen_word_ids, beam_alive_mask, states = \
self._updater(samples, valid_length, log_probs, scores, step, beam_alive_mask,
new_states, batch_shift)
step_input = mx.npx.relu(chosen_word_ids).reshape((-1,))
if self._early_return:
if mx.np.sum(beam_alive_mask).asnumpy() == 0:
return samples, scores, valid_length
beam_alive_mask = beam_alive_mask.astype(mx.np.int32)
if self._eos_id is not None:
final_word = mx.np.where(beam_alive_mask,
mx.np.full((batch_size, beam_size), self._eos_id,
ctx=ctx, dtype=mx.np.int32),
mx.np.full((batch_size, beam_size), -1, ctx=ctx, dtype=mx.np.int32))
samples = mx.np.concatenate([samples,
final_word.reshape((final_word.shape[0],
final_word.shape[1], 1))],
axis=2)
valid_length = valid_length + beam_alive_mask
return samples, scores, valid_length | Sample by beam search.
Parameters
----------
inputs : mx.np.ndarray
The initial input of the decoder. Shape is (batch_size,).
states : Object that contains mx.np.ndarrays
The initial states of the decoder.
src_seq_lengths : mx.np.ndarray
The source sequence lengths. Shape is (batch_size,).
Returns
-------
samples : mx.np.ndarray
Samples drawn by beam search. Shape (batch_size, beam_size, length).
DType is int32.
scores : mx.np.ndarray
Scores of the samples. Shape (batch_size, beam_size).
We make sure that scores[i, :] are in descending order.
valid_length : mx.np.ndarray
The valid length of the samples. Shape (batch_size, beam_size).
DType is int32.
| forward | python | dmlc/gluon-nlp | src/gluonnlp/sequence_sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/sequence_sampler.py | Apache-2.0 |
def forward(self, samples, valid_length, outputs, scores, step, beam_alive_mask,
states, batch_shift):
"""
Parameters
----------
samples : mx.np.ndarray
The current samples generated by beam search.
Shape (batch_size, beam_size, L).
valid_length : mx.np.ndarray
The current valid lengths of the samples
outputs : mx.np.ndarray
Outputs from predictor. If from_logits was set to True in scorer, then it's the
log probability of the current step. Else, it's the unnormalized outputs before
softmax or log_softmax.
Shape (batch_size * beam_size, V).
scores : mx.np.ndarray
The previous scores. Shape (batch_size, beam_size)
step : mx.np.ndarray
The current step for doing beam search. Begins from 1. Shape ()
beam_alive_mask : mx.np.ndarray
Shape (batch_size, beam_size)
states : nested structure of mx.np.ndarray
Each mx.np.ndarray should have shape (N, ...) when state_info is None,
or same as the layout in state_info when it's not None.
batch_shift : mx.np.ndarray
Contains [0, beam_size, 2 * beam_size, ..., (batch_size - 1) * beam_size].
Shape (batch_size,)
Returns
-------
new_samples : mx.np.ndarray or an empty list
The updated samples.
When single_step is False, shape (batch_size, beam_size, L + 1)
new_valid_length : mx.np.ndarray
Valid lengths of the samples. Shape (batch_size, beam_size)
new_scores : mx.np.ndarray
Shape (batch_size, beam_size)
chosen_word_ids : mx.np.ndarray
The chosen word ids of the step. Shape (batch_size, beam_size). If it's negative,
no word will be appended to the beam.
beam_alive_mask : mx.np.ndarray
Shape (batch_size, beam_size)
new_states : nested structure of mx.np.ndarray
Inner mx.np.ndarrays have shape (batch_size * beam_size, ...)
"""
# bsz * beam_size * vocab_size
outputs = outputs.reshape((-1, self._beam_size, self._vocab_size))
probs = mx.npx.softmax(outputs / self._temperature)
if self._sampling_topp > 0:
probs = mx.np.where(
probs > self._sampling_topp,
probs,
mx.np.zeros_like(probs)
)
elif self._sampling_topk > 0:
topk_probs = mx.npx.topk(probs, axis=2, k=self._sampling_topk, ret_typ='value')
# choose the k max prob
k_prob = topk_probs[:, :, -1]
k_prob = mx.np.expand_dims(k_prob, axis=-1)
probs = mx.np.where(
probs >= k_prob,
probs,
mx.np.zeros_like(probs)
)
# renormalize
probs_sum = mx.np.sum(probs, axis=2, keepdims=True)
probs = probs / probs_sum
# bsz * beam_size
chosen_word_ids, chosen_word_log_probs = \
mx.npx.random.categorical(probs, get_prob=True)
new_scores = scores + mx.np.where(
beam_alive_mask,
chosen_word_log_probs,
mx.np.zeros_like(chosen_word_log_probs)
)
# mask dead words
chosen_word_ids = mx.np.where(
beam_alive_mask,
chosen_word_ids,
mx.np.full_like(beam_alive_mask, -1, dtype=mx.np.int32)
)
new_valid_length = valid_length + beam_alive_mask.astype(mx.np.int32)
new_samples = mx.np.concatenate(
[samples, mx.np.expand_dims(chosen_word_ids, axis=2)],
axis=2
)
new_states = states
if self._eos_id is not None:
beam_alive_mask\
= beam_alive_mask * (chosen_word_ids != self._eos_id).astype(mx.np.int32)
return new_samples, new_valid_length, new_scores, chosen_word_ids,\
beam_alive_mask, new_states |
Parameters
----------
samples : mx.np.ndarray
The current samples generated by beam search.
Shape (batch_size, beam_size, L).
valid_length : mx.np.ndarray
The current valid lengths of the samples
outputs : mx.np.ndarray
Outputs from predictor. If from_logits was set to True in scorer, then it's the
log probability of the current step. Else, it's the unnormalized outputs before
softmax or log_softmax.
Shape (batch_size * beam_size, V).
scores : mx.np.ndarray
The previous scores. Shape (batch_size, beam_size)
step : mx.np.ndarray
The current step for doing beam search. Begins from 1. Shape ()
beam_alive_mask : mx.np.ndarray
Shape (batch_size, beam_size)
states : nested structure of mx.np.ndarray
Each mx.np.ndarray should have shape (N, ...) when state_info is None,
or same as the layout in state_info when it's not None.
batch_shift : mx.np.ndarray
Contains [0, beam_size, 2 * beam_size, ..., (batch_size - 1) * beam_size].
Shape (batch_size,)
Returns
-------
new_samples : mx.np.ndarray or an empty list
The updated samples.
When single_step is False, shape (batch_size, beam_size, L + 1)
new_valid_length : mx.np.ndarray
Valid lengths of the samples. Shape (batch_size, beam_size)
new_scores : mx.np.ndarray
Shape (batch_size, beam_size)
chosen_word_ids : mx.np.ndarray
The chosen word ids of the step. Shape (batch_size, beam_size). If it's negative,
no word will be appended to the beam.
beam_alive_mask : mx.np.ndarray
Shape (batch_size, beam_size)
new_states : nested structure of mx.np.ndarray
Inner mx.np.ndarrays have shape (batch_size * beam_size, ...)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/sequence_sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/sequence_sampler.py | Apache-2.0 |
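The top-k branch of the updater above keeps only the k most probable words per beam and renormalizes before sampling. Below is a minimal plain-NumPy sketch of that filtering step with toy probabilities; it illustrates the same logic but is independent of the mx.np code above.

```python
import numpy as np

# Toy next-word distribution over a vocabulary of 5 words for one beam.
probs = np.array([0.05, 0.40, 0.10, 0.30, 0.15])
k = 2

# Threshold at the k-th largest probability, zero out everything below it.
kth_prob = np.sort(probs)[-k]
filtered = np.where(probs >= kth_prob, probs, 0.0)

# Renormalize the surviving mass before sampling, as the updater does.
filtered /= filtered.sum()
print(filtered)  # [0. 0.5714 0. 0.4286 0.] (approximately)
```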
def _pad_arrs_to_max_length(arrs, pad_axis, pad_val, use_shared_mem, dtype, round_to=None):
"""Inner Implementation of the Pad batchify
Parameters
----------
arrs : list
pad_axis : int
pad_val : number
use_shared_mem : bool, default False
dtype :
round_to : int
Returns
-------
ret : NDArray
original_length : NDArray
"""
if isinstance(arrs[0], mx.nd.NDArray):
dtype = dtype or arrs[0].dtype
arrs = [arr.asnumpy() for arr in arrs]
elif not isinstance(arrs[0], np.ndarray):
arrs = [np.asarray(ele) for ele in arrs]
else:
dtype = dtype or arrs[0].dtype
original_length = [ele.shape[pad_axis] for ele in arrs]
max_size = max(original_length)
if round_to is not None:
max_size = round_to * math.ceil(max_size / round_to)
ret_shape = list(arrs[0].shape)
ret_shape[pad_axis] = max_size
ret_shape = (len(arrs), ) + tuple(ret_shape)
ret = np.full(shape=ret_shape, fill_value=pad_val, dtype=dtype)
for i, arr in enumerate(arrs):
if arr.shape[pad_axis] == max_size:
ret[i] = arr
else:
slices = [slice(None) for _ in range(arr.ndim)]
slices[pad_axis] = slice(0, arr.shape[pad_axis])
if slices[pad_axis].start != slices[pad_axis].stop:
slices = [slice(i, i + 1)] + slices
ret[tuple(slices)] = arr
ctx = mx.Context('cpu', 0) if use_shared_mem else mx.cpu()
if is_np_array():
ret = mx.np.array(ret, ctx=ctx, dtype=dtype)
else:
ret = mx.nd.array(ret, ctx=ctx, dtype=dtype)
return ret | Inner Implementation of the Pad batchify
Parameters
----------
arrs : list
pad_axis : int
pad_val : number
use_shared_mem : bool, default False
dtype :
round_to : int
Returns
-------
ret : NDArray
original_length : NDArray
| _pad_arrs_to_max_length | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def __call__(self, data):
"""Batchify the input data.
The input can be list of numpy.ndarray, list of numbers or list of
mxnet.nd.NDArray. Inputting mxnet.nd.NDArray is discouraged as each
array needs to be converted to numpy for efficient padding.
The arrays will be padded to the largest dimension at `axis` and then
stacked to form the final output. In addition, the function will output
the original dimensions at the `axis` if ret_length is turned on.
Parameters
----------
data : List[np.ndarray] or List[List[dtype]] or List[mx.nd.NDArray]
List of samples to pad and stack.
Returns
-------
batch_data: NDArray
Data in the minibatch. Shape is (N, ...)
"""
if isinstance(data[0], mx.nd.NDArray) and not self._warned:
self._warned = True
#TODO(sxjscience) Investigate the warning
warnings.warn(
'Using Pad with NDArrays is discouraged for speed reasons. '
'Instead you should pad your data while it is still a list '
'and before converting to an NDArray. '
'Alternatively you can consider inputting a numpy.ndarray.')
if isinstance(data[0], (mx.nd.NDArray, np.ndarray, list)):
padded_arr = _pad_arrs_to_max_length(data, self._axis, self._val, False, self._dtype,
round_to=self._round_to)
return padded_arr
else:
raise NotImplementedError | Batchify the input data.
The input can be list of numpy.ndarray, list of numbers or list of
mxnet.nd.NDArray. Inputting mxnet.nd.NDArray is discouraged as each
array needs to be converted to numpy for efficient padding.
The arrays will be padded to the largest dimension at `axis` and then
stacked to form the final output. In addition, the function will output
the original dimensions at the `axis` if ret_length is turned on.
Parameters
----------
data : List[np.ndarray] or List[List[dtype]] or List[mx.nd.NDArray]
List of samples to pad and stack.
Returns
-------
batch_data: NDArray
Data in the minibatch. Shape is (N, ...)
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
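A typical use of this `Pad` batchify function is to bring variable-length token-id arrays to a common length before stacking them into a batch. The sketch below is a usage assumption: the keyword name `val` and the import path `gluonnlp.data.batchify` are inferred from the attributes and file path shown above, not confirmed here.

```python
import numpy as np
from gluonnlp.data import batchify as bf

# Three variable-length sequences of token ids.
samples = [np.array([3, 8, 1]), np.array([5, 2]), np.array([7, 4, 9, 6])]

pad_fn = bf.Pad(val=0)      # assumed constructor: pad value 0 along axis 0
batch = pad_fn(samples)
print(batch.shape)          # (3, 4): every sequence padded to the longest one
```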
def __call__(self, data):
"""Batchify the input data.
Parameters
----------
data : list
The samples to batchify. Each sample should contain N attributes.
Returns
-------
ret : tuple
A tuple of length N. Contains the batchified result of each attribute in the input.
"""
assert len(data[0]) == len(self._fn),\
'The number of attributes in each data sample should contains' \
' {} elements'.format(len(self._fn))
ret = []
for i, ele_fn in enumerate(self._fn):
ret.append(ele_fn([ele[i] for ele in data]))
return tuple(ret) | Batchify the input data.
Parameters
----------
data : list
The samples to batchify. Each sample should contain N attributes.
Returns
-------
ret : tuple
A tuple of length N. Contains the batchified result of each attribute in the input.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
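`Tuple` applies one batchify function per attribute of each sample, so it is commonly combined with `Pad` and `Stack`. A hedged usage sketch follows; the constructor arguments are assumptions based on the attributes used above.

```python
import numpy as np
from gluonnlp.data import batchify as bf

# Each sample is a (token_ids, label) pair.
samples = [(np.array([3, 8, 1]), 0),
           (np.array([5, 2]), 1)]

# Pad the first attribute, stack the second.
batchify_fn = bf.Tuple(bf.Pad(val=0), bf.Stack())
token_batch, label_batch = batchify_fn(samples)
print(token_batch.shape, label_batch.shape)  # (2, 3) (2,)
```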
def __call__(self, data: t_List[t_Dict]) -> t_Dict:
"""
Parameters
----------
data
The samples to batchify. Each sample should be a dictionary
Returns
-------
ret
The resulting dictionary that stores the merged samples.
"""
ret = dict()
for k, ele_fn in self._fn_dict.items():
ret[k] = ele_fn([ele[k] for ele in data])
return ret |
Parameters
----------
data
The samples to batchify. Each sample should be a dictionary
Returns
-------
ret
The resulting dictionary that stores the merged samples.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def __call__(self, data: t_List[t_NamedTuple]) -> t_NamedTuple:
"""Batchify the input data.
Parameters
----------
data
The samples to batchify. Each sample should be a namedtuple.
Returns
-------
ret
A namedtuple of length N. Contains the batchified result of each attribute in the input.
"""
if not isinstance(data[0], self._container):
raise ValueError('The samples should have the same type as the stored namedtuple.'
' data[0]={}, container={}'.format(data[0], self._container))
ret = []
for i, ele_fn in enumerate(self._fn_l):
ret.append(ele_fn([ele[i] for ele in data]))
return self._container(*ret) | Batchify the input data.
Parameters
----------
data
The samples to batchify. Each sample should be a namedtuple.
Returns
-------
ret
A namedtuple of length N. Contains the batchified result of each attribute in the input.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/batchify.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/batchify.py | Apache-2.0 |
def _words_match_regex(words: List[str], ignore_case=False, replace_white_space=False) -> Pattern:
"""Obtain the regex that finds whether a given corpus contains any word in the input words
Parameters
----------
words
Returns
-------
regex
"""
words = [ele for ele in words if ele]
if ignore_case:
flags = re.IGNORECASE
else:
flags = 0
if replace_white_space:
words = [ele.replace(' ', r'\s+') for ele in words]
regex = re.compile('[^a-z]({words})[^a-z]|^({words})$|^({words})[^a-z]|[^a-z]({words})$'
.format(words='|'.join(words)), flags)
return regex | Obtain the regex that finds whether a given corpus contains any word in the input words
Parameters
----------
words
Returns
-------
regex
| _words_match_regex | python | dmlc/gluon-nlp | src/gluonnlp/data/filtering.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/filtering.py | Apache-2.0 |
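Because the pattern only treats non-letters or string boundaries as delimiters, a quick check illustrates the matching behavior. The snippet assumes the `_words_match_regex` definition above (and its `re`/`typing` imports) is in scope.

```python
regex = _words_match_regex(['nlp', 'deep learning'],
                           ignore_case=True, replace_white_space=True)
print(bool(regex.search('I love NLP!')))            # True: delimited by a space and '!'
print(bool(regex.search('Deep   learning rocks')))  # True: '\s+' absorbs repeated spaces
print(bool(regex.search('nlproc is a subfield')))   # False: 'nlp' is followed by a letter
```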
def __call__(self, corpus: str):
"""
Parameters
----------
corpus
Input corpus
Returns
-------
lang_label
The ISO 639-1 code of the predicted language
score
The score of the prediction
"""
if self._use_fasttext:
labels, scores = self._model.predict(corpus)
label = labels[0].replace("__label__", "")
return label, scores[0]
else:
return self._model.classify(corpus.lower()) |
Parameters
----------
corpus
Input corpus
Returns
-------
lang_label
The ISO 639-1 code of the predicted language
score
The score of the prediction
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/filtering.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/filtering.py | Apache-2.0 |
def _dataset_worker_fn(urls, dataset_fn, batch_sampler_fn):
"""Function to generate datasets and batch sampler for each worker."""
global _manager, _dataset
dataset = dataset_fn(urls)
batch_sampler = batch_sampler_fn(dataset)
if _manager:
dataset = _manager.list(zip(*dataset._data))
_dataset = dataset
return dataset, batch_sampler | Function to generate datasets and batch sampler for each worker. | _dataset_worker_fn | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _batch_worker_fn(samples, batchify_fn, dataset=None, counter=None):
"""Function for processing data in worker process."""
# pylint: disable=unused-argument
# it is required that each worker process has to fork a new MXIndexedRecordIO handle
# preserving dataset as global variable can save tons of overhead and is safe in new process
if len(dataset[0]) > 1:
if isinstance(samples[0], (list, tuple)):
batch = [batchify_fn([dataset[i] for i in shard]) for shard in samples]
else:
batch = batchify_fn([dataset[i] for i in samples])
else:
if isinstance(samples[0], (list, tuple)):
batch = [batchify_fn([dataset[i][0] for i in shard]) for shard in samples]
else:
batch = batchify_fn([dataset[i][0] for i in samples])
buf = io.BytesIO()
ForkingPickler(buf, pickle.HIGHEST_PROTOCOL).dump(batch)
return buf.getvalue(), counter | Function for processing data in worker process. | _batch_worker_fn | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _push_next(self):
"""Assign next batch workload to workers."""
if self._batch_iter is not None:
r = next(self._batch_iter, None)
else:
r = None
if r is None:
result = self._next_dataset()
if result is None:
return
else:
dataset, batch_sampler = result
# Without checking the reference counts of previous datasets in the master process,
# the key error can be triggered occasionally. This may be a bug in Python.
self._count_dataset_ref(dataset)
self._dataset = dataset
# initialize reference counter
if id(dataset) not in self._counter_ref:
self._counter_ref[id(dataset)] = self._manager.Value('i', 0)
self._batch_iter = iter(batch_sampler)
self._push_next()
else:
counter = self._counter_ref[id(self._dataset)]
counter.value += 1
async_ret = self._worker_pool.apply_async(
self._worker_fn, (r, self._batchify_fn, self._dataset, counter))
self._data_buffer[self._sent_idx] = async_ret
self._sent_idx += 1 | Assign next batch workload to workers. | _push_next | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _push_next_dataset(self):
"""Assign next dataset workload to workers."""
current_dataset_idx = self._sent_idx * self._circle_length
if current_dataset_idx < self._num_datasets:
circle_length = min(self._circle_length,
self._num_datasets - current_dataset_idx)
urls = [self._dataset[current_dataset_idx + i] for i in range(circle_length)]
else:
return
# push to worker asynchronously
async_ret = self._worker_pool.apply_async(
self._worker_fn, (urls, self._dataset_fn, self._batch_sampler_fn))
# data buffer stores the async result
self._data_buffer[self._sent_idx] = async_ret
self._sent_idx += 1 | Assign next dataset workload to workers. | _push_next_dataset | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def _next_dataset(self):
"""Retrieve the next dataset. Returns None if no dataset is available."""
if self._rcvd_idx == self._sent_idx:
assert not self._data_buffer, 'Data buffer should be empty at this moment'
return None
assert self._rcvd_idx < self._sent_idx, \
'rcvd_idx must be smaller than sent_idx'
assert self._rcvd_idx in self._data_buffer, \
'fatal error with _next_dataset, rcvd_idx missing'
if len(self._cached_dataset) == 0 or self._data_buffer[self._rcvd_idx].ready():
ret = self._data_buffer.pop(self._rcvd_idx)
dataset, batch_sampler = ret.get()
self._rcvd_idx += 1
if self._cached and len(self._cached_dataset) < self._num_max_cached:
self._cached_dataset.append((dataset, batch_sampler))
else:
dataset, batch_sampler = self._cached_dataset.pop(0)
return dataset, batch_sampler | Retrieve the next dataset. Returns None if no dataset is available. | _next_dataset | python | dmlc/gluon-nlp | src/gluonnlp/data/loading.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/loading.py | Apache-2.0 |
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
"""Generate bucket keys based on the lengths of sequences and number of buckets.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys
A list including the keys of the buckets.
"""
raise NotImplementedError | Generate bucket keys based on the lengths of sequences and number of buckets.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys
A list including the keys of the buckets.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
r"""This generate bucket keys given that all the buckets have the same width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys : list of int
A list including the keys of the buckets.
"""
if not isinstance(max_lengths, INT_TYPES):
bucket_width_l = [max((1 + max_len - min_len) // num_buckets, 1)
for max_len, min_len in
zip(max_lengths, min_lengths)]
bucket_keys = \
[tuple(max(max_len - i * width, min_len) for max_len, min_len, width in
zip(max_lengths, min_lengths, bucket_width_l))
for i in range(num_buckets)]
else:
bucket_width = max((1 + max_lengths - min_lengths) // num_buckets, 1)
bucket_keys = [max(max_lengths - i * bucket_width, min_lengths)
for i in range(num_buckets)]
return bucket_keys | This generates bucket keys given that all the buckets have the same width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys : list of int
A list including the keys of the buckets.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
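As a worked example of the constant-width scheme, with `max_lengths=100`, `min_lengths=10` and `num_buckets=5` the width is `max((1 + 100 - 10) // 5, 1) = 18`. The standalone re-computation below reproduces the scalar branch of the formula above.

```python
max_len, min_len, num_buckets = 100, 10, 5
width = max((1 + max_len - min_len) // num_buckets, 1)                  # 18
keys = [max(max_len - i * width, min_len) for i in range(num_buckets)]
print(keys)  # [100, 82, 64, 46, 28]
```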
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
r"""This function generates bucket keys with linearly increasing bucket width:
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys
A list including the keys of the buckets.
"""
if not isinstance(max_lengths, INT_TYPES):
alpha_l = [2 * float(max_len - min_len - num_buckets)
/ (num_buckets * (num_buckets + 1))
for max_len, min_len in
zip(max_lengths, min_lengths)]
bucket_keys = \
[tuple(int(round(min_len + alpha * (((i + 1) * (i + 2)) / 2) + i + 1))
for min_len, alpha in zip(min_lengths, alpha_l))
for i in range(num_buckets)]
bucket_keys[-1] = tuple(max(max_bucket_key, max_len)
for max_bucket_key, max_len
in zip(bucket_keys[-1], max_lengths))
else:
alpha = 2 * float(max_lengths - min_lengths - num_buckets) \
/ (num_buckets * (num_buckets + 1))
bucket_keys = [int(round(min_lengths + alpha * (((i + 1) * (i + 2)) / 2) + i + 1))
for i in range(num_buckets)]
bucket_keys[-1] = max(bucket_keys[-1], max_lengths)
return bucket_keys | This function generates bucket keys with linearly increasing bucket width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys
A list including the keys of the buckets.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
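With the same toy setting (`max_lengths=100`, `min_lengths=10`, `num_buckets=5`), the linear scheme lets successive bucket widths grow by roughly `alpha`. A standalone re-computation of the scalar branch:

```python
max_len, min_len, num_buckets = 100, 10, 5
alpha = 2 * float(max_len - min_len - num_buckets) / (num_buckets * (num_buckets + 1))
keys = [int(round(min_len + alpha * (((i + 1) * (i + 2)) / 2) + i + 1))
        for i in range(num_buckets)]
keys[-1] = max(keys[-1], max_len)
print(keys)  # [17, 29, 47, 71, 100]; successive gaps 12, 18, 24, 29 grow linearly
```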
def __call__(self, max_lengths: Union[int, Sequence[int]],
min_lengths: Union[int, Sequence[int]], num_buckets: int) -> List[int]:
r"""This function generates bucket keys exponentially increasing bucket width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys
A list including the keys of the buckets.
"""
if not isinstance(max_lengths, INT_TYPES):
initial_width_l = [
(max_len - min_len) * (self.bucket_len_step - 1)
/ (math.pow(self.bucket_len_step, num_buckets) - 1)
for max_len, min_len in
zip(max_lengths, min_lengths)]
bucket_keys = \
[tuple(
int(round(min_len + initial_width * (math.pow(self.bucket_len_step, i + 1) - 1)
/ (self.bucket_len_step - 1)))
for min_len, initial_width in zip(min_lengths, initial_width_l))
for i in range(num_buckets)]
bucket_keys[-1] = tuple(max(max_bucket_key, max_len)
for max_bucket_key, max_len
in zip(bucket_keys[-1], max_lengths))
else:
initial_width = (max_lengths - min_lengths) * (self.bucket_len_step - 1) \
/ (math.pow(self.bucket_len_step, num_buckets) - 1)
bucket_keys = [
int(round(min_lengths + initial_width * (math.pow(self.bucket_len_step, i + 1) - 1)
/ (self.bucket_len_step - 1)))
for i in range(num_buckets)]
bucket_keys[-1] = max(bucket_keys[-1], max_lengths)
return bucket_keys | This function generates bucket keys with exponentially increasing bucket width.
Parameters
----------
max_lengths
Maximum of lengths of sequences.
min_lengths
Minimum of lengths of sequences.
num_buckets
Number of buckets
Returns
-------
bucket_keys
A list including the keys of the buckets.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
def __repr__(self):
"""Return a string representing the statistics of the bucketing sampler.
Returns
-------
ret : str
String representing the statistics of the buckets.
"""
ret = '{name}(\n' \
' sample_num={sample_num}, batch_num={batch_num}\n' \
' key={bucket_keys}\n' \
' cnt={bucket_counts}\n' \
' batch_size={bucket_batch_sizes}\n'\
')'\
.format(name=self.__class__.__name__,
sample_num=len(self._lengths),
batch_num=len(self._batch_infos),
bucket_keys=self._bucket_keys,
bucket_counts=[len(sample_ids) for sample_ids in self._bucket_sample_ids],
bucket_batch_sizes=self._bucket_batch_sizes)
return ret | Return a string representing the statistics of the bucketing sampler.
Returns
-------
ret : str
String representing the statistics of the buckets.
| __repr__ | python | dmlc/gluon-nlp | src/gluonnlp/data/sampler.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/sampler.py | Apache-2.0 |
def _check_special_token_identifier(key):
"""Raise error if the key is not valid as a key for the special token.
Parameters
----------
key
The identifier
"""
if not (key.endswith('_token') and key != '_token'):
raise ValueError('Each key needs to have the form "name_token".'
' Received {}'.format(key)) | Raise error if the key is not valid as a key for the special token.
Parameters
----------
key
The identifier
| _check_special_token_identifier | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def to_tokens(self, idx: Union[int, Tuple[int], List[int], np.ndarray])\
-> Union[Hashable, List[Hashable]]:
"""Get the tokens correspond to the chosen indices
Parameters
----------
idx
The index used to select the tokens.
Returns
-------
ret
The tokens of these selected indices.
"""
if isinstance(idx, (list, tuple)):
return [self.all_tokens[i] for i in idx]
elif isinstance(idx, np.ndarray):
if idx.ndim == 0:
return self.all_tokens[idx]
elif idx.ndim == 1:
return [self.all_tokens[i] for i in idx]
else:
raise ValueError('Unsupported numpy ndarray ndim={}'.format(idx.ndim))
else:
return self.all_tokens[idx] | Get the tokens corresponding to the chosen indices
Parameters
----------
idx
The index used to select the tokens.
Returns
-------
ret
The tokens of these selected indices.
| to_tokens | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def __getitem__(self, tokens: Union[Hashable, List[Hashable], Tuple[Hashable]])\
-> Union[int, List[int]]:
"""Looks up indices of text tokens according to the vocabulary.
If `unknown_token` of the vocabulary is None, looking up unknown tokens results in KeyError.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
-------
ret
A token index or a list of token indices according to the vocabulary.
"""
if isinstance(tokens, (list, tuple)):
if self.has_unk:
return [self._token_to_idx.get(token, self.unk_id) for token in tokens]
else:
return [self._token_to_idx[token] for token in tokens]
else:
if self.has_unk:
return self._token_to_idx.get(tokens, self.unk_id)
else:
return self._token_to_idx[tokens] | Looks up indices of text tokens according to the vocabulary.
If `unknown_token` of the vocabulary is None, looking up unknown tokens results in KeyError.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
-------
ret
A token index or a list of token indices according to the vocabulary.
| __getitem__ | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def __call__(self, tokens: Union[Hashable, List[Hashable], Tuple[Hashable]])\
-> Union[int, np.ndarray]:
"""Looks up indices of text tokens according to the vocabulary.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
-------
ret
A token index or a list of token indices according to the vocabulary.
"""
return self[tokens] | Looks up indices of text tokens according to the vocabulary.
Parameters
----------
tokens
A source token or tokens to be converted.
Returns
-------
ret
A token index or a list of token indices according to the vocabulary.
| __call__ | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def to_json(self) -> str:
"""Serialize Vocab object into a json string.
Returns
-------
ret
The serialized json string
"""
vocab_dict = dict()
# Perform sanity check to make sure that we are able to reconstruct the original vocab
for i, tok in enumerate(self._all_tokens):
if self._token_to_idx[tok] != i:
warnings.warn('The vocabulary is corrupted! One possible reason is that the '
'tokens are changed manually without updating the '
'_token_to_idx map. Please check your code or report an issue in '
'Github!')
vocab_dict['all_tokens'] = self._all_tokens
vocab_dict['special_token_key_value'] = self._special_token_kv
ret = json.dumps(vocab_dict, ensure_ascii=False)
return ret | Serialize Vocab object into a json string.
Returns
-------
ret
The serialized json string
| to_json | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def from_json(cls, json_str: Union[str, bytes, bytearray]) -> 'Vocab':
"""Deserialize Vocab object from json string.
Parameters
----------
json_str
Serialized json string of a Vocab object.
Returns
-------
vocab
The constructed Vocab object
"""
vocab_dict = json.loads(json_str)
all_tokens = vocab_dict.get('all_tokens')
special_token_kv = vocab_dict.get('special_token_key_value')
if 'unk_token' not in special_token_kv:
special_token_kv['unk_token'] = None
vocab = cls(tokens=all_tokens, **special_token_kv)
return vocab | Deserialize Vocab object from json string.
Parameters
----------
json_str
Serialized json string of a Vocab object.
Returns
-------
vocab
The constructed Vocab object
| from_json | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
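The lookup, reverse lookup, and JSON round trip above can be exercised together. The construction below is an assumption: it passes a plain token list via the `tokens` keyword, mirroring what `from_json` does, and relies on the default special tokens.

```python
from gluonnlp.data import Vocab

vocab = Vocab(tokens=['hello', 'world'])    # assumed construction from a token list

print(vocab['hello'])               # index of 'hello'
print(vocab(['hello', 'unseen']))   # unknown tokens map to the unk id
print(vocab.to_tokens(0))           # token stored at index 0

# to_json/from_json reproduce the same token-to-index mapping.
restored = Vocab.from_json(vocab.to_json())
print(restored['world'] == vocab['world'])  # True
```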
def load_vocab(vocab: Union[str, Vocab]) -> Vocab:
"""Quick helper function to load vocabulary from a file.
Parameters
----------
vocab
Returns
-------
"""
if isinstance(vocab, Vocab):
return vocab
elif isinstance(vocab, str):
return Vocab.load(vocab)
else:
raise NotImplementedError('Type of the input vocab is not supported. '
'We only support "str" or "Vocab". type(vocab) = "{}".'
.format(type(vocab))) | Quick helper function to load vocabulary from a file.
Parameters
----------
vocab
Returns
-------
| load_vocab | python | dmlc/gluon-nlp | src/gluonnlp/data/vocab.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/vocab.py | Apache-2.0 |
def get_token_type(tokens: Union[List[str], List[int], List[List[str]],
List[List[int]]]) -> type:
"""
Parameters
----------
tokens
The input tokens.
Returns
-------
token_type
If `tokens` is empty, return `str`.
Otherwise, return `str` if the input is str and `int` if the input is int.
"""
if len(tokens) == 0:
return str
if isinstance(tokens[0], int):
return int
elif isinstance(tokens[0], str):
return str
elif isinstance(tokens[0], list):
flatten_tokens_it = itertools.chain.from_iterable(tokens)
try:
first_token = next(flatten_tokens_it)
return type(first_token)
except StopIteration:
return str
else:
raise TokenTypeNotSupportedError(type(tokens[0])) |
Parameters
----------
tokens
The input tokens.
Returns
-------
token_type
If `tokens` is empty, return `str`.
Otherwise, return `str` if the input is str and `int` if the input is int.
| get_token_type | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
def rebuild_offset_from_tokens(sentence: str, tokens: List[str]) \
-> List[Tuple[int, int]]:
"""Recover the offset of the tokens in the original sentence.
If you are using a subword tokenizer, make sure to remove the prefix/postfix of the tokens
before using this function. Also, this does not work for n-gram-based (n>1) subword
tokenization, i.e.
it works for "gluonnlp" --> ["gluon", "nlp"]
but not for "gluonnlp" --> ["gl", "lu", "uo", "on", "nl", "lp"]
Parameters
----------
sentence
The input sentence
tokens
A list of strings that represent the tokenization result
Returns
-------
offsets
A list of start+end pairs: [(start0, end0), (start1, end1), ...].
Each pair represents the start and end positions of the token in the original
sentence.
"""
running_offset = 0
ret = []
for token in tokens:
token_offset = sentence.index(token, running_offset)
token_len = len(token)
running_offset = token_offset + token_len
ret.append((token_offset, running_offset))
return ret | Recover the offset of the tokens in the original sentence.
If you are using a subword tokenizer, make sure to remove the prefix/postfix of the tokens
before using this function. Also, this does not work for n-gram-based (n>1) subword
tokenization, i.e.
it works for "gluonnlp" --> ["gluon", "nlp"]
but not for "gluonnlp" --> ["gl", "lu", "uo", "on", "nl", "lp"]
Parameters
----------
sentence
The input sentence
tokens
A list of strings that represent the tokenization result
Returns
-------
offsets
A list of start+end pairs: [(start0, end0), (start1, end1), ...].
Each pair represents the start and end positions of the token in the original
sentence.
| rebuild_offset_from_tokens | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
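A quick worked example of the offset recovery, assuming the `rebuild_offset_from_tokens` function above is in scope:

```python
sentence = 'gluonnlp is awesome'
tokens = ['gluon', 'nlp', 'is', 'awesome']
print(rebuild_offset_from_tokens(sentence, tokens))
# [(0, 5), (5, 8), (9, 11), (12, 19)]
```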
def get_char_offset_from_byte_offset(sentence: str, byte_offsets: List[Tuple[int, int]]):
"""Get the character-level offsets based on the byte-level offsets
Parameters
----------
sentence
The input sentence
byte_offsets
The byte-level offsets
Returns
-------
char_offsets
The character-level offsets
"""
byte_offset_to_char_offset = {}
byte_offset = 0
for i, ele in enumerate(sentence):
byte_offset_to_char_offset[byte_offset] = i
byte_offset += len(ele.encode('utf-8'))
byte_offset_to_char_offset[byte_offset] = i + 1 # Handle the last sentence
ret = []
for ele in byte_offsets:
ret.append((byte_offset_to_char_offset[ele[0]],
byte_offset_to_char_offset[ele[1]]))
return ret | Get the character-level offsets based on the byte-level offsets
Parameters
----------
sentence
The input sentence
byte_offsets
The byte-level offsets
Returns
-------
char_offsets
The character-level offsets
| get_char_offset_from_byte_offset | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
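The byte-to-character conversion matters as soon as the sentence contains multi-byte UTF-8 characters. A worked example, assuming `get_char_offset_from_byte_offset` above is in scope:

```python
sentence = 'héllo world'            # 'é' occupies two bytes in UTF-8
byte_offsets = [(0, 6), (7, 12)]    # byte spans of 'héllo' and 'world'
print(get_char_offset_from_byte_offset(sentence, byte_offsets))
# [(0, 5), (6, 11)]
```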
def encode(self, sentences: SentencesType,
output_type: type = str) \
-> Union[TokensType, TokenIDsType]:
"""Encode the input sentence(s) into multiple tokens.
Parameters
----------
sentences
The sentences to tokenize
output_type
The type of the output tokens.
- str means each token is represented by its original text.
- int means each token is represented by the index in the vocabulary.
Returns
-------
tokens
The output tokens.
"""
pass | Encode the input sentence(s) into multiple tokens.
Parameters
----------
sentences
The sentences to tokenize
output_type
The type of the output tokens.
- str means each token is represented by its original text.
- int means each token is represented by the index in the vocabulary.
Returns
-------
tokens
The output tokens.
| encode | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
def encode_with_offsets(self, sentences: SentencesType,
output_type: type = str) \
-> Tuple[Union[TokensType, TokenIDsType], TokenOffsetsType]:
"""Encode the input sentence(s) into multiple tokens. Different from encode, it
will also return the character start and end positions of each token in the original text.
The original text is assumed to be a unicode string.
Here, the default implementation is to use the tokenized result to recover the offsets.
Parameters
----------
sentences
The sentence(s) to tokenize
output_type
The type of the output tokens.
- `str` means each token is represented by its original text.
- `int` means each token is represented by the index in the vocabulary.
Returns
-------
tokens
The output tokens.
offsets
The offsets of these tokens. Each encodes the start and end location in the original
unicode string. We return the character-offset instead of the byte-offset.
"""
raise NotImplementedError | Encode the input sentence(s) into multiple tokens. Different from encode, it
will also return the character start and end positions of each token in the original text.
The original text is assumed to be a unicode string.
Here, the default implementation is to use the tokenized result to recover the offsets.
Parameters
----------
sentences
The sentence(s) to tokenize
output_type
The type of the output tokens.
- `str` means each token is represented by its original text.
- `int` means each token is represented by the index in the vocabulary.
Returns
-------
tokens
The output tokens.
offsets
The offsets of these tokens. Each encodes the start and end location in the original
unicode string. We return the character-offset instead of the byte-offset.
| encode_with_offsets | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/base.py | Apache-2.0 |
def is_new_version_model_file(model_file_path: str) -> bool:
"""Check whether the model file belongs to the new version of HuggingFace Tokenizers,
i.e., >= 0.8
Parameters
----------
model_file_path
Path to the model file
Returns
-------
is_new_version
Whether the model file is generated by the new version of huggingface tokenizer.
"""
with open(model_file_path, 'r', encoding='utf-8') as f:
try:
_ = json.load(f)
return True
except Exception:
return False | Check whether the model file belongs to the new version of HuggingFace Tokenizers,
i.e., >= 0.8
Parameters
----------
model_file_path
Path to the model file
Returns
-------
is_new_version
Whether the model file is generated by the new version of huggingface tokenizer.
| is_new_version_model_file | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def hf_encode(model, sentences, output_type: type = str):
"""
Parameters
----------
model
Model object in HuggingFace tokenizer
sentences
Input sentences
output_type
Output type
Returns
-------
ret
"""
is_multi_sentences = isinstance(sentences, list)
if not is_multi_sentences:
sentences = [sentences]
encode_sentences = model.encode_batch(sentences, add_special_tokens=False)
if output_type is str:
ret = [encode_sentence.tokens for encode_sentence in encode_sentences]
elif output_type is int:
ret = [encode_sentence.ids for encode_sentence in encode_sentences]
else:
raise TokenTypeNotSupportedError(output_type)
if is_multi_sentences:
return ret
else:
return ret[0] |
Parameters
----------
model
Model object in HuggingFace tokenizer
sentences
Input sentences
output_type
Output type
Returns
-------
ret
| hf_encode | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_last_subword(self, tokens):
"""Whether the sub-token is the last sub-token in a split token list.
Only supports the case when the tokenizer is a HuggingFaceBPETokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-------
ret
The results
"""
assert self.model_type == 'BPEDecoder',\
'Only supports BPE model. The model_type={}'.format(self.model_type)
if isinstance(tokens, str):
return tokens.endswith('</w>')
elif isinstance(tokens, int):
return tokens in self._last_subtoken_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [ele.endswith('</w>') for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._last_subtoken_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the sub-token is the last sub-token in a split token list.
Only supports the case when the tokenizer is a HuggingFaceBPETokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-------
ret
The results
| is_last_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_first_subword(self, tokens):
"""Whether the sub-token is the first sub-token in a token list.
Only supports the case when the tokenizer is a HuggingFaceWordPieceTokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-------
ret
The results
"""
assert self.model_type == 'WordPiece', \
'Only supports WordPiece model. The model_type={}'.format(self.model_type)
if isinstance(tokens, str):
return not tokens.startswith('##')
elif isinstance(tokens, int):
return tokens in self._first_subtoken_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [not ele.startswith('##') for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._first_subtoken_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the sub-token is the first sub-token in a token list.
Only supports the case when the tokenizer is a HuggingFaceWordPieceTokenizer
Parameters
----------
tokens
A single token or a list of tokens
Returns
-------
ret
The results
| is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
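The WordPiece convention encoded above is simply that continuation pieces carry a `##` prefix. A self-contained one-liner shows the expected mask for a whole-word-masking use case:

```python
tokens = ['hello', '##wor', '##ld', 'world']
print([not t.startswith('##') for t in tokens])  # [True, False, False, True]
```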
def __init__(self, merges_file: Optional[str] = None,
vocab_file: Optional[str] = None,
unk_token: Optional[str] = Vocab.UNK_TOKEN,
suffix: Optional[str] = '</w>',
dropout: Optional[float] = None,
lowercase: bool = False):
"""
Parameters
----------
merges_file
The merges file saved by HuggingFace
vocab_file
Vocabulary file in GluonNLP
unk_token
The unknown token
suffix
The suffix for sub-tokens. For example, "Sunnyvale" will be "Sunny vale</w>"
dropout
Ratio of the BPE-Dropout
lowercase
Whether to lowercase the input before tokenizer
"""
super().__init__()
self._merges_file = merges_file
self._vocab_file = vocab_file
self._unk_token = unk_token
self._suffix = suffix
self._dropout = dropout
self._lowercase = lowercase
self.__rebuild_tokenizer()
self._last_subword_id_set = frozenset([self._vocab[ele]
for ele in self._vocab.all_tokens
if ele.endswith(self._suffix)]) |
Parameters
----------
merges_file
The merges file saved by HuggingFace
vocab_file
Vocabulary file in GluonNLP
unk_token
The unknown token
suffix
The suffix for sub-tokens. For example, "Sunnyvale" will be "Sunny vale</w>"
dropout
Ratio of the BPE-Dropout
lowercase
Whether to lowercase the input before tokenizer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_last_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the last subword token. This can be used for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the last subword token in the list of subwords.
"""
if isinstance(tokens, str):
return tokens.endswith(self._suffix)
elif isinstance(tokens, int):
return tokens in self._last_subword_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [ele.endswith(self._suffix) for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._last_subword_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the token is the last subword token. This can be used for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the last subword token in the list of subwords.
| is_last_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_first_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the first subword token in a sequence of subword tokens.
This can be used for implementing whole-word masking.
Special tokens are not taken into account.
Parameters
----------
tokens
Returns
-------
ret
"""
if isinstance(tokens, str):
return not tokens.startswith(self._wordpieces_prefix)
elif isinstance(tokens, int):
return tokens in self._first_subword_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [not ele.startswith(self._wordpieces_prefix)
for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._first_subword_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the token is the first subword token in a sequence of subword tokens.
This can be used for implementing whole-word masking.
Special tokens are not taken into account.
Parameters
----------
tokens
Returns
-------
ret
| is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/huggingface.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/huggingface.py | Apache-2.0 |
def is_first_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the first subword token. This can be used to implement
whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the first subword token in the list of subwords
"""
if isinstance(tokens, str):
return tokens.startswith(self._meta_symbol)
elif isinstance(tokens, int):
return tokens in self._first_subword_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [ele.startswith(self._meta_symbol) for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._first_subword_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the token is the first subword token. This can be used to implement
whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the first subword token in the list of subwords
| is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/sentencepiece.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/sentencepiece.py | Apache-2.0 |
def set_subword_regularization(self, nbest, alpha):
"""Set the subword-regularization parameters
For more details, you may refer to the official SentencePiece library:
https://github.com/google/sentencepiece
Parameters
----------
nbest
alpha
Returns
-------
"""
self._nbest = nbest
self._alpha = alpha | Set the subword-regularization parameters
For more details, you may refer to the official SentencePiece library:
https://github.com/google/sentencepiece
Parameters
----------
nbest
alpha
Returns
-------
| set_subword_regularization | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/sentencepiece.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/sentencepiece.py | Apache-2.0 |
def __getstate__(self):
"""Make the SentencepieceTokenizer pickleble.
We will remove the _spt_cls and _sp_model, which are not picklable, and try to
reconstruct the class via the saved model_path. This behavior is only acceptable for
multiprocessing and should not be used to save sentencepiece models."""
state = self.__dict__.copy()
state['_spt_cls'] = None
state['_sp_model'] = None
return state | Make the SentencepieceTokenizer picklable.
We will remove the _spt_cls and _sp_model, which are not picklable, and try to
reconstruct the class via the saved model_path. This behavior is only acceptable for
multiprocessing and should not be used to save sentencepiece models. | __getstate__ | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/sentencepiece.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/sentencepiece.py | Apache-2.0 |
def transform_sentence(self, sentence):
"""replace the separator in encoded result with suffix
a@@, b@@, c -> a, b, c</w>
Parameters
----------
sentence
Returns
-------
new_sentence
"""
return [word[:-2] if len(word) > 2 and word[-2:] == self._separator else word + self._suffix
for word in sentence] | Replace the separator in the encoded result with the suffix.
a@@, b@@, c -> a, b, c</w>
Parameters
----------
sentence
Returns
-------
new_sentence
| transform_sentence | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/subword_nmt.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/subword_nmt.py | Apache-2.0 |
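The separator-to-suffix rewrite above can be reproduced with a standalone one-liner, using the conventional subword-nmt separator `@@` and the `</w>` suffix:

```python
separator, suffix = '@@', '</w>'
sentence = ['a@@', 'b@@', 'c']
print([w[:-2] if len(w) > 2 and w[-2:] == separator else w + suffix
       for w in sentence])
# ['a', 'b', 'c</w>'], matching the docstring example
```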
def is_last_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the last subword token. This can be used
for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the last subword token in the list of subwords
"""
if isinstance(tokens, str):
return not tokens.endswith(self._separator)
elif isinstance(tokens, int):
return tokens in self._last_subword_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [not ele.endswith(self._separator) for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._last_subword_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the token is the last subword token. This can be used
for whole-word masking.
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the last subword token in the list of subwords
| is_last_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/subword_nmt.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/subword_nmt.py | Apache-2.0 |
def is_first_subword(self, tokens: Union[str, int, List[str], List[int]]) \
-> Union[bool, List[bool]]:
"""Whether the token is the first subword token in a list of subword tokens
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the first subword token in a sequence of subword tokens
that construct the token
"""
if isinstance(tokens, str):
return tokens.startswith(self._meta_symbol)
elif isinstance(tokens, int):
return tokens in self._first_subword_id_set
elif isinstance(tokens, list):
if len(tokens) == 0:
return []
if isinstance(tokens[0], str):
return [ele.startswith(self._meta_symbol) for ele in tokens]
elif isinstance(tokens[0], int):
return [ele in self._first_subword_id_set for ele in tokens]
else:
raise NotImplementedError
else:
raise NotImplementedError | Whether the token is the first subword token in a list of subword tokens
Parameters
----------
tokens
The input tokens
Returns
-------
ret
Whether the token is the first subword token in a sequence of subword tokens
that construct the token
| is_first_subword | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/yttm.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/yttm.py | Apache-2.0 |
def __getstate__(self):
"""Support multiprocessing by making it pickleble"""
state = self.__dict__.copy()
state['_bpe'] = None
return state | Support multiprocessing by making it picklable | __getstate__ | python | dmlc/gluon-nlp | src/gluonnlp/data/tokenizers/yttm.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/data/tokenizers/yttm.py | Apache-2.0 |
def list_sources(embedding_name=None):
"""Get valid token embedding names and their pre-trained file names.
Parameters
----------
embedding_name : str or None, default None
The pre-trained token embedding name.
Returns
-------
dict or list:
A list of all the valid pre-trained token embedding file names (`source`) for the
specified token embedding name (`embedding_name`). If the text embedding name is set to
None, returns a dict mapping each valid token embedding name to a list of valid pre-trained
files (`source`).
"""
if embedding_name is not None:
embedding_name = embedding_name.lower()
if embedding_name == 'fasttext.bin':
return list(C.FAST_TEXT_BIN_SHA1.keys())
if embedding_name not in text_embedding_reg:
raise KeyError('Cannot find `embedding_name` {}. Use '
'`list_sources(embedding_name=None).keys()` to get all the valid'
'embedding names.'.format(embedding_name))
return list(text_embedding_reg[embedding_name].keys())
else:
return {embedding_name: list(embedding_cls.keys())
for embedding_name, embedding_cls in text_embedding_reg.items()} | Get valid token embedding names and their pre-trained file names.
Parameters
----------
embedding_name : str or None, default None
The pre-trained token embedding name.
Returns
-------
dict or list:
A list of all the valid pre-trained token embedding file names (`source`) for the
specified token embedding name (`embedding_name`). If the text embedding name is set to
None, returns a dict mapping each valid token embedding name to a list of valid pre-trained
files (`source`).
| list_sources | python | dmlc/gluon-nlp | src/gluonnlp/embedding/embed_loader.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/embedding/embed_loader.py | Apache-2.0 |
def load_embeddings(vocab=None, pretrained_name_or_dir='glove.6B.50d', unknown='<unk>',
unk_method=None):
"""Load pretrained word embeddings for building an embedding matrix for a given Vocab.
This function supports loading GloVe, Word2Vec and FastText word embeddings from remote sources.
    You can also load your own embedding file (txt in Word2Vec or GloVe format) from a given file path.
Glove: an unsupervised learning algorithm for obtaining vector representations for words.
Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and
the resulting representations showcase interesting linear substructures of the word vector
space. (Source from https://nlp.stanford.edu/projects/glove/)
Available sources:
['glove.42B.300d', 'glove.6B.100d', 'glove.6B.200d', 'glove.6B.300d', 'glove.6B.50d', \
'glove.840B.300d', 'glove.twitter.27B.100d', 'glove.twitter.27B.200d', \
'glove.twitter.27B.25d', 'glove.twitter.27B.50d']
Word2Vec: an unsupervised learning algorithm for obtaining vector representations for words.
Training is performed with continuous bag-of-words or skip-gram architecture for computing vector
representations of words.
Available sources:
['GoogleNews-vectors-negative300', 'freebase-vectors-skipgram1000', \
'freebase-vectors-skipgram1000-en']
FastText: an open-source, free, lightweight library that allows users to learn text
representations and text classifiers. It works on standard, generic hardware. Models can later
be reduced in size to even fit on mobile devices. (Source from https://fasttext.cc/)
Available sources:
['cc.af.300', ..., 'cc.en.300', ..., 'crawl-300d-2M', 'crawl-300d-2M-subword', \
'wiki-news-300d-1M', 'wiki-news-300d-1M-subword', \
'wiki.aa', ..., 'wiki.multi.ar', ..., 'wiki.zu']
    Detailed sources can be found via `gluonnlp.embedding.list_sources('FastText')`
For 'wiki.multi' embedding:
Word Translation Without Parallel Data
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou.
https://arxiv.org/abs/1710.04087
Parameters
----------
vocab : gluonnlp.data.Vocab object, default None
A vocabulary on which an embedding matrix is built.
If `vocab` is `None`, then all tokens in the pretrained file will be used.
pretrained_name_or_dir : str, default 'glove.6B.50d'
A file path for a pretrained embedding file or the name of the pretrained token embedding file.
This method would first check if it is a file path.
If not, the method will load from cache or download.
unknown : str, default '<unk>'
To specify the unknown token in the pretrained file.
unk_method : Callable, default None
A function which receives `List[str]` and returns `numpy.ndarray`.
The input of the function is a list of words which are in the `vocab`,
but do not occur in the pretrained file.
        The function should return an embedding matrix for these words.
        If `unk_method` is None, we generate vectors for these words by sampling
        from a normal distribution with the same mean and std as the vectors found in the pretrained file.
It is only useful when `vocab` is not `None`.
Returns
-------
If `vocab` is `None`
numpy.ndarray:
An embedding matrix in the pretrained file.
gluonnlp.data.Vocab:
The vocabulary in the pretrained file.
Otherwise,
numpy.ndarray:
An embedding matrix for the given vocabulary.
"""
assert isinstance(vocab, (Vocab, type(None))), "Only gluonnlp.data.Vocab is supported."
file_path = _check_and_get_path(pretrained_name_or_dir)
if file_path is None:
raise ValueError("Cannot recognize `{}`".format(pretrained_name_or_dir))
if file_path.endswith('.npz'):
matrix, result = _load_embedding_npz(file_path, vocab, unknown)
else:
matrix, result = _load_embedding_txt(file_path, vocab, unknown)
dim = matrix.shape[-1]
logging.info("Pre-trained embedding dim: {}".format(dim))
if vocab is None:
return matrix, result
else:
hit_flags = result
total_hits = sum(hit_flags)
logging.info("Found {} out of {} words in the pretrained embedding.".format(total_hits, len(vocab)))
if total_hits != len(vocab):
if unk_method is None:
found_vectors = matrix[hit_flags]
mean = np.mean(found_vectors, axis=0, keepdims=True)
std = np.std(found_vectors, axis=0, keepdims=True)
unfound_vec_num = len(vocab) - total_hits
r_vecs = np.random.randn(unfound_vec_num, dim).astype('float32') * std + mean
matrix[hit_flags == False] = r_vecs
else:
unk_idxs = (hit_flags == False).nonzero()[0]
matrix[hit_flags == False] = unk_method(vocab.to_tokens(unk_idxs))
return matrix | Load pretrained word embeddings for building an embedding matrix for a given Vocab.
This function supports loading GloVe, Word2Vec and FastText word embeddings from remote sources.
    You can also load your own embedding file (txt in Word2Vec or GloVe format) from a given file path.
Glove: an unsupervised learning algorithm for obtaining vector representations for words.
Training is performed on aggregated global word-word co-occurrence statistics from a corpus, and
the resulting representations showcase interesting linear substructures of the word vector
space. (Source from https://nlp.stanford.edu/projects/glove/)
Available sources:
['glove.42B.300d', 'glove.6B.100d', 'glove.6B.200d', 'glove.6B.300d', 'glove.6B.50d', 'glove.840B.300d', 'glove.twitter.27B.100d', 'glove.twitter.27B.200d', 'glove.twitter.27B.25d', 'glove.twitter.27B.50d']
Word2Vec: an unsupervised learning algorithm for obtaining vector representations for words.
Training is performed with continuous bag-of-words or skip-gram architecture for computing vector
representations of words.
Available sources:
['GoogleNews-vectors-negative300', 'freebase-vectors-skipgram1000', 'freebase-vectors-skipgram1000-en']
FastText: an open-source, free, lightweight library that allows users to learn text
representations and text classifiers. It works on standard, generic hardware. Models can later
be reduced in size to even fit on mobile devices. (Source from https://fasttext.cc/)
Available sources:
['cc.af.300', ..., 'cc.en.300', ..., 'crawl-300d-2M', 'crawl-300d-2M-subword', 'wiki-news-300d-1M', 'wiki-news-300d-1M-subword', 'wiki.aa', ..., 'wiki.multi.ar', ..., 'wiki.zu']
    Detailed sources can be found via `gluonnlp.embedding.list_sources('FastText')`
For 'wiki.multi' embedding:
Word Translation Without Parallel Data
Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou.
https://arxiv.org/abs/1710.04087
Parameters
----------
vocab : gluonnlp.data.Vocab object, default None
A vocabulary on which an embedding matrix is built.
If `vocab` is `None`, then all tokens in the pretrained file will be used.
pretrained_name_or_dir : str, default 'glove.6B.50d'
A file path for a pretrained embedding file or the name of the pretrained token embedding file.
This method would first check if it is a file path.
If not, the method will load from cache or download.
unknown : str, default '<unk>'
To specify the unknown token in the pretrained file.
unk_method : Callable, default None
A function which receives `List[str]` and returns `numpy.ndarray`.
The input of the function is a list of words which are in the `vocab`,
but do not occur in the pretrained file.
        The function should return an embedding matrix for these words.
        If `unk_method` is None, we generate vectors for these words by sampling
        from a normal distribution with the same mean and std as the vectors found in the pretrained file.
It is only useful when `vocab` is not `None`.
Returns
-------
If `vocab` is `None`
numpy.ndarray:
An embedding matrix in the pretrained file.
gluonnlp.data.Vocab:
The vocabulary in the pretrained file.
Otherwise,
numpy.ndarray:
An embedding matrix for the given vocabulary.
| load_embeddings | python | dmlc/gluon-nlp | src/gluonnlp/embedding/embed_loader.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/embedding/embed_loader.py | Apache-2.0 |
def get_fasttext_model(model_name_or_dir='cc.en.300'):
""" Load fasttext model from the binaray file
This method will load fasttext model binaray file from a given file path or remote sources,
and return a `fasttext` model object. See `fasttext.cc` for more usage information.
Available sources:
['wiki-news-300d-1M-subword', 'crawl-300d-2M-subword', \
'cc.af.300', ..., 'cc.en.300', ..., 'wiki.aa', ..., 'wiki.en', ..., 'wiki.zu']
    Detailed sources can be found via `gluonnlp.embedding.list_sources('FastText.bin')`
Parameters
----------
model_name_or_dir : str, default 'cc.en.300'
A file path for a FastText binary file or the name of the FastText model.
This method would first check if it is a file path.
If not, the method will load from cache or download.
Returns
-------
fasttext.FastText._FastText:
A FastText model based on `fasttext` package.
"""
if os.path.exists(model_name_or_dir):
file_path = model_name_or_dir
else:
source = model_name_or_dir
root_path = os.path.expanduser(os.path.join(get_home_dir(), 'embedding'))
embedding_dir = os.path.join(root_path, 'fasttext')
if source not in C.FAST_TEXT_BIN_SHA1:
raise ValueError('Cannot recognize {} for the bin file'.format(source))
file_name, file_hash = C.FAST_TEXT_BIN_SHA1[source]
file_path = _get_file_path('fasttext', file_name, file_hash)
    return fasttext.load_model(file_path) | Load a fasttext model from its binary file
    This method will load a fasttext model binary file from a given file path or remote sources,
and return a `fasttext` model object. See `fasttext.cc` for more usage information.
Available sources:
['wiki-news-300d-1M-subword', 'crawl-300d-2M-subword', 'cc.af.300', ..., 'cc.en.300', ..., 'wiki.aa', ..., 'wiki.en', ..., 'wiki.zu']
    Detailed sources can be found via `gluonnlp.embedding.list_sources('FastText.bin')`
Parameters
----------
model_name_or_dir : str, default 'cc.en.300'
A file path for a FastText binary file or the name of the FastText model.
This method would first check if it is a file path.
If not, the method will load from cache or download.
Returns
-------
fasttext.FastText._FastText:
A FastText model based on `fasttext` package.
| get_fasttext_model | python | dmlc/gluon-nlp | src/gluonnlp/embedding/embed_loader.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/embedding/embed_loader.py | Apache-2.0 |
def forward(self, data, valid_length):
"""
Generate the representation given the inputs.
This is used in training or fine-tuning a Bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length :
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C)
"""
# 1. Embed the data
time_axis = 1 if self.layout == 'NT' else 0
attn_mask = gen_self_attn_mask(data, valid_length, dtype=self._dtype,
attn_type='full', layout=self.layout)
out = data
all_encodings_outputs = []
additional_outputs = []
for layer_idx in range(self._num_layers):
groups_id = layer_idx // self._num_layers_each_group
layer = self.all_encoder_groups[groups_id]
out, attention_weights = layer(out, attn_mask)
# out : [batch_size, seq_len, units]
# attention_weights : [batch_size, num_heads, seq_len, seq_len]
if self._output_all_encodings:
out = npx.sequence_mask(out,
sequence_length=valid_length,
use_sequence_length=True,
axis=time_axis)
all_encodings_outputs.append(out)
if self._output_attention:
additional_outputs.append(attention_weights)
if not self._output_all_encodings:
# if self._output_all_encodings, SequenceMask is already applied above
out = npx.sequence_mask(out, sequence_length=valid_length,
use_sequence_length=True,
axis=time_axis)
return out, additional_outputs
else:
return all_encodings_outputs, additional_outputs |
Generate the representation given the inputs.
This is used in training or fine-tuning a Bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length :
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length=None):
"""Generate the representation given the inputs.
        This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
valid_length :
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units)
- layout = 'TN'
Shape (seq_length, batch_size, units)
pooled_output
This is optional. Shape (batch_size, units)
"""
initial_embedding = self.get_initial_embedding(inputs, token_types)
# Projecting the embedding into units
prev_out = initial_embedding
if self.embed_size != self.units:
prev_out = self.embed_factorized_proj(prev_out)
outputs = []
if self._compute_layout != self._layout:
# Swap input to reflect the compute_layout
contextual_embeddings, additional_outputs = self.encoder(np.swapaxes(prev_out, 0, 1),
valid_length)
contextual_embeddings = np.swapaxes(contextual_embeddings, 0, 1)
else:
contextual_embeddings, additional_outputs = self.encoder(prev_out, valid_length)
outputs.append(contextual_embeddings)
if self.use_pooler:
pooled_out = self.apply_pooling(contextual_embeddings)
outputs.append(pooled_out)
return tuple(outputs) if len(outputs) > 1 else outputs[0] | Generate the representation given the inputs.
        This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
valid_length :
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units)
- layout = 'TN'
Shape (seq_length, batch_size, units)
pooled_output
This is optional. Shape (batch_size, units)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The types of tokens. If it is None, it will be initialized as all zeros.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
- layout = 'NT'
Shape (batch_size, seq_length, C_embed)
- layout = 'TN'
Shape (seq_length, batch_size, C_embed)
"""
if self.layout == 'NT':
batch_axis, time_axis = 0, 1
else:
batch_axis, time_axis = 1, 0
embedding = self.word_embed(inputs)
if token_types is None:
token_types = np.zeros_like(inputs)
type_embedding = self.token_type_embed(token_types)
embedding = embedding + type_embedding
if self.pos_embed_type is not None:
positional_embedding = self.token_pos_embed(npx.arange_like(inputs, axis=time_axis))
positional_embedding = np.expand_dims(positional_embedding, axis=batch_axis)
embedding = embedding + positional_embedding
# Extra layer normalization plus dropout
embedding = self.embed_layer_norm(embedding)
embedding = self.embed_dropout(embedding)
return embedding | Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The types of tokens. If it is None, it will be initialized as all zeros.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
- layout = 'NT'
Shape (batch_size, seq_length, C_embed)
- layout = 'TN'
Shape (seq_length, batch_size, C_embed)
| get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def apply_pooling(self, sequence):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a Bert model.
Get the first token of the whole sequence which is [CLS]
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
Returns
-------
pooled_out
Shape (batch_size, units)
"""
if self.layout == 'NT':
outputs = sequence[:, 0, :]
else:
outputs = sequence[0, :, :]
return self.pooler(outputs) | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a Bert model.
Get the first token of the whole sequence which is [CLS]
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
Returns
-------
pooled_out
Shape (batch_size, units)
| apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def from_cfg(cls, cfg, use_pooler=True, dtype=None) -> 'AlbertModel':
"""
Parameters
----------
cfg
use_pooler
Whether to use pooler
dtype
The dtype of the backbone model
Returns
-------
model
The created AlbertModel
"""
cfg = cls.get_cfg().clone_merge(cfg)
assert cfg.VERSION == 1, 'Wrong version!'
embed_initializer = mx.init.create(*cfg.INITIALIZER.embed)
weight_initializer = mx.init.create(*cfg.INITIALIZER.weight)
bias_initializer = mx.init.create(*cfg.INITIALIZER.bias)
if dtype is None:
dtype = cfg.MODEL.dtype
return cls(vocab_size=cfg.MODEL.vocab_size,
units=cfg.MODEL.units,
hidden_size=cfg.MODEL.hidden_size,
embed_size=cfg.MODEL.embed_size,
num_layers=cfg.MODEL.num_layers,
num_heads=cfg.MODEL.num_heads,
num_groups=cfg.MODEL.num_groups,
max_length=cfg.MODEL.max_length,
hidden_dropout_prob=cfg.MODEL.hidden_dropout_prob,
attention_dropout_prob=cfg.MODEL.attention_dropout_prob,
num_token_types=cfg.MODEL.num_token_types,
pos_embed_type=cfg.MODEL.pos_embed_type,
activation=cfg.MODEL.activation,
layer_norm_eps=cfg.MODEL.layer_norm_eps,
dtype=dtype,
layout=cfg.MODEL.layout,
embed_initializer=embed_initializer,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
use_pooler=use_pooler) |
Parameters
----------
cfg
use_pooler
Whether to use pooler
dtype
The dtype of the backbone model
Returns
-------
model
The created AlbertModel
| from_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of the token. For example, if the inputs contain two sequences,
we will set different token types for the first sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length :
The valid length of each sequence
Shape (batch_size,)
masked_positions :
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units)
- layout = 'TN'
Shape (seq_length, batch_size, units)
pooled_out
Shape (batch_size, units)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
"""
contextual_embeddings, pooled_out = self.backbone_model(inputs, token_types, valid_length)
if self.layout == 'NT':
mlm_features = select_vectors_by_position(contextual_embeddings, masked_positions)
else:
mlm_features = select_vectors_by_position(np.swapaxes(contextual_embeddings, 0, 1),
masked_positions)
mlm_scores = self.mlm_decoder(mlm_features)
return contextual_embeddings, pooled_out, mlm_scores | Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of the token. For example, if the inputs contain two sequences,
we will set different token types for the first sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length :
The valid length of each sequence
Shape (batch_size,)
masked_positions :
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units)
- layout = 'TN'
Shape (seq_length, batch_size, units)
pooled_out
Shape (batch_size, units)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def __init__(self, backbone_cfg,
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
"""
super().__init__()
self.backbone_model = AlbertModel.from_cfg(backbone_cfg)
if weight_initializer is None:
weight_initializer = self.backbone_model.weight_initializer
if bias_initializer is None:
bias_initializer = self.backbone_model.bias_initializer
# Construct sop_classifier for sentence order prediction
self.sop_classifier = nn.Dense(units=2,
in_units=self.backbone_model.units,
weight_initializer=weight_initializer)
self.mlm_decoder = nn.HybridSequential()
# Extra non-linear layer
self.mlm_decoder.add(nn.Dense(units=self.backbone_model.embed_size,
in_units=self.backbone_model.units,
flatten=False,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer))
self.mlm_decoder.add(get_activation(self.backbone_model.activation))
self.mlm_decoder.add(nn.LayerNorm(epsilon=self.backbone_model.layer_norm_eps,
in_channels=self.backbone_model.embed_size))
# only load the dense weights with a re-initialized bias
# parameters are stored in 'word_embed_bias' which is
# not used in original embedding
self.mlm_decoder.add(nn.Dense(units=self.backbone_model.vocab_size,
in_units=self.backbone_model.embed_size,
flatten=False,
bias_initializer=bias_initializer))
self.mlm_decoder[-1].weight = self.backbone_model.word_embed.weight |
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Generate the representation given the inputs.
        This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
Type of the tokens. If the inputs contain two sequences, we will set different
token types for the first sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
masked_positions
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
sop_score :
Shape (batch_size, 2)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
"""
contextual_embeddings, pooled_out = self.backbone_model(inputs, token_types, valid_length)
sop_score = self.sop_classifier(pooled_out)
if self.layout == 'NT':
mlm_features = select_vectors_by_position(contextual_embeddings, masked_positions)
else:
mlm_features = select_vectors_by_position(np.swapaxes(contextual_embeddings, 0, 1),
masked_positions)
mlm_scores = self.mlm_decoder(mlm_features)
return contextual_embeddings, pooled_out, sop_score, mlm_scores | Generate the representation given the inputs.
        This is used in training or fine-tuning an Albert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
Type of the tokens. If the inputs contain two sequences, we will set different
token types for the first sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
masked_positions
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
sop_score :
Shape (batch_size, 2)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def get_pretrained_albert(model_name: str = 'google_albert_base_v2',
root: str = get_model_zoo_home_dir(),
                          load_backbone: bool = True,
                          load_mlm: bool = False)\
-> Tuple[CN, SentencepieceTokenizer, str, str]:
"""Get the pretrained Albert weights
Parameters
----------
model_name
The name of the Albert model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_mlm
Whether to load the weights of MLM
Returns
-------
cfg
Network configuration
tokenizer
The SentencepieceTokenizer
backbone_params_path
Path to the parameter of the backbone network
mlm_params_path
Path to the parameter that includes both the backbone and the MLM
"""
assert model_name in PRETRAINED_URL, '{} is not found. All available are {}'.format(
model_name, list_pretrained_albert())
cfg_path = PRETRAINED_URL[model_name]['cfg']
if isinstance(cfg_path, CN):
cfg = cfg_path
else:
cfg = None
spm_model_path = PRETRAINED_URL[model_name]['spm_model']
vocab_path = PRETRAINED_URL[model_name]['vocab']
params_path = PRETRAINED_URL[model_name]['params']
mlm_params_path = PRETRAINED_URL[model_name]['mlm_params']
local_paths = dict()
download_jobs = [('spm_model', spm_model_path), ('vocab', vocab_path)]
if cfg is None:
download_jobs.append(('cfg', cfg_path))
for key, path in download_jobs:
local_paths[key] = download(url=get_repo_model_zoo_url() + path,
path=os.path.join(root, path),
sha1_hash=FILE_STATS[path])
if load_backbone:
local_params_path = download(url=get_repo_model_zoo_url() + params_path,
path=os.path.join(root, params_path),
sha1_hash=FILE_STATS[params_path])
else:
local_params_path = None
if load_mlm:
local_mlm_params_path = download(url=get_repo_model_zoo_url() + mlm_params_path,
path=os.path.join(root, mlm_params_path),
sha1_hash=FILE_STATS[mlm_params_path])
else:
local_mlm_params_path = None
do_lower = True if 'lowercase' in PRETRAINED_URL[model_name]\
and PRETRAINED_URL[model_name]['lowercase'] else False
tokenizer = SentencepieceTokenizer(local_paths['spm_model'],
vocab=local_paths['vocab'],
lowercase=do_lower)
if cfg is None:
cfg = AlbertModel.get_cfg().clone_merge(local_paths['cfg'])
return cfg, tokenizer, local_params_path, local_mlm_params_path | Get the pretrained Albert weights
Parameters
----------
model_name
The name of the Albert model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_mlm
Whether to load the weights of MLM
Returns
-------
cfg
Network configuration
tokenizer
The SentencepieceTokenizer
backbone_params_path
Path to the parameter of the backbone network
mlm_params_path
Path to the parameter that includes both the backbone and the MLM
| get_pretrained_albert | python | dmlc/gluon-nlp | src/gluonnlp/models/albert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/albert.py | Apache-2.0 |
def __init__(self,
use_pooler: bool = False,
classifier_activation: bool = False,
extract_feature: bool = False,
pooler_activation='tanh',
**kwargs):
"""
Parameters
----------
use_pooler
Whether to use pooler
        classifier_activation
            Whether to add an extra dense + activation (pooler) layer on top of the pooled feature
        extract_feature
            Whether to extract the feature
        pooler_activation
            The activation function used in the pooler layer, default 'tanh'
**kwargs
"""
super().__init__(**kwargs)
assert self._src_vocab_size == self._tgt_vocab_size, \
'Vocab size mismatch between encoder and decoder'
self._vocab_size = self._src_vocab_size
self.extract_feature = extract_feature
self.use_pooler = use_pooler
self.classifier_activation = classifier_activation
if not extract_feature:
if self.tie_weights:
self.tgt_final_layer = \
nn.Dense(units=self._tgt_vocab_size,
in_units=self.dec_units,
flatten=False,
use_bias=False,
dtype=self._dtype)
self.tgt_final_layer.weight = self.tgt_embed_layer.weight
else:
self.tgt_final_layer = \
nn.Dense(units=self._tgt_vocab_size,
in_units=self.dec_units,
flatten=False,
weight_initializer=self.weight_initializer,
use_bias=False,
dtype=self._dtype)
elif use_pooler and classifier_activation:
# Construct pooler
self.pooler = nn.Dense(units=self.units,
in_units=self.units,
flatten=False,
activation=pooler_activation,
weight_initializer=self.weight_initializer,
bias_initializer=self.bias_initializer,
dtype=self._dtype) |
Parameters
----------
use_pooler
Whether to use pooler
        classifier_activation
            Whether to add an extra dense + activation (pooler) layer on top of the pooled feature
        extract_feature
            Whether to extract the feature
        pooler_activation
            The activation function used in the pooler layer, default 'tanh'
**kwargs
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def forward(self, src_data, src_valid_length, tgt_data, tgt_valid_length):
"""
Parameters
----------
src_data
- layout = 'NT'
Shape (batch_size, src_length)
- layout = 'TN'
Shape (src_length, batch_size)
src_valid_length
Shape (batch_size,)
tgt_data
- layout = 'NT'
Shape (batch_size, tgt_length)
- layout = 'TN'
Shape (tgt_length, batch_size)
tgt_valid_length
Shape (batch_size,)
Returns
-------
A tuple contains
- If 'self.extract_feature' = True
- contextual_embedding
- layout = 'NT'
Shape (batch_size, tgt_length, units)
- layout = 'TN'
Shape (tgt_length, batch_size, units)
- pooled_output, optional, only enabled if use_pooler = True
Shape (batch_size, units)
- If 'self.extract_feature' = False
- dec_out
- layout = 'NT'
Shape (batch_size, tgt_length, tgt_vocab_size)
- layout = 'TN'
Shape (tgt_length, batch_size, tgt_vocab_size)
"""
enc_out = self.encode(src_data, src_valid_length)
contextual_embedding = self.decode_seq(tgt_data, tgt_valid_length, enc_out,
src_valid_length)
if self.extract_feature:
if self.use_pooler:
pooled_output = self.apply_pooling(contextual_embedding, tgt_valid_length)
return contextual_embedding, pooled_output
else:
return contextual_embedding
else:
dec_out = self.tgt_final_layer(contextual_embedding)
return dec_out |
Parameters
----------
src_data
- layout = 'NT'
Shape (batch_size, src_length)
- layout = 'TN'
Shape (src_length, batch_size)
src_valid_length
Shape (batch_size,)
tgt_data
- layout = 'NT'
Shape (batch_size, tgt_length)
- layout = 'TN'
Shape (tgt_length, batch_size)
tgt_valid_length
Shape (batch_size,)
Returns
-------
A tuple contains
- If 'self.extract_feature' = True
- contextual_embedding
- layout = 'NT'
Shape (batch_size, tgt_length, units)
- layout = 'TN'
Shape (tgt_length, batch_size, units)
- pooled_output, optional, only enabled if use_pooler = True
Shape (batch_size, units)
- If 'self.extract_feature' = False
- dec_out
- layout = 'NT'
Shape (batch_size, tgt_length, tgt_vocab_size)
- layout = 'TN'
Shape (tgt_length, batch_size, tgt_vocab_size)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def apply_pooling(self, sequence, valid_length):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a BART model.
In BART, the pooled output is the embedding of the last token.
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
valid_length
Valid length of each sequence
Shape (batch_size,)
Returns
-------
outputs
Shape (batch_size, units)
"""
if self._layout == 'NT':
batch_indices = mx.npx.arange_like(sequence, axis=0).astype(mx.np.int32)
outputs = sequence[batch_indices, valid_length - 1]
elif self._layout == 'TN':
batch_indices = mx.npx.arange_like(sequence, axis=1).astype(mx.np.int32)
outputs = sequence[valid_length - 1, batch_indices]
else:
raise NotImplementedError
if self.classifier_activation:
return self.pooler(outputs)
else:
return outputs | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a BART model.
In BART, the pooled output is the embedding of the last token.
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
valid_length
Valid length of each sequence
Shape (batch_size,)
Returns
-------
outputs
Shape (batch_size, units)
| apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def from_cfg(cls, cfg,
dtype=None,
extract_feature=False,
use_pooler=True,
classifier_activation=False):
"""
Parameters
----------
cfg
The configuration
dtype
Data type of the loaded config
extract_feature
Whether to only extract feature.
If so, the output of the layer will be contextual embeddings or the
contextual embedding + pooled output
use_pooler
Whether to use pooler
classifier_activation
Whether to use the classifier activation
Returns
-------
model
The initialized BartModel
"""
cfg = cls.get_cfg().clone_merge(cfg)
embed_initializer = mx.init.create(*cfg.INITIALIZER.embed)
weight_initializer = mx.init.create(*cfg.INITIALIZER.weight)
bias_initializer = mx.init.create(*cfg.INITIALIZER.bias)
if dtype is None:
dtype = cfg.MODEL.dtype
return cls(src_vocab_size=cfg.MODEL.vocab_size,
tgt_vocab_size=cfg.MODEL.vocab_size,
max_src_length=cfg.MODEL.max_src_length,
max_tgt_length=cfg.MODEL.max_tgt_length,
scale_embed=cfg.MODEL.scale_embed,
pos_embed_type=cfg.MODEL.pos_embed_type,
shared_embed=cfg.MODEL.shared_embed,
tie_weights=cfg.MODEL.tie_weights,
data_norm=cfg.MODEL.data_norm,
extract_feature=extract_feature,
use_pooler=use_pooler,
classifier_activation=classifier_activation,
attention_dropout=cfg.MODEL.attention_dropout,
activation_dropout=cfg.MODEL.activation_dropout,
dropout=cfg.MODEL.dropout,
pooler_activation=cfg.MODEL.pooler_activation,
layer_norm_eps=cfg.MODEL.layer_norm_eps,
enc_num_layers=cfg.MODEL.ENCODER.num_layers,
enc_units=cfg.MODEL.ENCODER.units,
enc_num_heads=cfg.MODEL.ENCODER.num_heads,
enc_hidden_size=cfg.MODEL.ENCODER.hidden_size,
enc_recurrent=cfg.MODEL.ENCODER.recurrent,
enc_activation=cfg.MODEL.ENCODER.activation,
enc_pre_norm=cfg.MODEL.ENCODER.pre_norm,
dec_num_layers=cfg.MODEL.DECODER.num_layers,
dec_units=cfg.MODEL.DECODER.units,
dec_num_heads=cfg.MODEL.DECODER.num_heads,
dec_hidden_size=cfg.MODEL.DECODER.hidden_size,
dec_recurrent=cfg.MODEL.DECODER.recurrent,
dec_activation=cfg.MODEL.DECODER.activation,
dec_pre_norm=cfg.MODEL.DECODER.pre_norm,
layout=cfg.MODEL.layout,
embed_initializer=embed_initializer,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=dtype) |
Parameters
----------
cfg
The configuration
dtype
Data type of the loaded config
extract_feature
Whether to only extract feature.
If so, the output of the layer will be contextual embeddings or the
contextual embedding + pooled output
use_pooler
Whether to use pooler
classifier_activation
Whether to use the classifier activation
Returns
-------
model
The initialized BartModel
| from_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def get_pretrained_bart(model_name: str = 'fairseq_bart_base',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True) \
-> Tuple[CN, HuggingFaceByteBPETokenizer, str, List]:
"""Get the pretrained RoBERTa weights
Parameters
----------
model_name
        The name of the BART model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceByteBPETokenizer
params_path
Path to the parameters
additional_output
The additional outputs
"""
assert model_name in PRETRAINED_URL, '{} is not found. All available are {}'.format(
model_name, list_pretrained_bart())
cfg_path = PRETRAINED_URL[model_name]['cfg']
if isinstance(cfg_path, CN):
cfg = cfg_path
else:
cfg = None
merges_path = PRETRAINED_URL[model_name]['merges']
vocab_path = PRETRAINED_URL[model_name]['vocab']
params_path = PRETRAINED_URL[model_name]['params']
local_paths = dict()
download_jobs = [('vocab', vocab_path), ('merges', merges_path)]
if cfg is None:
download_jobs.append(('cfg', cfg_path))
for k, path in download_jobs:
local_paths[k] = download(url=get_repo_model_zoo_url() + path,
path=os.path.join(root, path),
sha1_hash=FILE_STATS[path])
if load_backbone:
local_params_path = download(url=get_repo_model_zoo_url() + params_path,
path=os.path.join(root, params_path),
sha1_hash=FILE_STATS[params_path])
else:
local_params_path = None
do_lower = True if 'lowercase' in PRETRAINED_URL[model_name]\
and PRETRAINED_URL[model_name]['lowercase'] else False
tokenizer = HuggingFaceByteBPETokenizer(
merges_file=local_paths['merges'],
vocab_file=local_paths['vocab'],
lowercase=do_lower)
additional_out = []
if cfg is None:
cfg = BartModel.get_cfg().clone_merge(local_paths['cfg'])
    return cfg, tokenizer, local_params_path, additional_out | Get the pretrained BART weights
Parameters
----------
model_name
        The name of the BART model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceByteBPETokenizer
params_path
Path to the parameters
additional_output
The additional outputs
| get_pretrained_bart | python | dmlc/gluon-nlp | src/gluonnlp/models/bart.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bart.py | Apache-2.0 |
def get_backbone(model_name: str,
root: str = get_model_zoo_home_dir(),
**kwargs) -> Tuple['Block', str, BaseTokenizer, str, List]:
"""Get the backbone network
Parameters
----------
model_name
The name of the pretrained model
root
Downloaded directory of the model zoo
Returns
-------
model_cls
The class to construct the backbone network
cfg
Path to the config file of the backbone
tokenizer
The tokenizer that is bound to the backbone model
backbone_param_path
The path to the pretrained backbone weights
others
The other items returned by the create function.
Will be wrapped into a list
Examples
--------
>>> from gluonnlp.models import get_backbone
>>> model_cls, cfg, tokenizer, backbone_param_path, _ = get_backbone('google_en_cased_bert_base')
>>> model = model_cls.from_cfg(cfg)
>>> model.load_parameters(backbone_param_path)
"""
model_cls, local_create_fn = None, None
for backbone_type in BACKBONE_REGISTRY.list_keys():
ele_model_cls, ele_local_create_fn, list_key_fn = BACKBONE_REGISTRY.get(backbone_type)
if model_name in list_key_fn():
model_cls = ele_model_cls
local_create_fn = ele_local_create_fn
if model_cls is None or local_create_fn is None:
raise KeyError('The backbone model "{}" is not found! '
'Here are all available backbone models = {}'
.format(model_name,
list_backbone_names()))
cfg, tokenizer, local_params_path, *others = local_create_fn(model_name=model_name, root=root,
**kwargs)
return model_cls, cfg, tokenizer, local_params_path, others | Get the backbone network
Parameters
----------
model_name
The name of the pretrained model
root
Downloaded directory of the model zoo
Returns
-------
model_cls
The class to construct the backbone network
cfg
Path to the config file of the backbone
tokenizer
The tokenizer that is bound to the backbone model
backbone_param_path
The path to the pretrained backbone weights
others
The other items returned by the create function.
Will be wrapped into a list
Examples
--------
>>> from gluonnlp.models import get_backbone
>>> model_cls, cfg, tokenizer, backbone_param_path, _ = get_backbone('google_en_cased_bert_base')
>>> model = model_cls.from_cfg(cfg)
>>> model.load_parameters(backbone_param_path)
| get_backbone | python | dmlc/gluon-nlp | src/gluonnlp/models/base.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/base.py | Apache-2.0 |
def forward(self, data, valid_length):
"""
Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
"""
if self.layout == 'NT':
time_axis, batch_axis = 1, 0
else:
time_axis, batch_axis = 0, 1
# 1. Embed the data
attn_mask = gen_self_attn_mask(data, valid_length, dtype=self._dtype,
attn_type='full', layout=self.layout)
out = data
all_encodings_outputs = []
additional_outputs = []
for layer_idx in range(self._num_layers):
layer = self.all_layers[layer_idx]
out, attention_weights = layer(out, attn_mask)
# out : [batch_size, seq_len, units] or [seq_len, batch_size, units]
# attention_weights : [batch_size, num_heads, seq_len, seq_len]
if self._output_all_encodings:
out = npx.sequence_mask(out,
sequence_length=valid_length,
use_sequence_length=True, axis=time_axis)
all_encodings_outputs.append(out)
if self._output_attention:
additional_outputs.append(attention_weights)
if not self._output_all_encodings:
# if self._output_all_encodings, SequenceMask is already applied above
out = npx.sequence_mask(out, sequence_length=valid_length,
use_sequence_length=True, axis=time_axis)
return out, additional_outputs
else:
return all_encodings_outputs, additional_outputs |
Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length):
# pylint: disable=arguments-differ
"""Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (batch_size, seq_length)
valid_length :
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_output
This is optional. Shape (batch_size, units)
"""
initial_embedding = self.get_initial_embedding(inputs, token_types)
prev_out = initial_embedding
outputs = []
if self._compute_layout != self._layout:
# Swap the axes if the compute_layout and layout mismatch
contextual_embeddings, additional_outputs = self.encoder(np.swapaxes(prev_out, 0, 1),
valid_length)
contextual_embeddings = np.swapaxes(contextual_embeddings, 0, 1)
else:
contextual_embeddings, additional_outputs = self.encoder(prev_out, valid_length)
outputs.append(contextual_embeddings)
if self.use_pooler:
pooled_out = self.apply_pooling(contextual_embeddings)
outputs.append(pooled_out)
return tuple(outputs) if len(outputs) > 1 else outputs[0] | Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (batch_size, seq_length)
valid_length :
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_output
This is optional. Shape (batch_size, units)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of tokens. If None, it will be initialized as all zero.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
- layout = 'NT'
Shape (batch_size, seq_length, C_emb)
- layout = 'TN'
Shape (seq_length, batch_size, C_emb)
"""
if self.layout == 'NT':
time_axis, batch_axis = 1, 0
else:
time_axis, batch_axis = 0, 1
embedding = self.word_embed(inputs)
if token_types is None:
token_types = np.zeros_like(inputs)
type_embedding = self.token_type_embed(token_types)
embedding = embedding + type_embedding
if self.pos_embed_type is not None:
positional_embedding = self.token_pos_embed(npx.arange_like(inputs, axis=time_axis))
positional_embedding = np.expand_dims(positional_embedding, axis=batch_axis)
embedding = embedding + positional_embedding
# Extra layer normalization plus dropout
embedding = self.embed_layer_norm(embedding)
embedding = self.embed_dropout(embedding)
return embedding | Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of tokens. If None, it will be initialized as all zero.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
- layout = 'NT'
Shape (batch_size, seq_length, C_emb)
- layout = 'TN'
Shape (seq_length, batch_size, C_emb)
| get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def apply_pooling(self, sequence):
"""Generate the representation given the inputs.
This is used for pre-training or fine-tuning a bert model.
Get the first token of the whole sequence which is [CLS].
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
Returns
-------
outputs
Shape (batch_size, units)
"""
if self.layout == 'NT':
outputs = sequence[:, 0, :]
else:
outputs = sequence[0, :, :]
return self.pooler(outputs) | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a bert model.
Get the first token of the whole sequence which is [CLS].
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
Returns
-------
outputs
Shape (batch_size, units)
| apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def from_cfg(cls, cfg, use_pooler=True, dtype=None) -> 'BertModel':
"""
Parameters
----------
cfg
Configuration
use_pooler
Whether to output the pooled feature
dtype
data type of the model
Returns
-------
ret
The constructed BertModel
"""
cfg = BertModel.get_cfg().clone_merge(cfg)
assert cfg.VERSION == 1, 'Wrong version!'
embed_initializer = mx.init.create(*cfg.INITIALIZER.embed)
weight_initializer = mx.init.create(*cfg.INITIALIZER.weight)
bias_initializer = mx.init.create(*cfg.INITIALIZER.bias)
if dtype is None:
dtype = cfg.MODEL.dtype
return cls(vocab_size=cfg.MODEL.vocab_size,
units=cfg.MODEL.units,
hidden_size=cfg.MODEL.hidden_size,
num_layers=cfg.MODEL.num_layers,
num_heads=cfg.MODEL.num_heads,
max_length=cfg.MODEL.max_length,
hidden_dropout_prob=cfg.MODEL.hidden_dropout_prob,
attention_dropout_prob=cfg.MODEL.attention_dropout_prob,
num_token_types=cfg.MODEL.num_token_types,
pos_embed_type=cfg.MODEL.pos_embed_type,
activation=cfg.MODEL.activation,
layer_norm_eps=cfg.MODEL.layer_norm_eps,
dtype=dtype,
embed_initializer=embed_initializer,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
use_pooler=use_pooler,
layout=cfg.MODEL.layout,
compute_layout=cfg.MODEL.compute_layout) |
Parameters
----------
cfg
Configuration
use_pooler
Whether to output the pooled feature
dtype
data type of the model
Returns
-------
ret
The constructed BertModel
| from_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length :
The valid length of each sequence
Shape (batch_size,)
masked_positions :
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units)
pooled_out
Shape (batch_size, units)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
"""
contextual_embeddings, pooled_out = self.backbone_model(inputs, token_types, valid_length)
if self.layout == 'NT':
mlm_features = select_vectors_by_position(contextual_embeddings, masked_positions)
else:
mlm_features = select_vectors_by_position(np.swapaxes(contextual_embeddings, 0, 1),
masked_positions)
mlm_scores = self.mlm_decoder(mlm_features)
return contextual_embeddings, pooled_out, mlm_scores | Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length :
The valid length of each sequence
Shape (batch_size,)
masked_positions :
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units)
pooled_out
Shape (batch_size, units)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def __init__(self, backbone_cfg,
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
"""
super().__init__()
self.backbone_model = BertModel.from_cfg(backbone_cfg)
if weight_initializer is None:
weight_initializer = self.backbone_model.weight_initializer
if bias_initializer is None:
bias_initializer = self.backbone_model.bias_initializer
# Construct nsp_classifier for next sentence prediction
self.nsp_classifier = nn.Dense(units=2,
in_units=self.backbone_model.units,
weight_initializer=weight_initializer)
self.mlm_decoder = nn.HybridSequential()
# Extra non-linear layer
self.mlm_decoder.add(nn.Dense(units=self.backbone_model.units,
in_units=self.backbone_model.units,
flatten=False,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer))
self.mlm_decoder.add(get_activation(self.backbone_model.activation))
self.mlm_decoder.add(nn.LayerNorm(epsilon=self.backbone_model.layer_norm_eps,
in_channels=self.backbone_model.units))
# only load the dense weights with a re-initialized bias
# parameters are stored in 'word_embed_bias' which is
# not used in original embedding
self.mlm_decoder.add(nn.Dense(units=self.backbone_model.vocab_size,
in_units=self.backbone_model.units,
flatten=False,
bias_initializer=bias_initializer))
self.mlm_decoder[-1].weight = self.backbone_model.word_embed.weight |
Parameters
----------
backbone_cfg
The cfg of the backbone model
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
masked_positions):
"""Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
masked_positions
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
nsp_score :
Shape (batch_size, 2)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
"""
contextual_embeddings, pooled_out = self.backbone_model(inputs, token_types, valid_length)
nsp_score = self.nsp_classifier(pooled_out)
if self.layout == 'NT':
mlm_features = select_vectors_by_position(contextual_embeddings, masked_positions)
else:
mlm_features = select_vectors_by_position(np.swapaxes(contextual_embeddings, 0, 1),
masked_positions)
mlm_scores = self.mlm_decoder(mlm_features)
return contextual_embeddings, pooled_out, nsp_score, mlm_scores | Generate the representation given the inputs.
This is used in training or fine-tuning a bert model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
masked_positions
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
nsp_score :
Shape (batch_size, 2)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def get_pretrained_bert(model_name: str = 'google_en_cased_bert_base',
root: str = get_model_zoo_home_dir(),
                        load_backbone: bool = True,
                        load_mlm: bool = False)\
-> Tuple[CN, HuggingFaceWordPieceTokenizer, str, str]:
"""Get the pretrained bert weights
Parameters
----------
model_name
The name of the bert model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_mlm
Whether to load the weights of MLM
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceWordPieceTokenizer
backbone_params_path
Path to the parameter of the backbone network
mlm_params_path
Path to the parameter that includes both the backbone and the MLM
"""
assert model_name in PRETRAINED_URL, '{} is not found. All available are {}'.format(
model_name, list_pretrained_bert())
cfg_path = PRETRAINED_URL[model_name]['cfg']
if isinstance(cfg_path, CN):
cfg = cfg_path
else:
cfg = None
vocab_path = PRETRAINED_URL[model_name]['vocab']
params_path = PRETRAINED_URL[model_name]['params']
mlm_params_path = PRETRAINED_URL[model_name]['mlm_params']
local_paths = dict()
download_jobs = [('vocab', vocab_path)]
if cfg is None:
download_jobs.append(('cfg', cfg_path))
for key, path in download_jobs:
local_paths[key] = download(url=get_repo_model_zoo_url() + path,
path=os.path.join(root, path),
sha1_hash=FILE_STATS[path])
if load_backbone:
local_params_path = download(url=get_repo_model_zoo_url() + params_path,
path=os.path.join(root, params_path),
sha1_hash=FILE_STATS[params_path])
else:
local_params_path = None
if load_mlm and mlm_params_path is not None:
local_mlm_params_path = download(url=get_repo_model_zoo_url() + mlm_params_path,
path=os.path.join(root, mlm_params_path),
sha1_hash=FILE_STATS[mlm_params_path])
else:
local_mlm_params_path = None
do_lower = True if 'lowercase' in PRETRAINED_URL[model_name]\
and PRETRAINED_URL[model_name]['lowercase'] else False
tokenizer = HuggingFaceWordPieceTokenizer(
vocab_file=local_paths['vocab'],
unk_token='[UNK]',
pad_token='[PAD]',
cls_token='[CLS]',
sep_token='[SEP]',
mask_token='[MASK]',
lowercase=do_lower)
if cfg is None:
cfg = BertModel.get_cfg().clone_merge(local_paths['cfg'])
return cfg, tokenizer, local_params_path, local_mlm_params_path | Get the pretrained bert weights
Parameters
----------
model_name
The name of the bert model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_mlm
Whether to load the weights of MLM
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceWordPieceTokenizer
backbone_params_path
Path to the parameter of the backbone network
mlm_params_path
Path to the parameter that includes both the backbone and the MLM
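A hedged usage sketch (downloading requires network access; `BertModel.from_cfg` and the tokenizer `encode` call follow the Gluon conventions used elsewhere in this repository and are assumptions here rather than verified calls):

cfg, tokenizer, backbone_path, mlm_path = get_pretrained_bert(
    'google_en_cased_bert_base', load_backbone=True, load_mlm=False)
model = BertModel.from_cfg(cfg)            # assumed constructor, mirroring ElectraModel.from_cfg below
model.load_parameters(backbone_path)       # standard Gluon parameter loading
tokens = tokenizer.encode('Hello world')   # assumed HuggingFaceWordPieceTokenizer API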
| get_pretrained_bert | python | dmlc/gluon-nlp | src/gluonnlp/models/bert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/bert.py | Apache-2.0 |
def get_generator_cfg(model_config):
"""
Get the generator configuration from the Electra model config.
    The generator is usually smaller than the discriminator, but in ELECTRA small they have
    the same size, which is a discrepancy between the released source code and the original paper.
"""
generator_cfg = model_config.clone()
generator_layers_scale = model_config.MODEL.generator_layers_scale
generator_units_scale = model_config.MODEL.generator_units_scale
generator_cfg.defrost()
    # round() is used because int(0.3333 * 768) != 256 for ELECTRA base (truncation gives 255)
generator_cfg.MODEL.units = round(generator_units_scale * model_config.MODEL.units)
generator_cfg.MODEL.hidden_size = round(generator_units_scale * model_config.MODEL.hidden_size)
generator_cfg.MODEL.num_heads = round(generator_units_scale * model_config.MODEL.num_heads)
generator_cfg.MODEL.num_layers = round(generator_layers_scale * model_config.MODEL.num_layers)
generator_cfg.freeze()
return generator_cfg |
Get the generator configuration from the Electra model config.
    The generator is usually smaller than the discriminator, but in ELECTRA small they have
    the same size, which is a discrepancy between the released source code and the original paper.
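A tiny numeric illustration of the rounding comment above (plain Python, added for clarity):

units, generator_units_scale = 768, 0.3333            # ELECTRA base numbers
print(int(generator_units_scale * units))             # 255 -> truncation would give the wrong width
print(round(generator_units_scale * units))           # 256 -> the intended generator width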
| get_generator_cfg | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def __init__(self, units=512,
hidden_size=2048,
num_layers=6,
num_heads=8,
attention_dropout_prob=0.,
hidden_dropout_prob=0.,
output_attention=False,
dtype='float32',
output_all_encodings=False,
layer_norm_eps=1E-12,
weight_initializer=TruncNorm(stdev=0.02),
bias_initializer='zeros',
activation='gelu',
layout='NT'):
"""
Parameters
----------
units
The number of units
hidden_size
The hidden size
num_layers
Number of layers
num_heads
Number of heads
attention_dropout_prob
Dropout probability of the attention layer
hidden_dropout_prob
Dropout probability
output_attention
Whether to output the attention weights
dtype
Data type of the weights
output_all_encodings
layer_norm_eps
weight_initializer
bias_initializer
activation
layout
"""
super().__init__()
assert units % num_heads == 0, \
'In ElectraEncoder, The units should be divisible ' \
'by the number of heads. Received units={}, num_heads={}' \
.format(units, num_heads)
self._dtype = dtype
self._layout = layout
self._num_layers = num_layers
self._output_attention = output_attention
self._output_all_encodings = output_all_encodings
self.all_encoder_layers = nn.HybridSequential()
for layer_idx in range(num_layers):
self.all_encoder_layers.add(
TransformerEncoderLayer(units=units,
hidden_size=hidden_size,
num_heads=num_heads,
attention_dropout_prob=attention_dropout_prob,
hidden_dropout_prob=hidden_dropout_prob,
layer_norm_eps=layer_norm_eps,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
activation=activation,
dtype=dtype,
layout=layout)) |
Parameters
----------
units
The number of units
hidden_size
The hidden size
num_layers
Number of layers
num_heads
Number of heads
attention_dropout_prob
Dropout probability of the attention layer
hidden_dropout_prob
Dropout probability
output_attention
Whether to output the attention weights
dtype
Data type of the weights
output_all_encodings
layer_norm_eps
weight_initializer
bias_initializer
activation
layout
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, data, valid_length):
"""Generate the representation given the inputs.
        This is used in training or fine-tuning an Electra model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
"""
if self.layout == 'NT':
time_axis, batch_axis = 1, 0
else:
time_axis, batch_axis = 0, 1
# 1. Embed the data
attn_mask = gen_self_attn_mask(data, valid_length,
dtype=self._dtype,
layout=self._layout,
attn_type='full')
out = data
all_encodings_outputs = []
additional_outputs = []
for layer_idx in range(self._num_layers):
layer = self.all_encoder_layers[layer_idx]
out, attention_weights = layer(out, attn_mask)
# out : [batch_size, seq_len, units]
# attention_weights : [batch_size, num_heads, seq_len, seq_len]
if self._output_all_encodings:
out = npx.sequence_mask(out,
sequence_length=valid_length,
use_sequence_length=True,
axis=time_axis)
all_encodings_outputs.append(out)
if self._output_attention:
additional_outputs.append(attention_weights)
if not self._output_all_encodings:
# if self._output_all_encodings, SequenceMask is already applied above
out = npx.sequence_mask(out, sequence_length=valid_length,
use_sequence_length=True, axis=time_axis)
return out, additional_outputs
else:
return all_encodings_outputs, additional_outputs | Generate the representation given the inputs.
        This is used in training or fine-tuning an Electra model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
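A minimal construction-and-forward sketch for this encoder (an illustrative addition, not executed here; it assumes a CPU context and small toy sizes):

import mxnet as mx
encoder = ElectraEncoder(units=64, hidden_size=256, num_layers=2, num_heads=4)   # layout 'NT'
encoder.initialize()
data = mx.np.random.normal(size=(2, 6, 64)).astype(mx.np.float32)   # (batch_size, seq_length, C)
valid_length = mx.np.array([6, 4], dtype=mx.np.float32)
out, _ = encoder(data, valid_length)                                 # out: (2, 6, 64)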
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length=None):
"""Generate the representation given the inputs.
        This is used in training or fine-tuning an Electra model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_output
This is optional. Shape (batch_size, units)
"""
initial_embedding = self.get_initial_embedding(inputs, token_types)
# Projecting the embedding into units
prev_out = initial_embedding
if self.embed_size != self.units:
prev_out = self.embed_factorized_proj(prev_out)
outputs = []
if self._compute_layout != self._layout:
# Swap the axes if the compute_layout and layout mismatch
contextual_embeddings, additional_outputs = self.encoder(np.swapaxes(prev_out, 0, 1),
valid_length)
contextual_embeddings = np.swapaxes(contextual_embeddings, 0, 1)
else:
contextual_embeddings, additional_outputs = self.encoder(prev_out, valid_length)
outputs.append(contextual_embeddings)
if self.use_pooler:
            # Here we just take the first token ([CLS]) without any extra pooling strategy,
            # which is slightly different from the BERT model's pooled_out. The attribute
            # name is kept the same as in the BERT and ALBERT models, whose default is
            # use_pooler=True.
if self._layout == 'NT':
pooled_out = contextual_embeddings[:, 0, :]
else:
pooled_out = contextual_embeddings[0, :, :]
outputs.append(pooled_out)
return tuple(outputs) if len(outputs) > 1 else outputs[0] | Generate the representation given the inputs.
        This is used in training or fine-tuning an Electra model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_output
This is optional. Shape (batch_size, units)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of tokens. If None, it will be initialized as all zero.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
- layout = 'NT'
Shape (batch_size, seq_length, C_embed)
- layout = 'TN'
Shape (seq_length, batch_size, C_embed)
"""
if self.layout == 'NT':
time_axis, batch_axis = 1, 0
else:
time_axis, batch_axis = 0, 1
embedding = self.word_embed(inputs)
if token_types is None:
token_types = np.zeros_like(inputs)
type_embedding = self.token_type_embed(token_types)
embedding = embedding + type_embedding
if self.pos_embed_type is not None:
positional_embedding = self.token_pos_embed(npx.arange_like(inputs, axis=time_axis))
positional_embedding = np.expand_dims(positional_embedding, axis=batch_axis)
embedding = embedding + positional_embedding
# Extra layer normalization plus dropout
embedding = self.embed_layer_norm(embedding)
embedding = self.embed_dropout(embedding)
return embedding | Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The type of tokens. If None, it will be initialized as all zero.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
- layout = 'NT'
Shape (batch_size, seq_length, C_embed)
- layout = 'TN'
Shape (seq_length, batch_size, C_embed)
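A plain-NumPy illustration (added here, not in the original source) of how the word, token-type and positional embeddings are combined for layout 'NT':

import numpy as np
batch_size, seq_length, C = 2, 5, 8
word_emb = np.random.rand(batch_size, seq_length, C)
type_emb = np.random.rand(batch_size, seq_length, C)
pos_emb = np.random.rand(seq_length, C)                 # one embedding per position
embedding = word_emb + type_emb + pos_emb[None, :, :]   # broadcast over the batch axis
assert embedding.shape == (batch_size, seq_length, C)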
| get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def apply_layerwise_decay(self, layerwise_decay: int,
not_included: Optional[List[str]] = None,
num_additional_layers: int = 2):
"""Apply the layer-wise gradient decay
.. math::
            lr = lr \cdot layerwise\_decay^{(max\_depth - layer\_depth)}
Parameters
----------
layerwise_decay
Power rate of the layer-wise decay
not_included
            A list of parameter names that are not included in the layer-wise decay
num_additional_layers
The number of layers after the current backbone. This helps determine the max depth
"""
# Consider the task specific finetuning layer as the last layer, following with pooler
# In addition, the embedding parameters have the smaller learning rate based on this
# setting.
max_depth = self.num_layers + num_additional_layers
for _, value in self.collect_params('.*embed*').items():
value.lr_mult = layerwise_decay ** max_depth
for (layer_depth, layer) in enumerate(self.encoder.all_encoder_layers):
layer_params = layer.collect_params()
for key, value in layer_params.items():
                if not_included and any(pn in key for pn in not_included):
                    # skip parameters that are explicitly excluded from the layer-wise decay
                    continue
                value.lr_mult = layerwise_decay**(max_depth - (layer_depth + 1)) | Apply the layer-wise gradient decay
        .. math::
            lr = lr \cdot layerwise\_decay^{(max\_depth - layer\_depth)}
Parameters
----------
layerwise_decay
Power rate of the layer-wise decay
not_included
            A list of parameter names that are not included in the layer-wise decay
num_additional_layers
The number of layers after the current backbone. This helps determine the max depth
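A quick numeric illustration of the resulting learning-rate multipliers (plain Python, illustrative numbers only):

layerwise_decay, num_layers, num_additional_layers = 0.8, 12, 2
max_depth = num_layers + num_additional_layers           # 14
embed_mult = layerwise_decay ** max_depth                # smallest multiplier, for the embeddings
layer_mults = [layerwise_decay ** (max_depth - (d + 1)) for d in range(num_layers)]
print(round(embed_mult, 4), round(layer_mults[0], 4), round(layer_mults[-1], 4))
# 0.044 0.055 0.64 -> deeper (earlier) layers get smaller learning rates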
| apply_layerwise_decay | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def frozen_params(self, untunable_depth: int, not_included: Optional[List[str]] = None):
"""Froze part of parameters according to layer depth.
That is, make all layer that shallower than `untunable_depth` untunable
to stop the gradient backward computation and accelerate the training.
Parameters
----------
untunable_depth
the depth of the neural network starting from 1 to number of layers
not_included
            A list of parameter names that are not included in the untunable parameters
"""
all_layers = self.encoder.all_encoder_layers
for _, value in self.collect_params('.*embed*').items():
value.grad_req = 'null'
for layer in all_layers[:untunable_depth]:
for key, value in layer.collect_params().items():
                if not_included and any(pn in key for pn in not_included):
                    # skip parameters that are explicitly excluded from freezing
                    continue
                value.grad_req = 'null' | Freeze part of the parameters according to layer depth.
        That is, make all layers shallower than `untunable_depth` untunable,
        stopping the gradient backward computation and accelerating training.
Parameters
----------
untunable_depth
the depth of the neural network starting from 1 to number of layers
not_included
            A list of parameter names that are not included in the untunable parameters
| frozen_params | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length):
"""Getting the scores of the replaced token detection of the whole sentence
based on the corrupted tokens produced from a generator.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
rtd_scores
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
"""
contextual_embeddings, pooled_out = self.backbone_model(inputs, token_types, valid_length)
rtd_scores = self.rtd_encoder(contextual_embeddings).squeeze(-1)
        return contextual_embeddings, pooled_out, rtd_scores | Get the replaced-token-detection scores for the whole sentence,
        based on the corrupted tokens produced by a generator.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
rtd_scores
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
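The rtd_scores are per-token logits; a hedged post-processing sketch (plain NumPy, added for illustration and not part of the original source) turns them into replacement probabilities:

import numpy as np
rtd_scores = np.array([[2.0, -1.0, 0.0]])        # (batch_size, seq_length) logits, layout 'NT'
probs = 1.0 / (1.0 + np.exp(-rtd_scores))        # sigmoid -> P(token was replaced)
print(probs.round(3))                            # [[0.881 0.269 0.5  ]]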
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def __init__(self, backbone_cfg,
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
backbone_cfg
Configuration of the backbone model
weight_initializer
bias_initializer
"""
super().__init__()
self.backbone_model = ElectraModel.from_cfg(backbone_cfg)
if weight_initializer is None:
weight_initializer = self.backbone_model.weight_initializer
if bias_initializer is None:
bias_initializer = self.backbone_model.bias_initializer
self.mlm_decoder = nn.HybridSequential()
# Extra non-linear layer
self.mlm_decoder.add(nn.Dense(units=self.backbone_model.embed_size,
in_units=self.backbone_model.units,
flatten=False,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer))
self.mlm_decoder.add(get_activation(self.backbone_model.activation))
self.mlm_decoder.add(nn.LayerNorm(epsilon=self.backbone_model.layer_norm_eps,
in_channels=self.backbone_model.embed_size))
        # Only the dense weight is shared with the word embedding below; the bias is
        # re-initialized and stored as a separate parameter ('word_embed_bias'),
        # which does not exist in the original embedding layer
self.mlm_decoder.add(
nn.Dense(
units=self.backbone_model.vocab_size,
in_units=self.backbone_model.embed_size,
flatten=False,
bias_initializer=bias_initializer))
self.mlm_decoder[-1].weight = self.backbone_model.word_embed.weight |
Parameters
----------
backbone_cfg
Configuration of the backbone model
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def tie_embeddings(self, word_embed_params=None,
token_type_embed_params=None,
token_pos_embed_params=None,
embed_layer_norm_params=None):
"""Tie the embedding layers between the backbone and the MLM decoder
Parameters
----------
word_embed_params
token_type_embed_params
token_pos_embed_params
embed_layer_norm_params
"""
self.backbone_model.word_embed.share_parameters(word_embed_params)
self.mlm_decoder[-1].share_parameters(word_embed_params)
self.backbone_model.token_type_embed.share_parameters(token_type_embed_params)
self.backbone_model.token_pos_embed.share_parameters(token_pos_embed_params)
self.backbone_model.embed_layer_norm.share_parameters(embed_layer_norm_params) | Tie the embedding layers between the backbone and the MLM decoder
Parameters
----------
word_embed_params
token_type_embed_params
token_pos_embed_params
embed_layer_norm_params
| tie_embeddings | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length, masked_positions):
"""Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length :
The valid length of each sequence
Shape (batch_size,)
masked_positions :
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
"""
contextual_embeddings, pooled_out = self.backbone_model(inputs, token_types, valid_length)
if self.backbone_model.layout == 'NT':
mlm_features = select_vectors_by_position(contextual_embeddings, masked_positions)
else:
mlm_features = select_vectors_by_position(np.swapaxes(contextual_embeddings, 0, 1),
masked_positions)
mlm_scores = self.mlm_decoder(mlm_features)
return contextual_embeddings, pooled_out, mlm_scores | Getting the scores of the masked positions.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length :
The valid length of each sequence
Shape (batch_size,)
masked_positions :
The masked position of the sequence
Shape (batch_size, num_masked_positions).
Returns
-------
contextual_embedding
- layout = 'NT'
Shape (batch_size, seq_length, units).
- layout = 'TN'
Shape (seq_length, batch_size, units).
pooled_out
Shape (batch_size, units)
mlm_scores :
Shape (batch_size, num_masked_positions, vocab_size)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def __init__(self,
disc_cfg,
uniform_generator=False,
tied_generator=False,
tied_embeddings=True,
disallow_correct=False,
temperature=1.0,
gumbel_eps=1E-9,
dtype='float32',
weight_initializer=None,
bias_initializer=None):
"""
Parameters
----------
disc_cfg :
Config for discriminator model including scaled size for generator
uniform_generator :
            Whether to use a generator with uniform weights, whose mlm_scores are
            totally random. In this case, the discriminator learns from a random 15%
            of the input tokens that are distinct from the original ones.
tied_generator :
Whether to tie backbone model weights of generator and discriminator.
The size of G and D are required to be same if set to True.
tied_embeddings :
Whether to tie the embeddings of generator and discriminator
disallow_correct :
            Whether to disallow the generator from sampling the correct token,
            so that the 15% masked tokens are always fake.
temperature :
Temperature of gumbel distribution for sampling from generator
weight_initializer
bias_initializer
"""
super().__init__()
self._uniform_generator = uniform_generator
self._tied_generator = tied_generator
self._tied_embeddings = tied_embeddings
self._disallow_correct = disallow_correct
self._temperature = temperature
self._gumbel_eps = gumbel_eps
self._dtype = dtype
self.disc_cfg = disc_cfg
self.vocab_size = disc_cfg.MODEL.vocab_size
self.gen_cfg = get_generator_cfg(disc_cfg)
self.discriminator = ElectraDiscriminator(disc_cfg,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer)
self.disc_backbone = self.discriminator.backbone_model
if not uniform_generator and not tied_generator:
self.generator = ElectraGenerator(self.gen_cfg,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer)
if tied_embeddings:
self.generator.tie_embeddings(self.disc_backbone.word_embed.collect_params(),
self.disc_backbone.token_type_embed.collect_params(),
self.disc_backbone.token_pos_embed.collect_params(),
self.disc_backbone.embed_layer_norm.collect_params())
elif tied_generator:
# Reuse the weight of the discriminator backbone model
self.generator = ElectraGenerator(self.gen_cfg,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer)
# TODO(sxjscience, zheyu) Verify
self.generator.backbone_model = self.disc_backbone
elif uniform_generator:
# get the mlm_scores randomly over vocab
self.generator = None |
Parameters
----------
disc_cfg :
Config for discriminator model including scaled size for generator
uniform_generator :
            Whether to use a generator with uniform weights, whose mlm_scores are
            totally random. In this case, the discriminator learns from a random 15%
            of the input tokens that are distinct from the original ones.
tied_generator :
Whether to tie backbone model weights of generator and discriminator.
The size of G and D are required to be same if set to True.
tied_embeddings :
Whether to tie the embeddings of generator and discriminator
disallow_correct :
            Whether to disallow the generator from sampling the correct token,
            so that the 15% masked tokens are always fake.
temperature :
Temperature of gumbel distribution for sampling from generator
weight_initializer
bias_initializer
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, inputs, token_types, valid_length,
original_tokens, masked_positions):
"""Getting the mlm scores of each masked positions from a generator,
then produces the corrupted tokens sampling from a gumbel distribution.
We also get the ground-truth and scores of the replaced token detection
which is output by a discriminator. The ground-truth is an array with same
shape as the input using 1 stand for original token and 0 for replacement.
Notice: There is a problem when the masked positions have duplicate indexs.
Try to avoid that in the data preprocessing process. In addition, loss calculation
should be done in the training scripts as well.
Parameters
----------
inputs
The masked input
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The token types. If the inputs contain two sequences, we will set different token types
for the first sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence.
Shape (batch_size,)
original_tokens
The original tokens that appear in the unmasked input sequence.
Shape (batch_size, num_masked_positions).
masked_positions :
The masked position of the sequence.
Shape (batch_size, num_masked_positions).
Returns
-------
mlm_scores
The masked language model score.
Shape (batch_size, num_masked_positions, vocab_size)
rtd_scores
The replaced-token-detection score. Predicts whether the tokens are replaced or not.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
replaced_inputs
Shape (batch_size, num_masked_positions)
labels
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
"""
if self._uniform_generator:
# generate the corrupt tokens randomly with a mlm_scores vector whose value is all 0
zero_logits = np.zeros((1, 1, self.vocab_size), dtype=self._dtype)
mlm_scores = np.expand_dims(np.zeros_like(masked_positions, dtype=self._dtype),
axis=-1)
mlm_scores = mlm_scores + zero_logits
else:
_, _, mlm_scores = self.generator(inputs, token_types, valid_length, masked_positions)
corrupted_tokens, fake_data, labels = self.get_corrupted_tokens(
inputs, original_tokens, masked_positions, mlm_scores)
# The discriminator takes the same input as the generator and the token_ids are
# replaced with fake data
_, _, rtd_scores = self.discriminator(fake_data, token_types, valid_length)
        return mlm_scores, rtd_scores, corrupted_tokens, labels | Get the mlm scores at each masked position from the generator, then produce
        the corrupted tokens by sampling from a Gumbel distribution. We also get the
        replaced-token-detection scores output by the discriminator, together with the
        ground-truth labels: an array with the same shape as the input in which 1 marks
        a replaced token and 0 marks an original token.
        Note: there is a problem when the masked positions contain duplicate indices,
        so try to avoid that during data preprocessing. The loss calculation is expected
        to be done in the training scripts.
Parameters
----------
inputs
The masked input
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
The token types. If the inputs contain two sequences, we will set different token types
for the first sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence.
Shape (batch_size,)
original_tokens
The original tokens that appear in the unmasked input sequence.
Shape (batch_size, num_masked_positions).
masked_positions :
The masked position of the sequence.
Shape (batch_size, num_masked_positions).
Returns
-------
mlm_scores
The masked language model score.
Shape (batch_size, num_masked_positions, vocab_size)
rtd_scores
The replaced-token-detection score. Predicts whether the tokens are replaced or not.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
replaced_inputs
Shape (batch_size, num_masked_positions)
labels
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
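A hedged sketch of how the two outputs are typically turned into the pretraining loss. The actual loss code lives in the training scripts, and the weighting constant below is the value reported in the ELECTRA paper rather than something defined in this file:

from mxnet.gluon import loss as gloss
mlm_loss_fn = gloss.SoftmaxCrossEntropyLoss()
rtd_loss_fn = gloss.SigmoidBinaryCrossEntropyLoss()
# mlm_scores: (batch, num_masked, vocab);  original_tokens: (batch, num_masked)
# rtd_scores, labels: (batch, seq_length) for layout 'NT'
# mlm_loss = mlm_loss_fn(mlm_scores, original_tokens).mean()
# rtd_loss = rtd_loss_fn(rtd_scores, labels).mean()
# total_loss = mlm_loss + 50.0 * rtd_loss      # lambda = 50 as in the ELECTRA paper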
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def get_corrupted_tokens(self, inputs, original_tokens, masked_positions, logits):
"""
Sample from the generator to create corrupted input.
Parameters
----------
inputs
The masked input
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
original_tokens
The original tokens that appear in the unmasked input sequence
Shape (batch_size, num_masked_positions).
masked_positions
The masked position of the sequence
Shape (batch_size, num_masked_positions).
logits
The logits of each tokens
Shape (batch_size, num_masked_positions, vocab_size)
Returns
-------
corrupted_tokens
Shape (batch_size, )
fake_data
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
labels
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
"""
if self._disallow_correct:
# TODO(sxjscience), Revise the implementation
disallow = npx.one_hot(masked_positions, depth=self.vocab_size, dtype=self._dtype)
logits = logits - 1000.0 * disallow
# gumbel_softmax() samples from the logits with a noise of Gumbel distribution
prob = gumbel_softmax(
logits,
temperature=self._temperature,
eps=self._gumbel_eps,
use_np_gumbel=False)
corrupted_tokens = np.argmax(prob, axis=-1).astype(np.int32)
if self.disc_backbone.layout == 'TN':
inputs = inputs.T
original_data = update_vectors_by_position(inputs, original_tokens, masked_positions)
fake_data = update_vectors_by_position(inputs, corrupted_tokens, masked_positions)
updates_mask = add_vectors_by_position(np.zeros_like(inputs),
np.ones_like(masked_positions), masked_positions)
        # Deal with duplicate zeros in masked_positions, which would otherwise
        # accumulate to a value greater than one at the first index ([CLS])
updates_mask = np.minimum(updates_mask, 1)
labels = updates_mask * np.not_equal(fake_data, original_data)
if self.disc_backbone.layout == 'TN':
return corrupted_tokens, fake_data.T, labels.T
else:
return corrupted_tokens, fake_data, labels |
Sample from the generator to create corrupted input.
Parameters
----------
inputs
The masked input
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
original_tokens
The original tokens that appear in the unmasked input sequence
Shape (batch_size, num_masked_positions).
masked_positions
The masked position of the sequence
Shape (batch_size, num_masked_positions).
logits
The logits of each tokens
Shape (batch_size, num_masked_positions, vocab_size)
Returns
-------
corrupted_tokens
Shape (batch_size, )
fake_data
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
labels
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
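A plain-NumPy walk-through (illustrative token ids, not part of the original source) of how fake_data and labels relate for layout 'NT':

import numpy as np
inputs = np.array([[101, 103, 7592, 103, 102]])      # masked input, 103 standing in for [MASK]
original = np.array([[2023, 2003]])                  # original tokens at the masked positions
sampled = np.array([[2023, 1996]])                   # tokens sampled from the generator
positions = np.array([[1, 3]])
fake_data = inputs.copy()
np.put_along_axis(fake_data, positions, sampled, axis=1)   # roughly update_vectors_by_position
labels = np.zeros_like(inputs)
np.put_along_axis(labels, positions, (sampled != original).astype(inputs.dtype), axis=1)
print(fake_data)   # position 1 gets back its original token, position 3 gets a replacement
print(labels)      # [[0 0 0 1 0]] -> only genuinely replaced positions are labelled 1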
| get_corrupted_tokens | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def get_pretrained_electra(model_name: str = 'google_electra_small',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True,
load_disc: bool = False,
load_gen: bool = False) \
-> Tuple[CN, HuggingFaceWordPieceTokenizer,
Optional[str],
Tuple[Optional[str], Optional[str]]]:
"""Get the pretrained Electra weights
Parameters
----------
model_name
The name of the Electra model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_disc
Whether to load the weights of the discriminator
load_gen
Whether to load the weights of the generator
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceWordPieceTokenizer
backbone_params_path
Path to the parameter of the backbone network
other_net_params_paths
Path to the parameter of the discriminator and the generator.
They will be returned inside a tuple.
"""
assert model_name in PRETRAINED_URL, '{} is not found. All available are {}'.format(
model_name, list_pretrained_electra())
cfg_path = PRETRAINED_URL[model_name]['cfg']
if isinstance(cfg_path, CN):
cfg = cfg_path
else:
cfg = None
vocab_path = PRETRAINED_URL[model_name]['vocab']
params_path = PRETRAINED_URL[model_name]['params']
disc_params_path = PRETRAINED_URL[model_name]['disc_model']
gen_params_path = PRETRAINED_URL[model_name]['gen_model']
local_paths = dict()
download_jobs = [('vocab', vocab_path)]
if cfg is None:
download_jobs.append(('cfg', cfg_path))
for k, path in download_jobs:
local_paths[k] = download(url=get_repo_model_zoo_url() + path,
path=os.path.join(root, path),
sha1_hash=FILE_STATS[path])
if load_backbone:
local_params_path = download(url=get_repo_model_zoo_url() + params_path,
path=os.path.join(root, params_path),
sha1_hash=FILE_STATS[params_path])
else:
local_params_path = None
if load_disc:
local_disc_params_path = download(url=get_repo_model_zoo_url() + disc_params_path,
path=os.path.join(root, disc_params_path),
sha1_hash=FILE_STATS[disc_params_path])
else:
local_disc_params_path = None
if load_gen:
local_gen_params_path = download(url=get_repo_model_zoo_url() + gen_params_path,
path=os.path.join(root, gen_params_path),
sha1_hash=FILE_STATS[gen_params_path])
else:
local_gen_params_path = None
do_lower = True if 'lowercase' in PRETRAINED_URL[model_name]\
and PRETRAINED_URL[model_name]['lowercase'] else False
tokenizer = HuggingFaceWordPieceTokenizer(
vocab_file=local_paths['vocab'],
unk_token='[UNK]',
pad_token='[PAD]',
cls_token='[CLS]',
sep_token='[SEP]',
mask_token='[MASK]',
lowercase=do_lower)
if cfg is None:
cfg = ElectraModel.get_cfg().clone_merge(local_paths['cfg'])
return cfg, tokenizer, local_params_path, (local_disc_params_path, local_gen_params_path) | Get the pretrained Electra weights
Parameters
----------
model_name
The name of the Electra model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_disc
Whether to load the weights of the discriminator
load_gen
Whether to load the weights of the generator
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceWordPieceTokenizer
backbone_params_path
Path to the parameter of the backbone network
other_net_params_paths
Path to the parameter of the discriminator and the generator.
They will be returned inside a tuple.
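A hedged usage sketch (downloading requires network access; `ElectraModel.from_cfg` is the constructor already used elsewhere in this file, while `load_parameters` is the standard Gluon call):

cfg, tokenizer, backbone_path, (disc_path, gen_path) = get_pretrained_electra(
    'google_electra_small', load_backbone=True)
model = ElectraModel.from_cfg(cfg)
model.load_parameters(backbone_path)     # standard Gluon parameter loading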
| get_pretrained_electra | python | dmlc/gluon-nlp | src/gluonnlp/models/electra.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/electra.py | Apache-2.0 |
def forward(self, x, layer_states):
"""
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
Shape (2, batch_size, prev_len, C_in)
- layout = 'TN'
Shape (2, prev_len, batch_size, C_in)
"""
x = self.ln(x)
if self._layout == 'NT':
batch_axis, time_axis = 0, 1
prev_len = npx.shape_array(layer_states)[2]
else:
batch_axis, time_axis = 1, 0
prev_len = npx.shape_array(layer_states)[1]
query, key, value = np.split(self.qkv(x), 3, axis=-1)
if layer_states is not None:
prev_key, prev_value = layer_states[0], layer_states[1]
key = np.concatenate([prev_key, key], axis=time_axis)
value = np.concatenate([prev_value, value], axis=time_axis)
new_states = np.stack([key, value], axis=0)
# gen mask
query_pos = npx.arange_like(query, axis=time_axis)
if prev_len is not None:
query_pos = query_pos + prev_len
key_pos = npx.arange_like(key, axis=time_axis)
# (query_len, key_len)
mask = (npx.reshape(key_pos, (1, -1)) <=
npx.reshape(query_pos, (-1, 1))).astype(self._dtype)
# broadcast to (batch_size, query_len, key_len)
mask = npx.broadcast_like(
np.expand_dims(mask, axis=0),
query,
lhs_axes=0,
rhs_axes=batch_axis
)
query = npx.reshape(query, (-2, -2, self._num_heads, -1))
key = npx.reshape(key, (-2, -2, self._num_heads, -1))
value = npx.reshape(value, (-2, -2, self._num_heads, -1))
out, [_, attn_weight] = self.attention_cell(query, key, value, mask)
out = self.out_proj(out)
out = self.hidden_dropout(out)
return out, new_states |
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
Shape (2, batch_size, prev_len, C_in)
- layout = 'TN'
Shape (2, prev_len, batch_size, C_in)
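A plain-NumPy reconstruction (added for illustration) of the causal mask built inside this attention cell when prev_len cached positions already exist:

import numpy as np
prev_len, query_len = 2, 3
key_len = prev_len + query_len
query_pos = np.arange(query_len) + prev_len                    # new tokens sit after the cache
key_pos = np.arange(key_len)
mask = (key_pos[None, :] <= query_pos[:, None]).astype(np.float32)
print(mask)
# [[1. 1. 1. 0. 0.]
#  [1. 1. 1. 1. 0.]
#  [1. 1. 1. 1. 1.]]  -> each query attends to the cache plus itself and earlier new tokens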
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def forward(self, x, layer_states):
"""
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
Shape (2, batch_size, prev_len, C_in)
- layout = 'TN'
Shape (2, prev_len, batch_size, C_in)
Returns
-------
new_x
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
new_states
- layout = 'NT'
Shape (2, batch_size, prev_len + seq_length, C_in)
- layout = 'TN'
Shape (2, prev_len + seq_length, batch_size, C_in)
"""
h, new_layer_states = self.atten(x, layer_states)
x = x + h
h = self.ffn(x)
return h, new_layer_states |
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
layer_states
- layout = 'NT'
Shape (2, batch_size, prev_len, C_in)
- layout = 'TN'
Shape (2, prev_len, batch_size, C_in)
Returns
-------
new_x
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
new_states
- layout = 'NT'
Shape (2, batch_size, prev_len + seq_length, C_in)
- layout = 'TN'
Shape (2, prev_len + seq_length, batch_size, C_in)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def forward(self, x, states):
"""
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The previous states
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len, C_in)]
- layout = 'TN'
Shape (num_layers, 2, prev_len, batch_size, C_in)]
Returns
-------
new_x
Output
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
new_states
The new states
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len + seq_length, C_in)
- layout = 'TN'
Shape (num_layers, 2, prev_len + seq_length, batch_size, C_in)
"""
prev_len = npx.shape_array(states)[3] if self._layout == 'NT' else \
npx.shape_array(states)[2]
x = self.get_initial_embedding(x, prev_len)
if self._layout != self._compute_layout:
x = np.swapaxes(x, 0, 1)
states = np.swapaxes(states, 2, 3)
new_states = []
for layer_idx in range(self._num_layers):
layer_states = None if states is None else states[layer_idx]
x, new_layer_states = self._layers[layer_idx](x, layer_states)
new_states.append(new_layer_states)
new_states = np.stack(new_states, axis=0)
x = self._final_ln(x)
if self._layout != self._compute_layout:
x = np.swapaxes(x, 0, 1)
new_states = np.swapaxes(new_states, 2, 3)
return x, new_states |
Parameters
----------
x
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The previous states
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len, C_in)]
- layout = 'TN'
Shape (num_layers, 2, prev_len, batch_size, C_in)]
Returns
-------
new_x
Output
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
new_states
The new states
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len + seq_length, C_in)
- layout = 'TN'
Shape (num_layers, 2, prev_len + seq_length, batch_size, C_in)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def get_initial_embedding(self, inputs, prev_len):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
prev_len
The previous length. It will be a scalar.
Returns
-------
embedding
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
"""
embedding = self._embed(inputs)
if self._layout == 'NT':
batch_axis, time_axis = 0, 1
else:
batch_axis, time_axis = 1, 0
if self._pos_embed_type is not None:
pos = npx.arange_like(inputs, axis=time_axis)
if prev_len is not None:
pos = pos + prev_len
positional_embedding = self._pos_embed(pos)
positional_embedding = np.expand_dims(positional_embedding, axis=batch_axis)
embedding = embedding + positional_embedding
embedding = self._embed_dropout(embedding)
return embedding | Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
prev_len
The previous length. It will be a scalar.
Returns
-------
embedding
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
| get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def init_states(self, batch_size, ctx, dtype=None):
"""Initialize the states required for incremental decoding
Returns
-------
init_states
- layout = 'NT'
Shape (num_layers, 2, batch_size, 0, C_in)
- layout = 'TN'
Shape (num_layers, 2, 0, batch_size, C_in)
"""
if dtype is None:
dtype = self._dtype
return mx.np.zeros(shape=(self._num_layers, 2, batch_size, 0,
self._units), ctx=ctx, dtype=dtype) if self.layout == 'NT' else \
mx.np.zeros(shape=(self._num_layers, 2, 0, batch_size,
self._units), ctx=ctx, dtype=dtype) | Initialize the states required for incremental decoding
Returns
-------
init_states
- layout = 'NT'
Shape (num_layers, 2, batch_size, 0, C_in)
- layout = 'TN'
Shape (num_layers, 2, 0, batch_size, C_in)
| init_states | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def forward(self, inputs, states):
"""Getting the logits. This can be used for language modeling.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The states.
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len, C_in)
- layout = 'TN'
Shape (num_layers, 2, prev_len, batch_size, C_in)
Returns
-------
logits
- layout = 'NT'
Shape (batch_size, seq_length, vocab_size).
- layout = 'TN'
Shape (seq_length, batch_size, vocab_size).
new_states
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len + seq_length, C_in)
- layout = 'TN'
Shape (num_layers, 2, prev_len + seq_length, batch_size, C_in)
"""
contextual_embeddings, new_states = self._backbone_model(inputs, states)
logits = self._lm_head(contextual_embeddings)
return logits, new_states | Getting the logits. This can be used for language modeling.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
states
The states.
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len, C_in)
- layout = 'TN'
Shape (num_layers, 2, prev_len, batch_size, C_in)
Returns
-------
logits
- layout = 'NT'
Shape (batch_size, seq_length, vocab_size).
- layout = 'TN'
Shape (seq_length, batch_size, vocab_size).
new_states
- layout = 'NT'
Shape (num_layers, 2, batch_size, prev_len + seq_length, C_in)
- layout = 'TN'
Shape (num_layers, 2, prev_len + seq_length, batch_size, C_in)
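A hedged greedy incremental-decoding sketch for layout 'NT'. The model construction is omitted, so the calls are shown commented out; `lm_model`, `prompt_ids` and `max_new_tokens` are hypothetical names, while `_backbone_model` and `init_states` come from the code above:

import mxnet as mx
# states = lm_model._backbone_model.init_states(batch_size=1, ctx=mx.cpu())
# logits, states = lm_model(prompt_ids, states)                 # prompt_ids: (1, prefix_len) int32
# next_id = mx.np.argmax(logits[:, -1, :], axis=-1)
# for _ in range(max_new_tokens):
#     step_input = next_id.reshape((1, 1)).astype(mx.np.int32)
#     logits, states = lm_model(step_input, states)             # states keep growing along prev_len
#     next_id = mx.np.argmax(logits[:, -1, :], axis=-1)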
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def get_pretrained_gpt2(model_name: str = 'gpt2_124M',
root: str = get_model_zoo_home_dir(),
load_backbone: bool = True,
load_lm: bool = False)\
-> Tuple[CN, HuggingFaceByteBPETokenizer, str, str]:
"""Get the pretrained GPT-2 weights
Parameters
----------
model_name
The name of the GPT-2 model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_lm
Whether to load the weights of LM
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceByteBPETokenizer
params_path
Path to the parameters
lm_params_path
Path to the parameter that includes both the backbone and the LM
"""
assert model_name in PRETRAINED_URL, '{} is not found. All available are {}'.format(
model_name, list_pretrained_gpt2())
cfg_path = PRETRAINED_URL[model_name]['cfg']
if isinstance(cfg_path, CN):
cfg = cfg_path
else:
cfg = None
merges_path = PRETRAINED_URL[model_name]['merges']
vocab_path = PRETRAINED_URL[model_name]['vocab']
params_path = PRETRAINED_URL[model_name]['params']
lm_params_path = PRETRAINED_URL[model_name]['lm_params']
local_paths = dict()
download_jobs = [('vocab', vocab_path), ('merges', merges_path)]
if cfg is None:
download_jobs.append(('cfg', cfg_path))
for k, path in download_jobs:
local_paths[k] = download(url=get_repo_model_zoo_url() + path,
path=os.path.join(root, path),
sha1_hash=FILE_STATS[path])
if load_backbone:
local_params_path = download(url=get_repo_model_zoo_url() + params_path,
path=os.path.join(root, params_path),
sha1_hash=FILE_STATS[params_path])
else:
local_params_path = None
if load_lm and lm_params_path is not None:
local_lm_params_path = download(url=get_repo_model_zoo_url() + lm_params_path,
path=os.path.join(root, lm_params_path),
sha1_hash=FILE_STATS[lm_params_path])
else:
local_lm_params_path = None
tokenizer = HuggingFaceByteBPETokenizer(
merges_file=local_paths['merges'],
vocab_file=local_paths['vocab'])
if cfg is None:
cfg = GPT2Model.get_cfg().clone_merge(local_paths['cfg'])
return cfg, tokenizer, local_params_path, local_lm_params_path | Get the pretrained GPT-2 weights
Parameters
----------
model_name
The name of the GPT-2 model.
root
The downloading root
load_backbone
Whether to load the weights of the backbone network
load_lm
Whether to load the weights of LM
Returns
-------
cfg
Network configuration
tokenizer
The HuggingFaceByteBPETokenizer
params_path
Path to the parameters
lm_params_path
Path to the parameter that includes both the backbone and the LM
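A hedged usage sketch (network access required; the class name and the tokenizer `encode` signature below are assumptions, not verified against the rest of the file):

cfg, tokenizer, params_path, lm_params_path = get_pretrained_gpt2('gpt2_124M', load_lm=True)
# token_ids = tokenizer.encode('GluonNLP is great', int)   # assumed encode(text, output_type) API
# lm_model = GPT2ForLM(cfg); lm_model.load_parameters(lm_params_path)   # assumed class name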
| get_pretrained_gpt2 | python | dmlc/gluon-nlp | src/gluonnlp/models/gpt2.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/gpt2.py | Apache-2.0 |
def __init__(self,
use_bottleneck: bool = True,
units: int = 512,
real_units: int = 128,
hidden_size: int = 2048,
num_heads: int = 8,
num_stacked_ffn: int = 1,
bottleneck_strategy: str = 'qk_sharing',
attention_dropout_prob: float = 0.1,
hidden_dropout_prob: float = 0.1,
activation_dropout_prob: float = 0.0,
activation: str = 'gelu',
normalization: str = 'layer_norm',
layer_norm_eps: float = 1e-12,
use_qkv_bias: bool = True,
weight_initializer: Optional[InitializerType] = None,
bias_initializer: Optional[InitializerType] = 'zeros',
dtype='float32',
layout='NT'):
"""
Parameters
----------
use_bottleneck
Whether to use the bottleneck layer.
units
size of inter-bottleneck
real_units
size of intra-bottleneck
hidden_size
size of feed-forward network
num_heads
num_stacked_ffn
attention_dropout_prob
hidden_dropout_prob
activation_dropout_prob
activation
normalization
layer_norm_eps
            only valid when normalization is 'layer_norm'
use_qkv_bias
weight_initializer
bias_initializer
dtype
Data type of the block
layout
Layout of the input + output
"""
super().__init__()
self._use_bottleneck = use_bottleneck
self._units = units
self._real_units = real_units
self._num_heads = num_heads
self._num_stacked_ffn = num_stacked_ffn
self._bottleneck_strategy = bottleneck_strategy
self._dtype = dtype
self._layout = layout
        assert real_units % num_heads == 0, 'real_units must be divisible by the number of heads'
self.dropout_layer = nn.Dropout(hidden_dropout_prob)
if use_bottleneck:
self.in_bottleneck_proj = nn.Dense(units=real_units,
in_units=units,
flatten=False,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
self.in_bottleneck_ln = get_norm_layer(normalization=normalization,
in_channels=real_units,
epsilon=layer_norm_eps)
self.out_bottleneck_proj = nn.Dense(units=units,
in_units=real_units,
flatten=False,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
self.out_bottleneck_ln = get_norm_layer(normalization=normalization,
in_channels=units,
epsilon=layer_norm_eps)
if bottleneck_strategy == 'qk_sharing':
self.shared_qk = nn.Dense(units=real_units,
in_units=units,
flatten=False,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
self.shared_qk_ln = get_norm_layer(normalization=normalization,
in_channels=real_units,
epsilon=layer_norm_eps)
self.attention_proj = nn.Dense(units=real_units,
flatten=False,
in_units=real_units,
use_bias=True,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
# The in_units of qkv varies according to the sharing strategy
if self._use_bottleneck:
if self._bottleneck_strategy == 'qk_sharing':
attn_query_in_units = real_units
attn_key_in_units = real_units
attn_value_in_units = units
elif self._bottleneck_strategy == 'from_bottleneck':
attn_query_in_units = real_units
attn_key_in_units = real_units
attn_value_in_units = real_units
elif self._bottleneck_strategy == 'from_input':
attn_query_in_units = units
attn_key_in_units = units
attn_value_in_units = units
else:
raise NotImplementedError
else:
attn_query_in_units = units
attn_key_in_units = units
attn_value_in_units = units
self.attn_query = nn.Dense(units=real_units,
in_units=attn_query_in_units,
flatten=False,
use_bias=use_qkv_bias,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
self.attn_key = nn.Dense(units=real_units,
in_units=attn_key_in_units,
flatten=False,
use_bias=use_qkv_bias,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
self.attn_value = nn.Dense(units=real_units,
in_units=attn_value_in_units,
flatten=False,
use_bias=use_qkv_bias,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
dtype=self._dtype)
attention_layout = 'NTK' if self._layout == 'NT' else 'TNK'
self.attention_cell = \
MultiHeadAttentionCell(
query_units=real_units,
num_heads=num_heads,
attention_dropout=attention_dropout_prob,
scaled=True,
dtype=self._dtype,
layout=attention_layout
)
self.layer_norm = get_norm_layer(normalization=normalization,
in_channels=real_units,
epsilon=layer_norm_eps)
self.stacked_ffn = nn.HybridSequential()
for ffn_idx in range(num_stacked_ffn):
is_last_ffn = (ffn_idx == (num_stacked_ffn - 1))
# only apply dropout on last ffn layer if use bottleneck
dropout = float(hidden_dropout_prob * (not use_bottleneck) * is_last_ffn)
self.stacked_ffn.add(
PositionwiseFFN(units=real_units,
hidden_size=hidden_size,
dropout=dropout,
activation_dropout=activation_dropout_prob,
weight_initializer=weight_initializer,
bias_initializer=bias_initializer,
activation=activation,
normalization=normalization,
layer_norm_eps=layer_norm_eps,
dtype=self._dtype)) |
Parameters
----------
use_bottleneck
Whether to use the bottleneck layer.
units
size of inter-bottleneck
real_units
size of intra-bottleneck
hidden_size
size of feed-forward network
num_heads
num_stacked_ffn
attention_dropout_prob
hidden_dropout_prob
activation_dropout_prob
activation
normalization
layer_norm_eps
            only valid when normalization is 'layer_norm'
use_qkv_bias
weight_initializer
bias_initializer
dtype
Data type of the block
layout
Layout of the input + output
| __init__ | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
def forward(self, data, attn_mask):
"""
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
attn_mask
The attention mask
Shape (batch_size, seq_length, seq_length)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
attn_weight
Shape (batch_size, seq_length, seq_length)
"""
if self._use_bottleneck:
bn_proj = self.in_bottleneck_proj(data)
bn_proj = self.in_bottleneck_ln(bn_proj)
input = bn_proj
if self._bottleneck_strategy == 'qk_sharing':
# for Mobile Bert
qk_shared = self.shared_qk(data)
qk_shared = self.shared_qk_ln(qk_shared)
query = qk_shared
key = qk_shared
value = data
elif self._bottleneck_strategy == 'from_bottleneck':
# for Mobile Bert Tiny
query = bn_proj
key = bn_proj
value = bn_proj
elif self._bottleneck_strategy == 'from_input':
query = data
key = data
value = data
else:
raise NotImplementedError
else:
input = data
query = data
key = data
value = data
query = npx.reshape(self.attn_query(query), (-2, -2, self._num_heads, -1))
key = npx.reshape(self.attn_key(key), (-2, -2, self._num_heads, -1))
value = npx.reshape(self.attn_value(value), (-2, -2, self._num_heads, -1))
out, [_, attn_weight] = self.attention_cell(query, key, value, attn_mask)
out = self.attention_proj(out)
if not self._use_bottleneck:
out = self.dropout_layer(out)
out = out + input
out = self.layer_norm(out)
for ffn_idx in range(self._num_stacked_ffn):
ffn = self.stacked_ffn[ffn_idx]
out = ffn(out)
if self._use_bottleneck:
out = self.out_bottleneck_proj(out)
out = self.dropout_layer(out)
out = out + data
out = self.out_bottleneck_ln(out)
return out, attn_weight |
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C_in)
- layout = 'TN'
Shape (seq_length, batch_size, C_in)
attn_mask
The attention mask
Shape (batch_size, seq_length, seq_length)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
attn_weight
Shape (batch_size, seq_length, seq_length)
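A plain-NumPy shape walk-through of the bottleneck path (stand-in weight matrices, added for illustration only):

import numpy as np
batch, seq, units, real_units = 2, 4, 512, 128
data = np.random.rand(batch, seq, units)
W_in = np.random.rand(units, real_units)       # stand-in for in_bottleneck_proj
W_out = np.random.rand(real_units, units)      # stand-in for out_bottleneck_proj
bn = data @ W_in                               # (2, 4, 128): attention and the stacked FFN run here
out = bn @ W_out + data                        # project back and add the residual at full width
assert out.shape == (batch, seq, units)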
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
def forward(self, data, valid_length):
"""
Generate the representation given the inputs.
        This is used when training or fine-tuning a MobileBERT model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
"""
if self._layout == 'NT':
batch_axis, time_axis = 0, 1
elif self._layout == 'TN':
batch_axis, time_axis = 1, 0
else:
raise NotImplementedError('Received layout="{}". '
'Only "NT" and "TN" are supported.'.format(self._layout))
        # Generate the self-attention mask ('full' attention over the valid tokens)
attn_mask = gen_self_attn_mask(data, valid_length,
dtype=self._dtype,
layout=self._layout,
attn_type='full')
out = data
all_encodings_outputs = []
additional_outputs = []
all_encodings_outputs.append(out)
for layer_idx in range(self._num_layers):
layer = self.all_layers[layer_idx]
out, attention_weights = layer(out, attn_mask)
            # out : [batch_size, seq_len, units] for layout 'NT' ([seq_len, batch_size, units] for 'TN')
# attention_weights : [batch_size, num_heads, seq_len, seq_len]
if self._output_all_encodings:
out = npx.sequence_mask(out,
sequence_length=valid_length,
use_sequence_length=True,
axis=time_axis)
all_encodings_outputs.append(out)
if self._output_attention:
additional_outputs.append(attention_weights)
if not self._output_all_encodings:
# if self._output_all_encodings, SequenceMask is already applied above
out = npx.sequence_mask(out, sequence_length=valid_length,
use_sequence_length=True,
axis=time_axis)
return out, additional_outputs
else:
return all_encodings_outputs, additional_outputs |
Generate the representation given the inputs.
This is used when training or fine-tuning a MobileBERT model.
Parameters
----------
data
- layout = 'NT'
Shape (batch_size, seq_length, C)
- layout = 'TN'
Shape (seq_length, batch_size, C)
valid_length
Shape (batch_size,)
Returns
-------
out
- layout = 'NT'
Shape (batch_size, seq_length, C_out)
- layout = 'TN'
Shape (seq_length, batch_size, C_out)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
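The encoder forward above delegates mask construction to gen_self_attn_mask(..., attn_type='full'). The numpy sketch below is a hypothetical reconstruction of what such a 'full' mask looks like, under my reading that a position pair (i, j) is enabled only when both positions fall inside the valid length; treat the exact semantics of the real helper as an assumption.

import numpy as np

def full_self_attn_mask(valid_length, seq_length):
    """Build a (batch, seq, seq) mask from per-sample valid lengths."""
    steps = np.arange(seq_length)
    valid = steps[None, :] < np.asarray(valid_length)[:, None]          # (batch, seq)
    return (valid[:, :, None] & valid[:, None, :]).astype(np.float32)   # (batch, seq, seq)

mask = full_self_attn_mask(valid_length=[3, 5], seq_length=5)
print(mask[0])   # only the top-left 3x3 block is 1 for the first sample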
def forward(self, inputs, token_types, valid_length):
# pylint: disable=arguments-differ
"""Generate the representation given the inputs.
        This is used when training or fine-tuning a MobileBERT model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
Returns
-------
        contextual_embedding
            - layout = 'NT'
                Shape (batch_size, seq_length, units)
            - layout = 'TN'
                Shape (seq_length, batch_size, units)
        pooled_output
            The pooled [CLS] representation, returned only when use_pooler is True.
            Shape (batch_size, units)
"""
embedding = self.get_initial_embedding(inputs, token_types)
if self._compute_layout != self._layout:
contextual_embeddings, additional_outputs = self.encoder(np.swapaxes(embedding, 0, 1),
valid_length)
contextual_embeddings = np.swapaxes(contextual_embeddings, 0, 1)
else:
contextual_embeddings, additional_outputs = self.encoder(embedding, valid_length)
if self.use_pooler:
pooled_out = self.apply_pooling(contextual_embeddings)
return contextual_embeddings, pooled_out
else:
return contextual_embeddings | Generate the representation given the inputs.
This is used when training or fine-tuning a MobileBERT model.
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
If the inputs contain two sequences, we will set different token types for the first
sentence and the second sentence.
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
valid_length
The valid length of each sequence
Shape (batch_size,)
Returns
-------
contextual_embedding
    - layout = 'NT'
        Shape (batch_size, seq_length, units)
    - layout = 'TN'
        Shape (seq_length, batch_size, units)
pooled_output
    The pooled [CLS] representation, returned only when use_pooler is True.
    Shape (batch_size, units)
| forward | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
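When compute_layout differs from layout, the forward above transposes the embedding before calling the encoder and transposes the result back. A minimal numpy illustration of that round trip (pure numpy, no gluon-nlp dependency; the dimensions are arbitrary):

import numpy as np

batch, seq, units = 2, 6, 4
embedding_nt = np.random.randn(batch, seq, units)   # I/O layout 'NT'
embedding_tn = np.swapaxes(embedding_nt, 0, 1)      # compute layout 'TN'
back_to_nt = np.swapaxes(embedding_tn, 0, 1)        # swap back after the encoder
assert np.array_equal(back_to_nt, embedding_nt)
print(embedding_nt.shape, embedding_tn.shape)       # (2, 6, 4) (6, 2, 4)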
def get_initial_embedding(self, inputs, token_types=None):
"""Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
Type of tokens. If None, it will be initialized as all zero
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
"""
if self._layout == 'NT':
batch_axis, time_axis = 0, 1
elif self._layout == 'TN':
batch_axis, time_axis = 1, 0
else:
raise NotImplementedError
word_embedding = self.word_embed(inputs)
if self.trigram_embed:
if self._layout == 'NT':
word_embedding = np.concatenate(
[np.pad(word_embedding[:, 1:], ((0, 0), (0, 1), (0, 0))),
word_embedding,
np.pad(word_embedding[:, :-1], ((0, 0), (1, 0), (0, 0)))], axis=-1)
elif self._layout == 'TN':
word_embedding = np.concatenate(
[np.pad(word_embedding[1:, :], ((0, 1), (0, 0), (0, 0))),
word_embedding,
np.pad(word_embedding[:-1, :], ((1, 0), (0, 0), (0, 0)))], axis=-1)
else:
raise NotImplementedError
        # Project the word embedding to `units` (needed when trigram embedding is used
        # or when embed_size != units)
if self.trigram_embed or self.embed_size != self.units:
word_embedding = self.embed_factorized_proj(word_embedding)
if token_types is None:
token_types = np.zeros_like(inputs)
type_embedding = self.token_type_embed(token_types)
embedding = word_embedding + type_embedding
if self.pos_embed_type is not None:
positional_embedding =\
self.token_pos_embed(npx.arange_like(embedding, axis=time_axis))
positional_embedding = np.expand_dims(positional_embedding, axis=batch_axis)
embedding = embedding + positional_embedding
# Extra layer normalization plus dropout
embedding = self.embed_layer_norm(embedding)
embedding = self.embed_dropout(embedding)
return embedding | Get the initial token embeddings that considers the token type and positional embeddings
Parameters
----------
inputs
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
token_types
Type of tokens. If None, it will be initialized as all zero
- layout = 'NT'
Shape (batch_size, seq_length)
- layout = 'TN'
Shape (seq_length, batch_size)
Returns
-------
embedding
The initial embedding that will be fed into the encoder
| get_initial_embedding | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
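For the trigram_embed branch above (layout 'NT'), the following numpy sketch reproduces the same concatenation of shifted embeddings; the tiny dimensions are arbitrary and chosen only for illustration.

import numpy as np

batch, seq, embed = 1, 4, 2
word_embedding = np.arange(batch * seq * embed, dtype=np.float32).reshape(batch, seq, embed)

next_tok = np.pad(word_embedding[:, 1:], ((0, 0), (0, 1), (0, 0)))   # shift left, zero-pad last step
prev_tok = np.pad(word_embedding[:, :-1], ((0, 0), (1, 0), (0, 0)))  # shift right, zero-pad first step
trigram = np.concatenate([next_tok, word_embedding, prev_tok], axis=-1)
print(trigram.shape)  # (1, 4, 6): 3 * embed channels, later projected back to `units`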
def apply_pooling(self, sequence):
"""Generate the representation given the inputs.
        This is used for pre-training or fine-tuning a MobileBERT model.
        It takes the encoding of the first token ([CLS]) and, if classifier_activation
        is set, passes it through the pooler dense layer.
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
Returns
-------
outputs
Shape (batch_size, units)
"""
if self._layout == 'NT':
outputs = sequence[:, 0, :]
else:
outputs = sequence[0, :, :]
if self.classifier_activation:
return self.pooler(outputs)
else:
return outputs | Generate the representation given the inputs.
This is used for pre-training or fine-tuning a MobileBERT model.
It takes the encoding of the first token ([CLS]) and, if classifier_activation
is set, passes it through the pooler dense layer.
Parameters
----------
sequence
- layout = 'NT'
Shape (batch_size, sequence_length, units)
- layout = 'TN'
Shape (sequence_length, batch_size, units)
Returns
-------
outputs
Shape (batch_size, units)
| apply_pooling | python | dmlc/gluon-nlp | src/gluonnlp/models/mobilebert.py | https://github.com/dmlc/gluon-nlp/blob/master/src/gluonnlp/models/mobilebert.py | Apache-2.0 |
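apply_pooling above reduces to slicing out the first ([CLS]) token according to the layout. A minimal numpy sketch of that selection (the optional pooler dense layer is omitted here):

import numpy as np

batch, seq, units = 2, 5, 3
seq_nt = np.random.randn(batch, seq, units)   # layout 'NT'
seq_tn = np.swapaxes(seq_nt, 0, 1)            # layout 'TN'

cls_nt = seq_nt[:, 0, :]                      # (batch, units)
cls_tn = seq_tn[0, :, :]                      # (batch, units)
assert np.allclose(cls_nt, cls_tn)            # same pooled representation in both layouts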