..
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

Blenderbot Small
-----------------------------------------------------------------------------------------------------------------------

Note that :class:`~transformers.BlenderbotSmallModel` and :class:`~transformers.BlenderbotSmallForConditionalGeneration`
are only used in combination with the checkpoint `facebook/blenderbot-90M
<https://huggingface.co/facebook/blenderbot-90M>`__. Larger Blenderbot checkpoints should instead be used with
:class:`~transformers.BlenderbotModel` and :class:`~transformers.BlenderbotForConditionalGeneration`.

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Blender chatbot model was proposed in `Recipes for building an open-domain chatbot
<https://arxiv.org/pdf/2004.13637.pdf>`__ by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan
Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020.

The abstract of the paper is the following:

*Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that
scaling neural models in the number of parameters and the size of the data they are trained on gives improved results,
we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of
skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to
their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent
persona. We show that large scale models can learn these skills when given appropriate training data and choice of
generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models
and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn
dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing
failure cases of our models.*

The authors' code can be found `here <https://github.com/facebookresearch/ParlAI>`__.

BlenderbotSmallConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BlenderbotSmallConfig
    :members:

BlenderbotSmallTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BlenderbotSmallTokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask, create_token_type_ids_from_sequences,
        save_vocabulary

BlenderbotSmallModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BlenderbotSmallModel
    :members: forward

BlenderbotSmallForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BlenderbotSmallForConditionalGeneration
    :members: forward

BlenderbotSmallForCausalLM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BlenderbotSmallForCausalLM
    :members: forward

TFBlenderbotSmallModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFBlenderbotSmallModel
    :members: call

TFBlenderbotSmallForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFBlenderbotSmallForConditionalGeneration
    :members: call
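As a quick illustration of the note at the top of this page, the sketch below runs a single conversation turn with the
90M checkpoint. It is not part of the original documentation: the checkpoint name is the one quoted in the note above,
and generation settings are simply left at the library defaults.

.. code-block:: python

    from transformers import BlenderbotSmallForConditionalGeneration, BlenderbotSmallTokenizer

    checkpoint = "facebook/blenderbot-90M"  # checkpoint named in the note above
    tokenizer = BlenderbotSmallTokenizer.from_pretrained(checkpoint)
    model = BlenderbotSmallForConditionalGeneration.from_pretrained(checkpoint)

    utterance = "My friends are cool but they eat too many carbs."
    inputs = tokenizer([utterance], return_tensors="pt")
    reply_ids = model.generate(**inputs)
    print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))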
AdaMix/docs/source/model_doc/blenderbot_small.rst/0
.. Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. OpenAI GPT2 ----------------------------------------------------------------------------------------------------------------------- Overview ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ OpenAI GPT-2 model was proposed in `Language Models are Unsupervised Multitask Learners <https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf>`_ by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever. It's a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. The abstract from the paper is the following: *GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data.* Tips: - GPT-2 is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. - GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the `run_generation.py` example script. - The PyTorch models can take the `past` as input, which is the previously computed key/value attention pairs. Using this `past` value prevents the model from re-computing pre-computed values in the context of text generation. See `reusing the past in generative models <../quickstart.html#using-the-past>`__ for more information on the usage of this argument. `Write With Transformer <https://transformer.huggingface.co/doc/gpt2-large>`__ is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: `distilgpt-2`. The original code can be found `here <https://openai.com/blog/better-language-models/>`__. GPT2Config ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.GPT2Config :members: GPT2Tokenizer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.GPT2Tokenizer :members: save_vocabulary GPT2TokenizerFast ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
autoclass:: transformers.GPT2TokenizerFast :members: GPT2 specific outputs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput :members: .. autoclass:: transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput :members: GPT2Model ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.GPT2Model :members: forward, parallelize, deparallelize GPT2LMHeadModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.GPT2LMHeadModel :members: forward, parallelize, deparallelize GPT2DoubleHeadsModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.GPT2DoubleHeadsModel :members: forward GPT2ForSequenceClassification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.GPT2ForSequenceClassification :members: forward TFGPT2Model ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFGPT2Model :members: call TFGPT2LMHeadModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFGPT2LMHeadModel :members: call TFGPT2DoubleHeadsModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFGPT2DoubleHeadsModel :members: call TFGPT2ForSequenceClassification ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFGPT2ForSequenceClassification :members: call TFSequenceClassifierOutputWithPast ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast :members:
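The ``past`` tip from the overview is easiest to see in code. The sketch below is illustrative only: it caches the
key/value pairs from a first forward pass over the prompt and then feeds a single new token. Depending on the library
version this argument is exposed as ``past`` or ``past_key_values``; the newer ``past_key_values`` name is assumed
here.

.. code-block:: python

    import torch

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("Hugging Face is based in", return_tensors="pt").input_ids

    with torch.no_grad():
        # First pass: run the full prompt and keep the cached key/value pairs.
        outputs = model(input_ids, use_cache=True)
        past = outputs.past_key_values
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)

        # Second pass: only the new token is fed; the cache stands in for the prompt,
        # so the previously computed keys and values are not recomputed.
        outputs = model(next_token, past_key_values=past, use_cache=True)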
AdaMix/docs/source/model_doc/gpt2.rst/0
.. Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. RAG ----------------------------------------------------------------------------------------------------------------------- Overview ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks. It is based on the paper `Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks <https://arxiv.org/abs/2005.11401>`__ by Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela. The abstract from the paper is the following: *Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, their ability to access and precisely manipulate knowledge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-specific architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre-trained models with a differentiable access mechanism to explicit nonparametric memory can overcome this issue, but have so far been only investigated for extractive downstream tasks. We explore a general-purpose fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine pre-trained parametric and non-parametric memory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, the other can use different passages per token. We fine-tune and evaluate our models on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art on three open domain QA tasks, outperforming parametric seq2seq models and task-specific retrieve-and-extract architectures. For language generation tasks, we find that RAG models generate more specific, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.* RagConfig ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.RagConfig :members: RagTokenizer ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
autoclass:: transformers.RagTokenizer :members: Rag specific outputs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.models.rag.modeling_rag.RetrievAugLMMarginOutput :members: .. autoclass:: transformers.models.rag.modeling_rag.RetrievAugLMOutput :members: RagRetriever ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.RagRetriever :members: RagModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.RagModel :members: forward RagSequenceForGeneration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.RagSequenceForGeneration :members: forward, generate RagTokenForGeneration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.RagTokenForGeneration :members: forward, generate TFRagModel ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFRagModel :members: call TFRagSequenceForGeneration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFRagSequenceForGeneration :members: call, generate TFRagTokenForGeneration ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. autoclass:: transformers.TFRagTokenForGeneration :members: call, generate
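A minimal end-to-end sketch of how the retriever, tokenizer and generator fit together. It assumes the
``facebook/rag-sequence-nq`` checkpoint (a commonly used RAG checkpoint, not mentioned elsewhere on this page) and the
dummy retrieval index so the full Wikipedia index does not have to be downloaded; newer library versions also let you
call the tokenizer directly instead of ``prepare_seq2seq_batch``.

.. code-block:: python

    from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

    checkpoint = "facebook/rag-sequence-nq"
    tokenizer = RagTokenizer.from_pretrained(checkpoint)
    # use_dummy_dataset=True keeps the example light; drop it to use the real index
    retriever = RagRetriever.from_pretrained(checkpoint, index_name="exact", use_dummy_dataset=True)
    model = RagSequenceForGeneration.from_pretrained(checkpoint, retriever=retriever)

    input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
    generated = model.generate(input_ids=input_dict["input_ids"])
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))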
AdaMix/docs/source/model_doc/rag.rst/0
.. Copyright 2020 The HuggingFace Team. All rights reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Summary of the models ======================================================================================================================= This is a summary of the models available in 🤗 Transformers. It assumes you’re familiar with the original `transformer model <https://arxiv.org/abs/1706.03762>`_. For a gentle introduction check the `annotated transformer <http://nlp.seas.harvard.edu/2018/04/03/attention.html>`_. Here we focus on the high-level differences between the models. You can check them more in detail in their respective documentation. Also check out the :doc:`pretrained model page </pretrained_models>` to see the checkpoints available for each type of model and all `the community models <https://huggingface.co/models>`_. Each one of the models in the library falls into one of the following categories: * :ref:`autoregressive-models` * :ref:`autoencoding-models` * :ref:`seq-to-seq-models` * :ref:`multimodal-models` * :ref:`retrieval-based-models` Autoregressive models are pretrained on the classic language modeling task: guess the next token having read all the previous ones. They correspond to the decoder of the original transformer model, and a mask is used on top of the full sentence so that the attention heads can only see what was before in the text, and not what’s after. Although those models can be fine-tuned and achieve great results on many tasks, the most natural application is text generation. A typical example of such models is GPT. Autoencoding models are pretrained by corrupting the input tokens in some way and trying to reconstruct the original sentence. They correspond to the encoder of the original transformer model in the sense that they get access to the full inputs without any mask. Those models usually build a bidirectional representation of the whole sentence. They can be fine-tuned and achieve great results on many tasks such as text generation, but their most natural application is sentence classification or token classification. A typical example of such models is BERT. Note that the only difference between autoregressive models and autoencoding models is in the way the model is pretrained. Therefore, the same architecture can be used for both autoregressive and autoencoding models. When a given model has been used for both types of pretraining, we have put it in the category corresponding to the article where it was first introduced. Sequence-to-sequence models use both the encoder and the decoder of the original transformer, either for translation tasks or by transforming other tasks to sequence-to-sequence problems. They can be fine-tuned to many tasks but their most natural applications are translation, summarization and question answering. The original transformer model is an example of such a model (only for translation), T5 is an example that can be fine-tuned on other tasks. Multimodal models mix text inputs with other kinds (e.g. images) and are more specific to a given task. 
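In practice, the three text-only categories above map onto different ``AutoModelFor...`` classes. The sketch below is
only meant to make the taxonomy concrete and is not part of the original page; the checkpoint names are common
examples, not the only possibilities, and the multimodal and retrieval-based categories have their own dedicated
classes described later on this page.

.. code-block:: python

    from transformers import (
        AutoModelForCausalLM,   # autoregressive models, e.g. GPT-2
        AutoModelForMaskedLM,   # autoencoding models, e.g. BERT
        AutoModelForSeq2SeqLM,  # sequence-to-sequence models, e.g. T5
    )

    autoregressive = AutoModelForCausalLM.from_pretrained("gpt2")
    autoencoding = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    seq_to_seq = AutoModelForSeq2SeqLM.from_pretrained("t5-small")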
.. _autoregressive-models: Autoregressive models ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ As mentioned before, these models rely on the decoder part of the original transformer and use an attention mask so that at each position, the model can only look at the tokens before the attention heads. Original GPT ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=openai-gpt"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-openai--gpt-blueviolet"> </a> <a href="model_doc/gpt.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-openai--gpt-blueviolet"> </a> `Improving Language Understanding by Generative Pre-Training <https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf>`_, Alec Radford et al. The first autoregressive model based on the transformer architecture, pretrained on the Book Corpus dataset. The library provides versions of the model for language modeling and multitask language modeling/multiple choice classification. GPT-2 ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=gpt2"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-gpt2-blueviolet"> </a> <a href="model_doc/gpt2.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-gpt2-blueviolet"> </a> `Language Models are Unsupervised Multitask Learners <https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf>`_, Alec Radford et al. A bigger and better version of GPT, pretrained on WebText (web pages from outgoing links in Reddit with 3 karmas or more). The library provides versions of the model for language modeling and multitask language modeling/multiple choice classification. CTRL ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=ctrl"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-ctrl-blueviolet"> </a> <a href="model_doc/ctrl.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-ctrl-blueviolet"> </a> `CTRL: A Conditional Transformer Language Model for Controllable Generation <https://arxiv.org/abs/1909.05858>`_, Nitish Shirish Keskar et al. Same as the GPT model but adds the idea of control codes. Text is generated from a prompt (can be empty) and one (or several) of those control codes which are then used to influence the text generation: generate with the style of wikipedia article, a book or a movie review. The library provides a version of the model for language modeling only. Transformer-XL ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=transfo-xl"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-transfo--xl-blueviolet"> </a> <a href="model_doc/transformerxl.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-transfo--xl-blueviolet"> </a> `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context <https://arxiv.org/abs/1901.02860>`_, Zihang Dai et al. 
Same as a regular GPT model, but introduces a recurrence mechanism for two consecutive segments (similar to a regular RNNs with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model. Basically, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments. This changes the positional embeddings to positional relative embeddings (as the regular positional embeddings would give the same results in the current input and the current hidden state at a given position) and needs to make some adjustments in the way attention scores are computed. The library provides a version of the model for language modeling only. .. _reformer: Reformer ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=reformer"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-reformer-blueviolet"> </a> <a href="model_doc/reformer.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-reformer-blueviolet"> </a> `Reformer: The Efficient Transformer <https://arxiv.org/abs/2001.04451>`_, Nikita Kitaev et al . An autoregressive transformer model with lots of tricks to reduce memory footprint and compute time. Those tricks include: * Use :ref:`Axial position encoding <axial-pos-encoding>` (see below for more details). It’s a mechanism to avoid having a huge positional encoding matrix (when the sequence length is very big) by factorizing it into smaller matrices. * Replace traditional attention by :ref:`LSH (local-sensitive hashing) attention <lsh-attention>` (see below for more details). It's a technique to avoid computing the full product query-key in the attention layers. * Avoid storing the intermediate results of each layer by using reversible transformer layers to obtain them during the backward pass (subtracting the residuals from the input of the next layer gives them back) or recomputing them for results inside a given layer (less efficient than storing them but saves memory). * Compute the feedforward operations by chunks and not on the whole batch. With those tricks, the model can be fed much larger sentences than traditional transformer autoregressive models. **Note:** This model could be very well be used in an autoencoding setting, there is no checkpoint for such a pretraining yet, though. The library provides a version of the model for language modeling only. XLNet ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=xlnet"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlnet-blueviolet"> </a> <a href="model_doc/xlnet.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-xlnet-blueviolet"> </a> `XLNet: Generalized Autoregressive Pretraining for Language Understanding <https://arxiv.org/abs/1906.08237>`_, Zhilin Yang et al. XLNet is not a traditional autoregressive model but uses a training strategy that builds on that. 
It permutes the tokens in the sentence, then allows the model to use the last n tokens to predict the token n+1. Since this is all done with a mask, the sentence is actually fed in the model in the right order, but instead of masking the first n tokens for n+1, XLNet uses a mask that hides the previous tokens in some given permutation of 1,...,sequence length. XLNet also uses the same recurrence mechanism as Transformer-XL to build long-term dependencies. The library provides a version of the model for language modeling, token classification, sentence classification, multiple choice classification and question answering. .. _autoencoding-models: Autoencoding models ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ As mentioned before, these models rely on the encoder part of the original transformer and use no mask so the model can look at all the tokens in the attention heads. For pretraining, targets are the original sentences and inputs are their corrupted versions. BERT ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=bert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-bert-blueviolet"> </a> <a href="model_doc/bert.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-bert-blueviolet"> </a> `BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding <https://arxiv.org/abs/1810.04805>`_, Jacob Devlin et al. Corrupts the inputs by using random masking, more precisely, during pretraining, a given percentage of tokens (usually 15%) is masked by: * a special mask token with probability 0.8 * a random token different from the one masked with probability 0.1 * the same token with probability 0.1 The model must predict the original sentence, but has a second objective: inputs are two sentences A and B (with a separation token in between). With probability 50%, the sentences are consecutive in the corpus, in the remaining 50% they are not related. The model has to predict if the sentences are consecutive or not. The library provides a version of the model for language modeling (traditional or masked), next sentence prediction, token classification, sentence classification, multiple choice classification and question answering. ALBERT ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=albert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-albert-blueviolet"> </a> <a href="model_doc/albert.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-albert-blueviolet"> </a> `ALBERT: A Lite BERT for Self-supervised Learning of Language Representations <https://arxiv.org/abs/1909.11942>`_, Zhenzhong Lan et al. Same as BERT but with a few tweaks: * Embedding size E is different from hidden size H justified because the embeddings are context independent (one embedding vector represents one token), whereas hidden states are context dependent (one hidden state represents a sequence of tokens) so it's more logical to have H >> E. Also, the embedding matrix is large since it's V x E (V being the vocab size). If E < H, it has less parameters. * Layers are split in groups that share parameters (to save memory). 
* Next sentence prediction is replaced by a sentence ordering prediction: in the inputs, we have two sentences A and B
  (that are consecutive) and we either feed A followed by B or B followed by A. The model must predict if they have
  been swapped or not.

The library provides a version of the model for masked language modeling, token classification, sentence
classification, multiple choice classification and question answering.

RoBERTa
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=roberta">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-roberta-blueviolet">
    </a>
    <a href="model_doc/roberta.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-roberta-blueviolet">
    </a>

`RoBERTa: A Robustly Optimized BERT Pretraining Approach <https://arxiv.org/abs/1907.11692>`_, Yinhan Liu et al.

Same as BERT with better pretraining tricks:

* dynamic masking: tokens are masked differently at each epoch, whereas BERT does it once and for all
* no NSP (next sentence prediction) loss and instead of putting just two sentences together, put a chunk of contiguous
  texts together to reach 512 tokens (so the sentences are in an order that may span several documents)
* train with larger batches
* use BPE with bytes as a subunit and not characters (because of unicode characters)

The library provides a version of the model for masked language modeling, token classification, sentence
classification, multiple choice classification and question answering.

DistilBERT
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=distilbert">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-distilbert-blueviolet">
    </a>
    <a href="model_doc/distilbert.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-distilbert-blueviolet">
    </a>

`DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter <https://arxiv.org/abs/1910.01108>`_,
Victor Sanh et al.

Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it's been trained to predict
the same probabilities as the larger model. The actual objective is a combination of:

* finding the same probabilities as the teacher model
* predicting the masked tokens correctly (but no next-sentence objective)
* a cosine similarity between the hidden states of the student and the teacher model

The library provides a version of the model for masked language modeling, token classification, sentence
classification and question answering.

ConvBERT
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=convbert">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-convbert-blueviolet">
    </a>
    <a href="model_doc/convbert.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-convbert-blueviolet">
    </a>

`ConvBERT: Improving BERT with Span-based Dynamic Convolution <https://arxiv.org/abs/2008.02496>`_, Zihang Jiang,
Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.

Pre-trained language models like BERT and its variants have recently achieved impressive performance in various
natural language understanding tasks.
However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost. The library provides a version of the model for masked language modeling, token classification, sentence classification and question answering. XLM ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=xlm"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm-blueviolet"> </a> <a href="model_doc/xlm.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-xlm-blueviolet"> </a> `Cross-lingual Language Model Pretraining <https://arxiv.org/abs/1901.07291>`_, Guillaume Lample and Alexis Conneau A transformer model trained on several languages. There are three different type of training for this model and the library provides checkpoints for all of them: * Causal language modeling (CLM) which is the traditional autoregressive training (so this model could be in the previous section as well). One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages. * Masked language modeling (MLM) which is like RoBERTa. One of the languages is selected for each training sample, and the model input is a sentence of 256 tokens, that may span over several documents in one of those languages, with dynamic masking of the tokens. * A combination of MLM and translation language modeling (TLM). This consists of concatenating a sentence in two different languages, with random masking. To predict one of the masked tokens, the model can use both, the surrounding context in language 1 and the context given by language 2. Checkpoints refer to which method was used for pretraining by having `clm`, `mlm` or `mlm-tlm` in their names. On top of positional embeddings, the model has language embeddings. When training using MLM/CLM, this gives the model an indication of the language used, and when training using MLM+TLM, an indication of the language used for each part. The library provides a version of the model for language modeling, token classification, sentence classification and question answering. XLM-RoBERTa ----------------------------------------------------------------------------------------------------------------------- .. 
raw:: html <a href="https://huggingface.co/models?filter=xlm-roberta"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-xlm--roberta-blueviolet"> </a> <a href="model_doc/xlmroberta.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-xlm--roberta-blueviolet"> </a> `Unsupervised Cross-lingual Representation Learning at Scale <https://arxiv.org/abs/1911.02116>`_, Alexis Conneau et al. Uses RoBERTa tricks on the XLM approach, but does not use the translation language modeling objective. It only uses masked language modeling on sentences coming from one language. However, the model is trained on many more languages (100) and doesn't use the language embeddings, so it's capable of detecting the input language by itself. The library provides a version of the model for masked language modeling, token classification, sentence classification, multiple choice classification and question answering. FlauBERT ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=flaubert"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-flaubert-blueviolet"> </a> <a href="model_doc/flaubert.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-flaubert-blueviolet"> </a> `FlauBERT: Unsupervised Language Model Pre-training for French <https://arxiv.org/abs/1912.05372>`_, Hang Le et al. Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective). The library provides a version of the model for language modeling and sentence classification. ELECTRA ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=electra"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-electra-blueviolet"> </a> <a href="model_doc/electra.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-electra-blueviolet"> </a> `ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators <https://arxiv.org/abs/2003.10555>`_, Kevin Clark et al. ELECTRA is a transformer model pretrained with the use of another (small) masked language model. The inputs are corrupted by that language model, which takes an input text that is randomly masked and outputs a text in which ELECTRA has to predict which token is an original and which one has been replaced. Like for GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model like in a traditional GAN setting) then the ELECTRA model is trained for a few steps. The library provides a version of the model for masked language modeling, token classification and sentence classification. Funnel Transformer ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=funnel"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-funnel-blueviolet"> </a> <a href="model_doc/funnel.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-funnel-blueviolet"> </a> `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing <https://arxiv.org/abs/2006.03236>`_, Zihang Dai et al. 
Funnel Transformer is a transformer model using pooling, a bit like a ResNet model: layers are grouped in blocks, and
at the beginning of each block (except the first one), the hidden states are pooled along the sequence dimension. This
way, their length is divided by 2, which speeds up the computation of the next hidden states. All pretrained models
have three blocks, which means the final hidden state has a sequence length that is one fourth of the original
sequence length.

For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token
classification, we need a hidden state with the same sequence length as the original input. In those cases, the final
hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are
two versions of each checkpoint. The version suffixed with "-base" contains only the three blocks, while the version
without that suffix contains the three blocks and the upsampling head with its additional layers.

The pretrained models available use the same pretraining objective as ELECTRA.

The library provides a version of the model for masked language modeling, token classification, sentence
classification, multiple choice classification and question answering.

.. _longformer:

Longformer
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=longformer">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-longformer-blueviolet">
    </a>
    <a href="model_doc/longformer.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-longformer-blueviolet">
    </a>

`Longformer: The Long-Document Transformer <https://arxiv.org/abs/2004.05150>`_, Iz Beltagy et al.

A transformer model replacing the attention matrices by sparse matrices to go faster. Often, the local context (e.g.,
what are the two tokens left and right?) is enough to take action for a given token. Some preselected input tokens are
still given global attention, but the attention matrix has far fewer parameters, resulting in a speed-up. See the
:ref:`local attention section <local-attention>` for more information.

It is otherwise pretrained the same way as RoBERTa.

**Note:** This model could very well be used in an autoencoding setting; there is no checkpoint for such a pretraining
yet, though.

The library provides a version of the model for masked language modeling, token classification, sentence
classification, multiple choice classification and question answering.

.. _seq-to-seq-models:

Sequence-to-sequence models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

As mentioned before, these models keep both the encoder and the decoder of the original transformer.

BART
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=bart">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-bart-blueviolet">
    </a>
    <a href="model_doc/bart.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-bart-blueviolet">
    </a>

`BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension
<https://arxiv.org/abs/1910.13461>`_, Mike Lewis et al.

Sequence-to-sequence model with an encoder and a decoder.
The encoder is fed a corrupted version of the tokens, the decoder is fed the original tokens (but has a mask to hide
the future words, like a regular transformer decoder). A composition of the following transformations is applied on
the pretraining tasks for the encoder:

* mask random tokens (like in BERT)
* delete random tokens
* mask a span of k tokens with a single mask token (a span of 0 tokens is an insertion of a mask token)
* permute sentences
* rotate the document to make it start at a specific token

The library provides a version of this model for conditional generation and sequence classification.

Pegasus
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=pegasus">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-pegasus-blueviolet">
    </a>
    <a href="model_doc/pegasus.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-pegasus-blueviolet">
    </a>

`PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization
<https://arxiv.org/pdf/1912.08777.pdf>`_, Jingqing Zhang, Yao Zhao, Mohammad Saleh and Peter J. Liu on Dec 18, 2019.

Sequence-to-sequence model with the same encoder-decoder model architecture as BART. Pegasus is pre-trained jointly on
two self-supervised objective functions: Masked Language Modeling (MLM) and a novel summarization specific pretraining
objective, called Gap Sentence Generation (GSG).

* MLM: encoder input tokens are randomly replaced by a mask token and have to be predicted by the encoder (like in
  BERT)
* GSG: whole encoder input sentences are replaced by a second mask token and fed to the decoder, which has a causal
  mask to hide the future words, like a regular auto-regressive transformer decoder.

In contrast to BART, Pegasus' pretraining task is intentionally similar to summarization: important sentences are
masked and are generated together as one output sequence from the remaining sentences, similar to an extractive
summary.

The library provides a version of this model for conditional generation, which should be used for summarization.

MarianMT
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=marian">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-marian-blueviolet">
    </a>
    <a href="model_doc/marian.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-marian-blueviolet">
    </a>

`Marian: Fast Neural Machine Translation in C++ <https://arxiv.org/abs/1804.00344>`_, Marcin Junczys-Dowmunt et al.

A framework for translation models, using the same models as BART. The library provides a version of this model for
conditional generation.

T5
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=t5">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-t5-blueviolet">
    </a>
    <a href="model_doc/t5.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-t5-blueviolet">
    </a>

`Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer <https://arxiv.org/abs/1910.10683>`_,
Colin Raffel et al.

Uses the traditional transformer model (with a slight change in the positional embeddings, which are learned at each
layer).
To be able to operate on all NLP tasks, it transforms them into text-to-text problems by using specific prefixes:
“summarize: ”, “question: ”, “translate English to German: ” and so forth.

The pretraining includes both supervised and self-supervised training. Supervised training is conducted on downstream
tasks provided by the GLUE and SuperGLUE benchmarks (converting them into text-to-text tasks as explained above).
Self-supervised training uses corrupted tokens, by randomly removing 15% of the tokens and replacing them with
individual sentinel tokens (if several consecutive tokens are marked for removal, the whole group is replaced with a
single sentinel token). The input of the encoder is the corrupted sentence, the input of the decoder is the original
sentence and the target is then the dropped out tokens delimited by their sentinel tokens.

For instance, if we have the sentence “My dog is very cute .”, and we decide to remove the tokens: "dog", "is" and
"cute", the encoder input becomes “My <x> very <y> .” and the target input becomes “<x> dog is <y> cute .<z>”

The library provides a version of this model for conditional generation.

MT5
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=mt5">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-mt5-blueviolet">
    </a>
    <a href="model_doc/mt5.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-mt5-blueviolet">
    </a>

`mT5: A massively multilingual pre-trained text-to-text transformer <https://arxiv.org/abs/2010.11934>`_, Linting Xue
et al.

The model architecture is the same as T5's. mT5's pretraining objective includes T5's self-supervised training, but
not T5's supervised training. mT5 is trained on 101 languages.

The library provides a version of this model for conditional generation.

MBart
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=mbart">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-mbart-blueviolet">
    </a>
    <a href="model_doc/mbart.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-mbart-blueviolet">
    </a>

`Multilingual Denoising Pre-training for Neural Machine Translation <https://arxiv.org/abs/2001.08210>`_ by Yinhan Liu,
Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.

The model architecture and pretraining objective are the same as BART's, but MBart is trained on 25 languages and is
intended for supervised and unsupervised machine translation. MBart is one of the first methods for pretraining a
complete sequence-to-sequence model by denoising full texts in multiple languages. The library provides a version of
this model for conditional generation.

The `mbart-large-en-ro checkpoint <https://huggingface.co/facebook/mbart-large-en-ro>`_ can be used for English to
Romanian translation.

The `mbart-large-cc25 <https://huggingface.co/facebook/mbart-large-cc25>`_ checkpoint can be finetuned for other
translation and summarization tasks, using code in ``examples/seq2seq/``, but is not very useful without finetuning.

ProphetNet
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=prophetnet">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-prophetnet-blueviolet">
    </a>
    <a href="model_doc/prophetnet.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-prophetnet-blueviolet">
    </a>

`ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training <https://arxiv.org/abs/2001.04063>`__ by
Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou.

ProphetNet introduces a novel *sequence-to-sequence* pretraining objective, called *future n-gram prediction*. In
future n-gram prediction, the model predicts the next n tokens simultaneously based on previous context tokens at each
time step instead of just the single next token. The future n-gram prediction explicitly encourages the model to plan
for the future tokens and prevents overfitting on strong local correlations. The model architecture is based on the
original Transformer, but replaces the "standard" self-attention mechanism in the decoder by a main self-attention
mechanism and a self and n-stream (predict) self-attention mechanism.

The library provides a pre-trained version of this model for conditional generation and a fine-tuned version for
summarization.

XLM-ProphetNet
-----------------------------------------------------------------------------------------------------------------------

.. raw:: html

    <a href="https://huggingface.co/models?filter=xprophetnet">
        <img alt="Models" src="https://img.shields.io/badge/All_model_pages-xprophetnet-blueviolet">
    </a>
    <a href="model_doc/xlmprophetnet.html">
        <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-xprophetnet-blueviolet">
    </a>

`ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training <https://arxiv.org/abs/2001.04063>`__ by
Yu Yan, Weizhen Qi, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, Ming Zhou.

XLM-ProphetNet's model architecture and pretraining objective are the same as ProphetNet's, but XLM-ProphetNet was
pre-trained on the cross-lingual dataset `XGLUE <https://arxiv.org/abs/2004.01401>`__.

The library provides a pre-trained version of this model for multi-lingual conditional generation and fine-tuned
versions for headline generation and question generation, respectively.

.. _multimodal-models:

Multimodal models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

There is one multimodal model in the library which has not been pretrained in the self-supervised fashion like the
others.

MMBT
-----------------------------------------------------------------------------------------------------------------------

`Supervised Multimodal Bitransformers for Classifying Images and Text <https://arxiv.org/abs/1909.02950>`_, Douwe Kiela
et al.

A transformers model used in multimodal settings, combining a text and an image to make predictions. The transformer
model takes as inputs the embeddings of the tokenized text and the final activations of a ResNet pretrained on images
(after the pooling layer) that goes through a linear layer (to go from the number of features at the end of the ResNet
to the hidden state dimension of the transformer).

The different inputs are concatenated, and on top of the positional embeddings, a segment embedding is added to let the
model know which part of the input vector corresponds to the text and which to the image.

The pretrained model only works for classification.

..
More information in this :doc:`model documentation </model_doc/mmbt.html>`. TODO: write this page .. _retrieval-based-models: Retrieval-based models ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Some models use documents retrieval during (pre)training and inference for open-domain question answering, for example. DPR ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=dpr"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-dpr-blueviolet"> </a> <a href="model_doc/dpr.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-dpr-blueviolet"> </a> `Dense Passage Retrieval for Open-Domain Question Answering <https://arxiv.org/abs/2004.04906>`_, Vladimir Karpukhin et al. Dense Passage Retrieval (DPR) - is a set of tools and models for state-of-the-art open-domain question-answering research. DPR consists in three models: * Question encoder: encode questions as vectors * Context encoder: encode contexts as vectors * Reader: extract the answer of the questions inside retrieved contexts, along with a relevance score (high if the inferred span actually answers the question). DPR's pipeline (not implemented yet) uses a retrieval step to find the top k contexts given a certain question, and then it calls the reader with the question and the retrieved documents to get the answer. RAG ----------------------------------------------------------------------------------------------------------------------- .. raw:: html <a href="https://huggingface.co/models?filter=rag"> <img alt="Models" src="https://img.shields.io/badge/All_model_pages-rag-blueviolet"> </a> <a href="model_doc/rag.html"> <img alt="Doc" src="https://img.shields.io/badge/Model_documentation-rag-blueviolet"> </a> `Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks <https://arxiv.org/abs/2005.11401>`_, Patrick Lewis, Ethan Perez, Aleksandara Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela Retrieval-augmented generation ("RAG") models combine the powers of pretrained dense retrieval (DPR) and Seq2Seq models. RAG models retrieve docs, pass them to a seq2seq model, then marginalize to generate outputs. The retriever and seq2seq modules are initialized from pretrained models, and fine-tuned jointly, allowing both retrieval and generation to adapt to downstream tasks. The two models RAG-Token and RAG-Sequence are available for generation. More technical aspects ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Full vs sparse attention ----------------------------------------------------------------------------------------------------------------------- Most transformer models use full attention in the sense that the attention matrix is square. It can be a big computational bottleneck when you have long texts. Longformer and reformer are models that try to be more efficient and use a sparse version of the attention matrix to speed up training. .. _lsh-attention: **LSH attention** :ref:`Reformer <reformer>` uses LSH attention. In the softmax(QK^t), only the biggest elements (in the softmax dimension) of the matrix QK^t are going to give useful contributions. 
So for each query q in Q, we can consider only the keys k in K
that are close to q. A hash function is used to determine whether q and k are close. The attention mask is modified to
mask the current token (except at the first position), because it would otherwise pair a query with a key equal to it
(and thus trivially similar to it). Since the hash can be a bit random, several hash functions are used in practice
(determined by an ``n_rounds`` parameter) and then averaged together.

.. _local-attention:

**Local attention**

:ref:`Longformer <longformer>` uses local attention: often, the local context (e.g., what are the two tokens to the
left and right?) is enough to take action for a given token. Also, by stacking attention layers that have a small
window, the last layer will have a receptive field of more than just the tokens in the window, allowing the model to
build a representation of the whole sentence.

Some preselected input tokens are also given global attention: for those few tokens, the attention matrix can access
all tokens, and this process is symmetric: all other tokens have access to those specific tokens (on top of the ones
in their local window). This is shown in Figure 2d of the paper; see below for a sample attention mask:

.. image:: imgs/local_attention_mask.png
    :scale: 50 %
    :align: center

Using attention matrices with fewer parameters then allows the model to accept inputs with a longer sequence length.

Other tricks
-----------------------------------------------------------------------------------------------------------------------

.. _axial-pos-encoding:

**Axial positional encodings**

:ref:`Reformer <reformer>` uses axial positional encodings: in traditional transformer models, the positional encoding
E is a matrix of size :math:`l` by :math:`d`, :math:`l` being the sequence length and :math:`d` the dimension of the
hidden state. If you have very long texts, this matrix can be huge and take way too much space on the GPU. To
alleviate that, axial positional encodings consist of factorizing that big matrix E into two smaller matrices E1 and
E2, with dimensions :math:`l_{1} \times d_{1}` and :math:`l_{2} \times d_{2}`, such that :math:`l_{1} \times l_{2} =
l` and :math:`d_{1} + d_{2} = d` (with the product for the lengths, this ends up being way smaller). The embedding for
time step :math:`j` in E is obtained by concatenating the embeddings for timestep :math:`j \% l_{1}` in E1 and
:math:`j // l_{1}` in E2.
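
To make the factorization concrete, here is a small illustrative sketch (toy sizes, not Reformer's actual
implementation):

.. code-block:: python

    import torch

    # Toy sizes: sequence length l = l1 * l2, hidden size d = d1 + d2.
    l1, l2, d1, d2 = 128, 64, 32, 96
    E1 = torch.randn(l1, d1)  # first factor of the positional encoding
    E2 = torch.randn(l2, d2)  # second factor of the positional encoding


    def axial_position_embedding(j: int) -> torch.Tensor:
        # The embedding for position j concatenates E1[j % l1] and E2[j // l1].
        return torch.cat([E1[j % l1], E2[j // l1]])


    print(axial_position_embedding(5000).shape)  # torch.Size([128]), i.e. d1 + d2
    # Storage: l1*d1 + l2*d2 = 10,240 values instead of l*d = 1,048,576 for the full matrix.
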
AdaMix/docs/source/model_summary.rst
import argparse import logging import os from pathlib import Path from typing import Any, Dict import pytorch_lightning as pl from pytorch_lightning.utilities import rank_zero_info from transformers import ( AdamW, AutoConfig, AutoModel, AutoModelForPreTraining, AutoModelForQuestionAnswering, AutoModelForSeq2SeqLM, AutoModelForSequenceClassification, AutoModelForTokenClassification, AutoModelWithLMHead, AutoTokenizer, PretrainedConfig, PreTrainedTokenizer, ) from transformers.optimization import ( Adafactor, get_cosine_schedule_with_warmup, get_cosine_with_hard_restarts_schedule_with_warmup, get_linear_schedule_with_warmup, get_polynomial_decay_schedule_with_warmup, ) from transformers.utils.versions import require_version_examples logger = logging.getLogger(__name__) require_version_examples("pytorch_lightning>=1.0.4") MODEL_MODES = { "base": AutoModel, "sequence-classification": AutoModelForSequenceClassification, "question-answering": AutoModelForQuestionAnswering, "pretraining": AutoModelForPreTraining, "token-classification": AutoModelForTokenClassification, "language-modeling": AutoModelWithLMHead, "summarization": AutoModelForSeq2SeqLM, "translation": AutoModelForSeq2SeqLM, } # update this and the import above to support new schedulers from transformers.optimization arg_to_scheduler = { "linear": get_linear_schedule_with_warmup, "cosine": get_cosine_schedule_with_warmup, "cosine_w_restarts": get_cosine_with_hard_restarts_schedule_with_warmup, "polynomial": get_polynomial_decay_schedule_with_warmup, # '': get_constant_schedule, # not supported for now # '': get_constant_schedule_with_warmup, # not supported for now } arg_to_scheduler_choices = sorted(arg_to_scheduler.keys()) arg_to_scheduler_metavar = "{" + ", ".join(arg_to_scheduler_choices) + "}" class BaseTransformer(pl.LightningModule): def __init__( self, hparams: argparse.Namespace, num_labels=None, mode="base", config=None, tokenizer=None, model=None, **config_kwargs ): """Initialize a model, tokenizer and config.""" super().__init__() # TODO: move to self.save_hyperparameters() # self.save_hyperparameters() # can also expand arguments into trainer signature for easier reading self.save_hyperparameters(hparams) self.step_count = 0 self.output_dir = Path(self.hparams.output_dir) cache_dir = self.hparams.cache_dir if self.hparams.cache_dir else None if config is None: self.config = AutoConfig.from_pretrained( self.hparams.config_name if self.hparams.config_name else self.hparams.model_name_or_path, **({"num_labels": num_labels} if num_labels is not None else {}), cache_dir=cache_dir, **config_kwargs, ) else: self.config: PretrainedConfig = config extra_model_params = ("encoder_layerdrop", "decoder_layerdrop", "dropout", "attention_dropout") for p in extra_model_params: if getattr(self.hparams, p, None): assert hasattr(self.config, p), f"model config doesn't have a `{p}` attribute" setattr(self.config, p, getattr(self.hparams, p)) if tokenizer is None: self.tokenizer = AutoTokenizer.from_pretrained( self.hparams.tokenizer_name if self.hparams.tokenizer_name else self.hparams.model_name_or_path, cache_dir=cache_dir, ) else: self.tokenizer: PreTrainedTokenizer = tokenizer self.model_type = MODEL_MODES[mode] if model is None: self.model = self.model_type.from_pretrained( self.hparams.model_name_or_path, from_tf=bool(".ckpt" in self.hparams.model_name_or_path), config=self.config, cache_dir=cache_dir, ) else: self.model = model def load_hf_checkpoint(self, *args, **kwargs): self.model = self.model_type.from_pretrained(*args, 
**kwargs) def get_lr_scheduler(self): get_schedule_func = arg_to_scheduler[self.hparams.lr_scheduler] scheduler = get_schedule_func( self.opt, num_warmup_steps=self.hparams.warmup_steps, num_training_steps=self.total_steps() ) scheduler = {"scheduler": scheduler, "interval": "step", "frequency": 1} return scheduler def configure_optimizers(self): """Prepare optimizer and schedule (linear warmup and decay)""" model = self.model no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": self.hparams.weight_decay, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] if self.hparams.adafactor: optimizer = Adafactor( optimizer_grouped_parameters, lr=self.hparams.learning_rate, scale_parameter=False, relative_step=False ) else: optimizer = AdamW( optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon ) self.opt = optimizer scheduler = self.get_lr_scheduler() return [optimizer], [scheduler] def test_step(self, batch, batch_nb): return self.validation_step(batch, batch_nb) def test_epoch_end(self, outputs): return self.validation_end(outputs) def total_steps(self) -> int: """The number of total training steps that will be run. Used for lr scheduler purposes.""" num_devices = max(1, self.hparams.gpus) # TODO: consider num_tpu_cores effective_batch_size = self.hparams.train_batch_size * self.hparams.accumulate_grad_batches * num_devices return (self.dataset_size / effective_batch_size) * self.hparams.max_epochs def setup(self, mode): if mode == "test": self.dataset_size = len(self.test_dataloader().dataset) else: self.train_loader = self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True) self.dataset_size = len(self.train_dataloader().dataset) def get_dataloader(self, type_path: str, batch_size: int, shuffle: bool = False): raise NotImplementedError("You must implement this for your task") def train_dataloader(self): return self.train_loader def val_dataloader(self): return self.get_dataloader("dev", self.hparams.eval_batch_size, shuffle=False) def test_dataloader(self): return self.get_dataloader("test", self.hparams.eval_batch_size, shuffle=False) def _feature_file(self, mode): return os.path.join( self.hparams.data_dir, "cached_{}_{}_{}".format( mode, list(filter(None, self.hparams.model_name_or_path.split("/"))).pop(), str(self.hparams.max_seq_length), ), ) @pl.utilities.rank_zero_only def on_save_checkpoint(self, checkpoint: Dict[str, Any]) -> None: save_path = self.output_dir.joinpath("best_tfmr") self.model.config.save_step = self.step_count self.model.save_pretrained(save_path) self.tokenizer.save_pretrained(save_path) @staticmethod def add_model_specific_args(parser, root_dir): parser.add_argument( "--model_name_or_path", default=None, type=str, required=True, help="Path to pretrained model or model identifier from huggingface.co/models", ) parser.add_argument( "--config_name", default="", type=str, help="Pretrained config name or path if not the same as model_name" ) parser.add_argument( "--tokenizer_name", default=None, type=str, help="Pretrained tokenizer name or path if not the same as model_name", ) parser.add_argument( "--cache_dir", default="", type=str, help="Where do you want to store the pre-trained models downloaded from huggingface.co", ) parser.add_argument( "--encoder_layerdrop", type=float, help="Encoder layer dropout probability 
(Optional). Goes into model.config", ) parser.add_argument( "--decoder_layerdrop", type=float, help="Decoder layer dropout probability (Optional). Goes into model.config", ) parser.add_argument( "--dropout", type=float, help="Dropout probability (Optional). Goes into model.config", ) parser.add_argument( "--attention_dropout", type=float, help="Attention dropout probability (Optional). Goes into model.config", ) parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.") parser.add_argument( "--lr_scheduler", default="linear", choices=arg_to_scheduler_choices, metavar=arg_to_scheduler_metavar, type=str, help="Learning rate scheduler", ) parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight decay if we apply some.") parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.") parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.") parser.add_argument("--num_workers", default=4, type=int, help="kwarg passed to DataLoader") parser.add_argument("--num_train_epochs", dest="max_epochs", default=3, type=int) parser.add_argument("--train_batch_size", default=32, type=int) parser.add_argument("--eval_batch_size", default=32, type=int) parser.add_argument("--adafactor", action="store_true") class LoggingCallback(pl.Callback): def on_batch_end(self, trainer, pl_module): lr_scheduler = trainer.lr_schedulers[0]["scheduler"] lrs = {f"lr_group_{i}": lr for i, lr in enumerate(lr_scheduler.get_lr())} pl_module.logger.log_metrics(lrs) def on_validation_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule): rank_zero_info("***** Validation results *****") metrics = trainer.callback_metrics # Log results for key in sorted(metrics): if key not in ["log", "progress_bar"]: rank_zero_info("{} = {}\n".format(key, str(metrics[key]))) def on_test_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule): rank_zero_info("***** Test results *****") metrics = trainer.callback_metrics # Log and save results to file output_test_results_file = os.path.join(pl_module.hparams.output_dir, "test_results.txt") with open(output_test_results_file, "w") as writer: for key in sorted(metrics): if key not in ["log", "progress_bar"]: rank_zero_info("{} = {}\n".format(key, str(metrics[key]))) writer.write("{} = {}\n".format(key, str(metrics[key]))) def add_generic_args(parser, root_dir) -> None: # To allow all pl args uncomment the following line # parser = pl.Trainer.add_argparse_args(parser) parser.add_argument( "--output_dir", default=None, type=str, required=True, help="The output directory where the model predictions and checkpoints will be written.", ) parser.add_argument( "--fp16", action="store_true", help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit", ) parser.add_argument( "--fp16_opt_level", type=str, default="O2", help="For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']." 
"See details at https://nvidia.github.io/apex/amp.html", ) parser.add_argument("--n_tpu_cores", dest="tpu_cores", type=int) parser.add_argument("--max_grad_norm", dest="gradient_clip_val", default=1.0, type=float, help="Max gradient norm") parser.add_argument("--do_train", action="store_true", help="Whether to run training.") parser.add_argument("--do_predict", action="store_true", help="Whether to run predictions on the test set.") parser.add_argument( "--gradient_accumulation_steps", dest="accumulate_grad_batches", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument("--seed", type=int, default=42, help="random seed for initialization") parser.add_argument( "--data_dir", default=None, type=str, required=True, help="The input data dir. Should contain the training files for the CoNLL-2003 NER task.", ) def generic_train( model: BaseTransformer, args: argparse.Namespace, early_stopping_callback=None, logger=True, # can pass WandbLogger() here extra_callbacks=[], checkpoint_callback=None, logging_callback=None, **extra_train_kwargs ): pl.seed_everything(args.seed) # init model odir = Path(model.hparams.output_dir) odir.mkdir(exist_ok=True) # add custom checkpoints if checkpoint_callback is None: checkpoint_callback = pl.callbacks.ModelCheckpoint( filepath=args.output_dir, prefix="checkpoint", monitor="val_loss", mode="min", save_top_k=1 ) if early_stopping_callback: extra_callbacks.append(early_stopping_callback) if logging_callback is None: logging_callback = LoggingCallback() train_params = {} # TODO: remove with PyTorch 1.6 since pl uses native amp if args.fp16: train_params["precision"] = 16 train_params["amp_level"] = args.fp16_opt_level if args.gpus > 1: train_params["distributed_backend"] = "ddp" train_params["accumulate_grad_batches"] = args.accumulate_grad_batches train_params["accelerator"] = extra_train_kwargs.get("accelerator", None) train_params["profiler"] = extra_train_kwargs.get("profiler", None) trainer = pl.Trainer.from_argparse_args( args, weights_summary=None, callbacks=[logging_callback] + extra_callbacks, logger=logger, checkpoint_callback=checkpoint_callback, **train_params, ) if args.do_train: trainer.fit(model) return trainer
AdaMix/examples/legacy/pytorch-lightning/lightning_base.py
import os
import sys


sys.path.insert(1, os.path.dirname(os.path.realpath(__file__)))
AdaMix/examples/legacy/seq2seq/__init__.py
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import fire

from utils import calculate_rouge, save_json


def calculate_rouge_path(pred_path, tgt_path, save_path=None, **kwargs):
    """Kwargs will be passed to calculate_rouge"""
    pred_lns = [x.strip() for x in open(pred_path).readlines()]
    tgt_lns = [x.strip() for x in open(tgt_path).readlines()][: len(pred_lns)]
    metrics = calculate_rouge(pred_lns, tgt_lns, **kwargs)
    if save_path is not None:
        save_json(metrics, save_path, indent=None)
    return metrics  # these print nicely


if __name__ == "__main__":
    fire.Fire(calculate_rouge_path)
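
# ---------------------------------------------------------------------------
# Usage sketch (illustration only, not part of the original script). `fire` exposes
# `calculate_rouge_path` on the command line, e.g.:
#     python rouge_cli.py preds.txt refs.txt --save_path rouge.json
# The function can also be called directly from Python, as below with two tiny
# throw-away files; the paths are placeholders.
# ---------------------------------------------------------------------------
def _example_rouge_usage():
    with open("/tmp/preds.txt", "w") as f:
        f.write("the cat sat on the mat\n")
    with open("/tmp/refs.txt", "w") as f:
        f.write("a cat was sitting on the mat\n")
    metrics = calculate_rouge_path("/tmp/preds.txt", "/tmp/refs.txt", save_path="/tmp/rouge.json")
    print(metrics)  # a dict of ROUGE scores computed by utils.calculate_rouge
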
AdaMix/examples/legacy/seq2seq/rouge_cli.py
import logging import os from typing import List, TextIO, Union from conllu import parse_incr from utils_ner import InputExample, Split, TokenClassificationTask logger = logging.getLogger(__name__) class NER(TokenClassificationTask): def __init__(self, label_idx=-1): # in NER datasets, the last column is usually reserved for NER label self.label_idx = label_idx def read_examples_from_file(self, data_dir, mode: Union[Split, str]) -> List[InputExample]: if isinstance(mode, Split): mode = mode.value file_path = os.path.join(data_dir, f"{mode}.txt") guid_index = 1 examples = [] with open(file_path, encoding="utf-8") as f: words = [] labels = [] for line in f: if line.startswith("-DOCSTART-") or line == "" or line == "\n": if words: examples.append(InputExample(guid=f"{mode}-{guid_index}", words=words, labels=labels)) guid_index += 1 words = [] labels = [] else: splits = line.split(" ") words.append(splits[0]) if len(splits) > 1: labels.append(splits[self.label_idx].replace("\n", "")) else: # Examples could have no label for mode = "test" labels.append("O") if words: examples.append(InputExample(guid=f"{mode}-{guid_index}", words=words, labels=labels)) return examples def write_predictions_to_file(self, writer: TextIO, test_input_reader: TextIO, preds_list: List): example_id = 0 for line in test_input_reader: if line.startswith("-DOCSTART-") or line == "" or line == "\n": writer.write(line) if not preds_list[example_id]: example_id += 1 elif preds_list[example_id]: output_line = line.split()[0] + " " + preds_list[example_id].pop(0) + "\n" writer.write(output_line) else: logger.warning("Maximum sequence length exceeded: No prediction for '%s'.", line.split()[0]) def get_labels(self, path: str) -> List[str]: if path: with open(path, "r") as f: labels = f.read().splitlines() if "O" not in labels: labels = ["O"] + labels return labels else: return ["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"] class Chunk(NER): def __init__(self): # in CONLL2003 dataset chunk column is second-to-last super().__init__(label_idx=-2) def get_labels(self, path: str) -> List[str]: if path: with open(path, "r") as f: labels = f.read().splitlines() if "O" not in labels: labels = ["O"] + labels return labels else: return [ "O", "B-ADVP", "B-INTJ", "B-LST", "B-PRT", "B-NP", "B-SBAR", "B-VP", "B-ADJP", "B-CONJP", "B-PP", "I-ADVP", "I-INTJ", "I-LST", "I-PRT", "I-NP", "I-SBAR", "I-VP", "I-ADJP", "I-CONJP", "I-PP", ] class POS(TokenClassificationTask): def read_examples_from_file(self, data_dir, mode: Union[Split, str]) -> List[InputExample]: if isinstance(mode, Split): mode = mode.value file_path = os.path.join(data_dir, f"{mode}.txt") guid_index = 1 examples = [] with open(file_path, encoding="utf-8") as f: for sentence in parse_incr(f): words = [] labels = [] for token in sentence: words.append(token["form"]) labels.append(token["upos"]) assert len(words) == len(labels) if words: examples.append(InputExample(guid=f"{mode}-{guid_index}", words=words, labels=labels)) guid_index += 1 return examples def write_predictions_to_file(self, writer: TextIO, test_input_reader: TextIO, preds_list: List): example_id = 0 for sentence in parse_incr(test_input_reader): s_p = preds_list[example_id] out = "" for token in sentence: out += f'{token["form"]} ({token["upos"]}|{s_p.pop(0)}) ' out += "\n" writer.write(out) example_id += 1 def get_labels(self, path: str) -> List[str]: if path: with open(path, "r") as f: return f.read().splitlines() else: return [ "ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", 
"NOUN", "NUM", "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X", ]
AdaMix/examples/legacy/token-classification/tasks.py
#! /usr/bin/python3 import argparse import logging import os import sys from collections import namedtuple import torch from torch.utils.data import DataLoader, SequentialSampler from tqdm import tqdm from modeling_bertabs import BertAbs, build_predictor from transformers import BertTokenizer from .utils_summarization import ( CNNDMDataset, build_mask, compute_token_type_ids, encode_for_summarization, truncate_or_pad, ) logger = logging.getLogger(__name__) logging.basicConfig(stream=sys.stdout, level=logging.INFO) Batch = namedtuple("Batch", ["document_names", "batch_size", "src", "segs", "mask_src", "tgt_str"]) def evaluate(args): tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True) model = BertAbs.from_pretrained("remi/bertabs-finetuned-extractive-abstractive-summarization") model.to(args.device) model.eval() symbols = { "BOS": tokenizer.vocab["[unused0]"], "EOS": tokenizer.vocab["[unused1]"], "PAD": tokenizer.vocab["[PAD]"], } if args.compute_rouge: reference_summaries = [] generated_summaries = [] import nltk import rouge nltk.download("punkt") rouge_evaluator = rouge.Rouge( metrics=["rouge-n", "rouge-l"], max_n=2, limit_length=True, length_limit=args.beam_size, length_limit_type="words", apply_avg=True, apply_best=False, alpha=0.5, # Default F1_score weight_factor=1.2, stemming=True, ) # these (unused) arguments are defined to keep the compatibility # with the legacy code and will be deleted in a next iteration. args.result_path = "" args.temp_dir = "" data_iterator = build_data_iterator(args, tokenizer) predictor = build_predictor(args, tokenizer, symbols, model) logger.info("***** Running evaluation *****") logger.info(" Number examples = %d", len(data_iterator.dataset)) logger.info(" Batch size = %d", args.batch_size) logger.info("") logger.info("***** Beam Search parameters *****") logger.info(" Beam size = %d", args.beam_size) logger.info(" Minimum length = %d", args.min_length) logger.info(" Maximum length = %d", args.max_length) logger.info(" Alpha (length penalty) = %.2f", args.alpha) logger.info(" Trigrams %s be blocked", ("will" if args.block_trigram else "will NOT")) for batch in tqdm(data_iterator): batch_data = predictor.translate_batch(batch) translations = predictor.from_batch(batch_data) summaries = [format_summary(t) for t in translations] save_summaries(summaries, args.summaries_output_dir, batch.document_names) if args.compute_rouge: reference_summaries += batch.tgt_str generated_summaries += summaries if args.compute_rouge: scores = rouge_evaluator.get_scores(generated_summaries, reference_summaries) str_scores = format_rouge_scores(scores) save_rouge_scores(str_scores) print(str_scores) def save_summaries(summaries, path, original_document_name): """Write the summaries in fies that are prefixed by the original files' name with the `_summary` appended. Attributes: original_document_names: List[string] Name of the document that was summarized. path: string Path were the summaries will be written summaries: List[string] The summaries that we produced. """ for summary, document_name in zip(summaries, original_document_name): # Prepare the summary file's name if "." in document_name: bare_document_name = ".".join(document_name.split(".")[:-1]) extension = document_name.split(".")[-1] name = bare_document_name + "_summary." 
+ extension else: name = document_name + "_summary" file_path = os.path.join(path, name) with open(file_path, "w") as output: output.write(summary) def format_summary(translation): """Transforms the output of the `from_batch` function into nicely formatted summaries. """ raw_summary, _, _ = translation summary = ( raw_summary.replace("[unused0]", "") .replace("[unused3]", "") .replace("[PAD]", "") .replace("[unused1]", "") .replace(r" +", " ") .replace(" [unused2] ", ". ") .replace("[unused2]", "") .strip() ) return summary def format_rouge_scores(scores): return """\n ****** ROUGE SCORES ****** ** ROUGE 1 F1 >> {:.3f} Precision >> {:.3f} Recall >> {:.3f} ** ROUGE 2 F1 >> {:.3f} Precision >> {:.3f} Recall >> {:.3f} ** ROUGE L F1 >> {:.3f} Precision >> {:.3f} Recall >> {:.3f}""".format( scores["rouge-1"]["f"], scores["rouge-1"]["p"], scores["rouge-1"]["r"], scores["rouge-2"]["f"], scores["rouge-2"]["p"], scores["rouge-2"]["r"], scores["rouge-l"]["f"], scores["rouge-l"]["p"], scores["rouge-l"]["r"], ) def save_rouge_scores(str_scores): with open("rouge_scores.txt", "w") as output: output.write(str_scores) # # LOAD the dataset # def build_data_iterator(args, tokenizer): dataset = load_and_cache_examples(args, tokenizer) sampler = SequentialSampler(dataset) def collate_fn(data): return collate(data, tokenizer, block_size=512, device=args.device) iterator = DataLoader( dataset, sampler=sampler, batch_size=args.batch_size, collate_fn=collate_fn, ) return iterator def load_and_cache_examples(args, tokenizer): dataset = CNNDMDataset(args.documents_dir) return dataset def collate(data, tokenizer, block_size, device): """Collate formats the data passed to the data loader. In particular we tokenize the data batch after batch to avoid keeping them all in memory. We output the data as a namedtuple to fit the original BertAbs's API. """ data = [x for x in data if not len(x[1]) == 0] # remove empty_files names = [name for name, _, _ in data] summaries = [" ".join(summary_list) for _, _, summary_list in data] encoded_text = [encode_for_summarization(story, summary, tokenizer) for _, story, summary in data] encoded_stories = torch.tensor( [truncate_or_pad(story, block_size, tokenizer.pad_token_id) for story, _ in encoded_text] ) encoder_token_type_ids = compute_token_type_ids(encoded_stories, tokenizer.cls_token_id) encoder_mask = build_mask(encoded_stories, tokenizer.pad_token_id) batch = Batch( document_names=names, batch_size=len(encoded_stories), src=encoded_stories.to(device), segs=encoder_token_type_ids.to(device), mask_src=encoder_mask.to(device), tgt_str=summaries, ) return batch def decode_summary(summary_tokens, tokenizer): """Decode the summary and return it in a format suitable for evaluation. """ summary_tokens = summary_tokens.to("cpu").numpy() summary = tokenizer.decode(summary_tokens) sentences = summary.split(".") sentences = [s + "." for s in sentences] return sentences def main(): """The main function defines the interface with the users.""" parser = argparse.ArgumentParser() parser.add_argument( "--documents_dir", default=None, type=str, required=True, help="The folder where the documents to summarize are located.", ) parser.add_argument( "--summaries_output_dir", default=None, type=str, required=False, help="The folder in wich the summaries should be written. Defaults to the folder where the documents are", ) parser.add_argument( "--compute_rouge", default=False, type=bool, required=False, help="Compute the ROUGE metrics during evaluation. 
Only available for the CNN/DailyMail dataset.", ) # EVALUATION options parser.add_argument( "--no_cuda", default=False, type=bool, help="Whether to force the execution on CPU.", ) parser.add_argument( "--batch_size", default=4, type=int, help="Batch size per GPU/CPU for training.", ) # BEAM SEARCH arguments parser.add_argument( "--min_length", default=50, type=int, help="Minimum number of tokens for the summaries.", ) parser.add_argument( "--max_length", default=200, type=int, help="Maixmum number of tokens for the summaries.", ) parser.add_argument( "--beam_size", default=5, type=int, help="The number of beams to start with for each example.", ) parser.add_argument( "--alpha", default=0.95, type=float, help="The value of alpha for the length penalty in the beam search.", ) parser.add_argument( "--block_trigram", default=True, type=bool, help="Whether to block the existence of repeating trigrams in the text generated by beam search.", ) args = parser.parse_args() # Select device (distibuted not available) args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") # Check the existence of directories if not args.summaries_output_dir: args.summaries_output_dir = args.documents_dir if not documents_dir_is_valid(args.documents_dir): raise FileNotFoundError( "We could not find the directory you specified for the documents to summarize, or it was empty. Please specify a valid path." ) os.makedirs(args.summaries_output_dir, exist_ok=True) evaluate(args) def documents_dir_is_valid(path): if not os.path.exists(path): return False file_list = os.listdir(path) if len(file_list) == 0: return False return True if __name__ == "__main__": main()
AdaMix/examples/research_projects/bertabs/run_summarization.py
# Distil* Author: @VictorSanh This folder contains the original code used to train Distil* as well as examples showcasing how to use DistilBERT, DistilRoBERTa and DistilGPT2. **January 20, 2020 - Bug fixing** We have recently discovered and fixed [a bug](https://github.com/huggingface/transformers/commit/48cbf267c988b56c71a2380f748a3e6092ccaed3) in the evaluation of our `run_*.py` scripts that caused the reported metrics to be over-estimated on average. We have updated all the metrics with the latest runs. **December 6, 2019 - Update** We release **DistilmBERT**: 92% of `bert-base-multilingual-cased` on XNLI. The model supports 104 different languages listed [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). **November 19, 2019 - Update** We release German **DistilBERT**: 98.8% of `bert-base-german-dbmdz-cased` on NER tasks. **October 23, 2019 - Update** We release **DistilRoBERTa**: 95% of `RoBERTa-base`'s performance on GLUE, twice as fast as RoBERTa while being 35% smaller. **October 3, 2019 - Update** We release our [NeurIPS workshop paper](https://arxiv.org/abs/1910.01108) explaining our approach on **DistilBERT**. It includes updated results and further experiments. We applied the same method to GPT2 and release the weights of **DistilGPT2**. DistilGPT2 is two times faster and 33% smaller than GPT2. **The paper supersedes our [previous blogpost](https://medium.com/huggingface/distilbert-8cf3380435b5) with a different distillation loss and better performances. Please use the paper as a reference when comparing/reporting results on DistilBERT.** **September 19, 2019 - Update:** We fixed bugs in the code and released an updated version of the weights trained with a modification of the distillation loss. DistilBERT now reaches 99% of `BERT-base`'s performance on GLUE, and 86.9 F1 score on SQuAD v1.1 dev set (compared to 88.5 for `BERT-base`). We will publish a formal write-up of our approach in the near future! ## What is Distil* Distil* is a class of compressed models that started with DistilBERT. DistilBERT stands for Distilled-BERT. DistilBERT is a small, fast, cheap and light Transformer model based on Bert architecture. It has 40% less parameters than `bert-base-uncased`, runs 60% faster while preserving 97% of BERT's performances as measured on the GLUE language understanding benchmark. DistilBERT is trained using knowledge distillation, a technique to compress a large model called the teacher into a smaller model called the student. By distillating Bert, we obtain a smaller Transformer model that bears a lot of similarities with the original BERT model while being lighter, smaller and faster to run. DistilBERT is thus an interesting option to put large-scaled trained Transformer model into production. We have applied the same method to other Transformer architectures and released the weights: - GPT2: on the [WikiText-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark, GPT2 reaches a perplexity on the test set of 16.3 compared to 21.1 for **DistilGPT2** (after fine-tuning on the train set). - RoBERTa: **DistilRoBERTa** reaches 95% of `RoBERTa-base`'s performance on GLUE while being twice faster and 35% smaller. - German BERT: **German DistilBERT** reaches 99% of `bert-base-german-dbmdz-cased`'s performance on German NER (CoNLL-2003). - Multilingual BERT: **DistilmBERT** reaches 92% of Multilingual BERT's performance on XNLI while being twice faster and 25% smaller. 
The model supports 104 languages listed [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). For more information on DistilBERT, please refer to our [NeurIPS workshop paper](https://arxiv.org/abs/1910.01108). Here are the results on the dev sets of GLUE: | Model | Macro-score | CoLA | MNLI | MRPC | QNLI | QQP | RTE | SST-2| STS-B| WNLI | | :---: | :---: | :---:| :---:| :---:| :---:| :---:| :---:| :---:| :---:| :---: | | BERT-base-uncased | **79.5** | 56.3 | 84.7 | 88.6 | 91.8 | 89.6 | 69.3 | 92.7 | 89.0 | 53.5 | | DistilBERT-base-uncased | **77.0** | 51.3 | 82.1 | 87.5 | 89.2 | 88.5 | 59.9 | 91.3 | 86.9 | 56.3 | | BERT-base-cased | **78.2** | 58.2 | 83.9 | 87.8 | 91.0 | 89.2 | 66.1 | 91.7 | 89.2 | 46.5 | | DistilBERT-base-cased | **75.9** | 47.2 | 81.5 | 85.6 | 88.2 | 87.8 | 60.6 | 90.4 | 85.5 | 56.3 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | RoBERTa-base (reported) | **83.2**/**86.4**<sup>2</sup> | 63.6 | 87.6 | 90.2 | 92.8 | 91.9 | 78.7 | 94.8 | 91.2 | 57.7<sup>3</sup> | | DistilRoBERTa<sup>1</sup> | **79.0**/**82.3**<sup>2</sup> | 59.3 | 84.0 | 86.6 | 90.8 | 89.4 | 67.9 | 92.5 | 88.3 | 52.1 | <sup>1</sup> We did not use the MNLI checkpoint for fine-tuning but directly perform transfer learning on the pre-trained DistilRoBERTa. <sup>2</sup> Macro-score computed without WNLI. <sup>3</sup> We compute this score ourselves for completeness. Here are the results on the *test* sets for 6 of the languages available in XNLI. The results are computed in the zero shot setting (trained on the English portion and evaluated on the target language portion): | Model | English | Spanish | Chinese | German | Arabic | Urdu | | :---: | :---: | :---: | :---: | :---: | :---: | :---:| | mBERT base cased (computed) | 82.1 | 74.6 | 69.1 | 72.3 | 66.4 | 58.5 | | mBERT base uncased (reported)| 81.4 | 74.3 | 63.8 | 70.5 | 62.1 | 58.3 | | DistilmBERT | 78.2 | 69.1 | 64.0 | 66.3 | 59.1 | 54.7 | ## Setup This part of the library has only be tested with Python3.6+. There are few specific dependencies to install before launching a distillation, you can install them with the command `pip install -r requirements.txt`. **Important note:** The training scripts have been updated to support PyTorch v1.2.0 (there are breaking changes compared to v1.1.0). ## How to use DistilBERT Transformers includes five pre-trained Distil* models, currently only provided for English and German (we are investigating the possibility to train and release a multilingual version of DistilBERT): - `distilbert-base-uncased`: DistilBERT English language model pretrained on the same data used to pretrain Bert (concatenation of the Toronto Book Corpus and full English Wikipedia) using distillation with the supervision of the `bert-base-uncased` version of Bert. The model has 6 layers, 768 dimension and 12 heads, totalizing 66M parameters. - `distilbert-base-uncased-distilled-squad`: A finetuned version of `distilbert-base-uncased` finetuned using (a second step of) knowledge distillation on SQuAD 1.0. This model reaches a F1 score of 86.9 on the dev set (for comparison, Bert `bert-base-uncased` version reaches a 88.5 F1 score). - `distilbert-base-cased`: DistilBERT English language model pretrained on the same data used to pretrain Bert (concatenation of the Toronto Book Corpus and full English Wikipedia) using distillation with the supervision of the `bert-base-cased` version of Bert. The model has 6 layers, 768 dimension and 12 heads, totalizing 65M parameters. 
- `distilbert-base-cased-distilled-squad`: A finetuned version of `distilbert-base-cased` finetuned using (a second step of) knowledge distillation on SQuAD 1.0. This model reaches a F1 score of 87.1 on the dev set (for comparison, Bert `bert-base-cased` version reaches a 88.7 F1 score). - `distilbert-base-german-cased`: DistilBERT German language model pretrained on 1/2 of the data used to pretrain Bert using distillation with the supervision of the `bert-base-german-dbmdz-cased` version of German DBMDZ Bert. For NER tasks the model reaches a F1 score of 83.49 on the CoNLL-2003 test set (for comparison, `bert-base-german-dbmdz-cased` reaches a 84.52 F1 score), and a F1 score of 85.23 on the GermEval 2014 test set (`bert-base-german-dbmdz-cased` reaches a 86.89 F1 score). - `distilgpt2`: DistilGPT2 English language model pretrained with the supervision of `gpt2` (the smallest version of GPT2) on [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), a reproduction of OpenAI's WebText dataset. The model has 6 layers, 768 dimension and 12 heads, totalizing 82M parameters (compared to 124M parameters for GPT2). On average, DistilGPT2 is two times faster than GPT2. - `distilroberta-base`: DistilRoBERTa English language model pretrained with the supervision of `roberta-base` solely on [OpenWebTextCorpus](https://skylion007.github.io/OpenWebTextCorpus/), a reproduction of OpenAI's WebText dataset (it is ~4 times less training data than the teacher RoBERTa). The model has 6 layers, 768 dimension and 12 heads, totalizing 82M parameters (compared to 125M parameters for RoBERTa-base). On average DistilRoBERTa is twice as fast as Roberta-base. - `distilbert-base-multilingual-cased`: DistilmBERT multilingual model pretrained with the supervision of `bert-base-multilingual-cased` on the concatenation of Wikipedia in 104 different languages. The model supports the 104 languages listed [here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages). The model has 6 layers, 768 dimension and 12 heads, totalizing 134M parameters (compared to 177M parameters for mBERT-base). On average DistilmBERT is twice as fast as mBERT-base. Using DistilBERT is very similar to using BERT. DistilBERT share the same tokenizer as BERT's `bert-base-uncased` even though we provide a link to this tokenizer under the `DistilBertTokenizer` name to have a consistent naming between the library models. ```python tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-cased') model = DistilBertModel.from_pretrained('distilbert-base-cased') input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) outputs = model(input_ids) last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple ``` Similarly, using the other Distil* models simply consists in calling the base classes with a different pretrained checkpoint: - DistilBERT uncased: `model = DistilBertModel.from_pretrained('distilbert-base-uncased')` - DistilGPT2: `model = GPT2Model.from_pretrained('distilgpt2')` - DistilRoBERTa: `model = RobertaModel.from_pretrained('distilroberta-base')` - DistilmBERT: `model = DistilBertModel.from_pretrained('distilbert-base-multilingual-cased')` ## How to train Distil* In the following, we will explain how you can train DistilBERT. ### A. Preparing the data The weights we release are trained using a concatenation of Toronto Book Corpus and English Wikipedia (same training data as the English version of BERT). 
To avoid processing the data several time, we do it once and for all before the training. From now on, will suppose that you have a text file `dump.txt` which contains one sequence per line (a sequence being composed of one of several coherent sentences). First, we will binarize the data, i.e. tokenize the data and convert each token in an index in our model's vocabulary. ```bash python scripts/binarized_data.py \ --file_path data/dump.txt \ --tokenizer_type bert \ --tokenizer_name bert-base-uncased \ --dump_file data/binarized_text ``` Our implementation of masked language modeling loss follows [XLM](https://github.com/facebookresearch/XLM)'s one and smooths the probability of masking with a factor that put more emphasis on rare words. Thus we count the occurrences of each tokens in the data: ```bash python scripts/token_counts.py \ --data_file data/binarized_text.bert-base-uncased.pickle \ --token_counts_dump data/token_counts.bert-base-uncased.pickle \ --vocab_size 30522 ``` ### B. Training Training with distillation is really simple once you have pre-processed the data: ```bash python train.py \ --student_type distilbert \ --student_config training_configs/distilbert-base-uncased.json \ --teacher_type bert \ --teacher_name bert-base-uncased \ --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm \ --freeze_pos_embs \ --dump_path serialization_dir/my_first_training \ --data_file data/binarized_text.bert-base-uncased.pickle \ --token_counts data/token_counts.bert-base-uncased.pickle \ --force # overwrites the `dump_path` if it already exists. ``` By default, this will launch a training on a single GPU (even if more are available on the cluster). Other parameters are available in the command line, please look in `train.py` or run `python train.py --help` to list them. We highly encourage you to use distributed training for training DistilBERT as the training corpus is quite large. Here's an example that runs a distributed training on a single node having 4 GPUs: ```bash export NODE_RANK=0 export N_NODES=1 export N_GPU_NODE=4 export WORLD_SIZE=4 export MASTER_PORT=<AN_OPEN_PORT> export MASTER_ADDR=<I.P.> pkill -f 'python -u train.py' python -m torch.distributed.launch \ --nproc_per_node=$N_GPU_NODE \ --nnodes=$N_NODES \ --node_rank $NODE_RANK \ --master_addr $MASTER_ADDR \ --master_port $MASTER_PORT \ train.py \ --force \ --gpus $WORLD_SIZE \ --student_type distilbert \ --student_config training_configs/distilbert-base-uncased.json \ --teacher_type bert \ --teacher_name bert-base-uncased \ --alpha_ce 0.33 --alpha_mlm 0.33 --alpha_cos 0.33 --alpha_clm 0.0 --mlm \ --freeze_pos_embs \ --dump_path serialization_dir/my_first_training \ --data_file data/binarized_text.bert-base-uncased.pickle \ --token_counts data/token_counts.bert-base-uncased.pickle ``` **Tips:** Starting distilled training with good initialization of the model weights is crucial to reach decent performance. In our experiments, we initialized our model from a few layers of the teacher (Bert) itself! Please refer to `scripts/extract.py` and `scripts/extract_distilbert.py` to create a valid initialization checkpoint and use `--student_pretrained_weights` argument to use this initialization for the distilled training! Happy distillation! 
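
For intuition, here is a schematic of how the loss terms weighted by `--alpha_ce`, `--alpha_mlm` and `--alpha_cos` above are typically combined. This is a simplified sketch, not the exact code in `train.py`/`distiller.py`; the temperature value and tensor shapes are made up:

```python
import torch
import torch.nn.functional as F

temperature = 2.0
alpha_ce, alpha_mlm, alpha_cos = 5.0, 2.0, 1.0

# Made-up shapes: (batch, seq_len, vocab) logits and (batch, seq_len, dim) hidden states.
s_logits, t_logits = torch.randn(8, 32, 30522), torch.randn(8, 32, 30522)
s_hidden, t_hidden = torch.randn(8, 32, 768), torch.randn(8, 32, 768)
labels = torch.randint(0, 30522, (8, 32))

# 1) Distillation loss: KL divergence between softened teacher and student distributions.
loss_ce = F.kl_div(
    F.log_softmax(s_logits / temperature, dim=-1),
    F.softmax(t_logits / temperature, dim=-1),
    reduction="batchmean",
) * (temperature ** 2)

# 2) Standard masked language modeling loss on the student's own predictions.
loss_mlm = F.cross_entropy(s_logits.view(-1, s_logits.size(-1)), labels.view(-1))

# 3) Cosine embedding loss aligning student and teacher hidden states.
target = torch.ones(s_hidden.view(-1, s_hidden.size(-1)).size(0))
loss_cos = F.cosine_embedding_loss(
    s_hidden.view(-1, s_hidden.size(-1)), t_hidden.view(-1, t_hidden.size(-1)), target
)

loss = alpha_ce * loss_ce + alpha_mlm * loss_mlm + alpha_cos * loss_cos
```
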

## Citation

If you find the resource useful, you should cite the following paper:

```
@inproceedings{sanh2019distilbert,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Sanh, Victor and Debut, Lysandre and Chaumond, Julien and Wolf, Thomas},
  booktitle={NeurIPS EMC^2 Workshop},
  year={2019}
}
```
AdaMix/examples/research_projects/distillation/README.md
# coding=utf-8 # Copyright 2019-present, the HuggingFace Inc. team and Facebook, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Utils to train DistilBERT adapted in part from Facebook, Inc XLM model (https://github.com/facebookresearch/XLM) """ import json import logging import os import socket import git import numpy as np import torch logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - PID: %(process)d - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger = logging.getLogger(__name__) def git_log(folder_path: str): """ Log commit info. """ repo = git.Repo(search_parent_directories=True) repo_infos = { "repo_id": str(repo), "repo_sha": str(repo.head.object.hexsha), "repo_branch": str(repo.active_branch), } with open(os.path.join(folder_path, "git_log.json"), "w") as f: json.dump(repo_infos, f, indent=4) def init_gpu_params(params): """ Handle single and multi-GPU / multi-node. """ if params.n_gpu <= 0: params.local_rank = 0 params.master_port = -1 params.is_master = True params.multi_gpu = False return assert torch.cuda.is_available() logger.info("Initializing GPUs") if params.n_gpu > 1: assert params.local_rank != -1 params.world_size = int(os.environ["WORLD_SIZE"]) params.n_gpu_per_node = int(os.environ["N_GPU_NODE"]) params.global_rank = int(os.environ["RANK"]) # number of nodes / node ID params.n_nodes = params.world_size // params.n_gpu_per_node params.node_id = params.global_rank // params.n_gpu_per_node params.multi_gpu = True assert params.n_nodes == int(os.environ["N_NODES"]) assert params.node_id == int(os.environ["NODE_RANK"]) # local job (single GPU) else: assert params.local_rank == -1 params.n_nodes = 1 params.node_id = 0 params.local_rank = 0 params.global_rank = 0 params.world_size = 1 params.n_gpu_per_node = 1 params.multi_gpu = False # sanity checks assert params.n_nodes >= 1 assert 0 <= params.node_id < params.n_nodes assert 0 <= params.local_rank <= params.global_rank < params.world_size assert params.world_size == params.n_nodes * params.n_gpu_per_node # define whether this is the master process / if we are in multi-node distributed mode params.is_master = params.node_id == 0 and params.local_rank == 0 params.multi_node = params.n_nodes > 1 # summary PREFIX = f"--- Global rank: {params.global_rank} - " logger.info(PREFIX + "Number of nodes: %i" % params.n_nodes) logger.info(PREFIX + "Node ID : %i" % params.node_id) logger.info(PREFIX + "Local rank : %i" % params.local_rank) logger.info(PREFIX + "World size : %i" % params.world_size) logger.info(PREFIX + "GPUs per node : %i" % params.n_gpu_per_node) logger.info(PREFIX + "Master : %s" % str(params.is_master)) logger.info(PREFIX + "Multi-node : %s" % str(params.multi_node)) logger.info(PREFIX + "Multi-GPU : %s" % str(params.multi_gpu)) logger.info(PREFIX + "Hostname : %s" % socket.gethostname()) # set GPU device torch.cuda.set_device(params.local_rank) # initialize multi-GPU if params.multi_gpu: logger.info("Initializing PyTorch distributed") torch.distributed.init_process_group( 
            init_method="env://",
            backend="nccl",
        )


def set_seed(args):
    """
    Set the random seed.
    """
    np.random.seed(args.seed)
    torch.manual_seed(args.seed)
    if args.n_gpu > 0:
        torch.cuda.manual_seed_all(args.seed)
AdaMix/examples/research_projects/distillation/utils.py
# coding=utf-8 # Copyright 2020 The HuggingFace Team All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Fine-tuning the library models for masked language modeling (BERT, ALBERT, RoBERTa...) with whole word masking on a text file or a dataset. Here is the full list of checkpoints on the hub that can be fine-tuned by this script: https://huggingface.co/models?filter=masked-lm """ # You can also adapt this script on your own masked language modeling task. Pointers for this are left as comments. import json import logging import math import os import sys from dataclasses import dataclass, field from typing import Optional from datasets import Dataset, load_dataset import transformers from transformers import ( CONFIG_MAPPING, MODEL_FOR_MASKED_LM_MAPPING, AutoConfig, AutoModelForMaskedLM, AutoTokenizer, DataCollatorForWholeWordMask, HfArgumentParser, Trainer, TrainingArguments, set_seed, ) from transformers.trainer_utils import get_last_checkpoint, is_main_process logger = logging.getLogger(__name__) MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys()) MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. """ model_name_or_path: Optional[str] = field( default=None, metadata={ "help": "The model checkpoint for weights initialization." "Don't set if you want to train a model from scratch." }, ) model_type: Optional[str] = field( default=None, metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)}, ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, ) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_auth_token: bool = field( default=False, metadata={ "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script " "with private models)." }, ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. 
""" dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) validation_file: Optional[str] = field( default=None, metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."}, ) train_ref_file: Optional[str] = field( default=None, metadata={"help": "An optional input train ref data file for whole word masking in Chinese."}, ) validation_ref_file: Optional[str] = field( default=None, metadata={"help": "An optional input validation ref data file for whole word masking in Chinese."}, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) validation_split_percentage: Optional[int] = field( default=5, metadata={ "help": "The percentage of the train set used as validation set in case there's no validation split" }, ) max_seq_length: Optional[int] = field( default=None, metadata={ "help": "The maximum total input sequence length after tokenization. Sequences longer " "than this will be truncated. Default to the max input length of the model." }, ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) mlm_probability: float = field( default=0.15, metadata={"help": "Ratio of tokens to mask for masked language modeling loss"} ) pad_to_max_length: bool = field( default=False, metadata={ "help": "Whether to pad all samples to `max_seq_length`. " "If False, will pad the samples dynamically when batching to the maximum length in the batch." }, ) def __post_init__(self): if self.train_file is not None: extension = self.train_file.split(".")[-1] assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file." if self.validation_file is not None: extension = self.validation_file.split(".")[-1] assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file." def add_chinese_references(dataset, ref_file): with open(ref_file, "r", encoding="utf-8") as f: refs = [json.loads(line) for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())] assert len(dataset) == len(refs) dataset_dict = {c: dataset[c] for c in dataset.column_names} dataset_dict["chinese_ref"] = refs return Dataset.from_dict(dataset_dict) def main(): # See all possible arguments in src/transformers/training_args.py # or by passing the --help flag to this script. # We now keep distinct sets of args, for a cleaner separation of concerns. parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) else: model_args, data_args, training_args = parser.parse_args_into_dataclasses() # Detecting last checkpoint. 
last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) # Set the verbosity to info of the Transformers logger (on main process only): if is_main_process(training_args.local_rank): transformers.utils.logging.set_verbosity_info() transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() logger.info("Training/evaluation parameters %s", training_args) # Set seed before initializing model. set_seed(training_args.seed) # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ # (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called # 'text' is found. You can easily tweak this behavior (see below). # # In distributed training, the load_dataset function guarantee that only one local process can concurrently # download the dataset. if data_args.dataset_name is not None: # Downloading and loading a dataset from the hub. datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) if "validation" not in datasets.keys(): datasets["validation"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=f"train[:{data_args.validation_split_percentage}%]", ) datasets["train"] = load_dataset( data_args.dataset_name, data_args.dataset_config_name, split=f"train[{data_args.validation_split_percentage}%:]", ) else: data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file extension = data_args.train_file.split(".")[-1] if extension == "txt": extension = "text" datasets = load_dataset(extension, data_files=data_files) # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at # https://huggingface.co/docs/datasets/loading_datasets.html. # Load pretrained model and tokenizer # # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. 
config_kwargs = { "cache_dir": model_args.cache_dir, "revision": model_args.model_revision, "use_auth_token": True if model_args.use_auth_token else None, } if model_args.config_name: config = AutoConfig.from_pretrained(model_args.config_name, **config_kwargs) elif model_args.model_name_or_path: config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs) else: config = CONFIG_MAPPING[model_args.model_type]() logger.warning("You are instantiating a new config instance from scratch.") tokenizer_kwargs = { "cache_dir": model_args.cache_dir, "use_fast": model_args.use_fast_tokenizer, "revision": model_args.model_revision, "use_auth_token": True if model_args.use_auth_token else None, } if model_args.tokenizer_name: tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs) elif model_args.model_name_or_path: tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, **tokenizer_kwargs) else: raise ValueError( "You are instantiating a new tokenizer from scratch. This is not supported by this script." "You can do it from another script, save it, and load it from here, using --tokenizer_name." ) if model_args.model_name_or_path: model = AutoModelForMaskedLM.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) else: logger.info("Training new model from scratch") model = AutoModelForMaskedLM.from_config(config) model.resize_token_embeddings(len(tokenizer)) # Preprocessing the datasets. # First we tokenize all the texts. if training_args.do_train: column_names = datasets["train"].column_names else: column_names = datasets["validation"].column_names text_column_name = "text" if "text" in column_names else column_names[0] padding = "max_length" if data_args.pad_to_max_length else False def tokenize_function(examples): # Remove empty lines examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()] return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length) tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=[text_column_name], load_from_cache_file=not data_args.overwrite_cache, ) # Add the chinese references if provided if data_args.train_ref_file is not None: tokenized_datasets["train"] = add_chinese_references(tokenized_datasets["train"], data_args.train_ref_file) if data_args.validation_ref_file is not None: tokenized_datasets["validation"] = add_chinese_references( tokenized_datasets["validation"], data_args.validation_ref_file ) # If we have ref files, need to avoid it removed by trainer has_ref = data_args.train_ref_file or data_args.validation_ref_file if has_ref: training_args.remove_unused_columns = False # Data collator # This one will take care of randomly masking the tokens. 
data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=tokenized_datasets["train"] if training_args.do_train else None, eval_dataset=tokenized_datasets["validation"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) # Training if training_args.do_train: if last_checkpoint is not None: checkpoint = last_checkpoint elif model_args.model_name_or_path is not None and os.path.isdir(model_args.model_name_or_path): checkpoint = model_args.model_name_or_path else: checkpoint = None train_result = trainer.train(resume_from_checkpoint=checkpoint) trainer.save_model() # Saves the tokenizer too for easy upload output_train_file = os.path.join(training_args.output_dir, "train_results.txt") if trainer.is_world_process_zero(): with open(output_train_file, "w") as writer: logger.info("***** Train results *****") for key, value in sorted(train_result.metrics.items()): logger.info(f" {key} = {value}") writer.write(f"{key} = {value}\n") # Need to save the state, since Trainer.save_model saves only the tokenizer with the model trainer.state.save_to_json(os.path.join(training_args.output_dir, "trainer_state.json")) # Evaluation results = {} if training_args.do_eval: logger.info("*** Evaluate ***") eval_output = trainer.evaluate() perplexity = math.exp(eval_output["eval_loss"]) results["perplexity"] = perplexity output_eval_file = os.path.join(training_args.output_dir, "eval_results_mlm_wwm.txt") if trainer.is_world_process_zero(): with open(output_eval_file, "w") as writer: logger.info("***** Eval results *****") for key, value in sorted(results.items()): logger.info(f" {key} = {value}") writer.write(f"{key} = {value}\n") return results def _mp_fn(index): # For xla_spawn (TPUs) main() if __name__ == "__main__": main()
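The key ingredient in the script above is `DataCollatorForWholeWordMask`, which masks every sub-token of a chosen word together instead of masking sub-tokens independently (and, when a `chinese_ref` column is present, uses it to group Chinese characters into words). A minimal sketch of what the collator produces, assuming a BERT-style WordPiece tokenizer — the checkpoint name is only an example:

```python
# Minimal illustration of whole word masking with the collator used above.
# Assumes `transformers` is installed; "bert-base-uncased" is just an example checkpoint.
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
examples = [{"input_ids": tokenizer.encode("Tokenization splits uncommon words into pieces.")}]

collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)
batch = collator(examples)

# `input_ids` now contains [MASK] tokens; `labels` is -100 everywhere except the masked
# positions, where it holds the original ids. If a word was split into several WordPieces,
# either all of its pieces are masked or none of them are.
print(tokenizer.decode(batch["input_ids"][0]))
print(batch["labels"][0])
```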
AdaMix/examples/research_projects/mlm_wwm/run_mlm_wwm.py
import json import logging import os import sys from pathlib import Path import finetune_rag from transformers.file_utils import is_apex_available from transformers.testing_utils import ( TestCasePlus, execute_subprocess_async, require_ray, require_torch_gpu, require_torch_multi_gpu, ) logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger() class RagFinetuneExampleTests(TestCasePlus): def _create_dummy_data(self, data_dir): os.makedirs(data_dir, exist_ok=True) contents = {"source": "What is love ?", "target": "life"} n_lines = {"train": 12, "val": 2, "test": 2} for split in ["train", "test", "val"]: for field in ["source", "target"]: content = "\n".join([contents[field]] * n_lines[split]) with open(os.path.join(data_dir, f"{split}.{field}"), "w") as f: f.write(content) def _run_finetune(self, gpus: int, distributed_retriever: str = "pytorch"): stream_handler = logging.StreamHandler(sys.stdout) logger.addHandler(stream_handler) tmp_dir = self.get_auto_remove_tmp_dir() output_dir = os.path.join(tmp_dir, "output") data_dir = os.path.join(tmp_dir, "data") self._create_dummy_data(data_dir=data_dir) testargs = f""" --data_dir {data_dir} \ --output_dir {output_dir} \ --model_name_or_path facebook/rag-sequence-base \ --model_type rag_sequence \ --do_train \ --do_predict \ --n_val -1 \ --val_check_interval 1.0 \ --train_batch_size 2 \ --eval_batch_size 1 \ --max_source_length 25 \ --max_target_length 25 \ --val_max_target_length 25 \ --test_max_target_length 25 \ --label_smoothing 0.1 \ --dropout 0.1 \ --attention_dropout 0.1 \ --weight_decay 0.001 \ --adam_epsilon 1e-08 \ --max_grad_norm 0.1 \ --lr_scheduler polynomial \ --learning_rate 3e-04 \ --num_train_epochs 1 \ --warmup_steps 4 \ --gradient_accumulation_steps 1 \ --distributed-port 8787 \ --use_dummy_dataset 1 \ --distributed_retriever {distributed_retriever} \ """.split() if gpus > 0: testargs.append(f"--gpus={gpus}") if is_apex_available(): testargs.append("--fp16") else: testargs.append("--gpus=0") testargs.append("--distributed_backend=ddp_cpu") testargs.append("--num_processes=2") cmd = [sys.executable, str(Path(finetune_rag.__file__).resolve())] + testargs execute_subprocess_async(cmd, env=self.get_env()) metrics_save_path = os.path.join(output_dir, "metrics.json") with open(metrics_save_path) as f: result = json.load(f) return result @require_torch_gpu def test_finetune_gpu(self): result = self._run_finetune(gpus=1) self.assertGreaterEqual(result["test"][0]["test_avg_em"], 0.2) @require_torch_multi_gpu def test_finetune_multigpu(self): result = self._run_finetune(gpus=2) self.assertGreaterEqual(result["test"][0]["test_avg_em"], 0.2) @require_torch_gpu @require_ray def test_finetune_gpu_ray_retrieval(self): result = self._run_finetune(gpus=1, distributed_retriever="ray") self.assertGreaterEqual(result["test"][0]["test_avg_em"], 0.2) @require_torch_multi_gpu @require_ray def test_finetune_multigpu_ray_retrieval(self): result = self._run_finetune(gpus=1, distributed_retriever="ray") self.assertGreaterEqual(result["test"][0]["test_avg_em"], 0.2)
AdaMix/examples/research_projects/rag/_test_finetune_rag.py
## Sequence to Sequence Training and Evaluation

This directory contains examples for finetuning and evaluating transformers on summarization and translation tasks.
Author: Sam Shleifer (https://github.com/sshleifer)

### Supported Architectures

- `BartForConditionalGeneration` (and anything that inherits from it)
- `MarianMTModel`
- `PegasusForConditionalGeneration`
- `MBartForConditionalGeneration`
- `FSMTForConditionalGeneration`
- `T5ForConditionalGeneration`

## Datasets

#### XSUM

```bash
cd examples/contrib/pytorch-lightning/seq2seq
wget https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz
tar -xzvf xsum.tar.gz
export XSUM_DIR=${PWD}/xsum
```
This should make a directory called `xsum/` with files like `test.source`. To use your own data, copy that file format: each article to be summarized is on its own line.

#### CNN/DailyMail

```bash
cd examples/contrib/pytorch-lightning/seq2seq
wget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz
tar -xzvf cnn_dm_v2.tgz  # empty lines removed
mv cnn_cln cnn_dm
export CNN_DIR=${PWD}/cnn_dm
```
This should make a directory called `cnn_dm/` with 6 files.

#### WMT16 English-Romanian Translation Data

Download with this command:
```bash
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
export ENRO_DIR=${PWD}/wmt_en_ro
```
This should make a directory called `wmt_en_ro/` with 6 files.

#### WMT English-German

```bash
wget https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz
tar -xzvf wmt_en_de.tgz
export DATA_DIR=${PWD}/wmt_en_de
```

#### FSMT datasets (wmt)

Refer to the scripts starting with `eval_` under:
https://github.com/huggingface/transformers/tree/master/scripts/fsmt

#### Pegasus (multiple datasets)

Multiple eval datasets are available for download from:
https://github.com/stas00/porting/tree/master/datasets/pegasus

#### Your Data

If you are using your own data, it must be formatted as one directory with 6 files:
```
train.source
train.target
val.source
val.target
test.source
test.target
```
The `.source` files are the input, the `.target` files are the desired output.

### Potential issues

- native AMP (`--fp16` and no apex) may lead to a huge memory leak and require 10x gpu memory. This has been fixed in pytorch-nightly, and the minimal official version to have this fix will be pytorch-1.8. Until then, if you have to use mixed precision, please use AMP only with pytorch-nightly or NVIDIA's apex. Reference: https://github.com/huggingface/transformers/issues/8403

### Tips and Tricks

General Tips:
- since you need to run from this folder, and likely need to modify code, the easiest workflow is to fork transformers, clone your fork, and run `pip install -e .` before you get started.
- try `--freeze_encoder` or `--freeze_embeds` for faster training/larger batch size. (3hr per epoch with bs=8, see the "xsum_shared_task" command below)
- `fp16_opt_level=O1` (the default) works best.
- In addition to the pytorch-lightning .ckpt checkpoint, a transformers checkpoint will be saved. Load it with `BartForConditionalGeneration.from_pretrained(f'{output_dir}/best_tfmr')`.
- At the moment, `--do_predict` does not work in a multi-gpu setting. You need to use `evaluate_checkpoint` or the `run_eval.py` code.
- This warning can be safely ignored:
    > "Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large-xsum and are newly initialized: ['final_logits_bias']"
- Both finetuning and eval are 30% faster with `--fp16`.
For that you need to [install apex](https://github.com/NVIDIA/apex#quick-start).
- Read scripts before you run them!

Summarization Tips:
- (summ) 1 epoch at batch size 1 for bart-large takes 24 hours and requires 13GB GPU RAM with fp16 on an NVIDIA-V100.
- If you want to run experiments on improving the summarization finetuning process, try the XSUM Shared Task (below). It's faster to train than CNNDM because the summaries are shorter.
- For CNN/DailyMail, the default `val_max_target_length` and `test_max_target_length` will truncate the ground truth labels, resulting in slightly higher rouge scores. To get accurate rouge scores, you should rerun calculate_rouge on the `{output_dir}/test_generations.txt` file saved by `trainer.test()`.
- `--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100` is a reasonable setting for XSUM.
- `wandb` can be used by specifying `--logger_name wandb`. It is useful for reproducibility. Specify the environment variable `WANDB_PROJECT='hf_xsum'` to do the XSUM shared task.
- If you are finetuning on your own dataset, start from `distilbart-cnn-12-6` if you want long summaries and `distilbart-xsum-12-6` if you want short summaries. (It rarely makes sense to start from `bart-large` unless you are researching finetuning methods.)

**Update 2018-07-18**
Datasets: `LegacySeq2SeqDataset` will be used for all tokenizers without a `prepare_seq2seq_batch` method. Otherwise, `Seq2SeqDataset` will be used.
Future work/help wanted: A new dataset to support multilingual tasks.

### Finetuning Scripts

All finetuning bash scripts call finetune.py (or distillation.py) with reasonable command line arguments. They usually require extra command line arguments to work.

To see all the possible command line options, run:

```bash
./finetune.py --help
```

### Finetuning Training Params

To override the pretrained model's training params, you can pass them to `./finetune.sh`:

```bash
./finetune.sh \
    [...]
    --encoder_layerdrop 0.1 \
    --decoder_layerdrop 0.1 \
    --dropout 0.1 \
    --attention_dropout 0.1 \
```

### Summarization Finetuning

Run/modify `finetune.sh`

The following command should work on a 16GB GPU:

```bash
./finetune.sh \
    --data_dir $XSUM_DIR \
    --train_batch_size=1 \
    --eval_batch_size=1 \
    --output_dir=xsum_results \
    --num_train_epochs 6 \
    --model_name_or_path facebook/bart-large
```

There is a starter finetuning script for pegasus at `finetune_pegasus_xsum.sh`.

### Translation Finetuning

First, follow the wmt_en_ro download instructions.
Then you can finetune mbart_cc25 on english-romanian with the following command.

**Recommendation:** Read and potentially modify the fairly opinionated defaults in the `train_mbart_cc25_enro.sh` script before running it.

Best performing command:
```bash
# optionally
export ENRO_DIR='wmt_en_ro' # Download instructions above
# export WANDB_PROJECT="MT" # optional
export MAX_LEN=128
export BS=4
./train_mbart_cc25_enro.sh --output_dir enro_finetune_baseline --label_smoothing 0.1 --fp16_opt_level=O1 --logger_name wandb --sortish_sampler
```
This should take < 6h/epoch on a 16GB v100 and achieve test BLEU above 26.
To get results in line with fairseq, you need to do some postprocessing.
(see `romanian_postprocessing.md`)

MultiGPU command (using 8 GPUs as an example)
```bash
export ENRO_DIR='wmt_en_ro' # Download instructions above
# export WANDB_PROJECT="MT" # optional
export MAX_LEN=128
export BS=4
./train_mbart_cc25_enro.sh --output_dir enro_finetune_baseline --gpus 8 --logger_name wandb
```

### Finetuning Outputs

As you train, `output_dir` will be filled with files that look kind of like this (comments are mine).
Some of them are metrics, some of them are checkpoints, some of them are metadata. Here is a quick tour:

```bash
output_dir
├── best_tfmr  # this is a huggingface checkpoint generated by save_pretrained. It is the same model as the PL .ckpt file below
│   ├── config.json
│   ├── merges.txt
│   ├── pytorch_model.bin
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── git_log.json  # repo, branch, and commit hash
├── val_avg_rouge2=0.1984-step_count=11.ckpt  # this is a pytorch lightning checkpoint associated with the best val score. (it will be called BLEU for MT)
├── metrics.json  # new validation metrics will continually be appended to this
├── student  # this is a huggingface checkpoint generated by SummarizationDistiller. It is the student before it gets finetuned.
│   ├── config.json
│   └── pytorch_model.bin
├── test_generations.txt  # ^^ are the summaries or translations produced by your best checkpoint on the test data. Populated when training is done
├── test_results.txt  # a convenience file with the test set metrics. This data is also in metrics.json['test']
├── hparams.pkl  # the command line args passed after some light preprocessing. Should be saved fairly quickly.
```
After training, you can recover the best checkpoint by running
```python
from transformers import AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained(f'{output_dir}/best_tfmr')
```

### Converting pytorch-lightning checkpoints

pytorch lightning `--do_predict` often fails; after you are done training, the best way to evaluate your model is to convert it.

This should be done for you, with a file called `{save_dir}/best_tfmr`.

If that file doesn't exist but you have a lightning `.ckpt` file, you can run
```bash
python convert_pl_checkpoint_to_hf.py PATH_TO_CKPT randomly_initialized_hf_model_path save_dir/best_tfmr
```
Then either `run_eval` or `run_distributed_eval` with `save_dir/best_tfmr` (see previous sections).

# Experimental Features

These features are harder to use and not always useful.

### Dynamic Batch Size for MT

`finetune.py` has a command line arg `--max_tokens_per_batch` that allows batches to be dynamically sized.
This feature can only be used:
- with fairseq installed
- on 1 GPU
- without sortish sampler
- after calling `./save_len_file.py $tok $data_dir`

For example,
```bash
./save_len_file.py Helsinki-NLP/opus-mt-en-ro wmt_en_ro
./dynamic_bs_example.sh --max_tokens_per_batch=2000 --output_dir benchmark_dynamic_bs
```
splits `wmt_en_ro/train` into 11,197 unevenly sized batches and can finish 1 epoch in 8 minutes on a v100.

For comparison,
```bash
./dynamic_bs_example.sh --sortish_sampler --train_batch_size 48
```
uses 12,723 batches of length 48 and takes slightly more time, 9.5 minutes.

The feature is still experimental, because:
+ we can make it much more robust if we have memory mapped/preprocessed datasets.
+ The speedup over sortish sampler is not that large at the moment.
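Related to the checkpoint conversion above: before running the full eval scripts, a converted `best_tfmr` directory can be smoke-tested with a few lines of generation. This is a minimal sketch; the path is whatever `save_dir` you used and the input sentence is only an example:

```python
# Quick smoke test for a converted checkpoint; "save_dir/best_tfmr" is the directory
# produced by training or by convert_pl_checkpoint_to_hf.py above (path is an example).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "save_dir/best_tfmr"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

batch = tokenizer(["The quick brown fox jumped over the lazy dog."], return_tensors="pt", truncation=True)
generated = model.generate(**batch, num_beams=2, max_length=60)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```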
# DistilBART

<!---It should be called distilling bart and pegasus, but I don't want to break the link in the paper.-->

This section describes all code and artifacts from our [Paper](http://arxiv.org/abs/2010.13002)

![DBART](https://huggingface.co/front/thumbnails/distilbart_large.png)

+ For the CNN/DailyMail dataset (relatively longer, more extractive summaries), we found a simple technique that works, which we call "Shrink and Fine-tune", or SFT: you just copy alternating layers from `facebook/bart-large-cnn` and fine-tune more on the cnn/dm data. `sshleifer/distill-pegasus-cnn-16-4`, `sshleifer/distilbart-cnn-12-6` and all other checkpoints under `sshleifer` that start with `distilbart-cnn` were trained this way.
+ For the XSUM dataset, training on pseudo-labels worked best for Pegasus (`sshleifer/distill-pegasus-16-4`), while training with KD worked best for `distilbart-xsum-12-6`
+ For `sshleifer/dbart-xsum-12-3`
+ We ran hundreds of experiments and didn't want to document hundreds of commands. If you want a command to replicate a figure from the paper that is not documented below, feel free to ask on the [forums](https://discuss.huggingface.co/t/seq2seq-distillation-methodology-questions/1270) and tag `@sshleifer`.
+ You can see the performance tradeoffs of model sizes [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0) and more granular timing results [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=1753259047&range=B2:I23).

### Evaluation

use [run_distributed_eval](./run_distributed_eval.py), with the following convenient alias
```bash
deval () {
    proc=$1
    m=$2
    dd=$3
    sd=$4
    shift
    shift
    shift
    shift
    python -m torch.distributed.launch --nproc_per_node=$proc run_distributed_eval.py \
        --model_name $m --save_dir $sd --data_dir $dd $@
}
```
On a 1 GPU system, here are four commands (that assume `xsum`, `cnn_dm` are downloaded, cmd-F for those links in this file).

`distilBART`:
```bash
deval 1 sshleifer/distilbart-xsum-12-3 xsum dbart_12_3_xsum_eval --fp16  # --help for more choices.
deval 1 sshleifer/distilbart-cnn_dm-12-6 cnn_dm dbart_12_6_cnn_eval --fp16
```

`distill-pegasus`:
```bash
deval 1 sshleifer/distill-pegasus-cnn-16-4 cnn_dm dpx_cnn_eval
deval 1 sshleifer/distill-pegasus-xsum-16-4 xsum dpx_xsum_eval
```

### Distillation
+ For all of the following commands, you can get roughly equivalent results and faster run times by passing `--num_beams=4`. That's not what we did for the paper.
+ Besides the KD section, you can also run commands with the built-in transformers trainer. See, for example, [builtin_trainer/train_distilbart_cnn.sh](./builtin_trainer/train_distilbart_cnn.sh).
+ Large performance deviations (> 5X slower or more than 0.5 Rouge-2 worse) should be reported.
+ Multi-gpu (controlled with `--gpus`) should work, but might require more epochs.

#### Recommended Workflow
+ Get your dataset in the right format (see the 6 files above).
+ Find a teacher model: [Pegasus](https://huggingface.co/models?search=pegasus) (slower, better ROUGE) or `facebook/bart-large-xsum`/`facebook/bart-large-cnn` (faster, slightly lower ROUGE). Choose the checkpoint whose corresponding dataset is most similar (or identical) to your dataset.
+ Follow the sections in order below. You can stop after SFT if you are satisfied, or move on to pseudo-labeling if you want more performance.
+ student size: If you want a close to free 50% speedup, cut the decoder in half. If you want a larger speedup, cut it in 4.
+ If your SFT run starts at a validation ROUGE-2 that is more than 10 pts below the teacher's validation ROUGE-2, you have a bug. Switching to a more expensive technique will not help. Try setting a breakpoint and looking at generation and truncation defaults/hyper-parameters, and share your experience on the forums!

#### Initialization
We use [make_student.py](./make_student.py) to copy alternating layers from the teacher, and save the resulting model to disk.
```bash
python make_student.py facebook/bart-large-xsum --save_path dbart_xsum_12_3 -e 12 -d 3
```
or for `pegasus-xsum`
```bash
python make_student.py google/pegasus-xsum --save_path dpx_xsum_16_4 --e 16 --d 4
```
we now have an initialized student saved to `dbart_xsum_12_3`, which we will use for the following commands.
+ Extension: To replicate the more complicated initialization experiments in section 6.1, or to try your own, use the `create_student_by_copying_alternating_layers` function.

#### Pegasus
+ The following commands are written for BART and will require, at minimum, the following modifications:
+ reduce batch size, and increase gradient accumulation steps so that the product `gpus * batch size * gradient_accumulation_steps = 256`. We used `--learning-rate` = 1e-4 * gradient accumulation steps.
+ don't use fp16
+ `--tokenizer_name google/pegasus-large`

### SFT (No Teacher Distillation)
You don't need `distillation.py`, you can just run:

```bash
python finetune.py \
    --data_dir xsum \
    --freeze_encoder --freeze_embeds \
    --learning_rate=3e-4 \
    --do_train \
    --do_predict \
    --fp16 --fp16_opt_level=O1 \
    --val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 \
    --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \
    --model_name_or_path dbart_xsum_12_3 \
    --train_batch_size=64 --eval_batch_size=64 \
    --sortish_sampler \
    --num_train_epochs=6 \
    --warmup_steps 500 \
    --output_dir distilbart_xsum_sft_12_3 --gpus 1
```

+ Note: The command that produced `sshleifer/distilbart-cnn-12-6` is at [train_distilbart_cnn.sh](./train_distilbart_cnn.sh)
```bash
./train_distilbart_cnn.sh
```
<!--- runtime: 6H on NVIDIA RTX 24GB GPU -->
+ Tip: You can get the same simple distillation logic by using `distillation.py --no_teacher` followed by identical arguments as the ones in `train_distilbart_cnn.sh`.
If you are using `wandb` and comparing the two distillation methods, using this entry point will make your logs consistent, because you will have the same hyper-parameters logged in every run.

### Pseudo-Labeling
+ You don't need `distillation.py`.
+ Instructions to generate pseudo-labels and use pre-computed pseudo-labels can be found [here](./precomputed_pseudo_labels.md).
Simply run `finetune.py` with one of those pseudo-label datasets as `--data_dir` (`DATA`, below).

```bash
python finetune.py \
    --teacher facebook/bart-large-xsum --data_dir DATA \
    --freeze_encoder --freeze_embeds \
    --learning_rate=3e-4 \
    --do_train \
    --do_predict \
    --fp16 --fp16_opt_level=O1 \
    --val_check_interval 0.1 --n_val 1000 --eval_beams 2 --length_penalty=0.5 \
    --max_target_length=60 --val_max_target_length=60 --test_max_target_length=100 \
    --model_name_or_path dbart_xsum_12_3 \
    --train_batch_size=32 --eval_batch_size=32 \
    --sortish_sampler \
    --num_train_epochs=5 \
    --warmup_steps 500 \
    --output_dir dbart_xsum_12_3_PL --gpus 1 --logger_name wandb
```

To combine datasets, as in Section 6.2, try something like:
```bash
curl -S https://cdn-datasets.huggingface.co/pseudo/xsum/bart_xsum_pl.tgz | tar -xvz -C .
curl -S https://cdn-datasets.huggingface.co/pseudo/xsum/pegasus_xsum.tgz | tar -xvz -C .
curl -S https://cdn-datasets.huggingface.co/summarization/xsum.tar.gz | tar -xvz -C .
mkdir all_pl
cat bart_xsum_pl/train.source pegasus_xsum/train.source xsum/train.source > all_pl/train.source
cat bart_xsum_pl/train.target pegasus_xsum/train.target xsum/train.target > all_pl/train.target
cp xsum/val* all_pl
cp xsum/test* all_pl
```
then use `all_pl` as DATA in the command above.

#### Direct Knowledge Distillation (KD)
+ In this method, we try to enforce that the student and teacher produce similar encoder_outputs, logits, and hidden_states using `SummarizationDistiller`.
+ This method was used to produce the `sshleifer/distilbart-xsum-12-6`, `6-6`, and `9-6` checkpoints.
+ You must use [`distillation.py`](./distillation.py). Note that this command initializes the student for you. The command that produced `sshleifer/distilbart-xsum-12-6` is at [train_distilbart_xsum.sh](./train_distilbart_xsum.sh)
```bash
./train_distilbart_xsum.sh --logger_name wandb --gpus 1
```

+ Expected ROUGE-2 between 21.3 and 21.6, run time ~13H.
+ direct KD + Pegasus is VERY slow and works best with `--supervise_forward --normalize_hidden`.

<!--- runtime: 13H on V-100 16GB GPU. -->

### Citation

```bibtex
@misc{shleifer2020pretrained,
      title={Pre-trained Summarization Distillation},
      author={Sam Shleifer and Alexander M. Rush},
      year={2020},
      eprint={2010.13002},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@article{Wolf2019HuggingFacesTS,
  title={HuggingFace's Transformers: State-of-the-art Natural Language Processing},
  author={Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.03771}
}
```
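For readers who want to see what the "copy alternating layers" initialization from the Initialization section boils down to, here is a hand-rolled sketch using plain `transformers`. It is an approximation for illustration only — use `make_student.py` for real runs (it also handles Pegasus, tied embeddings, etc.) — and the layer indices chosen are an assumption, not the paper's exact mapping:

```python
# Hand-rolled "shrink" step: build a 3-decoder-layer student from a 12-layer BART teacher
# by copying alternating decoder layers. Illustrative only; use make_student.py for real runs.
from transformers import BartConfig, BartForConditionalGeneration

teacher = BartForConditionalGeneration.from_pretrained("facebook/bart-large-xsum")
student_config = BartConfig.from_pretrained("facebook/bart-large-xsum", decoder_layers=3)
student = BartForConditionalGeneration(student_config)

# Copy everything whose names and shapes line up (embeddings, encoder, lm head); the
# teacher's extra decoder layers have no matching keys in the student and are skipped.
student.load_state_dict(teacher.state_dict(), strict=False)

# Overwrite the student's decoder layers with teacher layers spread across the stack
# (indices are an illustrative choice, not the published mapping).
layers_to_copy = [0, 6, 11]
for student_idx, teacher_idx in enumerate(layers_to_copy):
    student.model.decoder.layers[student_idx].load_state_dict(
        teacher.model.decoder.layers[teacher_idx].state_dict()
    )

student.save_pretrained("dbart_xsum_12_3_handmade")  # then fine-tune with finetune.py as above
```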
AdaMix/examples/research_projects/seq2seq-distillation/README.md
import logging import os import sys from dataclasses import dataclass, field from typing import List, Optional import torch from datasets import Dataset from torch import nn from tqdm.auto import tqdm from transformers import ( AutoModelForSequenceClassification, AutoTokenizer, HfArgumentParser, Trainer, TrainingArguments, set_seed, utils, ) from transformers.trainer_utils import get_last_checkpoint, is_main_process DESCRIPTION = """ Distills an NLI-based zero-shot classifier to a smaller, more efficient model with a fixed set of candidate class names. Useful for speeding up zero-shot classification in cases where labeled training data is not available, but when only a single fixed set of classes is needed. Takes a teacher NLI model, student classifier model, unlabeled dataset, and set of K possible class names. Yields a single classifier with K outputs corresponding to the provided class names. """ logger = logging.getLogger(__name__) @dataclass class TeacherModelArguments: teacher_name_or_path: Optional[str] = field( default="roberta-large-mnli", metadata={"help": "The NLI/zero-shot teacher model to be distilled."} ) hypothesis_template: Optional[str] = field( default="This example is {}.", metadata={ "help": ( "Template used to turn class names into mock hypotheses for teacher NLI model. Must include {{}}" "where class name is inserted." ) }, ) teacher_batch_size: Optional[int] = field( default=32, metadata={"help": "Batch size for generating teacher predictions."} ) multi_label: Optional[bool] = field( default=False, metadata={ "help": ( "Allow multiple classes to be true rather than forcing them to sum to 1 (sometimes called" "multi-class multi-label classification)." ) }, ) temperature: Optional[float] = field( default=1.0, metadata={"help": "Temperature applied to teacher softmax for distillation."} ) @dataclass class StudentModelArguments: student_name_or_path: Optional[str] = field( default="distilbert-base-uncased", metadata={"help": "The NLI/zero-shot teacher model to be distilled."} ) @dataclass class DataTrainingArguments: data_file: str = field(metadata={"help": "Text file with one unlabeled instance per line."}) class_names_file: str = field(metadata={"help": "Text file with one class name per line."}) use_fast_tokenizer: bool = field( default=True, metadata={"help": "Whether to use one of the fast tokenizer (backed by the Rust tokenizers library) or not."}, ) @dataclass class DistillTrainingArguments(TrainingArguments): output_dir: Optional[str] = field( default=None, metadata={"help": "The output directory where the model predictions and checkpoints will be written."}, ) per_device_train_batch_size: int = field( default=32, metadata={"help": "Batch size per GPU/TPU core/CPU for training."} ) per_device_eval_batch_size: int = field( default=128, metadata={"help": "Batch size per GPU/TPU core/CPU for evaluation."} ) num_train_epochs: float = field(default=1.0, metadata={"help": "Total number of training epochs to perform."}) do_train: bool = field(default=True, metadata={"help": "Whether to run training of student model."}) do_eval: bool = field( default=True, metadata={ "help": ( "Whether to evaluate the agreement of the final student predictions and the teacher predictions" "after training." ) }, ) save_total_limit: Optional[int] = field( default=0, metadata={ "help": ( "Limit the total amount of checkpoints." "Deletes the older checkpoints in the output_dir. Default is 0 (no checkpoints)." 
) }, ) class DistillationTrainer(Trainer): def compute_loss(self, model, inputs, return_outputs=False): target_p = inputs["labels"] outputs = model(inputs["input_ids"], attention_mask=inputs["attention_mask"]) logits = outputs[0] loss = -torch.sum(target_p * logits.log_softmax(dim=-1), axis=-1).mean() if return_outputs: return loss, outputs return loss def read_lines(path): lines = [] with open(path, "r") as f: for line in f: line = line.strip() if len(line) > 0: lines.append(line) return lines def get_premise_hypothesis_pairs(examples, class_names, hypothesis_template): premises = [] hypotheses = [] for example in examples: for name in class_names: premises.append(example) hypotheses.append(hypothesis_template.format(name)) return premises, hypotheses def get_entailment_id(config): for label, ind in config.label2id.items(): if label.lower().startswith("entail"): return ind logging.warning("Could not identify entailment dimension from teacher config label2id. Setting to -1.") return -1 def get_teacher_predictions( model_path: str, examples: List[str], class_names: List[str], hypothesis_template: str, batch_size: int, temperature: float, multi_label: bool, use_fast_tokenizer: bool, no_cuda: bool, fp16: bool, ): """ Gets predictions by the same method as the zero-shot pipeline but with DataParallel & more efficient batching """ model = AutoModelForSequenceClassification.from_pretrained(model_path) model_config = model.config if not no_cuda and torch.cuda.is_available(): model = nn.DataParallel(model.cuda()) batch_size *= len(model.device_ids) tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=use_fast_tokenizer) premises, hypotheses = get_premise_hypothesis_pairs(examples, class_names, hypothesis_template) logits = [] for i in tqdm(range(0, len(premises), batch_size)): batch_premises = premises[i : i + batch_size] batch_hypotheses = hypotheses[i : i + batch_size] encodings = tokenizer( batch_premises, batch_hypotheses, padding=True, truncation="only_first", return_tensors="pt", ) with torch.cuda.amp.autocast(enabled=fp16): with torch.no_grad(): outputs = model(**encodings) logits.append(outputs.logits.detach().cpu().float()) entail_id = get_entailment_id(model_config) contr_id = -1 if entail_id == 0 else 0 logits = torch.cat(logits, dim=0) # N*K x 3 nli_logits = logits.reshape(len(examples), len(class_names), -1)[..., [contr_id, entail_id]] # N x K x 2 if multi_label: # softmax over (contr, entail) logits for each class independently nli_prob = (nli_logits / temperature).softmax(-1) else: # softmax over entail logits across classes s.t. class probabilities sum to 1. nli_prob = (nli_logits / temperature).softmax(1) return nli_prob[..., 1] # N x K def main(): parser = HfArgumentParser( (DataTrainingArguments, TeacherModelArguments, StudentModelArguments, DistillTrainingArguments), description=DESCRIPTION, ) if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): # If we pass only one argument to the script and it's the path to a json file, # let's parse it to get our arguments. data_args, teacher_args, student_args, training_args = parser.parse_json_file( json_file=os.path.abspath(sys.argv[1]) ) else: data_args, teacher_args, student_args, training_args = parser.parse_args_into_dataclasses() # Detecting last checkpoint. 
last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) # Set the verbosity to info of the Transformers logger (on main process only): if is_main_process(training_args.local_rank): utils.logging.set_verbosity_info() utils.logging.enable_default_handler() utils.logging.enable_explicit_format() if training_args.local_rank != -1: raise ValueError("Distributed training is not currently supported.") if training_args.tpu_num_cores is not None: raise ValueError("TPU acceleration is not currently supported.") logger.info(f"Training/evaluation parameters {training_args}") # Set seed before initializing model. set_seed(training_args.seed) # 1. read in data examples = read_lines(data_args.data_file) class_names = read_lines(data_args.class_names_file) # 2. get teacher predictions and load into dataset logger.info("Generating predictions from zero-shot teacher model") teacher_soft_preds = get_teacher_predictions( teacher_args.teacher_name_or_path, examples, class_names, teacher_args.hypothesis_template, teacher_args.teacher_batch_size, teacher_args.temperature, teacher_args.multi_label, data_args.use_fast_tokenizer, training_args.no_cuda, training_args.fp16, ) dataset = Dataset.from_dict( { "text": examples, "labels": teacher_soft_preds, } ) # 3. create student logger.info("Initializing student model") model = AutoModelForSequenceClassification.from_pretrained( student_args.student_name_or_path, num_labels=len(class_names) ) tokenizer = AutoTokenizer.from_pretrained(student_args.student_name_or_path, use_fast=data_args.use_fast_tokenizer) model.config.id2label = {i: label for i, label in enumerate(class_names)} model.config.label2id = {label: i for i, label in enumerate(class_names)} # 4. 
train student on teacher predictions dataset = dataset.map(tokenizer, input_columns="text") dataset.set_format("torch") def compute_metrics(p, return_outputs=False): preds = p.predictions.argmax(-1) proxy_labels = p.label_ids.argmax(-1) # "label_ids" are actually distributions return {"agreement": (preds == proxy_labels).mean().item()} trainer = DistillationTrainer( model=model, tokenizer=tokenizer, args=training_args, train_dataset=dataset, compute_metrics=compute_metrics, ) if training_args.do_train: logger.info("Training student model on teacher predictions") trainer.train() if training_args.do_eval: agreement = trainer.evaluate(eval_dataset=dataset)["eval_agreement"] logger.info(f"Agreement of student and teacher predictions: {agreement * 100:0.2f}%") trainer.save_model() if __name__ == "__main__": main()
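Once the student has been trained and saved by the script above, it behaves like any sequence classification checkpoint with one output per class name. A minimal inference sketch — the output path is a placeholder for whatever `--output_dir` was used, and the example text is arbitrary:

```python
# Using the distilled student for classification over the fixed label set.
# "./distilled" stands for the --output_dir passed to the script; the class names were
# written into config.id2label by the script, so no NLI templates are needed anymore.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./distilled")
tokenizer = AutoTokenizer.from_pretrained("./distilled")

text = "Who are you voting for in 2020?"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1)[0]

for idx, p in sorted(enumerate(probs.tolist()), key=lambda x: -x[1]):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```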
AdaMix/examples/research_projects/zero-shot-distillation/distill_classifier.py
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Finetuning a 🤗 Transformers model for sequence classification on GLUE.""" import argparse import logging import math import os import random import datasets from datasets import load_dataset, load_metric from torch.utils.data.dataloader import DataLoader from tqdm.auto import tqdm import transformers from accelerate import Accelerator from transformers import ( AdamW, AutoConfig, AutoModelForSequenceClassification, AutoTokenizer, DataCollatorWithPadding, PretrainedConfig, SchedulerType, default_data_collator, get_scheduler, set_seed, ) logger = logging.getLogger(__name__) task_to_keys = { "cola": ("sentence", None), "mnli": ("premise", "hypothesis"), "mrpc": ("sentence1", "sentence2"), "qnli": ("question", "sentence"), "qqp": ("question1", "question2"), "rte": ("sentence1", "sentence2"), "sst2": ("sentence", None), "stsb": ("sentence1", "sentence2"), "wnli": ("sentence1", "sentence2"), } def parse_args(): parser = argparse.ArgumentParser(description="Finetune a transformers model on a text classification task") parser.add_argument( "--task_name", type=str, default=None, help="The name of the glue task to train on.", choices=list(task_to_keys.keys()), ) parser.add_argument( "--train_file", type=str, default=None, help="A csv or a json file containing the training data." ) parser.add_argument( "--validation_file", type=str, default=None, help="A csv or a json file containing the validation data." ) parser.add_argument( "--max_length", type=int, default=128, help=( "The maximum total input sequence length after tokenization. Sequences longer than this will be truncated," " sequences shorter will be padded if `--pad_to_max_lengh` is passed." ), ) parser.add_argument( "--pad_to_max_length", action="store_true", help="If passed, pad all samples to `max_length`. Otherwise, dynamic padding is used.", ) parser.add_argument( "--model_name_or_path", type=str, help="Path to pretrained model or model identifier from huggingface.co/models.", required=True, ) parser.add_argument( "--use_slow_tokenizer", action="store_true", help="If passed, will use a slow tokenizer (not backed by the 🤗 Tokenizers library).", ) parser.add_argument( "--per_device_train_batch_size", type=int, default=8, help="Batch size (per device) for the training dataloader.", ) parser.add_argument( "--per_device_eval_batch_size", type=int, default=8, help="Batch size (per device) for the evaluation dataloader.", ) parser.add_argument( "--learning_rate", type=float, default=5e-5, help="Initial learning rate (after the potential warmup period) to use.", ) parser.add_argument("--weight_decay", type=float, default=0.0, help="Weight decay to use.") parser.add_argument("--num_train_epochs", type=int, default=3, help="Total number of training epochs to perform.") parser.add_argument( "--max_train_steps", type=int, default=None, help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", ) parser.add_argument( "--gradient_accumulation_steps", type=int, default=1, help="Number of updates steps to accumulate before performing a backward/update pass.", ) parser.add_argument( "--lr_scheduler_type", type=SchedulerType, default="linear", help="The scheduler type to use.", choices=["linear", "cosine", "cosine_with_restarts", "polynomial", "constant", "constant_with_warmup"], ) parser.add_argument( "--num_warmup_steps", type=int, default=0, help="Number of steps for the warmup in the lr scheduler." ) parser.add_argument("--output_dir", type=str, default=None, help="Where to store the final model.") parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") args = parser.parse_args() # Sanity checks if args.task_name is None and args.train_file is None and args.validation_file is None: raise ValueError("Need either a task name or a training/validation file.") else: if args.train_file is not None: extension = args.train_file.split(".")[-1] assert extension in ["csv", "json"], "`train_file` should be a csv or a json file." if args.validation_file is not None: extension = args.validation_file.split(".")[-1] assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file." if args.output_dir is not None: os.makedirs(args.output_dir, exist_ok=True) return args def main(): args = parse_args() # Initialize the accelerator. We will let the accelerator handle device placement for us in this example. accelerator = Accelerator() # Make one log on every process with the configuration for debugging. logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", level=logging.INFO, ) logger.info(accelerator.state) # Setup logging, we only want one process per machine to log things on the screen. # accelerator.is_local_main_process is only True for one process per machine. logger.setLevel(logging.INFO if accelerator.is_local_main_process else logging.ERROR) if accelerator.is_local_main_process: datasets.utils.logging.set_verbosity_warning() transformers.utils.logging.set_verbosity_info() else: datasets.utils.logging.set_verbosity_error() transformers.utils.logging.set_verbosity_error() # If passed along, set the training seed now. if args.seed is not None: set_seed(args.seed) # Get the datasets: you can either provide your own CSV/JSON training and evaluation files (see below) # or specify a GLUE benchmark task (the dataset will be downloaded automatically from the datasets Hub). # For CSV/JSON files, this script will use as labels the column called 'label' and as pair of sentences the # sentences in columns called 'sentence1' and 'sentence2' if such column exists or the first two columns not named # label if at least two columns are provided. # If the CSVs/JSONs contain only one non-label column, the script does single sentence classification on this # single column. You can easily tweak this behavior (see below) # In distributed training, the load_dataset function guarantee that only one local process can concurrently # download the dataset. if args.task_name is not None: # Downloading and loading a dataset from the hub. raw_datasets = load_dataset("glue", args.task_name) else: # Loading the dataset from local csv or json file. 
data_files = {} if args.train_file is not None: data_files["train"] = args.train_file if args.validation_file is not None: data_files["validation"] = args.validation_file extension = (args.train_file if args.train_file is not None else args.valid_file).split(".")[-1] raw_datasets = load_dataset(extension, data_files=data_files) # See more about loading any type of standard or custom dataset at # https://huggingface.co/docs/datasets/loading_datasets.html. # Labels if args.task_name is not None: is_regression = args.task_name == "stsb" if not is_regression: label_list = raw_datasets["train"].features["label"].names num_labels = len(label_list) else: num_labels = 1 else: # Trying to have good defaults here, don't hesitate to tweak to your needs. is_regression = datasets["train"].features["label"].dtype in ["float32", "float64"] if is_regression: num_labels = 1 else: # A useful fast method: # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.unique label_list = datasets["train"].unique("label") label_list.sort() # Let's sort it for determinism num_labels = len(label_list) # Load pretrained model and tokenizer # # In distributed training, the .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. config = AutoConfig.from_pretrained(args.model_name_or_path, num_labels=num_labels, finetuning_task=args.task_name) tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path, use_fast=not args.use_slow_tokenizer) model = AutoModelForSequenceClassification.from_pretrained( args.model_name_or_path, from_tf=bool(".ckpt" in args.model_name_or_path), config=config, ) # Preprocessing the datasets if args.task_name is not None: sentence1_key, sentence2_key = task_to_keys[args.task_name] else: # Again, we try to have some nice defaults but don't hesitate to tweak to your use case. non_label_column_names = [name for name in datasets["train"].column_names if name != "label"] if "sentence1" in non_label_column_names and "sentence2" in non_label_column_names: sentence1_key, sentence2_key = "sentence1", "sentence2" else: if len(non_label_column_names) >= 2: sentence1_key, sentence2_key = non_label_column_names[:2] else: sentence1_key, sentence2_key = non_label_column_names[0], None # Some models have set the order of the labels to use, so let's make sure we do use it. label_to_id = None if ( model.config.label2id != PretrainedConfig(num_labels=num_labels).label2id and args.task_name is not None and not is_regression ): # Some have all caps in their config, some don't. label_name_to_id = {k.lower(): v for k, v in model.config.label2id.items()} if list(sorted(label_name_to_id.keys())) == list(sorted(label_list)): logger.info( f"The configuration of the model provided the following label correspondence: {label_name_to_id}. " "Using it!" ) label_to_id = {i: label_name_to_id[label_list[i]] for i in range(num_labels)} else: logger.warn( "Your model seems to have been trained with labels, but they don't match the dataset: ", f"model labels: {list(sorted(label_name_to_id.keys()))}, dataset labels: {list(sorted(label_list))}." 
"\nIgnoring the model labels as a result.", ) elif args.task_name is None: label_to_id = {v: i for i, v in enumerate(label_list)} padding = "max_length" if args.pad_to_max_length else False def preprocess_function(examples): # Tokenize the texts texts = ( (examples[sentence1_key],) if sentence2_key is None else (examples[sentence1_key], examples[sentence2_key]) ) result = tokenizer(*texts, padding=padding, max_length=args.max_length, truncation=True) if "label" in examples: if label_to_id is not None: # Map labels to IDs (not necessary for GLUE tasks) result["labels"] = [label_to_id[l] for l in examples["label"]] else: # In all cases, rename the column to labels because the model will expect that. result["labels"] = examples["label"] return result processed_datasets = raw_datasets.map( preprocess_function, batched=True, remove_columns=raw_datasets["train"].column_names ) train_dataset = processed_datasets["train"] eval_dataset = processed_datasets["validation_matched" if args.task_name == "mnli" else "validation"] # Log a few random samples from the training set: for index in random.sample(range(len(train_dataset)), 3): logger.info(f"Sample {index} of the training set: {train_dataset[index]}.") # DataLoaders creation: if args.pad_to_max_length: # If padding was already done ot max length, we use the default data collator that will just convert everything # to tensors. data_collator = default_data_collator else: # Otherwise, `DataCollatorWithPadding` will apply dynamic padding for us (by padding to the maximum length of # the samples passed). When using mixed precision, we add `pad_to_multiple_of=8` to pad all tensors to multiple # of 8s, which will enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). data_collator = DataCollatorWithPadding(tokenizer, pad_to_multiple_of=(8 if accelerator.use_fp16 else None)) train_dataloader = DataLoader( train_dataset, shuffle=True, collate_fn=data_collator, batch_size=args.per_device_train_batch_size ) eval_dataloader = DataLoader(eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size) # Optimizer # Split weights in two groups, one with weight decay and the other not. no_decay = ["bias", "LayerNorm.weight"] optimizer_grouped_parameters = [ { "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], "weight_decay": args.weight_decay, }, { "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0, }, ] optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate) # Prepare everything with our `accelerator`. model, optimizer, train_dataloader, eval_dataloader = accelerator.prepare( model, optimizer, train_dataloader, eval_dataloader ) # Note -> the training dataloader needs to be prepared before we grab his length below (cause its length will be # shorter in multiprocess) # Scheduler and math around the number of training steps. num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) if args.max_train_steps is None: args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch else: args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) lr_scheduler = get_scheduler( name=args.lr_scheduler_type, optimizer=optimizer, num_warmup_steps=args.num_warmup_steps, num_training_steps=args.max_train_steps, ) # Get the metric function if args.task_name is not None: metric = load_metric("glue", args.task_name) # Train! 
total_batch_size = args.per_device_train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps logger.info("***** Running training *****") logger.info(f" Num examples = {len(train_dataset)}") logger.info(f" Num Epochs = {args.num_train_epochs}") logger.info(f" Instantaneous batch size per device = {args.per_device_train_batch_size}") logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") logger.info(f" Total optimization steps = {args.max_train_steps}") # Only show the progress bar once on each machine. progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) completed_steps = 0 for epoch in range(args.num_train_epochs): model.train() for step, batch in enumerate(train_dataloader): outputs = model(**batch) loss = outputs.loss loss = loss / args.gradient_accumulation_steps accelerator.backward(loss) if step % args.gradient_accumulation_steps == 0 or step == len(train_dataloader) - 1: optimizer.step() lr_scheduler.step() optimizer.zero_grad() progress_bar.update(1) completed_steps += 1 if completed_steps >= args.max_train_steps: break model.eval() for step, batch in enumerate(eval_dataloader): outputs = model(**batch) predictions = outputs.logits.argmax(dim=-1) metric.add_batch( predictions=accelerator.gather(predictions), references=accelerator.gather(batch["labels"]), ) eval_metric = metric.compute() logger.info(f"epoch {epoch}: {eval_metric}") if args.output_dir is not None: accelerator.wait_for_everyone() unwrapped_model = accelerator.unwrap_model(model) unwrapped_model.save_pretrained(args.output_dir, save_function=accelerator.save) if args.task_name == "mnli": # Final evaluation on mismatched validation set eval_dataset = processed_datasets["validation_mismatched"] eval_dataloader = DataLoader( eval_dataset, collate_fn=data_collator, batch_size=args.per_device_eval_batch_size ) eval_dataloader = accelerator.prepare(eval_dataloader) model.eval() for step, batch in enumerate(eval_dataloader): outputs = model(**batch) predictions = outputs.logits.argmax(dim=-1) metric.add_batch( predictions=accelerator.gather(predictions), references=accelerator.gather(batch["labels"]), ) eval_metric = metric.compute() logger.info(f"mnli-mm: {eval_metric}") if __name__ == "__main__": main()
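One thing to keep in mind when reusing the model trained by this script: `save_pretrained` writes the model weights and config to `--output_dir`, but the tokenizer is not saved, so reload it from the original checkpoint. A minimal sketch, where the paths and sentences are examples only:

```python
# Reloading a model fine-tuned by the script above for inference on a sentence pair.
# "out_dir" is whatever --output_dir was; the tokenizer is reloaded from the original
# --model_name_or_path because the script only saves the model weights and config.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

out_dir = "out_dir"                  # example path
base_checkpoint = "bert-base-cased"  # example: the checkpoint that was fine-tuned

model = AutoModelForSequenceClassification.from_pretrained(out_dir)
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)

inputs = tokenizer(
    "A man is playing a guitar.",
    "A person is playing an instrument.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # index of the predicted label
```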
AdaMix/examples/text-classification/run_glue_no_trainer.py
# Compare the encodings produced by each slow tokenizer and its fast counterpart on the
# XNLI test+validation splits, and report how often they agree.
from collections import Counter

import datasets

import transformers
from transformers.convert_slow_tokenizer import SLOW_TO_FAST_CONVERTERS
from transformers.utils import logging


logging.set_verbosity_info()

TOKENIZER_CLASSES = {
    name: (getattr(transformers, name), getattr(transformers, name + "Fast")) for name in SLOW_TO_FAST_CONVERTERS
}

dataset = datasets.load_dataset("xnli", split="test+validation")

total = 0
perfect = 0
imperfect = 0
wrong = 0


def check_diff(spm_diff, tok_diff, slow, fast):
    if spm_diff == list(reversed(tok_diff)):
        # AAA -> AA+A vs A+AA case.
        return True
    elif len(spm_diff) == len(tok_diff) and fast.decode(spm_diff) == fast.decode(tok_diff):
        # Second order OK
        # Barrich -> Barr + ich vs Bar + rich
        return True
    spm_reencoded = slow.encode(slow.decode(spm_diff))
    tok_reencoded = fast.encode(fast.decode(spm_diff))
    if spm_reencoded != spm_diff and spm_reencoded == tok_reencoded:
        # Type 3 error.
        # Snehagatha ->
        #     Sne, h, aga, th, a
        #     Sne, ha, gat, ha
        # Encoding the wrong with sp does not even recover what spm gave us
        # It fits tokenizer however...
        return True
    return False


def check_LTR_mark(line, idx, fast):
    enc = fast.encode_plus(line)[0]
    offsets = enc.offsets
    curr, prev = offsets[idx], offsets[idx - 1]
    if curr is not None and line[curr[0] : curr[1]] == "\u200f":
        return True
    if prev is not None and line[prev[0] : prev[1]] == "\u200f":
        return True


def check_details(line, spm_ids, tok_ids, slow, fast):
    # Encoding can be the same with same result AAA -> A + AA vs AA + A
    # We can check that we use at least exactly the same number of tokens.
    for i, (spm_id, tok_id) in enumerate(zip(spm_ids, tok_ids)):
        if spm_id != tok_id:
            break
    first = i
    for i, (spm_id, tok_id) in enumerate(zip(reversed(spm_ids), reversed(tok_ids))):
        if spm_id != tok_id:
            break
    last = len(spm_ids) - i

    spm_diff = spm_ids[first:last]
    tok_diff = tok_ids[first:last]

    if check_diff(spm_diff, tok_diff, slow, fast):
        return True

    if check_LTR_mark(line, first, fast):
        return True

    if last - first > 5:
        # We might have twice a single problem, attempt to subdivide the disjointed tokens into smaller problems
        spms = Counter(spm_ids[first:last])
        toks = Counter(tok_ids[first:last])
        removable_tokens = {spm_ for (spm_, si) in spms.items() if toks.get(spm_, 0) == si}
        min_width = 3
        for i in range(last - first - min_width):
            if all(spm_ids[first + i + j] in removable_tokens for j in range(min_width)):
                possible_matches = [
                    k
                    for k in range(last - first - min_width)
                    if tok_ids[first + k : first + k + min_width] == spm_ids[first + i : first + i + min_width]
                ]
                for j in possible_matches:
                    if check_diff(
                        spm_ids[first : first + i], tok_ids[first : first + j], slow, fast
                    ) and check_details(
                        line,
                        spm_ids[first + i : last],
                        tok_ids[first + j : last],
                        slow,
                        fast,
                    ):
                        return True

    print(f"Spm: {[fast.decode([spm_ids[i]]) for i in range(first, last)]}")
    try:
        print(f"Tok: {[fast.decode([tok_ids[i]]) for i in range(first, last)]}")
    except Exception:
        pass

    # Decoded context around the mismatch, kept for manual inspection.
    ok_start = fast.decode(spm_ids[:first])
    ok_end = fast.decode(spm_ids[last:])
    wrong = fast.decode(spm_ids[first:last])
    print()
    print(wrong)
    return False


def test_string(slow, fast, text):
    global perfect
    global imperfect
    global wrong
    global total

    slow_ids = slow.encode(text)
    fast_ids = fast.encode(text)

    skip_assert = False
    total += 1

    if slow_ids != fast_ids:
        if check_details(text, slow_ids, fast_ids, slow, fast):
            skip_assert = True
            imperfect += 1
        else:
            wrong += 1
    else:
        perfect += 1

    if total % 10000 == 0:
        print(f"({perfect} / {imperfect} / {wrong} ----- {perfect + imperfect + wrong})")

    if skip_assert:
        return
    assert (
        slow_ids == fast_ids
    ), f"line {text} : \n\n{slow_ids}\n{fast_ids}\n\n{slow.tokenize(text)}\n{fast.tokenize(text)}"


def test_tokenizer(slow, fast):
    for i in range(len(dataset)):
        # premise, all languages
        for text in dataset[i]["premise"].values():
            test_string(slow, fast, text)

        # hypothesis, all languages
        for text in dataset[i]["hypothesis"]["translation"]:
            test_string(slow, fast, text)


if __name__ == "__main__":
    for name, (slow_class, fast_class) in TOKENIZER_CLASSES.items():
        checkpoint_names = list(slow_class.max_model_input_sizes.keys())
        for checkpoint in checkpoint_names:
            imperfect = 0
            perfect = 0
            wrong = 0
            total = 0

            print(f"========================== Checking {name}: {checkpoint} ==========================")
            slow = slow_class.from_pretrained(checkpoint, force_download=True)
            fast = fast_class.from_pretrained(checkpoint, force_download=True)
            test_tokenizer(slow, fast)
            print(f"Accuracy {perfect * 100 / total:.2f}")
AdaMix/scripts/check_tokenizers.py/0
{ "file_path": "AdaMix/scripts/check_tokenizers.py", "repo_id": "AdaMix", "token_count": 2570 }
49
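The script above loops this slow-vs-fast comparison over every checkpoint of every convertible tokenizer. As a minimal, self-contained sketch of the core check it performs (the checkpoint name and sample sentence below are illustrative choices, not taken from the script):

```python
# Compare the ids produced by a slow tokenizer and its fast counterpart on one string.
# The checkpoint name and sample text are illustrative, not part of the script above.
from transformers import BertTokenizer, BertTokenizerFast

checkpoint = "bert-base-uncased"
slow = BertTokenizer.from_pretrained(checkpoint)
fast = BertTokenizerFast.from_pretrained(checkpoint)

text = "Tokenizers should agree on this sentence."
slow_ids = slow.encode(text)
fast_ids = fast.encode(text)

# The script counts a "perfect" match when both encodings are identical,
# and falls back to `check_details` when they are not.
print(slow_ids == fast_ids)
print(slow.tokenize(text), fast.tokenize(text))
```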
<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

Set up transformers following the instructions in README.md (I would fork first):

```bash
git clone git@github.com:huggingface/transformers.git
cd transformers
pip install -e .
pip install pandas GitPython wget
```

Get the required metadata:

```bash
curl https://cdn-datasets.huggingface.co/language_codes/language-codes-3b2.csv > language-codes-3b2.csv
curl https://cdn-datasets.huggingface.co/language_codes/iso-639-3.csv > iso-639-3.csv
```

Install the Tatoeba-Challenge repo inside transformers:

```bash
git clone git@github.com:Helsinki-NLP/Tatoeba-Challenge.git
```

To convert a few models, call the conversion script from the command line:

```bash
python src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py --models heb-eng eng-heb --save_dir converted
```

To convert many models, pass your list of Tatoeba model names to `resolver.convert_models` in a Python client or script:

```python
from transformers.convert_marian_tatoeba_to_pytorch import TatoebaConverter

resolver = TatoebaConverter(save_dir='converted')
resolver.convert_models(['heb-eng', 'eng-heb'])
```

### Upload converted models

Since version v3.5.0, the model sharing workflow has switched to a git-based system. Refer to the [model sharing doc](https://huggingface.co/transformers/master/model_sharing.html#model-sharing-and-uploading) for more details.

To upload all converted models:

1. Install [git-lfs](https://git-lfs.github.com/).

2. Log in to `transformers-cli`:

   ```bash
   transformers-cli login
   ```

3. Run the `upload_models` script:

   ```bash
   ./scripts/tatoeba/upload_models.sh
   ```

### Modifications

- To change the naming logic, change the code near `os.rename`. The model card creation code may also need to change.
- To change the model card content, modify `TatoebaCodeResolver.write_model_card`.
AdaMix/scripts/tatoeba/README.md/0
{ "file_path": "AdaMix/scripts/tatoeba/README.md", "repo_id": "AdaMix", "token_count": 756 }
50
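The README above converts models either one at a time from the command line or in bulk through `TatoebaConverter`. A possible sketch of a bulk run with per-model error handling (the import path and converter API are taken from the README; the extra model name and the try/except loop are illustrative additions, not part of the documented workflow):

```python
# Batch-convert several Tatoeba models, skipping any that fail to convert.
# Import path and API follow the README above; "fra-eng" is a hypothetical extra entry.
from transformers.convert_marian_tatoeba_to_pytorch import TatoebaConverter

model_names = ["heb-eng", "eng-heb", "fra-eng"]
resolver = TatoebaConverter(save_dir="converted")

for name in model_names:
    try:
        resolver.convert_models([name])
    except Exception as exc:
        print(f"Skipping {name}: {exc}")
```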
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from argparse import ArgumentParser

from . import BaseTransformersCLICommand


def download_command_factory(args):
    return DownloadCommand(args.model, args.cache_dir, args.force)


class DownloadCommand(BaseTransformersCLICommand):
    @staticmethod
    def register_subcommand(parser: ArgumentParser):
        download_parser = parser.add_parser("download")
        download_parser.add_argument(
            "--cache-dir", type=str, default=None, help="Path to location to store the models"
        )
        download_parser.add_argument(
            "--force", action="store_true", help="Force the model to be downloaded even if it is already in cache-dir"
        )
        download_parser.add_argument("model", type=str, help="Name of the model to download")
        download_parser.set_defaults(func=download_command_factory)

    def __init__(self, model: str, cache: str, force: bool):
        self._model = model
        self._cache = cache
        self._force = force

    def run(self):
        from ..models.auto import AutoModel, AutoTokenizer

        # Instantiating the model and tokenizer is enough to populate the cache.
        AutoModel.from_pretrained(self._model, cache_dir=self._cache, force_download=self._force)
        AutoTokenizer.from_pretrained(self._model, cache_dir=self._cache, force_download=self._force)
AdaMix/src/transformers/commands/download.py/0
{ "file_path": "AdaMix/src/transformers/commands/download.py", "repo_id": "AdaMix", "token_count": 611 }
51
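As the `run()` method above shows, the `download` subcommand simply pre-fetches a model and its tokenizer into the local cache. A rough programmatic equivalent (the model name and cache path below are illustrative):

```python
# Pre-download a model and tokenizer into a local cache directory,
# mirroring what DownloadCommand.run() does. Names are illustrative.
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
cache_dir = "./model-cache"

AutoModel.from_pretrained(model_name, cache_dir=cache_dir, force_download=False)
AutoTokenizer.from_pretrained(model_name, cache_dir=cache_dir, force_download=False)
```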
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os from functools import partial from multiprocessing import Pool, cpu_count import numpy as np from tqdm import tqdm from ...file_utils import is_tf_available, is_torch_available from ...models.bert.tokenization_bert import whitespace_tokenize from ...tokenization_utils_base import BatchEncoding, PreTrainedTokenizerBase, TruncationStrategy from ...utils import logging from .utils import DataProcessor # Store the tokenizers which insert 2 separators tokens MULTI_SEP_TOKENS_TOKENIZERS_SET = {"roberta", "camembert", "bart", "mpnet"} if is_torch_available(): import torch from torch.utils.data import TensorDataset if is_tf_available(): import tensorflow as tf logger = logging.get_logger(__name__) def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer, orig_answer_text): """Returns tokenized answer spans that better match the annotated answer.""" tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text)) for new_start in range(input_start, input_end + 1): for new_end in range(input_end, new_start - 1, -1): text_span = " ".join(doc_tokens[new_start : (new_end + 1)]) if text_span == tok_answer_text: return (new_start, new_end) return (input_start, input_end) def _check_is_max_context(doc_spans, cur_span_index, position): """Check if this is the 'max context' doc span for the token.""" best_score = None best_span_index = None for (span_index, doc_span) in enumerate(doc_spans): end = doc_span.start + doc_span.length - 1 if position < doc_span.start: continue if position > end: continue num_left_context = position - doc_span.start num_right_context = end - position score = min(num_left_context, num_right_context) + 0.01 * doc_span.length if best_score is None or score > best_score: best_score = score best_span_index = span_index return cur_span_index == best_span_index def _new_check_is_max_context(doc_spans, cur_span_index, position): """Check if this is the 'max context' doc span for the token.""" # if len(doc_spans) == 1: # return True best_score = None best_span_index = None for (span_index, doc_span) in enumerate(doc_spans): end = doc_span["start"] + doc_span["length"] - 1 if position < doc_span["start"]: continue if position > end: continue num_left_context = position - doc_span["start"] num_right_context = end - position score = min(num_left_context, num_right_context) + 0.01 * doc_span["length"] if best_score is None or score > best_score: best_score = score best_span_index = span_index return cur_span_index == best_span_index def _is_whitespace(c): if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F: return True return False def squad_convert_example_to_features( example, max_seq_length, doc_stride, max_query_length, padding_strategy, is_training ): features = [] if is_training and not example.is_impossible: # Get start and end position start_position = example.start_position end_position = example.end_position # If the answer cannot be 
found in the text, then skip this example. actual_text = " ".join(example.doc_tokens[start_position : (end_position + 1)]) cleaned_answer_text = " ".join(whitespace_tokenize(example.answer_text)) if actual_text.find(cleaned_answer_text) == -1: logger.warning("Could not find answer: '%s' vs. '%s'", actual_text, cleaned_answer_text) return [] tok_to_orig_index = [] orig_to_tok_index = [] all_doc_tokens = [] for (i, token) in enumerate(example.doc_tokens): orig_to_tok_index.append(len(all_doc_tokens)) if tokenizer.__class__.__name__ in [ "RobertaTokenizer", "LongformerTokenizer", "BartTokenizer", "RobertaTokenizerFast", "LongformerTokenizerFast", "BartTokenizerFast", ]: sub_tokens = tokenizer.tokenize(token, add_prefix_space=True) else: sub_tokens = tokenizer.tokenize(token) for sub_token in sub_tokens: tok_to_orig_index.append(i) all_doc_tokens.append(sub_token) if is_training and not example.is_impossible: tok_start_position = orig_to_tok_index[example.start_position] if example.end_position < len(example.doc_tokens) - 1: tok_end_position = orig_to_tok_index[example.end_position + 1] - 1 else: tok_end_position = len(all_doc_tokens) - 1 (tok_start_position, tok_end_position) = _improve_answer_span( all_doc_tokens, tok_start_position, tok_end_position, tokenizer, example.answer_text ) spans = [] truncated_query = tokenizer.encode( example.question_text, add_special_tokens=False, truncation=True, max_length=max_query_length ) # Tokenizers who insert 2 SEP tokens in-between <context> & <question> need to have special handling # in the way they compute mask of added tokens. tokenizer_type = type(tokenizer).__name__.replace("Tokenizer", "").lower() sequence_added_tokens = ( tokenizer.model_max_length - tokenizer.max_len_single_sentence + 1 if tokenizer_type in MULTI_SEP_TOKENS_TOKENIZERS_SET else tokenizer.model_max_length - tokenizer.max_len_single_sentence ) sequence_pair_added_tokens = tokenizer.model_max_length - tokenizer.max_len_sentences_pair span_doc_tokens = all_doc_tokens while len(spans) * doc_stride < len(all_doc_tokens): # Define the side we want to truncate / pad and the text/pair sorting if tokenizer.padding_side == "right": texts = truncated_query pairs = span_doc_tokens truncation = TruncationStrategy.ONLY_SECOND.value else: texts = span_doc_tokens pairs = truncated_query truncation = TruncationStrategy.ONLY_FIRST.value encoded_dict = tokenizer.encode_plus( # TODO(thom) update this logic texts, pairs, truncation=truncation, padding=padding_strategy, max_length=max_seq_length, return_overflowing_tokens=True, stride=max_seq_length - doc_stride - len(truncated_query) - sequence_pair_added_tokens, return_token_type_ids=True, ) paragraph_len = min( len(all_doc_tokens) - len(spans) * doc_stride, max_seq_length - len(truncated_query) - sequence_pair_added_tokens, ) if tokenizer.pad_token_id in encoded_dict["input_ids"]: if tokenizer.padding_side == "right": non_padded_ids = encoded_dict["input_ids"][: encoded_dict["input_ids"].index(tokenizer.pad_token_id)] else: last_padding_id_position = ( len(encoded_dict["input_ids"]) - 1 - encoded_dict["input_ids"][::-1].index(tokenizer.pad_token_id) ) non_padded_ids = encoded_dict["input_ids"][last_padding_id_position + 1 :] else: non_padded_ids = encoded_dict["input_ids"] tokens = tokenizer.convert_ids_to_tokens(non_padded_ids) token_to_orig_map = {} for i in range(paragraph_len): index = len(truncated_query) + sequence_added_tokens + i if tokenizer.padding_side == "right" else i token_to_orig_map[index] = tok_to_orig_index[len(spans) * 
doc_stride + i] encoded_dict["paragraph_len"] = paragraph_len encoded_dict["tokens"] = tokens encoded_dict["token_to_orig_map"] = token_to_orig_map encoded_dict["truncated_query_with_special_tokens_length"] = len(truncated_query) + sequence_added_tokens encoded_dict["token_is_max_context"] = {} encoded_dict["start"] = len(spans) * doc_stride encoded_dict["length"] = paragraph_len spans.append(encoded_dict) if "overflowing_tokens" not in encoded_dict or ( "overflowing_tokens" in encoded_dict and len(encoded_dict["overflowing_tokens"]) == 0 ): break span_doc_tokens = encoded_dict["overflowing_tokens"] for doc_span_index in range(len(spans)): for j in range(spans[doc_span_index]["paragraph_len"]): is_max_context = _new_check_is_max_context(spans, doc_span_index, doc_span_index * doc_stride + j) index = ( j if tokenizer.padding_side == "left" else spans[doc_span_index]["truncated_query_with_special_tokens_length"] + j ) spans[doc_span_index]["token_is_max_context"][index] = is_max_context for span in spans: # Identify the position of the CLS token cls_index = span["input_ids"].index(tokenizer.cls_token_id) # p_mask: mask with 1 for token than cannot be in the answer (0 for token which can be in an answer) # Original TF implem also keep the classification token (set to 0) p_mask = np.ones_like(span["token_type_ids"]) if tokenizer.padding_side == "right": p_mask[len(truncated_query) + sequence_added_tokens :] = 0 else: p_mask[-len(span["tokens"]) : -(len(truncated_query) + sequence_added_tokens)] = 0 pad_token_indices = np.where(span["input_ids"] == tokenizer.pad_token_id) special_token_indices = np.asarray( tokenizer.get_special_tokens_mask(span["input_ids"], already_has_special_tokens=True) ).nonzero() p_mask[pad_token_indices] = 1 p_mask[special_token_indices] = 1 # Set the cls index to 0: the CLS index can be used for impossible answers p_mask[cls_index] = 0 span_is_impossible = example.is_impossible start_position = 0 end_position = 0 if is_training and not span_is_impossible: # For training, if our document chunk does not contain an annotation # we throw it out, since there is nothing to predict. doc_start = span["start"] doc_end = span["start"] + span["length"] - 1 out_of_span = False if not (tok_start_position >= doc_start and tok_end_position <= doc_end): out_of_span = True if out_of_span: start_position = cls_index end_position = cls_index span_is_impossible = True else: if tokenizer.padding_side == "left": doc_offset = 0 else: doc_offset = len(truncated_query) + sequence_added_tokens start_position = tok_start_position - doc_start + doc_offset end_position = tok_end_position - doc_start + doc_offset features.append( SquadFeatures( span["input_ids"], span["attention_mask"], span["token_type_ids"], cls_index, p_mask.tolist(), example_index=0, # Can not set unique_id and example_index here. They will be set after multiple processing. 
unique_id=0, paragraph_len=span["paragraph_len"], token_is_max_context=span["token_is_max_context"], tokens=span["tokens"], token_to_orig_map=span["token_to_orig_map"], start_position=start_position, end_position=end_position, is_impossible=span_is_impossible, qas_id=example.qas_id, ) ) return features def squad_convert_example_to_features_init(tokenizer_for_convert: PreTrainedTokenizerBase): global tokenizer tokenizer = tokenizer_for_convert def squad_convert_examples_to_features( examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, padding_strategy="max_length", return_dataset=False, threads=1, tqdm_enabled=True, ): """ Converts a list of examples into a list of features that can be directly given as input to a model. It is model-dependant and takes advantage of many of the tokenizer's features to create the model's inputs. Args: examples: list of :class:`~transformers.data.processors.squad.SquadExample` tokenizer: an instance of a child of :class:`~transformers.PreTrainedTokenizer` max_seq_length: The maximum sequence length of the inputs. doc_stride: The stride used when the context is too large and is split across several features. max_query_length: The maximum length of the query. is_training: whether to create features for model evaluation or model training. padding_strategy: Default to "max_length". Which padding strategy to use return_dataset: Default False. Either 'pt' or 'tf'. if 'pt': returns a torch.data.TensorDataset, if 'tf': returns a tf.data.Dataset threads: multiple processing threads. Returns: list of :class:`~transformers.data.processors.squad.SquadFeatures` Example:: processor = SquadV2Processor() examples = processor.get_dev_examples(data_dir) features = squad_convert_examples_to_features( examples=examples, tokenizer=tokenizer, max_seq_length=args.max_seq_length, doc_stride=args.doc_stride, max_query_length=args.max_query_length, is_training=not evaluate, ) """ # Defining helper methods features = [] threads = min(threads, cpu_count()) with Pool(threads, initializer=squad_convert_example_to_features_init, initargs=(tokenizer,)) as p: annotate_ = partial( squad_convert_example_to_features, max_seq_length=max_seq_length, doc_stride=doc_stride, max_query_length=max_query_length, padding_strategy=padding_strategy, is_training=is_training, ) features = list( tqdm( p.imap(annotate_, examples, chunksize=32), total=len(examples), desc="convert squad examples to features", disable=not tqdm_enabled, ) ) new_features = [] unique_id = 1000000000 example_index = 0 for example_features in tqdm( features, total=len(features), desc="add example index and unique id", disable=not tqdm_enabled ): if not example_features: continue for example_feature in example_features: example_feature.example_index = example_index example_feature.unique_id = unique_id new_features.append(example_feature) unique_id += 1 example_index += 1 features = new_features del new_features if return_dataset == "pt": if not is_torch_available(): raise RuntimeError("PyTorch must be installed to return a PyTorch dataset.") # Convert to Tensors and build dataset all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) all_attention_masks = torch.tensor([f.attention_mask for f in features], dtype=torch.long) all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) all_cls_index = torch.tensor([f.cls_index for f in features], dtype=torch.long) all_p_mask = torch.tensor([f.p_mask for f in features], dtype=torch.float) all_is_impossible = 
torch.tensor([f.is_impossible for f in features], dtype=torch.float) if not is_training: all_feature_index = torch.arange(all_input_ids.size(0), dtype=torch.long) dataset = TensorDataset( all_input_ids, all_attention_masks, all_token_type_ids, all_feature_index, all_cls_index, all_p_mask ) else: all_start_positions = torch.tensor([f.start_position for f in features], dtype=torch.long) all_end_positions = torch.tensor([f.end_position for f in features], dtype=torch.long) dataset = TensorDataset( all_input_ids, all_attention_masks, all_token_type_ids, all_start_positions, all_end_positions, all_cls_index, all_p_mask, all_is_impossible, ) return features, dataset elif return_dataset == "tf": if not is_tf_available(): raise RuntimeError("TensorFlow must be installed to return a TensorFlow dataset.") def gen(): for i, ex in enumerate(features): if ex.token_type_ids is None: yield ( { "input_ids": ex.input_ids, "attention_mask": ex.attention_mask, "feature_index": i, "qas_id": ex.qas_id, }, { "start_positions": ex.start_position, "end_positions": ex.end_position, "cls_index": ex.cls_index, "p_mask": ex.p_mask, "is_impossible": ex.is_impossible, }, ) else: yield ( { "input_ids": ex.input_ids, "attention_mask": ex.attention_mask, "token_type_ids": ex.token_type_ids, "feature_index": i, "qas_id": ex.qas_id, }, { "start_positions": ex.start_position, "end_positions": ex.end_position, "cls_index": ex.cls_index, "p_mask": ex.p_mask, "is_impossible": ex.is_impossible, }, ) # Why have we split the batch into a tuple? PyTorch just has a list of tensors. if "token_type_ids" in tokenizer.model_input_names: train_types = ( { "input_ids": tf.int32, "attention_mask": tf.int32, "token_type_ids": tf.int32, "feature_index": tf.int64, "qas_id": tf.string, }, { "start_positions": tf.int64, "end_positions": tf.int64, "cls_index": tf.int64, "p_mask": tf.int32, "is_impossible": tf.int32, }, ) train_shapes = ( { "input_ids": tf.TensorShape([None]), "attention_mask": tf.TensorShape([None]), "token_type_ids": tf.TensorShape([None]), "feature_index": tf.TensorShape([]), "qas_id": tf.TensorShape([]), }, { "start_positions": tf.TensorShape([]), "end_positions": tf.TensorShape([]), "cls_index": tf.TensorShape([]), "p_mask": tf.TensorShape([None]), "is_impossible": tf.TensorShape([]), }, ) else: train_types = ( {"input_ids": tf.int32, "attention_mask": tf.int32, "feature_index": tf.int64, "qas_id": tf.string}, { "start_positions": tf.int64, "end_positions": tf.int64, "cls_index": tf.int64, "p_mask": tf.int32, "is_impossible": tf.int32, }, ) train_shapes = ( { "input_ids": tf.TensorShape([None]), "attention_mask": tf.TensorShape([None]), "feature_index": tf.TensorShape([]), "qas_id": tf.TensorShape([]), }, { "start_positions": tf.TensorShape([]), "end_positions": tf.TensorShape([]), "cls_index": tf.TensorShape([]), "p_mask": tf.TensorShape([None]), "is_impossible": tf.TensorShape([]), }, ) return tf.data.Dataset.from_generator(gen, train_types, train_shapes) else: return features class SquadProcessor(DataProcessor): """ Processor for the SQuAD data set. overridden by SquadV1Processor and SquadV2Processor, used by the version 1.1 and version 2.0 of SQuAD, respectively. 
""" train_file = None dev_file = None def _get_example_from_tensor_dict(self, tensor_dict, evaluate=False): if not evaluate: answer = tensor_dict["answers"]["text"][0].numpy().decode("utf-8") answer_start = tensor_dict["answers"]["answer_start"][0].numpy() answers = [] else: answers = [ {"answer_start": start.numpy(), "text": text.numpy().decode("utf-8")} for start, text in zip(tensor_dict["answers"]["answer_start"], tensor_dict["answers"]["text"]) ] answer = None answer_start = None return SquadExample( qas_id=tensor_dict["id"].numpy().decode("utf-8"), question_text=tensor_dict["question"].numpy().decode("utf-8"), context_text=tensor_dict["context"].numpy().decode("utf-8"), answer_text=answer, start_position_character=answer_start, title=tensor_dict["title"].numpy().decode("utf-8"), answers=answers, ) def get_examples_from_dataset(self, dataset, evaluate=False): """ Creates a list of :class:`~transformers.data.processors.squad.SquadExample` using a TFDS dataset. Args: dataset: The tfds dataset loaded from `tensorflow_datasets.load("squad")` evaluate: Boolean specifying if in evaluation mode or in training mode Returns: List of SquadExample Examples:: >>> import tensorflow_datasets as tfds >>> dataset = tfds.load("squad") >>> training_examples = get_examples_from_dataset(dataset, evaluate=False) >>> evaluation_examples = get_examples_from_dataset(dataset, evaluate=True) """ if evaluate: dataset = dataset["validation"] else: dataset = dataset["train"] examples = [] for tensor_dict in tqdm(dataset): examples.append(self._get_example_from_tensor_dict(tensor_dict, evaluate=evaluate)) return examples def get_train_examples(self, data_dir, filename=None): """ Returns the training examples from the data directory. Args: data_dir: Directory containing the data files used for training and evaluating. filename: None by default, specify this if the training file has a different name than the original one which is `train-v1.1.json` and `train-v2.0.json` for squad versions 1.1 and 2.0 respectively. """ if data_dir is None: data_dir = "" if self.train_file is None: raise ValueError("SquadProcessor should be instantiated via SquadV1Processor or SquadV2Processor") with open( os.path.join(data_dir, self.train_file if filename is None else filename), "r", encoding="utf-8" ) as reader: input_data = json.load(reader)["data"] return self._create_examples(input_data, "train") def get_dev_examples(self, data_dir, filename=None): """ Returns the evaluation example from the data directory. Args: data_dir: Directory containing the data files used for training and evaluating. filename: None by default, specify this if the evaluation file has a different name than the original one which is `dev-v1.1.json` and `dev-v2.0.json` for squad versions 1.1 and 2.0 respectively. 
""" if data_dir is None: data_dir = "" if self.dev_file is None: raise ValueError("SquadProcessor should be instantiated via SquadV1Processor or SquadV2Processor") with open( os.path.join(data_dir, self.dev_file if filename is None else filename), "r", encoding="utf-8" ) as reader: input_data = json.load(reader)["data"] return self._create_examples(input_data, "dev") def _create_examples(self, input_data, set_type): is_training = set_type == "train" examples = [] for entry in tqdm(input_data): title = entry["title"] for paragraph in entry["paragraphs"]: context_text = paragraph["context"] for qa in paragraph["qas"]: qas_id = qa["id"] question_text = qa["question"] start_position_character = None answer_text = None answers = [] is_impossible = qa.get("is_impossible", False) if not is_impossible: if is_training: answer = qa["answers"][0] answer_text = answer["text"] start_position_character = answer["answer_start"] else: answers = qa["answers"] example = SquadExample( qas_id=qas_id, question_text=question_text, context_text=context_text, answer_text=answer_text, start_position_character=start_position_character, title=title, is_impossible=is_impossible, answers=answers, ) examples.append(example) return examples class SquadV1Processor(SquadProcessor): train_file = "train-v1.1.json" dev_file = "dev-v1.1.json" class SquadV2Processor(SquadProcessor): train_file = "train-v2.0.json" dev_file = "dev-v2.0.json" class SquadExample: """ A single training/test example for the Squad dataset, as loaded from disk. Args: qas_id: The example's unique identifier question_text: The question string context_text: The context string answer_text: The answer string start_position_character: The character position of the start of the answer title: The title of the example answers: None by default, this is used during evaluation. Holds answers as well as their start positions. is_impossible: False by default, set to True if the example has no possible answer. """ def __init__( self, qas_id, question_text, context_text, answer_text, start_position_character, title, answers=[], is_impossible=False, ): self.qas_id = qas_id self.question_text = question_text self.context_text = context_text self.answer_text = answer_text self.title = title self.is_impossible = is_impossible self.answers = answers self.start_position, self.end_position = 0, 0 doc_tokens = [] char_to_word_offset = [] prev_is_whitespace = True # Split on whitespace so that different tokens may be attributed to their original position. for c in self.context_text: if _is_whitespace(c): prev_is_whitespace = True else: if prev_is_whitespace: doc_tokens.append(c) else: doc_tokens[-1] += c prev_is_whitespace = False char_to_word_offset.append(len(doc_tokens) - 1) self.doc_tokens = doc_tokens self.char_to_word_offset = char_to_word_offset # Start and end positions only has a value during evaluation. if start_position_character is not None and not is_impossible: self.start_position = char_to_word_offset[start_position_character] self.end_position = char_to_word_offset[ min(start_position_character + len(answer_text) - 1, len(char_to_word_offset) - 1) ] class SquadFeatures: """ Single squad example features to be fed to a model. Those features are model-specific and can be crafted from :class:`~transformers.data.processors.squad.SquadExample` using the :method:`~transformers.data.processors.squad.squad_convert_examples_to_features` method. Args: input_ids: Indices of input sequence tokens in the vocabulary. 
attention_mask: Mask to avoid performing attention on padding token indices. token_type_ids: Segment token indices to indicate first and second portions of the inputs. cls_index: the index of the CLS token. p_mask: Mask identifying tokens that can be answers vs. tokens that cannot. Mask with 1 for tokens than cannot be in the answer and 0 for token that can be in an answer example_index: the index of the example unique_id: The unique Feature identifier paragraph_len: The length of the context token_is_max_context: List of booleans identifying which tokens have their maximum context in this feature object. If a token does not have their maximum context in this feature object, it means that another feature object has more information related to that token and should be prioritized over this feature for that token. tokens: list of tokens corresponding to the input ids token_to_orig_map: mapping between the tokens and the original text, needed in order to identify the answer. start_position: start of the answer token index end_position: end of the answer token index encoding: optionally store the BatchEncoding with the fast-tokenizer alignment methods. """ def __init__( self, input_ids, attention_mask, token_type_ids, cls_index, p_mask, example_index, unique_id, paragraph_len, token_is_max_context, tokens, token_to_orig_map, start_position, end_position, is_impossible, qas_id: str = None, encoding: BatchEncoding = None, ): self.input_ids = input_ids self.attention_mask = attention_mask self.token_type_ids = token_type_ids self.cls_index = cls_index self.p_mask = p_mask self.example_index = example_index self.unique_id = unique_id self.paragraph_len = paragraph_len self.token_is_max_context = token_is_max_context self.tokens = tokens self.token_to_orig_map = token_to_orig_map self.start_position = start_position self.end_position = end_position self.is_impossible = is_impossible self.qas_id = qas_id self.encoding = encoding class SquadResult: """ Constructs a SquadResult which can be used to evaluate a model's output on the SQuAD dataset. Args: unique_id: The unique identifier corresponding to that example. start_logits: The logits corresponding to the start of the answer end_logits: The logits corresponding to the end of the answer """ def __init__(self, unique_id, start_logits, end_logits, start_top_index=None, end_top_index=None, cls_logits=None): self.start_logits = start_logits self.end_logits = end_logits self.unique_id = unique_id if start_top_index: self.start_top_index = start_top_index self.end_top_index = end_top_index self.cls_logits = cls_logits
AdaMix/src/transformers/data/processors/squad.py/0
{ "file_path": "AdaMix/src/transformers/data/processors/squad.py", "repo_id": "AdaMix", "token_count": 15633 }
52
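Tying the pieces of the processor module together, here is a sketch of building a PyTorch evaluation dataset with `SquadV1Processor` and `squad_convert_examples_to_features`. It assumes a local `squad_data/` directory containing `dev-v1.1.json` and a BERT tokenizer; both are illustrative choices, and the length/stride values are common defaults rather than anything mandated by the module:

```python
# Build SQuAD v1.1 evaluation features and a TensorDataset, then batch them.
from torch.utils.data import DataLoader

from transformers import BertTokenizer
from transformers.data.processors.squad import SquadV1Processor, squad_convert_examples_to_features

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
processor = SquadV1Processor()
examples = processor.get_dev_examples("squad_data")  # hypothetical directory holding dev-v1.1.json

features, dataset = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=False,
    return_dataset="pt",
)
loader = DataLoader(dataset, batch_size=8)
```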
# Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Integrations with other Python libraries. """ import importlib.util import io import json import numbers import os import re import tempfile from pathlib import Path from types import SimpleNamespace from .trainer_utils import SchedulerType from .utils import logging from .utils.versions import require_version logger = logging.get_logger(__name__) # comet_ml requires to be imported before any ML frameworks _has_comet = importlib.util.find_spec("comet_ml") is not None and os.getenv("COMET_MODE", "").upper() != "DISABLED" if _has_comet: try: import comet_ml # noqa: F401 if hasattr(comet_ml, "config") and comet_ml.config.get_config("comet.api_key"): _has_comet = True else: if os.getenv("COMET_MODE", "").upper() != "DISABLED": logger.warning("comet_ml is installed but `COMET_API_KEY` is not set.") _has_comet = False except (ImportError, ValueError): _has_comet = False from .file_utils import ENV_VARS_TRUE_VALUES, is_torch_tpu_available # noqa: E402 from .trainer_callback import TrainerCallback # noqa: E402 from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun, IntervalStrategy # noqa: E402 # Integration functions: def is_wandb_available(): # any value of WANDB_DISABLED disables wandb if os.getenv("WANDB_DISABLED", "").upper() in ENV_VARS_TRUE_VALUES: logger.warn( "Using the `WAND_DISABLED` environment variable is deprecated and will be removed in v5. Use the " "--report_to flag to control the integrations used for logging result (for instance --report_to none)." 
) return False return importlib.util.find_spec("wandb") is not None def is_comet_available(): return _has_comet def is_tensorboard_available(): return importlib.util.find_spec("tensorboard") is not None or importlib.util.find_spec("tensorboardX") is not None def is_optuna_available(): return importlib.util.find_spec("optuna") is not None def is_ray_available(): return importlib.util.find_spec("ray") is not None def is_ray_tune_available(): if not is_ray_available(): return False return importlib.util.find_spec("ray.tune") is not None def is_azureml_available(): if importlib.util.find_spec("azureml") is None: return False if importlib.util.find_spec("azureml.core") is None: return False return importlib.util.find_spec("azureml.core.run") is not None def is_mlflow_available(): return importlib.util.find_spec("mlflow") is not None def is_fairscale_available(): return importlib.util.find_spec("fairscale") is not None def is_deepspeed_available(): return importlib.util.find_spec("deepspeed") is not None def hp_params(trial): if is_optuna_available(): import optuna if isinstance(trial, optuna.Trial): return trial.params if is_ray_tune_available(): if isinstance(trial, dict): return trial raise RuntimeError(f"Unknown type for trial {trial.__class__}") def default_hp_search_backend(): if is_optuna_available(): return "optuna" elif is_ray_tune_available(): return "ray" def run_hp_search_optuna(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: import optuna def _objective(trial, checkpoint_dir=None): checkpoint = None if checkpoint_dir: for subdir in os.listdir(checkpoint_dir): if subdir.startswith(PREFIX_CHECKPOINT_DIR): checkpoint = os.path.join(checkpoint_dir, subdir) trainer.objective = None trainer.train(resume_from_checkpoint=checkpoint, trial=trial) # If there hasn't been any evaluation during the training loop. if getattr(trainer, "objective", None) is None: metrics = trainer.evaluate() trainer.objective = trainer.compute_objective(metrics) return trainer.objective timeout = kwargs.pop("timeout", None) n_jobs = kwargs.pop("n_jobs", 1) study = optuna.create_study(direction=direction, **kwargs) study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs) best_trial = study.best_trial return BestRun(str(best_trial.number), best_trial.value, best_trial.params) def run_hp_search_ray(trainer, n_trials: int, direction: str, **kwargs) -> BestRun: import ray def _objective(trial, local_trainer, checkpoint_dir=None): checkpoint = None if checkpoint_dir: for subdir in os.listdir(checkpoint_dir): if subdir.startswith(PREFIX_CHECKPOINT_DIR): checkpoint = os.path.join(checkpoint_dir, subdir) local_trainer.objective = None local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial) # If there hasn't been any evaluation during the training loop. if getattr(local_trainer, "objective", None) is None: metrics = local_trainer.evaluate() local_trainer.objective = local_trainer.compute_objective(metrics) local_trainer._tune_save_checkpoint() ray.tune.report(objective=local_trainer.objective, **metrics, done=True) # The model and TensorBoard writer do not pickle so we have to remove them (if they exists) # while doing the ray hp search. _tb_writer = trainer.pop_callback(TensorBoardCallback) trainer.model = None # Setup default `resources_per_trial`. if "resources_per_trial" not in kwargs: # Default to 1 CPU and 1 GPU (if applicable) per trial. 
kwargs["resources_per_trial"] = {"cpu": 1} if trainer.args.n_gpu > 0: kwargs["resources_per_trial"]["gpu"] = 1 resource_msg = "1 CPU" + (" and 1 GPU" if trainer.args.n_gpu > 0 else "") logger.info( "No `resources_per_trial` arg was passed into " "`hyperparameter_search`. Setting it to a default value " f"of {resource_msg} for each trial." ) # Make sure each trainer only uses GPUs that were allocated per trial. gpus_per_trial = kwargs["resources_per_trial"].get("gpu", 0) trainer.args._n_gpu = gpus_per_trial # Setup default `progress_reporter`. if "progress_reporter" not in kwargs: from ray.tune import CLIReporter kwargs["progress_reporter"] = CLIReporter(metric_columns=["objective"]) if "keep_checkpoints_num" in kwargs and kwargs["keep_checkpoints_num"] > 0: # `keep_checkpoints_num=0` would disabled checkpointing trainer.use_tune_checkpoints = True if kwargs["keep_checkpoints_num"] > 1: logger.warning( f"Currently keeping {kwargs['keep_checkpoint_num']} checkpoints for each trial. " "Checkpoints are usually huge, " "consider setting `keep_checkpoints_num=1`." ) if "scheduler" in kwargs: from ray.tune.schedulers import ASHAScheduler, HyperBandForBOHB, MedianStoppingRule, PopulationBasedTraining # Check if checkpointing is enabled for PopulationBasedTraining if isinstance(kwargs["scheduler"], PopulationBasedTraining): if not trainer.use_tune_checkpoints: logger.warning( "You are using PopulationBasedTraining but you haven't enabled checkpointing. " "This means your trials will train from scratch everytime they are exploiting " "new configurations. Consider enabling checkpointing by passing " "`keep_checkpoints_num=1` as an additional argument to `Trainer.hyperparameter_search`." ) # Check for `do_eval` and `eval_during_training` for schedulers that require intermediate reporting. if isinstance( kwargs["scheduler"], (ASHAScheduler, MedianStoppingRule, HyperBandForBOHB, PopulationBasedTraining) ) and (not trainer.args.do_eval or trainer.args.evaluation_strategy == IntervalStrategy.NO): raise RuntimeError( "You are using {cls} as a scheduler but you haven't enabled evaluation during training. " "This means your trials will not report intermediate results to Ray Tune, and " "can thus not be stopped early or used to exploit other trials parameters. " "If this is what you want, do not use {cls}. 
If you would like to use {cls}, " "make sure you pass `do_eval=True` and `evaluation_strategy='steps'` in the " "Trainer `args`.".format(cls=type(kwargs["scheduler"]).__name__) ) analysis = ray.tune.run( ray.tune.with_parameters(_objective, local_trainer=trainer), config=trainer.hp_space(None), num_samples=n_trials, **kwargs, ) best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3]) best_run = BestRun(best_trial.trial_id, best_trial.last_result["objective"], best_trial.config) if _tb_writer is not None: trainer.add_callback(_tb_writer) return best_run def get_available_reporting_integrations(): integrations = [] if is_azureml_available(): integrations.append("azure_ml") if is_comet_available(): integrations.append("comet_ml") if is_mlflow_available(): integrations.append("mlflow") if is_tensorboard_available(): integrations.append("tensorboard") if is_wandb_available(): integrations.append("wandb") return integrations def rewrite_logs(d): new_d = {} eval_prefix = "eval_" eval_prefix_len = len(eval_prefix) for k, v in d.items(): if k.startswith(eval_prefix): new_d["eval/" + k[eval_prefix_len:]] = v else: new_d["train/" + k] = v return new_d def init_deepspeed(trainer, num_training_steps): """ Init DeepSpeed, after converting any relevant Trainer's args into DeepSpeed configuration Args: trainer: Trainer object num_training_steps: per single gpu Returns: model, optimizer, lr_scheduler """ import deepspeed require_version("deepspeed>0.3.10") args = trainer.args ds_config_file = args.deepspeed model = trainer.model optimizer, lr_scheduler = None, None with io.open(ds_config_file, "r", encoding="utf-8") as f: config = json.load(f) # The following code translates relevant trainer's cl args into the DS config # First to ensure that there is no mismatch between cl args values and presets in the config # file, ask to not set in ds config file: # - "train_batch_size", # - "train_micro_batch_size_per_gpu", # - "gradient_accumulation_steps" bs_keys = ["train_batch_size", "train_micro_batch_size_per_gpu"] if len([x for x in bs_keys if x in config.keys()]): raise ValueError( f"Do not include {bs_keys} entries in the ds config file, as they will be set via --per_device_train_batch_size or its default" ) if "gradient_accumulation_steps" in config.keys(): raise ValueError( "Do not include gradient_accumulation_steps entries in the ds config file, as they will be set via --gradient_accumulation_steps or its default" ) # DeepSpeed does: # train_batch_size = n_gpus * train_micro_batch_size_per_gpu * gradient_accumulation_steps # therefore we just need to set: config["train_micro_batch_size_per_gpu"] = args.per_device_train_batch_size config["gradient_accumulation_steps"] = args.gradient_accumulation_steps if "gradient_clipping" in config: logger.info( f"Keeping the `gradient_clipping` config from {ds_config_file} intact, ignoring any gradient clipping-specific cl args" ) else: # override only if the ds config doesn't already have this section config["gradient_clipping"] = args.max_grad_norm if "optimizer" in config: logger.info( f"Keeping the `optimizer` config from {ds_config_file} intact, ignoring any optimizer-specific cl args" ) else: # user wants optimizer from cl args, so create and pass to DS optimizer = trainer.create_optimizer() if "scheduler" in config: logger.info( f"Keeping the `scheduler` config from {ds_config_file} intact, ignoring any scheduler-specific cl args" ) else: # user wants to LR scheduler from cl args, so create it if optimizer already avaialble. 
if optimizer is not None: lr_scheduler = trainer.create_scheduler( optimizer=optimizer, num_training_steps=num_training_steps ) # fp16 if trainer.fp16_backend is not None: # Deepspeed has 2 possible fp16 config entries: # - `fp16`: for the native amp - it has a bunch of optional params but we won't set any here unless the user did the work # - `amp`: which delegates amp work to apex (which needs to be available), but it cannot be used with any ZeRO features, so probably best to be avoided. if trainer.fp16_backend == "apex": if "amp" in config: logger.info( f"Keeping the `amp` config from {ds_config_file} intact, ignoring any amp-specific cl args" ) else: config["amp"] = { "enabled": True, "opt_level": args.fp16_opt_level, } elif trainer.fp16_backend == "amp": if "fp16" in config: logger.info( f"Keeping the `fp16` config from {ds_config_file} intact, ignoring any fp16-specific cl args" ) else: config["fp16"] = { "enabled": True, } # for clarity extract the specific cl args that are being passed to deepspeed ds_args = dict(local_rank=args.local_rank) # init that takes part of the config via `args`, and the bulk of it via `config_params` model_parameters = filter(lambda p: p.requires_grad, model.parameters()) model, optimizer, _, lr_scheduler = deepspeed.initialize( args=SimpleNamespace(**ds_args), # expects an obj model=model, model_parameters=model_parameters, optimizer=optimizer, lr_scheduler=lr_scheduler, config_params=config, ) if lr_scheduler is None: lr_scheduler = trainer.create_scheduler( optimizer=optimizer, num_training_steps=num_training_steps ) return model, optimizer, lr_scheduler class TensorBoardCallback(TrainerCallback): """ A :class:`~transformers.TrainerCallback` that sends the logs to `TensorBoard <https://www.tensorflow.org/tensorboard>`__. Args: tb_writer (:obj:`SummaryWriter`, `optional`): The writer to use. Will instantiate one if not set. """ def __init__(self, tb_writer=None): has_tensorboard = is_tensorboard_available() assert ( has_tensorboard ), "TensorBoardCallback requires tensorboard to be installed. Either update your PyTorch version or install tensorboardX." if has_tensorboard: try: from torch.utils.tensorboard import SummaryWriter # noqa: F401 self._SummaryWriter = SummaryWriter except ImportError: try: from tensorboardX import SummaryWriter self._SummaryWriter = SummaryWriter except ImportError: self._SummaryWriter = None else: self._SummaryWriter = None self.tb_writer = tb_writer def _init_summary_writer(self, args, log_dir=None): log_dir = log_dir or args.logging_dir if self._SummaryWriter is not None: self.tb_writer = self._SummaryWriter(log_dir=log_dir) def on_train_begin(self, args, state, control, **kwargs): if not state.is_world_process_zero: return log_dir = None if state.is_hyper_param_search: trial_name = state.trial_name if trial_name is not None: log_dir = os.path.join(args.logging_dir, trial_name) self._init_summary_writer(args, log_dir) if self.tb_writer is not None: self.tb_writer.add_text("args", args.to_json_string()) if "model" in kwargs: model = kwargs["model"] if hasattr(model, "config") and model.config is not None: model_config_json = model.config.to_json_string() self.tb_writer.add_text("model_config", model_config_json) # Version of TensorBoard coming from tensorboardX does not have this method. 
if hasattr(self.tb_writer, "add_hparams"): self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict={}) def on_log(self, args, state, control, logs=None, **kwargs): if state.is_world_process_zero: if self.tb_writer is None: self._init_summary_writer(args) if self.tb_writer is not None: logs = rewrite_logs(logs) for k, v in logs.items(): if isinstance(v, (int, float)): self.tb_writer.add_scalar(k, v, state.global_step) else: logger.warning( "Trainer is attempting to log a value of " '"%s" of type %s for key "%s" as a scalar. ' "This invocation of Tensorboard's writer.add_scalar() " "is incorrect so we dropped this attribute.", v, type(v), k, ) self.tb_writer.flush() def on_train_end(self, args, state, control, **kwargs): if self.tb_writer: self.tb_writer.close() class WandbCallback(TrainerCallback): """ A :class:`~transformers.TrainerCallback` that sends the logs to `Weight and Biases <https://www.wandb.com/>`__. """ def __init__(self): has_wandb = is_wandb_available() assert has_wandb, "WandbCallback requires wandb to be installed. Run `pip install wandb`." if has_wandb: import wandb wandb.ensure_configured() if wandb.api.api_key is None: has_wandb = False logger.warning( "W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable." ) self._wandb = None else: self._wandb = wandb self._initialized = False # log outputs self._log_model = os.getenv("WANDB_LOG_MODEL", "FALSE").upper() in ENV_VARS_TRUE_VALUES.union({"TRUE"}) def setup(self, args, state, model, reinit, **kwargs): """ Setup the optional Weights & Biases (`wandb`) integration. One can subclass and override this method to customize the setup if needed. Find more information `here <https://docs.wandb.ai/integrations/huggingface>`__. You can also override the following environment variables: Environment: WANDB_LOG_MODEL (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to log model as artifact at the end of training. WANDB_WATCH (:obj:`str`, `optional` defaults to :obj:`"gradients"`): Can be :obj:`"gradients"`, :obj:`"all"` or :obj:`"false"`. Set to :obj:`"false"` to disable gradient logging or :obj:`"all"` to log gradients and parameters. WANDB_PROJECT (:obj:`str`, `optional`, defaults to :obj:`"huggingface"`): Set this to a custom string to store results in a different project. WANDB_DISABLED (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to disable wandb entirely. Set `WANDB_DISABLED=true` to disable. 
""" if self._wandb is None: return self._initialized = True if state.is_world_process_zero: logger.info( 'Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"' ) combined_dict = {**args.to_sanitized_dict()} if hasattr(model, "config") and model.config is not None: model_config = model.config.to_dict() combined_dict = {**model_config, **combined_dict} trial_name = state.trial_name init_args = {} if trial_name is not None: run_name = trial_name init_args["group"] = args.run_name else: run_name = args.run_name self._wandb.init( project=os.getenv("WANDB_PROJECT", "huggingface"), config=combined_dict, name=run_name, reinit=reinit, **init_args, ) # keep track of model topology and gradients, unsupported on TPU if not is_torch_tpu_available() and os.getenv("WANDB_WATCH") != "false": self._wandb.watch( model, log=os.getenv("WANDB_WATCH", "gradients"), log_freq=max(100, args.logging_steps) ) def on_train_begin(self, args, state, control, model=None, **kwargs): if self._wandb is None: return hp_search = state.is_hyper_param_search if not self._initialized or hp_search: self.setup(args, state, model, reinit=hp_search, **kwargs) def on_train_end(self, args, state, control, model=None, tokenizer=None, **kwargs): if self._wandb is None: return # commit last step if state.is_world_process_zero: self._wandb.log({}) if self._log_model and self._initialized and state.is_world_process_zero: from .trainer import Trainer fake_trainer = Trainer(args=args, model=model, tokenizer=tokenizer) with tempfile.TemporaryDirectory() as temp_dir: fake_trainer.save_model(temp_dir) # use run name and ensure it's a valid Artifact name artifact_name = re.sub(r"[^a-zA-Z0-9_\.\-]", "", self._wandb.run.name) metadata = ( { k: v for k, v in dict(self._wandb.summary).items() if isinstance(v, numbers.Number) and not k.startswith("_") } if not args.load_best_model_at_end else { f"eval/{args.metric_for_best_model}": state.best_metric, "train/total_floss": state.total_flos, } ) artifact = self._wandb.Artifact(name=f"run-{artifact_name}", type="model", metadata=metadata) for f in Path(temp_dir).glob("*"): if f.is_file(): with artifact.new_file(f.name, mode="wb") as fa: fa.write(f.read_bytes()) self._wandb.run.log_artifact(artifact) def on_log(self, args, state, control, model=None, logs=None, **kwargs): if self._wandb is None: return if not self._initialized: self.setup(args, state, model, reinit=False) if state.is_world_process_zero: logs = rewrite_logs(logs) self._wandb.log(logs, step=state.global_step) class CometCallback(TrainerCallback): """ A :class:`~transformers.TrainerCallback` that sends the logs to `Comet ML <https://www.comet.ml/site/>`__. """ def __init__(self): assert _has_comet, "CometCallback requires comet-ml to be installed. Run `pip install comet-ml`." self._initialized = False def setup(self, args, state, model): """ Setup the optional Comet.ml integration. Environment: COMET_MODE (:obj:`str`, `optional`): "OFFLINE", "ONLINE", or "DISABLED" COMET_PROJECT_NAME (:obj:`str`, `optional`): Comet.ml project name for experiments COMET_OFFLINE_DIRECTORY (:obj:`str`, `optional`): Folder to use for saving offline experiments when :obj:`COMET_MODE` is "OFFLINE" For a number of configurable items in the environment, see `here <https://www.comet.ml/docs/python-sdk/advanced/#comet-configuration-variables>`__. 
""" self._initialized = True if state.is_world_process_zero: comet_mode = os.getenv("COMET_MODE", "ONLINE").upper() args = {"project_name": os.getenv("COMET_PROJECT_NAME", "huggingface")} experiment = None if comet_mode == "ONLINE": experiment = comet_ml.Experiment(**args) logger.info("Automatic Comet.ml online logging enabled") elif comet_mode == "OFFLINE": args["offline_directory"] = os.getenv("COMET_OFFLINE_DIRECTORY", "./") experiment = comet_ml.OfflineExperiment(**args) logger.info("Automatic Comet.ml offline logging enabled; use `comet upload` when finished") if experiment is not None: experiment._set_model_graph(model, framework="transformers") experiment._log_parameters(args, prefix="args/", framework="transformers") if hasattr(model, "config"): experiment._log_parameters(model.config, prefix="config/", framework="transformers") def on_train_begin(self, args, state, control, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) def on_log(self, args, state, control, model=None, logs=None, **kwargs): if not self._initialized: self.setup(args, state, model) if state.is_world_process_zero: experiment = comet_ml.config.get_global_experiment() if experiment is not None: experiment._log_metrics(logs, step=state.global_step, epoch=state.epoch, framework="transformers") class AzureMLCallback(TrainerCallback): """ A :class:`~transformers.TrainerCallback` that sends the logs to `AzureML <https://pypi.org/project/azureml-sdk/>`__. """ def __init__(self, azureml_run=None): assert ( is_azureml_available() ), "AzureMLCallback requires azureml to be installed. Run `pip install azureml-sdk`." self.azureml_run = azureml_run def on_init_end(self, args, state, control, **kwargs): from azureml.core.run import Run if self.azureml_run is None and state.is_world_process_zero: self.azureml_run = Run.get_context() def on_log(self, args, state, control, logs=None, **kwargs): if self.azureml_run: for k, v in logs.items(): if isinstance(v, (int, float)): self.azureml_run.log(k, v, description=k) class MLflowCallback(TrainerCallback): """ A :class:`~transformers.TrainerCallback` that sends the logs to `MLflow <https://www.mlflow.org/>`__. """ def __init__(self): assert is_mlflow_available(), "MLflowCallback requires mlflow to be installed. Run `pip install mlflow`." import mlflow self._MAX_PARAM_VAL_LENGTH = mlflow.utils.validation.MAX_PARAM_VAL_LENGTH self._MAX_PARAMS_TAGS_PER_BATCH = mlflow.utils.validation.MAX_PARAMS_TAGS_PER_BATCH self._initialized = False self._log_artifacts = False self._ml_flow = mlflow def setup(self, args, state, model): """ Setup the optional MLflow integration. Environment: HF_MLFLOW_LOG_ARTIFACTS (:obj:`str`, `optional`): Whether to use MLflow .log_artifact() facility to log artifacts. This only makes sense if logging to a remote server, e.g. s3 or GCS. If set to `True` or `1`, will copy whatever is in TrainerArgument's output_dir to the local or remote artifact storage. Using it without a remote storage will just copy the files to your artifact location. 
""" log_artifacts = os.getenv("HF_MLFLOW_LOG_ARTIFACTS", "FALSE").upper() if log_artifacts in {"TRUE", "1"}: self._log_artifacts = True if state.is_world_process_zero: self._ml_flow.start_run() combined_dict = args.to_dict() if hasattr(model, "config") and model.config is not None: model_config = model.config.to_dict() combined_dict = {**model_config, **combined_dict} # remove params that are too long for MLflow for name, value in list(combined_dict.items()): # internally, all values are converted to str in MLflow if len(str(value)) > self._MAX_PARAM_VAL_LENGTH: logger.warning( f"Trainer is attempting to log a value of " f'"{value}" for key "{name}" as a parameter. ' f"MLflow's log_param() only accepts values no longer than " f"250 characters so we dropped this attribute." ) del combined_dict[name] # MLflow cannot log more than 100 values in one go, so we have to split it combined_dict_items = list(combined_dict.items()) for i in range(0, len(combined_dict_items), self._MAX_PARAMS_TAGS_PER_BATCH): self._ml_flow.log_params(dict(combined_dict_items[i : i + self._MAX_PARAMS_TAGS_PER_BATCH])) self._initialized = True def on_train_begin(self, args, state, control, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) def on_log(self, args, state, control, logs, model=None, **kwargs): if not self._initialized: self.setup(args, state, model) if state.is_world_process_zero: for k, v in logs.items(): if isinstance(v, (int, float)): self._ml_flow.log_metric(k, v, step=state.global_step) else: logger.warning( f"Trainer is attempting to log a value of " f'"{v}" of type {type(v)} for key "{k}" as a metric. ' f"MLflow's log_metric() only accepts float and " f"int types so we dropped this attribute." ) def on_train_end(self, args, state, control, **kwargs): if self._initialized and state.is_world_process_zero: if self._log_artifacts: logger.info("Logging artifacts. This may take time.") self._ml_flow.log_artifacts(args.output_dir) def __del__(self): # if the previous run is not terminated correctly, the fluent API will # not let you start a new run before the previous one is killed if self._ml_flow.active_run is not None: self._ml_flow.end_run() INTEGRATION_TO_CALLBACK = { "azure_ml": AzureMLCallback, "comet_ml": CometCallback, "mlflow": MLflowCallback, "tensorboard": TensorBoardCallback, "wandb": WandbCallback, } def get_reporting_integration_callbacks(report_to): for integration in report_to: if integration not in INTEGRATION_TO_CALLBACK: raise ValueError( f"{integration} is not supported, only {', '.join(INTEGRATION_TO_CALLBACK.keys())} are supported." ) return [INTEGRATION_TO_CALLBACK[integration] for integration in report_to]
AdaMix/src/transformers/integrations.py/0
{ "file_path": "AdaMix/src/transformers/integrations.py", "repo_id": "AdaMix", "token_count": 14373 }
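The integrations module above maps the strings accepted by `report_to` to callback classes through `INTEGRATION_TO_CALLBACK` and `get_reporting_integration_callbacks`. The sketch below is not part of the original file; it is a minimal illustration of that mapping and of the environment variables read by `CometCallback.setup` and `MLflowCallback.setup`, assuming the module is importable as `transformers.integrations` (the values assigned to the variables are examples only).

import os

# These variables are only consulted later, when the callbacks' setup() methods
# run at the start of training (see CometCallback.setup / MLflowCallback.setup above).
os.environ["COMET_MODE"] = "OFFLINE"               # "ONLINE" or "OFFLINE"; anything else leaves Comet logging disabled
os.environ["COMET_OFFLINE_DIRECTORY"] = "./comet"  # where the OfflineExperiment archive is written
os.environ["HF_MLFLOW_LOG_ARTIFACTS"] = "TRUE"     # also copy args.output_dir to the MLflow artifact store

from transformers.integrations import get_reporting_integration_callbacks

# Resolves the names against INTEGRATION_TO_CALLBACK; an unknown name raises ValueError.
callback_classes = get_reporting_integration_callbacks(["mlflow", "tensorboard"])
print(callback_classes)  # [MLflowCallback, TensorBoardCallback]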
# coding=utf-8 # Copyright 2018 The Google Flax Team Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Auto Model class. """ from collections import OrderedDict from ...configuration_utils import PretrainedConfig from ...utils import logging from ..bert.modeling_flax_bert import FlaxBertModel from ..roberta.modeling_flax_roberta import FlaxRobertaModel from .configuration_auto import AutoConfig, BertConfig, RobertaConfig logger = logging.get_logger(__name__) FLAX_MODEL_MAPPING = OrderedDict( [ (RobertaConfig, FlaxRobertaModel), (BertConfig, FlaxBertModel), ] ) class FlaxAutoModel(object): r""" :class:`~transformers.FlaxAutoModel` is a generic model class that will be instantiated as one of the base model classes of the library when created with the `FlaxAutoModel.from_pretrained(pretrained_model_name_or_path)` or the `FlaxAutoModel.from_config(config)` class methods. This class cannot be instantiated using `__init__()` (throws an error). """ def __init__(self): raise EnvironmentError( "FlaxAutoModel is designed to be instantiated " "using the `FlaxAutoModel.from_pretrained(pretrained_model_name_or_path)` or " "`FlaxAutoModel.from_config(config)` methods." ) @classmethod def from_config(cls, config): r""" Instantiates one of the base model classes of the library from a configuration. Args: config (:class:`~transformers.PretrainedConfig`): The model class to instantiate is selected based on the configuration class: - isInstance of `roberta` configuration class: :class:`~transformers.FlaxRobertaModel` (RoBERTa model) - isInstance of `bert` configuration class: :class:`~transformers.FlaxBertModel` (Bert model Examples:: config = BertConfig.from_pretrained('bert-base-uncased') # Download configuration from huggingface.co and cache. model = FlaxAutoModel.from_config(config) # E.g. model was saved using `save_pretrained('./test/saved_model/')` """ for config_class, model_class in FLAX_MODEL_MAPPING.items(): if isinstance(config, config_class): return model_class(config) raise ValueError( f"Unrecognized configuration class {config.__class__} " f"for this kind of FlaxAutoModel: {cls.__name__}.\n" f"Model type should be one of {', '.join(c.__name__ for c in FLAX_MODEL_MAPPING.keys())}." ) @classmethod def from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs): r""" Instantiates one of the base model classes of the library from a pre-trained model configuration. The `from_pretrained()` method takes care of returning the correct model class instance based on the `model_type` property of the config object, or when it's missing, falling back to using pattern matching on the `pretrained_model_name_or_path` string. 
The base model class to instantiate is selected as the first pattern matching in the `pretrained_model_name_or_path` string (in the following order): - contains `roberta`: :class:`~transformers.FlaxRobertaModel` (RoBERTa model) - contains `bert`: :class:`~transformers.FlaxBertModel` (Bert model) The model is set in evaluation mode by default using `model.eval()` (Dropout modules are deactivated) To train the model, you should first set it back in training mode with `model.train()` Args: pretrained_model_name_or_path: either: - a string, the `model id` of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like ``bert-base-uncased``, or namespaced under a user or organization name, like ``dbmdz/bert-base-german-cased``. - a path to a `directory` containing model weights saved using :func:`~transformers.FlaxPreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``. - a path or url to a `pytorch index checkpoint file` (e.g. `./pt_model/pytorch_model.bin`). In this case, ``from_pt`` should be set to True and a configuration object should be provided as ``config`` argument. model_args: (`optional`) Sequence of positional arguments: All remaining positional arguments will be passed to the underlying model's ``__init__`` method config: (`optional`) instance of a class derived from :class:`~transformers.PretrainedConfig`: Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: - the model is a model provided by the library (loaded with the ``shortcut-name`` string of a pretrained model), or - the model was saved using :func:`~transformers.FlaxPreTrainedModel.save_pretrained` and is reloaded by supplying the save directory. - the model is loaded by supplying a local directory as ``pretrained_model_name_or_path`` and a configuration JSON file named `config.json` is found in the directory. cache_dir: (`optional`) string: Path to a directory in which a downloaded pre-trained model configuration should be cached if the standard cache should not be used. force_download: (`optional`) boolean, default False: Force to (re-)download the model weights and configuration files and override the cached versions if they exists. resume_download: (`optional`) boolean, default False: Do not delete incompletely received file. Attempt to resume the download if such a file exists. proxies: (`optional`) dict, default None: A dictionary of proxy servers to use by protocol or endpoint, e.g.: {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info: (`optional`) boolean: Set to ``True`` to also return a dictionary containing missing keys, unexpected keys and error messages. kwargs: (`optional`) Remaining dictionary of keyword arguments: These arguments will be passed to the configuration and the model. Examples:: model = FlaxAutoModel.from_pretrained('bert-base-uncased') # Download model and configuration from huggingface.co and cache. model = FlaxAutoModel.from_pretrained('./test/bert_model/') # E.g. 
model was saved using `save_pretrained('./test/saved_model/')` assert model.config.output_attention == True """ config = kwargs.pop("config", None) if not isinstance(config, PretrainedConfig): config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) for config_class, model_class in FLAX_MODEL_MAPPING.items(): if isinstance(config, config_class): return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) raise ValueError( f"Unrecognized configuration class {config.__class__} " f"for this kind of FlaxAutoModel: {cls.__name__}.\n" f"Model type should be one of {', '.join(c.__name__ for c in FLAX_MODEL_MAPPING.keys())}" )
AdaMix/src/transformers/models/auto/modeling_flax_auto.py/0
{ "file_path": "AdaMix/src/transformers/models/auto/modeling_flax_auto.py", "repo_id": "AdaMix", "token_count": 3141 }
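As its docstrings explain, the `FlaxAutoModel` above dispatches on the configuration class: `BertConfig` selects `FlaxBertModel` and `RobertaConfig` selects `FlaxRobertaModel`. A minimal usage sketch follows; it assumes jax and flax are installed and that a Flax-loadable `bert-base-uncased` checkpoint is reachable by `from_pretrained` in this version of the library (the import path mirrors the file location above).

from transformers import AutoConfig
from transformers.models.auto.modeling_flax_auto import FlaxAutoModel

# from_config: build a model with freshly initialized weights, dispatching on the config class.
config = AutoConfig.from_pretrained("bert-base-uncased")
model = FlaxAutoModel.from_config(config)

# from_pretrained: load the checkpoint's config first, then instantiate and load the matching class.
model = FlaxAutoModel.from_pretrained("bert-base-uncased")

# Direct construction is forbidden by design and raises EnvironmentError:
# FlaxAutoModel()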
# coding=utf-8 # Copyright 2018 The Google Flax Team Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from typing import Callable, Dict, Tuple import numpy as np import flax.linen as nn import jax import jax.numpy as jnp from flax.core.frozen_dict import FrozenDict from jax.random import PRNGKey from ...file_utils import add_start_docstrings, add_start_docstrings_to_model_forward from ...modeling_flax_utils import ACT2FN, FlaxPreTrainedModel from ...utils import logging from .configuration_bert import BertConfig logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "BertConfig" _TOKENIZER_FOR_DOC = "BertTokenizer" BERT_START_DOCSTRING = r""" This model inherits from :class:`~transformers.FlaxPreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen `flax.nn.Module <https://flax.readthedocs.io/en/latest/_autosummary/flax.nn.module.html>`__ subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: - `Just-In-Time (JIT) compilation <https://jax.readthedocs.io/en/latest/jax.html#just-in-time-compilation-jit>`__ - `Automatic Differentiation <https://jax.readthedocs.io/en/latest/jax.html#automatic-differentiation>`__ - `Vectorization <https://jax.readthedocs.io/en/latest/jax.html#vectorization-vmap>`__ - `Parallelization <https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap>`__ Parameters: config (:class:`~transformers.BertConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. """ BERT_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`numpy.ndarray` of shape :obj:`({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using :class:`~transformers.BertTokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :func:`transformers.PreTrainedTokenizer.__call__` for details. `What are input IDs? <../glossary.html#input-ids>`__ attention_mask (:obj:`numpy.ndarray` of shape :obj:`({0})`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ token_type_ids (:obj:`numpy.ndarray` of shape :obj:`({0})`, `optional`): Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0, 1]``: - 0 corresponds to a `sentence A` token, - 1 corresponds to a `sentence B` token. `What are token type IDs? 
<../glossary.html#token-type-ids>`__ position_ids (:obj:`numpy.ndarray` of shape :obj:`({0})`, `optional`): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0, config.max_position_embeddings - 1]``. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. """ class FlaxBertLayerNorm(nn.Module): """ Layer normalization (https://arxiv.org/abs/1607.06450). Operates on the last axis of the input data. """ epsilon: float = 1e-6 dtype: jnp.dtype = jnp.float32 # the dtype of the computation bias: bool = True # If True, bias (beta) is added. scale: bool = True # If True, multiply by scale (gamma). When the next layer is linear # (also e.g. nn.relu), this can be disabled since the scaling will be # done by the next layer. scale_init: Callable[..., np.ndarray] = jax.nn.initializers.ones bias_init: Callable[..., np.ndarray] = jax.nn.initializers.zeros @nn.compact def __call__(self, x): """ Applies layer normalization on the input. It normalizes the activations of the layer for each given example in a batch independently, rather than across a batch like Batch Normalization. i.e. applies a transformation that maintains the mean activation within each example close to 0 and the activation standard deviation close to 1 Args: x: the inputs Returns: Normalized inputs (the same shape as inputs). """ features = x.shape[-1] mean = jnp.mean(x, axis=-1, keepdims=True) mean2 = jnp.mean(jax.lax.square(x), axis=-1, keepdims=True) var = mean2 - jax.lax.square(mean) mul = jax.lax.rsqrt(var + self.epsilon) if self.scale: mul = mul * jnp.asarray(self.param("gamma", self.scale_init, (features,))) y = (x - mean) * mul if self.bias: y = y + jnp.asarray(self.param("beta", self.bias_init, (features,))) return y class FlaxBertEmbedding(nn.Module): """ Specify a new class for doing the embedding stuff as Flax's one use 'embedding' for the parameter name and PyTorch use 'weight' """ vocab_size: int hidden_size: int kernel_init_scale: float = 0.2 emb_init: Callable[..., np.ndarray] = jax.nn.initializers.normal(stddev=kernel_init_scale) dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, inputs): embedding = self.param("weight", self.emb_init, (self.vocab_size, self.hidden_size)) return jnp.take(embedding, inputs, axis=0) class FlaxBertEmbeddings(nn.Module): """Construct the embeddings from word, position and token_type embeddings.""" vocab_size: int hidden_size: int type_vocab_size: int max_length: int kernel_init_scale: float = 0.2 dropout_rate: float = 0.0 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, input_ids, token_type_ids, position_ids, attention_mask, deterministic: bool = True): # Embed w_emb = FlaxBertEmbedding( self.vocab_size, self.hidden_size, kernel_init_scale=self.kernel_init_scale, name="word_embeddings", dtype=self.dtype, )(jnp.atleast_2d(input_ids.astype("i4"))) p_emb = FlaxBertEmbedding( self.max_length, self.hidden_size, kernel_init_scale=self.kernel_init_scale, name="position_embeddings", dtype=self.dtype, )(jnp.atleast_2d(position_ids.astype("i4"))) t_emb = FlaxBertEmbedding( self.type_vocab_size, self.hidden_size, kernel_init_scale=self.kernel_init_scale, name="token_type_embeddings", dtype=self.dtype, )(jnp.atleast_2d(token_type_ids.astype("i4"))) # Sum all embeddings summed_emb = w_emb + jnp.broadcast_to(p_emb, w_emb.shape) + t_emb # Layer Norm layer_norm = 
FlaxBertLayerNorm(name="layer_norm", dtype=self.dtype)(summed_emb) embeddings = nn.Dropout(rate=self.dropout_rate)(layer_norm, deterministic=deterministic) return embeddings class FlaxBertAttention(nn.Module): num_heads: int head_size: int dropout_rate: float = 0.0 kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, hidden_states, attention_mask, deterministic: bool = True): # Attention mask comes in as attention_mask.shape == (*batch_sizes, kv_length) # FLAX expects: attention_mask.shape == (*batch_sizes, 1, 1, kv_length) such that it is broadcastable # with attn_weights.shape == (*batch_sizes, num_heads, q_length, kv_length) attention_mask = jnp.expand_dims(attention_mask, axis=(-3, -2)) self_att = nn.attention.SelfAttention( num_heads=self.num_heads, qkv_features=self.head_size, dropout_rate=self.dropout_rate, deterministic=deterministic, kernel_init=jax.nn.initializers.normal(self.kernel_init_scale, self.dtype), bias_init=jax.nn.initializers.zeros, name="self", dtype=self.dtype, )(hidden_states, attention_mask) layer_norm = FlaxBertLayerNorm(name="layer_norm", dtype=self.dtype)(self_att + hidden_states) return layer_norm class FlaxBertIntermediate(nn.Module): output_size: int hidden_act: str = "gelu" kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, hidden_states): hidden_states = nn.Dense( features=self.output_size, kernel_init=jax.nn.initializers.normal(self.kernel_init_scale, self.dtype), name="dense", dtype=self.dtype, )(hidden_states) hidden_states = ACT2FN[self.hidden_act](hidden_states) return hidden_states class FlaxBertOutput(nn.Module): dropout_rate: float = 0.0 kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, intermediate_output, attention_output, deterministic: bool = True): hidden_states = nn.Dense( attention_output.shape[-1], kernel_init=jax.nn.initializers.normal(self.kernel_init_scale, self.dtype), name="dense", dtype=self.dtype, )(intermediate_output) hidden_states = nn.Dropout(rate=self.dropout_rate)(hidden_states, deterministic=deterministic) hidden_states = FlaxBertLayerNorm(name="layer_norm", dtype=self.dtype)(hidden_states + attention_output) return hidden_states class FlaxBertLayer(nn.Module): num_heads: int head_size: int intermediate_size: int hidden_act: str = "gelu" dropout_rate: float = 0.0 kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, hidden_states, attention_mask, deterministic: bool = True): attention = FlaxBertAttention( self.num_heads, self.head_size, kernel_init_scale=self.kernel_init_scale, dropout_rate=self.dropout_rate, name="attention", dtype=self.dtype, )(hidden_states, attention_mask, deterministic=deterministic) intermediate = FlaxBertIntermediate( self.intermediate_size, kernel_init_scale=self.kernel_init_scale, hidden_act=self.hidden_act, name="intermediate", dtype=self.dtype, )(attention) output = FlaxBertOutput( kernel_init_scale=self.kernel_init_scale, dropout_rate=self.dropout_rate, name="output", dtype=self.dtype )(intermediate, attention, deterministic=deterministic) return output class FlaxBertLayerCollection(nn.Module): """ Stores N BertLayer(s) """ num_layers: int num_heads: int head_size: int intermediate_size: int hidden_act: str = "gelu" dropout_rate: float = 0.0 kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of 
the computation @nn.compact def __call__(self, inputs, attention_mask, deterministic: bool = True): assert self.num_layers > 0, f"num_layers should be >= 1, got ({self.num_layers})" # Initialize input / output input_i = inputs # Forward over all encoders for i in range(self.num_layers): layer = FlaxBertLayer( self.num_heads, self.head_size, self.intermediate_size, kernel_init_scale=self.kernel_init_scale, dropout_rate=self.dropout_rate, hidden_act=self.hidden_act, name=f"{i}", dtype=self.dtype, ) input_i = layer(input_i, attention_mask, deterministic=deterministic) return input_i class FlaxBertEncoder(nn.Module): num_layers: int num_heads: int head_size: int intermediate_size: int hidden_act: str = "gelu" dropout_rate: float = 0.0 kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, hidden_states, attention_mask, deterministic: bool = True): layer = FlaxBertLayerCollection( self.num_layers, self.num_heads, self.head_size, self.intermediate_size, hidden_act=self.hidden_act, kernel_init_scale=self.kernel_init_scale, dropout_rate=self.dropout_rate, name="layer", dtype=self.dtype, )(hidden_states, attention_mask, deterministic=deterministic) return layer class FlaxBertPooler(nn.Module): kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation @nn.compact def __call__(self, hidden_states): cls_token = hidden_states[:, 0] out = nn.Dense( hidden_states.shape[-1], kernel_init=jax.nn.initializers.normal(self.kernel_init_scale, self.dtype), name="dense", dtype=self.dtype, )(cls_token) return nn.tanh(out) class FlaxBertPredictionHeadTransform(nn.Module): hidden_act: str = "gelu" dtype: jnp.dtype = jnp.float32 @nn.compact def __call__(self, hidden_states): hidden_states = nn.Dense(hidden_states.shape[-1], name="dense", dtype=self.dtype)(hidden_states) hidden_states = ACT2FN[self.hidden_act](hidden_states) return FlaxBertLayerNorm(name="layer_norm", dtype=self.dtype)(hidden_states) class FlaxBertLMPredictionHead(nn.Module): vocab_size: int hidden_act: str = "gelu" dtype: jnp.dtype = jnp.float32 @nn.compact def __call__(self, hidden_states): # TODO: The output weights are the same as the input embeddings, but there is # an output-only bias for each token. # Need a link between the two variables so that the bias is correctly # resized with `resize_token_embeddings` hidden_states = FlaxBertPredictionHeadTransform( name="transform", hidden_act=self.hidden_act, dtype=self.dtype )(hidden_states) hidden_states = nn.Dense(self.vocab_size, name="decoder", dtype=self.dtype)(hidden_states) return hidden_states class FlaxBertOnlyMLMHead(nn.Module): vocab_size: int hidden_act: str = "gelu" dtype: jnp.dtype = jnp.float32 @nn.compact def __call__(self, hidden_states): hidden_states = FlaxBertLMPredictionHead( vocab_size=self.vocab_size, hidden_act=self.hidden_act, name="predictions", dtype=self.dtype )(hidden_states) return hidden_states class FlaxBertPreTrainedModel(FlaxPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = BertConfig base_model_prefix = "bert" def _check_inputs(self, input_ids, attention_mask, token_type_ids, position_ids): if token_type_ids is None: token_type_ids = jnp.ones_like(input_ids) if position_ids is None: position_ids = jnp.arange(jnp.atleast_2d(input_ids).shape[-1]) if attention_mask is None: attention_mask = jnp.ones_like(input_ids) return input_ids, attention_mask, token_type_ids, position_ids def init(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict: input_ids, attention_mask, token_type_ids, position_ids = self._check_inputs( jnp.zeros(input_shape, dtype="i4"), None, None, None ) params_rng, dropout_rng = jax.random.split(rng) rngs = {"params": params_rng, "dropout": dropout_rng} return self.module.init(rngs, input_ids, attention_mask, token_type_ids, position_ids)["params"] @staticmethod def convert_from_pytorch(pt_state: Dict, config: BertConfig) -> Dict: jax_state = dict(pt_state) # Need to change some parameters name to match Flax names so that we don't have to fork any layer for key, tensor in pt_state.items(): # Key parts key_parts = set(key.split(".")) # Every dense layer has "kernel" parameters instead of "weight" if "dense.weight" in key: del jax_state[key] key = key.replace("weight", "kernel") jax_state[key] = tensor if "decoder.weight" in key: del jax_state[key] key = key.replace("weight", "kernel") jax_state[key] = tensor.T # SelfAttention needs also to replace "weight" by "kernel" if {"query", "key", "value"} & key_parts: # Flax SelfAttention decomposes the heads (num_head, size // num_heads) if "bias" in key: jax_state[key] = tensor.reshape((config.num_attention_heads, -1)) elif "weight": del jax_state[key] key = key.replace("weight", "kernel") tensor = tensor.reshape((config.num_attention_heads, -1, config.hidden_size)).transpose((2, 0, 1)) jax_state[key] = tensor # SelfAttention output is not a separate layer, remove one nesting if "attention.output.dense" in key: del jax_state[key] key = key.replace("attention.output.dense", "attention.self.out") jax_state[key] = tensor # SelfAttention output is not a separate layer, remove nesting on layer norm if "attention.output.LayerNorm" in key: del jax_state[key] key = key.replace("attention.output.LayerNorm", "attention.LayerNorm") jax_state[key] = tensor # There are some transposed parameters w.r.t their PyTorch counterpart if "intermediate.dense.kernel" in key or "output.dense.kernel" in key or "transform.dense.kernel" in key: jax_state[key] = tensor.T # Self Attention output projection needs to be transposed if "out.kernel" in key: jax_state[key] = tensor.reshape((config.hidden_size, config.num_attention_heads, -1)).transpose( 1, 2, 0 ) # Pooler needs to transpose its kernel if "pooler.dense.kernel" in key: jax_state[key] = tensor.T # Hack to correctly load some pytorch models if "predictions.bias" in key: del jax_state[key] jax_state[".".join(key.split(".")[:2]) + ".decoder.bias"] = tensor # Handle LayerNorm conversion if "LayerNorm" in key: del jax_state[key] # Replace LayerNorm by layer_norm new_key = key.replace("LayerNorm", "layer_norm") if "weight" in key: new_key = new_key.replace("weight", "gamma") elif "bias" in key: new_key = new_key.replace("bias", "beta") jax_state[new_key] = tensor return jax_state @add_start_docstrings( "The bare Bert Model transformer outputting raw hidden-states without any specific head on top.", BERT_START_DOCSTRING, ) class FlaxBertModel(FlaxBertPreTrainedModel): """ The model can behave as an encoder (with only self-attention) as well as a 
decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in `Attention is all you need <https://arxiv.org/abs/1706.03762>`__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. """ def __init__( self, config: BertConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: jnp.dtype = jnp.float32, **kwargs ): module = FlaxBertModule( vocab_size=config.vocab_size, hidden_size=config.hidden_size, type_vocab_size=config.type_vocab_size, max_length=config.max_position_embeddings, num_encoder_layers=config.num_hidden_layers, num_heads=config.num_attention_heads, head_size=config.hidden_size, intermediate_size=config.intermediate_size, dropout_rate=config.hidden_dropout_prob, hidden_act=config.hidden_act, dtype=dtype, **kwargs, ) super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) def __call__( self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, ): input_ids, attention_mask, token_type_ids, position_ids = self._check_inputs( input_ids, attention_mask, token_type_ids, position_ids ) # Handle any PRNG if needed rngs = {} if dropout_rng is not None: rngs["dropout"] = dropout_rng return self.module.apply( {"params": params or self.params}, jnp.array(input_ids, dtype="i4"), jnp.array(attention_mask, dtype="i4"), jnp.array(token_type_ids, dtype="i4"), jnp.array(position_ids, dtype="i4"), not train, rngs=rngs, ) class FlaxBertModule(nn.Module): vocab_size: int hidden_size: int type_vocab_size: int max_length: int num_encoder_layers: int num_heads: int head_size: int intermediate_size: int hidden_act: str = "gelu" dropout_rate: float = 0.0 kernel_init_scale: float = 0.2 dtype: jnp.dtype = jnp.float32 # the dtype of the computation add_pooling_layer: bool = True @nn.compact def __call__(self, input_ids, attention_mask, token_type_ids, position_ids, deterministic: bool = True): # Embedding embeddings = FlaxBertEmbeddings( self.vocab_size, self.hidden_size, self.type_vocab_size, self.max_length, kernel_init_scale=self.kernel_init_scale, dropout_rate=self.dropout_rate, name="embeddings", dtype=self.dtype, )(input_ids, token_type_ids, position_ids, attention_mask, deterministic=deterministic) # N stacked encoding layers encoder = FlaxBertEncoder( self.num_encoder_layers, self.num_heads, self.head_size, self.intermediate_size, kernel_init_scale=self.kernel_init_scale, dropout_rate=self.dropout_rate, hidden_act=self.hidden_act, name="encoder", dtype=self.dtype, )(embeddings, attention_mask, deterministic=deterministic) if not self.add_pooling_layer: return encoder pooled = FlaxBertPooler(kernel_init_scale=self.kernel_init_scale, name="pooler", dtype=self.dtype)(encoder) return encoder, pooled class FlaxBertForMaskedLM(FlaxBertPreTrainedModel): def __init__( self, config: BertConfig, input_shape: Tuple = (1, 1), seed: int = 0, dtype: jnp.dtype = jnp.float32, **kwargs ): module = FlaxBertForMaskedLMModule( vocab_size=config.vocab_size, type_vocab_size=config.type_vocab_size, hidden_size=config.hidden_size, intermediate_size=config.intermediate_size, head_size=config.hidden_size, num_heads=config.num_attention_heads, num_encoder_layers=config.num_hidden_layers, max_length=config.max_position_embeddings, hidden_act=config.hidden_act, **kwargs, ) 
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) def __call__( self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, params: dict = None, dropout_rng: PRNGKey = None, train: bool = False, ): input_ids, attention_mask, token_type_ids, position_ids = self._check_inputs( input_ids, attention_mask, token_type_ids, position_ids ) # Handle any PRNG if needed rngs = {} if dropout_rng is not None: rngs["dropout"] = dropout_rng return self.module.apply( {"params": params or self.params}, jnp.array(input_ids, dtype="i4"), jnp.array(attention_mask, dtype="i4"), jnp.array(token_type_ids, dtype="i4"), jnp.array(position_ids, dtype="i4"), not train, rngs=rngs, ) class FlaxBertForMaskedLMModule(nn.Module): vocab_size: int hidden_size: int intermediate_size: int head_size: int num_heads: int num_encoder_layers: int type_vocab_size: int max_length: int hidden_act: str dropout_rate: float = 0.0 dtype: jnp.dtype = jnp.float32 @nn.compact def __call__( self, input_ids, attention_mask=None, token_type_ids=None, position_ids=None, deterministic: bool = True ): # Model encoder = FlaxBertModule( vocab_size=self.vocab_size, type_vocab_size=self.type_vocab_size, hidden_size=self.hidden_size, intermediate_size=self.intermediate_size, head_size=self.hidden_size, num_heads=self.num_heads, num_encoder_layers=self.num_encoder_layers, max_length=self.max_length, dropout_rate=self.dropout_rate, hidden_act=self.hidden_act, dtype=self.dtype, add_pooling_layer=False, name="bert", )(input_ids, attention_mask, token_type_ids, position_ids, deterministic=deterministic) # Compute the prediction scores encoder = nn.Dropout(rate=self.dropout_rate)(encoder, deterministic=deterministic) logits = FlaxBertOnlyMLMHead( vocab_size=self.vocab_size, hidden_act=self.hidden_act, name="cls", dtype=self.dtype )(encoder) return (logits,)
AdaMix/src/transformers/models/bert/modeling_flax_bert.py/0
{ "file_path": "AdaMix/src/transformers/models/bert/modeling_flax_bert.py", "repo_id": "AdaMix", "token_count": 12405 }
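A rough end-to-end sketch of the `FlaxBertModel` defined above (not part of the original file): `__call__` accepts numpy arrays, `_check_inputs` fills in the optional mask and id tensors, and `FlaxBertModule` returns the sequence output together with the pooled output. It assumes jax and flax are installed and that `from_pretrained` can load a Flax-compatible `bert-base-uncased` checkpoint in this version of the library.

from transformers import BertTokenizerFast
from transformers.models.bert.modeling_flax_bert import FlaxBertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = FlaxBertModel.from_pretrained("bert-base-uncased")

# return_tensors="np" yields numpy arrays, which __call__ converts to jnp arrays.
inputs = tokenizer("Flax BERT running under JAX.", return_tensors="np")
sequence_output, pooled_output = model(**inputs)

print(sequence_output.shape)  # (1, sequence_length, hidden_size)
print(pooled_output.shape)    # (1, hidden_size)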
# coding=utf-8
# Copyright 2019 Inria, Facebook AI Research and the HuggingFace Inc. team.
# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""PyTorch CamemBERT model. """

from ...file_utils import add_start_docstrings
from ...utils import logging
from ..roberta.modeling_roberta import (
    RobertaForCausalLM,
    RobertaForMaskedLM,
    RobertaForMultipleChoice,
    RobertaForQuestionAnswering,
    RobertaForSequenceClassification,
    RobertaForTokenClassification,
    RobertaModel,
)
from .configuration_camembert import CamembertConfig


logger = logging.get_logger(__name__)

_TOKENIZER_FOR_DOC = "CamembertTokenizer"

CAMEMBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [
    "camembert-base",
    "Musixmatch/umberto-commoncrawl-cased-v1",
    "Musixmatch/umberto-wikipedia-uncased-v1",
    # See all CamemBERT models at https://huggingface.co/models?filter=camembert
]

CAMEMBERT_START_DOCSTRING = r"""
    This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the
    generic methods the library implements for all its model (such as downloading or saving, resizing the input
    embeddings, pruning heads etc.)

    This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__
    subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to
    general usage and behavior.

    Parameters:
        config (:class:`~transformers.CamembertConfig`): Model configuration class with all the parameters of the
            model. Initializing with a config file does not load the weights associated with the model, only the
            configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the
            model weights.
"""


@add_start_docstrings(
    "The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top.",
    CAMEMBERT_START_DOCSTRING,
)
class CamembertModel(RobertaModel):
    """
    This class overrides :class:`~transformers.RobertaModel`. Please check the superclass for the appropriate
    documentation alongside usage examples.
    """

    config_class = CamembertConfig


@add_start_docstrings(
    """CamemBERT Model with a `language modeling` head on top. """,
    CAMEMBERT_START_DOCSTRING,
)
class CamembertForMaskedLM(RobertaForMaskedLM):
    """
    This class overrides :class:`~transformers.RobertaForMaskedLM`. Please check the superclass for the appropriate
    documentation alongside usage examples.
    """

    config_class = CamembertConfig


@add_start_docstrings(
    """
    CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the
    pooled output) e.g. for GLUE tasks.
    """,
    CAMEMBERT_START_DOCSTRING,
)
class CamembertForSequenceClassification(RobertaForSequenceClassification):
    """
    This class overrides :class:`~transformers.RobertaForSequenceClassification`. Please check the superclass for the
    appropriate documentation alongside usage examples.
    """

    config_class = CamembertConfig


@add_start_docstrings(
    """
    CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and
    a softmax) e.g. for RocStories/SWAG tasks.
    """,
    CAMEMBERT_START_DOCSTRING,
)
class CamembertForMultipleChoice(RobertaForMultipleChoice):
    """
    This class overrides :class:`~transformers.RobertaForMultipleChoice`. Please check the superclass for the
    appropriate documentation alongside usage examples.
    """

    config_class = CamembertConfig


@add_start_docstrings(
    """
    CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g.
    for Named-Entity-Recognition (NER) tasks.
    """,
    CAMEMBERT_START_DOCSTRING,
)
class CamembertForTokenClassification(RobertaForTokenClassification):
    """
    This class overrides :class:`~transformers.RobertaForTokenClassification`. Please check the superclass for the
    appropriate documentation alongside usage examples.
    """

    config_class = CamembertConfig


@add_start_docstrings(
    """
    CamemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a
    linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
    """,
    CAMEMBERT_START_DOCSTRING,
)
class CamembertForQuestionAnswering(RobertaForQuestionAnswering):
    """
    This class overrides :class:`~transformers.RobertaForQuestionAnswering`. Please check the superclass for the
    appropriate documentation alongside usage examples.
    """

    config_class = CamembertConfig


@add_start_docstrings(
    """CamemBERT Model with a `language modeling` head on top for CLM fine-tuning. """, CAMEMBERT_START_DOCSTRING
)
class CamembertForCausalLM(RobertaForCausalLM):
    """
    This class overrides :class:`~transformers.RobertaForCausalLM`. Please check the superclass for the appropriate
    documentation alongside usage examples.
    """

    config_class = CamembertConfig
AdaMix/src/transformers/models/camembert/modeling_camembert.py/0
{ "file_path": "AdaMix/src/transformers/models/camembert/modeling_camembert.py", "repo_id": "AdaMix", "token_count": 1812 }
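Every CamemBERT class above only swaps `config_class` onto its RoBERTa parent, so usage is identical to RoBERTa. A small masked-language-modelling sketch, assuming PyTorch, sentencepiece and the `camembert-base` checkpoint are available (the predicted token shown in the comment is illustrative only):

import torch
from transformers import CamembertForMaskedLM, CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForMaskedLM.from_pretrained("camembert-base")
model.eval()

inputs = tokenizer(f"Le camembert est {tokenizer.mask_token} !", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the single <mask> position and take its highest-scoring replacement.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = int(logits[0, mask_pos].argmax(dim=-1))
print(tokenizer.convert_ids_to_tokens(predicted_id))  # e.g. "▁délicieux" (illustrative)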
# coding=utf-8 # Copyright 2020 Microsoft and the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Tokenization class for model DeBERTa.""" import os import pathlib import random import unicodedata from functools import lru_cache from typing import Optional, Tuple from zipfile import ZipFile import tqdm import requests from ...tokenization_utils import PreTrainedTokenizer from ...utils import logging try: import regex as re except ImportError: raise ImportError("Please install regex with: pip install regex") logger = logging.get_logger(__name__) VOCAB_FILES_NAMES = {"vocab_file": "bpe_encoder.bin"} PRETRAINED_VOCAB_FILES_MAP = { "vocab_file": { "microsoft/deberta-base": "https://huggingface.co/microsoft/deberta-base/resolve/main/bpe_encoder.bin", "microsoft/deberta-large": "https://huggingface.co/microsoft/deberta-large/resolve/main/bpe_encoder.bin", "microsoft/deberta-xlarge": "https://huggingface.co/microsoft/deberta-xlarge/resolve/main/bpe_encoder.bin", "microsoft/deberta-base-mnli": "https://huggingface.co/microsoft/deberta-base-mnli/resolve/main/bpe_encoder.bin", "microsoft/deberta-large-mnli": "https://huggingface.co/microsoft/deberta-large-mnli/resolve/main/bpe_encoder.bin", "microsoft/deberta-xlarge-mnli": "https://huggingface.co/microsoft/deberta-xlarge-mnli/resolve/main/bpe_encoder.bin", } } PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { "microsoft/deberta-base": 512, "microsoft/deberta-large": 512, "microsoft/deberta-xlarge": 512, "microsoft/deberta-base-mnli": 512, "microsoft/deberta-large-mnli": 512, "microsoft/deberta-xlarge-mnli": 512, } PRETRAINED_INIT_CONFIGURATION = { "microsoft/deberta-base": {"do_lower_case": False}, "microsoft/deberta-large": {"do_lower_case": False}, } __all__ = ["DebertaTokenizer"] @lru_cache() def bytes_to_unicode(): """ Returns list of utf-8 byte and a corresponding list of unicode strings. The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. This is a signficant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup tables between utf-8 bytes and unicode strings. And avoids mapping to whitespace/control characters the bpe code barfs on. """ bs = ( list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1)) ) cs = bs[:] n = 0 for b in range(2 ** 8): if b not in bs: bs.append(b) cs.append(2 ** 8 + n) n += 1 cs = [chr(n) for n in cs] return dict(zip(bs, cs)) def get_pairs(word): """ Return set of symbol pairs in a word. Word is represented as tuple of symbols (symbols being variable-length strings). 
""" pairs = set() prev_char = word[0] for char in word[1:]: pairs.add((prev_char, char)) prev_char = char return pairs class Encoder: def __init__(self, encoder, bpe_merges, errors="replace"): self.encoder = encoder self.decoder = {v: k for k, v in self.encoder.items()} self.errors = errors # how to handle errors in decoding self.byte_encoder = bytes_to_unicode() self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} self.bpe_ranks = dict(zip([tuple(k) for k in bpe_merges], range(len(bpe_merges)))) self.cache = {} self.random = random.Random(0) # Should haved added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""") def bpe(self, token): if token in self.cache: return self.cache[token] word = tuple(token) pairs = get_pairs(word) if not pairs: return token while True: bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) if bigram not in self.bpe_ranks: break first, second = bigram new_word = [] i = 0 while i < len(word): try: j = word.index(first, i) new_word.extend(word[i:j]) i = j except Exception: new_word.extend(word[i:]) break if word[i] == first and i < len(word) - 1 and word[i + 1] == second: new_word.append(first + second) i += 2 else: new_word.append(word[i]) i += 1 new_word = tuple(new_word) word = new_word if len(word) == 1: break else: pairs = get_pairs(word) word = " ".join(word) self.cache[token] = word return word def split_to_words(self, text): return list(re.findall(self.pat, text)) def encode(self, text): bpe_tokens = [] for token in self.split_to_words(text): token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ")) return bpe_tokens def decode(self, tokens): text = "".join([self.decoder[token] for token in tokens]) text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) return text def get_encoder(encoder, vocab): return Encoder( encoder=encoder, bpe_merges=vocab, ) def _is_whitespace(char): """Checks whether `chars` is a whitespace character.""" # \t, \n, and \r are technically contorl characters but we treat them # as whitespace since they are generally considered as such. if char == " " or char == "\t" or char == "\n" or char == "\r": return True cat = unicodedata.category(char) if cat == "Zs": return True return False def _is_control(char): """Checks whether `chars` is a control character.""" # These are technically control characters but we count them as whitespace # characters. if char == "\t" or char == "\n" or char == "\r": return False cat = unicodedata.category(char) if cat.startswith("C"): return True return False def _is_punctuation(char): """Checks whether `chars` is a punctuation character.""" cp = ord(char) # We treat all non-letter/number ASCII as punctuation. # Characters such as "^", "$", and "`" are not in the Unicode # Punctuation class but we treat them as punctuation anyways, for # consistency. 
if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126): return True cat = unicodedata.category(char) if cat.startswith("P"): return True return False def download_asset(name, tag=None, no_cache=False, cache_dir=None): _tag = tag if _tag is None: _tag = "latest" if not cache_dir: cache_dir = os.path.join(pathlib.Path.home(), f".~DeBERTa/assets/{_tag}/") os.makedirs(cache_dir, exist_ok=True) output = os.path.join(cache_dir, name) if os.path.exists(output) and (not no_cache): return output repo = "https://api.github.com/repos/microsoft/DeBERTa/releases" releases = requests.get(repo).json() if tag and tag != "latest": release = [r for r in releases if r["name"].lower() == tag.lower()] if len(release) != 1: raise Exception(f"{tag} can't be found in the repository.") else: release = releases[0] asset = [s for s in release["assets"] if s["name"].lower() == name.lower()] if len(asset) != 1: raise Exception(f"{name} can't be found in the release.") url = asset[0]["url"] headers = {} headers["Accept"] = "application/octet-stream" resp = requests.get(url, stream=True, headers=headers) if resp.status_code != 200: raise Exception(f"Request for {url} return {resp.status_code}, {resp.text}") try: with open(output, "wb") as fs: progress = tqdm( total=int(resp.headers["Content-Length"]) if "Content-Length" in resp.headers else -1, ncols=80, desc=f"Downloading {name}", ) for c in resp.iter_content(chunk_size=1024 * 1024): fs.write(c) progress.update(len(c)) progress.close() except Exception: os.remove(output) raise return output def load_vocab(name=None, tag=None, no_cache=False, cache_dir=None): import torch if name is None: name = "bpe_encoder" model_path = name if model_path and (not os.path.exists(model_path)) and not (("/" in model_path) or ("\\" in model_path)): _tag = tag if _tag is None: _tag = "latest" if not cache_dir: cache_dir = os.path.join(pathlib.Path.home(), f".~DeBERTa/assets/{_tag}/") os.makedirs(cache_dir, exist_ok=True) out_dir = os.path.join(cache_dir, name) model_path = os.path.join(out_dir, "bpe_encoder.bin") if (not os.path.exists(model_path)) or no_cache: asset = download_asset(name + ".zip", tag=tag, no_cache=no_cache, cache_dir=cache_dir) with ZipFile(asset, "r") as zipf: for zip_info in zipf.infolist(): if zip_info.filename[-1] == "/": continue zip_info.filename = os.path.basename(zip_info.filename) zipf.extract(zip_info, out_dir) elif not model_path: return None, None encoder_state = torch.load(model_path) return encoder_state class GPT2Tokenizer(object): """ A wrapper of GPT2 tokenizer with similar interface as BERT tokenizer Args: vocab_file (:obj:`str`, optional): The local path of vocabulary package or the release name of vocabulary in `DeBERTa GitHub releases <https://github.com/microsoft/DeBERTa/releases>`_, e.g. "bpe_encoder", default: `None`. If it's `None`, then it will download the vocabulary in the latest release from GitHub. The vocabulary file is a state dictionary with three items, "dict_map", "vocab", "encoder" which correspond to three files used in `RoBERTa`, i.e. `dict.txt`, `vocab.txt` and `encoder.json`. The difference between our wrapped GPT2 tokenizer and RoBERTa wrapped tokenizer are, - Special tokens, unlike `RoBERTa` which use `<s>`, `</s>` as the `start` token and `end` token of a sentence. We use `[CLS]` and `[SEP]` as the `start` and `end` token of input sentence which is the same as `BERT`. 
- We remapped the token ids in our dictionary with regarding to the new special tokens, `[PAD]` => 0, `[CLS]` => 1, `[SEP]` => 2, `[UNK]` => 3, `[MASK]` => 50264 special_tokens (:obj:`list`, optional): List of special tokens to be added to the end of the vocabulary. """ def __init__(self, vocab_file=None, special_tokens=None): self.pad_token = "[PAD]" self.sep_token = "[SEP]" self.unk_token = "[UNK]" self.cls_token = "[CLS]" self.symbols = [] self.count = [] self.indices = {} self.pad_token_id = self.add_symbol(self.pad_token) self.cls_token_id = self.add_symbol(self.cls_token) self.sep_token_id = self.add_symbol(self.sep_token) self.unk_token_id = self.add_symbol(self.unk_token) self.gpt2_encoder = load_vocab(vocab_file) self.bpe = get_encoder(self.gpt2_encoder["encoder"], self.gpt2_encoder["vocab"]) for w, n in self.gpt2_encoder["dict_map"]: self.add_symbol(w, n) self.mask_token = "[MASK]" self.mask_id = self.add_symbol(self.mask_token) self.special_tokens = ["[MASK]", "[SEP]", "[PAD]", "[UNK]", "[CLS]"] if special_tokens is not None: for t in special_tokens: self.add_special_token(t) self.vocab = self.indices self.ids_to_tokens = self.symbols def tokenize(self, text): """ Convert an input text to tokens. Args: text (:obj:`str`): input text to be tokenized. Returns: A list of byte tokens where each token represent the byte id in GPT2 byte dictionary Example:: >>> tokenizer = GPT2Tokenizer() >>> text = "Hello world!" >>> tokens = tokenizer.tokenize(text) >>> print(tokens) ['15496', '995', '0'] """ bpe = self._encode(text) return [t for t in bpe.split(" ") if t] def convert_tokens_to_ids(self, tokens): """ Convert list of tokens to ids Args: tokens (:obj:`list<str>`): list of tokens Returns: List of ids """ return [self.vocab[t] for t in tokens] def convert_ids_to_tokens(self, ids): """ Convert list of ids to tokens Args: ids (:obj:`list<int>`): list of ids Returns: List of tokens """ tokens = [] for i in ids: tokens.append(self.ids_to_tokens[i]) return tokens def split_to_words(self, text): return self.bpe.split_to_words(text) def decode(self, tokens): """ Decode list of tokens to text strings Args: tokens (:obj:`list<str>`): list of tokens. Returns: Text string corresponds to the input tokens. Example:: >>> tokenizer = GPT2Tokenizer() >>> text = "Hello world!" >>> tokens = tokenizer.tokenize(text) >>> print(tokens) ['15496', '995', '0'] >>> tokenizer.decode(tokens) 'Hello world!' """ return self.bpe.decode([int(t) for t in tokens if t not in self.special_tokens]) def add_special_token(self, token): """ Adds a special token to the dictionary Args: token (:obj:`str`): Tthe new token/word to be added to the vocabulary. Returns: The id of new token in the vocabulary. """ self.special_tokens.append(token) return self.add_symbol(token) def part_of_whole_word(self, token, is_bos=False): if is_bos: return True s = self._decode(token) if len(s) == 1 and (_is_whitespace(list(s)[0]) or _is_control(list(s)[0]) or _is_punctuation(list(s)[0])): return False return not s.startswith(" ") def sym(self, id): return self.ids_to_tokens[id] def id(self, sym): return self.vocab[sym] def _encode(self, x: str) -> str: return " ".join(map(str, self.bpe.encode(x))) def _decode(self, x: str) -> str: return self.bpe.decode(map(int, x.split())) def add_symbol(self, word, n=1): """ Adds a word to the dictionary Args: word (:obj:`str`): Tthe new token/word to be added to the vocabulary. n (int, optional): The frequency of the word. Returns: The id of the new word. 
""" if word in self.indices: idx = self.indices[word] self.count[idx] = self.count[idx] + n return idx else: idx = len(self.symbols) self.indices[word] = idx self.symbols.append(word) self.count.append(n) return idx def save_pretrained(self, path: str, filename_prefix: str = None): import torch filename = VOCAB_FILES_NAMES[list(VOCAB_FILES_NAMES.keys())[0]] if filename_prefix is not None: filename = filename_prefix + "-" + filename full_path = os.path.join(path, filename) torch.save(self.gpt2_encoder, full_path) return (full_path,) class DebertaTokenizer(PreTrainedTokenizer): r""" Constructs a DeBERTa tokenizer, which runs end-to-end tokenization: punctuation splitting + wordpiece Args: vocab_file (:obj:`str`): File containing the vocabulary. do_lower_case (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether or not to lowercase the input when tokenizing. unk_token (:obj:`str`, `optional`, defaults to :obj:`"[UNK]"`): The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (:obj:`str`, `optional`, defaults to :obj:`"[SEP]"`): The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (:obj:`str`, `optional`, defaults to :obj:`"[PAD]"`): The token used for padding, for example when batching sequences of different lengths. cls_token (:obj:`str`, `optional`, defaults to :obj:`"[CLS]"`): The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (:obj:`str`, `optional`, defaults to :obj:`"[MASK]"`): The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. """ vocab_files_names = VOCAB_FILES_NAMES pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP pretrained_init_configuration = PRETRAINED_INIT_CONFIGURATION max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES def __init__( self, vocab_file, do_lower_case=False, unk_token="[UNK]", sep_token="[SEP]", pad_token="[PAD]", cls_token="[CLS]", mask_token="[MASK]", **kwargs ): super().__init__( do_lower_case=do_lower_case, unk_token=unk_token, sep_token=sep_token, pad_token=pad_token, cls_token=cls_token, mask_token=mask_token, **kwargs, ) if not os.path.isfile(vocab_file): raise ValueError( "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google pretrained " "model use `tokenizer = XxxTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)`".format(vocab_file) ) self.do_lower_case = do_lower_case self.gpt2_tokenizer = GPT2Tokenizer(vocab_file) @property def vocab_size(self): return len(self.vocab) @property def vocab(self): return self.gpt2_tokenizer.vocab def get_vocab(self): vocab = self.vocab.copy() vocab.update(self.get_added_vocab()) return vocab def _tokenize(self, text): """Take as input a string and return a list of strings (tokens) for words/sub-words""" if self.do_lower_case: text = text.lower() return self.gpt2_tokenizer.tokenize(text) def _convert_token_to_id(self, token): """ Converts a token (str) in an id using the vocab. 
""" return self.vocab.get(token, self.vocab.get(self.unk_token)) def _convert_id_to_token(self, index): """Converts an index (integer) in a token (str) using the vocab.""" return self.gpt2_tokenizer.sym(index) if index < self.vocab_size else self.unk_token def convert_tokens_to_string(self, tokens): """ Converts a sequence of tokens (string) in a single string. """ return self.gpt2_tokenizer.decode(tokens) def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): """ Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A DeBERTa sequence has the following format: - single sequence: [CLS] X [SEP] - pair of sequences: [CLS] A [SEP] B [SEP] Args: token_ids_0 (:obj:`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (:obj:`List[int]`, `optional`): Optional second list of IDs for sequence pairs. Returns: :obj:`List[int]`: List of `input IDs <../glossary.html#input-ids>`__ with the appropriate special tokens. """ if token_ids_1 is None: return [self.cls_token_id] + token_ids_0 + [self.sep_token_id] cls = [self.cls_token_id] sep = [self.sep_token_id] return cls + token_ids_0 + sep + token_ids_1 + sep def get_special_tokens_mask(self, token_ids_0, token_ids_1=None, already_has_special_tokens=False): """ Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer ``prepare_for_model`` or ``encode_plus`` methods. Args: token_ids_0 (:obj:`List[int]`): List of IDs. token_ids_1 (:obj:`List[int]`, `optional`): Optional second list of IDs for sequence pairs. already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not the token list is already formatted with special tokens for the model. Returns: :obj:`List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ if already_has_special_tokens: if token_ids_1 is not None: raise ValueError( "You should not supply a second sequence if the provided sequence of " "ids is already formatted with special tokens for the model." ) return list( map( lambda x: 1 if x in [self.sep_token_id, self.cls_token_id] else 0, token_ids_0, ) ) if token_ids_1 is not None: return [1] + ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] return [1] + ([0] * len(token_ids_0)) + [1] def create_token_type_ids_from_sequences(self, token_ids_0, token_ids_1=None): """ Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa sequence pair mask has the following format: :: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If :obj:`token_ids_1` is :obj:`None`, this method only returns the first portion of the mask (0s). Args: token_ids_0 (:obj:`List[int]`): List of IDs. token_ids_1 (:obj:`List[int]`, `optional`): Optional second list of IDs for sequence pairs. Returns: :obj:`List[int]`: List of `token type IDs <../glossary.html#token-type-ids>`_ according to the given sequence(s). 
""" sep = [self.sep_token_id] cls = [self.cls_token_id] if token_ids_1 is None: return len(cls + token_ids_0 + sep) * [0] return len(cls + token_ids_0 + sep) * [0] + len(token_ids_1 + sep) * [1] def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs): add_prefix_space = kwargs.pop("add_prefix_space", False) if is_split_into_words or add_prefix_space: text = " " + text return (text, kwargs) def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: return self.gpt2_tokenizer.save_pretrained(save_directory, filename_prefix=filename_prefix)
AdaMix/src/transformers/models/deberta/tokenization_deberta.py/0
{ "file_path": "AdaMix/src/transformers/models/deberta/tokenization_deberta.py", "repo_id": "AdaMix", "token_count": 11243 }
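The `DebertaTokenizer` above wraps a GPT-2 byte-pair encoder but uses BERT-style special tokens, and its `build_inputs_with_special_tokens` / `create_token_type_ids_from_sequences` implement the `[CLS] A [SEP] B [SEP]` layout described in the docstrings. A short sketch of those two methods, assuming PyTorch is installed (the vocabulary file is loaded with `torch.load`) and the `microsoft/deberta-base` vocabulary can be downloaded:

from transformers import DebertaTokenizer

tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base")

ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hello world"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("How are you?"))

# [CLS] A [SEP] for a single sequence, [CLS] A [SEP] B [SEP] for a pair.
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

# 0 over "[CLS] A [SEP]", 1 over "B [SEP]".
type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

assert len(pair_ids) == len(type_ids)
assert pair_ids[0] == tokenizer.cls_token_id and pair_ids[-1] == tokenizer.sep_token_id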
# coding=utf-8 # Copyright 2018 DPR Authors, The Hugging Face Team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ TensorFlow DPR model for Open Domain Question Answering.""" from dataclasses import dataclass from typing import Optional, Tuple, Union import tensorflow as tf from ...file_utils import ( ModelOutput, add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings, ) from ...modeling_tf_outputs import TFBaseModelOutputWithPooling from ...modeling_tf_utils import TFPreTrainedModel, get_initializer, input_processing, shape_list from ...utils import logging from ..bert.modeling_tf_bert import TFBertMainLayer from .configuration_dpr import DPRConfig logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "DPRConfig" TF_DPR_CONTEXT_ENCODER_PRETRAINED_MODEL_ARCHIVE_LIST = [ "facebook/dpr-ctx_encoder-single-nq-base", "facebook/dpr-ctx_encoder-multiset-base", ] TF_DPR_QUESTION_ENCODER_PRETRAINED_MODEL_ARCHIVE_LIST = [ "facebook/dpr-question_encoder-single-nq-base", "facebook/dpr-question_encoder-multiset-base", ] TF_DPR_READER_PRETRAINED_MODEL_ARCHIVE_LIST = [ "facebook/dpr-reader-single-nq-base", "facebook/dpr-reader-multiset-base", ] ########## # Outputs ########## @dataclass class TFDPRContextEncoderOutput(ModelOutput): r""" Class for outputs of :class:`~transformers.TFDPRContextEncoder`. Args: pooler_output: (:obj:``tf.Tensor`` of shape ``(batch_size, embeddings_size)``): The DPR encoder outputs the `pooler_output` that corresponds to the context representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed contexts for nearest neighbors queries with questions embeddings. hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. """ pooler_output: tf.Tensor = None hidden_states: Optional[Tuple[tf.Tensor]] = None attentions: Optional[Tuple[tf.Tensor]] = None @dataclass class TFDPRQuestionEncoderOutput(ModelOutput): """ Class for outputs of :class:`~transformers.TFDPRQuestionEncoder`. Args: pooler_output: (:obj:``tf.Tensor`` of shape ``(batch_size, embeddings_size)``): The DPR encoder outputs the `pooler_output` that corresponds to the question representation. 
Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed questions for nearest neighbors queries with context embeddings. hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. """ pooler_output: tf.Tensor = None hidden_states: Optional[Tuple[tf.Tensor]] = None attentions: Optional[Tuple[tf.Tensor]] = None @dataclass class TFDPRReaderOutput(ModelOutput): """ Class for outputs of :class:`~transformers.TFDPRReaderEncoder`. Args: start_logits: (:obj:``tf.Tensor`` of shape ``(n_passages, sequence_length)``): Logits of the start index of the span for each passage. end_logits: (:obj:``tf.Tensor`` of shape ``(n_passages, sequence_length)``): Logits of the end index of the span for each passage. relevance_logits: (:obj:`tf.Tensor`` of shape ``(n_passages, )``): Outputs of the QA classifier of the DPRReader that corresponds to the scores of each passage to answer the question, compared to all the other passages. hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
""" start_logits: tf.Tensor = None end_logits: tf.Tensor = None relevance_logits: tf.Tensor = None hidden_states: Optional[Tuple[tf.Tensor]] = None attentions: Optional[Tuple[tf.Tensor]] = None class TFDPREncoderLayer(tf.keras.layers.Layer): base_model_prefix = "bert_model" def __init__(self, config: DPRConfig, **kwargs): super().__init__(**kwargs) # resolve name conflict with TFBertMainLayer instead of TFBertModel self.bert_model = TFBertMainLayer(config, name="bert_model") self.config = config assert self.config.hidden_size > 0, "Encoder hidden_size can't be zero" self.projection_dim = config.projection_dim if self.projection_dim > 0: self.encode_proj = tf.keras.layers.Dense( config.projection_dim, kernel_initializer=get_initializer(config.initializer_range), name="encode_proj" ) def call( self, input_ids: tf.Tensor = None, attention_mask: Optional[tf.Tensor] = None, token_type_ids: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions: bool = None, output_hidden_states: bool = None, return_dict: bool = None, training: bool = False, **kwargs, ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor, ...]]: inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) outputs = self.bert_model( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output, pooled_output = outputs[:2] pooled_output = sequence_output[:, 0, :] if self.projection_dim > 0: pooled_output = self.encode_proj(pooled_output) if not inputs["return_dict"]: return (sequence_output, pooled_output) + outputs[2:] return TFBaseModelOutputWithPooling( last_hidden_state=sequence_output, pooler_output=pooled_output, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @property def embeddings_size(self) -> int: if self.projection_dim > 0: return self.projection_dim return self.bert_model.config.hidden_size class TFDPRSpanPredictorLayer(tf.keras.layers.Layer): base_model_prefix = "encoder" def __init__(self, config: DPRConfig, **kwargs): super().__init__(**kwargs) self.config = config self.encoder = TFDPREncoderLayer(config, name="encoder") self.qa_outputs = tf.keras.layers.Dense( 2, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" ) self.qa_classifier = tf.keras.layers.Dense( 1, kernel_initializer=get_initializer(config.initializer_range), name="qa_classifier" ) def call( self, input_ids: tf.Tensor = None, attention_mask: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = False, training: bool = False, **kwargs, ) -> Union[TFDPRReaderOutput, Tuple[tf.Tensor, ...]]: # notations: N - number of questions in a batch, M - number of passages per questions, L - sequence length n_passages, sequence_length = shape_list(input_ids) if input_ids is not None else shape_list(inputs_embeds)[:2] # feed encoder inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, 
inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) outputs = self.encoder( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output = outputs[0] # compute logits logits = self.qa_outputs(sequence_output) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1) end_logits = tf.squeeze(end_logits, axis=-1) relevance_logits = self.qa_classifier(sequence_output[:, 0, :]) # resize start_logits = tf.reshape(start_logits, [n_passages, sequence_length]) end_logits = tf.reshape(end_logits, [n_passages, sequence_length]) relevance_logits = tf.reshape(relevance_logits, [n_passages]) if not inputs["return_dict"]: return (start_logits, end_logits, relevance_logits) + outputs[2:] return TFDPRReaderOutput( start_logits=start_logits, end_logits=end_logits, relevance_logits=relevance_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) class TFDPRSpanPredictor(TFPreTrainedModel): base_model_prefix = "encoder" def __init__(self, config: DPRConfig, **kwargs): super().__init__(config, **kwargs) self.encoder = TFDPRSpanPredictorLayer(config) def call( self, input_ids: tf.Tensor = None, attention_mask: Optional[tf.Tensor] = None, token_type_ids: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = False, training: bool = False, **kwargs, ) -> Union[TFDPRReaderOutput, Tuple[tf.Tensor, ...]]: inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) outputs = self.encoder( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) return outputs class TFDPREncoder(TFPreTrainedModel): base_model_prefix = "encoder" def __init__(self, config: DPRConfig, **kwargs): super().__init__(config, **kwargs) self.encoder = TFDPREncoderLayer(config) def call( self, input_ids: tf.Tensor = None, attention_mask: Optional[tf.Tensor] = None, token_type_ids: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = False, training: bool = False, **kwargs, ) -> Union[TFDPRReaderOutput, Tuple[tf.Tensor, ...]]: inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) outputs = self.encoder( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], 
output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) return outputs ################## # PreTrainedModel ################## class TFDPRPretrainedContextEncoder(TFPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = DPRConfig base_model_prefix = "ctx_encoder" class TFDPRPretrainedQuestionEncoder(TFPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = DPRConfig base_model_prefix = "question_encoder" class TFDPRPretrainedReader(TFPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = DPRConfig base_model_prefix = "reader" @tf.function( input_signature=[ { "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"), "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"), } ] ) def serving(self, inputs): output = self.call(inputs) return self.serving_output(output) ############### # Actual Models ############### TF_DPR_START_DOCSTRING = r""" This model inherits from :class:`~transformers.TFPreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Tensorflow `tf.keras.Model <https://www.tensorflow.org/api_docs/python/tf/keras/Model>`__ subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. .. note:: TF 2.0 models accepts two formats as inputs: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional arguments. This second option is useful when using :meth:`tf.keras.Model.fit` method which currently requires having all the tensors in the first argument of the model call function: :obj:`model(inputs)`. If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument : - a single Tensor with :obj:`input_ids` only and nothing else: :obj:`model(inputs_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: :obj:`model([input_ids, attention_mask])` or :obj:`model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: :obj:`model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Parameters: config (:class:`~transformers.DPRConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.TFPreTrainedModel.from_pretrained` method to load the model weights. """ TF_DPR_ENCODERS_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be formatted with [CLS] and [SEP] tokens as follows: (a) For sequence pairs (for a pair title+text for example): :: tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . 
[SEP] token_type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1 (b) For single sequences (for a question for example): :: tokens: [CLS] the dog is hairy . [SEP] token_type_ids: 0 0 0 0 0 0 0 DPR is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. Indices can be obtained using :class:`~transformers.DPRTokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for details. `What are input IDs? <../glossary.html#input-ids>`__ attention_mask (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ token_type_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0, 1]``: - 0 corresponds to a `sentence A` token, - 1 corresponds to a `sentence B` token. `What are token type IDs? <../glossary.html#token-type-ids>`_ inputs_embeds (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). """ TF_DPR_READER_INPUTS_DOCSTRING = r""" Args: input_ids: (:obj:`Numpy array` or :obj:`tf.Tensor` of shapes :obj:`(n_passages, sequence_length)`): Indices of input sequence tokens in the vocabulary. It has to be a sequence triplet with 1) the question and 2) the passages titles and 3) the passages texts To match pretraining, DPR :obj:`input_ids` sequence should be formatted with [CLS] and [SEP] with the format: ``[CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids>`` DPR is a model with absolute position embeddings so it's usually advised to pad the inputs on the right rather than the left. Indices can be obtained using :class:`~transformers.DPRReaderTokenizer`. See this class documentation for more details. 
attention_mask (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(n_passages, sequence_length)`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ inputs_embeds (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(n_passages, sequence_length, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). """ @add_start_docstrings( "The bare DPRContextEncoder transformer outputting pooler outputs as context representations.", TF_DPR_START_DOCSTRING, ) class TFDPRContextEncoder(TFDPRPretrainedContextEncoder): def __init__(self, config: DPRConfig, *args, **kwargs): super().__init__(config, *args, **kwargs) self.ctx_encoder = TFDPREncoderLayer(config, name="ctx_encoder") def get_input_embeddings(self): try: return self.ctx_encoder.bert_model.get_input_embeddings() except AttributeError: self(self.dummy_inputs) return self.ctx_encoder.bert_model.get_input_embeddings() @add_start_docstrings_to_model_forward(TF_DPR_ENCODERS_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=TFDPRContextEncoderOutput, config_class=_CONFIG_FOR_DOC) def call( self, input_ids=None, attention_mask: Optional[tf.Tensor] = None, token_type_ids: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions=None, output_hidden_states=None, return_dict=None, training: bool = False, **kwargs, ) -> Union[TFDPRContextEncoderOutput, Tuple[tf.Tensor, ...]]: r""" Return: Examples:: >>> from transformers import TFDPRContextEncoder, DPRContextEncoderTokenizer >>> tokenizer = DPRContextEncoderTokenizer.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base') >>> model = TFDPRContextEncoder.from_pretrained('facebook/dpr-ctx_encoder-single-nq-base', from_pt=True) >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors='tf')["input_ids"] >>> embeddings = model(input_ids).pooler_output """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif inputs["input_ids"] is not None: input_shape = shape_list(inputs["input_ids"]) elif 
inputs["inputs_embeds"] is not None: input_shape = shape_list(inputs["inputs_embeds"])[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs["attention_mask"] is None: inputs["attention_mask"] = ( tf.ones(input_shape, dtype=tf.dtypes.int32) if inputs["input_ids"] is None else (inputs["input_ids"] != self.config.pad_token_id) ) if inputs["token_type_ids"] is None: inputs["token_type_ids"] = tf.zeros(input_shape, dtype=tf.dtypes.int32) outputs = self.ctx_encoder( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) if not inputs["return_dict"]: return outputs[1:] return TFDPRContextEncoderOutput( pooler_output=outputs.pooler_output, hidden_states=outputs.hidden_states, attentions=outputs.attentions ) def serving_output(self, output): hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFDPRContextEncoderOutput(pooler_output=output.pooler_output, hidden_states=hs, attentions=attns) @add_start_docstrings( "The bare DPRQuestionEncoder transformer outputting pooler outputs as question representations.", TF_DPR_START_DOCSTRING, ) class TFDPRQuestionEncoder(TFDPRPretrainedQuestionEncoder): def __init__(self, config: DPRConfig, *args, **kwargs): super().__init__(config, *args, **kwargs) self.question_encoder = TFDPREncoderLayer(config, name="question_encoder") def get_input_embeddings(self): try: return self.question_encoder.bert_model.get_input_embeddings() except AttributeError: self(self.dummy_inputs) return self.question_encoder.bert_model.get_input_embeddings() @add_start_docstrings_to_model_forward(TF_DPR_ENCODERS_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=TFDPRQuestionEncoderOutput, config_class=_CONFIG_FOR_DOC) def call( self, input_ids=None, attention_mask: Optional[tf.Tensor] = None, token_type_ids: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions=None, output_hidden_states=None, return_dict=None, training: bool = False, **kwargs, ) -> Union[TFDPRQuestionEncoderOutput, Tuple[tf.Tensor, ...]]: r""" Return: Examples:: >>> from transformers import TFDPRQuestionEncoder, DPRQuestionEncoderTokenizer >>> tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base') >>> model = TFDPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base', from_pt=True) >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors='tf')["input_ids"] >>> embeddings = model(input_ids).pooler_output """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif inputs["input_ids"] is not None: input_shape = shape_list(inputs["input_ids"]) elif inputs["inputs_embeds"] is not None: input_shape = shape_list(inputs["inputs_embeds"])[:-1] 
else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs["attention_mask"] is None: inputs["attention_mask"] = ( tf.ones(input_shape, dtype=tf.dtypes.int32) if inputs["input_ids"] is None else (inputs["input_ids"] != self.config.pad_token_id) ) if inputs["token_type_ids"] is None: inputs["token_type_ids"] = tf.zeros(input_shape, dtype=tf.dtypes.int32) outputs = self.question_encoder( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) if not inputs["return_dict"]: return outputs[1:] return TFDPRQuestionEncoderOutput( pooler_output=outputs.pooler_output, hidden_states=outputs.hidden_states, attentions=outputs.attentions ) def serving_output(self, output): hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFDPRQuestionEncoderOutput(pooler_output=output.pooler_output, hidden_states=hs, attentions=attns) @add_start_docstrings( "The bare DPRReader transformer outputting span predictions.", TF_DPR_START_DOCSTRING, ) class TFDPRReader(TFDPRPretrainedReader): def __init__(self, config: DPRConfig, *args, **kwargs): super().__init__(config, *args, **kwargs) self.span_predictor = TFDPRSpanPredictorLayer(config, name="span_predictor") def get_input_embeddings(self): try: return self.span_predictor.encoder.bert_model.get_input_embeddings() except AttributeError: self(self.dummy_inputs) return self.span_predictor.encoder.bert_model.get_input_embeddings() @add_start_docstrings_to_model_forward(TF_DPR_READER_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=TFDPRReaderOutput, config_class=_CONFIG_FOR_DOC) def call( self, input_ids=None, attention_mask: Optional[tf.Tensor] = None, inputs_embeds: Optional[tf.Tensor] = None, output_attentions: bool = None, output_hidden_states: bool = None, return_dict=None, training: bool = False, **kwargs, ) -> Union[TFDPRReaderOutput, Tuple[tf.Tensor, ...]]: r""" Return: Examples:: >>> from transformers import TFDPRReader, DPRReaderTokenizer >>> tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base') >>> model = TFDPRReader.from_pretrained('facebook/dpr-reader-single-nq-base', from_pt=True) >>> encoded_inputs = tokenizer( ... questions=["What is love ?"], ... titles=["Haddaway"], ... texts=["'What Is Love' is a song recorded by the artist Haddaway"], ... return_tensors='tf' ... 
) >>> outputs = model(encoded_inputs) >>> start_logits = outputs.start_logits >>> end_logits = outputs.end_logits >>> relevance_logits = outputs.relevance_logits """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif inputs["input_ids"] is not None: input_shape = shape_list(inputs["input_ids"]) elif inputs["inputs_embeds"] is not None: input_shape = shape_list(inputs["inputs_embeds"])[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs["attention_mask"] is None: inputs["attention_mask"] = tf.ones(input_shape, dtype=tf.dtypes.int32) return self.span_predictor( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) def serving_output(self, output): hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFDPRReaderOutput( start_logits=output.start_logits, end_logits=output.end_logits, relevance_logits=output.relevance_logits, hidden_states=hs, attentions=attns, )
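# ---------------------------------------------------------------------------
# Minimal end-to-end retrieval sketch using the TF DPR classes above. It is a
# hedged illustration, not part of the module: it assumes the
# `facebook/dpr-*-single-nq-base` checkpoints are available (loaded with
# `from_pt=True`, as in the docstring examples), and the question and passages
# below are made up for demonstration purposes only.
# ---------------------------------------------------------------------------
import tensorflow as tf

from transformers import (
    DPRContextEncoderTokenizer,
    DPRQuestionEncoderTokenizer,
    TFDPRContextEncoder,
    TFDPRQuestionEncoder,
)

ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
ctx_encoder = TFDPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", from_pt=True)
q_encoder = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base", from_pt=True)

# Illustrative corpus of passages to rank against a question.
passages = [
    "Paris is the capital and most populous city of France.",
    "The Amazon rainforest covers much of the Amazon basin of South America.",
]

# Embed the passages with the context tower and the question with the question tower.
ctx_inputs = ctx_tokenizer(passages, padding=True, return_tensors="tf")
ctx_embeddings = ctx_encoder(
    ctx_inputs["input_ids"], attention_mask=ctx_inputs["attention_mask"]
).pooler_output

question_inputs = q_tokenizer("What is the capital of France ?", return_tensors="tf")
question_embedding = q_encoder(question_inputs["input_ids"]).pooler_output

# DPR scores passages by the dot product between question and context embeddings.
scores = tf.matmul(question_embedding, ctx_embeddings, transpose_b=True)
best_passage = passages[int(tf.argmax(scores, axis=-1)[0])]
print(best_passage)
# ---------------------------------------------------------------------------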
# coding=utf-8 # Copyright 2020-present Google Brain and Carnegie Mellon University Authors and the HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ PyTorch Funnel Transformer model. """ import os from dataclasses import dataclass from typing import Optional, Tuple import numpy as np import torch from torch import nn from torch.nn import CrossEntropyLoss, MSELoss from torch.nn import functional as F from ...activations import ACT2FN from ...file_utils import ( ModelOutput, add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings, ) from ...modeling_outputs import ( BaseModelOutput, MaskedLMOutput, MultipleChoiceModelOutput, QuestionAnsweringModelOutput, SequenceClassifierOutput, TokenClassifierOutput, ) from ...modeling_utils import PreTrainedModel from ...utils import logging from .configuration_funnel import FunnelConfig logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "FunnelConfig" _TOKENIZER_FOR_DOC = "FunnelTokenizer" FUNNEL_PRETRAINED_MODEL_ARCHIVE_LIST = [ "funnel-transformer/small", # B4-4-4H768 "funnel-transformer/small-base", # B4-4-4H768, no decoder "funnel-transformer/medium", # B6-3x2-3x2H768 "funnel-transformer/medium-base", # B6-3x2-3x2H768, no decoder "funnel-transformer/intermediate", # B6-6-6H768 "funnel-transformer/intermediate-base", # B6-6-6H768, no decoder "funnel-transformer/large", # B8-8-8H1024 "funnel-transformer/large-base", # B8-8-8H1024, no decoder "funnel-transformer/xlarge-base", # B10-10-10H1024 "funnel-transformer/xlarge", # B10-10-10H1024, no decoder ] INF = 1e6 def load_tf_weights_in_funnel(model, config, tf_checkpoint_path): """Load tf checkpoints in a pytorch model.""" try: import re import numpy as np import tensorflow as tf except ImportError: logger.error( "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " "https://www.tensorflow.org/install/ for installation instructions." 
) raise tf_path = os.path.abspath(tf_checkpoint_path) logger.info("Converting TensorFlow checkpoint from {}".format(tf_path)) # Load weights from TF model init_vars = tf.train.list_variables(tf_path) names = [] arrays = [] for name, shape in init_vars: logger.info("Loading TF weight {} with shape {}".format(name, shape)) array = tf.train.load_variable(tf_path, name) names.append(name) arrays.append(array) _layer_map = { "k": "k_head", "q": "q_head", "v": "v_head", "o": "post_proj", "layer_1": "linear_1", "layer_2": "linear_2", "rel_attn": "attention", "ff": "ffn", "kernel": "weight", "gamma": "weight", "beta": "bias", "lookup_table": "weight", "word_embedding": "word_embeddings", "input": "embeddings", } for name, array in zip(names, arrays): name = name.split("/") # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v # which are not required for using pretrained model if any( n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"] for n in name ): logger.info("Skipping {}".format("/".join(name))) continue if name[0] == "generator": continue pointer = model skipped = False for m_name in name[1:]: if not isinstance(pointer, FunnelPositionwiseFFN) and re.fullmatch(r"layer_\d+", m_name): layer_index = int(re.search(r"layer_(\d+)", m_name).groups()[0]) if layer_index < config.num_hidden_layers: block_idx = 0 while layer_index >= config.block_sizes[block_idx]: layer_index -= config.block_sizes[block_idx] block_idx += 1 pointer = pointer.blocks[block_idx][layer_index] else: layer_index -= config.num_hidden_layers pointer = pointer.layers[layer_index] elif m_name == "r" and isinstance(pointer, FunnelRelMultiheadAttention): pointer = pointer.r_kernel break elif m_name in _layer_map: pointer = getattr(pointer, _layer_map[m_name]) else: try: pointer = getattr(pointer, m_name) except AttributeError: print("Skipping {}".format("/".join(name)), array.shape) skipped = True break if not skipped: if len(pointer.shape) != len(array.shape): array = array.reshape(pointer.shape) if m_name == "kernel": array = np.transpose(array) pointer.data = torch.from_numpy(array) return model class FunnelEmbeddings(nn.Module): def __init__(self, config): super().__init__() self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) self.layer_norm = nn.LayerNorm(config.d_model, eps=config.layer_norm_eps) self.dropout = nn.Dropout(config.hidden_dropout) def forward(self, input_ids=None, inputs_embeds=None): if inputs_embeds is None: inputs_embeds = self.word_embeddings(input_ids) embeddings = self.layer_norm(inputs_embeds) embeddings = self.dropout(embeddings) return embeddings class FunnelAttentionStructure(nn.Module): """ Contains helpers for `FunnelRelMultiheadAttention `. """ cls_token_type_id: int = 2 def __init__(self, config): super().__init__() self.config = config self.sin_dropout = nn.Dropout(config.hidden_dropout) self.cos_dropout = nn.Dropout(config.hidden_dropout) # Track where we are at in terms of pooling from the original input, e.g., by how much the sequence length was # dividide. self.pooling_mult = None def init_attention_inputs(self, inputs_embeds, attention_mask=None, token_type_ids=None): """ Returns the attention inputs associated to the inputs of the model. 
""" # inputs_embeds has shape batch_size x seq_len x d_model # attention_mask and token_type_ids have shape batch_size x seq_len self.pooling_mult = 1 self.seq_len = seq_len = inputs_embeds.size(1) position_embeds = self.get_position_embeds(seq_len, inputs_embeds.dtype, inputs_embeds.device) token_type_mat = self.token_type_ids_to_mat(token_type_ids) if token_type_ids is not None else None cls_mask = ( F.pad(inputs_embeds.new_ones([seq_len - 1, seq_len - 1]), (1, 0, 1, 0)) if self.config.separate_cls else None ) return (position_embeds, token_type_mat, attention_mask, cls_mask) def token_type_ids_to_mat(self, token_type_ids): """Convert `token_type_ids` to `token_type_mat`.""" token_type_mat = token_type_ids[:, :, None] == token_type_ids[:, None] # Treat <cls> as in the same segment as both A & B cls_ids = token_type_ids == self.cls_token_type_id cls_mat = cls_ids[:, :, None] | cls_ids[:, None] return cls_mat | token_type_mat def get_position_embeds(self, seq_len, dtype, device): """ Create and cache inputs related to relative position encoding. Those are very different depending on whether we are using the factorized or the relative shift attention: For the factorized attention, it returns the matrices (phi, pi, psi, omega) used in the paper, appendix A.2.2, final formula. For the relative shif attention, it returns all possible vectors R used in the paper, appendix A.2.1, final formula. Paper link: https://arxiv.org/abs/2006.03236 """ d_model = self.config.d_model if self.config.attention_type == "factorized": # Notations from the paper, appending A.2.2, final formula. # We need to create and return the matrices phi, psi, pi and omega. pos_seq = torch.arange(0, seq_len, 1.0, dtype=dtype, device=device) freq_seq = torch.arange(0, d_model // 2, 1.0, dtype=dtype, device=device) inv_freq = 1 / (10000 ** (freq_seq / (d_model // 2))) sinusoid = pos_seq[:, None] * inv_freq[None] sin_embed = torch.sin(sinusoid) sin_embed_d = self.sin_dropout(sin_embed) cos_embed = torch.cos(sinusoid) cos_embed_d = self.cos_dropout(cos_embed) # This is different from the formula on the paper... phi = torch.cat([sin_embed_d, sin_embed_d], dim=-1) psi = torch.cat([cos_embed, sin_embed], dim=-1) pi = torch.cat([cos_embed_d, cos_embed_d], dim=-1) omega = torch.cat([-sin_embed, cos_embed], dim=-1) return (phi, pi, psi, omega) else: # Notations from the paper, appending A.2.1, final formula. # We need to create and return all the possible vectors R for all blocks and shifts. freq_seq = torch.arange(0, d_model // 2, 1.0, dtype=dtype, device=device) inv_freq = 1 / (10000 ** (freq_seq / (d_model // 2))) # Maximum relative positions for the first input rel_pos_id = torch.arange(-seq_len * 2, seq_len * 2, 1.0, dtype=dtype, device=device) zero_offset = seq_len * 2 sinusoid = rel_pos_id[:, None] * inv_freq[None] sin_embed = self.sin_dropout(torch.sin(sinusoid)) cos_embed = self.cos_dropout(torch.cos(sinusoid)) pos_embed = torch.cat([sin_embed, cos_embed], dim=-1) pos = torch.arange(0, seq_len, dtype=dtype, device=device) pooled_pos = pos position_embeds_list = [] for block_index in range(0, self.config.num_blocks): # For each block with block_index > 0, we need two types position embeddings: # - Attention(pooled-q, unpooled-kv) # - Attention(pooled-q, pooled-kv) # For block_index = 0 we only need the second one and leave the first one as None. 
# First type if block_index == 0: position_embeds_pooling = None else: pooled_pos = self.stride_pool_pos(pos, block_index) # construct rel_pos_id stride = 2 ** (block_index - 1) rel_pos = self.relative_pos(pos, stride, pooled_pos, shift=2) rel_pos = rel_pos[:, None] + zero_offset rel_pos = rel_pos.expand(rel_pos.size(0), d_model) position_embeds_pooling = torch.gather(pos_embed, 0, rel_pos) # Second type pos = pooled_pos stride = 2 ** block_index rel_pos = self.relative_pos(pos, stride) rel_pos = rel_pos[:, None] + zero_offset rel_pos = rel_pos.expand(rel_pos.size(0), d_model) position_embeds_no_pooling = torch.gather(pos_embed, 0, rel_pos) position_embeds_list.append([position_embeds_no_pooling, position_embeds_pooling]) return position_embeds_list def stride_pool_pos(self, pos_id, block_index): """ Pool `pos_id` while keeping the cls token separate (if `config.separate_cls=True`). """ if self.config.separate_cls: # Under separate <cls>, we treat the <cls> as the first token in # the previous block of the 1st real block. Since the 1st real # block always has position 1, the position of the previous block # will be at `1 - 2 ** block_index`. cls_pos = pos_id.new_tensor([-(2 ** block_index) + 1]) pooled_pos_id = pos_id[1:-1] if self.config.truncate_seq else pos_id[1:] return torch.cat([cls_pos, pooled_pos_id[::2]], 0) else: return pos_id[::2] def relative_pos(self, pos, stride, pooled_pos=None, shift=1): """ Build the relative positional vector between `pos` and `pooled_pos`. """ if pooled_pos is None: pooled_pos = pos ref_point = pooled_pos[0] - pos[0] num_remove = shift * len(pooled_pos) max_dist = ref_point + num_remove * stride min_dist = pooled_pos[0] - pos[-1] return torch.arange(max_dist, min_dist - 1, -stride, dtype=torch.long, device=pos.device) def stride_pool(self, tensor, axis): """ Perform pooling by stride slicing the tensor along the given axis. """ if tensor is None: return None # Do the stride pool recursively if axis is a list or a tuple of ints. if isinstance(axis, (list, tuple)): for ax in axis: tensor = self.stride_pool(tensor, ax) return tensor # Do the stride pool recursively if tensor is a list or tuple of tensors. if isinstance(tensor, (tuple, list)): return type(tensor)(self.stride_pool(x, axis) for x in tensor) # Deal with negative axis axis %= tensor.ndim axis_slice = ( slice(None, -1, 2) if self.config.separate_cls and self.config.truncate_seq else slice(None, None, 2) ) enc_slice = [slice(None)] * axis + [axis_slice] if self.config.separate_cls: cls_slice = [slice(None)] * axis + [slice(None, 1)] tensor = torch.cat([tensor[cls_slice], tensor], axis=axis) return tensor[enc_slice] def pool_tensor(self, tensor, mode="mean", stride=2): """Apply 1D pooling to a tensor of size [B x T (x H)].""" if tensor is None: return None # Do the pool recursively if tensor is a list or tuple of tensors. if isinstance(tensor, (tuple, list)): return type(tensor)(self.pool_tensor(tensor, mode=mode, stride=stride) for x in tensor) if self.config.separate_cls: suffix = tensor[:, :-1] if self.config.truncate_seq else tensor tensor = torch.cat([tensor[:, :1], suffix], dim=1) ndim = tensor.ndim if ndim == 2: tensor = tensor[:, None, :, None] elif ndim == 3: tensor = tensor[:, None, :, :] # Stride is applied on the second-to-last dimension. 
stride = (stride, 1) if mode == "mean": tensor = F.avg_pool2d(tensor, stride, stride=stride, ceil_mode=True) elif mode == "max": tensor = F.max_pool2d(tensor, stride, stride=stride, ceil_mode=True) elif mode == "min": tensor = -F.max_pool2d(-tensor, stride, stride=stride, ceil_mode=True) else: raise NotImplementedError("The supported modes are 'mean', 'max' and 'min'.") if ndim == 2: return tensor[:, 0, :, 0] elif ndim == 3: return tensor[:, 0] return tensor def pre_attention_pooling(self, output, attention_inputs): """ Pool `output` and the proper parts of `attention_inputs` before the attention layer. """ position_embeds, token_type_mat, attention_mask, cls_mask = attention_inputs if self.config.pool_q_only: if self.config.attention_type == "factorized": position_embeds = self.stride_pool(position_embeds[:2], 0) + position_embeds[2:] token_type_mat = self.stride_pool(token_type_mat, 1) cls_mask = self.stride_pool(cls_mask, 0) output = self.pool_tensor(output, mode=self.config.pooling_type) else: self.pooling_mult *= 2 if self.config.attention_type == "factorized": position_embeds = self.stride_pool(position_embeds, 0) token_type_mat = self.stride_pool(token_type_mat, [1, 2]) cls_mask = self.stride_pool(cls_mask, [1, 2]) attention_mask = self.pool_tensor(attention_mask, mode="min") output = self.pool_tensor(output, mode=self.config.pooling_type) attention_inputs = (position_embeds, token_type_mat, attention_mask, cls_mask) return output, attention_inputs def post_attention_pooling(self, attention_inputs): """ Pool the proper parts of `attention_inputs` after the attention layer. """ position_embeds, token_type_mat, attention_mask, cls_mask = attention_inputs if self.config.pool_q_only: self.pooling_mult *= 2 if self.config.attention_type == "factorized": position_embeds = position_embeds[:2] + self.stride_pool(position_embeds[2:], 0) token_type_mat = self.stride_pool(token_type_mat, 2) cls_mask = self.stride_pool(cls_mask, 1) attention_mask = self.pool_tensor(attention_mask, mode="min") attention_inputs = (position_embeds, token_type_mat, attention_mask, cls_mask) return attention_inputs def _relative_shift_gather(positional_attn, context_len, shift): batch_size, n_head, seq_len, max_rel_len = positional_attn.shape # max_rel_len = 2 * context_len + shift -1 is the numbers of possible relative positions i-j # What's next is the same as doing the following gather, which might be clearer code but less efficient. 
# idxs = context_len + torch.arange(0, context_len).unsqueeze(0) - torch.arange(0, seq_len).unsqueeze(1) # # matrix of context_len + i-j # return positional_attn.gather(3, idxs.expand([batch_size, n_head, context_len, context_len])) positional_attn = torch.reshape(positional_attn, [batch_size, n_head, max_rel_len, seq_len]) positional_attn = positional_attn[:, :, shift:, :] positional_attn = torch.reshape(positional_attn, [batch_size, n_head, seq_len, max_rel_len - shift]) positional_attn = positional_attn[..., :context_len] return positional_attn class FunnelRelMultiheadAttention(nn.Module): def __init__(self, config, block_index): super().__init__() self.config = config self.block_index = block_index d_model, n_head, d_head = config.d_model, config.n_head, config.d_head self.hidden_dropout = nn.Dropout(config.hidden_dropout) self.attention_dropout = nn.Dropout(config.attention_dropout) self.q_head = nn.Linear(d_model, n_head * d_head, bias=False) self.k_head = nn.Linear(d_model, n_head * d_head) self.v_head = nn.Linear(d_model, n_head * d_head) self.r_w_bias = nn.Parameter(torch.zeros([n_head, d_head])) self.r_r_bias = nn.Parameter(torch.zeros([n_head, d_head])) self.r_kernel = nn.Parameter(torch.zeros([d_model, n_head, d_head])) self.r_s_bias = nn.Parameter(torch.zeros([n_head, d_head])) self.seg_embed = nn.Parameter(torch.zeros([2, n_head, d_head])) self.post_proj = nn.Linear(n_head * d_head, d_model) self.layer_norm = nn.LayerNorm(d_model, eps=config.layer_norm_eps) self.scale = 1.0 / (d_head ** 0.5) def relative_positional_attention(self, position_embeds, q_head, context_len, cls_mask=None): """ Relative attention score for the positional encodings """ # q_head has shape batch_size x sea_len x n_head x d_head if self.config.attention_type == "factorized": # Notations from the paper, appending A.2.2, final formula (https://arxiv.org/abs/2006.03236) # phi and pi have shape seq_len x d_model, psi and omega have shape context_len x d_model phi, pi, psi, omega = position_embeds # Shape n_head x d_head u = self.r_r_bias * self.scale # Shape d_model x n_head x d_head w_r = self.r_kernel # Shape batch_size x sea_len x n_head x d_model q_r_attention = torch.einsum("binh,dnh->bind", q_head + u, w_r) q_r_attention_1 = q_r_attention * phi[:, None] q_r_attention_2 = q_r_attention * pi[:, None] # Shape batch_size x n_head x seq_len x context_len positional_attn = torch.einsum("bind,jd->bnij", q_r_attention_1, psi) + torch.einsum( "bind,jd->bnij", q_r_attention_2, omega ) else: shift = 2 if q_head.shape[1] != context_len else 1 # Notations from the paper, appending A.2.1, final formula (https://arxiv.org/abs/2006.03236) # Grab the proper positional encoding, shape max_rel_len x d_model r = position_embeds[self.block_index][shift - 1] # Shape n_head x d_head v = self.r_r_bias * self.scale # Shape d_model x n_head x d_head w_r = self.r_kernel # Shape max_rel_len x n_head x d_model r_head = torch.einsum("td,dnh->tnh", r, w_r) # Shape batch_size x n_head x seq_len x max_rel_len positional_attn = torch.einsum("binh,tnh->bnit", q_head + v, r_head) # Shape batch_size x n_head x seq_len x context_len positional_attn = _relative_shift_gather(positional_attn, context_len, shift) if cls_mask is not None: positional_attn *= cls_mask return positional_attn def relative_token_type_attention(self, token_type_mat, q_head, cls_mask=None): """ Relative attention score for the token_type_ids """ if token_type_mat is None: return 0 batch_size, seq_len, context_len = token_type_mat.shape # q_head has shape batch_size x 
seq_len x n_head x d_head # Shape n_head x d_head r_s_bias = self.r_s_bias * self.scale # Shape batch_size x n_head x seq_len x 2 token_type_bias = torch.einsum("bind,snd->bnis", q_head + r_s_bias, self.seg_embed) # Shape batch_size x n_head x seq_len x context_len token_type_mat = token_type_mat[:, None].expand([batch_size, q_head.shape[2], seq_len, context_len]) # Shapes batch_size x n_head x seq_len diff_token_type, same_token_type = torch.split(token_type_bias, 1, dim=-1) # Shape batch_size x n_head x seq_len x context_len token_type_attn = torch.where( token_type_mat, same_token_type.expand(token_type_mat.shape), diff_token_type.expand(token_type_mat.shape) ) if cls_mask is not None: token_type_attn *= cls_mask return token_type_attn def forward(self, query, key, value, attention_inputs, output_attentions=False): # query has shape batch_size x seq_len x d_model # key and value have shapes batch_size x context_len x d_model position_embeds, token_type_mat, attention_mask, cls_mask = attention_inputs batch_size, seq_len, _ = query.shape context_len = key.shape[1] n_head, d_head = self.config.n_head, self.config.d_head # Shape batch_size x seq_len x n_head x d_head q_head = self.q_head(query).view(batch_size, seq_len, n_head, d_head) # Shapes batch_size x context_len x n_head x d_head k_head = self.k_head(key).view(batch_size, context_len, n_head, d_head) v_head = self.v_head(value).view(batch_size, context_len, n_head, d_head) q_head = q_head * self.scale # Shape n_head x d_head r_w_bias = self.r_w_bias * self.scale # Shapes batch_size x n_head x seq_len x context_len content_score = torch.einsum("bind,bjnd->bnij", q_head + r_w_bias, k_head) positional_attn = self.relative_positional_attention(position_embeds, q_head, context_len, cls_mask) token_type_attn = self.relative_token_type_attention(token_type_mat, q_head, cls_mask) # merge attention scores attn_score = content_score + positional_attn + token_type_attn # precision safe in case of mixed precision training dtype = attn_score.dtype attn_score = attn_score.float() # perform masking if attention_mask is not None: attn_score = attn_score - INF * (1 - attention_mask[:, None, None].float()) # attention probability attn_prob = torch.softmax(attn_score, dim=-1, dtype=dtype) attn_prob = self.attention_dropout(attn_prob) # attention output, shape batch_size x seq_len x n_head x d_head attn_vec = torch.einsum("bnij,bjnd->bind", attn_prob, v_head) # Shape shape batch_size x seq_len x d_model attn_out = self.post_proj(attn_vec.reshape(batch_size, seq_len, n_head * d_head)) attn_out = self.hidden_dropout(attn_out) output = self.layer_norm(query + attn_out) return (output, attn_prob) if output_attentions else (output,) class FunnelPositionwiseFFN(nn.Module): def __init__(self, config): super().__init__() self.linear_1 = nn.Linear(config.d_model, config.d_inner) self.activation_function = ACT2FN[config.hidden_act] self.activation_dropout = nn.Dropout(config.activation_dropout) self.linear_2 = nn.Linear(config.d_inner, config.d_model) self.dropout = nn.Dropout(config.hidden_dropout) self.layer_norm = nn.LayerNorm(config.d_model, config.layer_norm_eps) def forward(self, hidden): h = self.linear_1(hidden) h = self.activation_function(h) h = self.activation_dropout(h) h = self.linear_2(h) h = self.dropout(h) return self.layer_norm(hidden + h) class FunnelLayer(nn.Module): def __init__(self, config, block_index): super().__init__() self.attention = FunnelRelMultiheadAttention(config, block_index) self.ffn = FunnelPositionwiseFFN(config) def 
forward(self, query, key, value, attention_inputs, output_attentions=False): attn = self.attention(query, key, value, attention_inputs, output_attentions=output_attentions) output = self.ffn(attn[0]) return (output, attn[1]) if output_attentions else (output,) class FunnelEncoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.attention_structure = FunnelAttentionStructure(config) self.blocks = nn.ModuleList( [ nn.ModuleList([FunnelLayer(config, block_index) for _ in range(block_size)]) for block_index, block_size in enumerate(config.block_sizes) ] ) def forward( self, inputs_embeds, attention_mask=None, token_type_ids=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): # The pooling is not implemented on long tensors, so we convert this mask. attention_mask = attention_mask.type_as(inputs_embeds) attention_inputs = self.attention_structure.init_attention_inputs( inputs_embeds, attention_mask=attention_mask, token_type_ids=token_type_ids, ) hidden = inputs_embeds all_hidden_states = (inputs_embeds,) if output_hidden_states else None all_attentions = () if output_attentions else None for block_index, block in enumerate(self.blocks): pooling_flag = hidden.size(1) > (2 if self.config.separate_cls else 1) pooling_flag = pooling_flag and block_index > 0 if pooling_flag: pooled_hidden, attention_inputs = self.attention_structure.pre_attention_pooling( hidden, attention_inputs ) for (layer_index, layer) in enumerate(block): for repeat_index in range(self.config.block_repeats[block_index]): do_pooling = (repeat_index == 0) and (layer_index == 0) and pooling_flag if do_pooling: query = pooled_hidden key = value = hidden if self.config.pool_q_only else pooled_hidden else: query = key = value = hidden layer_output = layer(query, key, value, attention_inputs, output_attentions=output_attentions) hidden = layer_output[0] if do_pooling: attention_inputs = self.attention_structure.post_attention_pooling(attention_inputs) if output_attentions: all_attentions = all_attentions + layer_output[1:] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden,) if not return_dict: return tuple(v for v in [hidden, all_hidden_states, all_attentions] if v is not None) return BaseModelOutput(last_hidden_state=hidden, hidden_states=all_hidden_states, attentions=all_attentions) def upsample(x, stride, target_len, separate_cls=True, truncate_seq=False): """ Upsample tensor `x` to match `target_len` by repeating the tokens `stride` time on the sequence length dimension. 
""" if stride == 1: return x if separate_cls: cls = x[:, :1] x = x[:, 1:] output = torch.repeat_interleave(x, repeats=stride, dim=1) if separate_cls: if truncate_seq: output = nn.functional.pad(output, (0, 0, 0, stride - 1, 0, 0)) output = output[:, : target_len - 1] output = torch.cat([cls, output], dim=1) else: output = output[:, :target_len] return output class FunnelDecoder(nn.Module): def __init__(self, config): super().__init__() self.config = config self.attention_structure = FunnelAttentionStructure(config) self.layers = nn.ModuleList([FunnelLayer(config, 0) for _ in range(config.num_decoder_layers)]) def forward( self, final_hidden, first_block_hidden, attention_mask=None, token_type_ids=None, output_attentions=False, output_hidden_states=False, return_dict=True, ): upsampled_hidden = upsample( final_hidden, stride=2 ** (len(self.config.block_sizes) - 1), target_len=first_block_hidden.shape[1], separate_cls=self.config.separate_cls, truncate_seq=self.config.truncate_seq, ) hidden = upsampled_hidden + first_block_hidden all_hidden_states = (hidden,) if output_hidden_states else None all_attentions = () if output_attentions else None attention_inputs = self.attention_structure.init_attention_inputs( hidden, attention_mask=attention_mask, token_type_ids=token_type_ids, ) for layer in self.layers: layer_output = layer(hidden, hidden, hidden, attention_inputs, output_attentions=output_attentions) hidden = layer_output[0] if output_attentions: all_attentions = all_attentions + layer_output[1:] if output_hidden_states: all_hidden_states = all_hidden_states + (hidden,) if not return_dict: return tuple(v for v in [hidden, all_hidden_states, all_attentions] if v is not None) return BaseModelOutput(last_hidden_state=hidden, hidden_states=all_hidden_states, attentions=all_attentions) class FunnelDiscriminatorPredictions(nn.Module): """Prediction module for the discriminator, made up of two dense layers.""" def __init__(self, config): super().__init__() self.config = config self.dense = nn.Linear(config.d_model, config.d_model) self.dense_prediction = nn.Linear(config.d_model, 1) def forward(self, discriminator_hidden_states): hidden_states = self.dense(discriminator_hidden_states) hidden_states = ACT2FN[self.config.hidden_act](hidden_states) logits = self.dense_prediction(hidden_states).squeeze() return logits class FunnelPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = FunnelConfig load_tf_weights = load_tf_weights_in_funnel base_model_prefix = "funnel" def _init_weights(self, module): classname = module.__class__.__name__ if classname.find("Linear") != -1: if getattr(module, "weight", None) is not None: if self.config.initializer_std is None: fan_out, fan_in = module.weight.shape std = np.sqrt(1.0 / float(fan_in + fan_out)) else: std = self.config.initializer_std nn.init.normal_(module.weight, std=std) if getattr(module, "bias", None) is not None: nn.init.constant_(module.bias, 0.0) elif classname == "FunnelRelMultiheadAttention": nn.init.uniform_(module.r_w_bias, b=self.config.initializer_range) nn.init.uniform_(module.r_r_bias, b=self.config.initializer_range) nn.init.uniform_(module.r_kernel, b=self.config.initializer_range) nn.init.uniform_(module.r_s_bias, b=self.config.initializer_range) nn.init.uniform_(module.seg_embed, b=self.config.initializer_range) elif classname == "FunnelEmbeddings": std = 1.0 if self.config.initializer_std is None else self.config.initializer_std nn.init.normal_(module.word_embeddings.weight, std=std) if module.word_embeddings.padding_idx is not None: module.word_embeddings.weight.data[module.padding_idx].zero_() class FunnelClassificationHead(nn.Module): def __init__(self, config, n_labels): super().__init__() self.linear_hidden = nn.Linear(config.d_model, config.d_model) self.dropout = nn.Dropout(config.hidden_dropout) self.linear_out = nn.Linear(config.d_model, n_labels) def forward(self, hidden): hidden = self.linear_hidden(hidden) hidden = torch.tanh(hidden) hidden = self.dropout(hidden) return self.linear_out(hidden) @dataclass class FunnelForPreTrainingOutput(ModelOutput): """ Output type of :class:`~transformers.FunnelForPreTraining`. Args: loss (`optional`, returned when ``labels`` is provided, ``torch.FloatTensor`` of shape :obj:`(1,)`): Total loss of the ELECTRA-style objective. logits (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`): Prediction scores of the head (scores for each token before SoftMax). hidden_states (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (:obj:`tuple(torch.FloatTensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`torch.FloatTensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. """ loss: Optional[torch.FloatTensor] = None logits: torch.FloatTensor = None hidden_states: Optional[Tuple[torch.FloatTensor]] = None attentions: Optional[Tuple[torch.FloatTensor]] = None FUNNEL_START_DOCSTRING = r""" The Funnel Transformer model was proposed in `Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing <https://arxiv.org/abs/2006.03236>`__ by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from :class:`~transformers.PreTrainedModel`. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config (:class:`~transformers.FunnelConfig`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. """ FUNNEL_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using :class:`~transformers.BertTokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for details. `What are input IDs? <../glossary.html#input-ids>`__ attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`): Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0, 1]``: - 0 corresponds to a `sentence A` token, - 1 corresponds to a `sentence B` token. `What are token type IDs? <../glossary.html#token-type-ids>`_ inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`({0}, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. """ @add_start_docstrings( """ The base Funnel Transformer Model transformer outputting raw hidden-states without upsampling head (also called decoder) or any task-specific head on top. 
""", FUNNEL_START_DOCSTRING, ) class FunnelBaseModel(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.embeddings = FunnelEmbeddings(config) self.encoder = FunnelEncoder(config) self.init_weights() def get_input_embeddings(self): return self.embeddings.word_embeddings def set_input_embeddings(self, new_embeddings): self.embeddings.word_embeddings = new_embeddings @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small-base", output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") device = input_ids.device if input_ids is not None else inputs_embeds.device if attention_mask is None: attention_mask = torch.ones(input_shape, device=device) if token_type_ids is None: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) # TODO: deal with head_mask if inputs_embeds is None: inputs_embeds = self.embeddings(input_ids) encoder_outputs = self.encoder( inputs_embeds, attention_mask=attention_mask, token_type_ids=token_type_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) return encoder_outputs @add_start_docstrings( "The bare Funnel Transformer Model transformer outputting raw hidden-states without any specific head on top.", FUNNEL_START_DOCSTRING, ) class FunnelModel(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.config = config self.embeddings = FunnelEmbeddings(config) self.encoder = FunnelEncoder(config) self.decoder = FunnelDecoder(config) self.init_weights() def get_input_embeddings(self): return self.embeddings.word_embeddings def set_input_embeddings(self, new_embeddings): self.embeddings.word_embeddings = new_embeddings @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and inputs_embeds is not None: raise ValueError("You 
cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") device = input_ids.device if input_ids is not None else inputs_embeds.device if attention_mask is None: attention_mask = torch.ones(input_shape, device=device) if token_type_ids is None: token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) # TODO: deal with head_mask if inputs_embeds is None: inputs_embeds = self.embeddings(input_ids) encoder_outputs = self.encoder( inputs_embeds, attention_mask=attention_mask, token_type_ids=token_type_ids, output_attentions=output_attentions, output_hidden_states=True, return_dict=return_dict, ) decoder_outputs = self.decoder( final_hidden=encoder_outputs[0], first_block_hidden=encoder_outputs[1][self.config.block_sizes[0]], attention_mask=attention_mask, token_type_ids=token_type_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if not return_dict: idx = 0 outputs = (decoder_outputs[0],) if output_hidden_states: idx += 1 outputs = outputs + (encoder_outputs[1] + decoder_outputs[idx],) if output_attentions: idx += 1 outputs = outputs + (encoder_outputs[2] + decoder_outputs[idx],) return outputs return BaseModelOutput( last_hidden_state=decoder_outputs[0], hidden_states=(encoder_outputs.hidden_states + decoder_outputs.hidden_states) if output_hidden_states else None, attentions=(encoder_outputs.attentions + decoder_outputs.attentions) if output_attentions else None, ) add_start_docstrings( """ Funnel Transformer model with a binary classification head on top as used during pretraining for identifying generated tokens. """, FUNNEL_START_DOCSTRING, ) class FunnelForPreTraining(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.funnel = FunnelModel(config) self.discriminator_predictions = FunnelDiscriminatorPredictions(config) self.init_weights() @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @replace_return_docstrings(output_type=FunnelForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (``torch.LongTensor`` of shape ``(batch_size, sequence_length)``, `optional`): Labels for computing the ELECTRA-style loss. Input should be a sequence of tokens (see :obj:`input_ids` docstring) Indices should be in ``[0, 1]``: - 0 indicates the token is an original token, - 1 indicates the token was replaced. 
Returns: Examples:: >>> from transformers import FunnelTokenizer, FunnelForPreTraining >>> import torch >>> tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small') >>> model = FunnelForPreTraining.from_pretrained('funnel-transformer/small') >>> inputs = tokenizer("Hello, my dog is cute", return_tensors= "pt") >>> logits = model(**inputs).logits """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict discriminator_hidden_states = self.funnel( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) discriminator_sequence_output = discriminator_hidden_states[0] logits = self.discriminator_predictions(discriminator_sequence_output) loss = None if labels is not None: loss_fct = nn.BCEWithLogitsLoss() if attention_mask is not None: active_loss = attention_mask.view(-1, discriminator_sequence_output.shape[1]) == 1 active_logits = logits.view(-1, discriminator_sequence_output.shape[1])[active_loss] active_labels = labels[active_loss] loss = loss_fct(active_logits, active_labels.float()) else: loss = loss_fct(logits.view(-1, discriminator_sequence_output.shape[1]), labels.float()) if not return_dict: output = (logits,) + discriminator_hidden_states[1:] return ((loss,) + output) if loss is not None else output return FunnelForPreTrainingOutput( loss=loss, logits=logits, hidden_states=discriminator_hidden_states.hidden_states, attentions=discriminator_hidden_states.attentions, ) @add_start_docstrings("""Funnel Transformer Model with a `language modeling` head on top. """, FUNNEL_START_DOCSTRING) class FunnelForMaskedLM(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.funnel = FunnelModel(config) self.lm_head = nn.Linear(config.d_model, config.vocab_size) self.init_weights() def get_output_embeddings(self): return self.lm_head def set_output_embeddings(self, new_embeddings): self.lm_head = new_embeddings @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=MaskedLMOutput, config_class=_CONFIG_FOR_DOC, mask="<mask>", ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the masked language modeling loss. 
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.funnel( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = outputs[0] prediction_logits = self.lm_head(last_hidden_state) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() # -100 index = padding token masked_lm_loss = loss_fct(prediction_logits.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: output = (prediction_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return MaskedLMOutput( loss=masked_lm_loss, logits=prediction_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Transformer Model with a sequence classification/regression head on top (two linear layer on top of the first timestep of the last hidden state) e.g. for GLUE tasks. """, FUNNEL_START_DOCSTRING, ) class FunnelForSequenceClassification(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.funnel = FunnelBaseModel(config) self.classifier = FunnelClassificationHead(config, config.num_labels) self.init_weights() @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small-base", output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ..., config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss), If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.funnel( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = outputs[0] pooled_output = last_hidden_state[:, 0] logits = self.classifier(pooled_output) loss = None if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Transformer Model with a multiple choice classification head on top (two linear layer on top of the first timestep of the last hidden state, and a softmax) e.g. for RocStories/SWAG tasks. """, FUNNEL_START_DOCSTRING, ) class FunnelForMultipleChoice(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.funnel = FunnelBaseModel(config) self.classifier = FunnelClassificationHead(config, 1) self.init_weights() @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small-base", output_type=MultipleChoiceModelOutput, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the multiple choice classification loss. Indices should be in ``[0, ..., num_choices-1]`` where :obj:`num_choices` is the size of the second dimension of the input tensors. 
(See :obj:`input_ids` above) """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None inputs_embeds = ( inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) if inputs_embeds is not None else None ) outputs = self.funnel( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = outputs[0] pooled_output = last_hidden_state[:, 0] logits = self.classifier(pooled_output) reshaped_logits = logits.view(-1, num_choices) loss = None if labels is not None: loss_fct = CrossEntropyLoss() loss = loss_fct(reshaped_logits, labels) if not return_dict: output = (reshaped_logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return MultipleChoiceModelOutput( loss=loss, logits=reshaped_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Transformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """, FUNNEL_START_DOCSTRING, ) class FunnelForTokenClassification(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.funnel = FunnelModel(config) self.dropout = nn.Dropout(config.hidden_dropout) self.classifier = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=TokenClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels - 1]``. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.funnel( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = outputs[0] last_hidden_state = self.dropout(last_hidden_state) logits = self.classifier(last_hidden_state) loss = None if labels is not None: loss_fct = CrossEntropyLoss() # Only keep active parts of the loss if attention_mask is not None: active_loss = attention_mask.view(-1) == 1 active_logits = logits.view(-1, self.num_labels) active_labels = torch.where( active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels) ) loss = loss_fct(active_logits, active_labels) else: loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) if not return_dict: output = (logits,) + outputs[1:] return ((loss,) + output) if loss is not None else output return TokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @add_start_docstrings( """ Funnel Transformer Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`). """, FUNNEL_START_DOCSTRING, ) class FunnelForQuestionAnswering(FunnelPreTrainedModel): def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.funnel = FunnelModel(config) self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() @add_start_docstrings_to_model_forward(FUNNEL_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint="funnel-transformer/small", output_type=QuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, ) def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.funnel( input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) last_hidden_state = outputs[0] logits = self.qa_outputs(last_hidden_state) start_logits, end_logits = logits.split(1, dim=-1) start_logits = start_logits.squeeze(-1) end_logits = end_logits.squeeze(-1) total_loss = None if start_positions is not None and end_positions is not None: # If we are on multi-GPU, split add a dimension if len(start_positions.size()) > 1: start_positions = start_positions.squeze(-1) if len(end_positions.size()) > 1: end_positions = end_positions.squeeze(-1) # sometimes the start/end positions are outside our model inputs, we ignore these terms ignored_index = start_logits.size(1) start_positions.clamp_(0, ignored_index) end_positions.clamp_(0, ignored_index) loss_fct = CrossEntropyLoss(ignore_index=ignored_index) start_loss = loss_fct(start_logits, start_positions) end_loss = loss_fct(end_logits, end_positions) total_loss = (start_loss + end_loss) / 2 if not return_dict: output = (start_logits, end_logits) + outputs[1:] return ((total_loss,) + output) if total_loss is not None else output return QuestionAnsweringModelOutput( loss=total_loss, start_logits=start_logits, end_logits=end_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, )
# --- End of AdaMix/src/transformers/models/funnel/modeling_funnel.py ---
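Staying with the Funnel module above: the classification heads (for example ``FunnelForSequenceClassification``) run on ``FunnelBaseModel``, i.e. without the decoder, and classify the first token of the compressed sequence. A small fine-tuning-style sketch, assuming the ``funnel-transformer/small-base`` checkpoint and a toy two-label batch (both illustrative)::

    import torch
    from transformers import FunnelTokenizer, FunnelForSequenceClassification

    tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small-base")
    model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", num_labels=2)

    batch = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors="pt")
    labels = torch.tensor([1, 0])

    # When num_labels > 1 the head computes a CrossEntropyLoss internally.
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    print(outputs.logits.shape)  # torch.Size([2, 2])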
# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all.

# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...file_utils import _BaseLazyModule, is_tokenizers_available


_import_structure = {
    "tokenization_herbert": ["HerbertTokenizer"],
}

if is_tokenizers_available():
    _import_structure["tokenization_herbert_fast"] = ["HerbertTokenizerFast"]


if TYPE_CHECKING:
    from .tokenization_herbert import HerbertTokenizer

    if is_tokenizers_available():
        from .tokenization_herbert_fast import HerbertTokenizerFast

else:
    import importlib
    import os
    import sys

    class _LazyModule(_BaseLazyModule):
        """
        Module class that surfaces all objects but only performs associated imports when the objects are requested.
        """

        __file__ = globals()["__file__"]
        __path__ = [os.path.dirname(__file__)]

        def _get_module(self, module_name: str):
            return importlib.import_module("." + module_name, self.__name__)

    sys.modules[__name__] = _LazyModule(__name__, _import_structure)
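The ``__init__.py`` above only registers an import structure; the tokenizer submodules are imported the first time one of their names is accessed on the package. A minimal sketch of that behaviour, assuming a ``transformers`` release from this era in which ``_BaseLazyModule`` maps class names to their submodules::

    import sys

    import transformers.models.herbert as herbert  # executes only the lightweight __init__ above

    # Nothing tokenizer-related has been imported yet (assuming no earlier access elsewhere).
    print("transformers.models.herbert.tokenization_herbert" in sys.modules)  # expected: False

    tokenizer_cls = herbert.HerbertTokenizer  # attribute access triggers the real import
    print("transformers.models.herbert.tokenization_herbert" in sys.modules)  # True
    print(tokenizer_cls.__name__)  # "HerbertTokenizer"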
# --- End of AdaMix/src/transformers/models/herbert/__init__.py ---
# coding=utf-8 # Copyright 2021 Iz Beltagy, Matthew E. Peters, Arman Cohan and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ TF 2.0 LED model. """ import random from dataclasses import dataclass from typing import Dict, List, Optional, Tuple, Union import tensorflow as tf from ...activations_tf import get_tf_activation from ...file_utils import ( ModelOutput, add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings, ) from ...modeling_tf_outputs import TFBaseModelOutputWithPast # Public API from ...modeling_tf_utils import ( TFPreTrainedModel, TFSharedEmbeddings, TFWrappedEmbeddings, get_initializer, input_processing, keras_serializable, shape_list, ) from ...utils import logging from .configuration_led import LEDConfig logger = logging.get_logger(__name__) _CHECKPOINT_FOR_DOC = "allenai/led-base-16384" _CONFIG_FOR_DOC = "LEDConfig" _TOKENIZER_FOR_DOC = "LEDTokenizer" LARGE_NEGATIVE = -1e8 def shift_tokens_right(input_ids: tf.Tensor, pad_token_id: int, decoder_start_token_id: int): shifted_input_ids = tf.roll(input_ids, 1, axis=-1) start_tokens = tf.fill((shape_list(shifted_input_ids)[0], 1), decoder_start_token_id) shifted_input_ids = tf.concat([start_tokens, shifted_input_ids[:, 1:]], -1) # replace possible -100 values in labels by `pad_token_id` shifted_input_ids = tf.where( shifted_input_ids == -100, tf.fill(shape_list(shifted_input_ids), pad_token_id), shifted_input_ids ) # "Verify that `labels` has only positive values and -100" if tf.executing_eagerly(): assert_gte0 = tf.debugging.assert_greater_equal(shifted_input_ids, tf.constant(0)) # Make sure the assertion op is called by wrapping the result in an identity no-op with tf.control_dependencies([assert_gte0]): shifted_input_ids = tf.identity(shifted_input_ids) return shifted_input_ids def _make_causal_mask(input_ids_shape: tf.TensorShape, past_key_values_length: int = 0): """ Make causal mask used for bi-directional self-attention. """ bsz, tgt_len = input_ids_shape mask = tf.ones((tgt_len, tgt_len)) * LARGE_NEGATIVE mask_cond = tf.range(shape_list(mask)[-1]) mask = tf.where(mask_cond < tf.reshape(mask_cond + 1, (shape_list(mask)[-1], 1)), 0.0, mask) if past_key_values_length > 0: mask = tf.concat([tf.zeros((tgt_len, past_key_values_length)), mask], axis=-1) return tf.tile(mask[None, None, :, :], (bsz, 1, 1, 1)) def _expand_mask(mask: tf.Tensor, tgt_len: Optional[int] = None, past_key_values_length: int = 0): """ Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. """ src_len = shape_list(mask)[1] tgt_len = tgt_len if tgt_len is not None else src_len one_cst = tf.constant(1.0) mask = tf.cast(mask, dtype=one_cst.dtype) expanded_mask = tf.tile(mask[:, None, None, :], (1, 1, tgt_len, 1)) return (one_cst - expanded_mask) * LARGE_NEGATIVE class TFLEDLearnedPositionalEmbedding(TFSharedEmbeddings): """ This module learns positional embeddings up to a fixed maximum size. 
""" def __init__(self, num_embeddings: int, embedding_dim: int, **kwargs): super().__init__(num_embeddings, embedding_dim, **kwargs) def call(self, input_shape: tf.TensorShape, past_key_values_length: int = 0): """Input is expected to be of size [bsz x seqlen].""" bsz, seq_len = input_shape[:2] positions = tf.range(past_key_values_length, seq_len + past_key_values_length, delta=1, name="range") return super().call(positions) # Copied from transformers.models.longformer.modeling_tf_longformer.TFLongformerSelfAttention with TFLongformer->TFLEDEncoder class TFLEDEncoderSelfAttention(tf.keras.layers.Layer): def __init__(self, config, layer_id, **kwargs): super().__init__(**kwargs) if config.hidden_size % config.num_attention_heads != 0: raise ValueError( "The hidden size (%d) is not a multiple of the number of attention " "heads (%d)" % (config.hidden_size, config.num_attention_heads) ) self.num_heads = config.num_attention_heads self.head_dim = int(config.hidden_size / config.num_attention_heads) self.embed_dim = config.hidden_size self.query = tf.keras.layers.Dense( self.embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="query", ) self.key = tf.keras.layers.Dense( self.embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="key", ) self.value = tf.keras.layers.Dense( self.embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="value", ) # separate projection layers for tokens with global attention self.query_global = tf.keras.layers.Dense( self.embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="query_global", ) self.key_global = tf.keras.layers.Dense( self.embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="key_global", ) self.value_global = tf.keras.layers.Dense( self.embed_dim, kernel_initializer=get_initializer(config.initializer_range), name="value_global", ) self.dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) self.global_dropout = tf.keras.layers.Dropout(config.attention_probs_dropout_prob) self.layer_id = layer_id attention_window = config.attention_window[self.layer_id] assert ( attention_window % 2 == 0 ), f"`attention_window` for layer {self.layer_id} has to be an even value. Given {attention_window}" assert ( attention_window > 0 ), f"`attention_window` for layer {self.layer_id} has to be positive. Given {attention_window}" self.one_sided_attn_window_size = attention_window // 2 def call( self, inputs, training=False, ): """ LongformerSelfAttention expects `len(hidden_states)` to be multiple of `attention_window`. Padding to `attention_window` happens in LongformerModel.forward to avoid redoing the padding on each layer. 
The `attention_mask` is changed in :meth:`LongformerModel.forward` from 0, 1, 2 to: * -10000: no attention * 0: local attention * +10000: global attention """ # retrieve input args ( hidden_states, attention_mask, layer_head_mask, is_index_masked, is_index_global_attn, is_global_attn, ) = inputs # project hidden states query_vectors = self.query(hidden_states) key_vectors = self.key(hidden_states) value_vectors = self.value(hidden_states) batch_size, seq_len, embed_dim = shape_list(hidden_states) if tf.executing_eagerly(): tf.debugging.assert_equal( embed_dim, self.embed_dim, message=f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}", ) # normalize query query_vectors /= tf.math.sqrt(tf.cast(self.head_dim, dtype=query_vectors.dtype)) query_vectors = tf.reshape(query_vectors, (batch_size, seq_len, self.num_heads, self.head_dim)) key_vectors = tf.reshape(key_vectors, (batch_size, seq_len, self.num_heads, self.head_dim)) # attn_probs = (batch_size, seq_len, num_heads, window*2+1) attn_scores = self._sliding_chunks_query_key_matmul( query_vectors, key_vectors, self.one_sided_attn_window_size ) # diagonal mask with zeros everywhere and -inf inplace of padding diagonal_mask = self._sliding_chunks_query_key_matmul( tf.ones(shape_list(attention_mask)), attention_mask, self.one_sided_attn_window_size, ) # pad local attention probs attn_scores += diagonal_mask if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(attn_scores), [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1], message=f"attn_probs should be of size ({batch_size}, {seq_len}, {self.num_heads}, {self.one_sided_attn_window_size * 2 + 1}), but is of size {shape_list(attn_scores)}", ) # compute global attn indices required through out forward fn ( max_num_global_attn_indices, is_index_global_attn_nonzero, is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, ) = self._get_global_attn_indices(is_index_global_attn) # this function is only relevant for global attention attn_scores = tf.cond( is_global_attn, lambda: self._concat_with_global_key_attn_probs( attn_scores=attn_scores, query_vectors=query_vectors, key_vectors=key_vectors, max_num_global_attn_indices=max_num_global_attn_indices, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, ), lambda: attn_scores, ) attn_probs = tf.nn.softmax(attn_scores, axis=-1) # softmax sometimes inserts NaN if all positions are masked, replace them with 0 # Make sure to create a mask with the proper shape: # if is_global_attn==True => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1] # if is_global_attn==False => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1] masked_index = tf.cond( is_global_attn, lambda: tf.tile( is_index_masked[:, :, None, None], (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1), ), lambda: tf.tile( is_index_masked[:, :, None, None], (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + 1), ), ) attn_probs = tf.where( masked_index, tf.zeros(shape_list(masked_index), dtype=attn_probs.dtype), attn_probs, ) if layer_head_mask is not None: if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(layer_head_mask), [self.num_heads], message=f"Head mask for a single layer should be of size 
{(self.num_heads)}, but is {shape_list(layer_head_mask)}", ) attn_probs = tf.reshape(layer_head_mask, (1, 1, -1, 1)) * attn_probs # apply dropout attn_probs = self.dropout(attn_probs, training=training) value_vectors = tf.reshape(value_vectors, (batch_size, seq_len, self.num_heads, self.head_dim)) # if global attention, compute sum of global and local attn attn_output = tf.cond( is_global_attn, lambda: self._compute_attn_output_with_global_indices( value_vectors=value_vectors, attn_probs=attn_probs, max_num_global_attn_indices=max_num_global_attn_indices, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, ), lambda: self._sliding_chunks_matmul_attn_probs_value( attn_probs, value_vectors, self.one_sided_attn_window_size ), ) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(attn_output), [batch_size, seq_len, self.num_heads, self.head_dim], message="Unexpected size", ) attn_output = tf.reshape(attn_output, (batch_size, seq_len, embed_dim)) # compute value for global attention and overwrite to attention output # TODO: remove the redundant computation attn_output, global_attn_probs = tf.cond( is_global_attn, lambda: self._compute_global_attn_output_from_hidden( attn_output=attn_output, hidden_states=hidden_states, max_num_global_attn_indices=max_num_global_attn_indices, layer_head_mask=layer_head_mask, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, is_index_masked=is_index_masked, training=training, ), lambda: (attn_output, tf.zeros((batch_size, self.num_heads, max_num_global_attn_indices, seq_len))), ) # make sure that local attention probabilities are set to 0 for indices of global attn # Make sure to create a mask with the proper shape: # if is_global_attn==True => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1] # if is_global_attn==False => [batch_size, seq_len, self.num_heads, self.one_sided_attn_window_size * 2 + 1] masked_global_attn_index = tf.cond( is_global_attn, lambda: tf.tile( is_index_global_attn[:, :, None, None], (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + max_num_global_attn_indices + 1), ), lambda: tf.tile( is_index_global_attn[:, :, None, None], (1, 1, self.num_heads, self.one_sided_attn_window_size * 2 + 1), ), ) attn_probs = tf.where( masked_global_attn_index, tf.zeros(shape_list(masked_global_attn_index), dtype=attn_probs.dtype), attn_probs, ) outputs = (attn_output, attn_probs, global_attn_probs) return outputs def _sliding_chunks_query_key_matmul(self, query, key, window_overlap): """ Matrix multiplication of query and key tensors using with a sliding window attention pattern. This implementation splits the input into overlapping chunks of size 2w (e.g. 512 for pretrained Longformer) with an overlap of size window_overlap """ batch_size, seq_len, num_heads, head_dim = shape_list(query) if tf.executing_eagerly(): tf.debugging.assert_equal( seq_len % (window_overlap * 2), 0, message=f"Sequence length should be multiple of {window_overlap * 2}. 
Given {seq_len}", ) tf.debugging.assert_equal( shape_list(query), shape_list(key), message=f"Shape of query and key should be equal, but got query: {shape_list(query)} and key: {shape_list(key)}", ) chunks_count = seq_len // window_overlap - 1 # group batch_size and num_heads dimensions into one, then chunk seq_len into chunks of size window_overlap * 2 query = tf.reshape( tf.transpose(query, (0, 2, 1, 3)), (batch_size * num_heads, seq_len, head_dim), ) key = tf.reshape(tf.transpose(key, (0, 2, 1, 3)), (batch_size * num_heads, seq_len, head_dim)) chunked_query = self._chunk(query, window_overlap) chunked_key = self._chunk(key, window_overlap) # matrix multiplication # bcxd: batch_size * num_heads x chunks x 2window_overlap x head_dim # bcyd: batch_size * num_heads x chunks x 2window_overlap x head_dim # bcxy: batch_size * num_heads x chunks x 2window_overlap x 2window_overlap chunked_query = tf.cast(chunked_query, dtype=chunked_key.dtype) chunked_attention_scores = tf.einsum("bcxd,bcyd->bcxy", chunked_query, chunked_key) # multiply # convert diagonals into columns paddings = tf.convert_to_tensor([[0, 0], [0, 0], [0, 1], [0, 0]]) diagonal_chunked_attention_scores = self._pad_and_transpose_last_two_dims(chunked_attention_scores, paddings) # allocate space for the overall attention matrix where the chunks are combined. The last dimension # has (window_overlap * 2 + 1) columns. The first (window_overlap) columns are the window_overlap lower triangles (attention from a word to # window_overlap previous words). The following column is attention score from each word to itself, then # followed by window_overlap columns for the upper triangle. # copy parts from diagonal_chunked_attention_scores into the combined matrix of attentions # - copying the main diagonal and the upper triangle # TODO: This code is most likely not very efficient and should be improved diagonal_attn_scores_up_triang = tf.concat( [ diagonal_chunked_attention_scores[:, :, :window_overlap, : window_overlap + 1], diagonal_chunked_attention_scores[:, -1:, window_overlap:, : window_overlap + 1], ], axis=1, ) # - copying the lower triangle diagonal_attn_scores_low_triang = tf.concat( [ tf.zeros( (batch_size * num_heads, 1, window_overlap, window_overlap), dtype=diagonal_chunked_attention_scores.dtype, ), diagonal_chunked_attention_scores[:, :, -(window_overlap + 1) : -1, window_overlap + 1 :], ], axis=1, ) diagonal_attn_scores_first_chunk = tf.concat( [ tf.roll( diagonal_chunked_attention_scores, shift=[1, window_overlap], axis=[2, 3], )[:, :, :window_overlap, :window_overlap], tf.zeros( (batch_size * num_heads, 1, window_overlap, window_overlap), dtype=diagonal_chunked_attention_scores.dtype, ), ], axis=1, ) first_chunk_mask = ( tf.tile( tf.range(chunks_count + 1)[None, :, None, None], (batch_size * num_heads, 1, window_overlap, window_overlap), ) < 1 ) diagonal_attn_scores_low_triang = tf.where( first_chunk_mask, diagonal_attn_scores_first_chunk, diagonal_attn_scores_low_triang, ) # merging upper and lower triangle diagonal_attention_scores = tf.concat( [diagonal_attn_scores_low_triang, diagonal_attn_scores_up_triang], axis=-1 ) # separate batch_size and num_heads dimensions again diagonal_attention_scores = tf.transpose( tf.reshape( diagonal_attention_scores, (batch_size, num_heads, seq_len, 2 * window_overlap + 1), ), (0, 2, 1, 3), ) diagonal_attention_scores = self._mask_invalid_locations(diagonal_attention_scores, window_overlap) return diagonal_attention_scores @staticmethod def _mask_invalid_locations(input_tensor, 
window_overlap): # create correct upper triangle bool mask mask_2d_upper = tf.reverse( tf.linalg.band_part(tf.ones(shape=(window_overlap, window_overlap + 1)), -1, 0), axis=[0], ) # pad to full matrix padding = tf.convert_to_tensor( [[0, shape_list(input_tensor)[1] - window_overlap], [0, shape_list(input_tensor)[3] - window_overlap - 1]] ) # create lower mask mask_2d = tf.pad(mask_2d_upper, padding) # combine with upper mask mask_2d = mask_2d + tf.reverse(mask_2d, axis=[0, 1]) # broadcast to full matrix mask_4d = tf.tile(mask_2d[None, :, None, :], (shape_list(input_tensor)[0], 1, 1, 1)) # inf tensor used for masking inf_tensor = -float("inf") * tf.ones_like(input_tensor) # mask input_tensor = tf.where(tf.math.greater(mask_4d, 0), inf_tensor, input_tensor) return input_tensor def _sliding_chunks_matmul_attn_probs_value(self, attn_probs, value, window_overlap): """ Same as _sliding_chunks_query_key_matmul but for attn_probs and value tensors. Returned tensor will be of the same shape as `attn_probs` """ batch_size, seq_len, num_heads, head_dim = shape_list(value) if tf.executing_eagerly(): tf.debugging.assert_equal( seq_len % (window_overlap * 2), 0, message="Seq_len has to be multiple of 2 * window_overlap", ) tf.debugging.assert_equal( shape_list(attn_probs)[:3], shape_list(value)[:3], message="value and attn_probs must have same dims (except head_dim)", ) tf.debugging.assert_equal( shape_list(attn_probs)[3], 2 * window_overlap + 1, message="attn_probs last dim has to be 2 * window_overlap + 1", ) chunks_count = seq_len // window_overlap - 1 # group batch_size and num_heads dimensions into one, then chunk seq_len into chunks of size 2 window overlap chunked_attn_probs = tf.reshape( tf.transpose(attn_probs, (0, 2, 1, 3)), ( batch_size * num_heads, seq_len // window_overlap, window_overlap, 2 * window_overlap + 1, ), ) # group batch_size and num_heads dimensions into one value = tf.reshape( tf.transpose(value, (0, 2, 1, 3)), (batch_size * num_heads, seq_len, head_dim), ) # pad seq_len with w at the beginning of the sequence and another window overlap at the end paddings = tf.convert_to_tensor([[0, 0], [window_overlap, window_overlap], [0, 0]]) padded_value = tf.pad(value, paddings, constant_values=-1) # chunk padded_value into chunks of size 3 window overlap and an overlap of size window overlap frame_size = 3 * window_overlap * head_dim frame_hop_size = (shape_list(padded_value)[1] * head_dim - frame_size) // chunks_count chunked_value = tf.signal.frame( tf.reshape(padded_value, (batch_size * num_heads, -1)), frame_size, frame_hop_size, ) chunked_value = tf.reshape( chunked_value, (batch_size * num_heads, chunks_count + 1, 3 * window_overlap, head_dim), ) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(chunked_value), [batch_size * num_heads, chunks_count + 1, 3 * window_overlap, head_dim], message="Chunked value has the wrong shape", ) chunked_attn_probs = self._pad_and_diagonalize(chunked_attn_probs) context = tf.einsum("bcwd,bcdh->bcwh", chunked_attn_probs, chunked_value) context = tf.transpose( tf.reshape(context, (batch_size, num_heads, seq_len, head_dim)), (0, 2, 1, 3), ) return context @staticmethod def _pad_and_transpose_last_two_dims(hidden_states_padded, paddings): """pads rows and then flips rows and columns""" hidden_states_padded = tf.pad( hidden_states_padded, paddings ) # padding value is not important because it will be overwritten batch_size, chunk_size, seq_length, hidden_dim = shape_list(hidden_states_padded) hidden_states_padded = 
tf.reshape(hidden_states_padded, (batch_size, chunk_size, hidden_dim, seq_length)) return hidden_states_padded @staticmethod def _pad_and_diagonalize(chunked_hidden_states): """ shift every row 1 step right, converting columns into diagonals. Example:: chunked_hidden_states: [ 0.4983, 2.6918, -0.0071, 1.0492, -1.8348, 0.7672, 0.2986, 0.0285, -0.7584, 0.4206, -0.0405, 0.1599, 2.0514, -1.1600, 0.5372, 0.2629 ] window_overlap = num_rows = 4 (pad & diagonalize) => [ 0.4983, 2.6918, -0.0071, 1.0492, 0.0000, 0.0000, 0.0000 0.0000, -1.8348, 0.7672, 0.2986, 0.0285, 0.0000, 0.0000 0.0000, 0.0000, -0.7584, 0.4206, -0.0405, 0.1599, 0.0000 0.0000, 0.0000, 0.0000, 2.0514, -1.1600, 0.5372, 0.2629 ] """ total_num_heads, num_chunks, window_overlap, hidden_dim = shape_list(chunked_hidden_states) paddings = tf.convert_to_tensor([[0, 0], [0, 0], [0, 0], [0, window_overlap + 1]]) chunked_hidden_states = tf.pad( chunked_hidden_states, paddings ) # total_num_heads x num_chunks x window_overlap x (hidden_dim+window_overlap+1). Padding value is not important because it'll be overwritten chunked_hidden_states = tf.reshape( chunked_hidden_states, (total_num_heads, num_chunks, -1) ) # total_num_heads x num_chunks x window_overlapL+window_overlapwindow_overlap+window_overlap chunked_hidden_states = chunked_hidden_states[ :, :, :-window_overlap ] # total_num_heads x num_chunks x window_overlapL+window_overlapwindow_overlap chunked_hidden_states = tf.reshape( chunked_hidden_states, (total_num_heads, num_chunks, window_overlap, window_overlap + hidden_dim), ) # total_num_heads x num_chunks, window_overlap x hidden_dim+window_overlap chunked_hidden_states = chunked_hidden_states[:, :, :, :-1] return chunked_hidden_states @staticmethod def _chunk(hidden_states, window_overlap): """convert into overlapping chunks. Chunk size = 2w, overlap size = w""" batch_size, seq_length, hidden_dim = shape_list(hidden_states) num_output_chunks = 2 * (seq_length // (2 * window_overlap)) - 1 # define frame size and frame stride (similar to convolution) frame_hop_size = window_overlap * hidden_dim frame_size = 2 * frame_hop_size hidden_states = tf.reshape(hidden_states, (batch_size, seq_length * hidden_dim)) # chunk with overlap chunked_hidden_states = tf.signal.frame(hidden_states, frame_size, frame_hop_size) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(chunked_hidden_states), [batch_size, num_output_chunks, frame_size], message=f"Make sure chunking is correctly applied. 
`Chunked hidden states should have output dimension {[batch_size, frame_size, num_output_chunks]}, but got {shape_list(chunked_hidden_states)}.", ) chunked_hidden_states = tf.reshape( chunked_hidden_states, (batch_size, num_output_chunks, 2 * window_overlap, hidden_dim), ) return chunked_hidden_states @staticmethod def _get_global_attn_indices(is_index_global_attn): """ compute global attn indices required throughout forward pass """ # helper variable num_global_attn_indices = tf.math.count_nonzero(is_index_global_attn, axis=1) num_global_attn_indices = tf.cast(num_global_attn_indices, dtype=tf.constant(1).dtype) # max number of global attn indices in batch max_num_global_attn_indices = tf.reduce_max(num_global_attn_indices) # indices of global attn is_index_global_attn_nonzero = tf.where(is_index_global_attn) # helper variable is_local_index_global_attn = tf.range(max_num_global_attn_indices) < tf.expand_dims( num_global_attn_indices, axis=-1 ) # location of the non-padding values within global attention indices is_local_index_global_attn_nonzero = tf.where(is_local_index_global_attn) # location of the padding values within global attention indices is_local_index_no_global_attn_nonzero = tf.where(tf.math.logical_not(is_local_index_global_attn)) return ( max_num_global_attn_indices, is_index_global_attn_nonzero, is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, ) def _concat_with_global_key_attn_probs( self, attn_scores, key_vectors, query_vectors, max_num_global_attn_indices, is_index_global_attn_nonzero, is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, ): batch_size = shape_list(key_vectors)[0] # select global key vectors global_key_vectors = tf.gather_nd(key_vectors, is_index_global_attn_nonzero) # create only global key vectors key_vectors_only_global = tf.scatter_nd( is_local_index_global_attn_nonzero, global_key_vectors, shape=( batch_size, max_num_global_attn_indices, self.num_heads, self.head_dim, ), ) # (batch_size, seq_len, num_heads, max_num_global_attn_indices) attn_probs_from_global_key = tf.einsum("blhd,bshd->blhs", query_vectors, key_vectors_only_global) # (batch_size, max_num_global_attn_indices, seq_len, num_heads) attn_probs_from_global_key_trans = tf.transpose(attn_probs_from_global_key, (0, 3, 1, 2)) mask_shape = (shape_list(is_local_index_no_global_attn_nonzero)[0],) + tuple( shape_list(attn_probs_from_global_key_trans)[-2:] ) mask = tf.ones(mask_shape) * -10000.0 mask = tf.cast(mask, dtype=attn_probs_from_global_key_trans.dtype) # scatter mask attn_probs_from_global_key_trans = tf.tensor_scatter_nd_update( attn_probs_from_global_key_trans, is_local_index_no_global_attn_nonzero, mask, ) # (batch_size, seq_len, num_heads, max_num_global_attn_indices) attn_probs_from_global_key = tf.transpose(attn_probs_from_global_key_trans, (0, 2, 3, 1)) # concat to attn_probs # (batch_size, seq_len, num_heads, extra attention count + 2*window+1) attn_scores = tf.concat((attn_probs_from_global_key, attn_scores), axis=-1) return attn_scores def _compute_attn_output_with_global_indices( self, value_vectors, attn_probs, max_num_global_attn_indices, is_index_global_attn_nonzero, is_local_index_global_attn_nonzero, ): batch_size = shape_list(attn_probs)[0] # cut local attn probs to global only attn_probs_only_global = attn_probs[:, :, :, :max_num_global_attn_indices] # select global value vectors global_value_vectors = tf.gather_nd(value_vectors, is_index_global_attn_nonzero) # create only global value vectors value_vectors_only_global = 
tf.scatter_nd( is_local_index_global_attn_nonzero, global_value_vectors, shape=( batch_size, max_num_global_attn_indices, self.num_heads, self.head_dim, ), ) # compute attn output only global attn_output_only_global = tf.einsum("blhs,bshd->blhd", attn_probs_only_global, value_vectors_only_global) # reshape attn probs attn_probs_without_global = attn_probs[:, :, :, max_num_global_attn_indices:] # compute attn output with global attn_output_without_global = self._sliding_chunks_matmul_attn_probs_value( attn_probs_without_global, value_vectors, self.one_sided_attn_window_size ) return attn_output_only_global + attn_output_without_global def _compute_global_attn_output_from_hidden( self, attn_output, hidden_states, max_num_global_attn_indices, layer_head_mask, is_local_index_global_attn_nonzero, is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, is_index_masked, training, ): batch_size, seq_len = shape_list(hidden_states)[:2] # prepare global hidden states global_attn_hidden_states = tf.gather_nd(hidden_states, is_index_global_attn_nonzero) global_attn_hidden_states = tf.scatter_nd( is_local_index_global_attn_nonzero, global_attn_hidden_states, shape=(batch_size, max_num_global_attn_indices, self.embed_dim), ) # global key, query, value global_query_vectors_only_global = self.query_global(global_attn_hidden_states) global_key_vectors = self.key_global(hidden_states) global_value_vectors = self.value_global(hidden_states) # normalize global_query_vectors_only_global /= tf.math.sqrt( tf.cast(self.head_dim, dtype=global_query_vectors_only_global.dtype) ) global_query_vectors_only_global = self.reshape_and_transpose(global_query_vectors_only_global, batch_size) global_key_vectors = self.reshape_and_transpose(global_key_vectors, batch_size) global_value_vectors = self.reshape_and_transpose(global_value_vectors, batch_size) # compute attn scores global_attn_scores = tf.matmul(global_query_vectors_only_global, global_key_vectors, transpose_b=True) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(global_attn_scores), [batch_size * self.num_heads, max_num_global_attn_indices, seq_len], message=f"global_attn_scores have the wrong size. 
Size should be {(batch_size * self.num_heads, max_num_global_attn_indices, seq_len)}, but is {shape_list(global_attn_scores)}.", ) global_attn_scores = tf.reshape( global_attn_scores, (batch_size, self.num_heads, max_num_global_attn_indices, seq_len), ) global_attn_scores_trans = tf.transpose(global_attn_scores, (0, 2, 1, 3)) mask_shape = (shape_list(is_local_index_no_global_attn_nonzero)[0],) + tuple( shape_list(global_attn_scores_trans)[-2:] ) global_attn_mask = tf.ones(mask_shape) * -10000.0 global_attn_mask = tf.cast(global_attn_mask, dtype=global_attn_scores_trans.dtype) # scatter mask global_attn_scores_trans = tf.tensor_scatter_nd_update( global_attn_scores_trans, is_local_index_no_global_attn_nonzero, global_attn_mask, ) global_attn_scores = tf.transpose(global_attn_scores_trans, (0, 2, 1, 3)) # mask global attn scores attn_mask = tf.tile(is_index_masked[:, None, None, :], (1, shape_list(global_attn_scores)[1], 1, 1)) global_attn_scores = tf.where(attn_mask, -10000.0, global_attn_scores) global_attn_scores = tf.reshape( global_attn_scores, (batch_size * self.num_heads, max_num_global_attn_indices, seq_len), ) # compute global attn probs global_attn_probs_float = tf.nn.softmax(global_attn_scores, axis=-1) # apply layer head maskin if layer_head_mask is not None: if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(layer_head_mask), [self.num_heads], message=f"Head mask for a single layer should be of size {(self.num_heads)}, but is {shape_list(layer_head_mask)}", ) global_attn_probs_float = tf.reshape(layer_head_mask, (1, -1, 1, 1)) * tf.reshape( global_attn_probs_float, (batch_size, self.num_heads, max_num_global_attn_indices, seq_len) ) global_attn_probs_float = tf.reshape( global_attn_probs_float, (batch_size * self.num_heads, max_num_global_attn_indices, seq_len) ) # dropout global_attn_probs = self.global_dropout(global_attn_probs_float, training=training) # global attn output global_attn_output = tf.matmul(global_attn_probs, global_value_vectors) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(global_attn_output), [batch_size * self.num_heads, max_num_global_attn_indices, self.head_dim], message=f"global_attn_output tensor has the wrong size. 
Size should be {(batch_size * self.num_heads, max_num_global_attn_indices, self.head_dim)}, but is {shape_list(global_attn_output)}.", ) global_attn_output = tf.reshape( global_attn_output, (batch_size, self.num_heads, max_num_global_attn_indices, self.head_dim), ) # get only non zero global attn output nonzero_global_attn_output = tf.gather_nd( tf.transpose(global_attn_output, (0, 2, 1, 3)), is_local_index_global_attn_nonzero, ) nonzero_global_attn_output = tf.reshape( nonzero_global_attn_output, (shape_list(is_local_index_global_attn_nonzero)[0], -1), ) # overwrite values with global attention attn_output = tf.tensor_scatter_nd_update( attn_output, is_index_global_attn_nonzero, nonzero_global_attn_output ) global_attn_probs = tf.reshape( global_attn_probs, (batch_size, self.num_heads, max_num_global_attn_indices, seq_len) ) return attn_output, global_attn_probs def reshape_and_transpose(self, vector, batch_size): return tf.reshape( tf.transpose( tf.reshape(vector, (batch_size, -1, self.num_heads, self.head_dim)), (0, 2, 1, 3), ), (batch_size * self.num_heads, -1, self.head_dim), ) class TFLEDEncoderAttention(tf.keras.layers.Layer): def __init__(self, config, layer_id, **kwargs): super().__init__(**kwargs) self.longformer_self_attn = TFLEDEncoderSelfAttention(config, layer_id=layer_id, name="longformer_self_attn") self.output_dense = tf.keras.layers.Dense(config.d_model, use_bias=True, name="output") def call(self, inputs, training=False): ( hidden_states, attention_mask, layer_head_mask, is_index_masked, is_index_global_attn, is_global_attn, ) = inputs self_outputs = self.longformer_self_attn( [hidden_states, attention_mask, layer_head_mask, is_index_masked, is_index_global_attn, is_global_attn], training=training, ) attention_output = self.output_dense(self_outputs[0], training=training) outputs = (attention_output,) + self_outputs[1:] return outputs class TFLEDDecoderAttention(tf.keras.layers.Layer): """Multi-headed attention from "Attention Is All You Need""" def __init__( self, embed_dim: int, num_heads: int, dropout: float = 0.0, is_decoder: bool = False, bias: bool = True, **kwargs, ): super().__init__(**kwargs) self.embed_dim = embed_dim self.num_heads = num_heads self.dropout = tf.keras.layers.Dropout(dropout) self.head_dim = embed_dim // num_heads assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads" self.scaling = self.head_dim ** -0.5 self.is_decoder = is_decoder self.k_proj = tf.keras.layers.Dense(embed_dim, use_bias=bias, name="k_proj") self.q_proj = tf.keras.layers.Dense(embed_dim, use_bias=bias, name="q_proj") self.v_proj = tf.keras.layers.Dense(embed_dim, use_bias=bias, name="v_proj") self.out_proj = tf.keras.layers.Dense(embed_dim, use_bias=bias, name="out_proj") def _shape(self, tensor: tf.Tensor, seq_len: int, bsz: int): return tf.transpose(tf.reshape(tensor, (bsz, seq_len, self.num_heads, self.head_dim)), (0, 2, 1, 3)) def call( self, hidden_states: tf.Tensor, key_value_states: Optional[tf.Tensor] = None, past_key_value: Optional[Tuple[Tuple[tf.Tensor]]] = None, attention_mask: Optional[tf.Tensor] = None, layer_head_mask: Optional[tf.Tensor] = None, training=False, ) -> Tuple[tf.Tensor, Optional[tf.Tensor]]: """Input shape: Batch x Time x Channel""" # if key_value_states are provided this layer is used as a cross-attention layer # for the decoder is_cross_attention = key_value_states is not None bsz, tgt_len, embed_dim = shape_list(hidden_states) # get query proj query_states = self.q_proj(hidden_states) * self.scaling # get 
key, value proj if is_cross_attention and past_key_value is not None: # reuse k,v, cross_attentions key_states = past_key_value[0] value_states = past_key_value[1] elif is_cross_attention: # cross_attentions key_states = self._shape(self.k_proj(key_value_states), -1, bsz) value_states = self._shape(self.v_proj(key_value_states), -1, bsz) elif past_key_value is not None: # reuse k, v, self_attention key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz) key_states = tf.concat([past_key_value[0], key_states], axis=2) value_states = tf.concat([past_key_value[1], value_states], axis=2) else: # self_attention key_states = self._shape(self.k_proj(hidden_states), -1, bsz) value_states = self._shape(self.v_proj(hidden_states), -1, bsz) if self.is_decoder: # if cross_attention save Tuple(tf.Tensor, tf.Tensor) of all cross attention key/value_states. # Further calls to cross_attention layer can then reuse all cross-attention # key/value_states (first "if" case) # if uni-directional self-attention (decoder) save Tuple(tf.Tensor, tf.Tensor) of # all previous decoder key/value_states. Further calls to uni-directional self-attention # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) # if encoder bi-directional self-attention `past_key_value` is always `None` past_key_value = (key_states, value_states) proj_shape = (bsz * self.num_heads, -1, self.head_dim) query_states = tf.reshape(self._shape(query_states, tgt_len, bsz), proj_shape) key_states = tf.reshape(key_states, proj_shape) value_states = tf.reshape(value_states, proj_shape) src_len = shape_list(key_states)[1] attn_weights = tf.matmul(query_states, key_states, transpose_b=True) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(attn_weights), [bsz * self.num_heads, tgt_len, src_len], message=f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is {shape_list(attn_weights)}", ) if attention_mask is not None: if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(attention_mask), [bsz, 1, tgt_len, src_len], message=f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {shape_list(attention_mask)}", ) attn_weights = tf.reshape(attn_weights, (bsz, self.num_heads, tgt_len, src_len)) + tf.cast( attention_mask, dtype=attn_weights.dtype ) attn_weights = tf.reshape(attn_weights, (bsz * self.num_heads, tgt_len, src_len)) attn_weights = tf.nn.softmax(attn_weights, axis=-1) if layer_head_mask is not None: if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(layer_head_mask), [self.num_heads], message=f"Head mask for a single layer should be of size {(self.num_heads)}, but is {shape_list(layer_head_mask)}", ) attn_weights = tf.reshape(layer_head_mask, (1, -1, 1, 1)) * tf.reshape( attn_weights, (bsz, self.num_heads, tgt_len, src_len) ) attn_weights = tf.reshape(attn_weights, (bsz * self.num_heads, tgt_len, src_len)) attn_probs = self.dropout(attn_weights, training=training) attn_output = tf.matmul(attn_probs, value_states) if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(attn_output), [bsz * self.num_heads, tgt_len, self.head_dim], message=f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is {shape_list(attn_output)}", ) attn_output = tf.transpose( tf.reshape(attn_output, (bsz, self.num_heads, tgt_len, self.head_dim)), (0, 2, 1, 3) ) attn_output = tf.reshape(attn_output, (bsz, tgt_len, embed_dim)) 
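        # Editor's note (illustrative comment, not part of the original implementation): at this point the
        # per-head context vectors have been merged back into the model dimension. For hypothetical sizes
        # bsz=2, num_heads=12, head_dim=64 (so embed_dim=768) and tgt_len=5, the shapes evolve as:
        #   attn_output before the reshape/transpose above: (bsz * num_heads, tgt_len, head_dim) = (24, 5, 64)
        #   attn_output after the reshape/transpose above:  (bsz, tgt_len, embed_dim)            = (2, 5, 768)
        # The out_proj applied next is a single dense layer that mixes information across heads.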
attn_output = self.out_proj(attn_output) attn_weights: tf.Tensor = tf.reshape(attn_weights, (bsz, self.num_heads, tgt_len, src_len)) return attn_output, attn_weights, past_key_value class TFLEDEncoderLayer(tf.keras.layers.Layer): def __init__(self, config: LEDConfig, layer_id: int, **kwargs): super().__init__(**kwargs) self.embed_dim = config.d_model self.self_attn = TFLEDEncoderAttention(config, layer_id, name="self_attn") self.self_attn_layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="self_attn_layer_norm") self.dropout = tf.keras.layers.Dropout(config.dropout) self.activation_fn = get_tf_activation(config.activation_function) self.activation_dropout = tf.keras.layers.Dropout(config.activation_dropout) self.fc1 = tf.keras.layers.Dense(config.encoder_ffn_dim, name="fc1") self.fc2 = tf.keras.layers.Dense(self.embed_dim, name="fc2") self.final_layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="final_layer_norm") def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, layer_head_mask: tf.Tensor, is_index_masked: tf.Tensor, is_index_global_attn: tf.Tensor, is_global_attn: bool, training=False, ): """ Args: hidden_states (:obj:`tf.Tensor`): input to the layer of shape `(seq_len, batch, embed_dim)` attention_mask (:obj:`tf.Tensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (:obj:`tf.Tensor`): mask for attention heads in a given layer of size `(config.encoder_attention_heads,)`. """ residual = hidden_states layer_outputs = self.self_attn( [hidden_states, attention_mask, layer_head_mask, is_index_masked, is_index_global_attn, is_global_attn], training=training, ) hidden_states = layer_outputs[0] if tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(hidden_states), shape_list(residual), message=f"Self attn modified the shape of query {shape_list(residual)} to {shape_list(hidden_states)}", ) hidden_states = self.dropout(hidden_states, training=training) hidden_states = residual + hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) residual = hidden_states hidden_states = self.activation_fn(self.fc1(hidden_states)) hidden_states = self.activation_dropout(hidden_states, training=training) hidden_states = self.fc2(hidden_states) hidden_states = self.dropout(hidden_states, training=training) hidden_states = residual + hidden_states hidden_states = self.final_layer_norm(hidden_states) return (hidden_states,) + layer_outputs[1:] class TFLEDDecoderLayer(tf.keras.layers.Layer): def __init__(self, config: LEDConfig, **kwargs): super().__init__(**kwargs) self.embed_dim = config.d_model self.self_attn = TFLEDDecoderAttention( embed_dim=self.embed_dim, num_heads=config.decoder_attention_heads, dropout=config.attention_dropout, name="self_attn", is_decoder=True, ) self.dropout = tf.keras.layers.Dropout(config.dropout) self.activation_fn = get_tf_activation(config.activation_function) self.activation_dropout = tf.keras.layers.Dropout(config.activation_dropout) self.self_attn_layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="self_attn_layer_norm") self.encoder_attn = TFLEDDecoderAttention( self.embed_dim, config.decoder_attention_heads, dropout=config.attention_dropout, name="encoder_attn", is_decoder=True, ) self.encoder_attn_layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="encoder_attn_layer_norm") self.fc1 = tf.keras.layers.Dense(config.decoder_ffn_dim, name="fc1") self.fc2 = 
tf.keras.layers.Dense(self.embed_dim, name="fc2") self.final_layer_norm = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="final_layer_norm") def call( self, hidden_states, attention_mask: Optional[tf.Tensor] = None, encoder_hidden_states: Optional[tf.Tensor] = None, encoder_attention_mask: Optional[tf.Tensor] = None, layer_head_mask: Optional[tf.Tensor] = None, encoder_layer_head_mask: Optional[tf.Tensor] = None, past_key_value: Optional[Tuple[tf.Tensor]] = None, training=False, ) -> Tuple[tf.Tensor, tf.Tensor, Tuple[Tuple[tf.Tensor]]]: """ Args: hidden_states (:obj:`tf.Tensor`): input to the layer of shape `(seq_len, batch, embed_dim)` attention_mask (:obj:`tf.Tensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. encoder_hidden_states (:obj:`tf.Tensor`): cross attention input to the layer of shape `(seq_len, batch, embed_dim)` encoder_attention_mask (:obj:`tf.Tensor`): encoder attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (:obj:`tf.Tensor`): mask for attention heads in a given layer of size `(config.encoder_attention_heads,)`. encoder_layer_head_mask (:obj:`tf.Tensor`): mask for encoder attention heads in a given layer of size `(config.encoder_attention_heads,)`. past_key_value (:obj:`Tuple(tf.Tensor)`): cached past key and value projection states """ residual = hidden_states # Self Attention # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None # add present self-attn cache to positions 1,2 of present_key_value tuple hidden_states, self_attn_weights, present_key_value = self.self_attn( hidden_states=hidden_states, past_key_value=self_attn_past_key_value, attention_mask=attention_mask, layer_head_mask=layer_head_mask, ) hidden_states = self.dropout(hidden_states, training=training) hidden_states = residual + hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) # Cross-Attention Block cross_attn_present_key_value = None if encoder_hidden_states is not None: residual = hidden_states # cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None hidden_states, _, cross_attn_present_key_value = self.encoder_attn( hidden_states=hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, layer_head_mask=encoder_layer_head_mask, past_key_value=cross_attn_past_key_value, ) hidden_states = self.dropout(hidden_states, training=training) hidden_states = residual + hidden_states hidden_states = self.encoder_attn_layer_norm(hidden_states) # add cross-attn to positions 3,4 of present_key_value tuple present_key_value = present_key_value + cross_attn_present_key_value # Fully Connected residual = hidden_states hidden_states = self.activation_fn(self.fc1(hidden_states)) hidden_states = self.activation_dropout(hidden_states, training=training) hidden_states = self.fc2(hidden_states) hidden_states = self.dropout(hidden_states, training=training) hidden_states = residual + hidden_states hidden_states = self.final_layer_norm(hidden_states) return ( hidden_states, self_attn_weights, present_key_value, ) class TFLEDPreTrainedModel(TFPreTrainedModel): config_class = LEDConfig base_model_prefix = "led" @property def dummy_inputs(self): input_ids = 
tf.convert_to_tensor([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0]]) # make sure global layers are initialized attention_mask = tf.convert_to_tensor([[1, 1, 0, 0, 1], [1, 1, 1, 0, 0]]) global_attention_mask = tf.convert_to_tensor([[0, 0, 0, 0, 1], [0, 0, 1, 0, 0]]) dummy_inputs = { "input_ids": input_ids, "attention_mask": attention_mask, "global_attention_mask": global_attention_mask, "decoder_input_ids": input_ids, } return dummy_inputs @tf.function( input_signature=[ { "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"), "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"), "decoder_input_ids": tf.TensorSpec((None, None), tf.int32, name="decoder_input_ids"), "decoder_attention_mask": tf.TensorSpec((None, None), tf.int32, name="decoder_attention_mask"), } ] ) def serving(self, inputs): output = self.call(inputs) return self.serving_output(output) @dataclass # Copied from transformers.models.longformer.modeling_tf_longformer.TFLongformerBaseModelOutput with TFLongformer->TFLEDEncoder class TFLEDEncoderBaseModelOutput(ModelOutput): """ Base class for Longformer's outputs, with potential hidden states, local and global attentions. Args: last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the model. hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, x + attention_window + 1)`, where ``x`` is the number of tokens with global attention mask. Local attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token in the sequence to every token with global attention (first ``x`` values) and to every token in the attention window (remaining ``attention_window + 1`` values). Note that the first ``x`` values refer to tokens with fixed positions in the text, but the remaining ``attention_window + 1`` values refer to tokens with relative positions: the attention weight of a token to itself is located at index ``x + attention_window / 2`` and the ``attention_window / 2`` preceding (succeeding) values are the attention weights to the ``attention_window / 2`` preceding (succeeding) tokens. If the attention window contains a token with global attention, the attention weight at the corresponding index is set to 0; the value should be accessed from the first ``x`` attention weights. If a token has global attention, the attention weights to all other tokens in :obj:`attentions` is set to 0, the values should be accessed from :obj:`global_attentions`. global_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. 
Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. """ last_hidden_state: tf.Tensor = None hidden_states: Optional[Tuple[tf.Tensor]] = None attentions: Optional[Tuple[tf.Tensor]] = None global_attentions: Optional[Tuple[tf.Tensor]] = None @dataclass class TFLEDSeq2SeqModelOutput(ModelOutput): """ Base class for model encoder's outputs that also contains : pre-computed hidden states that can speed up sequential decoding. Args: last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`): Sequence of hidden-states at the output of the last layer of the decoder of the model. If :obj:`past_key_values` is used only the last hidden-state of the sequences of shape :obj:`(batch_size, 1, hidden_size)` is output. past_key_values (:obj:`List[tf.Tensor]`, `optional`, returned when ``use_cache=True`` is passed or when ``config.use_cache=True``): List of :obj:`tf.Tensor` of length :obj:`config.n_layers`, with each tensor of shape :obj:`(2, batch_size, num_heads, sequence_length, embed_size_per_head)`). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see :obj:`past_key_values` input) to speed up sequential decoding. decoder_hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. 
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. encoder_global_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, x)`, where ``x`` is the number of tokens with global attention mask. Global attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Those are the attention weights from every token with global attention to every token in the sequence. """ last_hidden_state: tf.Tensor = None past_key_values: Optional[List[tf.Tensor]] = None decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None decoder_attentions: Optional[Tuple[tf.Tensor]] = None cross_attentions: Optional[Tuple[tf.Tensor]] = None encoder_last_hidden_state: Optional[tf.Tensor] = None encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None encoder_attentions: Optional[Tuple[tf.Tensor]] = None encoder_global_attentions: Optional[Tuple[tf.Tensor]] = None @dataclass class TFLEDSeq2SeqLMOutput(ModelOutput): """ Base class for sequence-to-sequence language models outputs. Args: loss (:obj:`tf.Tensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`labels` is provided): Language modeling loss. logits (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (:obj:`List[tf.Tensor]`, `optional`, returned when ``use_cache=True`` is passed or when ``config.use_cache=True``): List of :obj:`tf.Tensor` of length :obj:`config.n_layers`, with each tensor of shape :obj:`(2, batch_size, num_heads, sequence_length, embed_size_per_head)`). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see :obj:`past_key_values` input) to speed up sequential decoding. decoder_hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``): Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of shape :obj:`(batch_size, sequence_length, hidden_size)`. Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``): Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length, sequence_length)`. Attentions weights of the decoder's cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Sequence of hidden-states at the output of the last layer of the encoder of the model. 
        encoder_hidden_states (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_hidden_states=True`` is passed or when ``config.output_hidden_states=True``):
            Tuple of :obj:`tf.Tensor` (one for the output of the embeddings + one for the output of each layer) of
            shape :obj:`(batch_size, sequence_length, hidden_size)`.

            Hidden-states of the encoder at the output of each layer plus the initial embedding outputs.
        encoder_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
            Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length,
            sequence_length)`.

            Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in
            the self-attention heads.
        encoder_global_attentions (:obj:`tuple(tf.Tensor)`, `optional`, returned when ``output_attentions=True`` is passed or when ``config.output_attentions=True``):
            Tuple of :obj:`tf.Tensor` (one for each layer) of shape :obj:`(batch_size, num_heads, sequence_length,
            x)`, where ``x`` is the number of tokens with global attention mask.

            Global attentions weights after the attention softmax, used to compute the weighted average in the
            self-attention heads. Those are the attention weights from every token with global attention to every
            token in the sequence.
    """

    loss: Optional[tf.Tensor] = None
    logits: tf.Tensor = None
    past_key_values: Optional[List[tf.Tensor]] = None
    decoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
    decoder_attentions: Optional[Tuple[tf.Tensor]] = None
    cross_attentions: Optional[Tuple[tf.Tensor]] = None
    encoder_last_hidden_state: Optional[tf.Tensor] = None
    encoder_hidden_states: Optional[Tuple[tf.Tensor]] = None
    encoder_attentions: Optional[Tuple[tf.Tensor]] = None
    encoder_global_attentions: Optional[Tuple[tf.Tensor]] = None


LED_START_DOCSTRING = r"""
    This model inherits from :class:`~transformers.TFPreTrainedModel`. Check the superclass documentation for the
    generic methods the library implements for all its models (such as downloading or saving, resizing the input
    embeddings, pruning heads etc.)

    This model is also a `tf.keras.Model <https://www.tensorflow.org/api_docs/python/tf/keras/Model>`__ subclass. Use
    it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage
    and behavior.

    .. note::

        TF 2.0 models accept two formats as inputs:

        - having all inputs as keyword arguments (like PyTorch models), or
        - having all inputs as a list, tuple or dict in the first positional argument.

        This second option is useful when using the :meth:`tf.keras.Model.fit` method, which currently requires having
        all the tensors in the first argument of the model call function: :obj:`model(inputs)`.

        If you choose this second option, there are three possibilities you can use to gather all the input Tensors in
        the first positional argument:

        - a single Tensor with :obj:`input_ids` only and nothing else: :obj:`model(input_ids)`
        - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
          :obj:`model([input_ids, attention_mask])` or :obj:`model([input_ids, attention_mask, token_type_ids])`
        - a dictionary with one or several input Tensors associated to the input names given in the docstring:
          :obj:`model({"input_ids": input_ids, "token_type_ids": token_type_ids})`

    Args:
        config (:class:`~transformers.LEDConfig`):
            Model configuration class with all the parameters of the model.
            Initializing with a config file does not load the weights associated with the model, only the
            configuration. Check out the :meth:`~transformers.TFPreTrainedModel.from_pretrained` method to load the
            model weights.
"""

LED_INPUTS_DOCSTRING = r"""
    Args:
        input_ids (:obj:`tf.Tensor` of shape :obj:`({0})`):
            Indices of input sequence tokens in the vocabulary.

            Indices can be obtained using :class:`~transformers.LEDTokenizer`. See
            :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
            details.

            `What are input IDs? <../glossary.html#input-ids>`__
        attention_mask (:obj:`tf.Tensor` of shape :obj:`({0})`, `optional`):
            Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:

            - 1 for tokens that are **not masked**,
            - 0 for tokens that are **masked**.

            `What are attention masks? <../glossary.html#attention-mask>`__
        decoder_input_ids (:obj:`tf.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
            Indices of decoder input sequence tokens in the vocabulary.

            Indices can be obtained using :class:`~transformers.LEDTokenizer`. See
            :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for
            details.

            `What are input IDs? <../glossary.html#input-ids>`__

            LED uses the :obj:`eos_token_id` as the starting token for :obj:`decoder_input_ids` generation. If
            :obj:`past_key_values` is used, optionally only the last :obj:`decoder_input_ids` have to be input (see
            :obj:`past_key_values`).
        decoder_attention_mask (:obj:`tf.Tensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
            A default mask that ignores pad tokens will be made if this argument is not provided. Setting it yourself
            is not recommended for most use cases.
        head_mask (:obj:`tf.Tensor` of shape :obj:`(encoder_layers, encoder_attention_heads)`, `optional`):
            Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in ``[0, 1]``:

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        decoder_head_mask (:obj:`tf.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
            Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in ``[0, 1]``:

            - 1 indicates the head is **not masked**,
            - 0 indicates the head is **masked**.
        encoder_outputs (:obj:`tf.FloatTensor`, `optional`):
            Sequence of hidden states at the output of the last layer of the encoder, of shape :obj:`(batch_size,
            sequence_length, hidden_size)`. Used in the cross-attention of the decoder.
        past_key_values (:obj:`Tuple[Tuple[tf.Tensor]]` of length :obj:`config.n_layers`):
            Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up
            decoding. If :obj:`past_key_values` are used, the user can optionally input only the last
            :obj:`decoder_input_ids` (those that don't have their past key value states given to this model) of shape
            :obj:`(batch_size, 1)` instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size,
            sequence_length)`.
        use_cache (:obj:`bool`, `optional`, defaults to :obj:`True`):
            If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
            decoding (see :obj:`past_key_values`). Set to :obj:`False` during training and :obj:`True` during
            generation.
        output_attentions (:obj:`bool`, `optional`):
            Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
            tensors for more detail.
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). """ @keras_serializable class TFLEDEncoder(tf.keras.layers.Layer): config_class = LEDConfig """ Transformer encoder consisting of *config.encoder_layers* self attention layers. Each layer is a :class:`TFLEDEncoderLayer`. Args: config: LEDConfig """ def __init__(self, config: LEDConfig, embed_tokens: Optional[TFSharedEmbeddings] = None, **kwargs): super().__init__(**kwargs) self.config = config self.dropout = tf.keras.layers.Dropout(config.dropout) self.layerdrop = config.encoder_layerdrop self.padding_idx = config.pad_token_id if isinstance(config.attention_window, int): assert config.attention_window % 2 == 0, "`config.attention_window` has to be an even value" assert config.attention_window > 0, "`config.attention_window` has to be positive" config.attention_window = [config.attention_window] * config.num_hidden_layers # one value per layer else: assert len(config.attention_window) == config.num_hidden_layers, ( "`len(config.attention_window)` should equal `config.num_hidden_layers`. " f"Expected {config.num_hidden_layers}, given {len(config.attention_window)}" ) self.attention_window = config.attention_window self.embed_tokens = embed_tokens self.embed_positions = TFLEDLearnedPositionalEmbedding( config.max_encoder_position_embeddings, config.d_model, name="embed_positions", ) self.layers = [TFLEDEncoderLayer(config, i, name=f"layers.{i}") for i in range(config.encoder_layers)] self.layernorm_embedding = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="layernorm_embedding") def get_embed_tokens(self): return self.embed_tokens def set_embed_tokens(self, embed_tokens): self.embed_tokens = embed_tokens def call( self, input_ids=None, inputs_embeds=None, attention_mask=None, global_attention_mask=None, head_mask=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs, ): """ Args: input_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using :class:`~transformers.LEDTokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for details. `What are input IDs? <../glossary.html#input-ids>`__ attention_mask (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? 
<../glossary.html#attention-mask>`__ head_mask (:obj:`tf.Tensor` of shape :obj:`(num_layers, num_heads)`, `optional`): Mask to nullify selected heads of the attention modules. Mask values selected in ``[0, 1]``: - 1 indicates the head is **not masked**, - 0 indicates the heas is **masked**. inputs_embeds (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, global_attention_mask=global_attention_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif inputs["input_ids"] is not None: input_shape = shape_list(inputs["input_ids"]) inputs["inputs_embeds"] = self.embed_tokens(inputs["input_ids"]) elif inputs["inputs_embeds"] is not None: input_shape = shape_list(inputs["inputs_embeds"])[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs["attention_mask"] is None: inputs["attention_mask"] = tf.fill(input_shape, 1) # merge `global_attention_mask` and `attention_mask` if inputs["global_attention_mask"] is not None: inputs["attention_mask"] = inputs["global_attention_mask"] + 1 ( padding_len, inputs["input_ids"], inputs["attention_mask"], inputs["inputs_embeds"], ) = self._pad_to_window_size( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], inputs_embeds=inputs["inputs_embeds"], pad_token_id=self.padding_idx, ) input_shape = shape_list(inputs["attention_mask"]) # is index masked or global attention is_index_masked = tf.math.less(tf.cast(inputs["attention_mask"], tf.int8), 1) is_index_global_attn = tf.math.greater(tf.cast(inputs["attention_mask"], tf.int8), 1) is_global_attn = tf.math.reduce_any(is_index_global_attn) embed_pos = self.embed_positions(input_shape) hidden_states = inputs["inputs_embeds"] + embed_pos hidden_states = self.layernorm_embedding(hidden_states) hidden_states = self.dropout(hidden_states, training=inputs["training"]) # check attention mask and invert if inputs["attention_mask"] is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] inputs["attention_mask"] = _expand_mask(inputs["attention_mask"])[:, 0, 0, :] inputs["attention_mask"] = inputs["attention_mask"][:, :, None, None] encoder_states = () if inputs["output_hidden_states"] else None all_attentions = all_global_attentions = () if inputs["output_attentions"] else None # check if head_mask has a correct number of layers specified if desired if 
inputs["head_mask"] is not None and tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(inputs["head_mask"])[0], len(self.layers), message=f"The head_mask should be specified for {len(self.layers)} layers, but it is for {shape_list(inputs['head_mask'])[0]}.", ) # encoder layers for idx, encoder_layer in enumerate(self.layers): if inputs["output_hidden_states"]: hidden_states_to_add = self.compute_hidden_states(hidden_states, padding_len) encoder_states = encoder_states + (hidden_states_to_add,) # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) dropout_probability = random.uniform(0, 1) if inputs["training"] and (dropout_probability < self.layerdrop): # skip the layer continue layer_outputs = encoder_layer( hidden_states=hidden_states, attention_mask=inputs["attention_mask"], layer_head_mask=inputs["head_mask"][idx] if inputs["head_mask"] is not None else None, is_index_masked=is_index_masked, is_index_global_attn=is_index_global_attn, is_global_attn=is_global_attn, ) hidden_states = layer_outputs[0] if inputs["output_attentions"]: # bzs x seq_len x num_attn_heads x (num_global_attn + attention_window_len + 1) => bzs x num_attn_heads x seq_len x (num_global_attn + attention_window_len + 1) all_attentions = all_attentions + (tf.transpose(layer_outputs[1], (0, 2, 1, 3)),) # bzs x num_attn_heads x num_global_attn x seq_len => bzs x num_attn_heads x seq_len x num_global_attn all_global_attentions = all_global_attentions + (tf.transpose(layer_outputs[2], (0, 1, 3, 2)),) # undo padding # unpad `hidden_states` because the calling function is expecting a length == input_ids.size(1) hidden_states = self.compute_hidden_states(hidden_states, padding_len) if inputs["output_hidden_states"]: encoder_states = encoder_states + (hidden_states,) if not inputs["return_dict"]: return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None) return TFLEDEncoderBaseModelOutput( last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions, global_attentions=all_global_attentions, ) @tf.function def compute_hidden_states(self, hidden_states, padding_len): return hidden_states[:, :-padding_len] if padding_len > 0 else hidden_states def _pad_to_window_size( self, input_ids, attention_mask, inputs_embeds, pad_token_id, ): """A helper function to pad tokens and mask to work with implementation of Longformer selfattention.""" # padding attention_window = ( self.attention_window if isinstance(self.attention_window, int) else max(self.attention_window) ) assert attention_window % 2 == 0, f"`attention_window` should be an even value. 
Given {attention_window}" input_shape = shape_list(input_ids) if input_ids is not None else shape_list(inputs_embeds) batch_size, seq_len = input_shape[:2] padding_len = (attention_window - seq_len % attention_window) % attention_window if padding_len > 0: logger.info( "Input ids are automatically padded from {} to {} to be a multiple of `config.attention_window`: {}".format( seq_len, seq_len + padding_len, attention_window ) ) paddings = tf.convert_to_tensor([[0, 0], [0, padding_len]]) if input_ids is not None: input_ids = tf.pad(input_ids, paddings, constant_values=pad_token_id) if inputs_embeds is not None: def pad_embeddings(): input_ids_padding = tf.fill((batch_size, padding_len), pad_token_id) inputs_embeds_padding = self.embed_tokens(input_ids_padding) return tf.concat([inputs_embeds, inputs_embeds_padding], axis=-2) inputs_embeds = tf.cond(tf.math.greater(padding_len, 0), pad_embeddings, lambda: inputs_embeds) attention_mask = tf.pad(attention_mask, paddings, constant_values=False) # no attention on the padding tokens return ( padding_len, input_ids, attention_mask, inputs_embeds, ) @keras_serializable class TFLEDDecoder(tf.keras.layers.Layer): config_class = LEDConfig """ Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a :class:`TFLEDDecoderLayer` Args: config: LEDConfig embed_tokens: output embedding """ def __init__(self, config: LEDConfig, embed_tokens: Optional[TFSharedEmbeddings] = None, **kwargs): super().__init__(**kwargs) self.config = config self.padding_idx = config.pad_token_id self.embed_tokens = embed_tokens self.layerdrop = config.decoder_layerdrop self.embed_positions = TFLEDLearnedPositionalEmbedding( config.max_decoder_position_embeddings, config.d_model, name="embed_positions", ) self.layers = [TFLEDDecoderLayer(config, name=f"layers.{i}") for i in range(config.decoder_layers)] self.layernorm_embedding = tf.keras.layers.LayerNormalization(epsilon=1e-5, name="layernorm_embedding") self.dropout = tf.keras.layers.Dropout(config.dropout) def set_embed_tokens(self, embed_tokens): self.embed_tokens = embed_tokens def call( self, input_ids=None, inputs_embeds=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, head_mask=None, encoder_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs, ): r""" Args: input_ids (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using :class:`~transformers.LEDTokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for details. `What are input IDs? <../glossary.html#input-ids>`__ attention_mask (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ encoder_hidden_states (:obj:`tf.Tensor` of shape :obj:`(batch_size, encoder_sequence_length, hidden_size)`, `optional`): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. 
            encoder_attention_mask (:obj:`tf.Tensor` of shape :obj:`(batch_size, encoder_sequence_length)`, `optional`):
                Mask to avoid performing cross-attention on padding token indices of encoder input_ids. Mask values
                selected in ``[0, 1]``:

                - 1 for tokens that are **not masked**,
                - 0 for tokens that are **masked**.

                `What are attention masks? <../glossary.html#attention-mask>`__
            head_mask (:obj:`tf.Tensor` of shape :obj:`(decoder_layers, decoder_attention_heads)`, `optional`):
                Mask to nullify selected heads of the attention modules. Mask values selected in ``[0, 1]``:

                - 1 indicates the head is **not masked**,
                - 0 indicates the head is **masked**.
            encoder_head_mask (:obj:`tf.Tensor` of shape :obj:`(encoder_layers, encoder_attention_heads)`, `optional`):
                Mask to nullify selected heads of the attention modules in the encoder to avoid performing
                cross-attention on hidden heads. Mask values selected in ``[0, 1]``:

                - 1 indicates the head is **not masked**,
                - 0 indicates the head is **masked**.
            past_key_values (:obj:`Tuple[Tuple[tf.Tensor]]` of length :obj:`config.n_layers` with each tuple having 2 tuples each of which has 2 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
                Contains precomputed key and value hidden-states of the attention blocks. Can be used to speed up
                decoding. If :obj:`past_key_values` are used, the user can optionally input only the last
                :obj:`decoder_input_ids` (those that don't have their past key value states given to this model) of
                shape :obj:`(batch_size, 1)` instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size,
                sequence_length)`.
            inputs_embeds (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
                Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded
                representation. This is useful if you want more control over how to convert :obj:`input_ids` indices
                into associated vectors than the model's internal embedding lookup matrix.
            output_attentions (:obj:`bool`, `optional`):
                Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under
                returned tensors for more detail.
            output_hidden_states (:obj:`bool`, `optional`):
                Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors
                for more detail.
            return_dict (:obj:`bool`, `optional`):
                Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
""" inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, head_mask=head_mask, encoder_head_mask=encoder_head_mask, inputs_embeds=inputs_embeds, past_key_values=past_key_values, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None: raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") elif inputs["input_ids"] is not None: input_shape = shape_list(inputs["input_ids"]) elif inputs["inputs_embeds"] is not None: input_shape = shape_list(inputs["inputs_embeds"])[:-1] else: raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") past_key_values_length = ( shape_list(inputs["past_key_values"][0][0])[2] if inputs["past_key_values"] is not None else 0 ) # embed positions positions = self.embed_positions(input_shape, past_key_values_length) if inputs["inputs_embeds"] is None: inputs["inputs_embeds"] = self.embed_tokens(inputs["input_ids"]) hidden_states = inputs["inputs_embeds"] # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] if input_shape[-1] > 1: combined_attention_mask = _make_causal_mask(input_shape, past_key_values_length=past_key_values_length) else: combined_attention_mask = _expand_mask( tf.ones((input_shape[0], input_shape[1] + past_key_values_length)), tgt_len=input_shape[-1] ) if inputs["attention_mask"] is not None and input_shape[-1] > 1: combined_attention_mask = combined_attention_mask + _expand_mask( inputs["attention_mask"], tgt_len=input_shape[-1] ) if inputs["encoder_hidden_states"] is not None and inputs["encoder_attention_mask"] is not None: # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] inputs["encoder_attention_mask"] = _expand_mask(inputs["encoder_attention_mask"], tgt_len=input_shape[-1]) hidden_states = self.layernorm_embedding(hidden_states + positions) hidden_states = self.dropout(hidden_states, training=inputs["training"]) # decoder layers all_hidden_states = () all_self_attns = () present_key_values = () # check if head_mask has a correct number of layers specified if desired if inputs["head_mask"] is not None and tf.executing_eagerly(): tf.debugging.assert_equal( shape_list(inputs["head_mask"])[0], len(self.layers), message=f"The head_mask should be specified for {len(self.layers)} layers, but it is for {shape_list(inputs['head_mask'])[0]}.", ) for idx, decoder_layer in enumerate(self.layers): # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) if inputs["output_hidden_states"]: all_hidden_states += (hidden_states,) dropout_probability = random.uniform(0, 1) if inputs["training"] and (dropout_probability < self.layerdrop): continue past_key_value = inputs["past_key_values"][idx] if inputs["past_key_values"] is not None else None hidden_states, layer_self_attn, present_key_value = decoder_layer( hidden_states, attention_mask=combined_attention_mask, encoder_hidden_states=inputs["encoder_hidden_states"], encoder_attention_mask=inputs["encoder_attention_mask"], layer_head_mask=inputs["head_mask"][idx] if inputs["head_mask"] is not None else None, encoder_layer_head_mask=inputs["encoder_head_mask"][idx] if inputs["encoder_head_mask"] is not None else None, past_key_value=past_key_value, ) if inputs["use_cache"]: 
present_key_values += (present_key_value,) if inputs["output_attentions"]: all_self_attns += (layer_self_attn,) if inputs["output_hidden_states"]: all_hidden_states += (hidden_states,) else: all_hidden_states = None all_self_attns = list(all_self_attns) if inputs["output_attentions"] else None present_key_values = (encoder_hidden_states, present_key_values) if inputs["use_cache"] else None if not inputs["return_dict"]: return hidden_states, present_key_values, all_hidden_states, all_self_attns else: return TFBaseModelOutputWithPast( last_hidden_state=hidden_states, past_key_values=present_key_values, hidden_states=all_hidden_states, attentions=all_self_attns, ) @keras_serializable class TFLEDMainLayer(tf.keras.layers.Layer): config_class = LEDConfig def __init__(self, config: LEDConfig, **kwargs): super().__init__(**kwargs) self.config = config self.shared = TFSharedEmbeddings(config.vocab_size, config.d_model, config.pad_token_id, name="led.shared") with tf.compat.v1.variable_scope("led.shared") as shared_abs_scope_name: pass # Wraps layer to avoid problems with weight restoring and ensuring we're in the correct TF scope. embed_tokens = TFWrappedEmbeddings(self.shared, abs_scope_name=shared_abs_scope_name) embed_tokens.vocab_size = self.shared.vocab_size embed_tokens.hidden_size = self.shared.hidden_size self.encoder = TFLEDEncoder(config, embed_tokens, name="encoder") self.decoder = TFLEDDecoder(config, embed_tokens, name="decoder") def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared.weight = new_embeddings self.shared.vocab_size = self.shared.weight.shape[0] # retrieve correct absolute scope for embed token wrapper with tf.compat.v1.variable_scope("led.shared") as shared_abs_scope_name: pass # Wraps layer to avoid problems with weight restoring and ensuring we're in the correct TF scope. 
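        # Editor's note (explanatory comment, not original code): the encoder and decoder share a single
        # TFSharedEmbeddings instance, so after swapping in new weights both sub-modules are pointed at a
        # freshly wrapped view of `self.shared`. The dummy `tf.compat.v1.variable_scope("led.shared")`
        # context above only serves to recover the absolute name scope under which the shared weights were
        # created, which keeps variable names stable when saving and restoring checkpoints.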
embed_tokens = TFWrappedEmbeddings(self.shared, abs_scope_name=shared_abs_scope_name) self.encoder.set_embed_tokens(embed_tokens) self.decoder.set_embed_tokens(embed_tokens) def call( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs: Optional[Union[Tuple, TFLEDEncoderBaseModelOutput]] = None, global_attention_mask=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs ): inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, encoder_outputs=encoder_outputs, global_attention_mask=global_attention_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["decoder_input_ids"] is None and inputs["decoder_inputs_embeds"] is None: inputs["use_cache"] = False if inputs["encoder_outputs"] is None: inputs["encoder_outputs"] = self.encoder( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], global_attention_mask=inputs["global_attention_mask"], head_mask=inputs["head_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) # If the user passed a tuple for encoder_outputs, we wrap it in a TFLEDEncoderBaseModelOutput when return_dict=True elif inputs["return_dict"] and not isinstance(inputs["encoder_outputs"], TFLEDEncoderBaseModelOutput): inputs["encoder_outputs"] = TFLEDEncoderBaseModelOutput( last_hidden_state=inputs["encoder_outputs"][0], hidden_states=inputs["encoder_outputs"][1] if len(inputs["encoder_outputs"]) > 1 else None, attentions=inputs["encoder_outputs"][2] if len(inputs["encoder_outputs"]) > 2 else None, ) # If the user passed a TFLEDEncoderBaseModelOutput for encoder_outputs, we wrap it in a tuple when return_dict=False elif not inputs["return_dict"] and not isinstance(inputs["encoder_outputs"], tuple): inputs["encoder_outputs"] = inputs["encoder_outputs"].to_tuple() decoder_outputs = self.decoder( inputs["decoder_input_ids"], attention_mask=inputs["decoder_attention_mask"], encoder_hidden_states=inputs["encoder_outputs"][0], encoder_attention_mask=inputs["attention_mask"], head_mask=inputs["decoder_head_mask"], encoder_head_mask=inputs["head_mask"], past_key_values=inputs["past_key_values"], inputs_embeds=inputs["decoder_inputs_embeds"], use_cache=inputs["use_cache"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) if not inputs["return_dict"]: return decoder_outputs + inputs["encoder_outputs"] return TFLEDSeq2SeqModelOutput( last_hidden_state=decoder_outputs.last_hidden_state, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, encoder_last_hidden_state=inputs["encoder_outputs"].last_hidden_state, 
encoder_hidden_states=inputs["encoder_outputs"].hidden_states, encoder_attentions=inputs["encoder_outputs"].attentions, encoder_global_attentions=inputs["encoder_outputs"].global_attentions, ) @add_start_docstrings( "The bare LED Model outputting raw hidden-states without any specific head on top.", LED_START_DOCSTRING, ) class TFLEDModel(TFLEDPreTrainedModel): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.led = TFLEDMainLayer(config, name="led") def get_encoder(self): return self.led.encoder def get_decoder(self): return self.led.decoder @add_start_docstrings_to_model_forward(LED_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFLEDSeq2SeqModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs: Optional[Union[Tuple, TFLEDEncoderBaseModelOutput]] = None, global_attention_mask=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs ): inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, encoder_outputs=encoder_outputs, global_attention_mask=global_attention_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) outputs = self.led( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], decoder_input_ids=inputs["decoder_input_ids"], decoder_attention_mask=inputs["decoder_attention_mask"], encoder_outputs=inputs["encoder_outputs"], global_attention_mask=inputs["global_attention_mask"], head_mask=inputs["head_mask"], decoder_head_mask=inputs["decoder_head_mask"], past_key_values=inputs["past_key_values"], inputs_embeds=inputs["inputs_embeds"], decoder_inputs_embeds=inputs["decoder_inputs_embeds"], use_cache=inputs["use_cache"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) return outputs def serving_output(self, output): pkv = tf.tuple(output.past_key_values)[1] if self.config.use_cache else None dec_hs = tf.convert_to_tensor(output.decoder_hidden_states) if self.config.output_hidden_states else None dec_attns = tf.convert_to_tensor(output.decoder_attentions) if self.config.output_attentions else None enc_hs = tf.convert_to_tensor(output.encoder_hidden_states) if self.config.output_hidden_states else None enc_attns = tf.convert_to_tensor(output.encoder_attentions) if self.config.output_attentions else None enc_g_attns = tf.convert_to_tensor(output.encoder_global_attentions) if self.config.output_attentions else None return TFLEDSeq2SeqModelOutput( last_hidden_state=output.last_hidden_state, past_key_values=pkv, decoder_hidden_states=dec_hs, decoder_attentions=dec_attns, encoder_last_hidden_state=output.encoder_last_hidden_state, encoder_hidden_states=enc_hs, encoder_attentions=enc_attns, 
encoder_global_attentions=enc_g_attns, ) @add_start_docstrings( "The LED Model with a language modeling head. Can be used for summarization.", LED_START_DOCSTRING, ) class TFLEDForConditionalGeneration(TFLEDPreTrainedModel): _keys_to_ignore_on_load_unexpected = [ r"led.encoder.embed_tokens.weight", r"led.decoder.embed_tokens.weight", ] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.led = TFLEDMainLayer(config, name="led") self.use_cache = config.use_cache # final_bias_logits is registered as a buffer in pytorch, so not trainable for the the sake of consistency. self.final_logits_bias = self.add_weight( name="final_logits_bias", shape=[1, config.vocab_size], initializer="zeros", trainable=False ) def get_decoder(self): return self.led.decoder def get_encoder(self): return self.led.encoder def get_bias(self): return {"final_logits_bias": self.final_logits_bias} def set_bias(self, value): self.final_logits_bias = value["final_logits_bias"] def get_output_embeddings(self): return self.get_input_embeddings() def set_output_embeddings(self, value): self.set_input_embeddings(value) @add_start_docstrings_to_model_forward(LED_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=TFLEDSeq2SeqLMOutput, config_class=_CONFIG_FOR_DOC) def call( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs: Optional[TFLEDEncoderBaseModelOutput] = None, global_attention_mask=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs, ): """ Returns: Examples:: >>> from transformers import LEDTokenizer, TFLEDForConditionalGeneration >>> import tensorflow as tf >>> mname = 'allenai/led-base-16384' >>> tokenizer = LEDTokenizer.from_pretrained(mname) >>> TXT = "My friends are <mask> but they eat too many carbs." 
>>> model = TFLEDForConditionalGeneration.from_pretrained(mname) >>> batch = tokenizer([TXT], return_tensors='tf') >>> logits = model(inputs=batch.input_ids).logits >>> probs = tf.nn.softmax(logits[0]) >>> # probs[5] is associated with the mask token """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, head_mask=head_mask, decoder_head_mask=decoder_head_mask, encoder_outputs=encoder_outputs, global_attention_mask=global_attention_mask, past_key_values=past_key_values, inputs_embeds=inputs_embeds, decoder_inputs_embeds=decoder_inputs_embeds, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, labels=labels, training=training, kwargs_call=kwargs, ) if inputs["labels"] is not None: inputs["use_cache"] = False if inputs["decoder_input_ids"] is None: inputs["decoder_input_ids"] = shift_tokens_right( inputs["labels"], self.config.pad_token_id, self.config.decoder_start_token_id ) outputs = self.led( inputs["input_ids"], attention_mask=inputs["attention_mask"], decoder_input_ids=inputs["decoder_input_ids"], decoder_attention_mask=inputs["decoder_attention_mask"], encoder_outputs=inputs["encoder_outputs"], global_attention_mask=inputs["global_attention_mask"], head_mask=inputs["head_mask"], decoder_head_mask=inputs["decoder_head_mask"], past_key_values=inputs["past_key_values"], inputs_embeds=inputs["inputs_embeds"], decoder_inputs_embeds=inputs["decoder_inputs_embeds"], use_cache=inputs["use_cache"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) lm_logits = self.led.shared(outputs[0], mode="linear") lm_logits = lm_logits + self.final_logits_bias masked_lm_loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], lm_logits) if not inputs["return_dict"]: output = (lm_logits,) + outputs[1:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return TFLEDSeq2SeqLMOutput( loss=masked_lm_loss, logits=lm_logits, past_key_values=outputs.past_key_values, # index 1 of d outputs decoder_hidden_states=outputs.decoder_hidden_states, # index 2 of d outputs decoder_attentions=outputs.decoder_attentions, # index 3 of d outputs encoder_last_hidden_state=outputs.last_hidden_state, # index 0 of encoder outputs encoder_hidden_states=outputs.encoder_hidden_states, # 1 of e out encoder_attentions=outputs.encoder_attentions, # 2 of e out encoder_global_attentions=outputs.encoder_global_attentions, ) def serving_output(self, output): pkv = tf.tuple(output.past_key_values)[1] if self.config.use_cache else None dec_hs = tf.convert_to_tensor(output.decoder_hidden_states) if self.config.output_hidden_states else None dec_attns = tf.convert_to_tensor(output.decoder_attentions) if self.config.output_attentions else None enc_hs = tf.convert_to_tensor(output.encoder_hidden_states) if self.config.output_hidden_states else None enc_attns = tf.convert_to_tensor(output.encoder_attentions) if self.config.output_attentions else None enc_g_attns = tf.convert_to_tensor(output.encoder_global_attentions) if self.config.output_attentions else None return TFLEDSeq2SeqLMOutput( logits=output.logits, past_key_values=pkv, decoder_hidden_states=dec_hs, decoder_attentions=dec_attns, encoder_last_hidden_state=output.encoder_last_hidden_state, 
encoder_hidden_states=enc_hs, encoder_attentions=enc_attns, encoder_global_attentions=enc_g_attns, ) def prepare_inputs_for_generation(self, decoder_input_ids, past, attention_mask, use_cache, **kwargs) -> Dict: assert past is not None and len(past) in {1, 2}, f"past has to be an iterable of length 1,2 got {past}" if len(past) == 1: assert isinstance(past[0], tf.Tensor), f"`past[0]` has to be of type `tf.Tensor`, but is {type(past[0])}" encoder_outputs = TFLEDEncoderBaseModelOutput(last_hidden_state=past[0]) past_key_values = None else: assert ( len(past) == 2 ), "`past` has to be of length 2 with the encoder_outputs at the first position and past_key_values at the second position." encoder_outputs, past_key_values = past if isinstance(encoder_outputs, tuple): assert isinstance( encoder_outputs[0], tf.Tensor ), f"`encoder_outputs[0]` has to be of type `tf.Tensor`, but is {type(encoder_outputs[0])}" encoder_outputs = TFLEDEncoderBaseModelOutput(last_hidden_state=encoder_outputs[0]) elif isinstance(encoder_outputs, tf.Tensor): encoder_outputs = TFLEDEncoderBaseModelOutput(last_hidden_state=encoder_outputs) assert ( past_key_values ), f"decoder cached states must be truthy. got {past_key_values} from the 2nd element of past" decoder_input_ids = decoder_input_ids[:, -1:] assert isinstance( encoder_outputs, TFLEDEncoderBaseModelOutput, ), f"encoder_outputs should be a TFLEDEncoderBaseModelOutput, Instead got {type(encoder_outputs)}." return { "input_ids": None, # encoder_outputs is defined. input_ids not needed "encoder_outputs": encoder_outputs, "past_key_values": past_key_values, "decoder_input_ids": decoder_input_ids, "attention_mask": attention_mask, "use_cache": use_cache, # change this to avoid caching (presumably for debugging) } @staticmethod def _reorder_cache(past, beam_idx): if len(past) == 1: return past past_key_values = past[1] reordered_past = () for layer_past_key_values in past_key_values: reordered_past += ( tuple(tf.gather(layer_past_key_value, beam_idx) for layer_past_key_value in layer_past_key_values[:2]) + layer_past_key_values[2:], ) return (past[0], reordered_past) def compute_loss(self, labels, logits): """CrossEntropyLoss that ignores pad tokens""" loss_fn = tf.keras.losses.SparseCategoricalCrossentropy( from_logits=True, reduction=tf.keras.losses.Reduction.NONE, ) melted_labels = tf.reshape(labels, (-1,)) active_loss = tf.not_equal(melted_labels, self.config.pad_token_id) reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) labels = tf.boolean_mask(melted_labels, active_loss) return loss_fn(labels, reduced_logits)
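A minimal, self-contained sketch of the pad-token masking that `compute_loss` above performs, run on toy tensors; the `pad_token_id` value, batch shape, and vocabulary size are illustrative assumptions, not values from a real LED checkpoint.

# Hedged sketch: mirrors the masking logic of TFLEDForConditionalGeneration.compute_loss.
# pad_token_id=1 and vocab_size=10 are assumptions for illustration only.
import tensorflow as tf

pad_token_id = 1
labels = tf.constant([[4, 7, 1, 1]])        # (batch=1, seq_len=4); last two positions are padding
logits = tf.random.normal((1, 4, 10))       # (batch, seq_len, vocab_size)

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE
)
flat_labels = tf.reshape(labels, (-1,))
active_loss = tf.not_equal(flat_labels, pad_token_id)               # True only for real tokens
reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, 10)), active_loss)
active_labels = tf.boolean_mask(flat_labels, active_loss)
per_token_loss = loss_fn(active_labels, reduced_logits)
print(per_token_loss.shape)                                         # (2,) -> padded positions excluded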
AdaMix/src/transformers/models/led/modeling_tf_led.py/0
{ "file_path": "AdaMix/src/transformers/models/led/modeling_tf_led.py", "repo_id": "AdaMix", "token_count": 53995 }
61
# coding=utf-8 # Copyright 2021 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ M2M100 model configuration """ from ...configuration_utils import PretrainedConfig from ...utils import logging logger = logging.get_logger(__name__) M2M_100_PRETRAINED_CONFIG_ARCHIVE_MAP = { "facebook/m2m100_418M": "https://huggingface.co/facebook/m2m100_418M/resolve/main/config.json", # See all M2M100 models at https://huggingface.co/models?filter=m2m_100 } class M2M100Config(PretrainedConfig): r""" This is the configuration class to store the configuration of a :class:`~transformers.M2M100Model`. It is used to instantiate an M2M100 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the M2M100 `m2m100_418M <https://huggingface.co/facebook/m2m100_418M>`__ architecture. Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. Args: vocab_size (:obj:`int`, `optional`, defaults to 50265): Vocabulary size of the M2M100 model. Defines the number of different tokens that can be represented by the :obj:`inputs_ids` passed when calling :class:`~transformers.M2M100Model` or d_model (:obj:`int`, `optional`, defaults to 1024): Dimensionality of the layers and the pooler layer. encoder_layers (:obj:`int`, `optional`, defaults to 12): Number of encoder layers. decoder_layers (:obj:`int`, `optional`, defaults to 12): Number of decoder layers. encoder_attention_heads (:obj:`int`, `optional`, defaults to 16): Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (:obj:`int`, `optional`, defaults to 16): Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (:obj:`int`, `optional`, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. encoder_ffn_dim (:obj:`int`, `optional`, defaults to 4096): Dimensionality of the "intermediate" (often named feed-forward) layer in decoder. activation_function (:obj:`str` or :obj:`function`, `optional`, defaults to :obj:`"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, :obj:`"gelu"`, :obj:`"relu"`, :obj:`"silu"` and :obj:`"gelu_new"` are supported. dropout (:obj:`float`, `optional`, defaults to 0.1): The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (:obj:`float`, `optional`, defaults to 0.0): The dropout ratio for the attention probabilities. activation_dropout (:obj:`float`, `optional`, defaults to 0.0): The dropout ratio for activations inside the fully connected layer. classifier_dropout (:obj:`float`, `optional`, defaults to 0.0): The dropout ratio for classifier. 
max_position_embeddings (:obj:`int`, `optional`, defaults to 1024): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (:obj:`float`, `optional`, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop: (:obj:`float`, `optional`, defaults to 0.0): The LayerDrop probability for the encoder. See the `LayerDrop paper <see https://arxiv.org/abs/1909.11556>`__ for more details. decoder_layerdrop: (:obj:`float`, `optional`, defaults to 0.0): The LayerDrop probability for the decoder. See the `LayerDrop paper <see https://arxiv.org/abs/1909.11556>`__ for more details. use_cache (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether or not the model should return the last key/values attentions (not used by all models). gradient_checkpointing (:obj:`bool`, `optional`, defaults to :obj:`False`): If True, use gradient checkpointing to save memory at the expense of slower backward pass. Example:: >>> from transformers import M2M100Model, M2M100Config >>> # Initializing a M2M100 facebook/m2m100_418M style configuration >>> configuration = M2M100Config() >>> # Initializing a model from the facebook/m2m100_418M style configuration >>> model = M2M100Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config """ model_type = "m2m_100" keys_to_ignore_at_inference = ["past_key_values"] def __init__( self, vocab_size=128112, max_position_embeddings=1024, encoder_layers=12, encoder_ffn_dim=4096, encoder_attention_heads=16, decoder_layers=12, decoder_ffn_dim=4096, decoder_attention_heads=16, encoder_layerdrop=0.05, decoder_layerdrop=0.05, use_cache=True, is_encoder_decoder=True, activation_function="relu", d_model=1024, dropout=0.1, attention_dropout=0.1, activation_dropout=0.0, init_std=0.02, decoder_start_token_id=2, scale_embedding=True, gradient_checkpointing=False, pad_token_id=1, bos_token_id=0, eos_token_id=2, **kwargs ): super().__init__( pad_token_id=pad_token_id, bos_token_id=bos_token_id, eos_token_id=eos_token_id, is_encoder_decoder=is_encoder_decoder, decoder_start_token_id=decoder_start_token_id, **kwargs, ) self.vocab_size = vocab_size self.max_position_embeddings = max_position_embeddings self.d_model = d_model self.encoder_ffn_dim = encoder_ffn_dim self.encoder_layers = encoder_layers self.encoder_attention_heads = encoder_attention_heads self.decoder_ffn_dim = decoder_ffn_dim self.decoder_layers = decoder_layers self.decoder_attention_heads = decoder_attention_heads self.dropout = dropout self.attention_dropout = attention_dropout self.activation_dropout = activation_dropout self.activation_function = activation_function self.init_std = init_std self.encoder_layerdrop = encoder_layerdrop self.decoder_layerdrop = decoder_layerdrop self.use_cache = use_cache self.num_hidden_layers = encoder_layers self.gradient_checkpointing = gradient_checkpointing self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True @property def num_attention_heads(self) -> int: return self.encoder_attention_heads @property def hidden_size(self) -> int: return self.d_model
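A short illustrative sketch of instantiating the configuration class above with reduced dimensions; the argument values are arbitrary and only meant to show the read-only property aliases defined at the end of the class.

# Hedged sketch: building a small M2M100Config; all values below are illustrative.
from transformers import M2M100Config

config = M2M100Config(
    d_model=256,
    encoder_layers=2,
    decoder_layers=2,
    encoder_attention_heads=4,
    decoder_attention_heads=4,
    encoder_ffn_dim=512,
    decoder_ffn_dim=512,
)

# `hidden_size` and `num_attention_heads` are the aliases defined by the properties above.
assert config.hidden_size == config.d_model                           # 256
assert config.num_attention_heads == config.encoder_attention_heads   # 4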
AdaMix/src/transformers/models/m2m_100/configuration_m2m_100.py/0
{ "file_path": "AdaMix/src/transformers/models/m2m_100/configuration_m2m_100.py", "repo_id": "AdaMix", "token_count": 3118 }
62
# coding=utf-8 # Copyright 2020 Mesh TensorFlow authors, T5 Authors and HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Tensorflow mT5 model. """ from ...utils import logging from ..t5.modeling_tf_t5 import TFT5EncoderModel, TFT5ForConditionalGeneration, TFT5Model from .configuration_mt5 import MT5Config logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "T5Config" _TOKENIZER_FOR_DOC = "T5Tokenizer" class TFMT5Model(TFT5Model): r""" This class overrides :class:`~transformers.TFT5Model`. Please check the superclass for the appropriate documentation alongside usage examples. Examples:: >>> from transformers import TFMT5Model, T5Tokenizer >>> model = TFMT5Model.from_pretrained("google/mt5-small") >>> tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") >>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien." >>> summary = "Weiter Verhandlung in Syrien." >>> inputs = tokenizer(article, return_tensors="tf") >>> with tokenizer.as_target_tokenizer(): ... labels = tokenizer(summary, return_tensors="tf") >>> outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=labels["input_ids"]) >>> hidden_states = outputs.last_hidden_state """ model_type = "mt5" config_class = MT5Config class TFMT5ForConditionalGeneration(TFT5ForConditionalGeneration): r""" This class overrides :class:`~transformers.TFT5ForConditionalGeneration`. Please check the superclass for the appropriate documentation alongside usage examples. Examples:: >>> from transformers import TFMT5ForConditionalGeneration, T5Tokenizer >>> model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-small") >>> tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") >>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien." >>> summary = "Weiter Verhandlung in Syrien." >>> inputs = tokenizer(article, return_tensors="tf") >>> with tokenizer.as_target_tokenizer(): ... labels = tokenizer(summary, return_tensors="tf") >>> outputs = model(**inputs,labels=labels["input_ids"]) >>> loss = outputs.loss """ model_type = "mt5" config_class = MT5Config class TFMT5EncoderModel(TFT5EncoderModel): r""" This class overrides :class:`~transformers.TFT5EncoderModel`. Please check the superclass for the appropriate documentation alongside usage examples. Examples:: >>> from transformers import TFMT5EncoderModel, T5Tokenizer >>> model = TFMT5EncoderModel.from_pretrained("google/mt5-small") >>> tokenizer = T5Tokenizer.from_pretrained("google/mt5-small") >>> article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien." >>> input_ids = tokenizer(article, return_tensors="tf").input_ids >>> outputs = model(input_ids) >>> hidden_state = outputs.last_hidden_state """ model_type = "mt5" config_class = MT5Config
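Beyond the forward-pass examples in the docstrings above, a hedged sketch of text generation: since TFMT5ForConditionalGeneration only overrides the config class, it inherits `generate()` from its T5 parent. The generation parameters below are arbitrary, and the raw pretrained checkpoint is not fine-tuned for summarization, so the snippet only demonstrates the API.

# Hedged sketch: using the inherited generate() method; parameters are illustrative.
from transformers import T5Tokenizer, TFMT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = TFMT5ForConditionalGeneration.from_pretrained("google/mt5-small")

article = "UN Offizier sagt, dass weiter verhandelt werden muss in Syrien."
input_ids = tokenizer(article, return_tensors="tf").input_ids

# Beam search over the encoder-decoder stack inherited from TFT5ForConditionalGeneration.
generated = model.generate(input_ids, max_length=20, num_beams=4, early_stopping=True)
print(tokenizer.decode(generated[0], skip_special_tokens=True))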
AdaMix/src/transformers/models/mt5/modeling_tf_mt5.py/0
{ "file_path": "AdaMix/src/transformers/models/mt5/modeling_tf_mt5.py", "repo_id": "AdaMix", "token_count": 1282 }
63
# coding=utf-8 # Copyright 2020, The RAG Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tokenization classes for RAG.""" import os import warnings from contextlib import contextmanager from typing import List, Optional from ...tokenization_utils_base import BatchEncoding from ...utils import logging from .configuration_rag import RagConfig logger = logging.get_logger(__name__) class RagTokenizer: def __init__(self, question_encoder, generator): self.question_encoder = question_encoder self.generator = generator self.current_tokenizer = self.question_encoder def save_pretrained(self, save_directory): if os.path.isfile(save_directory): raise ValueError("Provided path ({}) should be a directory, not a file".format(save_directory)) os.makedirs(save_directory, exist_ok=True) question_encoder_path = os.path.join(save_directory, "question_encoder_tokenizer") generator_path = os.path.join(save_directory, "generator_tokenizer") self.question_encoder.save_pretrained(question_encoder_path) self.generator.save_pretrained(generator_path) @classmethod def from_pretrained(cls, pretrained_model_name_or_path, **kwargs): # dynamically import AutoTokenizer from ..auto.tokenization_auto import AutoTokenizer config = kwargs.pop("config", None) if config is None: config = RagConfig.from_pretrained(pretrained_model_name_or_path) question_encoder = AutoTokenizer.from_pretrained( pretrained_model_name_or_path, config=config.question_encoder, subfolder="question_encoder_tokenizer" ) generator = AutoTokenizer.from_pretrained( pretrained_model_name_or_path, config=config.generator, subfolder="generator_tokenizer" ) return cls(question_encoder=question_encoder, generator=generator) def __call__(self, *args, **kwargs): return self.current_tokenizer(*args, **kwargs) def batch_decode(self, *args, **kwargs): return self.generator.batch_decode(*args, **kwargs) def decode(self, *args, **kwargs): return self.generator.decode(*args, **kwargs) @contextmanager def as_target_tokenizer(self): """ Temporarily sets the tokenizer for encoding the targets. Useful for tokenizer associated to sequence-to-sequence models that need a slightly different processing for the labels. """ self.current_tokenizer = self.generator yield self.current_tokenizer = self.question_encoder def prepare_seq2seq_batch( self, src_texts: List[str], tgt_texts: Optional[List[str]] = None, max_length: Optional[int] = None, max_target_length: Optional[int] = None, padding: str = "longest", return_tensors: str = None, truncation: bool = True, **kwargs, ) -> BatchEncoding: warnings.warn( "`prepare_seq2seq_batch` is deprecated and will be removed in version 5 of 🤗 Transformers. Use the " "regular `__call__` method to prepare your inputs and the tokenizer under the `with_target_tokenizer` " "context manager to prepare your targets. 
See the documentation of your specific tokenizer for more " "details", FutureWarning, ) if max_length is None: max_length = self.current_tokenizer.model_max_length model_inputs = self( src_texts, add_special_tokens=True, return_tensors=return_tensors, max_length=max_length, padding=padding, truncation=truncation, **kwargs, ) if tgt_texts is None: return model_inputs # Process tgt_texts with self.as_target_tokenizer(): if max_target_length is None: max_target_length = self.current_tokenizer.model_max_length labels = self( tgt_texts, add_special_tokens=True, return_tensors=return_tensors, padding=padding, max_length=max_target_length, truncation=truncation, **kwargs, ) model_inputs["labels"] = labels["input_ids"] return model_inputs
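A hedged usage sketch of the target-tokenizer pattern that the deprecation warning above recommends in place of `prepare_seq2seq_batch`; the checkpoint name, example strings, and tensor framework are illustrative assumptions.

# Hedged sketch: preparing inputs and labels with RagTokenizer via as_target_tokenizer().
from transformers import RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")

src_texts = ["who holds the record in 100m freestyle"]
tgt_texts = ["michael phelps"]

# __call__ delegates to the question-encoder tokenizer by default ...
inputs = tokenizer(src_texts, padding="longest", return_tensors="pt")

# ... and to the generator tokenizer inside the context manager.
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_texts, padding="longest", return_tensors="pt")

inputs["labels"] = labels["input_ids"]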
AdaMix/src/transformers/models/rag/tokenization_rag.py/0
{ "file_path": "AdaMix/src/transformers/models/rag/tokenization_rag.py", "repo_id": "AdaMix", "token_count": 1986 }
64
# coding=utf-8 # Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ TF 2.0 RoBERTa model. """ import math import warnings from typing import Optional, Tuple, Union import numpy as np import tensorflow as tf from ...activations_tf import get_tf_activation from ...file_utils import ( MULTIPLE_CHOICE_DUMMY_INPUTS, add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward, ) from ...modeling_tf_outputs import ( TFBaseModelOutput, TFBaseModelOutputWithPooling, TFMaskedLMOutput, TFMultipleChoiceModelOutput, TFQuestionAnsweringModelOutput, TFSequenceClassifierOutput, TFTokenClassifierOutput, ) from ...modeling_tf_utils import ( TFMaskedLanguageModelingLoss, TFModelInputType, TFMultipleChoiceLoss, TFPreTrainedModel, TFQuestionAnsweringLoss, TFSequenceClassificationLoss, TFTokenClassificationLoss, get_initializer, input_processing, keras_serializable, shape_list, ) from ...utils import logging from .configuration_roberta import RobertaConfig logger = logging.get_logger(__name__) _CHECKPOINT_FOR_DOC = "roberta-base" _CONFIG_FOR_DOC = "RobertaConfig" _TOKENIZER_FOR_DOC = "RobertaTokenizer" TF_ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST = [ "roberta-base", "roberta-large", "roberta-large-mnli", "distilroberta-base", # See all RoBERTa models at https://huggingface.co/models?filter=roberta ] class TFRobertaEmbeddings(tf.keras.layers.Layer): """ Same as BertEmbeddings with a tiny tweak for positional embeddings indexing. """ def __init__(self, config, **kwargs): super().__init__(**kwargs) self.padding_idx = 1 self.vocab_size = config.vocab_size self.type_vocab_size = config.type_vocab_size self.hidden_size = config.hidden_size self.max_position_embeddings = config.max_position_embeddings self.initializer_range = config.initializer_range self.embeddings_sum = tf.keras.layers.Add() self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) def build(self, input_shape: tf.TensorShape): with tf.name_scope("word_embeddings"): self.weight = self.add_weight( name="weight", shape=[self.vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range), ) with tf.name_scope("token_type_embeddings"): self.token_type_embeddings = self.add_weight( name="embeddings", shape=[self.type_vocab_size, self.hidden_size], initializer=get_initializer(self.initializer_range), ) with tf.name_scope("position_embeddings"): self.position_embeddings = self.add_weight( name="embeddings", shape=[self.max_position_embeddings, self.hidden_size], initializer=get_initializer(self.initializer_range), ) super().build(input_shape) def create_position_ids_from_input_ids(self, input_ids): """ Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols are ignored. 
This is modified from fairseq's `utils.make_positions`. Args: input_ids: tf.Tensor Returns: tf.Tensor """ mask = tf.cast(tf.math.not_equal(input_ids, self.padding_idx), dtype=input_ids.dtype) incremental_indices = tf.math.cumsum(mask, axis=1) * mask return incremental_indices + self.padding_idx def call(self, input_ids=None, position_ids=None, token_type_ids=None, inputs_embeds=None, training=False): """ Applies embedding based on inputs tensor. Returns: final_embeddings (:obj:`tf.Tensor`): output embedding tensor. """ assert not (input_ids is None and inputs_embeds is None) if input_ids is not None: inputs_embeds = tf.gather(params=self.weight, indices=input_ids) input_shape = shape_list(inputs_embeds)[:-1] if token_type_ids is None: token_type_ids = tf.fill(dims=input_shape, value=0) if position_ids is None: if input_ids is not None: # Create the position ids from the input token ids. Any padded tokens remain padded. position_ids = self.create_position_ids_from_input_ids(input_ids=input_ids) else: position_ids = tf.expand_dims( tf.range(start=self.padding_idx + 1, limit=input_shape[-1] + self.padding_idx + 1), axis=0 ) position_ids = tf.tile(input=position_ids, multiples=(input_shape[0], 1)) position_embeds = tf.gather(params=self.position_embeddings, indices=position_ids) token_type_embeds = tf.gather(params=self.token_type_embeddings, indices=token_type_ids) final_embeddings = self.embeddings_sum(inputs=[inputs_embeds, position_embeds, token_type_embeds]) final_embeddings = self.LayerNorm(inputs=final_embeddings) final_embeddings = self.dropout(inputs=final_embeddings, training=training) return final_embeddings # Copied from transformers.models.bert.modeling_tf_bert.TFBertPooler with Bert->Roberta class TFRobertaPooler(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.dense = tf.keras.layers.Dense( units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), activation="tanh", name="dense", ) def call(self, hidden_states: tf.Tensor) -> tf.Tensor: # We "pool" the model by simply taking the hidden state corresponding # to the first token. 
first_token_tensor = hidden_states[:, 0] pooled_output = self.dense(inputs=first_token_tensor) return pooled_output # Copied from transformers.models.bert.modeling_tf_bert.TFBertSelfAttention with Bert->Roberta class TFRobertaSelfAttention(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) if config.hidden_size % config.num_attention_heads != 0: raise ValueError( f"The hidden size ({config.hidden_size}) is not a multiple of the number " f"of attention heads ({config.num_attention_heads})" ) self.num_attention_heads = config.num_attention_heads self.attention_head_size = int(config.hidden_size / config.num_attention_heads) self.all_head_size = self.num_attention_heads * self.attention_head_size self.sqrt_att_head_size = math.sqrt(self.attention_head_size) self.query = tf.keras.layers.Dense( units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="query" ) self.key = tf.keras.layers.Dense( units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="key" ) self.value = tf.keras.layers.Dense( units=self.all_head_size, kernel_initializer=get_initializer(config.initializer_range), name="value" ) self.dropout = tf.keras.layers.Dropout(rate=config.attention_probs_dropout_prob) def transpose_for_scores(self, tensor: tf.Tensor, batch_size: int) -> tf.Tensor: # Reshape from [batch_size, seq_length, all_head_size] to [batch_size, seq_length, num_attention_heads, attention_head_size] tensor = tf.reshape(tensor=tensor, shape=(batch_size, -1, self.num_attention_heads, self.attention_head_size)) # Transpose the tensor from [batch_size, seq_length, num_attention_heads, attention_head_size] to [batch_size, num_attention_heads, seq_length, attention_head_size] return tf.transpose(tensor, perm=[0, 2, 1, 3]) def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, output_attentions: bool, training: bool = False, ) -> Tuple[tf.Tensor]: batch_size = shape_list(hidden_states)[0] mixed_query_layer = self.query(inputs=hidden_states) mixed_key_layer = self.key(inputs=hidden_states) mixed_value_layer = self.value(inputs=hidden_states) query_layer = self.transpose_for_scores(mixed_query_layer, batch_size) key_layer = self.transpose_for_scores(mixed_key_layer, batch_size) value_layer = self.transpose_for_scores(mixed_value_layer, batch_size) # Take the dot product between "query" and "key" to get the raw attention scores. # (batch size, num_heads, seq_len_q, seq_len_k) attention_scores = tf.matmul(query_layer, key_layer, transpose_b=True) dk = tf.cast(self.sqrt_att_head_size, dtype=attention_scores.dtype) attention_scores = tf.divide(attention_scores, dk) if attention_mask is not None: # Apply the attention mask is (precomputed for all layers in TFRobertaModel call() function) attention_scores = tf.add(attention_scores, attention_mask) # Normalize the attention scores to probabilities. attention_probs = tf.nn.softmax(logits=attention_scores, axis=-1) # This is actually dropping out entire tokens to attend to, which might # seem a bit unusual, but is taken from the original Transformer paper. 
attention_probs = self.dropout(inputs=attention_probs, training=training) # Mask heads if we want to if head_mask is not None: attention_probs = tf.multiply(attention_probs, head_mask) attention_output = tf.matmul(attention_probs, value_layer) attention_output = tf.transpose(attention_output, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, all_head_size) attention_output = tf.reshape(tensor=attention_output, shape=(batch_size, -1, self.all_head_size)) outputs = (attention_output, attention_probs) if output_attentions else (attention_output,) return outputs # Copied from transformers.models.bert.modeling_tf_bert.TFBertSelfOutput with Bert->Roberta class TFRobertaSelfOutput(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.dense = tf.keras.layers.Dense( units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dropout = tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: hidden_states = self.dense(inputs=hidden_states) hidden_states = self.dropout(inputs=hidden_states, training=training) hidden_states = self.LayerNorm(inputs=hidden_states + input_tensor) return hidden_states # Copied from transformers.models.bert.modeling_tf_bert.TFBertAttention with Bert->Roberta class TFRobertaAttention(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.self_attention = TFRobertaSelfAttention(config, name="self") self.dense_output = TFRobertaSelfOutput(config, name="output") def prune_heads(self, heads): raise NotImplementedError def call( self, input_tensor: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, output_attentions: bool, training: bool = False, ) -> Tuple[tf.Tensor]: self_outputs = self.self_attention( hidden_states=input_tensor, attention_mask=attention_mask, head_mask=head_mask, output_attentions=output_attentions, training=training, ) attention_output = self.dense_output( hidden_states=self_outputs[0], input_tensor=input_tensor, training=training ) outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them return outputs # Copied from transformers.models.bert.modeling_tf_bert.TFBertIntermediate with Bert->Roberta class TFRobertaIntermediate(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.dense = tf.keras.layers.Dense( units=config.intermediate_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) if isinstance(config.hidden_act, str): self.intermediate_act_fn = get_tf_activation(config.hidden_act) else: self.intermediate_act_fn = config.hidden_act def call(self, hidden_states: tf.Tensor) -> tf.Tensor: hidden_states = self.dense(inputs=hidden_states) hidden_states = self.intermediate_act_fn(hidden_states) return hidden_states # Copied from transformers.models.bert.modeling_tf_bert.TFBertOutput with Bert->Roberta class TFRobertaOutput(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.dense = tf.keras.layers.Dense( units=config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) self.LayerNorm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="LayerNorm") self.dropout = 
tf.keras.layers.Dropout(rate=config.hidden_dropout_prob) def call(self, hidden_states: tf.Tensor, input_tensor: tf.Tensor, training: bool = False) -> tf.Tensor: hidden_states = self.dense(inputs=hidden_states) hidden_states = self.dropout(inputs=hidden_states, training=training) hidden_states = self.LayerNorm(inputs=hidden_states + input_tensor) return hidden_states # Copied from transformers.models.bert.modeling_tf_bert.TFBertLayer with Bert->Roberta class TFRobertaLayer(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.attention = TFRobertaAttention(config, name="attention") self.intermediate = TFRobertaIntermediate(config, name="intermediate") self.bert_output = TFRobertaOutput(config, name="output") def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, output_attentions: bool, training: bool = False, ) -> Tuple[tf.Tensor]: attention_outputs = self.attention( input_tensor=hidden_states, attention_mask=attention_mask, head_mask=head_mask, output_attentions=output_attentions, training=training, ) attention_output = attention_outputs[0] intermediate_output = self.intermediate(hidden_states=attention_output) layer_output = self.bert_output( hidden_states=intermediate_output, input_tensor=attention_output, training=training ) outputs = (layer_output,) + attention_outputs[1:] # add attentions if we output them return outputs # Copied from transformers.models.bert.modeling_tf_bert.TFBertEncoder with Bert->Roberta class TFRobertaEncoder(tf.keras.layers.Layer): def __init__(self, config: RobertaConfig, **kwargs): super().__init__(**kwargs) self.layer = [TFRobertaLayer(config, name="layer_._{}".format(i)) for i in range(config.num_hidden_layers)] def call( self, hidden_states: tf.Tensor, attention_mask: tf.Tensor, head_mask: tf.Tensor, output_attentions: bool, output_hidden_states: bool, return_dict: bool, training: bool = False, ) -> Union[TFBaseModelOutput, Tuple[tf.Tensor]]: all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_outputs = layer_module( hidden_states=hidden_states, attention_mask=attention_mask, head_mask=head_mask[i], output_attentions=output_attentions, training=training, ) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) # Add last layer if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None) return TFBaseModelOutput( last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions ) @keras_serializable class TFRobertaMainLayer(tf.keras.layers.Layer): config_class = RobertaConfig def __init__(self, config, add_pooling_layer=True, **kwargs): super().__init__(**kwargs) self.config = config self.num_hidden_layers = config.num_hidden_layers self.initializer_range = config.initializer_range self.output_attentions = config.output_attentions self.output_hidden_states = config.output_hidden_states self.return_dict = config.use_return_dict self.encoder = TFRobertaEncoder(config, name="encoder") self.pooler = TFRobertaPooler(config, name="pooler") if add_pooling_layer else None # The embeddings must be the last declaration in order to follow the weights order self.embeddings = 
TFRobertaEmbeddings(config, name="embeddings") # Copied from transformers.models.bert.modeling_tf_bert.TFBertMainLayer.get_input_embeddings def get_input_embeddings(self) -> tf.keras.layers.Layer: return self.embeddings # Copied from transformers.models.bert.modeling_tf_bert.TFBertMainLayer.set_input_embeddings def set_input_embeddings(self, value: tf.Variable): self.embeddings.weight = value self.embeddings.vocab_size = shape_list(value)[0] # Copied from transformers.models.bert.modeling_tf_bert.TFBertMainLayer._prune_heads def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ raise NotImplementedError # Copied from transformers.models.bert.modeling_tf_bert.TFBertMainLayer.call def call( self, input_ids: Optional[TFModelInputType] = None, attention_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, token_type_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, position_ids: Optional[Union[np.ndarray, tf.Tensor]] = None, head_mask: Optional[Union[np.ndarray, tf.Tensor]] = None, inputs_embeds: Optional[Union[np.ndarray, tf.Tensor]] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, training: bool = False, **kwargs, ) -> Union[TFBaseModelOutputWithPooling, Tuple[tf.Tensor]]: inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None and inputs["inputs_embeds"] is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif inputs["input_ids"] is not None: input_shape = shape_list(tensor=inputs["input_ids"]) elif inputs["inputs_embeds"] is not None: input_shape = shape_list(tensor=inputs["inputs_embeds"])[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") if inputs["attention_mask"] is None: inputs["attention_mask"] = tf.fill(dims=input_shape, value=1) if inputs["token_type_ids"] is None: inputs["token_type_ids"] = tf.fill(dims=input_shape, value=0) embedding_output = self.embeddings( input_ids=inputs["input_ids"], position_ids=inputs["position_ids"], token_type_ids=inputs["token_type_ids"], inputs_embeds=inputs["inputs_embeds"], training=inputs["training"], ) # We create a 3D attention mask from a 2D tensor mask. # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] # this attention mask is more simple than the triangular masking of causal attention # used in OpenAI GPT, we just need to prepare the broadcast dimension here. extended_attention_mask = tf.reshape(inputs["attention_mask"], (input_shape[0], 1, 1, input_shape[1])) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. 
extended_attention_mask = tf.cast(extended_attention_mask, dtype=embedding_output.dtype) extended_attention_mask = tf.multiply(tf.subtract(1.0, extended_attention_mask), -10000.0) # Prepare head mask if needed # 1.0 in head_mask indicate we keep the head # attention_probs has shape bsz x n_heads x N x N # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] if inputs["head_mask"] is not None: raise NotImplementedError else: inputs["head_mask"] = [None] * self.config.num_hidden_layers encoder_outputs = self.encoder( hidden_states=embedding_output, attention_mask=extended_attention_mask, head_mask=inputs["head_mask"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output = encoder_outputs[0] pooled_output = self.pooler(hidden_states=sequence_output) if self.pooler is not None else None if not inputs["return_dict"]: return ( sequence_output, pooled_output, ) + encoder_outputs[1:] return TFBaseModelOutputWithPooling( last_hidden_state=sequence_output, pooler_output=pooled_output, hidden_states=encoder_outputs.hidden_states, attentions=encoder_outputs.attentions, ) class TFRobertaPreTrainedModel(TFPreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. """ config_class = RobertaConfig base_model_prefix = "roberta" @tf.function( input_signature=[ { "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"), "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"), } ] ) def serving(self, inputs): output = self.call(inputs) return self.serving_output(output) ROBERTA_START_DOCSTRING = r""" This model inherits from :class:`~transformers.TFPreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a `tf.keras.Model <https://www.tensorflow.org/api_docs/python/tf/keras/Model>`__ subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. .. note:: TF 2.0 models accepts two formats as inputs: - having all inputs as keyword arguments (like PyTorch models), or - having all inputs as a list, tuple or dict in the first positional arguments. This second option is useful when using :meth:`tf.keras.Model.fit` method which currently requires having all the tensors in the first argument of the model call function: :obj:`model(inputs)`. If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument : - a single Tensor with :obj:`input_ids` only and nothing else: :obj:`model(inputs_ids)` - a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: :obj:`model([input_ids, attention_mask])` or :obj:`model([input_ids, attention_mask, token_type_ids])` - a dictionary with one or several input Tensors associated to the input names given in the docstring: :obj:`model({"input_ids": input_ids, "token_type_ids": token_type_ids})` Parameters: config (:class:`~transformers.RobertaConfig`): Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. """ ROBERTA_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`): Indices of input sequence tokens in the vocabulary. Indices can be obtained using :class:`~transformers.RobertaTokenizer`. See :func:`transformers.PreTrainedTokenizer.__call__` and :func:`transformers.PreTrainedTokenizer.encode` for details. `What are input IDs? <../glossary.html#input-ids>`__ attention_mask (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ token_type_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`, `optional`): Segment token indices to indicate first and second portions of the inputs. Indices are selected in ``[0, 1]``: - 0 corresponds to a `sentence A` token, - 1 corresponds to a `sentence B` token. `What are token type IDs? <../glossary.html#token-type-ids>`__ position_ids (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`({0})`, `optional`): Indices of positions of each input sequence tokens in the position embeddings. Selected in the range ``[0, config.max_position_embeddings - 1]``. `What are position IDs? <../glossary.html#position-ids>`__ head_mask (:obj:`Numpy array` or :obj:`tf.Tensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`): Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (:obj:`tf.Tensor` of shape :obj:`({0}, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
""" @add_start_docstrings( "The bare RoBERTa Model transformer outputting raw hidden-states without any specific head on top.", ROBERTA_START_DOCSTRING, ) class TFRobertaModel(TFRobertaPreTrainedModel): def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.roberta = TFRobertaMainLayer(config, name="roberta") @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFBaseModelOutputWithPooling, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs, ): inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, training=training, kwargs_call=kwargs, ) outputs = self.roberta( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], position_ids=inputs["position_ids"], head_mask=inputs["head_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) return outputs # Copied from transformers.models.bert.modeling_tf_bert.TFBertModel.serving_output def serving_output(self, output: TFBaseModelOutputWithPooling) -> TFBaseModelOutputWithPooling: hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFBaseModelOutputWithPooling( last_hidden_state=output.last_hidden_state, pooler_output=output.pooler_output, hidden_states=hs, attentions=attns, ) class TFRobertaLMHead(tf.keras.layers.Layer): """Roberta Head for masked language modeling.""" def __init__(self, config, input_embeddings, **kwargs): super().__init__(**kwargs) self.vocab_size = config.vocab_size self.hidden_size = config.hidden_size self.dense = tf.keras.layers.Dense( config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), name="dense" ) self.layer_norm = tf.keras.layers.LayerNormalization(epsilon=config.layer_norm_eps, name="layer_norm") self.act = get_tf_activation("gelu") # The output weights are the same as the input embeddings, but there is # an output-only bias for each token. 
self.decoder = input_embeddings def build(self, input_shape): self.bias = self.add_weight(shape=(self.vocab_size,), initializer="zeros", trainable=True, name="bias") super().build(input_shape) def get_output_embeddings(self): return self.decoder def set_output_embeddings(self, value): self.decoder.weight = value self.decoder.vocab_size = shape_list(value)[0] def get_bias(self): return {"bias": self.bias} def set_bias(self, value): self.bias = value["bias"] self.vocab_size = shape_list(value["bias"])[0] def call(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.layer_norm(hidden_states) # project back to size of vocabulary with bias seq_length = shape_list(tensor=hidden_states)[1] hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, self.hidden_size]) hidden_states = tf.matmul(a=hidden_states, b=self.decoder.weight, transpose_b=True) hidden_states = tf.reshape(tensor=hidden_states, shape=[-1, seq_length, self.vocab_size]) hidden_states = tf.nn.bias_add(value=hidden_states, bias=self.bias) return hidden_states @add_start_docstrings("""RoBERTa Model with a `language modeling` head on top. """, ROBERTA_START_DOCSTRING) class TFRobertaForMaskedLM(TFRobertaPreTrainedModel, TFMaskedLanguageModelingLoss): # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"pooler", r"lm_head.decoder.weight"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.roberta = TFRobertaMainLayer(config, add_pooling_layer=False, name="roberta") self.lm_head = TFRobertaLMHead(config, self.roberta.embeddings, name="lm_head") def get_lm_head(self): return self.lm_head def get_prefix_bias_name(self): warnings.warn("The method get_prefix_bias_name is deprecated. Please use `get_bias` instead.", FutureWarning) return self.name + "/" + self.lm_head.name @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFMaskedLMOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the masked language modeling loss. 
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, labels=labels, training=training, kwargs_call=kwargs, ) outputs = self.roberta( inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], position_ids=inputs["position_ids"], head_mask=inputs["head_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output = outputs[0] prediction_scores = self.lm_head(sequence_output) loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], prediction_scores) if not inputs["return_dict"]: output = (prediction_scores,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TFMaskedLMOutput( loss=loss, logits=prediction_scores, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # Copied from transformers.models.bert.modeling_tf_bert.TFBertForMaskedLM.serving_output def serving_output(self, output: TFMaskedLMOutput) -> TFMaskedLMOutput: hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFMaskedLMOutput(logits=output.logits, hidden_states=hs, attentions=attns) class TFRobertaClassificationHead(tf.keras.layers.Layer): """Head for sentence-level classification tasks.""" def __init__(self, config, **kwargs): super().__init__(**kwargs) self.dense = tf.keras.layers.Dense( config.hidden_size, kernel_initializer=get_initializer(config.initializer_range), activation="tanh", name="dense", ) self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) self.out_proj = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="out_proj" ) def call(self, features, training=False): x = features[:, 0, :] # take <s> token (equiv. to [CLS]) x = self.dropout(x, training=training) x = self.dense(x) x = self.dropout(x, training=training) x = self.out_proj(x) return x @add_start_docstrings( """ RoBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. """, ROBERTA_START_DOCSTRING, ) class TFRobertaForSequenceClassification(TFRobertaPreTrainedModel, TFSequenceClassificationLoss): # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"pooler", r"lm_head"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.roberta = TFRobertaMainLayer(config, add_pooling_layer=False, name="roberta") self.classifier = TFRobertaClassificationHead(config, name="classifier") @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFSequenceClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ..., config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss), If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy). """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, labels=labels, training=training, kwargs_call=kwargs, ) outputs = self.roberta( inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], position_ids=inputs["position_ids"], head_mask=inputs["head_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output = outputs[0] logits = self.classifier(sequence_output, training=inputs["training"]) loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], logits) if not inputs["return_dict"]: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TFSequenceClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # Copied from transformers.models.bert.modeling_tf_bert.TFBertForSequenceClassification.serving_output def serving_output(self, output: TFSequenceClassifierOutput) -> TFSequenceClassifierOutput: hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFSequenceClassifierOutput(logits=output.logits, hidden_states=hs, attentions=attns) @add_start_docstrings( """ Roberta Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. """, ROBERTA_START_DOCSTRING, ) class TFRobertaForMultipleChoice(TFRobertaPreTrainedModel, TFMultipleChoiceLoss): # names with a '.' 
represents the authorized unexpected/missing layers when a TF model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"lm_head"] _keys_to_ignore_on_load_missing = [r"dropout"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.roberta = TFRobertaMainLayer(config, name="roberta") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) self.classifier = tf.keras.layers.Dense( 1, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) @property def dummy_inputs(self): """ Dummy inputs to build the network. Returns: tf.Tensor with dummy inputs """ return {"input_ids": tf.constant(MULTIPLE_CHOICE_DUMMY_INPUTS)} @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFMultipleChoiceModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the multiple choice classification loss. Indices should be in ``[0, ..., num_choices]`` where :obj:`num_choices` is the size of the second dimension of the input tensors. (See :obj:`input_ids` above) """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, labels=labels, training=training, kwargs_call=kwargs, ) if inputs["input_ids"] is not None: num_choices = shape_list(inputs["input_ids"])[1] seq_length = shape_list(inputs["input_ids"])[2] else: num_choices = shape_list(inputs_embeds)[1] seq_length = shape_list(inputs_embeds)[2] flat_input_ids = tf.reshape(inputs["input_ids"], (-1, seq_length)) if inputs["input_ids"] is not None else None flat_attention_mask = ( tf.reshape(inputs["attention_mask"], (-1, seq_length)) if inputs["attention_mask"] is not None else None ) flat_token_type_ids = ( tf.reshape(inputs["token_type_ids"], (-1, seq_length)) if inputs["token_type_ids"] is not None else None ) flat_position_ids = ( tf.reshape(inputs["position_ids"], (-1, seq_length)) if inputs["position_ids"] is not None else None ) outputs = self.roberta( flat_input_ids, flat_attention_mask, flat_token_type_ids, flat_position_ids, inputs["head_mask"], inputs["inputs_embeds"], inputs["output_attentions"], inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) pooled_output = outputs[1] pooled_output = self.dropout(pooled_output, training=inputs["training"]) logits = self.classifier(pooled_output) reshaped_logits = tf.reshape(logits, (-1, num_choices)) loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], reshaped_logits) if not inputs["return_dict"]: output = (reshaped_logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TFMultipleChoiceModelOutput( loss=loss, logits=reshaped_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) @tf.function( input_signature=[ { "input_ids": tf.TensorSpec((None, 
None, None), tf.int32, name="input_ids"), "attention_mask": tf.TensorSpec((None, None, None), tf.int32, name="attention_mask"), } ] ) def serving(self, inputs): output = self.call(inputs) return self.serving_output(output) # Copied from transformers.models.bert.modeling_tf_bert.TFBertForMultipleChoice.serving_output def serving_output(self, output: TFMultipleChoiceModelOutput) -> TFMultipleChoiceModelOutput: hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFMultipleChoiceModelOutput(logits=output.logits, hidden_states=hs, attentions=attns) @add_start_docstrings( """ RoBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """, ROBERTA_START_DOCSTRING, ) class TFRobertaForTokenClassification(TFRobertaPreTrainedModel, TFTokenClassificationLoss): # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"pooler", r"lm_head"] _keys_to_ignore_on_load_missing = [r"dropout"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.roberta = TFRobertaMainLayer(config, add_pooling_layer=False, name="roberta") self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob) self.classifier = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="classifier" ) @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFTokenClassifierOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs, ): r""" labels (:obj:`tf.Tensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the token classification loss. Indices should be in ``[0, ..., config.num_labels - 1]``. 
""" inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, labels=labels, training=training, kwargs_call=kwargs, ) outputs = self.roberta( inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], position_ids=inputs["position_ids"], head_mask=inputs["head_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output = outputs[0] sequence_output = self.dropout(sequence_output, training=inputs["training"]) logits = self.classifier(sequence_output) loss = None if inputs["labels"] is None else self.compute_loss(inputs["labels"], logits) if not inputs["return_dict"]: output = (logits,) + outputs[2:] return ((loss,) + output) if loss is not None else output return TFTokenClassifierOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # Copied from transformers.models.bert.modeling_tf_bert.TFBertForTokenClassification.serving_output def serving_output(self, output: TFTokenClassifierOutput) -> TFTokenClassifierOutput: hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFTokenClassifierOutput(logits=output.logits, hidden_states=hs, attentions=attns) @add_start_docstrings( """ RoBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute `span start logits` and `span end logits`). """, ROBERTA_START_DOCSTRING, ) class TFRobertaForQuestionAnswering(TFRobertaPreTrainedModel, TFQuestionAnsweringLoss): # names with a '.' represents the authorized unexpected/missing layers when a TF model is loaded from a PT model _keys_to_ignore_on_load_unexpected = [r"pooler", r"lm_head"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) self.num_labels = config.num_labels self.roberta = TFRobertaMainLayer(config, add_pooling_layer=False, name="roberta") self.qa_outputs = tf.keras.layers.Dense( config.num_labels, kernel_initializer=get_initializer(config.initializer_range), name="qa_outputs" ) @add_start_docstrings_to_model_forward(ROBERTA_INPUTS_DOCSTRING.format("batch_size, sequence_length")) @add_code_sample_docstrings( tokenizer_class=_TOKENIZER_FOR_DOC, checkpoint=_CHECKPOINT_FOR_DOC, output_type=TFQuestionAnsweringModelOutput, config_class=_CONFIG_FOR_DOC, ) def call( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, start_positions=None, end_positions=None, training=False, **kwargs, ): r""" start_positions (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
end_positions (:obj:`tf.Tensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. """ inputs = input_processing( func=self.call, config=self.config, input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, position_ids=position_ids, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, start_positions=start_positions, end_positions=end_positions, training=training, kwargs_call=kwargs, ) outputs = self.roberta( inputs["input_ids"], attention_mask=inputs["attention_mask"], token_type_ids=inputs["token_type_ids"], position_ids=inputs["position_ids"], head_mask=inputs["head_mask"], inputs_embeds=inputs["inputs_embeds"], output_attentions=inputs["output_attentions"], output_hidden_states=inputs["output_hidden_states"], return_dict=inputs["return_dict"], training=inputs["training"], ) sequence_output = outputs[0] logits = self.qa_outputs(sequence_output) start_logits, end_logits = tf.split(logits, 2, axis=-1) start_logits = tf.squeeze(start_logits, axis=-1) end_logits = tf.squeeze(end_logits, axis=-1) loss = None if inputs["start_positions"] is not None and inputs["end_positions"] is not None: labels = {"start_position": inputs["start_positions"]} labels["end_position"] = inputs["end_positions"] loss = self.compute_loss(labels, (start_logits, end_logits)) if not inputs["return_dict"]: output = (start_logits, end_logits) + outputs[2:] return ((loss,) + output) if loss is not None else output return TFQuestionAnsweringModelOutput( loss=loss, start_logits=start_logits, end_logits=end_logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) # Copied from transformers.models.bert.modeling_tf_bert.TFBertForQuestionAnswering.serving_output def serving_output(self, output: TFQuestionAnsweringModelOutput) -> TFQuestionAnsweringModelOutput: hs = tf.convert_to_tensor(output.hidden_states) if self.config.output_hidden_states else None attns = tf.convert_to_tensor(output.attentions) if self.config.output_attentions else None return TFQuestionAnsweringModelOutput( start_logits=output.start_logits, end_logits=output.end_logits, hidden_states=hs, attentions=attns )
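# ---------------------------------------------------------------------------
# Hedged usage sketch (added for illustration; not part of the original
# modeling_tf_roberta.py). It shows how the TFRobertaForSequenceClassification
# head defined above is typically called with labels so that both a loss and
# the logits come back. The "roberta-base" checkpoint name and num_labels=2
# are assumptions made only for this example.
if __name__ == "__main__":
    import tensorflow as tf
    from transformers import RobertaTokenizer, TFRobertaForSequenceClassification

    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

    inputs = tokenizer("The movie was great!", return_tensors="tf")
    labels = tf.constant([1])  # shape (batch_size,); values in [0, num_labels - 1]

    # With labels passed, the call also computes the loss defined by
    # TFSequenceClassificationLoss; without labels, only logits are returned.
    outputs = model(inputs, labels=labels)
    print(outputs.loss, outputs.logits.shape)  # training loss and logits of shape (1, num_labels)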
AdaMix/src/transformers/models/roberta/modeling_tf_roberta.py
{ "file_path": "AdaMix/src/transformers/models/roberta/modeling_tf_roberta.py", "repo_id": "AdaMix", "token_count": 25539 }
65
# coding=utf-8 # Copyright 2018 Mesh TensorFlow authors, T5 Authors and HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ PyTorch T5 model. """ import copy import math import os import warnings import torch import torch.nn.functional as F from torch import nn from torch.nn import CrossEntropyLoss from ...activations import ACT2FN from ...file_utils import ( DUMMY_INPUTS, DUMMY_MASK, add_start_docstrings, add_start_docstrings_to_model_forward, replace_return_docstrings, ) from ...modeling_outputs import ( BaseModelOutput, BaseModelOutputWithPastAndCrossAttentions, Seq2SeqLMOutput, Seq2SeqModelOutput, ) from ...modeling_utils import PreTrainedModel, find_pruneable_heads_and_indices, prune_linear_layer from ...utils import logging from ...utils.model_parallel_utils import assert_device_map, get_device_map from .configuration_t5 import T5Config logger = logging.get_logger(__name__) _CONFIG_FOR_DOC = "T5Config" _TOKENIZER_FOR_DOC = "T5Tokenizer" #################################################### # This dict contains ids and associated url # for the pretrained weights provided with the models #################################################### T5_PRETRAINED_MODEL_ARCHIVE_LIST = [ "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", # See all T5 models at https://huggingface.co/models?filter=t5 ] #################################################### # This is a conversion method from TF 1.0 to PyTorch # More details: https://medium.com/huggingface/from-tensorflow-to-pytorch-265f40ef2a28 #################################################### def load_tf_weights_in_t5(model, config, tf_checkpoint_path): """Load tf checkpoints in a pytorch model.""" try: import re import numpy as np import tensorflow as tf except ImportError: logger.error( "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " "https://www.tensorflow.org/install/ for installation instructions." 
) raise tf_path = os.path.abspath(tf_checkpoint_path) logger.info("Converting TensorFlow checkpoint from {}".format(tf_path)) # Load weights from TF model init_vars = tf.train.list_variables(tf_path) names = [] tf_weights = {} for name, shape in init_vars: logger.info("Loading TF weight {} with shape {}".format(name, shape)) array = tf.train.load_variable(tf_path, name) names.append(name) tf_weights[name] = array for txt_name in names: name = txt_name.split("/") # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v # which are not required for using pretrained model if any( n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"] for n in name ): logger.info("Skipping {}".format("/".join(name))) tf_weights.pop(txt_name, None) continue if "_slot_" in name[-1]: logger.info("Skipping {}".format("/".join(name))) tf_weights.pop(txt_name, None) continue pointer = model array = tf_weights[txt_name] for m_name in name: if re.fullmatch(r"[A-Za-z]+_\d+", m_name): scope_names = re.split(r"_(\d+)", m_name) else: scope_names = [m_name] if scope_names[0] in ["kernel", "scale", "embedding"]: pointer = getattr(pointer, "weight") elif scope_names[0] == "self_attention": pointer = getattr(pointer, "layer") pointer = pointer[0] elif scope_names[0] == "enc_dec_attention": pointer = getattr(pointer, "layer") pointer = pointer[1] elif scope_names[0] == "dense_relu_dense": pointer = getattr(pointer, "layer") pointer = pointer[2] elif scope_names[0] == "rms_norm": if hasattr(pointer, "layer_norm"): pointer = getattr(pointer, "layer_norm") elif hasattr(pointer, "final_layer_norm"): pointer = getattr(pointer, "final_layer_norm") elif scope_names[0] == "scale": pointer = getattr(pointer, "weight") elif scope_names[0] == "output_bias" or scope_names[0] == "beta": pointer = getattr(pointer, "bias") elif scope_names[0] == "squad": pointer = getattr(pointer, "classifier") elif scope_names[0] == "decoder" and name[1] == "logits": continue elif scope_names[0] == "logits": pointer = getattr(pointer, "lm_head") elif scope_names[0] == "wi" and len(scope_names) > 1 and scope_names[1].isdigit(): pointer = getattr(pointer, f"wi_{scope_names[1]}") continue else: try: pointer = getattr(pointer, scope_names[0]) except AttributeError: logger.info("Skipping {}".format("/".join(name))) continue if len(scope_names) >= 2: num = int(scope_names[1]) pointer = pointer[num] if scope_names[0] not in ["kernel", "scale", "embedding"]: pointer = getattr(pointer, "weight") if scope_names[0] != "embedding": logger.info("Transposing numpy weight of shape {} for {}".format(array.shape, name)) array = np.transpose(array) try: assert ( pointer.shape == array.shape ), f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched" except AssertionError as e: e.args += (pointer.shape, array.shape) raise logger.info("Initialize PyTorch weight {}".format(name)) pointer.data = torch.from_numpy(array.astype(np.float32)) tf_weights.pop(txt_name, None) logger.info("Weights not copied to PyTorch model: {}".format(", ".join(tf_weights.keys()))) return model #################################################### # PyTorch Models are constructed by sub-classing # - torch.nn.Module for the layers and # - PreTrainedModel for the models (it-self a sub-class of torch.nn.Module) #################################################### PARALLELIZE_DOCSTRING = r""" This is an experimental feature and is a subject to change at a moment's notice. 
Uses a device map to distribute attention modules of the model across several devices. If no device map is given, it will evenly distribute blocks across all devices. Args: device_map (:obj:`Dict[int, list]`, optional, defaults to None): A dictionary that maps attention modules to devices. Note that the embedding module and LMHead are always automatically mapped to the first device (for esoteric reasons). That means that the first device should have fewer attention modules mapped to it than other devices. For reference, the t5 models have the following number of attention modules: - t5-small: 6 - t5-base: 12 - t5-large: 24 - t5-3b: 24 - t5-11b: 24 Example:: # Here is an example of a device map on a machine with 4 GPUs using t5-3b, which has a total of 24 attention modules: model = T5ForConditionalGeneration.from_pretrained('t5-3b') device_map = {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9], 2: [10, 11, 12, 13, 14, 15, 16], 3: [17, 18, 19, 20, 21, 22, 23]} model.parallelize(device_map) """ DEPARALLELIZE_DOCSTRING = r""" Moves the model to cpu from a model parallel state. Example:: # On a 4 GPU machine with t5-3b: model = T5ForConditionalGeneration.from_pretrained('t5-3b') device_map = {0: [0, 1, 2], 1: [3, 4, 5, 6, 7, 8, 9], 2: [10, 11, 12, 13, 14, 15, 16], 3: [17, 18, 19, 20, 21, 22, 23]} model.parallelize(device_map) # Splits the model across several devices model.deparallelize() # Put the model back on cpu and cleans memory by calling torch.cuda.empty_cache() """ class T5LayerNorm(nn.Module): def __init__(self, hidden_size, eps=1e-6): """ Construct a layernorm module in the T5 style No bias and no subtraction of mean. """ super().__init__() self.weight = nn.Parameter(torch.ones(hidden_size)) self.variance_epsilon = eps def forward(self, hidden_states): # layer norm should always be calculated in float32 variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) # convert into float16 if necessary if self.weight.dtype == torch.float16: hidden_states = hidden_states.to(torch.float16) return self.weight * hidden_states class T5DenseReluDense(nn.Module): def __init__(self, config): super().__init__() self.wi = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) def forward(self, hidden_states): hidden_states = self.wi(hidden_states) hidden_states = F.relu(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states class T5DenseGatedGeluDense(nn.Module): def __init__(self, config): super().__init__() self.wi_0 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wi_1 = nn.Linear(config.d_model, config.d_ff, bias=False) self.wo = nn.Linear(config.d_ff, config.d_model, bias=False) self.dropout = nn.Dropout(config.dropout_rate) self.gelu_act = ACT2FN["gelu_new"] def forward(self, hidden_states): hidden_gelu = self.gelu_act(self.wi_0(hidden_states)) hidden_linear = self.wi_1(hidden_states) hidden_states = hidden_gelu * hidden_linear hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states class T5LayerFF(nn.Module): def __init__(self, config): super().__init__() if config.feed_forward_proj == "relu": self.DenseReluDense = T5DenseReluDense(config) elif config.feed_forward_proj == "gated-gelu": self.DenseReluDense = T5DenseGatedGeluDense(config) else: raise ValueError( 
f"{self.config.feed_forward_proj} is not supported. Choose between `relu` and `gated-gelu`" ) self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) forwarded_states = self.DenseReluDense(forwarded_states) hidden_states = hidden_states + self.dropout(forwarded_states) return hidden_states class T5Attention(nn.Module): def __init__(self, config: T5Config, has_relative_attention_bias=False): super().__init__() self.is_decoder = config.is_decoder self.has_relative_attention_bias = has_relative_attention_bias self.relative_attention_num_buckets = config.relative_attention_num_buckets self.d_model = config.d_model self.key_value_proj_dim = config.d_kv self.n_heads = config.num_heads self.dropout = config.dropout_rate self.inner_dim = self.n_heads * self.key_value_proj_dim # Mesh TensorFlow initialization to avoid scaling before softmax self.q = nn.Linear(self.d_model, self.inner_dim, bias=False) self.k = nn.Linear(self.d_model, self.inner_dim, bias=False) self.v = nn.Linear(self.d_model, self.inner_dim, bias=False) self.o = nn.Linear(self.inner_dim, self.d_model, bias=False) if self.has_relative_attention_bias: self.relative_attention_bias = nn.Embedding(self.relative_attention_num_buckets, self.n_heads) self.pruned_heads = set() def prune_heads(self, heads): if len(heads) == 0: return heads, index = find_pruneable_heads_and_indices( heads, self.n_heads, self.key_value_proj_dim, self.pruned_heads ) # Prune linear layers self.q = prune_linear_layer(self.q, index) self.k = prune_linear_layer(self.k, index) self.v = prune_linear_layer(self.v, index) self.o = prune_linear_layer(self.o, index, dim=1) # Update hyper params self.n_heads = self.n_heads - len(heads) self.inner_dim = self.key_value_proj_dim * self.n_heads self.pruned_heads = self.pruned_heads.union(heads) @staticmethod def _relative_position_bucket(relative_position, bidirectional=True, num_buckets=32, max_distance=128): """ Adapted from Mesh Tensorflow: https://github.com/tensorflow/mesh/blob/0cb87fe07da627bf0b7e60475d59f95ed6b5be3d/mesh_tensorflow/transformer/transformer_layers.py#L593 Translate relative position to a bucket number for relative attention. The relative position is defined as memory_position - query_position, i.e. the distance in tokens from the attending position to the attended-to position. If bidirectional=False, then positive relative positions are invalid. We use smaller buckets for small absolute relative_position and larger buckets for larger absolute relative_positions. All relative positions >=max_distance map to the same bucket. All relative positions <=-max_distance map to the same bucket. 
This should allow for more graceful generalization to longer sequences than the model has been trained on Args: relative_position: an int32 Tensor bidirectional: a boolean - whether the attention is bidirectional num_buckets: an integer max_distance: an integer Returns: a Tensor with the same shape as relative_position, containing int32 values in the range [0, num_buckets) """ relative_buckets = 0 if bidirectional: num_buckets //= 2 relative_buckets += (relative_position > 0).to(torch.long) * num_buckets relative_position = torch.abs(relative_position) else: relative_position = -torch.min(relative_position, torch.zeros_like(relative_position)) # now relative_position is in the range [0, inf) # half of the buckets are for exact increments in positions max_exact = num_buckets // 2 is_small = relative_position < max_exact # The other half of the buckets are for logarithmically bigger bins in positions up to max_distance relative_postion_if_large = max_exact + ( torch.log(relative_position.float() / max_exact) / math.log(max_distance / max_exact) * (num_buckets - max_exact) ).to(torch.long) relative_postion_if_large = torch.min( relative_postion_if_large, torch.full_like(relative_postion_if_large, num_buckets - 1) ) relative_buckets += torch.where(is_small, relative_position, relative_postion_if_large) return relative_buckets def compute_bias(self, query_length, key_length): """ Compute binned relative position bias """ context_position = torch.arange(query_length, dtype=torch.long)[:, None] memory_position = torch.arange(key_length, dtype=torch.long)[None, :] relative_position = memory_position - context_position # shape (query_length, key_length) relative_position_bucket = self._relative_position_bucket( relative_position, # shape (query_length, key_length) bidirectional=(not self.is_decoder), num_buckets=self.relative_attention_num_buckets, ) relative_position_bucket = relative_position_bucket.to(self.relative_attention_bias.weight.device) values = self.relative_attention_bias(relative_position_bucket) # shape (query_length, key_length, num_heads) values = values.permute([2, 0, 1]).unsqueeze(0) # shape (1, num_heads, query_length, key_length) return values def forward( self, hidden_states, mask=None, key_value_states=None, position_bias=None, past_key_value=None, layer_head_mask=None, query_length=None, use_cache=False, output_attentions=False, ): """ Self-attention (if key_value_states is None) or attention over source sentence (provided by key_value_states). """ # Input is (batch_size, seq_length, dim) # Mask is (batch_size, key_length) (non-causal) or (batch_size, key_length, key_length) # past_key_value[0] is (batch_size, n_heads, q_len - 1, dim_per_head) batch_size, seq_length = hidden_states.shape[:2] real_seq_length = seq_length if past_key_value is not None: assert ( len(past_key_value) == 2 ), "past_key_value should have 2 past states: keys and values. 
Got {} past states".format( len(past_key_value) ) real_seq_length += past_key_value[0].shape[2] if query_length is None else query_length key_length = real_seq_length if key_value_states is None else key_value_states.shape[1] def shape(states): """ projection """ return states.view(batch_size, -1, self.n_heads, self.key_value_proj_dim).transpose(1, 2) def unshape(states): """ reshape """ return states.transpose(1, 2).contiguous().view(batch_size, -1, self.inner_dim) def project(hidden_states, proj_layer, key_value_states, past_key_value): """ projects hidden states correctly to key/query states """ if key_value_states is None: # self-attn # (batch_size, n_heads, seq_length, dim_per_head) hidden_states = shape(proj_layer(hidden_states)) elif past_key_value is None: # cross-attn # (batch_size, n_heads, seq_length, dim_per_head) hidden_states = shape(proj_layer(key_value_states)) if past_key_value is not None: if key_value_states is None: # self-attn # (batch_size, n_heads, key_length, dim_per_head) hidden_states = torch.cat([past_key_value, hidden_states], dim=2) else: # cross-attn hidden_states = past_key_value return hidden_states # get query states query_states = shape(self.q(hidden_states)) # (batch_size, n_heads, seq_length, dim_per_head) # get key/value states key_states = project( hidden_states, self.k, key_value_states, past_key_value[0] if past_key_value is not None else None ) value_states = project( hidden_states, self.v, key_value_states, past_key_value[1] if past_key_value is not None else None ) # compute scores scores = torch.matmul( query_states, key_states.transpose(3, 2) ) # equivalent of torch.einsum("bnqd,bnkd->bnqk", query_states, key_states), compatible with onnx op>9 if position_bias is None: if not self.has_relative_attention_bias: position_bias = torch.zeros( (1, self.n_heads, real_seq_length, key_length), device=scores.device, dtype=scores.dtype ) else: position_bias = self.compute_bias(real_seq_length, key_length) # if key and values are already calculated # we want only the last query position bias if past_key_value is not None: position_bias = position_bias[:, :, -seq_length:, :] if mask is not None: position_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length) scores += position_bias attn_weights = F.softmax(scores.float(), dim=-1).type_as( scores ) # (batch_size, n_heads, seq_length, key_length) attn_weights = F.dropout( attn_weights, p=self.dropout, training=self.training ) # (batch_size, n_heads, seq_length, key_length) # Mask heads if we want to if layer_head_mask is not None: attn_weights = attn_weights * layer_head_mask attn_output = unshape(torch.matmul(attn_weights, value_states)) # (batch_size, seq_length, dim) attn_output = self.o(attn_output) present_key_value_state = (key_states, value_states) if (self.is_decoder and use_cache) else None outputs = (attn_output,) + (present_key_value_state,) + (position_bias,) if output_attentions: outputs = outputs + (attn_weights,) return outputs class T5LayerSelfAttention(nn.Module): def __init__(self, config, has_relative_attention_bias=False): super().__init__() self.SelfAttention = T5Attention(config, has_relative_attention_bias=has_relative_attention_bias) self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) def forward( self, hidden_states, attention_mask=None, position_bias=None, layer_head_mask=None, past_key_value=None, use_cache=False, output_attentions=False, ): normed_hidden_states = 
self.layer_norm(hidden_states) attention_output = self.SelfAttention( normed_hidden_states, mask=attention_mask, position_bias=position_bias, layer_head_mask=layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states = hidden_states + self.dropout(attention_output[0]) outputs = (hidden_states,) + attention_output[1:] # add attentions if we output them return outputs class T5LayerCrossAttention(nn.Module): def __init__(self, config): super().__init__() self.EncDecAttention = T5Attention(config, has_relative_attention_bias=False) self.layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) def forward( self, hidden_states, key_value_states, attention_mask=None, position_bias=None, layer_head_mask=None, past_key_value=None, use_cache=False, query_length=None, output_attentions=False, ): normed_hidden_states = self.layer_norm(hidden_states) attention_output = self.EncDecAttention( normed_hidden_states, mask=attention_mask, key_value_states=key_value_states, position_bias=position_bias, layer_head_mask=layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, query_length=query_length, output_attentions=output_attentions, ) layer_output = hidden_states + self.dropout(attention_output[0]) outputs = (layer_output,) + attention_output[1:] # add attentions if we output them return outputs class T5Block(nn.Module): def __init__(self, config, has_relative_attention_bias=False): super().__init__() self.is_decoder = config.is_decoder self.layer = nn.ModuleList() self.layer.append(T5LayerSelfAttention(config, has_relative_attention_bias=has_relative_attention_bias)) if self.is_decoder: self.layer.append(T5LayerCrossAttention(config)) self.layer.append(T5LayerFF(config)) def forward( self, hidden_states, attention_mask=None, position_bias=None, encoder_hidden_states=None, encoder_attention_mask=None, encoder_decoder_position_bias=None, layer_head_mask=None, encoder_layer_head_mask=None, past_key_value=None, use_cache=False, output_attentions=False, return_dict=True, ): if past_key_value is not None: assert self.is_decoder, "Only decoder can use `past_key_values`" expected_num_past_key_values = 2 if encoder_hidden_states is None else 4 error_message = "There should be {} past states. 
2 (past / key) for self attention.{} Got {} past key / value states".format( expected_num_past_key_values, "2 (past / key) for cross attention" if expected_num_past_key_values == 4 else "", len(past_key_value), ) assert len(past_key_value) == expected_num_past_key_values, error_message self_attn_past_key_value = past_key_value[:2] cross_attn_past_key_value = past_key_value[2:] else: self_attn_past_key_value, cross_attn_past_key_value = None, None self_attention_outputs = self.layer[0]( hidden_states, attention_mask=attention_mask, position_bias=position_bias, layer_head_mask=layer_head_mask, past_key_value=self_attn_past_key_value, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states, present_key_value_state = self_attention_outputs[:2] attention_outputs = self_attention_outputs[2:] # Keep self-attention outputs and relative position weights # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) do_cross_attention = self.is_decoder and encoder_hidden_states is not None if do_cross_attention: # the actual query length is unknown for cross attention # if using past key value states. Need to inject it here if present_key_value_state is not None: query_length = present_key_value_state[0].shape[2] else: query_length = None cross_attention_outputs = self.layer[1]( hidden_states, key_value_states=encoder_hidden_states, attention_mask=encoder_attention_mask, position_bias=encoder_decoder_position_bias, layer_head_mask=encoder_layer_head_mask, past_key_value=cross_attn_past_key_value, query_length=query_length, use_cache=use_cache, output_attentions=output_attentions, ) hidden_states = cross_attention_outputs[0] # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) # Combine self attn and cross attn key value states if present_key_value_state is not None: present_key_value_state = present_key_value_state + cross_attention_outputs[1] # Keep cross-attention outputs and relative position weights attention_outputs = attention_outputs + cross_attention_outputs[2:] # Apply Feed Forward layer hidden_states = self.layer[-1](hidden_states) # clamp inf values to enable fp16 training if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any(): clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states,) outputs = outputs + (present_key_value_state,) + attention_outputs return outputs # hidden-states, present_key_value_states, (self-attention weights), (self-attention position bias), (cross-attention weights), (cross-attention position bias) class T5PreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = T5Config load_tf_weights = load_tf_weights_in_t5 base_model_prefix = "transformer" is_parallelizable = True @property def dummy_inputs(self): input_ids = torch.tensor(DUMMY_INPUTS) input_mask = torch.tensor(DUMMY_MASK) dummy_inputs = { "decoder_input_ids": input_ids, "input_ids": input_ids, "decoder_attention_mask": input_mask, } return dummy_inputs def _init_weights(self, module): """ Initialize the weights """ factor = self.config.initializer_factor # Used for testing weights initialization if isinstance(module, T5LayerNorm): module.weight.data.fill_(factor * 1.0) elif isinstance(module, (T5Model, T5ForConditionalGeneration, T5EncoderModel)): # Mesh TensorFlow embeddings initialization # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L1624 module.shared.weight.data.normal_(mean=0.0, std=factor * 1.0) elif isinstance(module, T5DenseReluDense): # Mesh TensorFlow FF initialization # See https://github.com/tensorflow/mesh/blob/master/mesh_tensorflow/transformer/transformer_layers.py#L56 # and https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L89 module.wi.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5)) if hasattr(module.wi, "bias") and module.wi.bias is not None: module.wi.bias.data.zero_() module.wo.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_ff) ** -0.5)) if hasattr(module.wo, "bias") and module.wo.bias is not None: module.wo.bias.data.zero_() elif isinstance(module, T5DenseGatedGeluDense): module.wi_0.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5)) if hasattr(module.wi_0, "bias") and module.wi_0.bias is not None: module.wi_0.bias.data.zero_() module.wi_1.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_model) ** -0.5)) if hasattr(module.wi_1, "bias") and module.wi_1.bias is not None: module.wi_1.bias.data.zero_() module.wo.weight.data.normal_(mean=0.0, std=factor * ((self.config.d_ff) ** -0.5)) if hasattr(module.wo, "bias") and module.wo.bias is not None: module.wo.bias.data.zero_() elif isinstance(module, T5Attention): # Mesh TensorFlow attention initialization to avoid scaling before softmax # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/attention.py#L136 d_model = self.config.d_model key_value_proj_dim = self.config.d_kv n_heads = self.config.num_heads module.q.weight.data.normal_(mean=0.0, std=factor * ((d_model * key_value_proj_dim) ** -0.5)) module.k.weight.data.normal_(mean=0.0, std=factor * (d_model ** -0.5)) module.v.weight.data.normal_(mean=0.0, std=factor * (d_model ** -0.5)) module.o.weight.data.normal_(mean=0.0, std=factor * ((n_heads * key_value_proj_dim) ** -0.5)) if module.has_relative_attention_bias: module.relative_attention_bias.weight.data.normal_(mean=0.0, std=factor * ((d_model) ** -0.5)) def _shift_right(self, input_ids): decoder_start_token_id = self.config.decoder_start_token_id pad_token_id = self.config.pad_token_id assert ( decoder_start_token_id is not None ), "self.model.config.decoder_start_token_id has to be defined. In T5 it is usually set to the pad_token_id. See T5 docs for more information" # shift inputs to the right shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[..., 1:] = input_ids[..., :-1].clone() shifted_input_ids[..., 0] = decoder_start_token_id assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined." 
# replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) assert torch.all(shifted_input_ids >= 0).item(), "Verify that `shifted_input_ids` has only positive values" return shifted_input_ids class T5Stack(T5PreTrainedModel): def __init__(self, config, embed_tokens=None): super().__init__(config) self.embed_tokens = embed_tokens self.is_decoder = config.is_decoder self.block = nn.ModuleList( [T5Block(config, has_relative_attention_bias=bool(i == 0)) for i in range(config.num_layers)] ) self.final_layer_norm = T5LayerNorm(config.d_model, eps=config.layer_norm_epsilon) self.dropout = nn.Dropout(config.dropout_rate) self.init_weights() # Model parallel self.model_parallel = False self.device_map = None @add_start_docstrings(PARALLELIZE_DOCSTRING) def parallelize(self, device_map=None): # Check validity of device_map self.device_map = ( get_device_map(len(self.block), range(torch.cuda.device_count())) if device_map is None else device_map ) assert_device_map(self.device_map, len(self.block)) self.model_parallel = True self.first_device = "cpu" if "cpu" in self.device_map.keys() else "cuda:" + str(min(self.device_map.keys())) self.last_device = "cuda:" + str(max(self.device_map.keys())) # Load onto devices for k, v in self.device_map.items(): for layer in v: cuda_device = "cuda:" + str(k) self.block[layer] = self.block[layer].to(cuda_device) # Set embed_tokens to first layer self.embed_tokens = self.embed_tokens.to(self.first_device) # Set final layer norm to last device self.final_layer_norm = self.final_layer_norm.to(self.last_device) @add_start_docstrings(PARALLELIZE_DOCSTRING) def deparallelize(self): self.model_parallel = False self.device_map = None self.first_device = "cpu" self.last_device = "cpu" for i in range(len(self.block)): self.block[i] = self.block[i].to("cpu") self.embed_tokens = self.embed_tokens.to("cpu") self.final_layer_norm = self.final_layer_norm.to("cpu") torch.cuda.empty_cache() def get_input_embeddings(self): return self.embed_tokens def set_input_embeddings(self, new_embeddings): self.embed_tokens = new_embeddings def forward( self, input_ids=None, attention_mask=None, encoder_hidden_states=None, encoder_attention_mask=None, inputs_embeds=None, head_mask=None, encoder_head_mask=None, past_key_values=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): # Model parallel if self.model_parallel: torch.cuda.set_device(self.first_device) self.embed_tokens = self.embed_tokens.to(self.first_device) use_cache = use_cache if use_cache is not None else self.config.use_cache output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and inputs_embeds is not None: err_msg_prefix = "decoder_" if self.is_decoder else "" raise ValueError( f"You cannot specify both {err_msg_prefix}inputs and {err_msg_prefix}inputs_embeds at the same time" ) elif input_ids is not None: input_shape = input_ids.size() input_ids = input_ids.view(-1, input_shape[-1]) elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: err_msg_prefix = "decoder_" if self.is_decoder else "" raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds") if inputs_embeds is 
None: assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings" inputs_embeds = self.embed_tokens(input_ids) batch_size, seq_length = input_shape # required mask seq length can be calculated via length of past mask_seq_length = past_key_values[0][0].shape[2] + seq_length if past_key_values is not None else seq_length if use_cache is True: assert self.is_decoder, ":obj:`use_cache` can only be set to `True` if {} is used as a decoder".format( self ) if attention_mask is None: attention_mask = torch.ones(batch_size, mask_seq_length).to(inputs_embeds.device) if self.is_decoder and encoder_attention_mask is None and encoder_hidden_states is not None: encoder_seq_length = encoder_hidden_states.shape[1] encoder_attention_mask = torch.ones( batch_size, encoder_seq_length, device=inputs_embeds.device, dtype=torch.long ) # initialize past_key_values with `None` if past does not exist if past_key_values is None: past_key_values = [None] * len(self.block) # ourselves in which case we just need to make it broadcastable to all heads. extended_attention_mask = self.get_extended_attention_mask(attention_mask, input_shape, inputs_embeds.device) if self.is_decoder and encoder_attention_mask is not None: encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) else: encoder_extended_attention_mask = None # Prepare head mask if needed head_mask = self.get_head_mask(head_mask, self.config.num_layers) encoder_head_mask = self.get_head_mask(encoder_head_mask, self.config.num_layers) present_key_value_states = () if use_cache else None all_hidden_states = () if output_hidden_states else None all_attentions = () if output_attentions else None all_cross_attentions = () if (output_attentions and self.is_decoder) else None position_bias = None encoder_decoder_position_bias = None hidden_states = self.dropout(inputs_embeds) for i, (layer_module, past_key_value) in enumerate(zip(self.block, past_key_values)): layer_head_mask = head_mask[i] encoder_layer_head_mask = encoder_head_mask[i] # Model parallel if self.model_parallel: torch.cuda.set_device(hidden_states.device) # Ensure that attention_mask is always on the same device as hidden_states if attention_mask is not None: attention_mask = attention_mask.to(hidden_states.device) if position_bias is not None: position_bias = position_bias.to(hidden_states.device) if encoder_hidden_states is not None: encoder_hidden_states = encoder_hidden_states.to(hidden_states.device) if encoder_extended_attention_mask is not None: encoder_extended_attention_mask = encoder_extended_attention_mask.to(hidden_states.device) if encoder_decoder_position_bias is not None: encoder_decoder_position_bias = encoder_decoder_position_bias.to(hidden_states.device) if layer_head_mask is not None: layer_head_mask = layer_head_mask.to(hidden_states.device) if encoder_layer_head_mask is not None: encoder_layer_head_mask = encoder_layer_head_mask.to(hidden_states.device) if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_outputs = layer_module( hidden_states, attention_mask=extended_attention_mask, position_bias=position_bias, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_extended_attention_mask, encoder_decoder_position_bias=encoder_decoder_position_bias, layer_head_mask=layer_head_mask, encoder_layer_head_mask=encoder_layer_head_mask, past_key_value=past_key_value, use_cache=use_cache, output_attentions=output_attentions, ) # layer_outputs is a tuple with: # 
hidden-states, key-value-states, (self-attention weights), (self-attention position bias), (cross-attention weights), (cross-attention position bias) hidden_states, present_key_value_state = layer_outputs[:2] # We share the position biases between the layers - the first layer store them # layer_outputs = hidden-states, key-value-states (self-attention weights), # (self-attention position bias), (cross-attention weights), (cross-attention position bias) position_bias = layer_outputs[2] if self.is_decoder and encoder_hidden_states is not None: encoder_decoder_position_bias = layer_outputs[4 if output_attentions else 3] # append next layer key value states if use_cache: present_key_value_states = present_key_value_states + (present_key_value_state,) if output_attentions: all_attentions = all_attentions + (layer_outputs[3],) if self.is_decoder: all_cross_attentions = all_cross_attentions + (layer_outputs[5],) # Model Parallel: If it's the last layer for that device, put things on the next device if self.model_parallel: for k, v in self.device_map.items(): if i == v[-1] and "cuda:" + str(k) != self.last_device: hidden_states = hidden_states.to("cuda:" + str(k + 1)) hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.dropout(hidden_states) # Add last layer if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, present_key_value_states, all_hidden_states, all_attentions, all_cross_attentions, ] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=present_key_value_states, hidden_states=all_hidden_states, attentions=all_attentions, cross_attentions=all_cross_attentions, ) T5_START_DOCSTRING = r""" The T5 model was proposed in `Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer <https://arxiv.org/abs/1910.10683>`__ by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. It's an encoder decoder transformer pre-trained in a text-to-text denoising generative setting. This model inherits from :class:`~transformers.PreTrainedModel`. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch `torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>`__ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters: config (:class:`~transformers.T5Config`): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the :meth:`~transformers.PreTrainedModel.from_pretrained` method to load the model weights. """ T5_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained using :class:`~transformers.T5Tokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for detail. `What are input IDs? 
<../glossary.html#input-ids>`__ To know more on how to prepare :obj:`input_ids` for pretraining take a look a `T5 Training <./t5.html#training>`__. attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`): Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using :class:`~transformers.BartTokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for details. `What are input IDs? <../glossary.html#input-ids>`__ T5 uses the :obj:`pad_token_id` as the starting token for :obj:`decoder_input_ids` generation. If :obj:`past_key_values` is used, optionally only the last :obj:`decoder_input_ids` have to be input (see :obj:`past_key_values`). To know more on how to prepare :obj:`decoder_input_ids` for pretraining take a look at `T5 Training <./t5.html#training>`__. If :obj:`decoder_input_ids` and :obj:`decoder_inputs_embeds` are both unset, :obj:`decoder_input_ids` takes the value of :obj:`input_ids`. decoder_attention_mask (:obj:`torch.BoolTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`): Default behavior: generate a tensor that ignores pad tokens in :obj:`decoder_input_ids`. Causal mask will also be used by default. head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`): Mask to nullify selected heads of the self-attention modules in the encoder. Mask values selected in ``[0, 1]``: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. decoder_head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`): Mask to nullify selected heads of the self-attention modules. in the decoder Mask values selected in ``[0, 1]``: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. encoder_outputs (:obj:`tuple(tuple(torch.FloatTensor)`, `optional`): Tuple consists of (:obj:`last_hidden_state`, :obj:`optional`: `hidden_states`, :obj:`optional`: `attentions`) :obj:`last_hidden_state` of shape :obj:`(batch_size, sequence_length, hidden_size)` is a sequence of hidden states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. decoder_inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, target_sequence_length, hidden_size)`, `optional`): Optionally, instead of passing :obj:`decoder_input_ids` you can choose to directly pass an embedded representation. If :obj:`past_key_values` is used, optionally only the last :obj:`decoder_inputs_embeds` have to be input (see :obj:`past_key_values`). This is useful if you want more control over how to convert :obj:`decoder_input_ids` indices into associated vectors than the model's internal embedding lookup matrix. If :obj:`decoder_input_ids` and :obj:`decoder_inputs_embeds` are both unset, :obj:`decoder_inputs_embeds` takes the value of :obj:`inputs_embeds`. use_cache (:obj:`bool`, `optional`): If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up decoding (see :obj:`past_key_values`). output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. """ T5_ENCODER_INPUTS_DOCSTRING = r""" Args: input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. T5 is a model with relative position embeddings so you should be able to pad the inputs on both the right and the left. Indices can be obtained using :class:`~transformers.T5Tokenizer`. See :meth:`transformers.PreTrainedTokenizer.encode` and :meth:`transformers.PreTrainedTokenizer.__call__` for detail. To know more on how to prepare :obj:`input_ids` for pretraining take a look a `T5 Training <./t5.html#training>`__. attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``: - 1 for tokens that are **not masked**, - 0 for tokens that are **masked**. `What are attention masks? <../glossary.html#attention-mask>`__ head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`): Mask to nullify selected heads of the self-attention modules. Mask values selected in ``[0, 1]``: - 1 indicates the head is **not masked**, - 0 indicates the head is **masked**. inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): Optionally, instead of passing :obj:`input_ids` you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert :obj:`input_ids` indices into associated vectors than the model's internal embedding lookup matrix. output_attentions (:obj:`bool`, `optional`): Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned tensors for more detail. output_hidden_states (:obj:`bool`, `optional`): Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for more detail. 
return_dict (:obj:`bool`, `optional`): Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple. """ # Warning messafe for FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask __HEAD_MASK_WARNING_MSG = """ The input argument `head_mask` was split into two arguments `head_mask` and `decoder_head_mask`. Currently, `decoder_head_mask` is set to copy `head_mask`, but this feature is deprecated and will be removed in future versions. If you do not want to use any `decoder_head_mask` now, please set `decoder_head_mask = torch.ones(num_layers, num_heads)`. """ @add_start_docstrings( "The bare T5 Model transformer outputting raw hidden-states" "without any specific head on top.", T5_START_DOCSTRING, ) class T5Model(T5PreTrainedModel): _keys_to_ignore_on_load_missing = [ r"encoder\.embed_tokens\.weight", r"decoder\.embed_tokens\.weight", ] _keys_to_ignore_on_load_unexpected = [ r"decoder\.block\.0\.layer\.1\.EncDecAttention\.relative_attention_bias\.weight", ] def __init__(self, config: T5Config): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.d_model) encoder_config = copy.deepcopy(config) encoder_config.is_decoder = False encoder_config.use_cache = False encoder_config.is_encoder_decoder = False self.encoder = T5Stack(encoder_config, self.shared) decoder_config = copy.deepcopy(config) decoder_config.is_decoder = True decoder_config.is_encoder_decoder = False decoder_config.num_layers = config.num_decoder_layers self.decoder = T5Stack(decoder_config, self.shared) self.init_weights() # Model parallel self.model_parallel = False self.device_map = None @add_start_docstrings(PARALLELIZE_DOCSTRING) def parallelize(self, device_map=None): self.device_map = ( get_device_map(len(self.encoder.block), range(torch.cuda.device_count())) if device_map is None else device_map ) assert_device_map(self.device_map, len(self.encoder.block)) self.encoder.parallelize(self.device_map) self.decoder.parallelize(self.device_map) self.model_parallel = True @add_start_docstrings(DEPARALLELIZE_DOCSTRING) def deparallelize(self): self.encoder.deparallelize() self.decoder.deparallelize() self.encoder = self.encoder.to("cpu") self.decoder = self.decoder.to("cpu") self.model_parallel = False self.device_map = None torch.cuda.empty_cache() def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared = new_embeddings self.encoder.set_input_embeddings(new_embeddings) self.decoder.set_input_embeddings(new_embeddings) def get_encoder(self): return self.encoder def get_decoder(self): return self.decoder def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) @add_start_docstrings_to_model_forward(T5_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqModelOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" Returns: Example:: >>> from transformers import T5Tokenizer, T5Model >>> tokenizer = T5Tokenizer.from_pretrained('t5-small') >>> model = T5Model.from_pretrained('t5-small') >>> input_ids = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt").input_ids # Batch size 1 >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids # Batch size 1 >>> outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) >>> last_hidden_states = outputs.last_hidden_state """ use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask if head_mask is not None and decoder_head_mask is None: if self.config.num_layers == self.config.num_decoder_layers: warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) decoder_head_mask = head_mask # Encode if needed (training, first prediction pass) if encoder_outputs is None: encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) hidden_states = encoder_outputs[0] if self.model_parallel: torch.cuda.set_device(self.decoder.first_device) # Set device for model parallelism if self.model_parallel: torch.cuda.set_device(self.decoder.first_device) hidden_states = hidden_states.to(self.decoder.first_device) if decoder_input_ids is not None: decoder_input_ids = decoder_input_ids.to(self.decoder.first_device) if attention_mask is not None: attention_mask = attention_mask.to(self.decoder.first_device) if decoder_attention_mask is not None: decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device) # Decode decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, inputs_embeds=decoder_inputs_embeds, past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, head_mask=decoder_head_mask, encoder_head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) if not return_dict: return decoder_outputs + encoder_outputs return Seq2SeqModelOutput( last_hidden_state=decoder_outputs.last_hidden_state, past_key_values=decoder_outputs.past_key_values, 
decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) @add_start_docstrings("""T5 Model with a `language modeling` head on top. """, T5_START_DOCSTRING) class T5ForConditionalGeneration(T5PreTrainedModel): _keys_to_ignore_on_load_missing = [ r"encoder\.embed_tokens\.weight", r"decoder\.embed_tokens\.weight", r"lm_head\.weight", ] _keys_to_ignore_on_load_unexpected = [ r"decoder\.block\.0\.layer\.1\.EncDecAttention\.relative_attention_bias\.weight", ] def __init__(self, config): super().__init__(config) self.model_dim = config.d_model self.shared = nn.Embedding(config.vocab_size, config.d_model) encoder_config = copy.deepcopy(config) encoder_config.is_decoder = False encoder_config.use_cache = False encoder_config.is_encoder_decoder = False self.encoder = T5Stack(encoder_config, self.shared) decoder_config = copy.deepcopy(config) decoder_config.is_decoder = True decoder_config.is_encoder_decoder = False decoder_config.num_layers = config.num_decoder_layers self.decoder = T5Stack(decoder_config, self.shared) self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False) self.init_weights() # Model parallel self.model_parallel = False self.device_map = None @add_start_docstrings(PARALLELIZE_DOCSTRING) def parallelize(self, device_map=None): self.device_map = ( get_device_map(len(self.encoder.block), range(torch.cuda.device_count())) if device_map is None else device_map ) assert_device_map(self.device_map, len(self.encoder.block)) self.encoder.parallelize(self.device_map) self.decoder.parallelize(self.device_map) self.lm_head = self.lm_head.to(self.decoder.first_device) self.model_parallel = True @add_start_docstrings(DEPARALLELIZE_DOCSTRING) def deparallelize(self): self.encoder.deparallelize() self.decoder.deparallelize() self.encoder = self.encoder.to("cpu") self.decoder = self.decoder.to("cpu") self.lm_head = self.lm_head.to("cpu") self.model_parallel = False self.device_map = None torch.cuda.empty_cache() def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared = new_embeddings self.encoder.set_input_embeddings(new_embeddings) self.decoder.set_input_embeddings(new_embeddings) def set_output_embeddings(self, new_embeddings): self.lm_head = new_embeddings def get_output_embeddings(self): return self.lm_head def get_encoder(self): return self.encoder def get_decoder(self): return self.decoder @add_start_docstrings_to_model_forward(T5_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, labels=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[-100, 0, ..., config.vocab_size - 1]`. 
All labels set to ``-100`` are ignored (masked), the loss is only computed for labels in ``[0, ..., config.vocab_size]`` Returns: Examples:: >>> from transformers import T5Tokenizer, T5ForConditionalGeneration >>> tokenizer = T5Tokenizer.from_pretrained('t5-small') >>> model = T5ForConditionalGeneration.from_pretrained('t5-small') >>> input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids >>> labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2> </s>', return_tensors='pt').input_ids >>> outputs = model(input_ids=input_ids, labels=labels) >>> loss = outputs.loss >>> logits = outputs.logits >>> input_ids = tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids # Batch size 1 >>> outputs = model.generate(input_ids) """ use_cache = use_cache if use_cache is not None else self.config.use_cache return_dict = return_dict if return_dict is not None else self.config.use_return_dict # FutureWarning: head_mask was separated into two input args - head_mask, decoder_head_mask if head_mask is not None and decoder_head_mask is None: if self.config.num_layers == self.config.num_decoder_layers: warnings.warn(__HEAD_MASK_WARNING_MSG, FutureWarning) decoder_head_mask = head_mask # Encode if needed (training, first prediction pass) if encoder_outputs is None: # Convert encoder inputs in embeddings if needed encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): encoder_outputs = BaseModelOutput( last_hidden_state=encoder_outputs[0], hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, ) hidden_states = encoder_outputs[0] if self.model_parallel: torch.cuda.set_device(self.decoder.first_device) if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None: # get decoder inputs from shifting lm labels to the right decoder_input_ids = self._shift_right(labels) # If decoding with past key value states, only the last tokens # should be given as an input if past_key_values is not None: assert labels is None, "Decoder should not use cached key value states when training." 
if decoder_input_ids is not None: decoder_input_ids = decoder_input_ids[:, -1:] if decoder_inputs_embeds is not None: decoder_inputs_embeds = decoder_inputs_embeds[:, -1:] # Set device for model parallelism if self.model_parallel: torch.cuda.set_device(self.decoder.first_device) hidden_states = hidden_states.to(self.decoder.first_device) if decoder_input_ids is not None: decoder_input_ids = decoder_input_ids.to(self.decoder.first_device) if attention_mask is not None: attention_mask = attention_mask.to(self.decoder.first_device) if decoder_attention_mask is not None: decoder_attention_mask = decoder_attention_mask.to(self.decoder.first_device) # Decode decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, inputs_embeds=decoder_inputs_embeds, past_key_values=past_key_values, encoder_hidden_states=hidden_states, encoder_attention_mask=attention_mask, head_mask=decoder_head_mask, encoder_head_mask=head_mask, use_cache=use_cache, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) sequence_output = decoder_outputs[0] # Set device for model parallelism if self.model_parallel: torch.cuda.set_device(self.encoder.first_device) self.lm_head = self.lm_head.to(self.encoder.first_device) sequence_output = sequence_output.to(self.lm_head.weight.device) if self.config.tie_word_embeddings: # Rescale output before projecting on vocab # See https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/transformer/transformer.py#L586 sequence_output = sequence_output * (self.model_dim ** -0.5) lm_logits = self.lm_head(sequence_output) loss = None if labels is not None: loss_fct = CrossEntropyLoss(ignore_index=-100) loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)), labels.view(-1)) # TODO(thom): Add z_loss https://github.com/tensorflow/mesh/blob/fa19d69eafc9a482aff0b59ddd96b025c0cb207d/mesh_tensorflow/layers.py#L666 if not return_dict: output = (lm_logits,) + decoder_outputs[1:] + encoder_outputs return ((loss,) + output) if loss is not None else output return Seq2SeqLMOutput( loss=loss, logits=lm_logits, past_key_values=decoder_outputs.past_key_values, decoder_hidden_states=decoder_outputs.hidden_states, decoder_attentions=decoder_outputs.attentions, cross_attentions=decoder_outputs.cross_attentions, encoder_last_hidden_state=encoder_outputs.last_hidden_state, encoder_hidden_states=encoder_outputs.hidden_states, encoder_attentions=encoder_outputs.attentions, ) def prepare_inputs_for_generation( self, input_ids, past=None, attention_mask=None, use_cache=None, encoder_outputs=None, **kwargs ): # cut decoder_input_ids if past is used if past is not None: input_ids = input_ids[:, -1:] return { "decoder_input_ids": input_ids, "past_key_values": past, "encoder_outputs": encoder_outputs, "attention_mask": attention_mask, "use_cache": use_cache, } def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor): return self._shift_right(labels) def _reorder_cache(self, past, beam_idx): # if decoder past is not included in output # speedy decoding is disabled and no need to reorder if past is None: logger.warning("You might want to consider setting `use_cache=True` to speed up decoding") return past reordered_decoder_past = () for layer_past_states in past: # get the correct batch idx from layer past batch dim # batch dim of `past` is at 2nd position reordered_layer_past_states = () for layer_past_state in layer_past_states: # need to set correct `past` for each of the 
four key / value states reordered_layer_past_states = reordered_layer_past_states + ( layer_past_state.index_select(0, beam_idx), ) assert reordered_layer_past_states[0].shape == layer_past_states[0].shape assert len(reordered_layer_past_states) == len(layer_past_states) reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,) return reordered_decoder_past @add_start_docstrings( "The bare T5 Model transformer outputting encoder's raw hidden-states" "without any specific head on top.", T5_START_DOCSTRING, ) class T5EncoderModel(T5PreTrainedModel): authorized_missing_keys = [ r"encoder\.embed_tokens\.weight", ] def __init__(self, config: T5Config): super().__init__(config) self.shared = nn.Embedding(config.vocab_size, config.d_model) encoder_config = copy.deepcopy(config) encoder_config.use_cache = False encoder_config.is_encoder_decoder = False self.encoder = T5Stack(encoder_config, self.shared) self.init_weights() # Model parallel self.model_parallel = False self.device_map = None @add_start_docstrings(PARALLELIZE_DOCSTRING) def parallelize(self, device_map=None): self.device_map = ( get_device_map(len(self.encoder.block), range(torch.cuda.device_count())) if device_map is None else device_map ) assert_device_map(self.device_map, len(self.encoder.block)) self.encoder.parallelize(self.device_map) self.model_parallel = True @add_start_docstrings(DEPARALLELIZE_DOCSTRING) def deparallelize(self): self.encoder.deparallelize() self.encoder = self.encoder.to("cpu") self.model_parallel = False self.device_map = None torch.cuda.empty_cache() def get_input_embeddings(self): return self.shared def set_input_embeddings(self, new_embeddings): self.shared = new_embeddings self.encoder.set_input_embeddings(new_embeddings) def get_encoder(self): return self.encoder def _prune_heads(self, heads_to_prune): """ Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base class PreTrainedModel """ for layer, heads in heads_to_prune.items(): self.encoder.layer[layer].attention.prune_heads(heads) @add_start_docstrings_to_model_forward(T5_ENCODER_INPUTS_DOCSTRING) @replace_return_docstrings(output_type=BaseModelOutput, config_class=_CONFIG_FOR_DOC) def forward( self, input_ids=None, attention_mask=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" Returns: Example:: >>> from transformers import T5Tokenizer, T5EncoderModel >>> tokenizer = T5Tokenizer.from_pretrained('t5-small') >>> model = T5EncoderModel.from_pretrained('t5-small') >>> input_ids = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt").input_ids # Batch size 1 >>> outputs = model(input_ids=input_ids) >>> last_hidden_states = outputs.last_hidden_state """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, inputs_embeds=inputs_embeds, head_mask=head_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) return encoder_outputs
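The two usage sketches below are editorial additions, not part of the original modeling_t5.py; treat them as illustrations under stated assumptions rather than reference implementations. The first exercises the model-parallel API defined above (``parallelize`` expects a ``device_map`` of ``{device_id: [block indices]}`` that is validated by ``assert_device_map``); the 3/3 layer split assumes a 6-block ``t5-small`` and two visible GPUs. The second walks a manual greedy-decoding loop to show the cache behaviour implemented in ``T5ForConditionalGeneration.forward``, which keeps only the last ``decoder_input_ids`` once ``past_key_values`` is supplied.

    >>> # Sketch 1: model parallelism (assumes 2 CUDA devices; adjust the split to your hardware)
    >>> import torch
    >>> from transformers import T5Tokenizer, T5ForConditionalGeneration
    >>> tokenizer = T5Tokenizer.from_pretrained('t5-small')
    >>> model = T5ForConditionalGeneration.from_pretrained('t5-small')
    >>> device_map = {0: [0, 1, 2], 1: [3, 4, 5]}  # hypothetical split of the 6 blocks over 2 GPUs
    >>> model.parallelize(device_map)
    >>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids.to("cuda:0")
    >>> outputs = model.generate(input_ids)
    >>> tokenizer.decode(outputs[0], skip_special_tokens=True)
    >>> model.deparallelize()  # back to CPU, CUDA cache emptied

    >>> # Sketch 2: manual greedy decoding on CPU, reusing past_key_values between steps
    >>> model = T5ForConditionalGeneration.from_pretrained('t5-small')
    >>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
    >>> decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
    >>> past_key_values = None
    >>> for _ in range(20):
    ...     # the encoder is re-run each step here for simplicity; forward() trims decoder_input_ids
    ...     # to its last token as soon as past_key_values is passed (see the code above)
    ...     out = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids,
    ...                 past_key_values=past_key_values, use_cache=True)
    ...     past_key_values = out.past_key_values
    ...     next_token = out.logits[:, -1:].argmax(-1)
    ...     decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    ...     if next_token.item() == model.config.eos_token_id:
    ...         break
    >>> tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)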
AdaMix/src/transformers/models/t5/modeling_t5.py
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Convert Wav2Vec2 checkpoint.""" import argparse import fairseq import torch from transformers import Wav2Vec2Config, Wav2Vec2ForCTC, Wav2Vec2Model, logging logging.set_verbosity_info() logger = logging.get_logger(__name__) MAPPING = { "post_extract_proj": "feature_projection.projection", "encoder.pos_conv.0": "encoder.pos_conv_embed.conv", "self_attn.k_proj": "encoder.layers.*.attention.k_proj", "self_attn.v_proj": "encoder.layers.*.attention.v_proj", "self_attn.q_proj": "encoder.layers.*.attention.q_proj", "self_attn.out_proj": "encoder.layers.*.attention.out_proj", "self_attn_layer_norm": "encoder.layers.*.layer_norm", "fc1": "encoder.layers.*.feed_forward.intermediate_dense", "fc2": "encoder.layers.*.feed_forward.output_dense", "final_layer_norm": "encoder.layers.*.final_layer_norm", "encoder.layer_norm": "encoder.layer_norm", "w2v_model.layer_norm": "feature_projection.layer_norm", "w2v_encoder.proj": "lm_head", "mask_emb": "masked_spec_embed", } def set_recursively(hf_pointer, key, value, full_name, weight_type): for attribute in key.split("."): hf_pointer = getattr(hf_pointer, attribute) if weight_type is not None: hf_shape = getattr(hf_pointer, weight_type).shape else: hf_shape = hf_pointer.shape assert ( hf_shape == value.shape ), f"Shape of hf {key + '.' + weight_type} is {hf_shape}, but should be {value.shape} for {full_name}" if weight_type == "weight": hf_pointer.weight.data = value elif weight_type == "weight_g": hf_pointer.weight_g.data = value elif weight_type == "weight_v": hf_pointer.weight_v.data = value elif weight_type == "bias": hf_pointer.bias.data = value else: hf_pointer.data = value logger.info(f"{key + '.' + weight_type if weight_type is not None else ''} was initialized from {full_name}.") def recursively_load_weights(fairseq_model, hf_model, is_finetuned): unused_weights = [] fairseq_dict = fairseq_model.state_dict() feature_extractor = hf_model.wav2vec2.feature_extractor if is_finetuned else hf_model.feature_extractor for name, value in fairseq_dict.items(): is_used = False if "conv_layers" in name: load_conv_layer( name, value, feature_extractor, unused_weights, hf_model.config.feat_extract_norm == "group", ) is_used = True else: for key, mapped_key in MAPPING.items(): mapped_key = "wav2vec2." 
+ mapped_key if (is_finetuned and mapped_key != "lm_head") else mapped_key if key in name or (key.split("w2v_model.")[-1] == name.split(".")[0] and not is_finetuned): is_used = True if "*" in mapped_key: layer_index = name.split(key)[0].split(".")[-2] mapped_key = mapped_key.replace("*", layer_index) if "weight_g" in name: weight_type = "weight_g" elif "weight_v" in name: weight_type = "weight_v" elif "weight" in name: weight_type = "weight" elif "bias" in name: weight_type = "bias" else: weight_type = None set_recursively(hf_model, mapped_key, value, name, weight_type) continue if not is_used: unused_weights.append(name) logger.warn(f"Unused weights: {unused_weights}") def load_conv_layer(full_name, value, feature_extractor, unused_weights, use_group_norm): name = full_name.split("conv_layers.")[-1] items = name.split(".") layer_id = int(items[0]) type_id = int(items[1]) if type_id == 0: if "bias" in name: assert ( value.shape == feature_extractor.conv_layers[layer_id].conv.bias.data.shape ), f"{full_name} has size {value.shape}, but {feature_extractor.conv_layers[layer_id].conv.bias.data.shape} was found." feature_extractor.conv_layers[layer_id].conv.bias.data = value logger.info(f"Feat extract conv layer {layer_id} was initialized from {full_name}.") elif "weight" in name: assert ( value.shape == feature_extractor.conv_layers[layer_id].conv.weight.data.shape ), f"{full_name} has size {value.shape}, but {feature_extractor.conv_layers[layer_id].conv.weight.data.shape} was found." feature_extractor.conv_layers[layer_id].conv.weight.data = value logger.info(f"Feat extract conv layer {layer_id} was initialized from {full_name}.") elif (type_id == 2 and not use_group_norm) or (type_id == 2 and layer_id == 0 and use_group_norm): if "bias" in name: assert ( value.shape == feature_extractor.conv_layers[layer_id].layer_norm.bias.data.shape ), f"{full_name} has size {value.shape}, but {feature_extractor[layer_id].layer_norm.bias.data.shape} was found." feature_extractor.conv_layers[layer_id].layer_norm.bias.data = value logger.info(f"Feat extract layer norm weight of layer {layer_id} was initialized from {full_name}.") elif "weight" in name: assert ( value.shape == feature_extractor.conv_layers[layer_id].layer_norm.weight.data.shape ), f"{full_name} has size {value.shape}, but {feature_extractor[layer_id].layer_norm.weight.data.shape} was found." feature_extractor.conv_layers[layer_id].layer_norm.weight.data = value logger.info(f"Feat extract layer norm weight of layer {layer_id} was initialized from {full_name}.") else: unused_weights.append(full_name) @torch.no_grad() def convert_wav2vec2_checkpoint( checkpoint_path, pytorch_dump_folder_path, config_path=None, dict_path=None, is_finetuned=True ): """ Copy/paste/tweak model's weights to transformers design. 
""" if config_path is not None: config = Wav2Vec2Config.from_pretrained(config_path) else: config = Wav2Vec2Config() if is_finetuned: hf_wav2vec = Wav2Vec2ForCTC(config) else: hf_wav2vec = Wav2Vec2Model(config) if is_finetuned: model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task( [checkpoint_path], arg_overrides={"data": dict_path} ) else: model, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task([checkpoint_path]) model = model[0].eval() recursively_load_weights(model, hf_wav2vec, is_finetuned) hf_wav2vec.save_pretrained(pytorch_dump_folder_path) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument("--pytorch_dump_folder_path", default=None, type=str, help="Path to the output PyTorch model.") parser.add_argument("--checkpoint_path", default=None, type=str, help="Path to fairseq checkpoint") parser.add_argument("--dict_path", default=None, type=str, help="Path to dict of fine-tuned model") parser.add_argument("--config_path", default=None, type=str, help="Path to hf config.json of model to convert") parser.add_argument( "--not_finetuned", action="store_true", help="Whether the model to convert is a fine-tuned model or not" ) args = parser.parse_args() convert_wav2vec2_checkpoint( args.checkpoint_path, args.pytorch_dump_folder_path, args.config_path, args.dict_path, not args.not_finetuned )
AdaMix/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
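A hedged invocation sketch for the conversion script above (an editorial addition, not part of the original file). The checkpoint, dictionary and output paths are placeholders; the reload at the end only verifies that the converted weights load back into ``Wav2Vec2ForCTC``.

    >>> # CLI equivalent:
    >>> #   python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
    >>> #       --checkpoint_path wav2vec_small_960h.pt --dict_path dict.ltr.txt \
    >>> #       --pytorch_dump_folder_path ./wav2vec2-converted
    >>> import torch
    >>> convert_wav2vec2_checkpoint(
    ...     checkpoint_path="wav2vec_small_960h.pt",          # placeholder fairseq checkpoint
    ...     pytorch_dump_folder_path="./wav2vec2-converted",  # placeholder output directory
    ...     dict_path="dict.ltr.txt",                         # fairseq letter dictionary (fine-tuned models)
    ...     is_finetuned=True,
    ... )
    >>> from transformers import Wav2Vec2ForCTC
    >>> model = Wav2Vec2ForCTC.from_pretrained("./wav2vec2-converted")
    >>> logits = model(torch.randn(1, 16000)).logits          # dummy 1 s of 16 kHz audio, shape check only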
# flake8: noqa # There's no way to ignore "F401 '...' imported but unused" warnings in this # module, but to preserve other warnings. So, don't check this module at all. # coding=utf-8 # Copyright 2018 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import warnings from typing import TYPE_CHECKING, Any, Dict, Optional, Tuple, Union from ..configuration_utils import PretrainedConfig from ..file_utils import is_tf_available, is_torch_available from ..modelcard import ModelCard from ..models.auto.tokenization_auto import AutoTokenizer from ..tokenization_utils import PreTrainedTokenizer from ..utils import logging from .base import ( ArgumentHandler, CsvPipelineDataFormat, JsonPipelineDataFormat, PipedPipelineDataFormat, Pipeline, PipelineDataFormat, PipelineException, get_default_model, get_framework, ) from .conversational import Conversation, ConversationalPipeline from .feature_extraction import FeatureExtractionPipeline from .fill_mask import FillMaskPipeline from .question_answering import QuestionAnsweringArgumentHandler, QuestionAnsweringPipeline from .table_question_answering import TableQuestionAnsweringArgumentHandler, TableQuestionAnsweringPipeline from .text2text_generation import SummarizationPipeline, Text2TextGenerationPipeline, TranslationPipeline from .text_classification import TextClassificationPipeline from .text_generation import TextGenerationPipeline from .token_classification import NerPipeline, TokenClassificationArgumentHandler, TokenClassificationPipeline from .zero_shot_classification import ZeroShotClassificationArgumentHandler, ZeroShotClassificationPipeline if is_tf_available(): import tensorflow as tf from ..models.auto.modeling_tf_auto import ( TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING, TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, TF_MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, TF_MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, TF_MODEL_WITH_LM_HEAD_MAPPING, TFAutoModel, TFAutoModelForCausalLM, TFAutoModelForMaskedLM, TFAutoModelForQuestionAnswering, TFAutoModelForSeq2SeqLM, TFAutoModelForSequenceClassification, TFAutoModelForTokenClassification, ) if is_torch_available(): import torch from ..models.auto.modeling_auto import ( MODEL_FOR_MASKED_LM_MAPPING, MODEL_FOR_QUESTION_ANSWERING_MAPPING, MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING, MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING, MODEL_FOR_TABLE_QUESTION_ANSWERING_MAPPING, MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING, AutoModel, AutoModelForCausalLM, AutoModelForMaskedLM, AutoModelForQuestionAnswering, AutoModelForSeq2SeqLM, AutoModelForSequenceClassification, AutoModelForTableQuestionAnswering, AutoModelForTokenClassification, ) if TYPE_CHECKING: from ..modeling_tf_utils import TFPreTrainedModel from ..modeling_utils import PreTrainedModel logger = logging.get_logger(__name__) # Register all the supported tasks here SUPPORTED_TASKS = { "feature-extraction": { "impl": FeatureExtractionPipeline, "tf": TFAutoModel if is_tf_available() else None, "pt": AutoModel if is_torch_available() else None, "default": {"model": {"pt": 
"distilbert-base-cased", "tf": "distilbert-base-cased"}}, }, "sentiment-analysis": { "impl": TextClassificationPipeline, "tf": TFAutoModelForSequenceClassification if is_tf_available() else None, "pt": AutoModelForSequenceClassification if is_torch_available() else None, "default": { "model": { "pt": "distilbert-base-uncased-finetuned-sst-2-english", "tf": "distilbert-base-uncased-finetuned-sst-2-english", }, }, }, "ner": { "impl": TokenClassificationPipeline, "tf": TFAutoModelForTokenClassification if is_tf_available() else None, "pt": AutoModelForTokenClassification if is_torch_available() else None, "default": { "model": { "pt": "dbmdz/bert-large-cased-finetuned-conll03-english", "tf": "dbmdz/bert-large-cased-finetuned-conll03-english", }, }, }, "question-answering": { "impl": QuestionAnsweringPipeline, "tf": TFAutoModelForQuestionAnswering if is_tf_available() else None, "pt": AutoModelForQuestionAnswering if is_torch_available() else None, "default": { "model": {"pt": "distilbert-base-cased-distilled-squad", "tf": "distilbert-base-cased-distilled-squad"}, }, }, "table-question-answering": { "impl": TableQuestionAnsweringPipeline, "pt": AutoModelForTableQuestionAnswering if is_torch_available() else None, "tf": None, "default": { "model": { "pt": "google/tapas-base-finetuned-wtq", "tokenizer": "google/tapas-base-finetuned-wtq", "tf": "google/tapas-base-finetuned-wtq", }, }, }, "fill-mask": { "impl": FillMaskPipeline, "tf": TFAutoModelForMaskedLM if is_tf_available() else None, "pt": AutoModelForMaskedLM if is_torch_available() else None, "default": {"model": {"pt": "distilroberta-base", "tf": "distilroberta-base"}}, }, "summarization": { "impl": SummarizationPipeline, "tf": TFAutoModelForSeq2SeqLM if is_tf_available() else None, "pt": AutoModelForSeq2SeqLM if is_torch_available() else None, "default": {"model": {"pt": "sshleifer/distilbart-cnn-12-6", "tf": "t5-small"}}, }, # This task is a special case as it's parametrized by SRC, TGT languages. 
"translation": { "impl": TranslationPipeline, "tf": TFAutoModelForSeq2SeqLM if is_tf_available() else None, "pt": AutoModelForSeq2SeqLM if is_torch_available() else None, "default": { ("en", "fr"): {"model": {"pt": "t5-base", "tf": "t5-base"}}, ("en", "de"): {"model": {"pt": "t5-base", "tf": "t5-base"}}, ("en", "ro"): {"model": {"pt": "t5-base", "tf": "t5-base"}}, }, }, "text2text-generation": { "impl": Text2TextGenerationPipeline, "tf": TFAutoModelForSeq2SeqLM if is_tf_available() else None, "pt": AutoModelForSeq2SeqLM if is_torch_available() else None, "default": {"model": {"pt": "t5-base", "tf": "t5-base"}}, }, "text-generation": { "impl": TextGenerationPipeline, "tf": TFAutoModelForCausalLM if is_tf_available() else None, "pt": AutoModelForCausalLM if is_torch_available() else None, "default": {"model": {"pt": "gpt2", "tf": "gpt2"}}, }, "zero-shot-classification": { "impl": ZeroShotClassificationPipeline, "tf": TFAutoModelForSequenceClassification if is_tf_available() else None, "pt": AutoModelForSequenceClassification if is_torch_available() else None, "default": { "model": {"pt": "facebook/bart-large-mnli", "tf": "roberta-large-mnli"}, "config": {"pt": "facebook/bart-large-mnli", "tf": "roberta-large-mnli"}, "tokenizer": {"pt": "facebook/bart-large-mnli", "tf": "roberta-large-mnli"}, }, }, "conversational": { "impl": ConversationalPipeline, "tf": TFAutoModelForCausalLM if is_tf_available() else None, "pt": AutoModelForCausalLM if is_torch_available() else None, "default": {"model": {"pt": "microsoft/DialoGPT-medium", "tf": "microsoft/DialoGPT-medium"}}, }, } def check_task(task: str) -> Tuple[Dict, Any]: """ Checks an incoming task string, to validate it's correct and return the default Pipeline and Model classes, and default models if they exist. Args: task (:obj:`str`): The task defining which pipeline will be returned. Currently accepted tasks are: - :obj:`"feature-extraction"` - :obj:`"sentiment-analysis"` - :obj:`"ner"` - :obj:`"question-answering"` - :obj:`"fill-mask"` - :obj:`"summarization"` - :obj:`"translation_xx_to_yy"` - :obj:`"translation"` - :obj:`"text-generation"` - :obj:`"conversational"` Returns: (task_defaults:obj:`dict`, task_options: (:obj:`tuple`, None)) The actual dictionary required to initialize the pipeline and some extra task options for parametrized tasks like "translation_XX_to_YY" """ if task in SUPPORTED_TASKS: targeted_task = SUPPORTED_TASKS[task] return targeted_task, None if task.startswith("translation"): tokens = task.split("_") if len(tokens) == 4 and tokens[0] == "translation" and tokens[2] == "to": targeted_task = SUPPORTED_TASKS["translation"] return targeted_task, (tokens[1], tokens[3]) raise KeyError("Invalid translation task {}, use 'translation_XX_to_YY' format".format(task)) raise KeyError( "Unknown task {}, available tasks are {}".format(task, list(SUPPORTED_TASKS.keys()) + ["translation_XX_to_YY"]) ) def pipeline( task: str, model: Optional = None, config: Optional[Union[str, PretrainedConfig]] = None, tokenizer: Optional[Union[str, PreTrainedTokenizer]] = None, framework: Optional[str] = None, revision: Optional[str] = None, use_fast: bool = True, model_kwargs: Dict[str, Any] = {}, **kwargs ) -> Pipeline: """ Utility factory method to build a :class:`~transformers.Pipeline`. Pipelines are made of: - A :doc:`tokenizer <tokenizer>` in charge of mapping raw textual input to token. - A :doc:`model <model>` to make predictions from the inputs. - Some (optional) post processing for enhancing model's output. 
Args: task (:obj:`str`): The task defining which pipeline will be returned. Currently accepted tasks are: - :obj:`"feature-extraction"`: will return a :class:`~transformers.FeatureExtractionPipeline`. - :obj:`"sentiment-analysis"`: will return a :class:`~transformers.TextClassificationPipeline`. - :obj:`"ner"`: will return a :class:`~transformers.TokenClassificationPipeline`. - :obj:`"question-answering"`: will return a :class:`~transformers.QuestionAnsweringPipeline`. - :obj:`"fill-mask"`: will return a :class:`~transformers.FillMaskPipeline`. - :obj:`"summarization"`: will return a :class:`~transformers.SummarizationPipeline`. - :obj:`"translation_xx_to_yy"`: will return a :class:`~transformers.TranslationPipeline`. - :obj:`"text2text-generation"`: will return a :class:`~transformers.Text2TextGenerationPipeline`. - :obj:`"text-generation"`: will return a :class:`~transformers.TextGenerationPipeline`. - :obj:`"zero-shot-classification:`: will return a :class:`~transformers.ZeroShotClassificationPipeline`. - :obj:`"conversational"`: will return a :class:`~transformers.ConversationalPipeline`. model (:obj:`str` or :obj:`~transformers.PreTrainedModel` or :obj:`~transformers.TFPreTrainedModel`, `optional`): The model that will be used by the pipeline to make predictions. This can be a model identifier or an actual instance of a pretrained model inheriting from :class:`~transformers.PreTrainedModel` (for PyTorch) or :class:`~transformers.TFPreTrainedModel` (for TensorFlow). If not provided, the default for the :obj:`task` will be loaded. config (:obj:`str` or :obj:`~transformers.PretrainedConfig`, `optional`): The configuration that will be used by the pipeline to instantiate the model. This can be a model identifier or an actual pretrained model configuration inheriting from :class:`~transformers.PretrainedConfig`. If not provided, the default configuration file for the requested model will be used. That means that if :obj:`model` is given, its default configuration will be used. However, if :obj:`model` is not supplied, this :obj:`task`'s default model's config is used instead. tokenizer (:obj:`str` or :obj:`~transformers.PreTrainedTokenizer`, `optional`): The tokenizer that will be used by the pipeline to encode data for the model. This can be a model identifier or an actual pretrained tokenizer inheriting from :class:`~transformers.PreTrainedTokenizer`. If not provided, the default tokenizer for the given :obj:`model` will be loaded (if it is a string). If :obj:`model` is not specified or not a string, then the default tokenizer for :obj:`config` is loaded (if it is a string). However, if :obj:`config` is also not given or not a string, then the default tokenizer for the given :obj:`task` will be loaded. framework (:obj:`str`, `optional`): The framework to use, either :obj:`"pt"` for PyTorch or :obj:`"tf"` for TensorFlow. The specified framework must be installed. If no framework is specified, will default to the one currently installed. If no framework is specified and both frameworks are installed, will default to the framework of the :obj:`model`, or to PyTorch if no model is provided. revision(:obj:`str`, `optional`, defaults to :obj:`"main"`): When passing a task name or a string model identifier: The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so ``revision`` can be any identifier allowed by git. 
use_fast (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether or not to use a Fast tokenizer if possible (a :class:`~transformers.PreTrainedTokenizerFast`). model_kwargs: Additional dictionary of keyword arguments passed along to the model's :obj:`from_pretrained(..., **model_kwargs)` function. kwargs: Additional keyword arguments passed along to the specific pipeline init (see the documentation for the corresponding pipeline class for possible values). Returns: :class:`~transformers.Pipeline`: A suitable pipeline for the task. Examples:: >>> from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer >>> # Sentiment analysis pipeline >>> pipeline('sentiment-analysis') >>> # Question answering pipeline, specifying the checkpoint identifier >>> pipeline('question-answering', model='distilbert-base-cased-distilled-squad', tokenizer='bert-base-cased') >>> # Named entity recognition pipeline, passing in a specific model and tokenizer >>> model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> pipeline('ner', model=model, tokenizer=tokenizer) """ # Retrieve the task targeted_task, task_options = check_task(task) # Use default model/config/tokenizer for the task if no model is provided if model is None: # At that point framework might still be undetermined model = get_default_model(targeted_task, framework, task_options) framework = framework or get_framework(model) task_class, model_class = targeted_task["impl"], targeted_task[framework] # Try to infer tokenizer from model or config name (if provided as str) if tokenizer is None: if isinstance(model, str): tokenizer = model elif isinstance(config, str): tokenizer = config else: # Impossible to guest what is the right tokenizer here raise Exception( "Impossible to guess which tokenizer to use. " "Please provided a PretrainedTokenizer class or a path/identifier to a pretrained tokenizer." ) modelcard = None # Try to infer modelcard from model or config name (if provided as str) if isinstance(model, str): modelcard = model elif isinstance(config, str): modelcard = config # Instantiate tokenizer if needed if isinstance(tokenizer, (str, tuple)): if isinstance(tokenizer, tuple): # For tuple we have (tokenizer name, {kwargs}) use_fast = tokenizer[1].pop("use_fast", use_fast) tokenizer = AutoTokenizer.from_pretrained( tokenizer[0], use_fast=use_fast, revision=revision, **tokenizer[1] ) else: tokenizer = AutoTokenizer.from_pretrained(tokenizer, revision=revision, use_fast=use_fast) # Instantiate config if needed if isinstance(config, str): config = AutoConfig.from_pretrained(config, revision=revision) # Instantiate modelcard if needed if isinstance(modelcard, str): modelcard = ModelCard.from_pretrained(modelcard, revision=revision) # Instantiate model if needed if isinstance(model, str): # Handle transparent TF/PT model conversion if framework == "pt" and model.endswith(".h5"): model_kwargs["from_tf"] = True logger.warning( "Model might be a TensorFlow model (ending with `.h5`) but TensorFlow is not available. " "Trying to load the model with PyTorch." ) elif framework == "tf" and model.endswith(".bin"): model_kwargs["from_pt"] = True logger.warning( "Model might be a PyTorch model (ending with `.bin`) but PyTorch is not available. " "Trying to load the model with Tensorflow." 
) if model_class is None: raise ValueError( f"Pipeline using {framework} framework, but this framework is not supported by this pipeline." ) model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs) if task == "translation" and model.config.task_specific_params: for key in model.config.task_specific_params: if key.startswith("translation"): task = key warnings.warn( '"translation" task was used, instead of "translation_XX_to_YY", defaulting to "{}"'.format( task ), UserWarning, ) break return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)
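An illustrative sketch (editorial addition) of how the parametrized translation task registered in ``SUPPORTED_TASKS`` above is resolved: ``check_task`` splits ``"translation_XX_to_YY"`` into the base task plus a language pair, and ``pipeline`` then falls back to the registered ``t5-base`` default when no model is supplied.

    >>> from transformers import pipeline
    >>> from transformers.pipelines import check_task
    >>> targeted_task, task_options = check_task("translation_en_to_de")
    >>> task_options
    ('en', 'de')
    >>> translator = pipeline("translation_en_to_de")  # downloads the t5-base default registered above
    >>> translator("Hugging Face is a technology company based in New York.")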
AdaMix/src/transformers/pipelines/__init__.py
# coding=utf-8 # Copyright 2020 The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ Tokenization classes for python tokenizers. For fast tokenizers (provided by HuggingFace's tokenizers library) see tokenization_utils_fast.py """ import itertools import re import unicodedata from typing import Any, Dict, List, Optional, Tuple, Union, overload from .file_utils import PaddingStrategy, TensorType, add_end_docstrings from .tokenization_utils_base import ( ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING, INIT_TOKENIZER_DOCSTRING, AddedToken, BatchEncoding, EncodedInput, EncodedInputPair, PreTokenizedInput, PreTokenizedInputPair, PreTrainedTokenizerBase, TextInput, TextInputPair, TruncationStrategy, ) from .utils import logging logger = logging.get_logger(__name__) # Slow tokenizers are saved in a vocabulary plus three separated files SPECIAL_TOKENS_MAP_FILE = "special_tokens_map.json" ADDED_TOKENS_FILE = "added_tokens.json" TOKENIZER_CONFIG_FILE = "tokenizer_config.json" def _is_whitespace(char): """Checks whether `char` is a whitespace character.""" # \t, \n, and \r are technically control characters but we treat them # as whitespace since they are generally considered as such. if char == " " or char == "\t" or char == "\n" or char == "\r": return True cat = unicodedata.category(char) if cat == "Zs": return True return False def _is_control(char): """Checks whether `char` is a control character.""" # These are technically control characters but we count them as whitespace # characters. if char == "\t" or char == "\n" or char == "\r": return False cat = unicodedata.category(char) if cat.startswith("C"): return True return False def _is_punctuation(char): """Checks whether `char` is a punctuation character.""" cp = ord(char) # We treat all non-letter/number ASCII as punctuation. # Characters such as "^", "$", and "`" are not in the Unicode # Punctuation class but we treat them as punctuation anyways, for # consistency. if (cp >= 33 and cp <= 47) or (cp >= 58 and cp <= 64) or (cp >= 91 and cp <= 96) or (cp >= 123 and cp <= 126): return True cat = unicodedata.category(char) if cat.startswith("P"): return True return False def _is_end_of_word(text): """Checks whether the last character in text is one of a punctuation, control or whitespace character.""" last_char = text[-1] return bool(_is_control(last_char) | _is_punctuation(last_char) | _is_whitespace(last_char)) def _is_start_of_word(text): """Checks whether the first character in text is one of a punctuation, control or whitespace character.""" first_char = text[0] return bool(_is_control(first_char) | _is_punctuation(first_char) | _is_whitespace(first_char)) @add_end_docstrings(INIT_TOKENIZER_DOCSTRING) class PreTrainedTokenizer(PreTrainedTokenizerBase): """ Base class for all slow tokenizers. Inherits from :class:`~transformers.tokenization_utils_base.PreTrainedTokenizerBase`. 
Handle all the shared methods for tokenization and special tokens as well as methods downloading/caching/loading pretrained tokenizers as well as adding tokens to the vocabulary. This class also contain the added tokens in a unified way on top of all tokenizers so we don't have to handle the specific vocabulary augmentation methods of the various underlying dictionary structures (BPE, sentencepiece...). """ def __init__(self, **kwargs): super().__init__(**kwargs) # Added tokens - We store this for both slow and fast tokenizers # until the serialization of Fast tokenizers is updated self.added_tokens_encoder: Dict[str, int] = {} self.added_tokens_decoder: Dict[int, str] = {} self.unique_no_split_tokens: List[str] = [] self._decode_use_source_tokenizer = False @property def is_fast(self) -> bool: return False @property def vocab_size(self) -> int: """ :obj:`int`: Size of the base vocabulary (without the added tokens). """ raise NotImplementedError def get_added_vocab(self) -> Dict[str, int]: """ Returns the added tokens in the vocabulary as a dictionary of token to index. Returns: :obj:`Dict[str, int]`: The added tokens. """ return self.added_tokens_encoder def __len__(self): """ Size of the full vocabulary with the added tokens. """ return self.vocab_size + len(self.added_tokens_encoder) def _add_tokens(self, new_tokens: Union[List[str], List[AddedToken]], special_tokens: bool = False) -> int: """ Add a list of new tokens to the tokenizer class. If the new tokens are not in the vocabulary, they are added to it with indices starting from length of the current vocabulary. Args: new_tokens (:obj:`List[str]`or :obj:`List[tokenizers.AddedToken]`): Token(s) to add in vocabulary. A token is only added if it's not already in the vocabulary (tested by checking if the tokenizer assign the index of the ``unk_token`` to them). special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not the tokens should be added as special tokens. Returns: :obj:`int`: The number of tokens actually added to the vocabulary. Examples:: # Let's see how to increase the vocabulary of Bert model and tokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertModel.from_pretrained('bert-base-uncased') num_added_toks = tokenizer.add_tokens(['new_tok1', 'my_new-tok2']) print('We have added', num_added_toks, 'tokens') # Note: resize_token_embeddings expects to receive the full size of the new vocabulary, i.e. the length of the tokenizer. model.resize_token_embeddings(len(tokenizer)) """ new_tokens = [str(tok) for tok in new_tokens] tokens_to_add = [] for token in new_tokens: assert isinstance(token, str) if not special_tokens and hasattr(self, "do_lower_case") and self.do_lower_case: token = token.lower() if ( token != self.unk_token and self.convert_tokens_to_ids(token) == self.convert_tokens_to_ids(self.unk_token) and token not in tokens_to_add ): tokens_to_add.append(token) if self.verbose: logger.info("Adding %s to the vocabulary", token) added_tok_encoder = dict((tok, len(self) + i) for i, tok in enumerate(tokens_to_add)) added_tok_decoder = {v: k for k, v in added_tok_encoder.items()} self.added_tokens_encoder.update(added_tok_encoder) self.added_tokens_decoder.update(added_tok_decoder) # Make sure we don't split on any special tokens (even they were already in the vocab before e.g. 
for Albert) if special_tokens: self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(new_tokens))) else: # Or on the newly added tokens self.unique_no_split_tokens = sorted(set(self.unique_no_split_tokens).union(set(tokens_to_add))) return len(tokens_to_add) def num_special_tokens_to_add(self, pair: bool = False) -> int: """ Returns the number of added tokens when encoding a sequence with special tokens. .. note:: This encodes a dummy input and checks the number of added tokens, and is therefore not efficient. Do not put this inside your training loop. Args: pair (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether the number of added tokens should be computed in the case of a sequence pair or a single sequence. Returns: :obj:`int`: Number of special tokens added to sequences. """ token_ids_0 = [] token_ids_1 = [] return len(self.build_inputs_with_special_tokens(token_ids_0, token_ids_1 if pair else None)) def tokenize(self, text: TextInput, **kwargs) -> List[str]: """ Converts a string in a sequence of tokens, using the tokenizer. Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces). Takes care of added tokens. Args: text (:obj:`str`): The sequence to be encoded. **kwargs (additional keyword arguments): Passed along to the model-specific ``prepare_for_tokenization`` preprocessing method. Returns: :obj:`List[str]`: The list of tokens. """ # Simple mapping string => AddedToken for special tokens with specific tokenization behaviors all_special_tokens_extended = dict( (str(t), t) for t in self.all_special_tokens_extended if isinstance(t, AddedToken) ) text, kwargs = self.prepare_for_tokenization(text, **kwargs) if kwargs: logger.warning(f"Keyword arguments {kwargs} not recognized.") # TODO: should this be in the base class? if hasattr(self, "do_lower_case") and self.do_lower_case: # convert non-special tokens to lowercase escaped_special_toks = [re.escape(s_tok) for s_tok in self.all_special_tokens] pattern = r"(" + r"|".join(escaped_special_toks) + r")|" + r"(.+?)" text = re.sub(pattern, lambda m: m.groups()[0] or m.groups()[1].lower(), text) def split_on_token(tok, text): result = [] tok_extended = all_special_tokens_extended.get(tok, None) split_text = text.split(tok) full_word = "" for i, sub_text in enumerate(split_text): # AddedToken can control whitespace stripping around them. # We use them for GPT2 and Roberta to have different behavior depending on the special token # Cf. 
https://github.com/huggingface/transformers/pull/2778 # and https://github.com/huggingface/transformers/issues/3788 if isinstance(tok_extended, AddedToken): if tok_extended.single_word: # Try to avoid splitting on token if ( i < len(split_text) - 1 and not _is_end_of_word(sub_text) and not _is_start_of_word(split_text[i + 1]) ): # Don't extract the special token full_word += sub_text + tok elif full_word: full_word += sub_text result.append(full_word) full_word = "" continue # Strip white spaces on the right if tok_extended.rstrip and i > 0: # A bit counter-intuitive but we strip the left of the string # since tok_extended.rstrip means the special token is eating all white spaces on its right sub_text = sub_text.lstrip() # Strip white spaces on the left if tok_extended.lstrip and i < len(split_text) - 1: sub_text = sub_text.rstrip() # Opposite here else: # We strip left and right by default if i < len(split_text) - 1: sub_text = sub_text.rstrip() if i > 0: sub_text = sub_text.lstrip() if i == 0 and not sub_text: result.append(tok) elif i == len(split_text) - 1: if sub_text: result.append(sub_text) else: pass else: if sub_text: result.append(sub_text) result.append(tok) return result def split_on_tokens(tok_list, text): if not text.strip(): return [] if not tok_list: return self._tokenize(text) tokenized_text = [] text_list = [text] for tok in tok_list: tokenized_text = [] for sub_text in text_list: if sub_text not in self.unique_no_split_tokens: tokenized_text.extend(split_on_token(tok, sub_text)) else: tokenized_text.append(sub_text) text_list = tokenized_text return list( itertools.chain.from_iterable( ( self._tokenize(token) if token not in self.unique_no_split_tokens else [token] for token in tokenized_text ) ) ) no_split_token = self.unique_no_split_tokens tokenized_text = split_on_tokens(no_split_token, text) return tokenized_text def _tokenize(self, text, **kwargs): """ Converts a string in a sequence of tokens (string), using the tokenizer. Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces). Do NOT take care of added tokens. """ raise NotImplementedError def convert_tokens_to_ids(self, tokens: Union[str, List[str]]) -> Union[int, List[int]]: """ Converts a token string (or a sequence of tokens) in a single integer id (or a sequence of ids), using the vocabulary. Args: tokens (:obj:`str` or :obj:`List[str]`): One or several token(s) to convert to token id(s). Returns: :obj:`int` or :obj:`List[int]`: The token id or list of token ids. 
""" if tokens is None: return None if isinstance(tokens, str): return self._convert_token_to_id_with_added_voc(tokens) ids = [] for token in tokens: ids.append(self._convert_token_to_id_with_added_voc(token)) return ids def _convert_token_to_id_with_added_voc(self, token): if token is None: return None if token in self.added_tokens_encoder: return self.added_tokens_encoder[token] return self._convert_token_to_id(token) def _convert_token_to_id(self, token): raise NotImplementedError def _encode_plus( self, text: Union[TextInput, PreTokenizedInput, EncodedInput], text_pair: Optional[Union[TextInput, PreTokenizedInput, EncodedInput]] = None, add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs ) -> BatchEncoding: def get_input_ids(text): if isinstance(text, str): tokens = self.tokenize(text, **kwargs) return self.convert_tokens_to_ids(tokens) elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str): if is_split_into_words: tokens = list( itertools.chain(*(self.tokenize(t, is_split_into_words=True, **kwargs) for t in text)) ) return self.convert_tokens_to_ids(tokens) else: return self.convert_tokens_to_ids(text) elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], int): return text else: if is_split_into_words: raise ValueError( f"Input {text} is not valid. Should be a string or a list/tuple of strings when `is_split_into_words=True`." ) else: raise ValueError( f"Input {text} is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers." ) if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers." "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast." 
"More information on available tokenizers at " "https://github.com/huggingface/transformers/pull/2674" ) first_ids = get_input_ids(text) second_ids = get_input_ids(text_pair) if text_pair is not None else None return self.prepare_for_model( first_ids, pair_ids=second_ids, add_special_tokens=add_special_tokens, padding=padding_strategy.value, truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_tensors=return_tensors, prepend_batch_axis=True, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, verbose=verbose, ) def _batch_encode_plus( self, batch_text_or_text_pairs: Union[ List[TextInput], List[TextInputPair], List[PreTokenizedInput], List[PreTokenizedInputPair], List[EncodedInput], List[EncodedInputPair], ], add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, is_split_into_words: bool = False, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[Union[str, TensorType]] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_offsets_mapping: bool = False, return_length: bool = False, verbose: bool = True, **kwargs ) -> BatchEncoding: def get_input_ids(text): if isinstance(text, str): tokens = self.tokenize(text, **kwargs) return self.convert_tokens_to_ids(tokens) elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], str): if is_split_into_words: tokens = list( itertools.chain(*(self.tokenize(t, is_split_into_words=True, **kwargs) for t in text)) ) return self.convert_tokens_to_ids(tokens) else: return self.convert_tokens_to_ids(text) elif isinstance(text, (list, tuple)) and len(text) > 0 and isinstance(text[0], int): return text else: raise ValueError( "Input is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers." ) if return_offsets_mapping: raise NotImplementedError( "return_offset_mapping is not available when using Python tokenizers." "To use this feature, change your tokenizer to one deriving from " "transformers.PreTrainedTokenizerFast." 
) input_ids = [] for ids_or_pair_ids in batch_text_or_text_pairs: if not isinstance(ids_or_pair_ids, (list, tuple)): ids, pair_ids = ids_or_pair_ids, None elif is_split_into_words and not isinstance(ids_or_pair_ids[0], (list, tuple)): ids, pair_ids = ids_or_pair_ids, None else: ids, pair_ids = ids_or_pair_ids first_ids = get_input_ids(ids) second_ids = get_input_ids(pair_ids) if pair_ids is not None else None input_ids.append((first_ids, second_ids)) batch_outputs = self._batch_prepare_for_model( input_ids, add_special_tokens=add_special_tokens, padding_strategy=padding_strategy, truncation_strategy=truncation_strategy, max_length=max_length, stride=stride, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=return_tensors, verbose=verbose, ) return BatchEncoding(batch_outputs) @add_end_docstrings(ENCODE_KWARGS_DOCSTRING, ENCODE_PLUS_ADDITIONAL_KWARGS_DOCSTRING) def _batch_prepare_for_model( self, batch_ids_pairs: List[Union[PreTokenizedInputPair, Tuple[List[int], None]]], add_special_tokens: bool = True, padding_strategy: PaddingStrategy = PaddingStrategy.DO_NOT_PAD, truncation_strategy: TruncationStrategy = TruncationStrategy.DO_NOT_TRUNCATE, max_length: Optional[int] = None, stride: int = 0, pad_to_multiple_of: Optional[int] = None, return_tensors: Optional[str] = None, return_token_type_ids: Optional[bool] = None, return_attention_mask: Optional[bool] = None, return_overflowing_tokens: bool = False, return_special_tokens_mask: bool = False, return_length: bool = False, verbose: bool = True, ) -> BatchEncoding: """ Prepares a sequence of input id, or a pair of sequences of inputs ids so that it can be used by the model. It adds special tokens, truncates sequences if overflowing while taking into account the special tokens and manages a moving window (with user defined stride) for overflowing tokens Args: batch_ids_pairs: list of tokenized input ids or input ids pairs """ batch_outputs = {} for first_ids, second_ids in batch_ids_pairs: outputs = self.prepare_for_model( first_ids, second_ids, add_special_tokens=add_special_tokens, padding=PaddingStrategy.DO_NOT_PAD.value, # we pad in batch afterward truncation=truncation_strategy.value, max_length=max_length, stride=stride, pad_to_multiple_of=None, # we pad in batch afterward return_attention_mask=False, # we pad in batch afterward return_token_type_ids=return_token_type_ids, return_overflowing_tokens=return_overflowing_tokens, return_special_tokens_mask=return_special_tokens_mask, return_length=return_length, return_tensors=None, # We convert the whole batch to tensors at the end prepend_batch_axis=False, verbose=verbose, ) for key, value in outputs.items(): if key not in batch_outputs: batch_outputs[key] = [] batch_outputs[key].append(value) batch_outputs = self.pad( batch_outputs, padding=padding_strategy.value, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, return_attention_mask=return_attention_mask, ) batch_outputs = BatchEncoding(batch_outputs, tensor_type=return_tensors) return batch_outputs def prepare_for_tokenization( self, text: str, is_split_into_words: bool = False, **kwargs ) -> Tuple[str, Dict[str, Any]]: """ Performs any necessary transformations before tokenization. This method should pop the arguments from kwargs and return the remaining :obj:`kwargs` as well. 
We test the :obj:`kwargs` at the end of the encoding process to be sure all the arguments have been used. Args: text (:obj:`str`): The text to prepare. is_split_into_words (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not the text has been pretokenized. kwargs: Keyword arguments to use for the tokenization. Returns: :obj:`Tuple[str, Dict[str, Any]]`: The prepared text and the unused kwargs. """ return (text, kwargs) def get_special_tokens_mask( self, token_ids_0: List, token_ids_1: Optional[List] = None, already_has_special_tokens: bool = False ) -> List[int]: """ Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer ``prepare_for_model`` or ``encode_plus`` methods. Args: token_ids_0 (:obj:`List[int]`): List of ids of the first sequence. token_ids_1 (:obj:`List[int]`, `optional`): List of ids of the second sequence. already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not the token list is already formatted with special tokens for the model. Returns: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. """ return [0] * ((len(token_ids_1) if token_ids_1 else 0) + len(token_ids_0)) @overload def convert_ids_to_tokens(self, ids: int, skip_special_tokens: bool = False) -> str: ... @overload def convert_ids_to_tokens(self, ids: List[int], skip_special_tokens: bool = False) -> List[str]: ... def convert_ids_to_tokens( self, ids: Union[int, List[int]], skip_special_tokens: bool = False ) -> Union[str, List[str]]: """ Converts a single index or a sequence of indices in a token or a sequence of tokens, using the vocabulary and added tokens. Args: ids (:obj:`int` or :obj:`List[int]`): The token id (or token ids) to convert to tokens. skip_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to remove special tokens in the decoding. Returns: :obj:`str` or :obj:`List[str]`: The decoded token(s). """ if isinstance(ids, int): if ids in self.added_tokens_decoder: return self.added_tokens_decoder[ids] else: return self._convert_id_to_token(ids) tokens = [] for index in ids: index = int(index) if skip_special_tokens and index in self.all_special_ids: continue if index in self.added_tokens_decoder: tokens.append(self.added_tokens_decoder[index]) else: tokens.append(self._convert_id_to_token(index)) return tokens def _convert_id_to_token(self, index: int) -> str: raise NotImplementedError def convert_tokens_to_string(self, tokens: List[str]) -> str: return " ".join(tokens) def _decode( self, token_ids: List[int], skip_special_tokens: bool = False, clean_up_tokenization_spaces: bool = True, spaces_between_special_tokens: bool = True, **kwargs ) -> str: self._decode_use_source_tokenizer = kwargs.pop("use_source_tokenizer", False) filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens) # To avoid mixing byte-level and unicode for byte-level BPT # we need to build string separately for added tokens and byte-level tokens # cf. 
https://github.com/huggingface/transformers/issues/1133 sub_texts = [] current_sub_text = [] for token in filtered_tokens: if skip_special_tokens and token in self.all_special_ids: continue if token in self.added_tokens_encoder: if current_sub_text: sub_texts.append(self.convert_tokens_to_string(current_sub_text)) current_sub_text = [] sub_texts.append(token) else: current_sub_text.append(token) if current_sub_text: sub_texts.append(self.convert_tokens_to_string(current_sub_text)) if spaces_between_special_tokens: text = " ".join(sub_texts) else: text = "".join(sub_texts) if clean_up_tokenization_spaces: clean_text = self.clean_up_tokenization(text) return clean_text else: return text
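# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of tokenization_utils.py): the slow-tokenizer
# plumbing above -- tokenize(), convert_tokens_to_ids(), convert_ids_to_tokens()
# and _decode() -- is normally exercised through a concrete subclass. Assuming the
# "bert-base-uncased" checkpoint can be downloaded, a minimal round trip looks like:
if __name__ == "__main__":
    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    tokens = tokenizer.tokenize("Hello, how are you?")   # sub-word strings; added tokens are kept whole
    ids = tokenizer.convert_tokens_to_ids(tokens)        # vocabulary ids (added vocab checked first)
    text = tokenizer.decode(ids)                         # rebuilt string, going through _decode() above
    print(tokens)
    print(ids)
    print(text)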
AdaMix/src/transformers/tokenization_utils.py/0
{ "file_path": "AdaMix/src/transformers/tokenization_utils.py", "repo_id": "AdaMix", "token_count": 14321 }
69
<!--- Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. -->

# How to add a new example script in 🤗 Transformers

This folder provides a template for adding a new example script that implements a training or inference task with the models in the 🤗 Transformers library. To use it, you will need to install cookiecutter:

```
pip install cookiecutter
```

or refer to the installation page of the [cookiecutter documentation](https://cookiecutter.readthedocs.io/).

You can then run the following command inside the `examples` folder of the transformers repo:

```
cookiecutter ../templates/adding_a_new_example_script/
```

and answer the questions asked, which will generate a new folder with a pre-filled template for your example, following the best practices we recommend.

Adjust the way the data is preprocessed, the model is loaded, or the Trainer is instantiated. Then, when you're happy, add a `README.md` in the folder (or complete the existing one if you added a script to an existing folder) telling a user how to run your script.

Make a PR to the 🤗 Transformers repo. Don't forget to tweet about your new example with a carbon screenshot of how to run it and tag @huggingface!
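If you prefer to drive the template from Python rather than from the shell, cookiecutter also exposes a programmatic entry point. The sketch below is illustrative only and assumes it is run from inside the `examples` folder; `cookiecutter.main.cookiecutter` prompts for the same questions as the CLI command above:

```python
# Illustrative sketch: programmatic equivalent of the CLI command above.
from cookiecutter.main import cookiecutter

# Prompts interactively for the template's questions, then generates the new folder.
cookiecutter("../templates/adding_a_new_example_script/")
```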
AdaMix/templates/adding_a_new_example_script/README.md/0
{ "file_path": "AdaMix/templates/adding_a_new_example_script/README.md", "repo_id": "AdaMix", "token_count": 442 }
70
{ "modelname": "BrandNewBERT", "uppercase_modelname": "BRAND_NEW_BERT", "lowercase_modelname": "brand_new_bert", "camelcase_modelname": "BrandNewBert", "authors": "The HuggingFace Team", "checkpoint_identifier": "brand-new-bert-base-cased", "tokenizer_type": ["Based on BERT", "Based on BART", "Standalone"], "generate_tensorflow_and_pytorch": ["PyTorch & TensorFlow", "PyTorch", "TensorFlow"], "is_encoder_decoder_model": ["True", "False"] }
AdaMix/templates/adding_a_new_model/cookiecutter.json/0
{ "file_path": "AdaMix/templates/adding_a_new_model/cookiecutter.json", "repo_id": "AdaMix", "token_count": 178 }
71
# coding=utf-8 # Copyright 2020 The HuggingFace Team Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a clone of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import is_torch_available from transformers.testing_utils import require_torch, torch_device from .test_modeling_common import floats_tensor, ids_tensor if is_torch_available(): import torch from transformers.generation_beam_search import BeamHypotheses, BeamSearchScorer class BeamSearchTester: def __init__( self, parent, batch_size=3, sequence_length=10, vocab_size=99, pad_token_id=0, max_length=20, num_beams=4, length_penalty=2.0, do_early_stopping=True, num_beam_hyps_to_keep=2, ): self.parent = parent self.batch_size = batch_size self.sequence_length = sequence_length self.vocab_size = vocab_size self.pad_token_id = pad_token_id self.max_length = max_length self.num_beams = num_beams self.length_penalty = length_penalty self.do_early_stopping = do_early_stopping self.num_beam_hyps_to_keep = num_beam_hyps_to_keep # cannot be randomely generated self.eos_token_id = vocab_size + 1 def prepare_beam_scorer(self, **kwargs): return BeamSearchScorer( batch_size=kwargs.get("batch_size", self.batch_size), max_length=kwargs.get("max_length", self.max_length), num_beams=kwargs.get("num_beams", self.num_beams), device=torch_device, length_penalty=kwargs.get("length_penalty", self.length_penalty), do_early_stopping=kwargs.get("do_early_stopping", self.do_early_stopping), num_beam_hyps_to_keep=kwargs.get("num_beam_hyps_to_keep", self.num_beam_hyps_to_keep), ) def prepare_inputs(self): input_ids = ids_tensor((self.batch_size * self.num_beams, self.sequence_length), self.vocab_size) next_tokens = ids_tensor((self.batch_size, 2 * self.num_beams), self.vocab_size).to(torch_device) next_indices = ids_tensor((self.batch_size, 2 * self.num_beams), self.num_beams).to(torch_device) next_scores, _ = (-floats_tensor((self.batch_size, 2 * self.num_beams)).to(torch_device)).sort(descending=True) return (input_ids, next_tokens, next_indices, next_scores) def check_beam_hypotheses(self, input_ids, *args): # check that correct number of beam hypotheses is set in beam scorer beam_scorer = self.prepare_beam_scorer(do_early_stopping=True) beam_hyp = beam_scorer._beam_hyps[0] self.parent.assertEqual(len(beam_scorer._beam_hyps), self.batch_size) # check correct type self.parent.assertTrue(isinstance(beam_hyp, BeamHypotheses)) # check that num_beams is correctly set self.parent.assertEqual(beam_hyp.num_beams, self.num_beams) # check for early stopping deactivated for beam_idx in range(self.num_beams): beam_hyp.add(input_ids[beam_idx], -10.0) # if early stopping True -> score does not matter self.parent.assertTrue(beam_hyp.is_done(-10.0, 5)) # re-init beam_scorer = self.prepare_beam_scorer(do_early_stopping=False) beam_hyp = beam_scorer._beam_hyps[0] # add `num_beams + 1` beams to change `worst_score` for beam_idx in range(self.num_beams + 1): beam_hyp.add(input_ids[beam_idx], -10.0 + float(beam_idx)) # -10.0 is removed => -9.0 is worst score self.parent.assertAlmostEqual(beam_hyp.worst_score, -9.0 / 
(self.sequence_length ** beam_hyp.length_penalty)) # -5.0 is better than worst score => should not be finished self.parent.assertFalse(beam_hyp.is_done(-5.0, self.sequence_length)) # -20.0 is worse than worst score => should be finished self.parent.assertTrue(beam_hyp.is_done(-20.0, self.sequence_length)) def check_beam_scorer_update(self, input_ids, next_tokens, next_indices, next_scores): # check too many eos tokens beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[0, :] = self.eos_token_id with self.parent.assertRaises(ValueError): beam_scorer.process(input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id) # check all batches are done beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[:, : self.num_beams] = self.eos_token_id beam_scorer.process(input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id) # beam scorer should be done self.parent.assertTrue(beam_scorer.is_done) # check beam_scorer = self.prepare_beam_scorer() tokens = next_tokens.clone() tokens[:, 1] = self.eos_token_id beam_outputs = beam_scorer.process( input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id ) output_scores = beam_outputs["next_beam_scores"] output_tokens = beam_outputs["next_beam_tokens"] output_indices = beam_outputs["next_beam_indices"] def cut_expected_tensor(tensor): return torch.cat([tensor[:, :1], tensor[:, 2 : self.num_beams + 1]], dim=1).flatten() # check all outptus # cut out id of eos token and take best `num_beams` outputs expected_output_tokens = cut_expected_tensor(tokens) expected_output_scores = cut_expected_tensor(next_scores) # add num_beams * batch_idx expected_output_indices = ( cut_expected_tensor(next_indices) + (torch.arange(self.num_beams * self.batch_size, device=torch_device) // self.num_beams) * self.num_beams ) self.parent.assertListEqual(expected_output_tokens.tolist(), output_tokens.tolist()) self.parent.assertListEqual(expected_output_indices.tolist(), output_indices.tolist()) self.parent.assertTrue(torch.allclose(expected_output_scores, output_scores, atol=1e-3)) # make sure ids of eos token are correctly saved in beam_hyps of beam scorer for batch_idx in range(self.batch_size): correct_idx = batch_idx * self.num_beams + next_indices[batch_idx, 1] self.parent.assertListEqual( input_ids[correct_idx].tolist(), beam_scorer._beam_hyps[batch_idx].beams[0][-1].tolist() ) def check_beam_scores_finalize(self, input_ids, next_tokens, next_indices, next_scores): # max_length should be only one more than current input_ids to check that eos is correctly appended max_length = self.sequence_length + 1 beam_scorer = self.prepare_beam_scorer( num_beam_hyps_to_keep=1, max_length=max_length, length_penalty=1.0, do_early_stopping=False ) # update beams and append to input_ids tokens = next_tokens.clone() # first batch, first output has to finish with eos token id since scores are correctly sorted tokens[0, 0] = self.eos_token_id # make sure corresponding score is as good as possible to surely be picked first next_scores[0, 0] = 0.0 beam_outputs = beam_scorer.process( input_ids, next_scores, tokens, next_indices, eos_token_id=self.eos_token_id ) output_scores = beam_outputs["next_beam_scores"] output_tokens = beam_outputs["next_beam_tokens"] output_indices = beam_outputs["next_beam_indices"] input_ids = torch.cat([input_ids[output_indices, :], output_tokens.unsqueeze(-1)], dim=-1) # finalize sequence_output = beam_scorer.finalize( input_ids, output_scores, output_tokens, 
output_indices, pad_token_id=self.pad_token_id, eos_token_id=self.eos_token_id, ) sequences = sequence_output["sequences"] sequence_scores = sequence_output["sequence_scores"] # since `num_beam_hyps_to_keep` = 1 => only return `batch_size` x `max_length` self.parent.assertListEqual(list(sequences.shape), [self.batch_size, max_length]) self.parent.assertListEqual(list(sequence_scores.shape), [self.batch_size]) # check sequence_scores self.parent.assertFalse((sequence_scores > 0).any().item()) # first batch has to finish with eos_token self.parent.assertEqual(sequences[0, -1].item(), self.eos_token_id) # other batches cannot finish with eos token self.parent.assertNotEqual(sequences[1, -1].item(), self.eos_token_id) self.parent.assertNotEqual(sequences[2, -1].item(), self.eos_token_id) # now test that if `num_beam_hyps_to_keep` is 3 => all beams are returned beam_scorer.num_beam_hyps_to_keep = self.num_beams sequence_output = beam_scorer.finalize( input_ids, output_scores, output_tokens, output_indices, pad_token_id=self.pad_token_id, eos_token_id=self.eos_token_id, ) sequences = sequence_output["sequences"] sequence_scores = sequence_output["sequence_scores"] self.parent.assertListEqual(list(sequences.shape), [self.num_beams * self.batch_size, max_length]) self.parent.assertListEqual(list(sequence_scores.shape), [self.num_beams * self.batch_size]) @require_torch class BeamSearchTest(unittest.TestCase): def setUp(self): self.beam_search_tester = BeamSearchTester(self) def test_beam_hypotheses(self): inputs = self.beam_search_tester.prepare_inputs() self.beam_search_tester.check_beam_hypotheses(*inputs) def test_beam_scorer_update(self): inputs = self.beam_search_tester.prepare_inputs() self.beam_search_tester.check_beam_scorer_update(*inputs) def test_beam_scorer_finalize(self): inputs = self.beam_search_tester.prepare_inputs() self.beam_search_tester.check_beam_scores_finalize(*inputs)
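# ---------------------------------------------------------------------------
# Arithmetic sketch for check_beam_hypotheses above (illustrative, not part of the test
# file): BeamHypotheses scores a finished hypothesis as
#     score = sum_logprobs / (length ** length_penalty)
# With sequence_length=10, length_penalty=2.0 and num_beams=4, adding five beams with
# sum log-probs -10, -9, ..., -6 evicts the -10.0 hypothesis, so the worst kept score is
# -9.0 / 10 ** 2 == -0.09, which is exactly what the assertAlmostEqual checks.
sequence_length, length_penalty, num_beams = 10, 2.0, 4

candidate_logprobs = [-10.0 + i for i in range(num_beams + 1)]            # [-10, -9, -8, -7, -6]
kept = sorted(candidate_logprobs, reverse=True)[:num_beams]               # the -10.0 beam is dropped
worst_score = min(s / (sequence_length ** length_penalty) for s in kept)
assert abs(worst_score - (-9.0 / sequence_length ** length_penalty)) < 1e-9   # -0.09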
AdaMix/tests/test_generation_beam_search.py/0
{ "file_path": "AdaMix/tests/test_generation_beam_search.py", "repo_id": "AdaMix", "token_count": 4604 }
72
# coding=utf-8 # Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import is_torch_available from transformers.testing_utils import require_sentencepiece, require_tokenizers, require_torch, slow, torch_device if is_torch_available(): import torch from transformers import AutoModel @require_torch @require_sentencepiece @require_tokenizers class BortIntegrationTest(unittest.TestCase): @slow def test_output_embeds_base_model(self): model = AutoModel.from_pretrained("amazon/bort") model.to(torch_device) input_ids = torch.tensor( [[0, 18077, 4082, 7804, 8606, 6195, 2457, 3321, 11, 10489, 16, 269, 2579, 328, 2]], device=torch_device, dtype=torch.long, ) # Schloß Nymphenburg in Munich is really nice! output = model(input_ids)["last_hidden_state"] expected_shape = torch.Size((1, 15, 1024)) self.assertEqual(output.shape, expected_shape) # compare the actual values for a slice. expected_slice = torch.tensor( [[[-0.0349, 0.0436, -1.8654], [-0.6964, 0.0835, -1.7393], [-0.9819, 0.2956, -0.2868]]], device=torch_device, dtype=torch.float, ) self.assertTrue(torch.allclose(output[:, :3, :3], expected_slice, atol=1e-4))
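# ---------------------------------------------------------------------------
# Illustrative sketch of the pinned-slice pattern used above (an assumption about the
# usual workflow, not code from the repo): the expected values are obtained once by
# running the checkpoint and printing a small slice of its output, then hard-coded
# into the test and compared with torch.allclose under a small tolerance.
if __name__ == "__main__":
    import torch

    torch.manual_seed(0)
    output = torch.randn(1, 15, 1024)            # stand-in for model(input_ids)["last_hidden_state"]
    print(output[:, :3, :3])                     # values one would paste into `expected_slice`
    expected_slice = output[:, :3, :3].clone()   # in the real test this tensor is hand-written
    assert torch.allclose(output[:, :3, :3], expected_slice, atol=1e-4)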
AdaMix/tests/test_modeling_bort.py/0
{ "file_path": "AdaMix/tests/test_modeling_bort.py", "repo_id": "AdaMix", "token_count": 725 }
73
# coding=utf-8 # Copyright 2020 HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import FunnelTokenizer, is_torch_available from transformers.testing_utils import require_sentencepiece, require_tokenizers, require_torch, slow, torch_device from .test_configuration_common import ConfigTester from .test_modeling_common import ModelTesterMixin, ids_tensor if is_torch_available(): import torch from transformers import ( MODEL_FOR_PRETRAINING_MAPPING, FunnelBaseModel, FunnelConfig, FunnelForMaskedLM, FunnelForMultipleChoice, FunnelForPreTraining, FunnelForQuestionAnswering, FunnelForSequenceClassification, FunnelForTokenClassification, FunnelModel, ) class FunnelModelTester: """You can also import this e.g, from .test_modeling_funnel import FunnelModelTester """ def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=True, use_labels=True, vocab_size=99, block_sizes=[1, 1, 2], num_decoder_layers=1, d_model=32, n_head=4, d_head=8, d_inner=37, hidden_act="gelu_new", hidden_dropout=0.1, attention_dropout=0.1, activation_dropout=0.0, max_position_embeddings=512, type_vocab_size=3, num_labels=3, num_choices=4, scope=None, base=False, ): self.parent = parent self.batch_size = batch_size self.seq_length = seq_length self.is_training = is_training self.use_input_mask = use_input_mask self.use_token_type_ids = use_token_type_ids self.use_labels = use_labels self.vocab_size = vocab_size self.block_sizes = block_sizes self.num_decoder_layers = num_decoder_layers self.d_model = d_model self.n_head = n_head self.d_head = d_head self.d_inner = d_inner self.hidden_act = hidden_act self.hidden_dropout = hidden_dropout self.attention_dropout = attention_dropout self.activation_dropout = activation_dropout self.max_position_embeddings = max_position_embeddings self.type_vocab_size = type_vocab_size self.type_sequence_label_size = 2 self.num_labels = num_labels self.num_choices = num_choices self.scope = scope # Used in the tests to check the size of the first attention layer self.num_attention_heads = n_head # Used in the tests to check the size of the first hidden state self.hidden_size = self.d_model # Used in the tests to check the number of output hidden states/attentions self.num_hidden_layers = sum(self.block_sizes) + (0 if base else self.num_decoder_layers) # FunnelModel adds two hidden layers: input embeddings and the sum of the upsampled encoder hidden state with # the last hidden state of the first block (which is the first hidden state of the decoder). 
if not base: self.expected_num_hidden_layers = self.num_hidden_layers + 2 def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) fake_token_labels = ids_tensor([self.batch_size, self.seq_length], 1) config = FunnelConfig( vocab_size=self.vocab_size, block_sizes=self.block_sizes, num_decoder_layers=self.num_decoder_layers, d_model=self.d_model, n_head=self.n_head, d_head=self.d_head, d_inner=self.d_inner, hidden_act=self.hidden_act, hidden_dropout=self.hidden_dropout, attention_dropout=self.attention_dropout, activation_dropout=self.activation_dropout, max_position_embeddings=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, ) return ( config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ) def create_and_check_model( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): model = FunnelModel(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.d_model)) model.config.truncate_seq = False result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.d_model)) model.config.separate_cls = False result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.d_model)) def create_and_check_base_model( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): model = FunnelBaseModel(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids) result = model(input_ids, token_type_ids=token_type_ids) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, 2, self.d_model)) model.config.truncate_seq = False result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, 3, self.d_model)) model.config.separate_cls = False result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, 2, self.d_model)) def create_and_check_for_pretraining( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): config.num_labels = self.num_labels model = FunnelForPreTraining(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, labels=fake_token_labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length)) def create_and_check_for_masked_lm( self, config, input_ids, token_type_ids, 
input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): model = FunnelForMaskedLM(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, labels=token_labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_for_sequence_classification( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): config.num_labels = self.num_labels model = FunnelForSequenceClassification(config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, labels=sequence_labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_labels)) def create_and_check_for_multiple_choice( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): config.num_choices = self.num_choices model = FunnelForMultipleChoice(config=config) model.to(torch_device) model.eval() multiple_choice_inputs_ids = input_ids.unsqueeze(1).expand(-1, self.num_choices, -1).contiguous() multiple_choice_token_type_ids = token_type_ids.unsqueeze(1).expand(-1, self.num_choices, -1).contiguous() multiple_choice_input_mask = input_mask.unsqueeze(1).expand(-1, self.num_choices, -1).contiguous() result = model( multiple_choice_inputs_ids, attention_mask=multiple_choice_input_mask, token_type_ids=multiple_choice_token_type_ids, labels=choice_labels, ) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_choices)) def create_and_check_for_token_classification( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): config.num_labels = self.num_labels model = FunnelForTokenClassification(config=config) model.to(torch_device) model.eval() result = model(input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, labels=token_labels) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.num_labels)) def create_and_check_for_question_answering( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ): model = FunnelForQuestionAnswering(config=config) model.to(torch_device) model.eval() result = model( input_ids, attention_mask=input_mask, token_type_ids=token_type_ids, start_positions=sequence_labels, end_positions=sequence_labels, ) self.parent.assertEqual(result.start_logits.shape, (self.batch_size, self.seq_length)) self.parent.assertEqual(result.end_logits.shape, (self.batch_size, self.seq_length)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, fake_token_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": input_mask} return config, inputs_dict @require_torch class FunnelModelTest(ModelTesterMixin, unittest.TestCase): test_head_masking = False test_pruning = False all_model_classes = ( ( FunnelModel, FunnelForMaskedLM, FunnelForPreTraining, FunnelForQuestionAnswering, FunnelForTokenClassification, ) if is_torch_available() else () ) # special case for ForPreTraining model def _prepare_for_class(self, inputs_dict, model_class, return_labels=False): inputs_dict = 
super()._prepare_for_class(inputs_dict, model_class, return_labels=return_labels) if return_labels: if model_class in MODEL_FOR_PRETRAINING_MAPPING.values(): inputs_dict["labels"] = torch.zeros( (self.model_tester.batch_size, self.model_tester.seq_length), dtype=torch.long, device=torch_device ) return inputs_dict def setUp(self): self.model_tester = FunnelModelTester(self) self.config_tester = ConfigTester(self, config_class=FunnelConfig) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_pretraining(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_pretraining(*config_and_inputs) def test_for_masked_lm(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_masked_lm(*config_and_inputs) def test_for_token_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_token_classification(*config_and_inputs) def test_for_question_answering(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_question_answering(*config_and_inputs) @require_torch class FunnelBaseModelTest(ModelTesterMixin, unittest.TestCase): test_head_masking = False test_pruning = False all_model_classes = ( (FunnelBaseModel, FunnelForMultipleChoice, FunnelForSequenceClassification) if is_torch_available() else () ) def setUp(self): self.model_tester = FunnelModelTester(self, base=True) self.config_tester = ConfigTester(self, config_class=FunnelConfig) def test_config(self): self.config_tester.run_common_tests() def test_base_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_base_model(*config_and_inputs) def test_for_sequence_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_sequence_classification(*config_and_inputs) def test_for_multiple_choice(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_multiple_choice(*config_and_inputs) # overwrite from test_modeling_common def test_training(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True for model_class in self.all_model_classes: if model_class.__name__ == "FunnelBaseModel": continue model = model_class(config) model.to(torch_device) model.train() inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True) loss = model(**inputs).loss loss.backward() @require_torch @require_sentencepiece @require_tokenizers class FunnelModelIntegrationTest(unittest.TestCase): def test_inference_tiny_model(self): batch_size = 13 sequence_length = 7 input_ids = torch.arange(0, batch_size * sequence_length).long().reshape(batch_size, sequence_length) lengths = [0, 1, 2, 3, 4, 5, 6, 4, 1, 3, 5, 0, 1] token_type_ids = torch.tensor([[2] + [0] * a + [1] * (sequence_length - a - 1) for a in lengths]) model = FunnelModel.from_pretrained("sgugger/funnel-random-tiny") output = model(input_ids, token_type_ids=token_type_ids)[0].abs() expected_output_sum = torch.tensor(2344.8352) expected_output_mean = torch.tensor(0.8052) self.assertTrue(torch.allclose(output.sum(), expected_output_sum, atol=1e-4)) 
self.assertTrue(torch.allclose(output.mean(), expected_output_mean, atol=1e-4)) attention_mask = torch.tensor([[1] * 7, [1] * 4 + [0] * 3] * 6 + [[0, 1, 1, 0, 0, 1, 1]]) output = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)[0].abs() expected_output_sum = torch.tensor(2343.8425) expected_output_mean = torch.tensor(0.8049) self.assertTrue(torch.allclose(output.sum(), expected_output_sum, atol=1e-4)) self.assertTrue(torch.allclose(output.mean(), expected_output_mean, atol=1e-4)) @slow def test_inference_model(self): tokenizer = FunnelTokenizer.from_pretrained("huggingface/funnel-small") model = FunnelModel.from_pretrained("huggingface/funnel-small") inputs = tokenizer("Hello! I am the Funnel Transformer model.", return_tensors="pt") output = model(**inputs)[0] expected_output_sum = torch.tensor(235.7246) expected_output_mean = torch.tensor(0.0256) self.assertTrue(torch.allclose(output.sum(), expected_output_sum, atol=1e-4)) self.assertTrue(torch.allclose(output.mean(), expected_output_mean, atol=1e-4))
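# ---------------------------------------------------------------------------
# Arithmetic sketch for the layer-count bookkeeping in FunnelModelTester (illustrative,
# mirroring the comments in its __init__): hidden states are counted per encoder block
# plus decoder layers, and the full FunnelModel reports two extra hidden states (the
# input embeddings and the upsampled encoder state fed to the decoder).
block_sizes, num_decoder_layers = [1, 1, 2], 1

num_hidden_layers_full = sum(block_sizes) + num_decoder_layers   # FunnelModel tester: 5
num_hidden_layers_base = sum(block_sizes)                        # FunnelBaseModel tester: 4
expected_num_hidden_states = num_hidden_layers_full + 2          # 7 hidden-state tensors returned
assert (num_hidden_layers_full, num_hidden_layers_base, expected_num_hidden_states) == (5, 4, 7)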
AdaMix/tests/test_modeling_funnel.py/0
{ "file_path": "AdaMix/tests/test_modeling_funnel.py", "repo_id": "AdaMix", "token_count": 8193 }
74
# coding=utf-8 # Copyright 2020, The RAG Authors and The HuggingFace Inc. team. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import json import os import shutil import tempfile import unittest from unittest.mock import patch import numpy as np from transformers import BartTokenizer, T5Tokenizer from transformers.file_utils import cached_property, is_datasets_available, is_faiss_available, is_torch_available from transformers.models.bert.tokenization_bert import VOCAB_FILES_NAMES as DPR_VOCAB_FILES_NAMES from transformers.models.dpr.tokenization_dpr import DPRQuestionEncoderTokenizer from transformers.models.roberta.tokenization_roberta import VOCAB_FILES_NAMES as BART_VOCAB_FILES_NAMES from transformers.testing_utils import ( require_sentencepiece, require_tokenizers, require_torch, require_torch_non_multi_gpu, slow, torch_device, ) from .test_modeling_bart import BartModelTester from .test_modeling_dpr import DPRModelTester from .test_modeling_t5 import T5ModelTester TOLERANCE = 1e-3 T5_SAMPLE_VOCAB = os.path.join(os.path.dirname(os.path.abspath(__file__)), "fixtures/test_sentencepiece.model") if is_torch_available() and is_datasets_available() and is_faiss_available(): import torch from datasets import Dataset import faiss from transformers import ( AutoConfig, AutoModel, AutoModelForSeq2SeqLM, RagConfig, RagModel, RagRetriever, RagSequenceForGeneration, RagTokenForGeneration, RagTokenizer, ) from transformers.modeling_outputs import BaseModelOutput def _assert_tensors_equal(a, b, atol=1e-12, prefix=""): """If tensors not close, or a and b arent both tensors, raise a nice Assertion error.""" if a is None and b is None: return True try: if torch.allclose(a, b, atol=atol): return True raise except Exception: msg = "{} != {}".format(a, b) if prefix: msg = prefix + ": " + msg raise AssertionError(msg) def require_retrieval(test_case): """ Decorator marking a test that requires a set of dependencies necessary for pefrorm retrieval with :class:`~transformers.RagRetriever`. These tests are skipped when respective libraries are not installed. 
""" if not (is_torch_available() and is_datasets_available() and is_faiss_available()): test_case = unittest.skip("test requires PyTorch, datasets and faiss")(test_case) return test_case @require_torch @require_retrieval @require_sentencepiece class RagTestMixin: all_model_classes = ( (RagModel, RagTokenForGeneration, RagSequenceForGeneration) if is_torch_available() and is_datasets_available() and is_faiss_available() else () ) retrieval_vector_size = 32 n_docs = 3 max_combined_length = 16 def setUp(self): self.tmpdirname = tempfile.mkdtemp() # DPR tok vocab_tokens = [ "[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]", "want", "##want", "##ed", "wa", "un", "runn", "##ing", ",", "low", "lowest", ] dpr_tokenizer_path = os.path.join(self.tmpdirname, "dpr_tokenizer") os.makedirs(dpr_tokenizer_path, exist_ok=True) self.vocab_file = os.path.join(dpr_tokenizer_path, DPR_VOCAB_FILES_NAMES["vocab_file"]) with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer: vocab_writer.write("".join([x + "\n" for x in vocab_tokens])) # BART tok vocab = [ "l", "o", "w", "e", "r", "s", "t", "i", "d", "n", "\u0120", "\u0120l", "\u0120n", "\u0120lo", "\u0120low", "er", "\u0120lowest", "\u0120newer", "\u0120wider", "<unk>", ] vocab_tokens = dict(zip(vocab, range(len(vocab)))) merges = ["#version: 0.2", "\u0120 l", "\u0120l o", "\u0120lo w", "e r", ""] self.special_tokens_map = {"unk_token": "<unk>"} bart_tokenizer_path = os.path.join(self.tmpdirname, "bart_tokenizer") os.makedirs(bart_tokenizer_path, exist_ok=True) self.vocab_file = os.path.join(bart_tokenizer_path, BART_VOCAB_FILES_NAMES["vocab_file"]) self.merges_file = os.path.join(bart_tokenizer_path, BART_VOCAB_FILES_NAMES["merges_file"]) with open(self.vocab_file, "w", encoding="utf-8") as fp: fp.write(json.dumps(vocab_tokens) + "\n") with open(self.merges_file, "w", encoding="utf-8") as fp: fp.write("\n".join(merges)) t5_tokenizer = T5Tokenizer(T5_SAMPLE_VOCAB) t5_tokenizer_path = os.path.join(self.tmpdirname, "t5_tokenizer") t5_tokenizer.save_pretrained(t5_tokenizer_path) @cached_property def dpr_tokenizer(self) -> DPRQuestionEncoderTokenizer: return DPRQuestionEncoderTokenizer.from_pretrained(os.path.join(self.tmpdirname, "dpr_tokenizer")) @cached_property def bart_tokenizer(self) -> BartTokenizer: return BartTokenizer.from_pretrained(os.path.join(self.tmpdirname, "bart_tokenizer")) @cached_property def t5_tokenizer(self) -> BartTokenizer: return T5Tokenizer.from_pretrained(os.path.join(self.tmpdirname, "t5_tokenizer")) def tearDown(self): shutil.rmtree(self.tmpdirname) def get_retriever(self, config): dataset = Dataset.from_dict( { "id": ["0", "1", "3"], "text": ["foo", "bar", "qux"], "title": ["Foo", "Bar", "Qux"], "embeddings": [ np.ones(self.retrieval_vector_size), 2 * np.ones(self.retrieval_vector_size), 3 * np.ones(self.retrieval_vector_size), ], } ) dataset.add_faiss_index("embeddings", string_factory="Flat", metric_type=faiss.METRIC_INNER_PRODUCT) tokenizer = self.bart_tokenizer if config.generator.model_type == "bart" else self.t5_tokenizer with patch("transformers.models.rag.retrieval_rag.load_dataset") as mock_load_dataset: mock_load_dataset.return_value = dataset retriever = RagRetriever( config, question_encoder_tokenizer=self.dpr_tokenizer, generator_tokenizer=tokenizer, ) return retriever def check_model_with_retriever( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) for model_class in 
self.all_model_classes: model = model_class(config, retriever=self.get_retriever(config)).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) outputs = model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) # logits self.assertEqual( outputs.logits.shape, (self.n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (self.n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], self.n_docs)) def check_model_generate_from_context_input_ids( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.cpu().detach().to(torch.float32).numpy(), prefix=config.generator.prefix, return_tensors="pt", ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) # cast retrieved_doc_embeds = retrieved_doc_embeds.to(question_hidden_states) context_input_ids = context_input_ids.to(input_ids) context_attention_mask = context_attention_mask.to(input_ids) # compute doc_scores doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), retrieved_doc_embeds.transpose(1, 2)).squeeze( 1 ) outputs = model.generate( context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, do_deduplication=True, ) self.assertIsNotNone(outputs) def check_model_generate( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) for model_class in self.all_model_classes[1:]: model = model_class(config, retriever=self.get_retriever(config)).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) outputs = model.generate( input_ids=input_ids, num_beams=2, num_return_sequences=2, decoder_start_token_id=config.generator.eos_token_id, ) self.assertIsNotNone(outputs) def check_model_without_retriever( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.cpu().detach().to(torch.float32).numpy(), prefix=config.generator.prefix, return_tensors="pt", ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) # cast retrieved_doc_embeds = retrieved_doc_embeds.to(question_hidden_states) 
context_input_ids = context_input_ids.to(input_ids) context_attention_mask = context_attention_mask.to(input_ids) # compute doc_scores doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), retrieved_doc_embeds.transpose(1, 2)).squeeze( 1 ) outputs = model( context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) # logits self.assertEqual( outputs.logits.shape, (self.n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (self.n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], self.n_docs)) def check_model_custom_n_docs( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, n_docs, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.cpu().detach().to(torch.float32).numpy(), prefix=config.generator.prefix, return_tensors="pt", n_docs=n_docs, ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], out["retrieved_doc_embeds"], ) # cast retrieved_doc_embeds = retrieved_doc_embeds.to(question_hidden_states) context_input_ids = context_input_ids.to(input_ids) context_attention_mask = context_attention_mask.to(input_ids) # compute doc_scores doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), retrieved_doc_embeds.transpose(1, 2)).squeeze( 1 ) outputs = model( context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, n_docs=n_docs, ) # logits self.assertEqual( outputs.logits.shape, (n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], n_docs)) def check_model_with_mismatch_n_docs_value( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, retriever_n_docs, generator_n_docs, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) retriever = self.get_retriever(config) for model_class in self.all_model_classes: model = model_class(config).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) question_hidden_states = model.question_encoder(input_ids, attention_mask=attention_mask)[0] out = retriever( input_ids, question_hidden_states.cpu().detach().to(torch.float32).numpy(), prefix=config.generator.prefix, return_tensors="pt", n_docs=retriever_n_docs, ) context_input_ids, context_attention_mask, retrieved_doc_embeds = ( out["context_input_ids"], out["context_attention_mask"], 
out["retrieved_doc_embeds"], ) # cast retrieved_doc_embeds = retrieved_doc_embeds.to(question_hidden_states) context_input_ids = context_input_ids.to(input_ids) context_attention_mask = context_attention_mask.to(input_ids) # compute doc_scores doc_scores = torch.bmm(question_hidden_states.unsqueeze(1), retrieved_doc_embeds.transpose(1, 2)).squeeze( 1 ) self.assertRaises( AssertionError, model.__call__, context_input_ids=context_input_ids, context_attention_mask=context_attention_mask, doc_scores=doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, n_docs=generator_n_docs, ) def check_model_with_encoder_outputs( self, config, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, **kwargs ): self.assertIsNotNone(config.question_encoder) self.assertIsNotNone(config.generator) for model_class in self.all_model_classes: model = model_class(config, retriever=self.get_retriever(config)).to(torch_device) model.eval() self.assertTrue(model.config.is_encoder_decoder) outputs = model( input_ids=input_ids, attention_mask=attention_mask, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) encoder_outputs = BaseModelOutput(outputs.generator_enc_last_hidden_state) # run only generator outputs = model( encoder_outputs=encoder_outputs, doc_scores=outputs.doc_scores, decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, ) # logits self.assertEqual( outputs.logits.shape, (self.n_docs * decoder_input_ids.shape[0], decoder_input_ids.shape[1], config.generator.vocab_size), ) # generator encoder last hidden states self.assertEqual( outputs.generator_enc_last_hidden_state.shape, (self.n_docs * decoder_input_ids.shape[0], self.max_combined_length, config.generator.hidden_size), ) # doc scores self.assertEqual(outputs.doc_scores.shape, (input_ids.shape[0], self.n_docs)) def test_model_with_retriever(self): inputs_dict = self.config_and_inputs self.check_model_with_retriever(**inputs_dict) def test_model_without_retriever(self): inputs_dict = self.config_and_inputs self.check_model_without_retriever(**inputs_dict) def test_model_with_encoder_outputs(self): inputs_dict = self.config_and_inputs self.check_model_with_encoder_outputs(**inputs_dict) def test_model_generate(self): inputs_dict = self.config_and_inputs self.check_model_generate(**inputs_dict) def test_model_with_custom_n_docs(self): inputs_dict = self.config_and_inputs inputs_dict["n_docs"] = 1 self.check_model_custom_n_docs(**inputs_dict) def test_model_with_mismatch_n_docs_value(self): inputs_dict = self.config_and_inputs inputs_dict["retriever_n_docs"] = 3 inputs_dict["generator_n_docs"] = 2 self.check_model_with_mismatch_n_docs_value(**inputs_dict) @require_torch @require_retrieval class RagDPRBartTest(RagTestMixin, unittest.TestCase): @cached_property def config_and_inputs(self): question_encoder_tester = DPRModelTester(self) dpr_config_and_inputs = question_encoder_tester.prepare_config_and_inputs() generator_tester = BartModelTester(self) bart_config_and_inputs = generator_tester.prepare_config_and_inputs_for_common() (question_encoder_config, input_ids, _, input_mask, _, _, _) = dpr_config_and_inputs (generator_config, bart_inputs_dict) = bart_config_and_inputs decoder_input_ids, decoder_attention_mask = bart_inputs_dict["input_ids"], bart_inputs_dict["attention_mask"] config = RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, n_docs=self.n_docs, 
retrieval_vector_size=self.retrieval_vector_size, max_combined_length=self.max_combined_length, ) return { "config": config, "input_ids": input_ids, "attention_mask": input_mask, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask, } @require_torch @require_retrieval class RagDPRT5Test(RagTestMixin, unittest.TestCase): @cached_property def config_and_inputs(self): question_encoder_tester = DPRModelTester(self) dpr_config_and_inputs = question_encoder_tester.prepare_config_and_inputs() generator_tester = T5ModelTester(self, vocab_size=1100) t5_config_and_inputs = generator_tester.prepare_config_and_inputs() (question_encoder_config, input_ids, _, input_mask, _, _, _) = dpr_config_and_inputs (generator_config, _, decoder_input_ids, _, decoder_attention_mask, _) = t5_config_and_inputs config = RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, n_docs=self.n_docs, retrieval_vector_size=self.retrieval_vector_size, max_combined_length=self.max_combined_length, ) return { "config": config, "input_ids": input_ids, "attention_mask": input_mask, "decoder_input_ids": decoder_input_ids, "decoder_attention_mask": decoder_attention_mask, } @require_torch @require_retrieval @require_sentencepiece @require_tokenizers @require_torch_non_multi_gpu class RagModelIntegrationTests(unittest.TestCase): @cached_property def sequence_model(self): return ( RagSequenceForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn" ) .to(torch_device) .eval() ) @cached_property def token_model(self): return ( RagTokenForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn" ) .to(torch_device) .eval() ) def get_rag_config(self): question_encoder_config = AutoConfig.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator_config = AutoConfig.from_pretrained("facebook/bart-large-cnn") return RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, bos_token_id=0, decoder_start_token_id=2, eos_token_id=2, is_encoder_decoder=True, pad_token_id=1, vocab_size=50264, title_sep=" / ", doc_sep=" // ", n_docs=5, max_combined_length=300, dataset="wiki_dpr", dataset_split="train", index_name="exact", index_path=None, use_dummy_dataset=True, retrieval_vector_size=768, retrieval_batch_size=8, ) @slow def test_rag_sequence_inference(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_sequence = self.sequence_model rag_sequence.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="pt" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="pt").input_ids input_ids = input_ids.to(torch_device) decoder_input_ids = decoder_input_ids.to(torch_device) with torch.no_grad(): output = rag_sequence( input_ids, labels=decoder_input_ids, ) expected_shape = torch.Size([5, 5, 50264]) self.assertEqual(output.logits.shape, expected_shape) expected_doc_scores = torch.tensor([[75.0286, 74.4998, 74.0804, 74.0306, 
73.9504]]).to(torch_device) _assert_tensors_equal(expected_doc_scores, output.doc_scores, atol=TOLERANCE) expected_loss = torch.tensor([36.7368]).to(torch_device) _assert_tensors_equal(expected_loss, output.loss, atol=TOLERANCE) @slow def test_rag_token_inference(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_token = self.token_model rag_token.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="pt" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="pt").input_ids input_ids = input_ids.to(torch_device) decoder_input_ids = decoder_input_ids.to(torch_device) with torch.no_grad(): output = rag_token( input_ids, labels=decoder_input_ids, ) expected_shape = torch.Size([5, 5, 50264]) self.assertEqual(output.logits.shape, expected_shape) expected_doc_scores = torch.tensor([[75.0286, 74.4998, 74.0804, 74.0306, 73.9504]]).to(torch_device) _assert_tensors_equal(expected_doc_scores, output.doc_scores, atol=TOLERANCE) expected_loss = torch.tensor([36.3557]).to(torch_device) _assert_tensors_equal(expected_loss, output.loss, atol=TOLERANCE) @slow def test_rag_token_generate_beam(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_token = self.token_model rag_token.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="pt" ).input_ids input_ids = input_ids.to(torch_device) output_ids = rag_token.generate( input_ids, decoder_start_token_id=rag_token.generator.config.decoder_start_token_id, num_beams=2, num_return_sequences=2, ) # sequence generate test output_text_1 = rag_decoder_tokenizer.decode(output_ids[0], skip_special_tokens=True) output_text_2 = rag_decoder_tokenizer.decode(output_ids[1], skip_special_tokens=True) # Expected outputs as given by model at integration time. 
EXPECTED_OUTPUT_TEXT_1 = "\"She's My Kind of Girl" EXPECTED_OUTPUT_TEXT_2 = "\"She's My Kind of Love" self.assertEqual(output_text_1, EXPECTED_OUTPUT_TEXT_1) self.assertEqual(output_text_2, EXPECTED_OUTPUT_TEXT_2) @slow def test_rag_sequence_generate_beam(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) rag_sequence = self.sequence_model rag_sequence.set_retriever(rag_retriever) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="pt" ).input_ids input_ids = input_ids.to(torch_device) output_ids = rag_sequence.generate( input_ids, decoder_start_token_id=rag_sequence.generator.config.decoder_start_token_id, num_beams=2, num_return_sequences=2, ) # sequence generate test output_text_1 = rag_decoder_tokenizer.decode(output_ids[0], skip_special_tokens=True) output_text_2 = rag_decoder_tokenizer.decode(output_ids[1], skip_special_tokens=True) # Expected outputs as given by model at integration time. EXPECTED_OUTPUT_TEXT_1 = """\"She's My Kind of Girl\" was released through Epic Records in Japan in March 1972, giving the duo a Top 10 hit. Two more singles were released in Japan, \"En Carousel\" and \"Love Has Its Ways\" Ulvaeus and Andersson persevered with their songwriting and experimented with new sounds and vocal arrangements.""" EXPECTED_OUTPUT_TEXT_2 = """In September 2018, Björn Ulvaeus revealed that the two new songs, \"I Still Have Faith In You\" and \"Don't Shut Me Down\", would be released no earlier than March 2019. 
The two new tracks will feature in a TV special set to air later in the year.""" self.assertEqual(output_text_1, EXPECTED_OUTPUT_TEXT_1) self.assertEqual(output_text_2, EXPECTED_OUTPUT_TEXT_2) @property def test_data_questions(self): return [ "who got the first nobel prize in physics", "when is the next deadpool movie being released", "which mode is used for short wave broadcast service", "who is the owner of reading football club", "when is the next scandal episode coming out", "when is the last time the philadelphia won the superbowl", "what is the most current adobe flash player version", "how many episodes are there in dragon ball z", ] @slow def test_rag_sequence_generate_batch(self): tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained( "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True ) rag_sequence = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever).to( torch_device ) input_dict = tokenizer( self.test_data_questions, return_tensors="pt", padding=True, truncation=True, ) input_ids = input_dict.input_ids.to(torch_device) attention_mask = input_dict.attention_mask.to(torch_device) output_ids = rag_sequence.generate( input_ids, attention_mask=attention_mask, ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) EXPECTED_OUTPUTS = [ " albert einstein", " june 22, 2018", " amplitude modulation", " tim besley ( chairman )", " june 20, 2018", " 1980", " 7.0", " 8", ] self.assertListEqual(outputs, EXPECTED_OUTPUTS) @slow def test_rag_sequence_generate_batch_from_context_input_ids(self): tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained( "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True ) rag_sequence = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever).to( torch_device ) input_dict = tokenizer( self.test_data_questions, return_tensors="pt", padding=True, truncation=True, ) input_ids = input_dict.input_ids.to(torch_device) attention_mask = input_dict.attention_mask.to(torch_device) question_hidden_states = rag_sequence.question_encoder(input_ids, attention_mask=attention_mask)[0] docs_dict = retriever( input_ids.cpu().detach().numpy(), question_hidden_states.cpu().detach().numpy(), return_tensors="pt" ) doc_scores = torch.bmm( question_hidden_states.unsqueeze(1), docs_dict["retrieved_doc_embeds"].to(torch_device).float().transpose(1, 2), ).squeeze(1) output_ids = rag_sequence.generate( context_input_ids=docs_dict["context_input_ids"].to(torch_device), context_attention_mask=docs_dict["context_attention_mask"].to(torch_device), doc_scores=doc_scores.to(torch_device), do_deduplication=True, ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) EXPECTED_OUTPUTS = [ " albert einstein", " june 22, 2018", " amplitude modulation", " tim besley ( chairman )", " june 20, 2018", " 1980", " 7.0", " 8", ] self.assertListEqual(outputs, EXPECTED_OUTPUTS) @slow def test_rag_token_generate_batch(self): tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True) rag_token = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever).to( torch_device ) input_dict = tokenizer( self.test_data_questions, return_tensors="pt", padding=True, truncation=True, ) input_ids = 
input_dict.input_ids.to(torch_device) attention_mask = input_dict.attention_mask.to(torch_device) output_ids = rag_token.generate( input_ids, attention_mask=attention_mask, ) outputs = tokenizer.batch_decode(output_ids, skip_special_tokens=True) EXPECTED_OUTPUTS = [ " albert einstein", " september 22, 2017", " amplitude modulation", " stefan persson", " april 20, 2018", " the 1970s", " 7.1. 2", " 13", ] self.assertListEqual(outputs, EXPECTED_OUTPUTS) @require_torch @require_retrieval class RagModelSaveLoadTests(unittest.TestCase): def get_rag_config(self): question_encoder_config = AutoConfig.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator_config = AutoConfig.from_pretrained("facebook/bart-large-cnn") return RagConfig.from_question_encoder_generator_configs( question_encoder_config, generator_config, bos_token_id=0, decoder_start_token_id=2, eos_token_id=2, is_encoder_decoder=True, pad_token_id=1, vocab_size=50264, title_sep=" / ", doc_sep=" // ", n_docs=5, max_combined_length=300, dataset="wiki_dpr", dataset_split="train", index_name="exact", index_path=None, use_dummy_dataset=True, retrieval_vector_size=768, retrieval_batch_size=8, ) @slow def test_rag_sequence_from_pretrained(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", return_tensors="pt" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="pt").input_ids input_ids = input_ids.to(torch_device) decoder_input_ids = decoder_input_ids.to(torch_device) with tempfile.TemporaryDirectory() as tmp_dirname: rag_sequence = RagSequenceForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn", retriever=rag_retriever, config=rag_config, ).to(torch_device) # check that the from pretrained methods work rag_sequence.save_pretrained(tmp_dirname) rag_sequence.from_pretrained(tmp_dirname, retriever=rag_retriever) rag_sequence.to(torch_device) with torch.no_grad(): output = rag_sequence( input_ids, labels=decoder_input_ids, ) loss_pretrained = output.loss del rag_sequence question_encoder = AutoModel.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn") rag_sequence = RagSequenceForGeneration( config=rag_config, question_encoder=question_encoder, generator=generator, retriever=rag_retriever ) rag_sequence.to(torch_device) with torch.no_grad(): output = rag_sequence( input_ids, labels=decoder_input_ids, ) loss_init = output.loss self.assertAlmostEqual(loss_pretrained.item(), loss_init.item(), places=4) @slow def test_rag_token_from_pretrained(self): rag_config = self.get_rag_config() rag_decoder_tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn") rag_question_encoder_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained( "facebook/dpr-question_encoder-single-nq-base" ) rag_retriever = RagRetriever( rag_config, question_encoder_tokenizer=rag_question_encoder_tokenizer, generator_tokenizer=rag_decoder_tokenizer, ) input_ids = rag_question_encoder_tokenizer( "who sings does he love me with reba", 
return_tensors="pt" ).input_ids decoder_input_ids = rag_decoder_tokenizer("Linda Davis", return_tensors="pt").input_ids input_ids = input_ids.to(torch_device) decoder_input_ids = decoder_input_ids.to(torch_device) with tempfile.TemporaryDirectory() as tmp_dirname: rag_token = RagTokenForGeneration.from_pretrained_question_encoder_generator( "facebook/dpr-question_encoder-single-nq-base", "facebook/bart-large-cnn", retriever=rag_retriever, config=rag_config, ).to(torch_device) # check that the from pretrained methods work rag_token.save_pretrained(tmp_dirname) rag_token.from_pretrained(tmp_dirname, retriever=rag_retriever) rag_token.to(torch_device) with torch.no_grad(): output = rag_token( input_ids, labels=decoder_input_ids, ) loss_pretrained = output.loss del rag_token question_encoder = AutoModel.from_pretrained("facebook/dpr-question_encoder-single-nq-base") generator = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn") rag_token = RagTokenForGeneration( config=rag_config, question_encoder=question_encoder, generator=generator, retriever=rag_retriever ) rag_token.to(torch_device) with torch.no_grad(): output = rag_token( input_ids, labels=decoder_input_ids, ) loss_init = output.loss self.assertAlmostEqual(loss_pretrained.item(), loss_init.item(), places=4)
AdaMix/tests/test_modeling_rag.py/0
{ "file_path": "AdaMix/tests/test_modeling_rag.py", "repo_id": "AdaMix", "token_count": 20360 }
75
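A quick illustration of the `doc_scores` computation that the RAG tests above exercise repeatedly: a batched matrix product between the question-encoder output and the retrieved document embeddings. This is a minimal, self-contained sketch with random stand-in tensors (n_docs=5 and retrieval_vector_size=768 mirror the integration-test config; the batch size is arbitrary), not part of the test suite itself.

import torch

batch_size, n_docs, retrieval_vector_size = 2, 5, 768

# stand-ins for question_encoder(input_ids)[0] and retriever(...)["retrieved_doc_embeds"]
question_hidden_states = torch.randn(batch_size, retrieval_vector_size)
retrieved_doc_embeds = torch.randn(batch_size, n_docs, retrieval_vector_size)

# (batch, 1, dim) @ (batch, dim, n_docs) -> (batch, 1, n_docs) -> (batch, n_docs)
doc_scores = torch.bmm(
    question_hidden_states.unsqueeze(1), retrieved_doc_embeds.transpose(1, 2)
).squeeze(1)

assert doc_scores.shape == (batch_size, n_docs)  # same shape the tests check for outputs.doc_scores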
# coding=utf-8 # Copyright 2021 The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import tempfile import unittest from transformers import ConvBertConfig, is_tf_available from transformers.testing_utils import require_tf, slow from .test_configuration_common import ConfigTester from .test_modeling_tf_common import TFModelTesterMixin, ids_tensor if is_tf_available(): import tensorflow as tf from transformers import ( TFConvBertForMaskedLM, TFConvBertForMultipleChoice, TFConvBertForQuestionAnswering, TFConvBertForSequenceClassification, TFConvBertForTokenClassification, TFConvBertModel, ) class TFConvBertModelTester: def __init__( self, parent, batch_size=13, seq_length=7, is_training=True, use_input_mask=True, use_token_type_ids=True, use_labels=True, vocab_size=99, hidden_size=32, num_hidden_layers=5, num_attention_heads=4, intermediate_size=37, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=16, type_sequence_label_size=2, initializer_range=0.02, num_labels=3, num_choices=4, scope=None, ): self.parent = parent self.batch_size = 13 self.seq_length = 7 self.is_training = True self.use_input_mask = True self.use_token_type_ids = True self.use_labels = True self.vocab_size = 99 self.hidden_size = 384 self.num_hidden_layers = 5 self.num_attention_heads = 4 self.intermediate_size = 37 self.hidden_act = "gelu" self.hidden_dropout_prob = 0.1 self.attention_probs_dropout_prob = 0.1 self.max_position_embeddings = 512 self.type_vocab_size = 16 self.type_sequence_label_size = 2 self.initializer_range = 0.02 self.num_labels = 3 self.num_choices = 4 self.embedding_size = 128 self.head_ratio = 2 self.conv_kernel_size = 9 self.num_groups = 1 self.scope = None def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = ConvBertConfig( vocab_size=self.vocab_size, hidden_size=self.hidden_size, num_hidden_layers=self.num_hidden_layers, num_attention_heads=self.num_attention_heads, intermediate_size=self.intermediate_size, hidden_act=self.hidden_act, hidden_dropout_prob=self.hidden_dropout_prob, attention_probs_dropout_prob=self.attention_probs_dropout_prob, max_position_embeddings=self.max_position_embeddings, type_vocab_size=self.type_vocab_size, initializer_range=self.initializer_range, return_dict=True, ) return config, input_ids, token_type_ids, input_mask, sequence_labels, 
token_labels, choice_labels def create_and_check_model( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = TFConvBertModel(config=config) inputs = {"input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids} inputs = [input_ids, input_mask] result = model(inputs) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_for_masked_lm( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = TFConvBertForMaskedLM(config=config) inputs = { "input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids, } result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_for_sequence_classification( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): config.num_labels = self.num_labels model = TFConvBertForSequenceClassification(config=config) inputs = { "input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids, } result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_labels)) def create_and_check_for_multiple_choice( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): config.num_choices = self.num_choices model = TFConvBertForMultipleChoice(config=config) multiple_choice_inputs_ids = tf.tile(tf.expand_dims(input_ids, 1), (1, self.num_choices, 1)) multiple_choice_input_mask = tf.tile(tf.expand_dims(input_mask, 1), (1, self.num_choices, 1)) multiple_choice_token_type_ids = tf.tile(tf.expand_dims(token_type_ids, 1), (1, self.num_choices, 1)) inputs = { "input_ids": multiple_choice_inputs_ids, "attention_mask": multiple_choice_input_mask, "token_type_ids": multiple_choice_token_type_ids, } result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_choices)) def create_and_check_for_token_classification( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): config.num_labels = self.num_labels model = TFConvBertForTokenClassification(config=config) inputs = { "input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids, } result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.num_labels)) def create_and_check_for_question_answering( self, config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels ): model = TFConvBertForQuestionAnswering(config=config) inputs = { "input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids, } result = model(inputs) self.parent.assertEqual(result.start_logits.shape, (self.batch_size, self.seq_length)) self.parent.assertEqual(result.end_logits.shape, (self.batch_size, self.seq_length)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, token_type_ids, input_mask, sequence_labels, token_labels, choice_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": input_mask} return config, inputs_dict @require_tf class TFConvBertModelTest(TFModelTesterMixin, unittest.TestCase): all_model_classes = ( ( TFConvBertModel, 
TFConvBertForMaskedLM, TFConvBertForQuestionAnswering, TFConvBertForSequenceClassification, TFConvBertForTokenClassification, TFConvBertForMultipleChoice, ) if is_tf_available() else () ) test_pruning = False test_head_masking = False test_onnx = False def setUp(self): self.model_tester = TFConvBertModelTester(self) self.config_tester = ConfigTester(self, config_class=ConvBertConfig, hidden_size=37) def test_config(self): self.config_tester.run_common_tests() def test_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_model(*config_and_inputs) def test_for_masked_lm(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_masked_lm(*config_and_inputs) def test_for_multiple_choice(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_multiple_choice(*config_and_inputs) def test_for_question_answering(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_question_answering(*config_and_inputs) def test_for_sequence_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_sequence_classification(*config_and_inputs) def test_for_token_classification(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_for_token_classification(*config_and_inputs) @slow def test_saved_model_creation_extended(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.output_hidden_states = True config.output_attentions = True if hasattr(config, "use_cache"): config.use_cache = True encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", self.model_tester.seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) for model_class in self.all_model_classes: class_inputs_dict = self._prepare_for_class(inputs_dict, model_class) model = model_class(config) num_out = len(model(class_inputs_dict)) with tempfile.TemporaryDirectory() as tmpdirname: model.save_pretrained(tmpdirname, saved_model=True) saved_model_dir = os.path.join(tmpdirname, "saved_model", "1") model = tf.keras.models.load_model(saved_model_dir) outputs = model(class_inputs_dict) if self.is_encoder_decoder: output_hidden_states = outputs["encoder_hidden_states"] output_attentions = outputs["encoder_attentions"] else: output_hidden_states = outputs["hidden_states"] output_attentions = outputs["attentions"] self.assertEqual(len(outputs), num_out) expected_num_layers = getattr( self.model_tester, "expected_num_hidden_layers", self.model_tester.num_hidden_layers + 1 ) self.assertEqual(len(output_hidden_states), expected_num_layers) self.assertListEqual( list(output_hidden_states[0].shape[-2:]), [self.model_tester.seq_length, self.model_tester.hidden_size], ) self.assertEqual(len(output_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(output_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads / 2, encoder_seq_length, encoder_key_length], ) @slow def test_model_from_pretrained(self): model = TFConvBertModel.from_pretrained("YituTech/conv-bert-base") self.assertIsNotNone(model) def test_attention_outputs(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config.return_dict = True decoder_seq_length = getattr(self.model_tester, "decoder_seq_length", 
self.model_tester.seq_length) encoder_seq_length = getattr(self.model_tester, "encoder_seq_length", self.model_tester.seq_length) decoder_key_length = getattr(self.model_tester, "key_length", decoder_seq_length) encoder_key_length = getattr(self.model_tester, "key_length", encoder_seq_length) def check_decoder_attentions_output(outputs): out_len = len(outputs) self.assertEqual(out_len % 2, 0) decoder_attentions = outputs.decoder_attentions self.assertEqual(len(decoder_attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(decoder_attentions[0].shape[-3:]), [self.model_tester.num_attention_heads / 2, decoder_seq_length, decoder_key_length], ) def check_encoder_attentions_output(outputs): attentions = [ t.numpy() for t in (outputs.encoder_attentions if config.is_encoder_decoder else outputs.attentions) ] self.assertEqual(len(attentions), self.model_tester.num_hidden_layers) self.assertListEqual( list(attentions[0].shape[-3:]), [self.model_tester.num_attention_heads / 2, encoder_seq_length, encoder_key_length], ) for model_class in self.all_model_classes: inputs_dict["output_attentions"] = True inputs_dict["use_cache"] = False config.output_hidden_states = False model = model_class(config) outputs = model(self._prepare_for_class(inputs_dict, model_class)) out_len = len(outputs) self.assertEqual(config.output_hidden_states, False) check_encoder_attentions_output(outputs) if self.is_encoder_decoder: model = model_class(config) outputs = model(self._prepare_for_class(inputs_dict, model_class)) self.assertEqual(config.output_hidden_states, False) check_decoder_attentions_output(outputs) # Check that output attentions can also be changed via the config del inputs_dict["output_attentions"] config.output_attentions = True model = model_class(config) outputs = model(self._prepare_for_class(inputs_dict, model_class)) self.assertEqual(config.output_hidden_states, False) check_encoder_attentions_output(outputs) # Check attention is always last and order is fine inputs_dict["output_attentions"] = True config.output_hidden_states = True model = model_class(config) outputs = model(self._prepare_for_class(inputs_dict, model_class)) self.assertEqual(out_len + (2 if self.is_encoder_decoder else 1), len(outputs)) self.assertEqual(model.config.output_hidden_states, True) check_encoder_attentions_output(outputs) @require_tf class TFConvBertModelIntegrationTest(unittest.TestCase): @slow def test_inference_masked_lm(self): model = TFConvBertModel.from_pretrained("YituTech/conv-bert-base") input_ids = tf.constant([[0, 1, 2, 3, 4, 5]]) output = model(input_ids)[0] expected_shape = [1, 6, 768] self.assertEqual(output.shape, expected_shape) expected_slice = tf.constant( [ [ [-0.03475493, -0.4686034, -0.30638832], [0.22637248, -0.26988646, -0.7423424], [0.10324868, -0.45013508, -0.58280784], ] ] ) tf.debugging.assert_near(output[:, :3, :3], expected_slice, atol=1e-4)
AdaMix/tests/test_modeling_tf_convbert.py/0
{ "file_path": "AdaMix/tests/test_modeling_tf_convbert.py", "repo_id": "AdaMix", "token_count": 7768 }
76
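The TF model testers above (ConvBert here, OpenAI GPT below) build their dummy inputs with the `ids_tensor` helper, i.e. uniform random integer ids of a given shape. A minimal sketch of that pattern in plain TensorFlow, with the tester's default sizes used purely for illustration:

import tensorflow as tf

batch_size, seq_length, vocab_size = 13, 7, 99

# rough stand-in for ids_tensor([batch_size, seq_length], vocab_size):
# uniform random integer ids in [0, vocab_size)
input_ids = tf.random.uniform((batch_size, seq_length), maxval=vocab_size, dtype=tf.int32)

# an attention mask is the same trick with "vocab_size" 2, i.e. random 0/1 values
attention_mask = tf.random.uniform((batch_size, seq_length), maxval=2, dtype=tf.int32)

print(input_ids.shape, attention_mask.shape)  # (13, 7) (13, 7)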
# coding=utf-8 # Copyright 2020 The HuggingFace Team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import unittest from transformers import OpenAIGPTConfig, is_tf_available from transformers.testing_utils import require_tf, slow from .test_configuration_common import ConfigTester from .test_modeling_tf_common import TFModelTesterMixin, ids_tensor if is_tf_available(): import tensorflow as tf from transformers.models.openai.modeling_tf_openai import ( TF_OPENAI_GPT_PRETRAINED_MODEL_ARCHIVE_LIST, TFOpenAIGPTDoubleHeadsModel, TFOpenAIGPTForSequenceClassification, TFOpenAIGPTLMHeadModel, TFOpenAIGPTModel, ) class TFOpenAIGPTModelTester: def __init__( self, parent, ): self.parent = parent self.batch_size = 13 self.seq_length = 7 self.is_training = True self.use_token_type_ids = True self.use_input_mask = True self.use_labels = True self.use_mc_token_ids = True self.vocab_size = 99 self.hidden_size = 32 self.num_hidden_layers = 5 self.num_attention_heads = 4 self.intermediate_size = 37 self.hidden_act = "gelu" self.hidden_dropout_prob = 0.1 self.attention_probs_dropout_prob = 0.1 self.max_position_embeddings = 512 self.type_vocab_size = 16 self.type_sequence_label_size = 2 self.initializer_range = 0.02 self.num_labels = 3 self.num_choices = 4 self.scope = None self.pad_token_id = self.vocab_size - 1 def prepare_config_and_inputs(self): input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size) input_mask = None if self.use_input_mask: input_mask = ids_tensor([self.batch_size, self.seq_length], vocab_size=2) token_type_ids = None if self.use_token_type_ids: token_type_ids = ids_tensor([self.batch_size, self.seq_length], self.type_vocab_size) mc_token_ids = None if self.use_mc_token_ids: mc_token_ids = ids_tensor([self.batch_size, self.num_choices], self.seq_length) sequence_labels = None token_labels = None choice_labels = None if self.use_labels: sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) token_labels = ids_tensor([self.batch_size, self.seq_length], self.num_labels) choice_labels = ids_tensor([self.batch_size], self.num_choices) config = OpenAIGPTConfig( vocab_size=self.vocab_size, n_embd=self.hidden_size, n_layer=self.num_hidden_layers, n_head=self.num_attention_heads, # intermediate_size=self.intermediate_size, # hidden_act=self.hidden_act, # hidden_dropout_prob=self.hidden_dropout_prob, # attention_probs_dropout_prob=self.attention_probs_dropout_prob, n_positions=self.max_position_embeddings, n_ctx=self.max_position_embeddings, # type_vocab_size=self.type_vocab_size, # initializer_range=self.initializer_range, pad_token_id=self.pad_token_id, ) head_mask = ids_tensor([self.num_hidden_layers, self.num_attention_heads], 2) return ( config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, token_labels, choice_labels, ) def create_and_check_openai_gpt_model(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): model = TFOpenAIGPTModel(config=config) inputs = {"input_ids": input_ids, 
"attention_mask": input_mask, "token_type_ids": token_type_ids} result = model(inputs) inputs = [input_ids, input_mask] result = model(inputs) result = model(input_ids) self.parent.assertEqual(result.last_hidden_state.shape, (self.batch_size, self.seq_length, self.hidden_size)) def create_and_check_openai_gpt_lm_head(self, config, input_ids, input_mask, head_mask, token_type_ids, *args): model = TFOpenAIGPTLMHeadModel(config=config) inputs = {"input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids} result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.seq_length, self.vocab_size)) def create_and_check_openai_gpt_double_head( self, config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, *args ): model = TFOpenAIGPTDoubleHeadsModel(config=config) multiple_choice_inputs_ids = tf.tile(tf.expand_dims(input_ids, 1), (1, self.num_choices, 1)) multiple_choice_input_mask = tf.tile(tf.expand_dims(input_mask, 1), (1, self.num_choices, 1)) multiple_choice_token_type_ids = tf.tile(tf.expand_dims(token_type_ids, 1), (1, self.num_choices, 1)) inputs = { "input_ids": multiple_choice_inputs_ids, "mc_token_ids": mc_token_ids, "attention_mask": multiple_choice_input_mask, "token_type_ids": multiple_choice_token_type_ids, } result = model(inputs) self.parent.assertEqual( result.logits.shape, (self.batch_size, self.num_choices, self.seq_length, self.vocab_size) ) self.parent.assertEqual(result.mc_logits.shape, (self.batch_size, self.num_choices)) def create_and_check_openai_gpt_for_sequence_classification( self, config, input_ids, input_mask, head_mask, token_type_ids, *args ): config.num_labels = self.num_labels sequence_labels = ids_tensor([self.batch_size], self.type_sequence_label_size) inputs = { "input_ids": input_ids, "attention_mask": input_mask, "token_type_ids": token_type_ids, "labels": sequence_labels, } model = TFOpenAIGPTForSequenceClassification(config) result = model(inputs) self.parent.assertEqual(result.logits.shape, (self.batch_size, self.num_labels)) def prepare_config_and_inputs_for_common(self): config_and_inputs = self.prepare_config_and_inputs() ( config, input_ids, input_mask, head_mask, token_type_ids, mc_token_ids, sequence_labels, token_labels, choice_labels, ) = config_and_inputs inputs_dict = {"input_ids": input_ids, "token_type_ids": token_type_ids, "attention_mask": input_mask} return config, inputs_dict @require_tf class TFOpenAIGPTModelTest(TFModelTesterMixin, unittest.TestCase): all_model_classes = ( (TFOpenAIGPTModel, TFOpenAIGPTLMHeadModel, TFOpenAIGPTDoubleHeadsModel, TFOpenAIGPTForSequenceClassification) if is_tf_available() else () ) all_generative_model_classes = ( (TFOpenAIGPTLMHeadModel,) if is_tf_available() else () ) # TODO (PVP): Add Double HeadsModel when generate() function is changed accordingly test_head_masking = False test_onnx = False def setUp(self): self.model_tester = TFOpenAIGPTModelTester(self) self.config_tester = ConfigTester(self, config_class=OpenAIGPTConfig, n_embd=37) def test_config(self): self.config_tester.run_common_tests() def test_openai_gpt_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_openai_gpt_model(*config_and_inputs) def test_openai_gpt_lm_head(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_openai_gpt_lm_head(*config_and_inputs) def test_openai_gpt_double_head(self): config_and_inputs = 
self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_openai_gpt_double_head(*config_and_inputs) def test_model_common_attributes(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: model = model_class(config) assert isinstance(model.get_input_embeddings(), tf.keras.layers.Layer) if model_class in self.all_generative_model_classes: x = model.get_output_embeddings() assert isinstance(x, tf.keras.layers.Layer) name = model.get_bias() assert name is None else: x = model.get_output_embeddings() assert x is None name = model.get_bias() assert name is None def test_openai_gpt_sequence_classification_model(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() self.model_tester.create_and_check_openai_gpt_for_sequence_classification(*config_and_inputs) @slow def test_model_from_pretrained(self): for model_name in TF_OPENAI_GPT_PRETRAINED_MODEL_ARCHIVE_LIST[:1]: model = TFOpenAIGPTModel.from_pretrained(model_name) self.assertIsNotNone(model) @require_tf class TFOPENAIGPTModelLanguageGenerationTest(unittest.TestCase): @slow def test_lm_generate_openai_gpt(self): model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt") input_ids = tf.convert_to_tensor([[481, 4735, 544]], dtype=tf.int32) # the president is expected_output_ids = [ 481, 4735, 544, 246, 963, 870, 762, 239, 244, 40477, 244, 249, 719, 881, 487, 544, 240, 244, 603, 481, ] # the president is a very good man. " \n " i\'m sure he is, " said the output_ids = model.generate(input_ids, do_sample=False) self.assertListEqual(output_ids[0].numpy().tolist(), expected_output_ids)
AdaMix/tests/test_modeling_tf_openai.py/0
{ "file_path": "AdaMix/tests/test_modeling_tf_openai.py", "repo_id": "AdaMix", "token_count": 5029 }
77
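The multiple-choice / double-heads checks above expand a `(batch, seq)` tensor to `(batch, num_choices, seq)` with `tf.expand_dims` followed by `tf.tile`, so every choice sees the same token ids. A self-contained sketch of just that shape manipulation, with made-up sizes:

import tensorflow as tf

batch_size, seq_length, num_choices, vocab_size = 13, 7, 4, 99
input_ids = tf.random.uniform((batch_size, seq_length), maxval=vocab_size, dtype=tf.int32)

# (batch, seq) -> (batch, 1, seq) -> (batch, num_choices, seq)
multiple_choice_input_ids = tf.tile(tf.expand_dims(input_ids, 1), (1, num_choices, 1))

print(multiple_choice_input_ids.shape)  # (13, 4, 7)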
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import subprocess
import sys

from transformers.testing_utils import TestCasePlus, require_torch


class OfflineTests(TestCasePlus):
    @require_torch
    def test_offline_mode(self):
        # this test is a bit tricky since TRANSFORMERS_OFFLINE can only be changed before
        # `transformers` is loaded, and it's too late for inside pytest - so we are changing it
        # while running an external program

        # python one-liner segments

        # this must be loaded before socket.socket is monkey-patched
        load = """
from transformers import BertConfig, BertModel, BertTokenizer
"""

        run = """
mname = "lysandre/tiny-bert-random"
BertConfig.from_pretrained(mname)
BertModel.from_pretrained(mname)
BertTokenizer.from_pretrained(mname)
print("success")
"""

        mock = """
import socket
def offline_socket(*args, **kwargs): raise socket.error("Offline mode is enabled")
socket.socket = offline_socket
"""

        # baseline - just load from_pretrained with normal network
        cmd = [sys.executable, "-c", "\n".join([load, run])]

        # should succeed
        env = self.get_env()
        result = subprocess.run(cmd, env=env, check=False, capture_output=True)
        self.assertEqual(result.returncode, 0, result.stderr)
        self.assertIn("success", result.stdout.decode())

        # next emulate no network
        cmd = [sys.executable, "-c", "\n".join([load, mock, run])]

        # should normally fail as it will fail to lookup the model files w/o the network
        env["TRANSFORMERS_OFFLINE"] = "0"
        result = subprocess.run(cmd, env=env, check=False, capture_output=True)
        self.assertEqual(result.returncode, 1, result.stderr)

        # should succeed as TRANSFORMERS_OFFLINE=1 tells it to use local files
        env["TRANSFORMERS_OFFLINE"] = "1"
        result = subprocess.run(cmd, env=env, check=False, capture_output=True)
        self.assertEqual(result.returncode, 0, result.stderr)
        self.assertIn("success", result.stdout.decode())
AdaMix/tests/test_offline.py/0
{ "file_path": "AdaMix/tests/test_offline.py", "repo_id": "AdaMix", "token_count": 930 }
78
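test_offline.py above works by launching a child Python process with a modified environment, because TRANSFORMERS_OFFLINE is only read before `transformers` is imported and therefore cannot be flipped inside the running pytest process. A minimal sketch of that subprocess-with-env pattern; the child script here is a trivial placeholder rather than the real test payload:

import os
import subprocess
import sys

# placeholder child script; the real test imports transformers and loads a model here
child = 'import os; print("offline:", os.environ.get("TRANSFORMERS_OFFLINE", "0"))'

env = dict(os.environ)              # start from the current environment
env["TRANSFORMERS_OFFLINE"] = "1"   # flip the flag for the child process only

result = subprocess.run([sys.executable, "-c", child], env=env, check=False, capture_output=True)
print(result.returncode, result.stdout.decode().strip())  # 0 offline: 1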
# coding=utf-8 # Copyright 2019 HuggingFace Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import inspect import os import pickle import re import shutil import tempfile from collections import OrderedDict from itertools import takewhile from typing import TYPE_CHECKING, Dict, List, Tuple, Union from transformers import ( PreTrainedTokenizer, PreTrainedTokenizerBase, PreTrainedTokenizerFast, is_tf_available, is_torch_available, ) from transformers.testing_utils import ( get_tests_dir, is_pt_tf_cross_test, require_tf, require_tokenizers, require_torch, slow, ) from transformers.tokenization_utils import AddedToken if TYPE_CHECKING: from transformers import PretrainedConfig, PreTrainedModel, TFPreTrainedModel NON_ENGLISH_TAGS = ["chinese", "dutch", "french", "finnish", "german", "multilingual"] def filter_non_english(_, pretrained_name: str): """ Filter all the model for non-english language """ return not any([lang in pretrained_name for lang in NON_ENGLISH_TAGS]) def filter_roberta_detectors(_, pretrained_name: str): return "detector" not in pretrained_name def merge_model_tokenizer_mappings( model_mapping: Dict["PretrainedConfig", Union["PreTrainedModel", "TFPreTrainedModel"]], tokenizer_mapping: Dict["PretrainedConfig", Tuple["PreTrainedTokenizer", "PreTrainedTokenizerFast"]], ) -> Dict[ Union["PreTrainedTokenizer", "PreTrainedTokenizerFast"], Tuple["PretrainedConfig", Union["PreTrainedModel", "TFPreTrainedModel"]], ]: configurations = list(model_mapping.keys()) model_tokenizer_mapping = OrderedDict([]) for configuration in configurations: model = model_mapping[configuration] tokenizer = tokenizer_mapping[configuration][0] tokenizer_fast = tokenizer_mapping[configuration][1] model_tokenizer_mapping.update({tokenizer: (configuration, model)}) if tokenizer_fast is not None: model_tokenizer_mapping.update({tokenizer_fast: (configuration, model)}) return model_tokenizer_mapping class TokenizerTesterMixin: tokenizer_class = None rust_tokenizer_class = None test_rust_tokenizer = False space_between_special_tokens = False from_pretrained_kwargs = None from_pretrained_filter = None from_pretrained_vocab_key = "vocab_file" test_seq2seq = True def setUp(self) -> None: # Tokenizer.filter makes it possible to filter which Tokenizer to case based on all the # information available in Tokenizer (name, rust class, python class, vocab key name) if self.test_rust_tokenizer: tokenizers_list = [ ( self.rust_tokenizer_class, pretrained_name, self.from_pretrained_kwargs if self.from_pretrained_kwargs is not None else {}, ) for pretrained_name in self.rust_tokenizer_class.pretrained_vocab_files_map[ self.from_pretrained_vocab_key ].keys() if self.from_pretrained_filter is None or (self.from_pretrained_filter is not None and self.from_pretrained_filter(pretrained_name)) ] self.tokenizers_list = tokenizers_list[:1] # Let's just test the first pretrained vocab for speed else: self.tokenizers_list = [] with open(f"{get_tests_dir()}/fixtures/sample_text.txt", encoding="utf-8") as f_data: self._data = 
f_data.read().replace("\n\n", "\n").strip() self.tmpdirname = tempfile.mkdtemp() def tearDown(self): shutil.rmtree(self.tmpdirname) def get_input_output_texts(self, tokenizer): input_txt = self.get_clean_sequence(tokenizer)[0] return input_txt, input_txt def get_clean_sequence(self, tokenizer, with_prefix_space=False, max_length=20, min_length=5) -> Tuple[str, list]: toks = [(i, tokenizer.decode([i], clean_up_tokenization_spaces=False)) for i in range(len(tokenizer))] toks = list(filter(lambda t: re.match(r"^[ a-zA-Z]+$", t[1]), toks)) toks = list(filter(lambda t: [t[0]] == tokenizer.encode(t[1], add_special_tokens=False), toks)) if max_length is not None and len(toks) > max_length: toks = toks[:max_length] if min_length is not None and len(toks) < min_length and len(toks) > 0: while len(toks) < min_length: toks = toks + toks # toks_str = [t[1] for t in toks] toks_ids = [t[0] for t in toks] # Ensure consistency output_txt = tokenizer.decode(toks_ids, clean_up_tokenization_spaces=False) if " " not in output_txt and len(toks_ids) > 1: output_txt = ( tokenizer.decode([toks_ids[0]], clean_up_tokenization_spaces=False) + " " + tokenizer.decode(toks_ids[1:], clean_up_tokenization_spaces=False) ) if with_prefix_space: output_txt = " " + output_txt output_ids = tokenizer.encode(output_txt, add_special_tokens=False) return output_txt, output_ids def get_tokenizers(self, fast=True, **kwargs) -> List[PreTrainedTokenizerBase]: if fast and self.test_rust_tokenizer: return [self.get_tokenizer(**kwargs), self.get_rust_tokenizer(**kwargs)] return [self.get_tokenizer(**kwargs)] def get_tokenizer(self, **kwargs) -> PreTrainedTokenizer: return self.tokenizer_class.from_pretrained(self.tmpdirname, **kwargs) def get_rust_tokenizer(self, **kwargs) -> PreTrainedTokenizerFast: return self.rust_tokenizer_class.from_pretrained(self.tmpdirname, **kwargs) # def get_input_output_texts(self) -> Tuple[str, str]: # """Feel free to overwrite""" # # TODO: @property # return ( # "This is a test", # "This is a test", # ) def assert_padded_input_match(self, input_r: list, input_p: list, max_length: int, pad_token_id: int): # Ensure we match max_length self.assertEqual(len(input_r), max_length) self.assertEqual(len(input_p), max_length) # Ensure the number of padded tokens is the same padded_tokens_r = list(takewhile(lambda i: i == pad_token_id, reversed(input_r))) padded_tokens_p = list(takewhile(lambda i: i == pad_token_id, reversed(input_p))) self.assertSequenceEqual(padded_tokens_r, padded_tokens_p) def assert_batch_padded_input_match( self, input_r: dict, input_p: dict, max_length: int, pad_token_id: int, model_main_input_name: str = "input_ids", ): for i_r in input_r.values(): self.assertEqual(len(i_r), 2), self.assertEqual(len(i_r[0]), max_length), self.assertEqual( len(i_r[1]), max_length ) self.assertEqual(len(i_r), 2), self.assertEqual(len(i_r[0]), max_length), self.assertEqual( len(i_r[1]), max_length ) for i_r, i_p in zip(input_r[model_main_input_name], input_p[model_main_input_name]): self.assert_padded_input_match(i_r, i_p, max_length, pad_token_id) for i_r, i_p in zip(input_r["attention_mask"], input_p["attention_mask"]): self.assertSequenceEqual(i_r, i_p) @staticmethod def convert_batch_encode_plus_format_to_encode_plus(batch_encode_plus_sequences): # Switch from batch_encode_plus format: {'input_ids': [[...], [...]], ...} # to the list of examples/ encode_plus format: [{'input_ids': [...], ...}, {'input_ids': [...], ...}] return [ {value: batch_encode_plus_sequences[value][i] for value in 
batch_encode_plus_sequences.keys()} for i in range(len(batch_encode_plus_sequences["input_ids"])) ] def test_model_input_names_signature(self): accepted_model_main_input_names = [ "input_ids", # nlp models "input_values", # speech models ] tokenizers = self.get_tokenizers() for tokenizer in tokenizers: # first name of model_input_names has to correspond to main model input name # to make sure `tokenizer.pad(...)` works correctly self.assertTrue(tokenizer.model_input_names[0] in accepted_model_main_input_names) def test_rust_tokenizer_signature(self): if not self.test_rust_tokenizer: return signature = inspect.signature(self.rust_tokenizer_class.__init__) self.assertIn("tokenizer_file", signature.parameters) self.assertIsNone(signature.parameters["tokenizer_file"].default) def test_tokenizer_slow_store_full_signature(self): signature = inspect.signature(self.tokenizer_class.__init__) tokenizer = self.get_tokenizer() for parameter_name, parameter in signature.parameters.items(): if parameter.default != inspect.Parameter.empty: self.assertIn(parameter_name, tokenizer.init_kwargs) def test_tokenizer_fast_store_full_signature(self): if not self.test_rust_tokenizer: return signature = inspect.signature(self.rust_tokenizer_class.__init__) tokenizer = self.get_rust_tokenizer() for parameter_name, parameter in signature.parameters.items(): if parameter.default != inspect.Parameter.empty and parameter_name != "tokenizer_file": self.assertIn(parameter_name, tokenizer.init_kwargs) def test_rust_and_python_full_tokenizers(self): if not self.test_rust_tokenizer: return tokenizer = self.get_tokenizer() rust_tokenizer = self.get_rust_tokenizer() sequence, _ = self.get_input_output_texts(tokenizer) # We don't have an exact equivalence on `tokenize()` between Rust and Slow # Slow tokenizer only split tokens, Rust tokenizers will replace with <unk> # tokens = tokenizer.tokenize(sequence) # rust_tokens = rust_tokenizer.tokenize(sequence) # self.assertListEqual(tokens, rust_tokens) ids = tokenizer.encode(sequence, add_special_tokens=False) rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=False) self.assertListEqual(ids, rust_ids) ids = tokenizer.encode(sequence, add_special_tokens=True) rust_ids = rust_tokenizer.encode(sequence, add_special_tokens=True) self.assertListEqual(ids, rust_ids) def test_tokenizers_common_properties(self): tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): attributes_list = [ "bos_token", "eos_token", "unk_token", "sep_token", "pad_token", "cls_token", "mask_token", ] for attr in attributes_list: self.assertTrue(hasattr(tokenizer, attr)) self.assertTrue(hasattr(tokenizer, attr + "_id")) self.assertTrue(hasattr(tokenizer, "additional_special_tokens")) self.assertTrue(hasattr(tokenizer, "additional_special_tokens_ids")) attributes_list = [ "model_max_length", "init_inputs", "init_kwargs", ] if not isinstance(tokenizer, PreTrainedTokenizerFast): attributes_list += [ "added_tokens_encoder", "added_tokens_decoder", ] for attr in attributes_list: self.assertTrue(hasattr(tokenizer, attr)) def test_save_and_load_tokenizer(self): # safety check on max_len default value so we are sure the test works tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): self.assertNotEqual(tokenizer.model_max_length, 42) # Now let's start the test tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): # 
Isolate this from the other tests because we save additional tokens/etc tmpdirname = tempfile.mkdtemp() sample_text = " He is very happy, UNwant\u00E9d,running" before_tokens = tokenizer.encode(sample_text, add_special_tokens=False) before_vocab = tokenizer.get_vocab() tokenizer.save_pretrained(tmpdirname) after_tokenizer = tokenizer.__class__.from_pretrained(tmpdirname) after_tokens = after_tokenizer.encode(sample_text, add_special_tokens=False) after_vocab = after_tokenizer.get_vocab() self.assertListEqual(before_tokens, after_tokens) self.assertDictEqual(before_vocab, after_vocab) shutil.rmtree(tmpdirname) tokenizers = self.get_tokenizers(model_max_length=42) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): # Isolate this from the other tests because we save additional tokens/etc tmpdirname = tempfile.mkdtemp() sample_text = " He is very happy, UNwant\u00E9d,running" tokenizer.add_tokens(["bim", "bambam"]) additional_special_tokens = tokenizer.additional_special_tokens additional_special_tokens.append("new_additional_special_token") tokenizer.add_special_tokens({"additional_special_tokens": additional_special_tokens}) before_tokens = tokenizer.encode(sample_text, add_special_tokens=False) before_vocab = tokenizer.get_vocab() tokenizer.save_pretrained(tmpdirname) after_tokenizer = tokenizer.__class__.from_pretrained(tmpdirname) after_tokens = after_tokenizer.encode(sample_text, add_special_tokens=False) after_vocab = after_tokenizer.get_vocab() self.assertListEqual(before_tokens, after_tokens) self.assertDictEqual(before_vocab, after_vocab) self.assertIn("bim", after_vocab) self.assertIn("bambam", after_vocab) self.assertIn("new_additional_special_token", after_tokenizer.additional_special_tokens) self.assertEqual(after_tokenizer.model_max_length, 42) tokenizer = tokenizer.__class__.from_pretrained(tmpdirname, model_max_length=43) self.assertEqual(tokenizer.model_max_length, 43) shutil.rmtree(tmpdirname) # Test that we can also use the non-legacy saving format for fast tokenizers tokenizers = self.get_tokenizers(model_max_length=42) for tokenizer in tokenizers: if not tokenizer.is_fast: continue with self.subTest(f"{tokenizer.__class__.__name__}"): # Isolate this from the other tests because we save additional tokens/etc tmpdirname = tempfile.mkdtemp() sample_text = " He is very happy, UNwant\u00E9d,running" tokenizer.add_tokens(["bim", "bambam"]) additional_special_tokens = tokenizer.additional_special_tokens additional_special_tokens.append("new_additional_special_token") tokenizer.add_special_tokens({"additional_special_tokens": additional_special_tokens}) before_tokens = tokenizer.encode(sample_text, add_special_tokens=False) before_vocab = tokenizer.get_vocab() tokenizer.save_pretrained(tmpdirname) after_tokenizer = tokenizer.__class__.from_pretrained(tmpdirname) after_tokens = after_tokenizer.encode(sample_text, add_special_tokens=False) after_vocab = after_tokenizer.get_vocab() self.assertListEqual(before_tokens, after_tokens) self.assertDictEqual(before_vocab, after_vocab) self.assertIn("bim", after_vocab) self.assertIn("bambam", after_vocab) self.assertIn("new_additional_special_token", after_tokenizer.additional_special_tokens) self.assertEqual(after_tokenizer.model_max_length, 42) tokenizer = tokenizer.__class__.from_pretrained(tmpdirname, model_max_length=43) self.assertEqual(tokenizer.model_max_length, 43) shutil.rmtree(tmpdirname) def test_pickle_tokenizer(self): """Google pickle __getstate__ __setstate__ if you are struggling with this.""" 
tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): self.assertIsNotNone(tokenizer) text = "Munich and Berlin are nice cities" subwords = tokenizer.tokenize(text) filename = os.path.join(self.tmpdirname, "tokenizer.bin") with open(filename, "wb") as handle: pickle.dump(tokenizer, handle) with open(filename, "rb") as handle: tokenizer_new = pickle.load(handle) subwords_loaded = tokenizer_new.tokenize(text) self.assertListEqual(subwords, subwords_loaded) @require_tokenizers def test_pickle_added_tokens(self): tok1 = AddedToken("<s>", rstrip=True, lstrip=True, normalized=False, single_word=True) tok2 = pickle.loads(pickle.dumps(tok1)) self.assertEqual(tok1.__getstate__(), tok2.__getstate__()) def test_added_tokens_do_lower_case(self): # TODO(thom) activate fast tokenizer tests once Rust tokenizers accepts white spaces in added tokens tokenizers = self.get_tokenizers(fast=False, do_lower_case=True) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if not hasattr(tokenizer, "do_lower_case") or not tokenizer.do_lower_case: continue special_token = tokenizer.all_special_tokens[0] text = special_token + " aaaaa bbbbbb low cccccccccdddddddd l " + special_token text2 = special_token + " AAAAA BBBBBB low CCCCCCCCCDDDDDDDD l " + special_token toks0 = tokenizer.tokenize(text) # toks before adding new_toks new_toks = ["aaaaa bbbbbb", "cccccccccdddddddd", "AAAAA BBBBBB", "CCCCCCCCCDDDDDDDD"] added = tokenizer.add_tokens(new_toks) self.assertEqual(added, 2) toks = tokenizer.tokenize(text) toks2 = tokenizer.tokenize(text2) self.assertEqual(len(toks), len(toks2)) self.assertListEqual(toks, toks2) if not isinstance(tokenizer, PreTrainedTokenizerFast): # Python tokenizers can have added tokens with spaces inside them # cf https://github.com/huggingface/tokenizers/issues/302 self.assertNotEqual(len(toks), len(toks0)) # toks0 should be longer # Check that none of the special tokens are lowercased sequence_with_special_tokens = "A " + " yEs ".join(tokenizer.all_special_tokens) + " B" tokenized_sequence = tokenizer.tokenize(sequence_with_special_tokens) for special_token in tokenizer.all_special_tokens: self.assertTrue(special_token in tokenized_sequence) tokenizers = self.get_tokenizers(fast=False, do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if hasattr(tokenizer, "do_lower_case") and tokenizer.do_lower_case: continue special_token = tokenizer.all_special_tokens[0] text = special_token + " aaaaa bbbbbb low cccccccccdddddddd l " + special_token text2 = special_token + " AAAAA BBBBBB low CCCCCCCCCDDDDDDDD l " + special_token new_toks = ["aaaaa bbbbbb", "cccccccccdddddddd", "AAAAA BBBBBB", "CCCCCCCCCDDDDDDDD"] toks0 = tokenizer.tokenize(text) # toks before adding new_toks added = tokenizer.add_tokens(new_toks) self.assertIn(added, [2, 4]) toks = tokenizer.tokenize(text) toks2 = tokenizer.tokenize(text2) self.assertEqual(len(toks), len(toks2)) # Length should still be the same self.assertNotEqual(toks[1], toks2[1]) # But at least the first non-special tokens should differ if not isinstance(tokenizer, PreTrainedTokenizerFast): # Python tokenizers can have added tokens with spaces inside them # cf https://github.com/huggingface/tokenizers/issues/302 self.assertNotEqual(len(toks), len(toks0)) # toks0 should be longer def test_add_tokens_tokenizer(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with 
self.subTest(f"{tokenizer.__class__.__name__}"): vocab_size = tokenizer.vocab_size all_size = len(tokenizer) self.assertNotEqual(vocab_size, 0) # We usually have added tokens from the start in tests because our vocab fixtures are # smaller than the original vocabs - let's not assert this # self.assertEqual(vocab_size, all_size) new_toks = ["aaaaa bbbbbb", "cccccccccdddddddd"] added_toks = tokenizer.add_tokens(new_toks) vocab_size_2 = tokenizer.vocab_size all_size_2 = len(tokenizer) self.assertNotEqual(vocab_size_2, 0) self.assertEqual(vocab_size, vocab_size_2) self.assertEqual(added_toks, len(new_toks)) self.assertEqual(all_size_2, all_size + len(new_toks)) tokens = tokenizer.encode("aaaaa bbbbbb low cccccccccdddddddd l", add_special_tokens=False) self.assertGreaterEqual(len(tokens), 4) self.assertGreater(tokens[0], tokenizer.vocab_size - 1) self.assertGreater(tokens[-2], tokenizer.vocab_size - 1) new_toks_2 = {"eos_token": ">>>>|||<||<<|<<", "pad_token": "<<<<<|||>|>>>>|>"} added_toks_2 = tokenizer.add_special_tokens(new_toks_2) vocab_size_3 = tokenizer.vocab_size all_size_3 = len(tokenizer) self.assertNotEqual(vocab_size_3, 0) self.assertEqual(vocab_size, vocab_size_3) self.assertEqual(added_toks_2, len(new_toks_2)) self.assertEqual(all_size_3, all_size_2 + len(new_toks_2)) tokens = tokenizer.encode( ">>>>|||<||<<|<< aaaaabbbbbb low cccccccccdddddddd <<<<<|||>|>>>>|> l", add_special_tokens=False ) self.assertGreaterEqual(len(tokens), 6) self.assertGreater(tokens[0], tokenizer.vocab_size - 1) self.assertGreater(tokens[0], tokens[1]) self.assertGreater(tokens[-2], tokenizer.vocab_size - 1) self.assertGreater(tokens[-2], tokens[-3]) self.assertEqual(tokens[0], tokenizer.eos_token_id) self.assertEqual(tokens[-2], tokenizer.pad_token_id) def test_add_special_tokens(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): input_text, ids = self.get_clean_sequence(tokenizer) special_token = "[SPECIAL_TOKEN]" tokenizer.add_special_tokens({"cls_token": special_token}) encoded_special_token = tokenizer.encode(special_token, add_special_tokens=False) self.assertEqual(len(encoded_special_token), 1) text = tokenizer.decode(ids + encoded_special_token, clean_up_tokenization_spaces=False) encoded = tokenizer.encode(text, add_special_tokens=False) input_encoded = tokenizer.encode(input_text, add_special_tokens=False) special_token_id = tokenizer.encode(special_token, add_special_tokens=False) self.assertEqual(encoded, input_encoded + special_token_id) decoded = tokenizer.decode(encoded, skip_special_tokens=True) self.assertTrue(special_token not in decoded) def test_internal_consistency(self): tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): input_text, output_text = self.get_input_output_texts(tokenizer) tokens = tokenizer.tokenize(input_text) ids = tokenizer.convert_tokens_to_ids(tokens) ids_2 = tokenizer.encode(input_text, add_special_tokens=False) self.assertListEqual(ids, ids_2) tokens_2 = tokenizer.convert_ids_to_tokens(ids) self.assertNotEqual(len(tokens_2), 0) text_2 = tokenizer.decode(ids) self.assertIsInstance(text_2, str) self.assertEqual(text_2, output_text) @require_tokenizers def test_encode_decode_with_spaces(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): # new_toks = ["[ABC]", "[DEF]"] # TODO(thom) add this one back when Rust toks are 
ready: , "GHI IHG"] new_toks = [AddedToken("[ABC]", normalized=False), AddedToken("[DEF]", normalized=False)] tokenizer.add_tokens(new_toks) input = "[ABC][DEF][ABC][DEF]" # TODO(thom) add back cf above: "[ABC] [DEF] [ABC] GHI IHG [DEF]" if self.space_between_special_tokens: output = "[ABC] [DEF] [ABC] [DEF]" else: output = input encoded = tokenizer.encode(input, add_special_tokens=False) decoded = tokenizer.decode(encoded, spaces_between_special_tokens=self.space_between_special_tokens) self.assertIn(decoded, [output, output.lower()]) def test_pretrained_model_lists(self): # We should have at least one default checkpoint for each tokenizer # We should specify the max input length as well (used in some parts of the library to list the pretrained checkpoints) self.assertGreaterEqual(len(self.tokenizer_class.pretrained_vocab_files_map), 1) self.assertGreaterEqual(len(list(self.tokenizer_class.pretrained_vocab_files_map.values())[0]), 1) self.assertEqual( len(list(self.tokenizer_class.pretrained_vocab_files_map.values())[0]), len(self.tokenizer_class.max_model_input_sizes), ) weights_list = list(self.tokenizer_class.max_model_input_sizes.keys()) weights_lists_2 = [] for file_id, map_list in self.tokenizer_class.pretrained_vocab_files_map.items(): weights_lists_2.append(list(map_list.keys())) for weights_list_2 in weights_lists_2: self.assertListEqual(weights_list, weights_list_2) def test_mask_output(self): tokenizers = self.get_tokenizers(fast=False, do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if ( tokenizer.build_inputs_with_special_tokens.__qualname__.split(".")[0] != "PreTrainedTokenizer" and "token_type_ids" in tokenizer.model_input_names ): seq_0 = "Test this method." seq_1 = "With these inputs." information = tokenizer.encode_plus(seq_0, seq_1, add_special_tokens=True) sequences, mask = information["input_ids"], information["token_type_ids"] self.assertEqual(len(sequences), len(mask)) def test_token_type_ids(self): tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): seq_0 = "Test this method." # We want sequence 0 and sequence 1 to be tagged # respectively with 0 and 1 token_ids # (regardless of whether the model uses token type ids) # We use this assumption in the QA pipeline, among other places output = tokenizer(seq_0, return_token_type_ids=True) self.assertIn(0, output["token_type_ids"]) def test_sequence_ids(self): tokenizers = self.get_tokenizers() for tokenizer in tokenizers: if not tokenizer.is_fast: continue with self.subTest(f"{tokenizer.__class__.__name__}"): seq_0 = "Test this method." seq_1 = "With these inputs." # We want sequence 0 and sequence 1 to be tagged # respectively with 0 and 1 token_ids # (regardless of whether the model uses token type ids) # We use this assumption in the QA pipeline, among other places output = tokenizer(seq_0) self.assertIn(0, output.sequence_ids()) output = tokenizer(seq_0, seq_1) self.assertIn(0, output.sequence_ids()) self.assertIn(1, output.sequence_ids()) if tokenizer.num_special_tokens_to_add(pair=True): self.assertIn(None, output.sequence_ids()) def test_number_of_added_tokens(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): seq_0 = "Test this method." seq_1 = "With these inputs."
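# Encode the pair with and without special tokens: the size difference must match num_special_tokens_to_add(pair=True).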
sequences = tokenizer.encode(seq_0, seq_1, add_special_tokens=False) attached_sequences = tokenizer.encode(seq_0, seq_1, add_special_tokens=True) # Method is implemented (e.g. not GPT-2) if len(attached_sequences) != 2: self.assertEqual( tokenizer.num_special_tokens_to_add(pair=True), len(attached_sequences) - len(sequences) ) def test_maximum_encoding_length_single_input(self): tokenizers = self.get_tokenizers(do_lower_case=False, model_max_length=100) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): seq_0, ids = self.get_clean_sequence(tokenizer, max_length=20) sequence = tokenizer.encode(seq_0, add_special_tokens=False) total_length = len(sequence) assert total_length > 4, "Issue with the testing sequence, please update it it's too short" # Test with max model input length model_max_length = tokenizer.model_max_length self.assertEqual(model_max_length, 100) seq_1 = seq_0 * model_max_length sequence1 = tokenizer(seq_1, add_special_tokens=False) total_length1 = len(sequence1["input_ids"]) assert ( total_length1 > model_max_length ), "Issue with the testing sequence, please update it it's too short" # Simple padding_strategies = ( [False, True, "longest"] if tokenizer.pad_token and tokenizer.pad_token_id >= 0 else [False] ) for padding_state in padding_strategies: with self.subTest(f"Padding: {padding_state}"): for truncation_state in [True, "longest_first", "only_first"]: with self.subTest(f"Truncation: {truncation_state}"): output = tokenizer(seq_1, padding=padding_state, truncation=truncation_state) self.assertEqual(len(output["input_ids"]), model_max_length) output = tokenizer([seq_1], padding=padding_state, truncation=truncation_state) self.assertEqual(len(output["input_ids"][0]), model_max_length) # Simple with no truncation # Reset warnings tokenizer.deprecation_warnings = {} with self.assertLogs("transformers", level="WARNING") as cm: output = tokenizer(seq_1, padding=padding_state, truncation=False) self.assertNotEqual(len(output["input_ids"]), model_max_length) self.assertEqual(len(cm.records), 1) self.assertTrue( cm.records[0].message.startswith( "Token indices sequence length is longer than the specified maximum sequence length for this model" ) ) tokenizer.deprecation_warnings = {} with self.assertLogs("transformers", level="WARNING") as cm: output = tokenizer([seq_1], padding=padding_state, truncation=False) self.assertNotEqual(len(output["input_ids"][0]), model_max_length) self.assertEqual(len(cm.records), 1) self.assertTrue( cm.records[0].message.startswith( "Token indices sequence length is longer than the specified maximum sequence length for this model" ) ) # Overflowing tokens stride = 2 information = tokenizer( seq_0, max_length=total_length - 2, add_special_tokens=False, stride=stride, truncation="longest_first", return_overflowing_tokens=True, # add_prefix_space=False, ) # Overflowing tokens are handled quite differently in slow and fast tokenizers if isinstance(tokenizer, PreTrainedTokenizerFast): truncated_sequence = information["input_ids"][0] overflowing_tokens = information["input_ids"][1] self.assertEqual(len(information["input_ids"]), 2) self.assertEqual(len(truncated_sequence), total_length - 2) self.assertEqual(truncated_sequence, sequence[:-2]) self.assertEqual(len(overflowing_tokens), 2 + stride) self.assertEqual(overflowing_tokens, sequence[-(2 + stride) :]) else: truncated_sequence = information["input_ids"] overflowing_tokens = information["overflowing_tokens"] self.assertEqual(len(truncated_sequence), total_length - 2) 
self.assertEqual(truncated_sequence, sequence[:-2]) self.assertEqual(len(overflowing_tokens), 2 + stride) def test_maximum_encoding_length_pair_input(self): tokenizers = self.get_tokenizers(do_lower_case=False, model_max_length=100) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): # Build a sequence from our model's vocabulary stride = 2 seq_0, ids = self.get_clean_sequence(tokenizer, max_length=20) if len(ids) <= 2 + stride: seq_0 = (seq_0 + " ") * (2 + stride) ids = None seq0_tokens = tokenizer.encode(seq_0, add_special_tokens=False) assert len(seq0_tokens) > 2 + stride seq_1 = "This is another sentence to be encoded." seq1_tokens = tokenizer.encode(seq_1, add_special_tokens=False) if abs(len(seq0_tokens) - len(seq1_tokens)) <= 2: seq1_tokens = seq1_tokens + seq1_tokens seq_1 = tokenizer.decode(seq1_tokens, clean_up_tokenization_spaces=False) seq1_tokens = tokenizer.encode(seq_1, add_special_tokens=False) assert len(seq1_tokens) > 2 + stride smallest = seq1_tokens if len(seq0_tokens) > len(seq1_tokens) else seq0_tokens # We are not using the special tokens - a bit too hard to test all the tokenizers with this # TODO try this again later sequence = tokenizer.encode(seq_0, seq_1, add_special_tokens=False) # , add_prefix_space=False) # Test with max model input length model_max_length = tokenizer.model_max_length self.assertEqual(model_max_length, 100) seq_2 = seq_0 * model_max_length assert len(seq_2) > model_max_length sequence1 = tokenizer(seq_1, add_special_tokens=False) total_length1 = len(sequence1["input_ids"]) sequence2 = tokenizer(seq_2, seq_1, add_special_tokens=False) total_length2 = len(sequence2["input_ids"]) assert total_length1 < model_max_length - 10, "Issue with the testing sequence, please update it." assert total_length2 > model_max_length, "Issue with the testing sequence, please update it." 
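# The block below exercises every padding strategy against every truncation strategy and checks that the
# pair is clipped to model_max_length, while truncation=False only triggers the "sequence too long" warning.
# A minimal usage sketch of the same padding/truncation interplay (the checkpoint name is only an assumption
# used for illustration; any slow or fast tokenizer behaves the same way):
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("bert-base-uncased")
#   enc = tok("very long text " * 200, "short pair", truncation="longest_first", padding="max_length", max_length=64)
#   assert len(enc["input_ids"]) == 64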
# Simple padding_strategies = ( [False, True, "longest"] if tokenizer.pad_token and tokenizer.pad_token_id >= 0 else [False] ) for padding_state in padding_strategies: with self.subTest(f"{tokenizer.__class__.__name__} Padding: {padding_state}"): for truncation_state in [True, "longest_first", "only_first"]: with self.subTest(f"{tokenizer.__class__.__name__} Truncation: {truncation_state}"): output = tokenizer(seq_2, seq_1, padding=padding_state, truncation=truncation_state) self.assertEqual(len(output["input_ids"]), model_max_length) output = tokenizer( [seq_2], [seq_1], padding=padding_state, truncation=truncation_state ) self.assertEqual(len(output["input_ids"][0]), model_max_length) # Simple output = tokenizer(seq_1, seq_2, padding=padding_state, truncation="only_second") self.assertEqual(len(output["input_ids"]), model_max_length) output = tokenizer([seq_1], [seq_2], padding=padding_state, truncation="only_second") self.assertEqual(len(output["input_ids"][0]), model_max_length) # Simple with no truncation # Reset warnings tokenizer.deprecation_warnings = {} with self.assertLogs("transformers", level="WARNING") as cm: output = tokenizer(seq_1, seq_2, padding=padding_state, truncation=False) self.assertNotEqual(len(output["input_ids"]), model_max_length) self.assertEqual(len(cm.records), 1) self.assertTrue( cm.records[0].message.startswith( "Token indices sequence length is longer than the specified maximum sequence length for this model" ) ) tokenizer.deprecation_warnings = {} with self.assertLogs("transformers", level="WARNING") as cm: output = tokenizer([seq_1], [seq_2], padding=padding_state, truncation=False) self.assertNotEqual(len(output["input_ids"][0]), model_max_length) self.assertEqual(len(cm.records), 1) self.assertTrue( cm.records[0].message.startswith( "Token indices sequence length is longer than the specified maximum sequence length for this model" ) ) truncated_first_sequence = tokenizer.encode(seq_0, add_special_tokens=False)[:-2] + tokenizer.encode( seq_1, add_special_tokens=False ) truncated_second_sequence = ( tokenizer.encode(seq_0, add_special_tokens=False) + tokenizer.encode(seq_1, add_special_tokens=False)[:-2] ) truncated_longest_sequence = ( truncated_first_sequence if len(seq0_tokens) > len(seq1_tokens) else truncated_second_sequence ) overflow_first_sequence = tokenizer.encode(seq_0, add_special_tokens=False)[ -(2 + stride) : ] + tokenizer.encode(seq_1, add_special_tokens=False) overflow_second_sequence = ( tokenizer.encode(seq_0, add_special_tokens=False) + tokenizer.encode(seq_1, add_special_tokens=False)[-(2 + stride) :] ) overflow_longest_sequence = ( overflow_first_sequence if len(seq0_tokens) > len(seq1_tokens) else overflow_second_sequence ) information = tokenizer.encode_plus( seq_0, seq_1, max_length=len(sequence) - 2, add_special_tokens=False, stride=stride, truncation="longest_first", return_overflowing_tokens=True, # add_prefix_space=False, ) # Overflowing tokens are handled quite differently in slow and fast tokenizers if isinstance(tokenizer, PreTrainedTokenizerFast): truncated_sequence = information["input_ids"][0] overflowing_tokens = information["input_ids"][1] self.assertEqual(len(information["input_ids"]), 2) self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_longest_sequence) self.assertEqual(len(overflowing_tokens), 2 + stride + len(smallest)) self.assertEqual(overflowing_tokens, overflow_longest_sequence) else: truncated_sequence = information["input_ids"] overflowing_tokens = 
information["overflowing_tokens"] self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_longest_sequence) self.assertEqual( len(overflowing_tokens), 2 + stride ) # No overflowing tokens when using 'longest' in python tokenizers information = tokenizer.encode_plus( seq_0, seq_1, max_length=len(sequence) - 2, add_special_tokens=False, stride=stride, truncation=True, return_overflowing_tokens=True, # add_prefix_space=False, ) # Overflowing tokens are handled quite differently in slow and fast tokenizers if isinstance(tokenizer, PreTrainedTokenizerFast): truncated_sequence = information["input_ids"][0] overflowing_tokens = information["input_ids"][1] self.assertEqual(len(information["input_ids"]), 2) self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_longest_sequence) self.assertEqual(len(overflowing_tokens), 2 + stride + len(smallest)) self.assertEqual(overflowing_tokens, overflow_longest_sequence) else: truncated_sequence = information["input_ids"] overflowing_tokens = information["overflowing_tokens"] self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_longest_sequence) self.assertEqual( len(overflowing_tokens), 2 + stride ) # No overflowing tokens when using 'longest' in python tokenizers information_first_truncated = tokenizer.encode_plus( seq_0, seq_1, max_length=len(sequence) - 2, add_special_tokens=False, stride=stride, truncation="only_first", return_overflowing_tokens=True, # add_prefix_space=False, ) # Overflowing tokens are handled quite differently in slow and fast tokenizers if isinstance(tokenizer, PreTrainedTokenizerFast): truncated_sequence = information_first_truncated["input_ids"][0] overflowing_tokens = information_first_truncated["input_ids"][1] self.assertEqual(len(information_first_truncated["input_ids"]), 2) self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_first_sequence) self.assertEqual(len(overflowing_tokens), 2 + stride + len(seq1_tokens)) self.assertEqual(overflowing_tokens, overflow_first_sequence) else: truncated_sequence = information_first_truncated["input_ids"] overflowing_tokens = information_first_truncated["overflowing_tokens"] self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_first_sequence) self.assertEqual(len(overflowing_tokens), 2 + stride) self.assertEqual(overflowing_tokens, seq0_tokens[-(2 + stride) :]) information_second_truncated = tokenizer.encode_plus( seq_0, seq_1, max_length=len(sequence) - 2, add_special_tokens=False, stride=stride, truncation="only_second", return_overflowing_tokens=True, # add_prefix_space=False, ) # Overflowing tokens are handled quite differently in slow and fast tokenizers if isinstance(tokenizer, PreTrainedTokenizerFast): truncated_sequence = information_second_truncated["input_ids"][0] overflowing_tokens = information_second_truncated["input_ids"][1] self.assertEqual(len(information_second_truncated["input_ids"]), 2) self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_second_sequence) self.assertEqual(len(overflowing_tokens), 2 + stride + len(seq0_tokens)) self.assertEqual(overflowing_tokens, overflow_second_sequence) else: truncated_sequence = information_second_truncated["input_ids"] overflowing_tokens = information_second_truncated["overflowing_tokens"] 
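# With "only_second" truncation, slow tokenizers return only the clipped tail of the second sequence (2 + stride tokens) as overflow.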
self.assertEqual(len(truncated_sequence), len(sequence) - 2) self.assertEqual(truncated_sequence, truncated_second_sequence) self.assertEqual(len(overflowing_tokens), 2 + stride) self.assertEqual(overflowing_tokens, seq1_tokens[-(2 + stride) :]) # def test_encode_input_type(self): # tokenizers = self.get_tokenizers(do_lower_case=False) # for tokenizer in tokenizers: # with self.subTest(f"{tokenizer.__class__.__name__}"): # sequence = "Let's encode this sequence" # tokens = sequence.split() # tokenizer.tokenize(sequence) # # input_ids = tokenizer.convert_tokens_to_ids(tokens) # formatted_input = tokenizer.encode(sequence, add_special_tokens=True, add_prefix_space=False) # self.assertEqual( # tokenizer.encode(tokens, is_split_into_words=True, add_special_tokens=True), formatted_input # ) # # This is not supported with the Rust tokenizers # # self.assertEqual(tokenizer.encode(input_ids, add_special_tokens=True), formatted_input) # def test_swap_special_token(self): # tokenizers = self.get_tokenizers(do_lower_case=False) # for tokenizer in tokenizers: # with self.subTest(f"{tokenizer.__class__.__name__}"): # # Our mask token # mask = "<mask>" # # We take a single word in the middle of the vocabulary # all_tokens = sorted(tokenizer.get_vocab().keys()) # word = tokenizer.decode(tokenizer.encode(all_tokens[len(all_tokens)//2], add_special_tokens=False)[:1]) # sequence_0 = "Encode " + word + " sequence" # sequence_masked_0 = "Encode " + mask + " sequence" # sequence_1 = word + " this sequence" # sequence_masked_1 = mask + " this sequence" # # Add tokens so that masked token isn't split # # tokens = [AddedToken(t, lstrip=True, normalized=False) for t in sequence.split()] # # tokenizer.add_tokens(tokens) # tokenizer.add_special_tokens( # {"mask_token": AddedToken(mask, normalized=False)} # ) # Eat left space on Byte-level BPE tokenizers # mask_ind = tokenizer.convert_tokens_to_ids(mask) # # Test first masked sequence # encoded_0 = tokenizer.encode(sequence_0, add_special_tokens=False) # encoded_masked = tokenizer.encode(sequence_masked_0, add_special_tokens=False) # assert len(encoded_masked) == len(encoded_0) # mask_loc = encoded_masked.index(mask_ind) # encoded_masked[mask_loc] = encoded_0[mask_loc] # self.assertEqual(encoded_masked, encoded_0) # # Test second masked sequence # encoded_1 = tokenizer.encode(sequence_1, add_special_tokens=False) # encoded_masked = tokenizer.encode(sequence_masked_1, add_special_tokens=False) # assert len(encoded_masked) == len(encoded_1) # mask_loc = encoded_masked.index(mask_ind) # encoded_masked[mask_loc] = encoded_1[mask_loc] # self.assertEqual(encoded_masked, encoded_1) def test_special_tokens_mask(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequence_0 = "Encode this." 
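# special_tokens_mask must flag exactly the positions added by the tokenizer, so filtering on it recovers the bare encoding.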
# Testing single inputs encoded_sequence = tokenizer.encode(sequence_0, add_special_tokens=False) encoded_sequence_dict = tokenizer.encode_plus( sequence_0, add_special_tokens=True, return_special_tokens_mask=True # , add_prefix_space=False ) encoded_sequence_w_special = encoded_sequence_dict["input_ids"] special_tokens_mask = encoded_sequence_dict["special_tokens_mask"] self.assertEqual(len(special_tokens_mask), len(encoded_sequence_w_special)) filtered_sequence = [x for i, x in enumerate(encoded_sequence_w_special) if not special_tokens_mask[i]] self.assertEqual(encoded_sequence, filtered_sequence) def test_special_tokens_mask_input_pairs(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequence_0 = "Encode this." sequence_1 = "This one too please." encoded_sequence = tokenizer.encode(sequence_0, add_special_tokens=False) encoded_sequence += tokenizer.encode(sequence_1, add_special_tokens=False) encoded_sequence_dict = tokenizer.encode_plus( sequence_0, sequence_1, add_special_tokens=True, return_special_tokens_mask=True, # add_prefix_space=False, ) encoded_sequence_w_special = encoded_sequence_dict["input_ids"] special_tokens_mask = encoded_sequence_dict["special_tokens_mask"] self.assertEqual(len(special_tokens_mask), len(encoded_sequence_w_special)) filtered_sequence = [ (x if not special_tokens_mask[i] else None) for i, x in enumerate(encoded_sequence_w_special) ] filtered_sequence = [x for x in filtered_sequence if x is not None] self.assertEqual(encoded_sequence, filtered_sequence) def test_right_and_left_padding(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequence = "Sequence" padding_size = 10 # check correct behaviour if no pad_token_id exists and add it eventually self._check_no_pad_token_padding(tokenizer, sequence) padding_idx = tokenizer.pad_token_id # RIGHT PADDING - Check that it correctly pads when a maximum length is specified along with the padding flag set to True tokenizer.padding_side = "right" encoded_sequence = tokenizer.encode(sequence) sequence_length = len(encoded_sequence) padded_sequence = tokenizer.encode( sequence, max_length=sequence_length + padding_size, padding="max_length" ) padded_sequence_length = len(padded_sequence) assert sequence_length + padding_size == padded_sequence_length assert encoded_sequence + [padding_idx] * padding_size == padded_sequence # LEFT PADDING - Check that it correctly pads when a maximum length is specified along with the padding flag set to True tokenizer.padding_side = "left" encoded_sequence = tokenizer.encode(sequence) sequence_length = len(encoded_sequence) padded_sequence = tokenizer.encode( sequence, max_length=sequence_length + padding_size, padding="max_length" ) padded_sequence_length = len(padded_sequence) assert sequence_length + padding_size == padded_sequence_length assert [padding_idx] * padding_size + encoded_sequence == padded_sequence # RIGHT & LEFT PADDING - Check that nothing is done for 'longest' and 'no_padding' encoded_sequence = tokenizer.encode(sequence) sequence_length = len(encoded_sequence) tokenizer.padding_side = "right" padded_sequence_right = tokenizer.encode(sequence, padding=True) padded_sequence_right_length = len(padded_sequence_right) assert sequence_length == padded_sequence_right_length assert encoded_sequence == padded_sequence_right tokenizer.padding_side = "left" padded_sequence_left = 
tokenizer.encode(sequence, padding="longest") padded_sequence_left_length = len(padded_sequence_left) assert sequence_length == padded_sequence_left_length assert encoded_sequence == padded_sequence_left tokenizer.padding_side = "right" padded_sequence_right = tokenizer.encode(sequence) padded_sequence_right_length = len(padded_sequence_right) assert sequence_length == padded_sequence_right_length assert encoded_sequence == padded_sequence_right tokenizer.padding_side = "left" padded_sequence_left = tokenizer.encode(sequence, padding=False) padded_sequence_left_length = len(padded_sequence_left) assert sequence_length == padded_sequence_left_length assert encoded_sequence == padded_sequence_left def test_padding_to_max_length(self): """We keep this test for backward compatibility but it should be removed when `pad_to_max_length` is deprecated""" tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequence = "Sequence" padding_size = 10 # check correct behaviour if no pad_token_id exists and add it eventually self._check_no_pad_token_padding(tokenizer, sequence) padding_idx = tokenizer.pad_token_id # Check that it correctly pads when a maximum length is specified along with the padding flag set to True tokenizer.padding_side = "right" encoded_sequence = tokenizer.encode(sequence) sequence_length = len(encoded_sequence) # FIXME: the next line should be padding(max_length) to avoid warning padded_sequence = tokenizer.encode( sequence, max_length=sequence_length + padding_size, pad_to_max_length=True ) padded_sequence_length = len(padded_sequence) assert sequence_length + padding_size == padded_sequence_length assert encoded_sequence + [padding_idx] * padding_size == padded_sequence # Check that nothing is done when a maximum length is not specified encoded_sequence = tokenizer.encode(sequence) sequence_length = len(encoded_sequence) tokenizer.padding_side = "right" padded_sequence_right = tokenizer.encode(sequence, pad_to_max_length=True) padded_sequence_right_length = len(padded_sequence_right) assert sequence_length == padded_sequence_right_length assert encoded_sequence == padded_sequence_right def test_padding_to_multiple_of(self): tokenizers = self.get_tokenizers() for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if tokenizer.pad_token is None: self.skipTest("No padding token.") else: empty_tokens = tokenizer("", padding=True, pad_to_multiple_of=8) normal_tokens = tokenizer("This is a sample input", padding=True, pad_to_multiple_of=8) for key, value in empty_tokens.items(): self.assertEqual(len(value) % 8, 0, "BatchEncoding.{} is not a multiple of 8".format(key)) for key, value in normal_tokens.items(): self.assertEqual(len(value) % 8, 0, "BatchEncoding.{} is not a multiple of 8".format(key)) normal_tokens = tokenizer("This", pad_to_multiple_of=8) for key, value in normal_tokens.items(): self.assertNotEqual(len(value) % 8, 0, "BatchEncoding.{} is not a multiple of 8".format(key)) # Should also work with truncation normal_tokens = tokenizer("This", padding=True, truncation=True, pad_to_multiple_of=8) for key, value in normal_tokens.items(): self.assertEqual(len(value) % 8, 0, "BatchEncoding.{} is not a multiple of 8".format(key)) # truncation to something which is not a multiple of pad_to_multiple_of raises an error self.assertRaises( ValueError, tokenizer.__call__, "This", padding=True, truncation=True, max_length=12, pad_to_multiple_of=8, ) def 
test_encode_plus_with_padding(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequence = "Sequence" # check correct behaviour if no pad_token_id exists and add it eventually self._check_no_pad_token_padding(tokenizer, sequence) padding_size = 10 padding_idx = tokenizer.pad_token_id token_type_padding_idx = tokenizer.pad_token_type_id encoded_sequence = tokenizer.encode_plus(sequence, return_special_tokens_mask=True) input_ids = encoded_sequence["input_ids"] special_tokens_mask = encoded_sequence["special_tokens_mask"] sequence_length = len(input_ids) # Test 'longest' and 'no_padding' don't do anything tokenizer.padding_side = "right" not_padded_sequence = tokenizer.encode_plus( sequence, padding=True, return_special_tokens_mask=True, ) not_padded_input_ids = not_padded_sequence["input_ids"] not_padded_special_tokens_mask = not_padded_sequence["special_tokens_mask"] not_padded_sequence_length = len(not_padded_input_ids) assert sequence_length == not_padded_sequence_length assert input_ids == not_padded_input_ids assert special_tokens_mask == not_padded_special_tokens_mask not_padded_sequence = tokenizer.encode_plus( sequence, padding=False, return_special_tokens_mask=True, ) not_padded_input_ids = not_padded_sequence["input_ids"] not_padded_special_tokens_mask = not_padded_sequence["special_tokens_mask"] not_padded_sequence_length = len(not_padded_input_ids) assert sequence_length == not_padded_sequence_length assert input_ids == not_padded_input_ids assert special_tokens_mask == not_padded_special_tokens_mask # Test right padding tokenizer.padding_side = "right" right_padded_sequence = tokenizer.encode_plus( sequence, max_length=sequence_length + padding_size, padding="max_length", return_special_tokens_mask=True, ) right_padded_input_ids = right_padded_sequence["input_ids"] right_padded_special_tokens_mask = right_padded_sequence["special_tokens_mask"] right_padded_sequence_length = len(right_padded_input_ids) assert sequence_length + padding_size == right_padded_sequence_length assert input_ids + [padding_idx] * padding_size == right_padded_input_ids assert special_tokens_mask + [1] * padding_size == right_padded_special_tokens_mask # Test left padding tokenizer.padding_side = "left" left_padded_sequence = tokenizer.encode_plus( sequence, max_length=sequence_length + padding_size, padding="max_length", return_special_tokens_mask=True, ) left_padded_input_ids = left_padded_sequence["input_ids"] left_padded_special_tokens_mask = left_padded_sequence["special_tokens_mask"] left_padded_sequence_length = len(left_padded_input_ids) assert sequence_length + padding_size == left_padded_sequence_length assert [padding_idx] * padding_size + input_ids == left_padded_input_ids assert [1] * padding_size + special_tokens_mask == left_padded_special_tokens_mask if "token_type_ids" in tokenizer.model_input_names: token_type_ids = encoded_sequence["token_type_ids"] left_padded_token_type_ids = left_padded_sequence["token_type_ids"] right_padded_token_type_ids = right_padded_sequence["token_type_ids"] assert token_type_ids + [token_type_padding_idx] * padding_size == right_padded_token_type_ids assert [token_type_padding_idx] * padding_size + token_type_ids == left_padded_token_type_ids if "attention_mask" in tokenizer.model_input_names: attention_mask = encoded_sequence["attention_mask"] right_padded_attention_mask = right_padded_sequence["attention_mask"] left_padded_attention_mask = 
left_padded_sequence["attention_mask"] assert attention_mask + [0] * padding_size == right_padded_attention_mask assert [0] * padding_size + attention_mask == left_padded_attention_mask def test_separate_tokenizers(self): # This tests that tokenizers don't impact others. Unfortunately the case where it fails is when # we're loading an S3 configuration from a pre-trained identifier, and we have no way of testing those today. tokenizer = self.get_tokenizer(random_argument=True) assert tokenizer.init_kwargs["random_argument"] is True new_tokenizer = self.get_tokenizer(random_argument=False) assert tokenizer.init_kwargs["random_argument"] is True assert new_tokenizer.init_kwargs["random_argument"] is False def test_get_vocab(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): vocab_dict = tokenizer.get_vocab() self.assertIsInstance(vocab_dict, dict) self.assertGreaterEqual(len(tokenizer), len(vocab_dict)) vocab = [tokenizer.convert_ids_to_tokens(i) for i in range(len(tokenizer))] self.assertEqual(len(vocab), len(tokenizer)) tokenizer.add_tokens(["asdfasdfasdfasdf"]) vocab = [tokenizer.convert_ids_to_tokens(i) for i in range(len(tokenizer))] self.assertEqual(len(vocab), len(tokenizer)) def test_conversion_reversible(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): vocab = tokenizer.get_vocab() for word, ind in vocab.items(): if word == tokenizer.unk_token: continue self.assertEqual(tokenizer.convert_tokens_to_ids(word), ind) self.assertEqual(tokenizer.convert_ids_to_tokens(ind), word) def test_call(self): # Tests that all call wrap to encode_plus and batch_encode_plus tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequences = [ "Testing batch encode plus", "Testing batch encode plus with different sequence lengths", "Testing batch encode plus with different sequence lengths correctly pads", ] # Test not batched encoded_sequences_1 = tokenizer.encode_plus(sequences[0]) encoded_sequences_2 = tokenizer(sequences[0]) self.assertEqual(encoded_sequences_1, encoded_sequences_2) # Test not batched pairs encoded_sequences_1 = tokenizer.encode_plus(sequences[0], sequences[1]) encoded_sequences_2 = tokenizer(sequences[0], sequences[1]) self.assertEqual(encoded_sequences_1, encoded_sequences_2) # Test batched encoded_sequences_1 = tokenizer.batch_encode_plus(sequences) encoded_sequences_2 = tokenizer(sequences) self.assertEqual(encoded_sequences_1, encoded_sequences_2) # Test batched pairs encoded_sequences_1 = tokenizer.batch_encode_plus(list(zip(sequences, sequences))) encoded_sequences_2 = tokenizer(sequences, sequences) self.assertEqual(encoded_sequences_1, encoded_sequences_2) def test_batch_encode_plus_batch_sequence_length(self): # Tests that all encoded values have the correct size tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequences = [ "Testing batch encode plus", "Testing batch encode plus with different sequence lengths", "Testing batch encode plus with different sequence lengths correctly pads", ] encoded_sequences = [tokenizer.encode_plus(sequence) for sequence in sequences] encoded_sequences_batch = tokenizer.batch_encode_plus(sequences, padding=False) self.assertListEqual( encoded_sequences, 
self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch) ) maximum_length = len( max([encoded_sequence["input_ids"] for encoded_sequence in encoded_sequences], key=len) ) # check correct behaviour if no pad_token_id exists and add it eventually self._check_no_pad_token_padding(tokenizer, sequences) encoded_sequences_padded = [ tokenizer.encode_plus(sequence, max_length=maximum_length, padding="max_length") for sequence in sequences ] encoded_sequences_batch_padded = tokenizer.batch_encode_plus(sequences, padding=True) self.assertListEqual( encoded_sequences_padded, self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch_padded), ) # check 'longest' is insensitive to a max length encoded_sequences_batch_padded_1 = tokenizer.batch_encode_plus(sequences, padding=True) encoded_sequences_batch_padded_2 = tokenizer.batch_encode_plus( sequences, max_length=maximum_length + 10, padding="longest" ) for key in encoded_sequences_batch_padded_1.keys(): self.assertListEqual( encoded_sequences_batch_padded_1[key], encoded_sequences_batch_padded_2[key], ) # check 'no_padding' is insensitive to a max length encoded_sequences_batch_padded_1 = tokenizer.batch_encode_plus(sequences, padding=False) encoded_sequences_batch_padded_2 = tokenizer.batch_encode_plus( sequences, max_length=maximum_length + 10, padding=False ) for key in encoded_sequences_batch_padded_1.keys(): self.assertListEqual( encoded_sequences_batch_padded_1[key], encoded_sequences_batch_padded_2[key], ) @require_tokenizers def test_added_token_serializable(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): new_token = AddedToken("new_token", lstrip=True) tokenizer.add_special_tokens({"additional_special_tokens": [new_token]}) with tempfile.TemporaryDirectory() as tmp_dir_name: tokenizer.save_pretrained(tmp_dir_name) tokenizer.from_pretrained(tmp_dir_name) def test_batch_encode_plus_padding(self): # Test that padded sequences are equivalent between batch_encode_plus and encode_plus # Right padding tests tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequences = [ "Testing batch encode plus", "Testing batch encode plus with different sequence lengths", "Testing batch encode plus with different sequence lengths correctly pads", ] max_length = 100 # check correct behaviour if no pad_token_id exists and add it eventually self._check_no_pad_token_padding(tokenizer, sequences) encoded_sequences = [ tokenizer.encode_plus(sequence, max_length=max_length, padding="max_length") for sequence in sequences ] encoded_sequences_batch = tokenizer.batch_encode_plus( sequences, max_length=max_length, padding="max_length" ) self.assertListEqual( encoded_sequences, self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch) ) # Left padding tests tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): tokenizer.padding_side = "left" sequences = [ "Testing batch encode plus", "Testing batch encode plus with different sequence lengths", "Testing batch encode plus with different sequence lengths correctly pads", ] max_length = 100 # check correct behaviour if no pad_token_id exists and add it eventually self._check_no_pad_token_padding(tokenizer, sequences) encoded_sequences = [ tokenizer.encode_plus(sequence, max_length=max_length, 
padding="max_length") for sequence in sequences ] encoded_sequences_batch = tokenizer.batch_encode_plus( sequences, max_length=max_length, padding="max_length" ) self.assertListEqual( encoded_sequences, self.convert_batch_encode_plus_format_to_encode_plus(encoded_sequences_batch) ) def test_pretokenized_inputs(self): # Test when inputs are pretokenized tokenizers = self.get_tokenizers(do_lower_case=False) # , add_prefix_space=True) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if hasattr(tokenizer, "add_prefix_space") and not tokenizer.add_prefix_space: continue # Prepare a sequence from our tokenizer vocabulary sequence, ids = self.get_clean_sequence(tokenizer, with_prefix_space=True, max_length=20) # sequence = " " + sequence # To be sure the byte-level tokenizers are feeling good token_sequence = sequence.split() # sequence_no_prefix_space = sequence.strip() # Test encode for pretokenized inputs output = tokenizer.encode(token_sequence, is_split_into_words=True, add_special_tokens=False) output_sequence = tokenizer.encode(sequence, add_special_tokens=False) self.assertEqual(output, output_sequence) output = tokenizer.encode(token_sequence, is_split_into_words=True, add_special_tokens=True) output_sequence = tokenizer.encode(sequence, add_special_tokens=True) self.assertEqual(output, output_sequence) # Test encode_plus for pretokenized inputs output = tokenizer.encode_plus(token_sequence, is_split_into_words=True, add_special_tokens=False) output_sequence = tokenizer.encode_plus(sequence, add_special_tokens=False) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) output = tokenizer.encode_plus(token_sequence, is_split_into_words=True, add_special_tokens=True) output_sequence = tokenizer.encode_plus(sequence, add_special_tokens=True) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) # Test batch_encode_plus for pretokenized inputs sequence_batch = [sequence.strip()] * 2 + [sequence.strip() + " " + sequence.strip()] token_sequence_batch = [s.split() for s in sequence_batch] sequence_batch_cleaned_up_spaces = [" " + " ".join(s) for s in token_sequence_batch] output = tokenizer.batch_encode_plus( token_sequence_batch, is_split_into_words=True, add_special_tokens=False ) output_sequence = tokenizer.batch_encode_plus( sequence_batch_cleaned_up_spaces, add_special_tokens=False ) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) output = tokenizer.batch_encode_plus( token_sequence_batch, is_split_into_words=True, add_special_tokens=True ) output_sequence = tokenizer.batch_encode_plus( sequence_batch_cleaned_up_spaces, add_special_tokens=True ) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) # Test encode for pretokenized inputs pairs output = tokenizer.encode( token_sequence, token_sequence, is_split_into_words=True, add_special_tokens=False ) output_sequence = tokenizer.encode(sequence, sequence, add_special_tokens=False) self.assertEqual(output, output_sequence) output = tokenizer.encode( token_sequence, token_sequence, is_split_into_words=True, add_special_tokens=True ) output_sequence = tokenizer.encode(sequence, sequence, add_special_tokens=True) self.assertEqual(output, output_sequence) # Test encode_plus for pretokenized inputs pairs output = tokenizer.encode_plus( token_sequence, token_sequence, is_split_into_words=True, add_special_tokens=False ) output_sequence = tokenizer.encode_plus(sequence, sequence, add_special_tokens=False) for key 
in output.keys(): self.assertEqual(output[key], output_sequence[key]) output = tokenizer.encode_plus( token_sequence, token_sequence, is_split_into_words=True, add_special_tokens=True ) output_sequence = tokenizer.encode_plus(sequence, sequence, add_special_tokens=True) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) # Test batch_encode_plus for pretokenized inputs pairs sequence_pair_batch = [(sequence.strip(), sequence.strip())] * 2 + [ (sequence.strip() + " " + sequence.strip(), sequence.strip()) ] token_sequence_pair_batch = [tuple(s.split() for s in pair) for pair in sequence_pair_batch] sequence_pair_batch_cleaned_up_spaces = [ tuple(" " + " ".join(s) for s in pair) for pair in token_sequence_pair_batch ] output = tokenizer.batch_encode_plus( token_sequence_pair_batch, is_split_into_words=True, add_special_tokens=False ) output_sequence = tokenizer.batch_encode_plus( sequence_pair_batch_cleaned_up_spaces, add_special_tokens=False ) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) output = tokenizer.batch_encode_plus( token_sequence_pair_batch, is_split_into_words=True, add_special_tokens=True ) output_sequence = tokenizer.batch_encode_plus( sequence_pair_batch_cleaned_up_spaces, add_special_tokens=True ) for key in output.keys(): self.assertEqual(output[key], output_sequence[key]) def test_prepare_for_model(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): string_sequence = "Testing the prepare_for_model method." ids = tokenizer.encode(string_sequence, add_special_tokens=False) prepared_input_dict = tokenizer.prepare_for_model(ids, add_special_tokens=True) input_dict = tokenizer.encode_plus(string_sequence, add_special_tokens=True) self.assertEqual(input_dict, prepared_input_dict) def test_batch_encode_plus_overflowing_tokens(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: string_sequences = ["Testing the prepare_for_model method.", "Test"] if tokenizer.pad_token is None: tokenizer.add_special_tokens({"pad_token": "[PAD]"}) tokenizer.batch_encode_plus( string_sequences, return_overflowing_tokens=True, truncation=True, padding=True, max_length=3 ) @is_pt_tf_cross_test def test_batch_encode_plus_tensors(self): tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): sequences = [ "Testing batch encode plus", "Testing batch encode plus with different sequence lengths", "Testing batch encode plus with different sequence lengths correctly pads", ] # A Tensor cannot be build by sequences which are not the same size self.assertRaises(ValueError, tokenizer.batch_encode_plus, sequences, return_tensors="pt") self.assertRaises(ValueError, tokenizer.batch_encode_plus, sequences, return_tensors="tf") if tokenizer.pad_token_id is None: self.assertRaises( ValueError, tokenizer.batch_encode_plus, sequences, padding=True, return_tensors="pt", ) self.assertRaises( ValueError, tokenizer.batch_encode_plus, sequences, padding="longest", return_tensors="tf", ) else: pytorch_tensor = tokenizer.batch_encode_plus(sequences, padding=True, return_tensors="pt") tensorflow_tensor = tokenizer.batch_encode_plus(sequences, padding="longest", return_tensors="tf") encoded_sequences = tokenizer.batch_encode_plus(sequences, padding=True) for key in encoded_sequences.keys(): pytorch_value = pytorch_tensor[key].tolist() tensorflow_value = 
tensorflow_tensor[key].numpy().tolist() encoded_value = encoded_sequences[key] self.assertEqual(pytorch_value, tensorflow_value, encoded_value) def _check_no_pad_token_padding(self, tokenizer, sequences): # if tokenizer does not have pad_token_id, an error should be thrown if tokenizer.pad_token_id is None: with self.assertRaises(ValueError): if isinstance(sequences, list): tokenizer.batch_encode_plus(sequences, padding="longest") else: tokenizer.encode_plus(sequences, padding=True) # add pad_token_id to pass subsequent tests tokenizer.add_special_tokens({"pad_token": "<PAD>"}) @require_torch @slow def test_torch_encode_plus_sent_to_model(self): import torch from transformers import MODEL_MAPPING, TOKENIZER_MAPPING MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(MODEL_MAPPING, TOKENIZER_MAPPING) tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING: return config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__] config = config_class() if config.is_encoder_decoder or config.pad_token_id is None: return model = model_class(config) # Make sure the model contains at least the full vocabulary size in its embedding matrix is_using_common_embeddings = hasattr(model.get_input_embeddings(), "weight") assert ( (model.get_input_embeddings().weight.shape[0] >= len(tokenizer)) if is_using_common_embeddings else True ) # Build sequence first_ten_tokens = list(tokenizer.get_vocab().keys())[:10] sequence = " ".join(first_ten_tokens) encoded_sequence = tokenizer.encode_plus(sequence, return_tensors="pt") # Ensure that the BatchEncoding.to() method works. encoded_sequence.to(model.device) batch_encoded_sequence = tokenizer.batch_encode_plus([sequence, sequence], return_tensors="pt") # This should not fail with torch.no_grad(): # saves some time model(**encoded_sequence) model(**batch_encoded_sequence) # if self.test_rust_tokenizer: # fast_tokenizer = self.get_rust_tokenizer() # encoded_sequence_fast = fast_tokenizer.encode_plus(sequence, return_tensors="pt") # batch_encoded_sequence_fast = fast_tokenizer.batch_encode_plus([sequence, sequence], return_tensors="pt") # # This should not fail # model(**encoded_sequence_fast) # model(**batch_encoded_sequence_fast) @require_tf @slow def test_tf_encode_plus_sent_to_model(self): from transformers import TF_MODEL_MAPPING, TOKENIZER_MAPPING MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(TF_MODEL_MAPPING, TOKENIZER_MAPPING) tokenizers = self.get_tokenizers(do_lower_case=False) for tokenizer in tokenizers: with self.subTest(f"{tokenizer.__class__.__name__}"): if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING: return config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__] config = config_class() if config.is_encoder_decoder or config.pad_token_id is None: return model = model_class(config) # Make sure the model contains at least the full vocabulary size in its embedding matrix assert model.config.vocab_size >= len(tokenizer) # Build sequence first_ten_tokens = list(tokenizer.get_vocab().keys())[:10] sequence = " ".join(first_ten_tokens) encoded_sequence = tokenizer.encode_plus(sequence, return_tensors="tf") batch_encoded_sequence = tokenizer.batch_encode_plus([sequence, sequence], return_tensors="tf") # This should not fail model(encoded_sequence) model(batch_encoded_sequence) # TODO: Check if require_torch is the best to test for numpy here ... 
Maybe move to require_flax when available @require_torch @slow def test_np_encode_plus_sent_to_model(self): from transformers import MODEL_MAPPING, TOKENIZER_MAPPING MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(MODEL_MAPPING, TOKENIZER_MAPPING) tokenizer = self.get_tokenizer() if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING: return config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__] config = config_class() if config.is_encoder_decoder or config.pad_token_id is None: return # Build sequence first_ten_tokens = list(tokenizer.get_vocab().keys())[:10] sequence = " ".join(first_ten_tokens) encoded_sequence = tokenizer.encode_plus(sequence, return_tensors="np") batch_encoded_sequence = tokenizer.batch_encode_plus([sequence, sequence], return_tensors="np") # TODO: add forward through JAX/Flax when PR is merged # This is currently here to make flake8 happy ! if encoded_sequence is None: raise ValueError("Cannot convert list to numpy tensor on encode_plus()") if batch_encoded_sequence is None: raise ValueError("Cannot convert list to numpy tensor on batch_encode_plus()") if self.test_rust_tokenizer: fast_tokenizer = self.get_rust_tokenizer() encoded_sequence_fast = fast_tokenizer.encode_plus(sequence, return_tensors="np") batch_encoded_sequence_fast = fast_tokenizer.batch_encode_plus([sequence, sequence], return_tensors="np") # TODO: add forward through JAX/Flax when PR is merged # This is currently here to make flake8 happy ! if encoded_sequence_fast is None: raise ValueError("Cannot convert list to numpy tensor on encode_plus() (fast)") if batch_encoded_sequence_fast is None: raise ValueError("Cannot convert list to numpy tensor on batch_encode_plus() (fast)") @require_torch def test_prepare_seq2seq_batch(self): if not self.test_seq2seq: return tokenizer = self.get_tokenizer() # Longer text that will definitely require truncation. 
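# English source sentences paired with Romanian reference translations.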
src_text = [ " UN Chief Says There Is No Military Solution in Syria", " Secretary-General Ban Ki-moon says his response to Russia's stepped up military support for Syria is that 'there is no military solution' to the nearly five-year conflict and more weapons will only worsen the violence and misery for millions of people.", ] tgt_text = [ "Şeful ONU declară că nu există o soluţie militară în Siria", "Secretarul General Ban Ki-moon declară că răspunsul său la intensificarea sprijinului militar al Rusiei " 'pentru Siria este că "nu există o soluţie militară" la conflictul de aproape cinci ani şi că noi arme nu ' "vor face decât să înrăutăţească violenţele şi mizeria pentru milioane de oameni.", ] try: batch = tokenizer.prepare_seq2seq_batch( src_texts=src_text, tgt_texts=tgt_text, max_length=3, max_target_length=10, return_tensors="pt", src_lang="en_XX", # this should be ignored (for all but mbart) but not cause an error ) except NotImplementedError: return self.assertEqual(batch.input_ids.shape[1], 3) self.assertEqual(batch.labels.shape[1], 10) # max_target_length will default to max_length if not specified batch = tokenizer.prepare_seq2seq_batch(src_text, tgt_texts=tgt_text, max_length=3, return_tensors="pt") self.assertEqual(batch.input_ids.shape[1], 3) self.assertEqual(batch.labels.shape[1], 3) batch_encoder_only = tokenizer.prepare_seq2seq_batch( src_texts=src_text, max_length=3, max_target_length=10, return_tensors="pt" ) self.assertEqual(batch_encoder_only.input_ids.shape[1], 3) self.assertEqual(batch_encoder_only.attention_mask.shape[1], 3) self.assertNotIn("decoder_input_ids", batch_encoder_only) def test_is_fast(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) # Check is_fast is set correctly self.assertFalse(tokenizer_p.is_fast) self.assertTrue(tokenizer_r.is_fast) def test_fast_only_inputs(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) # Ensure None raise an error self.assertRaises(TypeError, tokenizer_r.tokenize, None) self.assertRaises(TypeError, tokenizer_r.encode, None) self.assertRaises(TypeError, tokenizer_r.encode_plus, None) self.assertRaises(TypeError, tokenizer_r.batch_encode_plus, None) def test_alignement_methods(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) words = ["Wonderful", "no", "inspiration", "example", "with", "subtoken"] text = " ".join(words) batch_size = 3 encoding = tokenizer_r.encode_plus(text, add_special_tokens=False) batch_encoding = tokenizer_r.batch_encode_plus([text] * batch_size, add_special_tokens=False) num_tokens = len(encoding["input_ids"]) last_word_index = len(words) - 1 last_token_index = num_tokens - 1 last_batch_index = batch_size - 1 last_char_index = len(text) - 1 # words, tokens self.assertEqual(len(encoding.words(0)), num_tokens) self.assertEqual(max(encoding.words(0)), last_word_index) self.assertEqual(min(encoding.words(0)), 0) self.assertEqual(len(batch_encoding.words(last_batch_index)), 
num_tokens) self.assertEqual(max(batch_encoding.words(last_batch_index)), last_word_index) self.assertEqual(min(batch_encoding.words(last_batch_index)), 0) self.assertEqual(len(encoding.tokens(0)), num_tokens) # Assert token_to_word self.assertEqual(encoding.token_to_word(0), 0) self.assertEqual(encoding.token_to_word(0, 0), 0) self.assertEqual(encoding.token_to_word(last_token_index), last_word_index) self.assertEqual(encoding.token_to_word(0, last_token_index), last_word_index) self.assertEqual(batch_encoding.token_to_word(1, 0), 0) self.assertEqual(batch_encoding.token_to_word(0, last_token_index), last_word_index) self.assertEqual(batch_encoding.token_to_word(last_batch_index, last_token_index), last_word_index) # Assert word_to_tokens self.assertEqual(encoding.word_to_tokens(0).start, 0) self.assertEqual(encoding.word_to_tokens(0, 0).start, 0) self.assertEqual(encoding.word_to_tokens(last_word_index).end, last_token_index + 1) self.assertEqual(encoding.word_to_tokens(0, last_word_index).end, last_token_index + 1) self.assertEqual(batch_encoding.word_to_tokens(1, 0).start, 0) self.assertEqual(batch_encoding.word_to_tokens(0, last_word_index).end, last_token_index + 1) self.assertEqual( batch_encoding.word_to_tokens(last_batch_index, last_word_index).end, last_token_index + 1 ) # Assert token_to_chars self.assertEqual(encoding.token_to_chars(0).start, 0) self.assertEqual(encoding.token_to_chars(0, 0).start, 0) self.assertEqual(encoding.token_to_chars(last_token_index).end, last_char_index + 1) self.assertEqual(encoding.token_to_chars(0, last_token_index).end, last_char_index + 1) self.assertEqual(batch_encoding.token_to_chars(1, 0).start, 0) self.assertEqual(batch_encoding.token_to_chars(0, last_token_index).end, last_char_index + 1) self.assertEqual( batch_encoding.token_to_chars(last_batch_index, last_token_index).end, last_char_index + 1 ) # Assert char_to_token self.assertEqual(encoding.char_to_token(0), 0) self.assertEqual(encoding.char_to_token(0, 0), 0) self.assertEqual(encoding.char_to_token(last_char_index), last_token_index) self.assertEqual(encoding.char_to_token(0, last_char_index), last_token_index) self.assertEqual(batch_encoding.char_to_token(1, 0), 0) self.assertEqual(batch_encoding.char_to_token(0, last_char_index), last_token_index) self.assertEqual(batch_encoding.char_to_token(last_batch_index, last_char_index), last_token_index) # Assert char_to_word self.assertEqual(encoding.char_to_word(0), 0) self.assertEqual(encoding.char_to_word(0, 0), 0) self.assertEqual(encoding.char_to_word(last_char_index), last_word_index) self.assertEqual(encoding.char_to_word(0, last_char_index), last_word_index) self.assertEqual(batch_encoding.char_to_word(1, 0), 0) self.assertEqual(batch_encoding.char_to_word(0, last_char_index), last_word_index) self.assertEqual(batch_encoding.char_to_word(last_batch_index, last_char_index), last_word_index) # Assert word_to_chars self.assertEqual(encoding.word_to_chars(0).start, 0) self.assertEqual(encoding.word_to_chars(0, 0).start, 0) self.assertEqual(encoding.word_to_chars(last_word_index).end, last_char_index + 1) self.assertEqual(encoding.word_to_chars(0, last_word_index).end, last_char_index + 1) self.assertEqual(batch_encoding.word_to_chars(1, 0).start, 0) self.assertEqual(batch_encoding.word_to_chars(0, last_word_index).end, last_char_index + 1) self.assertEqual( batch_encoding.word_to_chars(last_batch_index, last_word_index).end, last_char_index + 1 ) # Assert token_to_sequence self.assertEqual(encoding.token_to_sequence(num_tokens // 2), 0) 
self.assertEqual(encoding.token_to_sequence(0, num_tokens // 2), 0) self.assertEqual(batch_encoding.token_to_sequence(1, num_tokens // 2), 0) self.assertEqual(batch_encoding.token_to_sequence(0, num_tokens // 2), 0) self.assertEqual(batch_encoding.token_to_sequence(last_batch_index, num_tokens // 2), 0) # Pair of input sequences words = ["Wonderful", "no", "inspiration", "example", "with", "subtoken"] text = " ".join(words) pair_words = ["Amazing", "example", "full", "of", "inspiration"] pair_text = " ".join(pair_words) batch_size = 3 index_word_in_first_seq = words.index("inspiration") index_word_in_pair_seq = pair_words.index("inspiration") index_char_in_first_seq = text.find("inspiration") index_char_in_pair_seq = pair_text.find("inspiration") pair_encoding = tokenizer_r.encode_plus(text, pair_text, add_special_tokens=False) pair_batch_encoding = tokenizer_r.batch_encode_plus( [(text, pair_text)] * batch_size, add_special_tokens=False ) num_tokens = len(encoding["input_ids"]) last_word_index = len(words) - 1 last_token_index = num_tokens - 1 last_batch_index = batch_size - 1 last_char_index = len(text) - 1 # Assert word_to_tokens self.assertNotEqual( pair_encoding.word_to_tokens(index_word_in_first_seq, sequence_index=0).start, pair_encoding.word_to_tokens(index_word_in_pair_seq, sequence_index=1).start, ) self.assertEqual( pair_encoding["input_ids"][ pair_encoding.word_to_tokens(index_word_in_first_seq, sequence_index=0).start ], pair_encoding["input_ids"][ pair_encoding.word_to_tokens(index_word_in_pair_seq, sequence_index=1).start ], ) self.assertNotEqual( pair_batch_encoding.word_to_tokens(1, index_word_in_first_seq, sequence_index=0).start, pair_batch_encoding.word_to_tokens(1, index_word_in_pair_seq, sequence_index=1).start, ) self.assertEqual( pair_batch_encoding["input_ids"][1][ pair_batch_encoding.word_to_tokens(1, index_word_in_first_seq, sequence_index=0).start ], pair_batch_encoding["input_ids"][1][ pair_batch_encoding.word_to_tokens(1, index_word_in_pair_seq, sequence_index=1).start ], ) # Assert char_to_token self.assertNotEqual( pair_encoding.char_to_token(index_char_in_first_seq, sequence_index=0), pair_encoding.char_to_token(index_char_in_pair_seq, sequence_index=1), ) self.assertEqual( pair_encoding["input_ids"][pair_encoding.char_to_token(index_char_in_first_seq, sequence_index=0)], pair_encoding["input_ids"][pair_encoding.char_to_token(index_char_in_pair_seq, sequence_index=1)], ) self.assertNotEqual( pair_batch_encoding.char_to_token(1, index_char_in_first_seq, sequence_index=0), pair_batch_encoding.char_to_token(1, index_char_in_pair_seq, sequence_index=1), ) self.assertEqual( pair_batch_encoding["input_ids"][1][ pair_batch_encoding.char_to_token(1, index_char_in_first_seq, sequence_index=0) ], pair_batch_encoding["input_ids"][1][ pair_batch_encoding.char_to_token(1, index_char_in_pair_seq, sequence_index=1) ], ) # Assert char_to_word self.assertNotEqual( pair_encoding.char_to_word(index_char_in_first_seq, sequence_index=0), pair_encoding.char_to_word(index_char_in_pair_seq, sequence_index=1), ) self.assertEqual( words[pair_encoding.char_to_word(index_char_in_first_seq, sequence_index=0)], pair_words[pair_encoding.char_to_word(index_char_in_pair_seq, sequence_index=1)], ) self.assertNotEqual( pair_batch_encoding.char_to_word(1, index_char_in_first_seq, sequence_index=0), pair_batch_encoding.char_to_word(1, index_char_in_pair_seq, sequence_index=1), ) self.assertEqual( words[pair_batch_encoding.char_to_word(1, index_char_in_first_seq, sequence_index=0)], 
pair_words[pair_batch_encoding.char_to_word(1, index_char_in_pair_seq, sequence_index=1)], ) # Assert word_to_chars self.assertNotEqual( pair_encoding.word_to_chars(index_word_in_first_seq, sequence_index=0).start, pair_encoding.word_to_chars(index_word_in_pair_seq, sequence_index=1).start, ) self.assertEqual( text[pair_encoding.word_to_chars(index_word_in_first_seq, sequence_index=0).start], pair_text[pair_encoding.word_to_chars(index_word_in_pair_seq, sequence_index=1).start], ) self.assertNotEqual( pair_batch_encoding.word_to_chars(1, index_word_in_first_seq, sequence_index=0).start, pair_batch_encoding.word_to_chars(1, index_word_in_pair_seq, sequence_index=1).start, ) self.assertEqual( text[pair_batch_encoding.word_to_chars(1, index_word_in_first_seq, sequence_index=0).start], pair_text[pair_batch_encoding.word_to_chars(1, index_word_in_pair_seq, sequence_index=1).start], ) # Assert token_to_sequence pair_encoding = tokenizer_r.encode_plus(text, pair_text, add_special_tokens=True) pair_sequence_ids = [ pair_encoding.token_to_sequence(i) for i in range(len(pair_encoding["input_ids"])) ] self.assertIn(0, pair_sequence_ids) self.assertIn(1, pair_sequence_ids) if tokenizer_r.num_special_tokens_to_add(pair=True): self.assertIn(None, pair_sequence_ids) pair_batch_encoding = tokenizer_r.batch_encode_plus( [(text, pair_text)] * batch_size, add_special_tokens=True ) pair_batch_sequence_ids = [ pair_batch_encoding.token_to_sequence(1, i) for i in range(len(pair_batch_encoding["input_ids"][0])) ] self.assertIn(0, pair_batch_sequence_ids) self.assertIn(1, pair_batch_sequence_ids) if tokenizer_r.num_special_tokens_to_add(pair=True): self.assertIn(None, pair_batch_sequence_ids) def test_tokenization_python_rust_equals(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) # Ensure basic input match input_p = tokenizer_p.encode_plus(self._data) input_r = tokenizer_r.encode_plus(self._data) for key in filter(lambda x: x in ["input_ids", "token_type_ids", "attention_mask"], input_p.keys()): self.assertSequenceEqual(input_p[key], input_r[key]) input_pairs_p = tokenizer_p.encode_plus(self._data, self._data) input_pairs_r = tokenizer_r.encode_plus(self._data, self._data) for key in filter(lambda x: x in ["input_ids", "token_type_ids", "attention_mask"], input_p.keys()): self.assertSequenceEqual(input_pairs_p[key], input_pairs_r[key]) # Ensure truncation match input_p = tokenizer_p.encode_plus(self._data, max_length=512, truncation=True) input_r = tokenizer_r.encode_plus(self._data, max_length=512, truncation=True) for key in filter(lambda x: x in ["input_ids", "token_type_ids", "attention_mask"], input_p.keys()): self.assertSequenceEqual(input_p[key], input_r[key]) # Ensure truncation with stride match input_p = tokenizer_p.encode_plus( self._data, max_length=512, truncation=True, stride=3, return_overflowing_tokens=True ) input_r = tokenizer_r.encode_plus( self._data, max_length=512, truncation=True, stride=3, return_overflowing_tokens=True ) for key in filter(lambda x: x in ["input_ids", "token_type_ids", "attention_mask"], input_p.keys()): self.assertSequenceEqual(input_p[key], input_r[key][0]) def test_num_special_tokens_to_add_equal(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} 
({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) # Check we have the same number of added_tokens for both pair and non-pair inputs. self.assertEqual( tokenizer_r.num_special_tokens_to_add(False), tokenizer_p.num_special_tokens_to_add(False) ) self.assertEqual( tokenizer_r.num_special_tokens_to_add(True), tokenizer_p.num_special_tokens_to_add(True) ) def test_max_length_equal(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) # Check we have the correct max_length for both pair and non-pair inputs. self.assertEqual(tokenizer_r.max_len_single_sentence, tokenizer_p.max_len_single_sentence) self.assertEqual(tokenizer_r.max_len_sentences_pair, tokenizer_p.max_len_sentences_pair) def test_special_tokens_map_equal(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) # Assert the set of special tokens match. self.assertSequenceEqual( tokenizer_p.special_tokens_map.items(), tokenizer_r.special_tokens_map.items(), ) def test_add_tokens(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) vocab_size = len(tokenizer_r) self.assertEqual(tokenizer_r.add_tokens(""), 0) self.assertEqual(tokenizer_r.add_tokens("testoken"), 1) self.assertEqual(tokenizer_r.add_tokens(["testoken1", "testtoken2"]), 2) self.assertEqual(len(tokenizer_r), vocab_size + 3) self.assertEqual(tokenizer_r.add_special_tokens({}), 0) self.assertEqual(tokenizer_r.add_special_tokens({"bos_token": "[BOS]", "eos_token": "[EOS]"}), 2) self.assertRaises( AssertionError, tokenizer_r.add_special_tokens, {"additional_special_tokens": "<testtoken1>"} ) self.assertEqual(tokenizer_r.add_special_tokens({"additional_special_tokens": ["<testtoken2>"]}), 1) self.assertEqual( tokenizer_r.add_special_tokens({"additional_special_tokens": ["<testtoken3>", "<testtoken4>"]}), 2 ) self.assertEqual(len(tokenizer_r), vocab_size + 8) def test_offsets_mapping(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) text = "Wonderful no inspiration example with subtoken" pair = "Along with an awesome pair" # No pair tokens_with_offsets = tokenizer_r.encode_plus( text, return_special_tokens_mask=True, return_offsets_mapping=True, add_special_tokens=True ) added_tokens = tokenizer_r.num_special_tokens_to_add(False) offsets = tokens_with_offsets["offset_mapping"] # Assert there is the same number of tokens and offsets self.assertEqual(len(offsets), len(tokens_with_offsets["input_ids"])) # Assert there is online added_tokens special_tokens self.assertEqual(sum(tokens_with_offsets["special_tokens_mask"]), added_tokens) # 
Pairs tokens_with_offsets = tokenizer_r.encode_plus( text, pair, return_special_tokens_mask=True, return_offsets_mapping=True, add_special_tokens=True ) added_tokens = tokenizer_r.num_special_tokens_to_add(True) offsets = tokens_with_offsets["offset_mapping"] # Assert there is the same number of tokens and offsets self.assertEqual(len(offsets), len(tokens_with_offsets["input_ids"])) # Assert there is online added_tokens special_tokens self.assertEqual(sum(tokens_with_offsets["special_tokens_mask"]), added_tokens) def test_batch_encode_dynamic_overflowing(self): """ When calling batch_encode with multiple sequence it can returns different number of overflowing encoding for each sequence: [ Sequence 1: [Encoding 1, Encoding 2], Sequence 2: [Encoding 1], Sequence 3: [Encoding 1, Encoding 2, ... Encoding N] ] This needs to be padded so that it can represented as a tensor """ for tokenizer, pretrained_name, kwargs in self.tokenizers_list: tokenizer = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) with self.subTest( "{} ({}, {})".format(tokenizer.__class__.__name__, pretrained_name, tokenizer.__class__.__name__) ): if is_torch_available(): returned_tensor = "pt" elif is_tf_available(): returned_tensor = "tf" else: returned_tensor = "jax" if not tokenizer.pad_token or tokenizer.pad_token_id < 0: return tokens = tokenizer.encode_plus( "HuggingFace is solving NLP one commit at a time", max_length=6, padding=True, truncation=True, return_tensors=returned_tensor, return_overflowing_tokens=True, ) for key in filter(lambda x: "overflow_to_sample_mapping" not in x, tokens.keys()): self.assertEqual(len(tokens[key].shape), 2) # Mono sample tokens = tokenizer.batch_encode_plus( ["HuggingFace is solving NLP one commit at a time"], max_length=6, padding=True, truncation="only_first", return_tensors=returned_tensor, return_overflowing_tokens=True, ) for key in filter(lambda x: "overflow_to_sample_mapping" not in x, tokens.keys()): self.assertEqual(len(tokens[key].shape), 2) self.assertEqual(tokens[key].shape[-1], 6) # Multi sample tokens = tokenizer.batch_encode_plus( ["HuggingFace is solving NLP one commit at a time", "Very tiny input"], max_length=6, padding=True, truncation="only_first", return_tensors=returned_tensor, return_overflowing_tokens=True, ) for key in filter(lambda x: "overflow_to_sample_mapping" not in x, tokens.keys()): self.assertEqual(len(tokens[key].shape), 2) self.assertEqual(tokens[key].shape[-1], 6) def test_compare_pretokenized_inputs(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) if hasattr(tokenizer_p, "add_prefix_space") and not tokenizer_p.add_prefix_space: continue # Too hard to test for now # Input string pretokenized_input_simple = "This is a sample input".split() pretokenized_input_pair = "This is a sample pair".split() # Test encode for pretokenized inputs output_r = tokenizer_r.encode( pretokenized_input_simple, is_split_into_words=True, add_special_tokens=False ) output_p = tokenizer_p.encode( pretokenized_input_simple, is_split_into_words=True, add_special_tokens=False ) self.assertEqual(output_p, output_r) kwargs = { "is_split_into_words": True, # "return_token_type_ids": True, # Use the defaults for each tokenizers # "return_attention_mask": True, # Use the defaults for each tokenizers 
"return_overflowing_tokens": False, "return_special_tokens_mask": True, "return_offsets_mapping": False, # Not implemented in python tokenizers # "add_special_tokens": False, } batch_kwargs = { "is_split_into_words": True, # "return_token_type_ids": True, # Use the defaults for each tokenizers # "return_attention_mask": True, # Use the defaults for each tokenizers "return_overflowing_tokens": False, "return_special_tokens_mask": True, "return_offsets_mapping": False, # Not implemented in python tokenizers # "add_special_tokens": False, } # Test encode_plus for pretokenized inputs output_r = tokenizer_r.encode_plus(pretokenized_input_simple, **kwargs) output_p = tokenizer_p.encode_plus(pretokenized_input_simple, **kwargs) for key in output_p.keys(): self.assertEqual(output_p[key], output_r[key]) # Test batch_encode_plus for pretokenized inputs input_batch = ([pretokenized_input_simple] * 2) + [pretokenized_input_simple + pretokenized_input_pair] output_r = tokenizer_r.batch_encode_plus(input_batch, **batch_kwargs) output_p = tokenizer_p.batch_encode_plus(input_batch, **batch_kwargs) for key in output_p.keys(): self.assertEqual(output_p[key], output_r[key]) # Test encode for pretokenized inputs pairs output_r = tokenizer_r.encode( pretokenized_input_simple, pretokenized_input_pair, is_split_into_words=True ) output_p = tokenizer_p.encode( pretokenized_input_simple, pretokenized_input_pair, is_split_into_words=True ) self.assertEqual(output_p, output_r) # Test encode_plus for pretokenized inputs output_r = tokenizer_r.encode_plus(pretokenized_input_simple, pretokenized_input_pair, **kwargs) output_p = tokenizer_p.encode_plus(pretokenized_input_simple, pretokenized_input_pair, **kwargs) for key in output_p.keys(): self.assertEqual(output_p[key], output_r[key]) # Test batch_encode_plus for pretokenized inputs input_batch_pair = ([pretokenized_input_simple, pretokenized_input_pair] * 2) + [ pretokenized_input_simple + pretokenized_input_pair, pretokenized_input_pair, ] output_r = tokenizer_r.batch_encode_plus(input_batch_pair, **batch_kwargs) output_p = tokenizer_p.batch_encode_plus(input_batch_pair, **batch_kwargs) for key in output_p.keys(): self.assertEqual(output_p[key], output_r[key]) def test_create_token_type_ids(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) input_simple = [1, 2, 3] input_pair = [1, 2, 3] # Generate output output_r = tokenizer_r.create_token_type_ids_from_sequences(input_simple) output_p = tokenizer_p.create_token_type_ids_from_sequences(input_simple) self.assertEqual(output_p, output_r) # Generate pair output output_r = tokenizer_r.create_token_type_ids_from_sequences(input_simple, input_pair) output_p = tokenizer_p.create_token_type_ids_from_sequences(input_simple, input_pair) self.assertEqual(output_p, output_r) def test_build_inputs_with_special_tokens(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) # # Input string # input_simple = tokenizer_p.tokenize("This is a sample input", add_special_tokens=False) # input_pair = 
tokenizer_p.tokenize("This is a sample pair", add_special_tokens=False) # # Generate output # output_r = tokenizer_r.build_inputs_with_special_tokens(input_simple) # output_p = tokenizer_p.build_inputs_with_special_tokens(input_simple) # self.assertEqual(output_p, output_r) # # Generate pair output # output_r = tokenizer_r.build_inputs_with_special_tokens(input_simple, input_pair) # output_p = tokenizer_p.build_inputs_with_special_tokens(input_simple, input_pair) # self.assertEqual(output_p, output_r) # Input tokens id input_simple = tokenizer_p.encode("This is a sample input", add_special_tokens=False) input_pair = tokenizer_p.encode("This is a sample pair", add_special_tokens=False) # Generate output output_r = tokenizer_r.build_inputs_with_special_tokens(input_simple) output_p = tokenizer_p.build_inputs_with_special_tokens(input_simple) self.assertEqual(output_p, output_r) # Generate pair output output_r = tokenizer_r.build_inputs_with_special_tokens(input_simple, input_pair) output_p = tokenizer_p.build_inputs_with_special_tokens(input_simple, input_pair) self.assertEqual(output_p, output_r) def test_padding(self, max_length=50): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) self.assertEqual(tokenizer_p.pad_token_id, tokenizer_r.pad_token_id) pad_token_id = tokenizer_p.pad_token_id # Encode - Simple input input_r = tokenizer_r.encode("This is a simple input", max_length=max_length, pad_to_max_length=True) input_p = tokenizer_p.encode("This is a simple input", max_length=max_length, pad_to_max_length=True) self.assert_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.encode("This is a simple input", max_length=max_length, padding="max_length") input_p = tokenizer_p.encode("This is a simple input", max_length=max_length, padding="max_length") self.assert_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.encode("This is a simple input", padding="longest") input_p = tokenizer_p.encode("This is a simple input", padding=True) self.assert_padded_input_match(input_r, input_p, len(input_r), pad_token_id) # Encode - Pair input input_r = tokenizer_r.encode( "This is a simple input", "This is a pair", max_length=max_length, pad_to_max_length=True ) input_p = tokenizer_p.encode( "This is a simple input", "This is a pair", max_length=max_length, pad_to_max_length=True ) self.assert_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.encode( "This is a simple input", "This is a pair", max_length=max_length, padding="max_length" ) input_p = tokenizer_p.encode( "This is a simple input", "This is a pair", max_length=max_length, padding="max_length" ) self.assert_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.encode("This is a simple input", "This is a pair", padding=True) input_p = tokenizer_p.encode("This is a simple input", "This is a pair", padding="longest") self.assert_padded_input_match(input_r, input_p, len(input_r), pad_token_id) # Encode_plus - Simple input input_r = tokenizer_r.encode_plus( "This is a simple input", max_length=max_length, pad_to_max_length=True ) input_p = tokenizer_p.encode_plus( "This is a simple input", max_length=max_length, pad_to_max_length=True ) 
self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id) self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"]) input_r = tokenizer_r.encode_plus( "This is a simple input", max_length=max_length, padding="max_length" ) input_p = tokenizer_p.encode_plus( "This is a simple input", max_length=max_length, padding="max_length" ) self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id) self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"]) input_r = tokenizer_r.encode_plus("This is a simple input", padding="longest") input_p = tokenizer_p.encode_plus("This is a simple input", padding=True) self.assert_padded_input_match( input_r["input_ids"], input_p["input_ids"], len(input_r["input_ids"]), pad_token_id ) self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"]) # Encode_plus - Pair input input_r = tokenizer_r.encode_plus( "This is a simple input", "This is a pair", max_length=max_length, pad_to_max_length=True ) input_p = tokenizer_p.encode_plus( "This is a simple input", "This is a pair", max_length=max_length, pad_to_max_length=True ) self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id) self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"]) input_r = tokenizer_r.encode_plus( "This is a simple input", "This is a pair", max_length=max_length, padding="max_length" ) input_p = tokenizer_p.encode_plus( "This is a simple input", "This is a pair", max_length=max_length, padding="max_length" ) self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id) self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"]) input_r = tokenizer_r.encode_plus("This is a simple input", "This is a pair", padding="longest") input_p = tokenizer_p.encode_plus("This is a simple input", "This is a pair", padding=True) self.assert_padded_input_match( input_r["input_ids"], input_p["input_ids"], len(input_r["input_ids"]), pad_token_id ) self.assertSequenceEqual(input_r["attention_mask"], input_p["attention_mask"]) # Batch_encode_plus - Simple input input_r = tokenizer_r.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], max_length=max_length, pad_to_max_length=True, ) input_p = tokenizer_p.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], max_length=max_length, pad_to_max_length=True, ) self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], max_length=max_length, padding="max_length", ) input_p = tokenizer_p.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], max_length=max_length, padding="max_length", ) self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], max_length=max_length, padding="longest", ) input_p = tokenizer_p.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], max_length=max_length, padding=True, ) self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id) input_r = tokenizer_r.batch_encode_plus( ["This is a simple input 1", "This is a simple input 2"], padding="longest" ) input_p = tokenizer_p.batch_encode_plus( ["This is a simple input 1", 
"This is a simple input 2"], padding=True ) self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id) # Batch_encode_plus - Pair input input_r = tokenizer_r.batch_encode_plus( [ ("This is a simple input 1", "This is a simple input 2"), ("This is a simple pair 1", "This is a simple pair 2"), ], max_length=max_length, truncation=True, padding="max_length", ) input_p = tokenizer_p.batch_encode_plus( [ ("This is a simple input 1", "This is a simple input 2"), ("This is a simple pair 1", "This is a simple pair 2"), ], max_length=max_length, truncation=True, padding="max_length", ) self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id) input_r = tokenizer_r.batch_encode_plus( [ ("This is a simple input 1", "This is a simple input 2"), ("This is a simple pair 1", "This is a simple pair 2"), ], padding=True, ) input_p = tokenizer_p.batch_encode_plus( [ ("This is a simple input 1", "This is a simple input 2"), ("This is a simple pair 1", "This is a simple pair 2"), ], padding="longest", ) self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id) # Using pad on single examples after tokenization input_r = tokenizer_r.encode_plus("This is a input 1") input_r = tokenizer_r.pad(input_r) input_p = tokenizer_r.encode_plus("This is a input 1") input_p = tokenizer_r.pad(input_p) self.assert_padded_input_match( input_r["input_ids"], input_p["input_ids"], len(input_r["input_ids"]), pad_token_id ) # Using pad on single examples after tokenization input_r = tokenizer_r.encode_plus("This is a input 1") input_r = tokenizer_r.pad(input_r, max_length=max_length, padding="max_length") input_p = tokenizer_r.encode_plus("This is a input 1") input_p = tokenizer_r.pad(input_p, max_length=max_length, padding="max_length") self.assert_padded_input_match(input_r["input_ids"], input_p["input_ids"], max_length, pad_token_id) # Using pad after tokenization input_r = tokenizer_r.batch_encode_plus( ["This is a input 1", "This is a much longer input whilch should be padded"] ) input_r = tokenizer_r.pad(input_r) input_p = tokenizer_r.batch_encode_plus( ["This is a input 1", "This is a much longer input whilch should be padded"] ) input_p = tokenizer_r.pad(input_p) self.assert_batch_padded_input_match(input_r, input_p, len(input_r["input_ids"][0]), pad_token_id) # Using pad after tokenization input_r = tokenizer_r.batch_encode_plus( ["This is a input 1", "This is a much longer input whilch should be padded"] ) input_r = tokenizer_r.pad(input_r, max_length=max_length, padding="max_length") input_p = tokenizer_r.batch_encode_plus( ["This is a input 1", "This is a much longer input whilch should be padded"] ) input_p = tokenizer_r.pad(input_p, max_length=max_length, padding="max_length") self.assert_batch_padded_input_match(input_r, input_p, max_length, pad_token_id) def test_padding_different_model_input_name(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) self.assertEqual(tokenizer_p.pad_token_id, tokenizer_r.pad_token_id) pad_token_id = tokenizer_p.pad_token_id input_r = tokenizer_r.batch_encode_plus( ["This is a input 1", "This is a much longer input whilch should be padded"] ) input_p = tokenizer_r.batch_encode_plus( ["This is a input 1", "This is a much 
longer input whilch should be padded"] ) # rename encoded batch to "inputs" input_r["inputs"] = input_r[tokenizer_r.model_input_names[0]] del input_r[tokenizer_r.model_input_names[0]] input_p["inputs"] = input_p[tokenizer_p.model_input_names[0]] del input_p[tokenizer_p.model_input_names[0]] # Renaming `input_ids` to `inputs` tokenizer_r.model_input_names = ["inputs"] + tokenizer_r.model_input_names[1:] tokenizer_p.model_input_names = ["inputs"] + tokenizer_p.model_input_names[1:] input_r = tokenizer_r.pad(input_r, padding="longest") input_p = tokenizer_r.pad(input_p, padding="longest") max_length = len(input_p["inputs"][0]) self.assert_batch_padded_input_match( input_r, input_p, max_length, pad_token_id, model_main_input_name="inputs" ) def test_save_pretrained(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) tmpdirname2 = tempfile.mkdtemp() tokenizer_r_files = tokenizer_r.save_pretrained(tmpdirname2) tokenizer_p_files = tokenizer_p.save_pretrained(tmpdirname2) # Checks it save with the same files self.assertSequenceEqual(tokenizer_r_files, tokenizer_p_files) # Checks everything loads correctly in the same way tokenizer_rp = tokenizer_r.from_pretrained(tmpdirname2) tokenizer_pp = tokenizer_p.from_pretrained(tmpdirname2) # Check special tokens are set accordingly on Rust and Python for key in tokenizer_pp.special_tokens_map: self.assertTrue(hasattr(tokenizer_rp, key)) # self.assertEqual(getattr(tokenizer_rp, key), getattr(tokenizer_pp, key)) # self.assertEqual(getattr(tokenizer_rp, key + "_id"), getattr(tokenizer_pp, key + "_id")) shutil.rmtree(tmpdirname2) def test_embeded_special_tokens(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) sentence = "A, <mask> AllenNLP sentence." 
tokens_r = tokenizer_r.encode_plus( sentence, add_special_tokens=True, ) tokens_p = tokenizer_p.encode_plus( sentence, add_special_tokens=True, ) for key in tokens_p.keys(): self.assertEqual(tokens_r[key], tokens_p[key]) if "token_type_ids" in tokens_r: self.assertEqual(sum(tokens_r["token_type_ids"]), sum(tokens_p["token_type_ids"])) tokens_r = tokenizer_r.convert_ids_to_tokens(tokens_r["input_ids"]) tokens_p = tokenizer_p.convert_ids_to_tokens(tokens_p["input_ids"]) self.assertSequenceEqual(tokens_r, tokens_p) def test_compare_add_special_tokens(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) simple_num_special_tokens_to_add = tokenizer_r.num_special_tokens_to_add(pair=False) # pair_num_special_tokens_to_add = tokenizer_r.num_special_tokens_to_add(pair=True) for text in ["", " "]: # tokenize() no_special_tokens = tokenizer_r.tokenize(text, add_special_tokens=False) with_special_tokens = tokenizer_r.tokenize(text, add_special_tokens=True) self.assertEqual( len(no_special_tokens), len(with_special_tokens) - simple_num_special_tokens_to_add ) # encode() no_special_tokens = tokenizer_r.encode(text, add_special_tokens=False) with_special_tokens = tokenizer_r.encode(text, add_special_tokens=True) self.assertEqual( len(no_special_tokens), len(with_special_tokens) - simple_num_special_tokens_to_add ) # encode_plus() no_special_tokens = tokenizer_r.encode_plus(text, add_special_tokens=False) with_special_tokens = tokenizer_r.encode_plus(text, add_special_tokens=True) for key in no_special_tokens.keys(): self.assertEqual( len(no_special_tokens[key]), len(with_special_tokens[key]) - simple_num_special_tokens_to_add, ) # # batch_encode_plus no_special_tokens = tokenizer_r.batch_encode_plus([text, text], add_special_tokens=False) with_special_tokens = tokenizer_r.batch_encode_plus([text, text], add_special_tokens=True) for key in no_special_tokens.keys(): for i_no, i_with in zip(no_special_tokens[key], with_special_tokens[key]): self.assertEqual(len(i_no), len(i_with) - simple_num_special_tokens_to_add) def test_compare_prepare_for_model(self): for tokenizer, pretrained_name, kwargs in self.tokenizers_list: with self.subTest("{} ({})".format(tokenizer.__class__.__name__, pretrained_name)): tokenizer_r = self.rust_tokenizer_class.from_pretrained(pretrained_name, **kwargs) tokenizer_p = self.tokenizer_class.from_pretrained(pretrained_name, **kwargs) string_sequence = "Asserting that both tokenizers are equal" python_output = tokenizer_p.prepare_for_model( tokenizer_p.encode(string_sequence, add_special_tokens=False) ) rust_output = tokenizer_r.prepare_for_model( tokenizer_r.encode(string_sequence, add_special_tokens=False) ) for key in python_output: self.assertEqual(python_output[key], rust_output[key])
AdaMix/tests/test_tokenization_common.py/0
{ "file_path": "AdaMix/tests/test_tokenization_common.py", "repo_id": "AdaMix", "token_count": 73036 }
79
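The test file above drives the alignment helpers (token_to_word, word_to_tokens, char_to_token, token_to_chars and friends) that fast, Rust-backed tokenizers expose on a BatchEncoding. Below is a minimal sketch of that API outside the test harness; the "bert-base-uncased" checkpoint is only an illustrative assumption and is not part of the file above — any fast tokenizer behaves the same way.

from transformers import AutoTokenizer

# Any fast (Rust-backed) tokenizer returns a BatchEncoding with alignment helpers.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
encoding = tokenizer("Wonderful no inspiration example with subtoken", add_special_tokens=False)

word_of_first_token = encoding.token_to_word(0)             # word index covering token 0
token_span = encoding.word_to_tokens(word_of_first_token)   # TokenSpan(start, end) of that word
token_of_first_char = encoding.char_to_token(0)             # token index covering character 0
char_span = encoding.token_to_chars(0)                      # CharSpan(start, end) of token 0

print(word_of_first_token, token_span, token_of_first_char, char_span)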
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team, Microsoft Corporation.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import unittest

from transformers import MPNetTokenizerFast
from transformers.models.mpnet.tokenization_mpnet import VOCAB_FILES_NAMES, MPNetTokenizer
from transformers.testing_utils import require_tokenizers, slow

from .test_tokenization_common import TokenizerTesterMixin


@require_tokenizers
class MPNetTokenizerTest(TokenizerTesterMixin, unittest.TestCase):

    tokenizer_class = MPNetTokenizer
    rust_tokenizer_class = MPNetTokenizerFast
    test_rust_tokenizer = True
    space_between_special_tokens = True

    def setUp(self):
        super().setUp()

        vocab_tokens = [
            "[UNK]",
            "[CLS]",
            "[SEP]",
            "[PAD]",
            "[MASK]",
            "want",
            "##want",
            "##ed",
            "wa",
            "un",
            "runn",
            "##ing",
            ",",
            "low",
            "lowest",
        ]
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
            vocab_writer.write("".join([x + "\n" for x in vocab_tokens]))

    def get_input_output_texts(self, tokenizer):
        input_text = "UNwant\u00E9d,running"
        output_text = "unwanted, running"
        return input_text, output_text

    def test_full_tokenizer(self):
        tokenizer = self.tokenizer_class(self.vocab_file)

        tokens = tokenizer.tokenize("UNwant\u00E9d,running")
        self.assertListEqual(tokens, ["un", "##want", "##ed", ",", "runn", "##ing"])
        self.assertListEqual(tokenizer.convert_tokens_to_ids(tokens), [9, 6, 7, 12, 10, 11])

    @slow
    def test_sequence_builders(self):
        tokenizer = self.tokenizer_class.from_pretrained("microsoft/mpnet-base")

        text = tokenizer.encode("sequence builders", add_special_tokens=False)
        text_2 = tokenizer.encode("multi-sequence build", add_special_tokens=False)

        encoded_sentence = tokenizer.build_inputs_with_special_tokens(text)
        encoded_pair = tokenizer.build_inputs_with_special_tokens(text, text_2)

        assert encoded_sentence == [0] + text + [2]
        assert encoded_pair == [0] + text + [2] + [2] + text_2 + [2]
AdaMix/tests/test_tokenization_mpnet.py/0
{ "file_path": "AdaMix/tests/test_tokenization_mpnet.py", "repo_id": "AdaMix", "token_count": 1198 }
80
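The MPNet test above builds a toy WordPiece vocabulary and expects greedy, longest-match-first sub-word splitting ("unwanted" -> ["un", "##want", "##ed"]). The sketch below illustrates that splitting rule in plain Python; it is a simplification for illustration, not the library's actual implementation (lower-casing and accent stripping are ignored).

def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    # Greedy longest-match-first splitting; continuation pieces are prefixed with "##".
    tokens, start = [], 0
    while start < len(word):
        end, piece_found = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                piece_found = piece
                break
            end -= 1
        if piece_found is None:
            return [unk]
        tokens.append(piece_found)
        start = end
    return tokens

vocab = {"want", "##want", "##ed", "wa", "un", "runn", "##ing", ",", "low", "lowest"}
print(wordpiece_tokenize("unwanted", vocab))  # ['un', '##want', '##ed']
print(wordpiece_tokenize("running", vocab))   # ['runn', '##ing']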
# coding=utf-8
# Copyright 2020 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json
import os
import unittest

from transformers.models.xlm.tokenization_xlm import VOCAB_FILES_NAMES, XLMTokenizer
from transformers.testing_utils import slow

from .test_tokenization_common import TokenizerTesterMixin


class XLMTokenizationTest(TokenizerTesterMixin, unittest.TestCase):

    tokenizer_class = XLMTokenizer
    test_rust_tokenizer = False

    def setUp(self):
        super().setUp()

        # Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt
        vocab = [
            "l",
            "o",
            "w",
            "e",
            "r",
            "s",
            "t",
            "i",
            "d",
            "n",
            "w</w>",
            "r</w>",
            "t</w>",
            "lo",
            "low",
            "er</w>",
            "low</w>",
            "lowest</w>",
            "newer</w>",
            "wider</w>",
            "<unk>",
        ]
        vocab_tokens = dict(zip(vocab, range(len(vocab))))
        merges = ["l o 123", "lo w 1456", "e r</w> 1789", ""]

        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        self.merges_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["merges_file"])
        with open(self.vocab_file, "w") as fp:
            fp.write(json.dumps(vocab_tokens))
        with open(self.merges_file, "w") as fp:
            fp.write("\n".join(merges))

    def get_input_output_texts(self, tokenizer):
        input_text = "lower newer"
        output_text = "lower newer"
        return input_text, output_text

    def test_full_tokenizer(self):
        """Adapted from Sennrich et al. 2015 and https://github.com/rsennrich/subword-nmt"""
        tokenizer = XLMTokenizer(self.vocab_file, self.merges_file)

        text = "lower"
        bpe_tokens = ["low", "er</w>"]
        tokens = tokenizer.tokenize(text)
        self.assertListEqual(tokens, bpe_tokens)

        input_tokens = tokens + ["<unk>"]
        input_bpe_tokens = [14, 15, 20]
        self.assertListEqual(tokenizer.convert_tokens_to_ids(input_tokens), input_bpe_tokens)

    @slow
    def test_sequence_builders(self):
        tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

        text = tokenizer.encode("sequence builders", add_special_tokens=False)
        text_2 = tokenizer.encode("multi-sequence build", add_special_tokens=False)

        encoded_sentence = tokenizer.build_inputs_with_special_tokens(text)
        encoded_pair = tokenizer.build_inputs_with_special_tokens(text, text_2)

        assert encoded_sentence == [0] + text + [1]
        assert encoded_pair == [0] + text + [1] + text_2 + [1]
AdaMix/tests/test_tokenization_xlm.py/0
{ "file_path": "AdaMix/tests/test_tokenization_xlm.py", "repo_id": "AdaMix", "token_count": 1506 }
81
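The XLM test above relies on byte-pair-encoding merges ("l o", "lo w", "e r</w>") to turn "lower" into ["low", "er</w>"]. The sketch below shows how ranked merges are applied to a single word; merge ranks are assumed to follow file order, and this is an illustration rather than the tokenizer's real code path.

def bpe(word, merge_ranks):
    # Split into characters, marking the last one with the end-of-word suffix.
    symbols = list(word[:-1]) + [word[-1] + "</w>"]
    while len(symbols) > 1:
        # Pick the adjacent pair with the best (lowest) merge rank and fuse it.
        pairs = [(merge_ranks.get((a, b), float("inf")), i)
                 for i, (a, b) in enumerate(zip(symbols, symbols[1:]))]
        rank, i = min(pairs)
        if rank == float("inf"):
            break
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]
    return symbols

merge_ranks = {("l", "o"): 0, ("lo", "w"): 1, ("e", "r</w>"): 2}
print(bpe("lower", merge_ranks))  # ['low', 'er</w>']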
# coding=utf-8
# Copyright 2020 The HuggingFace Inc. team.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import json
import os

from tensorflow.core.protobuf.saved_model_pb2 import SavedModel


# All paths are set with the intent you should run this script from the root of the repo with the command
# python utils/check_tf_ops.py
REPO_PATH = "."

# Internal TensorFlow ops that can be safely ignored (mostly specific to a saved model)
INTERNAL_OPS = [
    "Assert",
    "AssignVariableOp",
    "EmptyTensorList",
    "MergeV2Checkpoints",
    "ReadVariableOp",
    "ResourceGather",
    "RestoreV2",
    "SaveV2",
    "ShardedFilename",
    "StatefulPartitionedCall",
    "StaticRegexFullMatch",
    "VarHandleOp",
]


def onnx_compliancy(saved_model_path, strict, opset):
    saved_model = SavedModel()
    onnx_ops = []

    with open(os.path.join(REPO_PATH, "utils", "tf_ops", "onnx.json")) as f:
        onnx_opsets = json.load(f)["opsets"]

    for i in range(1, opset + 1):
        onnx_ops.extend(onnx_opsets[str(i)])

    with open(saved_model_path, "rb") as f:
        saved_model.ParseFromString(f.read())

    model_op_names = set()

    # Iterate over every metagraph in case there is more than one (a saved model can contain multiple graphs)
    for meta_graph in saved_model.meta_graphs:
        # Add operations in the graph definition
        model_op_names.update(node.op for node in meta_graph.graph_def.node)

        # Go through the functions in the graph definition
        for func in meta_graph.graph_def.library.function:
            # Add operations in each function
            model_op_names.update(node.op for node in func.node_def)

    # Convert to a sorted list for deterministic output
    model_op_names = sorted(model_op_names)
    incompatible_ops = []

    for op in model_op_names:
        if op not in onnx_ops and op not in INTERNAL_OPS:
            incompatible_ops.append(op)

    if strict and len(incompatible_ops) > 0:
        # Join the op names so they can be concatenated with the message string
        raise Exception(f"Found the following incompatible ops for the opset {opset}:\n" + "\n".join(incompatible_ops))
    elif len(incompatible_ops) > 0:
        print(f"Found the following incompatible ops for the opset {opset}:")
        print(*incompatible_ops, sep="\n")
    else:
        print(f"The saved model {saved_model_path} can properly be converted with ONNX.")


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--saved_model_path", help="Path of the saved model to check (the .pb file).")
    parser.add_argument(
        "--opset", default=12, type=int, help="The ONNX opset against which the model has to be tested."
    )
    parser.add_argument(
        "--framework", choices=["onnx"], default="onnx", help="Frameworks against which to test the saved model."
    )
    parser.add_argument(
        "--strict", action="store_true", help="Whether to make the checking strict (raise errors) or not (raise warnings)"
    )
    args = parser.parse_args()

    if args.framework == "onnx":
        onnx_compliancy(args.saved_model_path, args.strict, args.opset)
AdaMix/utils/check_tf_ops.py/0
{ "file_path": "AdaMix/utils/check_tf_ops.py", "repo_id": "AdaMix", "token_count": 1302 }
82
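The script above walks a TensorFlow SavedModel protobuf, collects its op names, and diffs them against an ONNX opset allowlist. The core of that check reduces to a set difference; the sketch below isolates it with made-up op lists (the op names are placeholders, not entries from the real onnx.json).

# Minimal sketch of the allowlist check performed by onnx_compliancy(); the op
# sets below are illustrative placeholders, not the real opset tables.
INTERNAL_OPS = {"Assert", "ReadVariableOp", "VarHandleOp"}

def find_incompatible_ops(model_op_names, onnx_ops, internal_ops=INTERNAL_OPS):
    # Keep only ops that are neither ONNX-convertible nor saved-model bookkeeping.
    return sorted(op for op in model_op_names
                  if op not in onnx_ops and op not in internal_ops)

model_ops = {"MatMul", "Relu", "ReadVariableOp", "SomeCustomOp"}
onnx_allowlist = {"MatMul", "Relu", "Softmax"}
print(find_incompatible_ops(model_ops, onnx_allowlist))  # ['SomeCustomOp']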
import airsimdroneracinglab as airsim import json import numpy as np import os def to_airsim_vector(np_arr): assert np.size(np_arr) == 3 return airsim.Vector3r( np.float(np_arr[0]), np.float(np_arr[1]), np.float(np_arr[2]) ) def to_airsim_vectors(np_arr): return [to_airsim_vector(np_arr[i, :]) for i in range(np.size(np_arr, 0))] # these clases are only meant to be settings generator. # for everything else, there's airsimdroneracinglab.Pose() class Position: def __init__(self, x=0.0, y=0.0, z=0.0): self.x = x self.y = y self.z = z class Rotation: def __init__(self, yaw=0.0, pitch=0.0, roll=0.0): self.yaw = yaw self.pitch = pitch self.roll = roll class Pose: def __init__(self, position, rotation): self.position = position self.rotation = rotation class AirSimSettingsCreator(object): def __init__(self, sim_mode="Multirotor"): self.sim_mode = sim_mode self.settings_dict = {} def add_minimal(self): self.settings_dict[ "SeeDocsAt" ] = "https://github.com/Microsoft/AirSim/blob/master/docs/settings.md" self.settings_dict["SettingsVersion"] = 1.2 self.settings_dict["SimMode"] = self.sim_mode self.settings_dict["ClockSpeed"] = 1 # can be used for camera pose or vehicle pose by passing in the right settings_key def set_pose(self, setting_key, pose): setting_key["X"] = pose.position.x setting_key["Y"] = pose.position.y setting_key["Z"] = pose.position.z setting_key["Pitch"] = pose.rotation.pitch setting_key["Roll"] = pose.rotation.roll setting_key["Yaw"] = pose.rotation.yaw def add_multirotor(self, vehicle_name, pose): assert self.settings_dict["SimMode"] == "Multirotor" if "Vehicles" not in self.settings_dict.keys(): self.settings_dict["Vehicles"] = {} self.settings_dict["Vehicles"][vehicle_name] = {} self.settings_dict["Vehicles"][vehicle_name]["VehicleType"] = "SimpleFlight" self.set_pose(self.settings_dict["Vehicles"][vehicle_name], pose) def add_camera( self, vehicle_name, camera_name, relative_pose, image_type, image_width, image_height, fov_horizontal_degrees, ): # fetch vehicle setting dict vehicle_setting = self.settings_dict["Vehicles"][vehicle_name] # initialize vehicle's camera setting dict to empty vehicle_setting["Cameras"] = {} vehicle_setting["Cameras"][camera_name] = {} camera_setting = vehicle_setting["Cameras"][camera_name] self.set_pose(camera_setting, relative_pose) capture_setting = {} capture_setting["Width"] = image_width capture_setting["Height"] = image_height capture_setting["ImageType"] = image_type capture_setting["FOV_Degrees"] = fov_horizontal_degrees camera_setting["CaptureSettings"] = [capture_setting] # default linux: /home/$USER/Documents/AirSim/settings.json # default windows: C:\\Users\\%USERNAME%\\Documents\\AirSim\\settings.json def write_airsim_settings_file(self, base_filename="settings.json"): user_dir = os.path.expanduser("~") airsim_settings_dir = os.path.join(user_dir, "Documents", "AirSim") if not os.path.exists(airsim_settings_dir): os.makedirs(airsim_settings_dir) airsim_settings_abs_file_path = os.path.join(airsim_settings_dir, base_filename) with open(airsim_settings_abs_file_path, "w") as f: json.dump(self.settings_dict, f, indent=2, sort_keys=True) # usage: AirSimSettingsCreator().write_airsim_neurips_baseline_settings_file() def write_airsim_neurips_baseline_settings_file(self): instance = self.__class__() instance.add_minimal() instance.add_multirotor( vehicle_name="drone_1", pose=Pose(Position(), Rotation()) ) instance.add_camera( vehicle_name="drone_1", camera_name="fpv_cam", relative_pose=Pose(Position(0.25, 0.0, 0.0), Rotation()), image_type=0, 
image_width=320, image_height=240, fov_horizontal_degrees=90, ) instance.add_multirotor( vehicle_name="drone_2", pose=Pose(Position(), Rotation()) ) instance.write_airsim_settings_file()
AirSim-Drone-Racing-Lab/baselines/utils.py/0
{ "file_path": "AirSim-Drone-Racing-Lab/baselines/utils.py", "repo_id": "AirSim-Drone-Racing-Lab", "token_count": 1979 }
83
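The module above is only a settings generator. Here is a short usage sketch built solely from the classes it defines; it mirrors write_airsim_neurips_baseline_settings_file and writes the default ~/Documents/AirSim/settings.json with one drone and one FPV camera.

# Usage sketch for AirSimSettingsCreator as defined above.
creator = AirSimSettingsCreator(sim_mode="Multirotor")
creator.add_minimal()
creator.add_multirotor(vehicle_name="drone_1", pose=Pose(Position(), Rotation()))
creator.add_camera(
    vehicle_name="drone_1",
    camera_name="fpv_cam",
    relative_pose=Pose(Position(0.25, 0.0, 0.0), Rotation()),
    image_type=0,
    image_width=320,
    image_height=240,
    fov_horizontal_degrees=90,
)
creator.write_airsim_settings_file()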
import os import csv import argparse import sys import pandas as pd from scipy.spatial.transform import Rotation as R from scipy.spatial.transform import Slerp import numpy as np TIME_COLUMN = 'TimeStamp' INTERPOLABLE_VEL_COLUMNS = ['vx', 'vy', 'vz', 'vyaw'] INTERPOLABLE_QUAT_COLUMNS = ['odom.quaternion.x', 'odom.quaternion.y', 'odom.quaternion.z', 'odom.quaternion.w'] IMAGE_COLUMNS = [TIME_COLUMN, 'ImageFile'] RESULT_COLUMNS = INTERPOLABLE_VEL_COLUMNS def get_quat(dict): q_x = dict['odom.quaternion.x'] q_y = dict['odom.quaternion.y'] q_z = dict['odom.quaternion.z'] q_w = dict['odom.quaternion.w'] q = np.array([q_x, q_y, q_z, q_w]) return q def get_vel(dict): v_x = dict['vx'] v_y = dict['vy'] v_z = dict['vz'] v_vec = np.array([v_x, v_y, v_z]) return v_vec def get_abspath(filename): return os.path.abspath( os.path.join(os.path.dirname(__file__), './{}'.format(filename))) def create_image_path(image_file_name, image_folder_path): return os.path.abspath(os.path.join(image_folder_path, image_file_name)) def create_suffixed_file(file_path, suffix): _path, _format = os.path.splitext(file_path) return '{}_{}{}'.format(_path, suffix, _format) def interpolate(v0, v1, t): return round((1 - t) * v0 + t * v1, 8) def normalize(v0, v1, x): # makes value between 0 and 1 return (x - v0) / (v1 - v0) def interpolate_record(record1, record2, image_record): """ Returns result record with interpolated values """ # interpolate velocities interpolated_vel_record = {} t = normalize(record1[TIME_COLUMN], record2[TIME_COLUMN], image_record[TIME_COLUMN]) for col in INTERPOLABLE_VEL_COLUMNS: interpolated_vel_record[col] = interpolate(record1[col], record2[col], t) # interpolate rotations of the body frame q0 = get_quat(record1) q1 = get_quat(record2) key_rots = R.from_quat([q0, q1]) key_times = [0, 1] time_interp = [t] slerp = Slerp(key_times, key_rots) interp_rot = slerp(time_interp) v_world = get_vel(interpolated_vel_record) # apply rotation to the velocity vector # needs to be inverse because we interpolated the rotation matrix from body -> world # and what we're doing here is going from world -> body v_body = interp_rot.apply(v_world, inverse=True)[0] # put everything back in dict in body coords interpolated_vel_body = {} interpolated_vel_body['vx'] = v_body[0] interpolated_vel_body['vy'] = v_body[1] interpolated_vel_body['vz'] = v_body[2] interpolated_vel_body['vyaw'] = interpolated_vel_record['vyaw'] return interpolated_vel_body def find_closest_rows(value, iterator): v1, v2 = None, None r1, r2 = None, None for current in iterator: curr_value = current[1] if curr_value[TIME_COLUMN] <= value: v1 = curr_value elif v1 is not None and curr_value[TIME_COLUMN] >= value: v2 = curr_value break elif v1 is None and curr_value[TIME_COLUMN] >= value: break return v1, v2 def split_test_training_data(file_paths, lines_number, test_split=0.2): test_number = int(lines_number * test_split) for file_path in file_paths: f = open(file_path, 'r') f_test = open(create_suffixed_file(file_path, 'test'), 'w') f_train = open(create_suffixed_file(file_path, 'train'), 'w') i = 0 for line in f.readlines(): if i <= test_number: f_test.writelines(line) else: f_train.writelines(line) i += 1 f.close() f_train.close() f_test.close() os.remove(file_path) def process( velocities, images, result_velocities_file_path, result_images_file_path, images_folder_path): """ Process velocities and images frames. For each row in images: 1) Match 2 closest by timestamp velocities rows to the image record. 
2) Calculate normalized parameter t: image_time - vt1 / vt2 - vt1. vt1, vt2: velocity records timestamps 3) Interpolate velocities values using t. 4) Create new row using image timestamp, image and interpolated values. """ velocity_iterator = velocities.iterrows() f_velocities = open(result_velocities_file_path, 'w+') f_images = open(result_images_file_path, 'w+') writer_v = csv.DictWriter(f_velocities, RESULT_COLUMNS, delimiter=',') writer_i = csv.DictWriter(f_images, ['ImageFile'], delimiter=',') row_counter, missed = 0, 0 for _, image_row in images.iterrows(): if row_counter % 1000 == 0: print('{} out of {} images processed -> {}%'.format(row_counter, images.shape[0], 100.0*row_counter/images.shape[0])) v1, v2 = find_closest_rows(image_row[TIME_COLUMN], velocity_iterator) # print('{}'.format(v1['TimeStamp'] - image_row[TIME_COLUMN])) if v1 is None or v2 is None: continue interpolated = interpolate_record(v1, v2, image_row) row_counter += 1 image_path = create_image_path( image_row['ImageFile'], images_folder_path ) if not os.path.isfile(image_path): missed += 1 continue writer_v.writerow(interpolated) writer_i.writerow({ 'ImageFile': image_path }) print('--------------------------------') print('Missed files: {}'.format(missed)) f_velocities.close() f_images.close() # split_test_training_data([result_velocities_file_path, result_images_file_path], row_counter) def run( velocities_file_path, images_file_path, result_velocities_file_path, result_images_file_path, images_folder_path): velocities = pd.read_csv(velocities_file_path, delimiter=', ') images = pd.read_csv( images_file_path, delimiter=', ') # sys.exit() process( velocities, images, result_velocities_file_path, result_images_file_path, images_folder_path ) print('------------------------------------') print('Successfully created the results!') if __name__ == "__main__": # parser = argparse.ArgumentParser() # parser.add_argument("velocity", help="Path to the velocities file") # parser.add_argument("images", help="Path to the images file") # parser.add_argument( # "result_velocities", help="Path to the result velocities file") # parser.add_argument("result_images", help="Path to the result images file") # parser.add_argument("images_folder", help="Path to the images folder") # args = parser.parse_args() # run( # args.velocity, # args.images, # args.result_velocities, # args.result_images, # args.images_folder # ) base_path = '/home/rb/all_files/il_datasets/bc_test' run( os.path.join(base_path, 'moveOnSpline_vel_cmd.txt'), os.path.join(base_path, 'images.txt'), os.path.join(base_path, 'proc_vel.txt'), os.path.join(base_path, 'proc_images.txt'), os.path.join(base_path, 'images'))
AirSim-Drone-Racing-VAE-Imitation/datagen/action_generator/data_processor.py/0
{ "file_path": "AirSim-Drone-Racing-VAE-Imitation/datagen/action_generator/data_processor.py", "repo_id": "AirSim-Drone-Racing-VAE-Imitation", "token_count": 3146 }
84
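The processor above interpolates velocities linearly, SLERPs the two surrounding odometry orientations, and then rotates the world-frame velocity into the body frame. The sketch below isolates that rotation step with SciPy; the quaternions and velocity are toy values, not data from the dataset.

import numpy as np
from scipy.spatial.transform import Rotation as R, Slerp

# Two body->world orientations (quaternions as x, y, z, w) bracketing the image timestamp.
key_rots = R.from_quat([[0, 0, 0, 1],                                      # identity
                        [0, 0, np.sin(np.pi / 4), np.cos(np.pi / 4)]])     # 90 deg about z
slerp = Slerp([0.0, 1.0], key_rots)

t = 0.5                      # normalized position of the image timestamp between the two rows
interp_rot = slerp([t])      # interpolated body->world rotation
v_world = np.array([1.0, 0.0, 0.0])

# inverse=True maps the world-frame velocity into the interpolated body frame,
# matching interpolate_record() above.
v_body = interp_rot.apply(v_world, inverse=True)[0]
print(v_body)                # roughly [0.707, -0.707, 0.0]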
import tensorflow as tf from tensorflow.keras import Model from tensorflow.keras.activations import softplus, relu from tensorflow.keras.backend import random_normal from tensorflow.keras.layers import Dense, Flatten, Conv2D, BatchNormalization, Lambda, Concatenate, Conv2DTranspose, Reshape import dronet import decoders import transformer # model definition class class Cmvae(Model): def __init__(self, n_z, gate_dim=4, res=96, trainable_model=True): super(Cmvae, self).__init__() # create the 3 base models: self.q_img = dronet.Dronet(num_outputs=n_z*2, include_top=True) self.p_img = decoders.ImgDecoder() self.p_gate = decoders.GateDecoder(gate_dim=gate_dim) # Create sampler self.mean_params = Lambda(lambda x: x[:, : n_z]) self.stddev_params = Lambda(lambda x: x[:, n_z:]) def call(self, x, mode): # Possible modes for reconstruction: # 0: img -> img + gate # 1: img -> img # 2: img -> gate x = self.q_img(x) means = self.mean_params(x) stddev = tf.math.exp(0.5 * self.stddev_params(x)) eps = random_normal(tf.shape(stddev)) z = means + eps * stddev if mode == 0: img_recon = self.p_img(z) gate_recon = self.p_gate(z) return img_recon, gate_recon, means, stddev, z elif mode == 1: img_recon = self.p_img(z) gate_recon = False return img_recon, gate_recon, means, stddev, z elif mode == 2: img_recon = False gate_recon = self.p_gate(z) return img_recon, gate_recon, means, stddev, z def encode(self, x): x = self.q_img(x) means = self.mean_params(x) stddev = tf.math.exp(0.5 * self.stddev_params(x)) eps = random_normal(tf.shape(stddev)) z = means + eps * stddev return z, means, stddev def decode(self, z, mode): # Possible modes for reconstruction: # 0: z -> img + gate # 1: z -> img # 2: z -> gate if mode == 0: img_recon = self.p_img(z) gate_recon = self.p_gate(z) return img_recon, gate_recon elif mode == 1: img_recon = self.p_img(z) gate_recon = False return img_recon, gate_recon elif mode == 2: img_recon = False gate_recon = self.p_gate(z) return img_recon, gate_recon # model definition class class CmvaeDirect(Model): def __init__(self, n_z, gate_dim=4, res=96, trainable_model=True): super(CmvaeDirect, self).__init__() # create the base models: self.q_img = dronet.Dronet(num_outputs=n_z*2, include_top=True) self.p_img = decoders.ImgDecoder() self.p_R = transformer.NonLinearTransformer() self.p_Theta = transformer.NonLinearTransformer() self.p_Psi = transformer.NonLinearTransformer() self.p_Phi = transformer.NonLinearTransformer() # Create sampler self.mean_params = Lambda(lambda x: x[:, : n_z]) self.stddev_params = Lambda(lambda x: x[:, n_z:]) self.R_params = Lambda(lambda x: x[:, 0]) self.Theta_params = Lambda(lambda x: x[:, 1]) self.Psi_params = Lambda(lambda x: x[:, 2]) self.Phi_params = Lambda(lambda x: x[:, 3]) def call(self, x, mode): # Possible modes for reconstruction: # 0: img -> img + gate # 1: img -> img # 2: img -> gate x = self.q_img(x) means = self.mean_params(x) stddev = tf.math.exp(0.5 * self.stddev_params(x)) eps = random_normal(tf.shape(stddev)) z = means + eps * stddev r_params, theta_params, psi_params, phi_params = self.extract_gate_params(z) if mode == 0: gate_recon = tf.keras.layers.concatenate([self.p_R(r_params), self.p_Theta(theta_params), self.p_Psi(psi_params), self.p_Phi(phi_params)], axis=1) img_recon = self.p_img(z) return img_recon, gate_recon, means, stddev, z elif mode == 1: img_recon = self.p_img(z) gate_recon = False return img_recon, gate_recon, means, stddev, z elif mode == 2: img_recon = False gate_recon = tf.keras.layers.concatenate([self.p_R(r_params), 
self.p_Theta(theta_params), self.p_Psi(psi_params), self.p_Phi(phi_params)], axis=1) return img_recon, gate_recon, means, stddev, z def encode(self, x): x = self.q_img(x) means = self.mean_params(x) stddev = tf.math.exp(0.5 * self.stddev_params(x)) eps = random_normal(tf.shape(stddev)) z = means + eps * stddev return z, means, stddev def decode(self, z, mode): # Possible modes for reconstruction: # 0: z -> img + gate # 1: z -> img # 2: z -> gate r_params, theta_params, psi_params, phi_params = self.extract_gate_params(z) if mode == 0: gate_recon = tf.keras.layers.concatenate([self.p_R(r_params), self.p_Theta(theta_params), self.p_Psi(psi_params), self.p_Phi(phi_params)], axis=1) img_recon = self.p_img(z) return img_recon, gate_recon elif mode == 1: img_recon = self.p_img(z) gate_recon = False return img_recon, gate_recon elif mode == 2: gate_recon = tf.keras.layers.concatenate([self.p_R(r_params), self.p_Theta(theta_params), self.p_Psi(psi_params), self.p_Phi(phi_params)], axis=1) img_recon = False return img_recon, gate_recon def extract_gate_params(self, z): # extract part of z vector r_params = self.R_params(z) theta_params = self.Theta_params(z) psi_params = self.Psi_params(z) phi_params = self.Phi_params(z) # reshape variables r_params = tf.reshape(r_params, [r_params.shape[0], 1]) theta_params = tf.reshape(theta_params, [theta_params.shape[0], 1]) psi_params = tf.reshape(psi_params, [psi_params.shape[0], 1]) phi_params = tf.reshape(phi_params, [phi_params.shape[0], 1]) return r_params, theta_params, psi_params, phi_params
AirSim-Drone-Racing-VAE-Imitation/racing_models/cmvae.py/0
{ "file_path": "AirSim-Drone-Racing-VAE-Imitation/racing_models/cmvae.py", "repo_id": "AirSim-Drone-Racing-VAE-Imitation", "token_count": 3131 }
85
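Both models above sample the latent code with the standard reparameterization trick: the encoder output packs means and log-variances, and z = mean + eps * std. A stripped-down TensorFlow sketch of just that sampling step follows; the shapes are illustrative.

import tensorflow as tf

def sample_latent(encoder_output, n_z):
    # encoder_output packs [means | log-variances], exactly as q_img does above.
    means = encoder_output[:, :n_z]
    stddev = tf.math.exp(0.5 * encoder_output[:, n_z:])
    eps = tf.random.normal(tf.shape(stddev))
    return means + eps * stddev   # differentiable w.r.t. means and stddev

dummy = tf.random.normal([2, 20])   # batch of 2, n_z = 10
z = sample_latent(dummy, n_z=10)
print(z.shape)                      # (2, 10)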
from argparse import ArgumentParser
import subprocess


class DockerImageBuilder():
    def __init__(self, args):
        self.args = args

    def build_docker_image(self):
        # if a base image is not specified, we use the Ubuntu 18, CUDA 10 image from NVIDIA
        docker_build_command = ['docker', 'build', '--network=host',
                                '-t', self.args.target_image,
                                '-f', self.args.dockerfile,
                                '--build-arg', 'BASE_IMAGE=' + self.args.base_image,
                                '.']
        print(" ".join(docker_build_command))
        subprocess.call(docker_build_command)


def main(args):
    docker_image_builder = DockerImageBuilder(args)
    docker_image_builder.build_docker_image()


if __name__ == "__main__":
    parser = ArgumentParser(description='AirSim Neurips-Game-of-Drones docker image builder')
    parser.add_argument('--dockerfile', type=str, default='Dockerfile', help='path to docker file')
    parser.add_argument('--base_image', type=str, default="nvidia/cudagl:10.0-devel-ubuntu18.04",
                        help='base image name AND tag, on top of which the target image is built')
    parser.add_argument('--target_image', type=str, help='desired name of target image name AND tag')
    args = parser.parse_args()

    # if a target image name is not specified, let's call it airsim_neurips:SOURCE_IMAGE_TAG
    if not args.target_image:
        target_image_tag = args.base_image.split(":")[1]
        args.target_image = 'airsim_neurips' + ':' + target_image_tag

    main(args)
AirSim-NeurIPS2019-Drone-Racing/docker/build_docker_image.py/0
{ "file_path": "AirSim-NeurIPS2019-Drone-Racing/docker/build_docker_image.py", "repo_id": "AirSim-NeurIPS2019-Drone-Racing", "token_count": 642 }
86
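The builder above simply shells out to docker build; when --target_image is omitted it derives the tag from the base image. A tiny sketch of that default-naming rule, using the script's own default base image:

# Default-tag derivation used above: airsim_neurips:<tag of the base image>.
base_image = "nvidia/cudagl:10.0-devel-ubuntu18.04"
target_image = "airsim_neurips:" + base_image.split(":")[1]
print(target_image)   # airsim_neurips:10.0-devel-ubuntu18.04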
# Application Insights for Python

This repository holds components that enable telemetry scenarios for your Python applications to send to [Azure Application Insights][azure_application_insights].

Below is the list of components that are within this repository:

- [Azure Monitor Events Extension][azure-monitor-events-extension]

For Azure Monitor OpenTelemetry Exporter and Distro, see [Azure SDK for Python][azure-sdk-for-python].

## Contributing

For information about contributing to this repository, see [CONTRIBUTING.md](CONTRIBUTING.md).

<!-- LINKS -->
[azure_application_insights]: https://azure.microsoft.com/documentation/articles/app-insights-overview/
[azure-monitor-events-extension]: https://github.com/microsoft/ApplicationInsights-Python/blob/main/azure-monitor-events-extension/README.md
[azure-sdk-for-python]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor
ApplicationInsights-Python/README.md/0
{ "file_path": "ApplicationInsights-Python/README.md", "repo_id": "ApplicationInsights-Python", "token_count": 261 }
87
root = true [*] charset = utf-8 end_of_line = lf indent_size = 2 indent_style = space insert_final_newline = true trim_trailing_whitespace = true [*.{tf,tfvars}] indent_size = 2 indent_style = space [*.txt] indent_style = tab indent_size = 4 [*.{diff,md}] trim_trailing_whitespace = false [{*.yaml,*.yml}] indent_size = 2 max_line_length = 120 # Keep this updated with the yaml-lint file [Makefile] indent_style = tab [*.py] indent_size = 4 indent_style = space max_line_length = 120 [*.java] max_line_length = 120 # disable wildcard imports ij_java_class_count_to_use_import_on_demand = 99 ij_java_names_count_to_use_import_on_demand = 99
AzureTRE/.editorconfig/0
{ "file_path": "AzureTRE/.editorconfig", "repo_id": "AzureTRE", "token_count": 271 }
88
.venv
AzureTRE/airlock_processor/.funcignore/0
{ "file_path": "AzureTRE/airlock_processor/.funcignore", "repo_id": "AzureTRE", "token_count": 3 }
89
from fastapi import Depends, HTTPException, Path, status from pydantic import UUID4 from api.helpers import get_repository from db.errors import EntityDoesNotExist from resources import strings from models.domain.shared_service import SharedService from models.domain.operation import Operation from db.repositories.shared_services import SharedServiceRepository from db.repositories.operations import OperationRepository async def get_shared_service_by_id(shared_service_id: UUID4, shared_services_repo) -> SharedService: try: return await shared_services_repo.get_shared_service_by_id(shared_service_id) except EntityDoesNotExist: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=strings.SHARED_SERVICE_DOES_NOT_EXIST) async def get_shared_service_by_id_from_path(shared_service_id: UUID4 = Path(...), shared_service_repo=Depends(get_repository(SharedServiceRepository))) -> SharedService: return await get_shared_service_by_id(shared_service_id, shared_service_repo) async def get_operation_by_id_from_path(operation_id: UUID4 = Path(...), operations_repo=Depends(get_repository(OperationRepository))) -> Operation: try: return await operations_repo.get_operation_by_id(operation_id=operation_id) except EntityDoesNotExist: raise HTTPException(status_code=status.HTTP_404_NOT_FOUND, detail=strings.OPERATION_DOES_NOT_EXIST)
AzureTRE/api_app/api/dependencies/shared_services.py/0
{ "file_path": "AzureTRE/api_app/api/dependencies/shared_services.py", "repo_id": "AzureTRE", "token_count": 465 }
90
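These helpers are intended to be used as FastAPI dependencies, so a route receives an already-resolved `SharedService` (or a 404 is raised before the handler runs). A hedged sketch of how a router might wire them in; the route path and handler name are illustrative and not taken from the repository:

```python
from fastapi import APIRouter, Depends

from api.dependencies.shared_services import get_shared_service_by_id_from_path
from models.domain.shared_service import SharedService

router = APIRouter()

@router.get("/shared-services/{shared_service_id}")
async def retrieve_shared_service(shared_service: SharedService = Depends(get_shared_service_by_id_from_path)) -> SharedService:
    # the dependency has already looked the resource up, or raised HTTP 404
    return shared_service
```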
from fastapi import APIRouter from resources import strings router = APIRouter() @router.get("/ping", name=strings.API_GET_PING) @router.get("/", name=strings.API_GET_PING) def ping() -> str: # The ping endpoint is a simple endpoint that can be called by the Application Gateway # to test if it is able to reach the API return "pong"
AzureTRE/api_app/api/routes/ping.py/0
{ "file_path": "AzureTRE/api_app/api/routes/ping.py", "repo_id": "AzureTRE", "token_count": 115 }
91
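Because the handler takes no dependencies, it is easy to exercise with FastAPI's test client. A minimal sketch, assuming the application object is importable as `app` (the actual entry-point module is not shown here):

```python
from fastapi.testclient import TestClient

from main import app  # assumed entry point; adjust to the real module path

client = TestClient(app)
response = client.get("/ping")
assert response.status_code == 200
assert response.json() == "pong"
```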
import semantic_version from db.repositories.shared_services import SharedServiceRepository from db.repositories.resources import IS_ACTIVE_RESOURCE from services.logging import logger class SharedServiceMigration(SharedServiceRepository): @classmethod async def create(cls): cls = SharedServiceMigration() resource_repo = await super().create() cls._container = resource_repo._container return cls async def deleteDuplicatedSharedServices(self) -> bool: template_names = ['tre-shared-service-firewall', 'tre-shared-service-sonatype-nexus', 'tre-shared-service-gitea'] migrated = False for template_name in template_names: for item in await self.query(query=f'SELECT * FROM c WHERE c.resourceType = "shared-service" \ AND c.templateName = "{template_name}" AND {IS_ACTIVE_RESOURCE} \ ORDER BY c.updatedWhen ASC OFFSET 1 LIMIT 10000'): template_version = semantic_version.Version(item["templateVersion"]) if (template_version < semantic_version.Version('0.3.0')): logger.info(f'Deleting element {item["id"]}') await self.delete_item(item["id"]) migrated = True return migrated async def checkMinFirewallVersion(self) -> bool: template_name = 'tre-shared-service-firewall' min_template_version = semantic_version.Version('0.4.0') resources = await self.query(query=f'SELECT * FROM c WHERE c.resourceType = "shared-service" \ AND c.templateName = "{template_name}" AND {IS_ACTIVE_RESOURCE}') if not resources: raise ValueError(f"Expecting to have an instance of Firewall (template name {template_name}) deployed in a successful TRE deployment") template_version = semantic_version.Version(resources[0]["templateVersion"]) if (template_version < min_template_version): raise ValueError(f"{template_name} deployed version ({template_version}) is below minimum ({min_template_version})!", "Go to https://github.com/microsoft/AzureTRE/blob/main/CHANGELOG.md, and review release 0.5.0 for more info.") return True
AzureTRE/api_app/db/migrations/shared_services.py/0
{ "file_path": "AzureTRE/api_app/db/migrations/shared_services.py", "repo_id": "AzureTRE", "token_count": 952 }
92
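Both checks compare versions with `semantic_version.Version` rather than plain string comparison, which is what makes multi-digit components order correctly. A short illustration of the comparison the migration relies on:

```python
import semantic_version

# duplicate shared services below 0.3.0 are the ones deleteDuplicatedSharedServices removes
assert semantic_version.Version("0.2.1") < semantic_version.Version("0.3.0")

# a plain string comparison would get this one wrong ("0.10.0" < "0.9.0" lexically)
assert semantic_version.Version("0.10.0") > semantic_version.Version("0.9.0")
```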
from azure.eventgrid import EventGridEvent from azure.eventgrid.aio import EventGridPublisherClient from core import credentials async def publish_event(event: EventGridEvent, topic_endpoint: str): async with credentials.get_credential_async_context() as credential: client = EventGridPublisherClient(topic_endpoint, credential) async with client: await client.send([event])
AzureTRE/api_app/event_grid/helpers.py/0
{ "file_path": "AzureTRE/api_app/event_grid/helpers.py", "repo_id": "AzureTRE", "token_count": 128 }
93
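A hedged sketch of calling `publish_event` with a hand-built event; the event fields and topic endpoint are placeholders rather than values taken from the repository:

```python
import asyncio

from azure.eventgrid import EventGridEvent

from event_grid.helpers import publish_event  # module path as in the file above

async def main():
    event = EventGridEvent(
        subject="example/airlock-request",  # illustrative subject
        event_type="exampleEventType",      # illustrative event type
        data={"request_id": "1234"},
        data_version="1.0",
    )
    await publish_event(event, "https://<topic-name>.<region>-1.eventgrid.azure.net/api/events")

asyncio.run(main())
```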
from pydantic import Field from models.domain.resource import Resource, ResourceType class UserResource(Resource): """ User resource """ workspaceId: str = Field("", title="Workspace ID", description="Service target Workspace id") ownerId: str = Field("", title="Owner of the user resource") parentWorkspaceServiceId: str = Field("", title="Parent Workspace Service ID", description="Service target Workspace Service id") azureStatus: dict = Field({}, title="Azure Status", description="Azure status, varies per user resource") resourceType = ResourceType.UserResource
AzureTRE/api_app/models/domain/user_resource.py/0
{ "file_path": "AzureTRE/api_app/models/domain/user_resource.py", "repo_id": "AzureTRE", "token_count": 164 }
94
from enum import Enum from typing import List from pydantic import BaseModel from resources import strings class StatusEnum(str, Enum): ok = strings.OK not_ok = strings.NOT_OK class ServiceStatus(BaseModel): service: str = "" status: StatusEnum = StatusEnum.ok message: str = "" class HealthCheck(BaseModel): services: List[ServiceStatus]
AzureTRE/api_app/models/schemas/status.py/0
{ "file_path": "AzureTRE/api_app/models/schemas/status.py", "repo_id": "AzureTRE", "token_count": 122 }
95
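A minimal sketch of building the response model above; the service names and message are illustrative, and the snippet assumes the models are imported from the module shown:

```python
from models.schemas.status import HealthCheck, ServiceStatus, StatusEnum

health = HealthCheck(services=[
    ServiceStatus(service="Cosmos DB", status=StatusEnum.ok),
    ServiceStatus(service="Service Bus", status=StatusEnum.not_ok, message="connection timed out"),
])
print(health.json())
```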
{ "$schema": "http://json-schema.org/draft-07/schema", "$id": "https://github.com/microsoft/AzureTRE/schema/user_resource.json", "type": "object", "title": "User Resource Default Parameters", "description": "These parameters are required for all user resources", "required": [ "display_name", "description" ], "properties": { "display_name": { "type": "string", "title": "Name for the user resource", "description": "The name of the user resource to be displayed to users", "updateable": true }, "description": { "type": "string", "title": "Description of the user resource", "description": "Description of the user resource", "updateable": true }, "overview": { "type": "string", "title": "User Resource Overview", "description": "Long form description of the user resource, in markdown syntax", "updateable": true } } }
AzureTRE/api_app/schemas/user_resource.json/0
{ "file_path": "AzureTRE/api_app/schemas/user_resource.json", "repo_id": "AzureTRE", "token_count": 341 }
96
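The schema is plain JSON Schema draft-07, so a payload can be checked locally with the `jsonschema` package. A hedged sketch (the payload and file path are illustrative):

```python
import json

from jsonschema import ValidationError, validate

with open("schemas/user_resource.json") as f:  # path relative to the API app; adjust as needed
    schema = json.load(f)

payload = {"display_name": "my resource", "description": "analysis workstation"}

try:
    validate(instance=payload, schema=schema)
    print("payload satisfies the required user resource properties")
except ValidationError as err:
    print(f"invalid payload: {err.message}")
```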
from datetime import datetime, date, timedelta from enum import Enum from functools import lru_cache from typing import Dict, Optional, Union import pandas as pd from azure.mgmt.costmanagement import CostManagementClient from azure.mgmt.costmanagement.models import QueryGrouping, QueryAggregation, QueryDataset, QueryDefinition, \ TimeframeType, ExportType, QueryTimePeriod, QueryFilter, QueryComparisonExpression, QueryResult from azure.core.exceptions import ResourceNotFoundError, HttpResponseError from azure.mgmt.resource import ResourceManagementClient from core import config, credentials from db.errors import EntityDoesNotExist from db.repositories.shared_services import SharedServiceRepository from db.repositories.user_resources import UserResourceRepository from db.repositories.workspace_services import WorkspaceServiceRepository from db.repositories.workspaces import WorkspaceRepository from models.domain.costs import GranularityEnum, CostReport, WorkspaceCostReport, CostItem, WorkspaceServiceCostItem, \ CostRow from models.domain.resource import Resource from services.logging import logger class ResultColumnDaily(Enum): Cost = 0 Date = 1 ResourceGroup = 2 Tag = 3 Currency = 4 class ResultColumn(Enum): Cost = 0 ResourceGroup = 1 Tag = 2 Currency = 3 class WorkspaceDoesNotExist(Exception): """Raised when the workspace is not found by the provided id""" class SubscriptionNotSupported(Exception): """Raised when the subscription does not support cost management""" class TooManyRequests(Exception): """Raised when the cost management API is being throttled; retry after the given number of seconds""" retry_after: int def __init__(self, retry_after: int, *args: object) -> None: super().__init__(*args) self.retry_after = retry_after class ServiceUnavailable(Exception): """Raised when cost management is unavailable; retry after the given number of seconds""" retry_after: int def __init__(self, retry_after: int, *args: object) -> None: super().__init__(*args) self.retry_after = retry_after class CostCacheItem(): """Holds a cost query result and its time to live for storing in the cache""" result: QueryResult ttl: datetime def __init__(self, item: QueryResult, ttl: datetime) -> None: self.result = item self.ttl = ttl # make sure CostService is a singleton @lru_cache(maxsize=None) class CostService: scope: str client: CostManagementClient cache: Dict[str, CostCacheItem] TRE_ID_TAG: str = "tre_id" TRE_CORE_SERVICE_ID_TAG: str = "tre_core_service_id" TRE_WORKSPACE_ID_TAG: str = "tre_workspace_id" TRE_SHARED_SERVICE_ID_TAG: str = "tre_shared_service_id" TRE_WORKSPACE_SERVICE_ID_TAG: str = "tre_workspace_service_id" TRE_USER_RESOURCE_ID_TAG: str = "tre_user_resource_id" TRE_UNTAGGED: str = "" RATE_LIMIT_RETRY_AFTER_HEADER_KEY: str = "x-ms-ratelimit-microsoft.costmanagement-entity-retry-after" SERVICE_UNAVAILABLE_RETRY_AFTER_HEADER_KEY: str = "Retry-After" def __init__(self) -> None: self.scope = "/subscriptions/{}".format(config.SUBSCRIPTION_ID) self.client = CostManagementClient(credential=credentials.get_credential()) self.resource_client = ResourceManagementClient(credentials.get_credential(), config.SUBSCRIPTION_ID, base_url=config.RESOURCE_MANAGER_ENDPOINT, credential_scopes=config.CREDENTIAL_SCOPES) self.cache = {} def get_cached_result(self, key: str) -> Union[QueryResult, None]: """Returns a cached item result. Args: key (str): key of the cached item in the cache. Returns: result (Union[QueryResult, None]): cost query result, or None if not found or expired.
""" cached_item: CostCacheItem = self.cache.get(key, None) # return None if key doesn't exist if cached_item is None: return None # return None if key expired if (datetime.now() > cached_item.ttl): # remove expired cache item self.cache.pop(key) return None return cached_item.result def clear_expired_cache_items(self) -> None: """Clears all expired cache items.""" expired_keys = [key for key in self.cache.keys() if datetime.now() > self.cache[key].ttl] for key in expired_keys: self.cache.pop(key) def cache_result(self, key: str, result: QueryResult, timedelta: timedelta) -> None: """Add cost result to cache. Args: key (str) : key of the cached item in cache. result (QueryResult) : cost query result to cache. """ self.cache[key] = CostCacheItem(result, datetime.now() + timedelta) self.clear_expired_cache_items() async def query_tre_costs(self, tre_id, granularity: GranularityEnum, from_date: datetime, to_date: datetime, workspace_repo: WorkspaceRepository, shared_services_repo: SharedServiceRepository) -> CostReport: resource_groups_dict = self.get_resource_groups_by_tag(self.TRE_ID_TAG, tre_id) cache_key = f"{CostService.TRE_ID_TAG}_{tre_id}_granularity{granularity}_from_date{from_date}_to_date{to_date}_rgs{'_'.join(list(resource_groups_dict.keys()))}" query_result = self.get_cached_result(cache_key) if query_result is None: query_result = self.query_costs(CostService.TRE_ID_TAG, tre_id, granularity, from_date, to_date, list(resource_groups_dict.keys())) self.cache_result(cache_key, query_result, timedelta(hours=2)) summerized_result = self.summerize_untagged(query_result, granularity, resource_groups_dict) query_result_dict = self.__query_result_to_dict(summerized_result, granularity) cost_report = CostReport(core_services=[], shared_services=[], workspaces=[]) cost_report.core_services = self.__extract_cost_rows_by_tag( granularity, query_result_dict, CostService.TRE_CORE_SERVICE_ID_TAG, tre_id) cost_report.shared_services = await self.__get_shared_services_costs( granularity, query_result_dict, shared_services_repo) cost_report.workspaces = await self.__get_workspaces_costs(granularity, query_result_dict, workspace_repo) return cost_report async def query_tre_workspace_costs(self, workspace_id: str, granularity: GranularityEnum, from_date: Optional[datetime], to_date: Optional[datetime], workspace_repo: WorkspaceRepository, workspace_services_repo: WorkspaceServiceRepository, user_resource_repo) -> WorkspaceCostReport: resource_groups_dict = self.get_resource_groups_by_tag(self.TRE_WORKSPACE_ID_TAG, workspace_id) cache_key = f"{CostService.TRE_WORKSPACE_ID_TAG}_{workspace_id}_granularity{granularity}_from_date{from_date}_to_date{to_date}_rgs{'_'.join(list(resource_groups_dict.keys()))}" query_result = self.get_cached_result(cache_key) if query_result is None: query_result = self.query_costs(CostService.TRE_WORKSPACE_ID_TAG, workspace_id, granularity, from_date, to_date, list(resource_groups_dict.keys())) self.cache_result(cache_key, query_result, timedelta(hours=2)) summerized_result = self.summerize_untagged(query_result, granularity, resource_groups_dict) query_result_dict = self.__query_result_to_dict(summerized_result, granularity) try: workspace = await workspace_repo.get_workspace_by_id(workspace_id) workspace_cost_report: WorkspaceCostReport = WorkspaceCostReport( id=workspace_id, name=self.__get_resource_name(workspace), costs=self.__extract_cost_rows_by_tag(granularity, query_result_dict, CostService.TRE_WORKSPACE_ID_TAG, workspace_id), workspace_services=await 
self.__get_workspace_services_costs(granularity, query_result_dict, workspace_services_repo, user_resource_repo, workspace_id)) return workspace_cost_report except EntityDoesNotExist: raise WorkspaceDoesNotExist(f"workspace_id [{workspace_id}] does not exist") def extract_resource_group_tag(self, tags): if self.TRE_WORKSPACE_ID_TAG in tags: return f'"{self.TRE_WORKSPACE_ID_TAG}":"{tags[self.TRE_WORKSPACE_ID_TAG]}"' else: return f'"{self.TRE_ID_TAG}":"{tags[self.TRE_ID_TAG]}"' def get_resource_groups_by_tag(self, tag_name, tag_value) -> dict: resource_groups = self.resource_client.resource_groups.list(filter=f"tagName eq '{tag_name}' and tagValue eq '{tag_value}'") return {resouce_group.name: self.extract_resource_group_tag(resouce_group.tags) for resouce_group in resource_groups} def summerize_untagged(self, query_result: QueryResult, granularity: GranularityEnum, resource_groups_dict: dict) -> list: if len(query_result.rows) == 0: return [] # convert to pandas DataFrame df = pd.DataFrame.from_records(query_result.rows) columns = [] for i in range(len(query_result.columns)): columns.append(query_result.columns[i].name) df.columns = columns # fill tags for untagged untagged_resource_groups = list(df.loc[df["Tag"] == "", "ResourceGroup"].unique()) for rg in untagged_resource_groups: df.loc[(df["Tag"] == "") & (df["ResourceGroup"] == rg), "Tag"] = resource_groups_dict[rg] # group by if granularity == GranularityEnum.none: c = ["ResourceGroup", "Tag", "Currency"] else: c = ["UsageDate", "ResourceGroup", "Tag", "Currency"] df = df.groupby(c).agg({'PreTaxCost': sum}) # reset index and reorder columns df.reset_index(inplace=True) c.insert(0, "PreTaxCost") df = df[c] # convert to list of rows return df.values.tolist() def __get_resource_name(self, resource: Resource): key = "display_name" if key in resource.properties.keys(): return resource.properties[key] else: return resource.templateName def __extract_cost_item(self, resource: Resource, granularity: GranularityEnum, query_result_dict: dict, tag: str): return CostItem( id=resource.id, name=self.__get_resource_name(resource), costs=self.__extract_cost_rows_by_tag(granularity, query_result_dict, tag, resource.id) ) async def __get_workspaces_costs(self, granularity, query_result_dict, workspace_repo): return [self.__extract_cost_item(workspace, granularity, query_result_dict, CostService.TRE_WORKSPACE_ID_TAG) for workspace in await workspace_repo.get_active_workspaces()] async def __get_shared_services_costs(self, granularity, query_result_dict, shared_services_repo): return [self.__extract_cost_item(shared_service, granularity, query_result_dict, CostService.TRE_SHARED_SERVICE_ID_TAG) for shared_service in await shared_services_repo.get_active_shared_services()] async def __get_workspace_services_costs(self, granularity, query_result_dict, workspace_services_repo: WorkspaceServiceRepository, user_resource_repo: UserResourceRepository, workspace_id: str): workspace_services_costs = [] workspace_services_list = await workspace_services_repo.get_active_workspace_services_for_workspace(workspace_id) for workspace_service in workspace_services_list: workspace_service_cost_item = WorkspaceServiceCostItem( id=workspace_service.id, name=self.__get_resource_name(workspace_service), costs=self.__extract_cost_rows_by_tag(granularity, query_result_dict, CostService.TRE_WORKSPACE_SERVICE_ID_TAG, workspace_service.id), user_resources=[] ) workspace_service_cost_item.user_resources = [self.__extract_cost_item(user_resource, granularity, query_result_dict, 
CostService.TRE_USER_RESOURCE_ID_TAG) for user_resource in await user_resource_repo.get_user_resources_for_workspace_service( workspace_id, workspace_service.id)] workspace_services_costs.append(workspace_service_cost_item) return workspace_services_costs def __create_cost_row(self, cost, currency: str, cost_date: date): return CostRow(cost=cost, currency=currency, date=cost_date) def __extract_cost_rows_by_tag(self, granularity, query_result_dict, tag_name, tag_value): cost_rows = [] cost_key = f'"{tag_name}":"{tag_value}"' if cost_key in query_result_dict.keys(): costs = query_result_dict[cost_key] if granularity == GranularityEnum.none: cost_rows = [ self.__create_cost_row(cost[ResultColumn.Cost.value], cost[ResultColumn.Currency.value], None) for cost in costs] else: cost_rows = [ self.__create_cost_row(cost[ResultColumnDaily.Cost.value], cost[ResultColumnDaily.Currency.value], self.__parse_cost_management_date_value( cost[ResultColumnDaily.Date.value])) for cost in costs] return cost_rows def query_costs(self, tag_name: str, tag_value: str, granularity: GranularityEnum, from_date: Optional[datetime], to_date: Optional[datetime], resource_groups: list) -> QueryResult: query_definition = self.build_query_definition(granularity, from_date, to_date, tag_name, tag_value, resource_groups) try: return self.client.query.usage(self.scope, query_definition) except ResourceNotFoundError as e: # when cost management API returns 404 with an message: # Given subscription {subscription_id} doesn't have valid WebDirect/AIRS offer type. # it means that the Azure subscription deosn't support cost management if "doesn't have valid WebDirect/AIRS" in e.message: logger.exception("Subscription doesn't support cost management") raise SubscriptionNotSupported(e) else: logger.exception("Unhandled Cost Management API error") raise e except HttpResponseError as e: logger.exception("Cost Management API error") if e.status_code == 429: # Too many requests - Request is throttled. # Retry after waiting for the time specified in the "x-ms-ratelimit-microsoft.consumption-retry-after" header. if self.RATE_LIMIT_RETRY_AFTER_HEADER_KEY in e.response.headers: raise TooManyRequests(int(e.response.headers[self.RATE_LIMIT_RETRY_AFTER_HEADER_KEY])) else: logger.exception(f"{self.RATE_LIMIT_RETRY_AFTER_HEADER_KEY} header was not found in response") raise e elif e.status_code == 503: # Service unavailable - Service is temporarily unavailable. # Retry after waiting for the time specified in the "Retry-After" header. 
if self.SERVICE_UNAVAILABLE_RETRY_AFTER_HEADER_KEY in e.response.headers: raise ServiceUnavailable(int(e.response.headers[self.SERVICE_UNAVAILABLE_RETRY_AFTER_HEADER_KEY])) else: logger.exception(f"{self.SERVICE_UNAVAILABLE_RETRY_AFTER_HEADER_KEY} header was not found in response") raise e else: raise e def build_query_definition(self, granularity: GranularityEnum, from_date: Optional[datetime], to_date: Optional[datetime], tag_name: str, tag_value: str, resource_groups: list): tag_query_grouping: QueryGrouping = QueryGrouping(name=None, type="Tag") rg_query_grouping: QueryGrouping = QueryGrouping(name="ResourceGroup", type="Dimension") query_aggregation: QueryAggregation = QueryAggregation(name="PreTaxCost", function="Sum") query_aggregation_dict: Dict[str, QueryAggregation] = dict() query_aggregation_dict["totalCost"] = query_aggregation tag_query_filter: QueryFilter = QueryFilter( tags=QueryComparisonExpression(name=tag_name, operator="In", values=[tag_value])) rg_query_filter: QueryFilter = QueryFilter( dimensions=QueryComparisonExpression(name="ResourceGroup", operator="In", values=resource_groups) ) query_filter: QueryFilter = QueryFilter(or_property=[tag_query_filter, rg_query_filter]) query_grouping_list = list() query_grouping_list.append(rg_query_grouping) query_grouping_list.append(tag_query_grouping) query_dataset: QueryDataset = QueryDataset( granularity=granularity, aggregation=query_aggregation_dict, grouping=query_grouping_list, filter=query_filter) if from_date is None or to_date is None: query_definition: QueryDefinition = QueryDefinition( type=ExportType.actual_cost, timeframe=TimeframeType.MONTH_TO_DATE, dataset=query_dataset) else: query_time_period: QueryTimePeriod = QueryTimePeriod( from_property=from_date, to=to_date) query_definition: QueryDefinition = QueryDefinition( type=ExportType.actual_cost, timeframe=TimeframeType.CUSTOM, time_period=query_time_period, dataset=query_dataset) return query_definition def __query_result_to_dict(self, query_result: list, granularity: GranularityEnum): query_result_dict = dict() for row in query_result: tag = row[ResultColumnDaily.Tag.value if granularity == GranularityEnum.daily else ResultColumn.Tag.value] if tag in query_result_dict.keys(): query_result_dict[tag].append(row) else: query_result_dict[tag] = [row] return query_result_dict def __parse_cost_management_date_value(self, date_value: int): return datetime.strptime(str(date_value), "%Y%m%d").date() @lru_cache(maxsize=None) def cost_service_factory() -> CostService: return CostService()
AzureTRE/api_app/services/cost_service.py/0
{ "file_path": "AzureTRE/api_app/services/cost_service.py", "repo_id": "AzureTRE", "token_count": 8950 }
97
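The caching in `CostService` stores each `QueryResult` under a key that encodes the query parameters and evicts it lazily after a fixed TTL (two hours in the calls above). A stripped-down, standalone sketch of that pattern; the class and key below are illustrative, not the service itself:

```python
from datetime import datetime, timedelta
from typing import Any, Dict, Optional, Tuple

class TTLCache:
    def __init__(self) -> None:
        self._items: Dict[str, Tuple[Any, datetime]] = {}

    def get(self, key: str) -> Optional[Any]:
        item = self._items.get(key)
        if item is None:
            return None
        value, expires_at = item
        if datetime.now() > expires_at:
            self._items.pop(key)  # expired entries are evicted on access, as in get_cached_result
            return None
        return value

    def set(self, key: str, value: Any, ttl: timedelta) -> None:
        self._items[key] = (value, datetime.now() + ttl)

cache = TTLCache()
cache.set("tre_id_mytre_granularityNone", {"rows": []}, timedelta(hours=2))
print(cache.get("tre_id_mytre_granularityNone"))  # {'rows': []} until the TTL lapses
```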
import pytest from mock import patch from fastapi import status from models.domain.user_resource import UserResource from models.domain.workspace import Workspace from models.domain.workspace_service import WorkspaceService from resources import strings pytestmark = pytest.mark.asyncio WORKSPACE_ID = '933ad738-7265-4b5f-9eae-a1a62928772e' SERVICE_ID = 'abcad738-7265-4b5f-9eae-a1a62928772e' USER_RESOURCE_ID = 'abcad738-7265-4b5f-9eae-a1a62928772e' def sample_workspace(): return Workspace(id=WORKSPACE_ID, templateName='template name', templateVersion='1.0', etag='', properties={"client_id": "12345"}, resourcePath="test") def sample_workspace_service(): return WorkspaceService(id=SERVICE_ID, templateName='template name', templateVersion='1.0', etag='', resourcePath="test", properties={}) def sample_user_resource(): return UserResource(id=USER_RESOURCE_ID, templateName='template name', templateVersion='1.0', etag='', resourcePath="test", properties={}) # TEMPLATES class TestTemplateRoutesThatRequireAdminRights: @pytest.fixture(autouse=True, scope='class') def log_in_with_non_admin(self, app, non_admin_user): # try accessing the route with a non-admin user with patch('services.aad_authentication.AzureADAuthorization._get_user_from_token', return_value=non_admin_user()): yield async def test_post_workspace_templates_requires_admin_rights(self, app, client): response = await client.post(app.url_path_for(strings.API_CREATE_WORKSPACE_TEMPLATES), json='{}') assert response.status_code == status.HTTP_403_FORBIDDEN async def test_post_workspace_service_templates_requires_admin_rights(self, app, client): response = await client.post(app.url_path_for(strings.API_CREATE_WORKSPACE_SERVICE_TEMPLATES), json='{}') assert response.status_code == status.HTTP_403_FORBIDDEN async def test_post_user_resource_templates_requires_admin_rights(self, app, client): response = await client.post(app.url_path_for(strings.API_CREATE_USER_RESOURCE_TEMPLATES, service_template_name="not-important"), json='{}') assert response.status_code == status.HTTP_403_FORBIDDEN # RESOURCES class TestWorkspaceRoutesThatRequireAdminRights: @pytest.fixture(autouse=True, scope='class') def log_in_with_non_owner(self, app, researcher_user): # try accessing the route with a non-owner user with patch('services.aad_authentication.AzureADAuthorization._get_user_from_token', return_value=researcher_user()): with patch("api.dependencies.workspaces.WorkspaceRepository.get_workspace_by_id", return_value=sample_workspace()): yield async def test_post_workspace_requires_admin_rights(self, app, client): response = await client.post(app.url_path_for(strings.API_CREATE_WORKSPACE), json='{}') assert response.status_code == status.HTTP_403_FORBIDDEN async def test_patch_workspace_requires_admin_rights(self, app, client): response = await client.patch(app.url_path_for(strings.API_UPDATE_WORKSPACE, workspace_id=WORKSPACE_ID), json='{}') assert response.status_code == status.HTTP_403_FORBIDDEN async def test_delete_workspace_requires_admin_rights(self, app, client): response = await client.delete(app.url_path_for(strings.API_DELETE_WORKSPACE, workspace_id=WORKSPACE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN class TestWorkspaceServiceOwnerRoutesAccess: @pytest.fixture(autouse=True, scope='class') def log_in_with_non_owner(self, app, researcher_user): # try accessing the route with a non-admin user with patch('services.aad_authentication.AzureADAuthorization._get_user_from_token', return_value=researcher_user()): with 
patch("api.dependencies.workspaces.WorkspaceRepository.get_workspace_by_id", return_value=sample_workspace()): yield # [POST] /workspaces/{workspace_id}/workspace-services/ @patch("api.dependencies.workspaces.WorkspaceServiceRepository.get_workspace_service_by_id", return_value=sample_workspace_service()) async def test_post_workspace_service_raises_403_if_user_is_not_owner(self, _, app, client): workspace_service_input = { "templateName": "test-workspace-service", "properties": { "display_name": "display", "client_id": "f0acf127-a672-a672-a672-a15e5bf9f127" } } response = await client.post(app.url_path_for(strings.API_CREATE_WORKSPACE_SERVICE, workspace_id=WORKSPACE_ID), json=workspace_service_input) assert response.status_code == status.HTTP_403_FORBIDDEN @patch("api.dependencies.workspaces.WorkspaceServiceRepository.get_workspace_service_by_id", return_value=None) async def test_delete_workspace_service_raises_403_if_user_is_not_workspace_owner(self, _, app, client) -> None: response = await client.delete(app.url_path_for(strings.API_DELETE_WORKSPACE_SERVICE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN class TestWorkspaceServiceOwnerOrResearcherRoutesAccess: @pytest.fixture(autouse=True, scope='class') def log_in_with_non_owner_or_researcher(self, app, no_workspace_role_user): # try accessing the route with a non-admin user with patch('services.aad_authentication.AzureADAuthorization._get_user_from_token', return_value=no_workspace_role_user()): with patch("api.dependencies.workspaces.WorkspaceRepository.get_workspace_by_id", return_value=sample_workspace()): yield # [GET] /workspaces/{workspace_id}/workspace-services @patch("api.routes.workspaces.WorkspaceServiceRepository.get_active_workspace_services_for_workspace", return_value=[]) async def test_get_workspace_services_raises_403_if_user_is_not_owner_or_researcher_of_workspace(self, _, app, client): response = await client.get(app.url_path_for(strings.API_GET_ALL_WORKSPACE_SERVICES, workspace_id=WORKSPACE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN # [GET] /workspaces/{workspace_id}/workspace-services/{service_id} @patch("api.dependencies.workspaces.WorkspaceServiceRepository.get_workspace_service_by_id", return_value=sample_workspace_service()) async def test_get_workspace_service_raises_403_if_user_is_not_owner_or_researcher_in_workspace(self, _, app, client): response = await client.get(app.url_path_for(strings.API_GET_WORKSPACE_SERVICE_BY_ID, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN # [PATCH] /workspaces/{workspace_id}/services/{service_id} @patch("api.dependencies.workspaces.WorkspaceServiceRepository.get_workspace_service_by_id", return_value=None) async def test_patch_workspaces_service_raises_403_if_user_is_not_workspace_owner(self, _, app, client) -> None: response = await client.patch(app.url_path_for(strings.API_UPDATE_WORKSPACE_SERVICE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID), json={"enabled": False}) assert response.status_code == status.HTTP_403_FORBIDDEN class TestUserResourcesOwnerOrResearcherRoutesAccess: @pytest.fixture(autouse=True, scope='class') def log_in_with_non_owner_or_researcher(self, app, no_workspace_role_user): # try accessing the route with a non-admin user with patch('services.aad_authentication.AzureADAuthorization._get_user_from_token', return_value=no_workspace_role_user()): with 
patch("api.dependencies.workspaces.WorkspaceRepository.get_workspace_by_id", return_value=sample_workspace()): yield # [GET] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources/{resource_id} @patch("api.dependencies.workspaces.UserResourceRepository.get_user_resource_by_id") async def test_get_user_resource_raises_403_if_user_is_not_workspace_owner_or_researcher(self, _, app, client): response = await client.get( app.url_path_for(strings.API_GET_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID, resource_id=USER_RESOURCE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN # [GET] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources async def test_get_user_resources_raises_403_if_user_is_not_researcher_or_owner_of_workspace(self, app, client): response = await client.get(app.url_path_for(strings.API_GET_MY_USER_RESOURCES, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN # [POST] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources async def test_post_user_resource_raises_403_if_user_is_not_workspace_owner_or_researcher(self, app, client): input_data = { "templateName": "test-user-resource", "properties": {"display_name": "display"} } response = await client.post(app.url_path_for(strings.API_CREATE_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID), json=input_data) assert response.status_code == status.HTTP_403_FORBIDDEN # [PATCH] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources/{resource_id} async def test_patch_user_resource_raises_403_if_user_is_not_workspace_owner_or_researcher(self, app, client): response = await client.patch(app.url_path_for(strings.API_UPDATE_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID, resource_id=USER_RESOURCE_ID), json={"enabled": False}) assert response.status_code == status.HTTP_403_FORBIDDEN # [DELETE] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources/{resource_id} async def test_delete_user_resource_raises_403_if_user_is_not_workspace_owner_or_researcher(self, app, client): response = await client.delete( app.url_path_for(strings.API_DELETE_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID, resource_id=USER_RESOURCE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN class TestUserResourcesRoutesOwnerOrResourceOwnerAccess: @pytest.fixture(autouse=True, scope='class') def log_in_with_non_owner(self, app, researcher_user): # try accessing the route with a non-admin user with patch('services.aad_authentication.AzureADAuthorization._get_user_from_token', return_value=researcher_user()): with patch("api.dependencies.workspaces.WorkspaceRepository.get_workspace_by_id", return_value=sample_workspace()): yield # [GET] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources/{resource_id} @patch("api.dependencies.workspaces.UserResourceRepository.get_user_resource_by_id") async def test_get_user_resource_raises_403_if_user_is_researcher_and_not_owner_of_resource(self, get_user_resource_mock, app, client): user_resource = sample_user_resource() user_resource.ownerId = "11111" # not users id get_user_resource_mock.return_value = user_resource response = await client.get(app.url_path_for(strings.API_GET_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID, resource_id=USER_RESOURCE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN # [DELETE] 
/workspaces/{workspace_id}/workspace-services/{service_id}/user-resources/{resource_id} @patch("api.dependencies.workspaces.WorkspaceServiceRepository.get_workspace_service_by_id") @patch("api.routes.workspaces.ResourceTemplateRepository.get_template_by_name_and_version") @patch("api.dependencies.workspaces.UserResourceRepository.get_user_resource_by_id") async def test_delete_user_resource_raises_403_if_user_is_researcher_and_not_owner_of_resource(self, get_user_resource_mock, resource_template_repo_mock, get_workspace_service_mock, app, client, basic_resource_template): user_resource = sample_user_resource() user_resource.ownerId = "11111" # not users id get_user_resource_mock.return_value = user_resource get_workspace_service_mock.return_value = sample_workspace_service() resource_template_repo_mock.return_value = basic_resource_template response = await client.delete(app.url_path_for(strings.API_DELETE_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID, resource_id=USER_RESOURCE_ID)) assert response.status_code == status.HTTP_403_FORBIDDEN # [PATCH] /workspaces/{workspace_id}/workspace-services/{service_id}/user-resources/{resource_id} @patch("api.dependencies.workspaces.UserResourceRepository.get_user_resource_by_id") @patch("api.dependencies.workspaces.WorkspaceServiceRepository.get_workspace_service_by_id") async def test_patch_user_resource_raises_403_if_user_is_researcher_and_not_owner_of_resource(self, _, get_user_resource_mock, app, client): user_resource = sample_user_resource() user_resource.ownerId = "11111" # not users id get_user_resource_mock.return_value = user_resource response = await client.patch(app.url_path_for(strings.API_UPDATE_USER_RESOURCE, workspace_id=WORKSPACE_ID, service_id=SERVICE_ID, resource_id=USER_RESOURCE_ID), json={"isEnabled": False}, headers={"etag": "some-etag"}) assert response.status_code == status.HTTP_403_FORBIDDEN
AzureTRE/api_app/tests_ma/test_api/test_routes/test_api_access.py/0
{ "file_path": "AzureTRE/api_app/tests_ma/test_api/test_routes/test_api_access.py", "repo_id": "AzureTRE", "token_count": 5199 }
98
from unittest.mock import AsyncMock, MagicMock from fastapi import HTTPException from mock import patch import pytest import pytest_asyncio from tests_ma.test_api.conftest import create_test_user from models.schemas.airlock_request import AirlockRequestInCreate from models.domain.airlock_request import AirlockRequest, AirlockRequestStatus, AirlockRequestType from db.repositories.airlock_requests import AirlockRequestRepository from db.errors import EntityDoesNotExist from azure.cosmos.exceptions import CosmosResourceNotFoundError, CosmosAccessConditionFailedError pytestmark = pytest.mark.asyncio WORKSPACE_ID = "abc000d3-82da-4bfc-b6e9-9a7853ef753e" AIRLOCK_REQUEST_ID = "ce45d43a-e734-469a-88a0-109faf4a611f" DRAFT = AirlockRequestStatus.Draft SUBMITTED = AirlockRequestStatus.Submitted IN_REVIEW = AirlockRequestStatus.InReview APPROVED_IN_PROGRESS = AirlockRequestStatus.ApprovalInProgress APPROVED = AirlockRequestStatus.Approved REJECTION_IN_PROGRESS = AirlockRequestStatus.RejectionInProgress REJECTED = AirlockRequestStatus.Rejected CANCELLED = AirlockRequestStatus.Cancelled BLOCKING_IN_PROGRESS = AirlockRequestStatus.BlockingInProgress BLOCKED = AirlockRequestStatus.Blocked FAILED = AirlockRequestStatus.Failed ALL_STATUSES = [enum.value for enum in AirlockRequestStatus] ALLOWED_STATUS_CHANGES = { DRAFT: [SUBMITTED, CANCELLED, FAILED], SUBMITTED: [IN_REVIEW, BLOCKING_IN_PROGRESS, FAILED], IN_REVIEW: [APPROVED_IN_PROGRESS, REJECTION_IN_PROGRESS, CANCELLED, FAILED], APPROVED_IN_PROGRESS: [APPROVED, FAILED], APPROVED: [], REJECTION_IN_PROGRESS: [REJECTED, FAILED], REJECTED: [], CANCELLED: [], BLOCKING_IN_PROGRESS: [BLOCKED, FAILED], BLOCKED: [], FAILED: [], } @pytest_asyncio.fixture async def airlock_request_repo(): with patch('api.dependencies.database.Database.get_container_proxy', return_value=AsyncMock()): airlock_request_repo_mock = await AirlockRequestRepository.create() yield airlock_request_repo_mock @pytest.fixture def sample_airlock_request_input(): return AirlockRequestInCreate(type=AirlockRequestType.Import, businessJustification="Some business justification") @pytest.fixture def verify_dictionary_contains_all_enum_values(): for status in ALL_STATUSES: if status not in ALLOWED_STATUS_CHANGES: raise Exception(f"Status '{status}' was not added to the ALLOWED_STATUS_CHANGES dictionary") def airlock_request_mock(status=AirlockRequestStatus.Draft): airlock_request = AirlockRequest( id=AIRLOCK_REQUEST_ID, workspaceId=WORKSPACE_ID, type=AirlockRequestType.Import, files=[], businessJustification="some test reason", status=status, reviews=[] ) return airlock_request def get_allowed_status_changes(): for current_status, allowed_new_statuses in ALLOWED_STATUS_CHANGES.items(): for new_status in allowed_new_statuses: yield current_status, new_status def get_forbidden_status_changes(): for current_status, allowed_new_statuses in ALLOWED_STATUS_CHANGES.items(): forbidden_new_statuses = list(set(ALL_STATUSES) - set(allowed_new_statuses)) for new_status in forbidden_new_statuses: yield current_status, new_status async def test_get_airlock_request_by_id(airlock_request_repo): airlock_request = airlock_request_mock() airlock_request_repo.read_item_by_id = AsyncMock(return_value=airlock_request) actual_service = await airlock_request_repo.get_airlock_request_by_id(AIRLOCK_REQUEST_ID) assert actual_service == airlock_request async def test_get_airlock_request_by_id_raises_entity_does_not_exist_if_no_such_request_id(airlock_request_repo): airlock_request_repo.read_item_by_id = AsyncMock() 
airlock_request_repo.read_item_by_id.side_effect = CosmosResourceNotFoundError with pytest.raises(EntityDoesNotExist): await airlock_request_repo.get_airlock_request_by_id(AIRLOCK_REQUEST_ID) async def test_create_airlock_request_item_creates_an_airlock_request_with_the_right_values(sample_airlock_request_input, airlock_request_repo): airlock_request_item_to_create = sample_airlock_request_input created_by_user = {'id': 'test_user_id'} airlock_request = airlock_request_repo.create_airlock_request_item(airlock_request_item_to_create, WORKSPACE_ID, created_by_user) assert airlock_request.workspaceId == WORKSPACE_ID assert airlock_request.createdBy['id'] == 'test_user_id' @pytest.mark.parametrize("current_status, new_status", get_allowed_status_changes()) async def test_update_airlock_request_with_allowed_new_status_should_update_request_status(airlock_request_repo, current_status, new_status, verify_dictionary_contains_all_enum_values): user = create_test_user() mock_existing_request = airlock_request_mock(status=current_status) airlock_request = await airlock_request_repo.update_airlock_request(mock_existing_request, user, new_status) assert airlock_request.status == new_status @pytest.mark.parametrize("current_status, new_status", get_forbidden_status_changes()) async def test_update_airlock_request_with_forbidden_status_should_fail_on_validation(airlock_request_repo, current_status, new_status, verify_dictionary_contains_all_enum_values): user = create_test_user() mock_existing_request = airlock_request_mock(status=current_status) with pytest.raises(HTTPException): await airlock_request_repo.update_airlock_request(mock_existing_request, user, new_status) @patch("db.repositories.airlock_requests.AirlockRequestRepository.update_airlock_request_item", side_effect=[CosmosAccessConditionFailedError, None]) @patch("db.repositories.airlock_requests.AirlockRequestRepository.get_airlock_request_by_id", return_value=airlock_request_mock(status=DRAFT)) async def test_update_airlock_request_should_retry_update_when_etag_is_not_up_to_date(_, update_airlock_request_item_mock, airlock_request_repo): expected_update_attempts = 2 user = create_test_user() mock_existing_request = airlock_request_mock(status=DRAFT) await airlock_request_repo.update_airlock_request(original_request=mock_existing_request, updated_by=user, new_status=SUBMITTED) assert update_airlock_request_item_mock.call_count == expected_update_attempts async def test_get_airlock_requests_queries_db(airlock_request_repo): airlock_request_repo.container.query_items = MagicMock() expected_query = airlock_request_repo.airlock_requests_query() + f' WHERE c.workspaceId = "{WORKSPACE_ID}"' expected_parameters = [ {"name": "@user_id", "value": None}, {"name": "@status", "value": None}, {"name": "@type", "value": None}, ] await airlock_request_repo.get_airlock_requests(WORKSPACE_ID) airlock_request_repo.container.query_items.assert_called_once_with(query=expected_query, parameters=expected_parameters)
AzureTRE/api_app/tests_ma/test_db/test_repositories/test_airlock_request_repository.py/0
{ "file_path": "AzureTRE/api_app/tests_ma/test_db/test_repositories/test_airlock_request_repository.py", "repo_id": "AzureTRE", "token_count": 2577 }
99
import copy import json from unittest.mock import MagicMock, ANY from pydantic import parse_obj_as import pytest import uuid from mock import AsyncMock, patch from tests_ma.test_api.test_routes.test_resource_helpers import FAKE_CREATE_TIMESTAMP, FAKE_UPDATE_TIMESTAMP from models.domain.request_action import RequestAction from models.domain.resource import ResourceType from db.errors import EntityDoesNotExist from models.domain.workspace import Workspace from models.domain.operation import DeploymentStatusUpdateMessage, Operation, OperationStep, Status from resources import strings from service_bus.deployment_status_updater import DeploymentStatusUpdater pytestmark = pytest.mark.asyncio test_data = [ 'bad', '{"good": "json", "bad": "message"}' ] OPERATION_ID = "0000c8e7-5c42-4fcb-a7fd-294cfc27aa76" test_sb_message = { "operationId": OPERATION_ID, "stepId": "random-uuid", "id": "59b5c8e7-5c42-4fcb-a7fd-294cfc27aa76", "status": Status.Deployed, "message": "test message", "correlation_id": "test_correlation_id" } test_sb_message_with_outputs = { "operationId": OPERATION_ID, "stepId": "random-uuid", "id": "59b5c8e7-5c42-4fcb-a7fd-294cfc27aa76", "status": Status.Deployed, "message": "test message", "outputs": [ {"Name": "string1", "Value": "value1", "Type": "string"}, {"Name": "string2", "Value": "\"value2\"", "Type": "string"}, {"Name": "boolean1", "Value": "True", "Type": "boolean"}, {"Name": "boolean2", "Value": "true", "Type": "boolean"}, {"Name": "boolean3", "Value": "\"true\"", "Type": "boolean"}, {"Name": "list1", "Value": "['one', 'two']", "Type": "string"}, {"Name": "list2", "Value": ['one', 'two'], "Type": "string"} ] } test_sb_message_multi_step_1_complete = { "operationId": OPERATION_ID, "stepId": "random-uuid-1", "id": "59b5c8e7-5c42-4fcb-a7fd-294cfc27aa76", "status": Status.Updated, "message": "upgrade succeeded" } test_sb_message_multi_step_3_complete = { "operationId": OPERATION_ID, "stepId": "random-uuid-3", "id": "59b5c8e7-5c42-4fcb-a7fd-294cfc27aa76", "status": Status.Updated, "message": "upgrade succeeded" } class ServiceBusReceivedMessageMock: def __init__(self, message: dict): self.message = json.dumps(message) self.correlation_id = "test_correlation_id" self.session_id = "test_session_id" def __str__(self): return self.message def create_sample_workspace_object(workspace_id): return Workspace( id=workspace_id, templateName="tre-workspace-base", templateVersion="0.1.0", etag='', properties={}, resourcePath="test" ) def create_sample_operation(resource_id, request_action): return Operation( id=OPERATION_ID, resourceId=resource_id, resourcePath=f'/workspaces/{resource_id}', resourceVersion=0, action=request_action, message="test", createdWhen=FAKE_CREATE_TIMESTAMP, updatedWhen=FAKE_UPDATE_TIMESTAMP, steps=[ OperationStep( id="random-uuid", templateStepId="main", resourceId=resource_id, stepTitle=f"main step for {resource_id}", resourceTemplateName="workspace-base", resourceType=ResourceType.Workspace, resourceAction=request_action, updatedWhen=FAKE_UPDATE_TIMESTAMP, sourceTemplateResourceId=resource_id ) ] ) @pytest.mark.parametrize("payload", test_data) @patch('services.logging.logger.exception') async def test_receiving_bad_json_logs_error(logging_mock, payload): service_bus_received_message_mock = ServiceBusReceivedMessageMock(payload) status_updater = DeploymentStatusUpdater() complete_message = await status_updater.process_message(service_bus_received_message_mock) # bad message data will fail. 
we don't mark complete=true since we want the message in the DLQ assert complete_message is False # check we logged the error error_message = logging_mock.call_args.args[0] assert error_message.startswith(strings.DEPLOYMENT_STATUS_MESSAGE_FORMAT_INCORRECT) @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') @patch('services.logging.logger.exception') async def test_receiving_good_message(logging_mock, resource_repo, operation_repo, _, __): expected_workspace = create_sample_workspace_object(test_sb_message["id"]) resource_repo.return_value.get_resource_dict_by_id.return_value = expected_workspace.dict() operation = create_sample_operation(test_sb_message["id"], RequestAction.Install) operation_repo.return_value.get_operation_by_id.return_value = operation status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(ServiceBusReceivedMessageMock(test_sb_message)) assert complete_message is True resource_repo.return_value.get_resource_dict_by_id.assert_called_once_with(uuid.UUID(test_sb_message["id"])) resource_repo.return_value.update_item_dict.assert_called_once_with(expected_workspace.dict()) logging_mock.assert_not_called() @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') @patch('services.logging.logger.exception') async def test_when_updating_non_existent_workspace_error_is_logged(logging_mock, resource_repo, operation_repo, _, __): resource_repo.return_value.get_resource_dict_by_id.side_effect = EntityDoesNotExist operation = create_sample_operation(test_sb_message["id"], RequestAction.Install) operation_repo.return_value.get_operation_by_id.return_value = operation status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(ServiceBusReceivedMessageMock(test_sb_message)) assert complete_message is True expected_error_message = strings.DEPLOYMENT_STATUS_ID_NOT_FOUND.format(test_sb_message["id"]) logging_mock.assert_called_once_with(expected_error_message) @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') @patch('services.logging.logger.exception') async def test_when_updating_and_state_store_exception(logging_mock, resource_repo, operation_repo, _, __): resource_repo.return_value.get_resource_dict_by_id.side_effect = Exception operation = create_sample_operation(test_sb_message["id"], RequestAction.Install) operation_repo.return_value.get_operation_by_id.return_value = operation status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(ServiceBusReceivedMessageMock(test_sb_message)) logging_mock.assert_called_once_with("Failed to update status") assert complete_message is False 
@patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch("service_bus.deployment_status_updater.get_timestamp", return_value=FAKE_UPDATE_TIMESTAMP) @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') async def test_state_transitions_from_deployed_to_deleted(resource_repo, operations_repo_mock, _, __, ___): updated_message = test_sb_message updated_message["status"] = Status.Deleted updated_message["message"] = "Has been deleted" service_bus_received_message_mock = ServiceBusReceivedMessageMock(updated_message) workspace = create_sample_workspace_object(test_sb_message["id"]) resource_repo.return_value.get_resource_dict_by_id.return_value = workspace.dict() operation = create_sample_operation(workspace.id, RequestAction.UnInstall) operation.steps[0].status = Status.Deployed operations_repo_mock.return_value.get_operation_by_id.return_value = operation expected_operation = create_sample_operation(workspace.id, RequestAction.UnInstall) expected_operation.steps[0].status = Status.Deleted expected_operation.steps[0].message = updated_message["message"] expected_operation.status = Status.Deleted expected_operation.message = updated_message["message"] status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(service_bus_received_message_mock) assert complete_message is True operations_repo_mock.return_value.update_item.assert_called_once_with(expected_operation) @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') async def test_outputs_are_added_to_resource_item(resource_repo, operations_repo, _, __): received_message = test_sb_message_with_outputs received_message["status"] = Status.Deployed service_bus_received_message_mock = ServiceBusReceivedMessageMock(received_message) resource = create_sample_workspace_object(received_message["id"]) resource.properties = {"exitingName": "exitingValue"} resource_repo.return_value.get_resource_dict_by_id.return_value = resource.dict() new_params = { "string1": "value1", "string2": "value2", "boolean1": True, "boolean2": True, "boolean3": True, "list1": "['one', 'two']", "list2": ["one", "two"], } expected_resource = resource expected_resource.properties = {**resource.properties, **new_params} operation = create_sample_operation(resource.id, RequestAction.UnInstall) operations_repo.return_value.get_operation_by_id.return_value = operation status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(service_bus_received_message_mock) assert complete_message is True resource_repo.return_value.update_item_dict.assert_called_once_with(expected_resource) @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') async def test_properties_dont_change_with_no_outputs(resource_repo, operations_repo, _, __): 
received_message = test_sb_message received_message["status"] = Status.Deployed service_bus_received_message_mock = ServiceBusReceivedMessageMock(received_message) resource = create_sample_workspace_object(received_message["id"]) resource.properties = {"exitingName": "exitingValue"} resource_repo.return_value.get_resource_dict_by_id.return_value = resource.dict() operation = create_sample_operation(resource.id, RequestAction.UnInstall) operations_repo.return_value.get_operation_by_id.return_value = operation expected_resource = resource status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(service_bus_received_message_mock) assert complete_message is True resource_repo.return_value.update_item_dict.assert_called_once_with(expected_resource.dict()) @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.update_resource_for_step') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') @patch('service_bus.helpers.ServiceBusClient') async def test_multi_step_operation_sends_next_step(sb_sender_client, resource_repo, operations_repo, update_resource_for_step, _, __, multi_step_operation, user_resource_multi, basic_shared_service): received_message = test_sb_message_multi_step_1_complete received_message["status"] = Status.Updated service_bus_received_message_mock = ServiceBusReceivedMessageMock(received_message) sb_sender_client().get_queue_sender().send_messages = AsyncMock() # step 1 resource resource_repo.return_value.get_resource_dict_by_id.return_value = basic_shared_service.dict() # step 2 resource resource_repo.return_value.get_resource_by_id.return_value = user_resource_multi operations_repo.return_value.update_item.return_value = MagicMock(return_value=basic_shared_service) # get the multi-step operation and process it operations_repo.return_value.get_operation_by_id.return_value = multi_step_operation update_resource_for_step.return_value = user_resource_multi status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(service_bus_received_message_mock) assert complete_message is True # check the resource is updated as expected update_resource_for_step.assert_called_once_with( operation_step=ANY, resource_repo=ANY, resource_template_repo=ANY, resource_history_repo=ANY, root_resource=ANY, step_resource=ANY, resource_to_update_id=multi_step_operation.steps[1].resourceId, primary_action=ANY, user=ANY) resource_repo.return_value.get_resource_by_id.assert_called_with(multi_step_operation.resourceId) # check the operation is updated as expected expected_operation = copy.deepcopy(multi_step_operation) expected_operation.status = Status.PipelineRunning expected_operation.message = "Multi step pipeline running. See steps for details." 
expected_operation.steps[0].status = Status.Updated expected_operation.steps[0].message = "upgrade succeeded" operations_repo.return_value.update_item.assert_called_once_with(expected_operation) # check it sent a message on for the next step sb_sender_client().get_queue_sender().send_messages.assert_called_once() @patch('service_bus.deployment_status_updater.ResourceHistoryRepository.create') @patch('service_bus.deployment_status_updater.ResourceTemplateRepository.create') @patch('service_bus.deployment_status_updater.OperationRepository.create') @patch('service_bus.deployment_status_updater.ResourceRepository.create') @patch('service_bus.helpers.ServiceBusClient') async def test_multi_step_operation_ends_at_last_step(sb_sender_client, resource_repo, operations_repo, _, __, multi_step_operation, user_resource_multi, basic_shared_service): received_message = test_sb_message_multi_step_3_complete received_message["status"] = Status.Updated service_bus_received_message_mock = ServiceBusReceivedMessageMock(received_message) sb_sender_client().get_queue_sender().send_messages = AsyncMock() # step 2 resource resource_repo.return_value.get_resource_dict_by_id.return_value = user_resource_multi.dict() # step 3 resource resource_repo.return_value.get_resource_by_id.return_value = basic_shared_service operations_repo.return_value.update_item.return_value = MagicMock(return_value=user_resource_multi) # get the multi-step operation and process it # simulate what the op would look like after step 2 in_flight_op = copy.deepcopy(multi_step_operation) in_flight_op.status = Status.PipelineRunning in_flight_op.message = "Multi step pipeline running. See steps for details." in_flight_op.steps[0].status = Status.Updated in_flight_op.steps[0].message = "upgrade succeeded" in_flight_op.steps[1].status = Status.Deployed in_flight_op.steps[1].message = "install succeeded" in_flight_op.steps[2].status = Status.Updating operations_repo.return_value.get_operation_by_id.return_value = in_flight_op status_updater = DeploymentStatusUpdater() await status_updater.init_repos() complete_message = await status_updater.process_message(service_bus_received_message_mock) assert complete_message is True # check the operation is updated as expected - both step and overall status expected_operation = copy.deepcopy(in_flight_op) expected_operation.status = Status.Deployed expected_operation.message = "Multi step pipeline completed successfully" expected_operation.steps[2].status = Status.Updated expected_operation.steps[2].message = "upgrade succeeded" operations_repo.return_value.update_item.assert_called_once_with(expected_operation) # check it did _not_ enqueue another message sb_sender_client().get_queue_sender().send_messages.assert_not_called() async def test_convert_outputs_to_dict(): # Test case 1: Empty list of outputs outputs_list = [] expected_result = {} status_updater = DeploymentStatusUpdater() assert status_updater.convert_outputs_to_dict(outputs_list) == expected_result # Test case 2: List of outputs with mixed types deployment_status_update_message = parse_obj_as(DeploymentStatusUpdateMessage, test_sb_message_with_outputs) expected_result = { 'string1': 'value1', 'string2': 'value2', 'boolean1': True, 'boolean2': True, 'boolean3': True, 'list1': "['one', 'two']", 'list2': ['one', 'two'] } assert status_updater.convert_outputs_to_dict(deployment_status_update_message.outputs) == expected_result
AzureTRE/api_app/tests_ma/test_service_bus/test_deployment_status_update.py/0
{ "file_path": "AzureTRE/api_app/tests_ma/test_service_bus/test_deployment_status_update.py", "repo_id": "AzureTRE", "token_count": 6631 }
100
# ----- ADMIN REGISTERS REQUIRED TEMPLATES ----- ### Register workspace template (admin) POST {{baseUrl}}/workspace-templates Accept: {{contentType}} Authorization: Bearer {{token}} Content-Type: {{contentType}} { "name": "my-tre-workspace", "version": "0.0.3", "current": "true", "json_schema": { "$schema": "http://json-schema.org/draft-07/schema", "$id": "https://github.com/microsoft/AzureTRE/templates/workspaces/myworkspace/workspace.json", "type": "object", "title": "My Workspace Template", "description": "My Workspace Template", "required": [ "vm_size", "no_of_vms" ], "properties": { "vm_size": { "$id": "#/properties/vm_size", "type": "string", "title": "VM size", "description": "Size of the VMs in my workspace", "default": "Standard_A1", "enum": [ "Standard_A1", "Standard_A2", "Standard_A3" ] }, "no_of_vms": { "$id": "#/properties/no_of_vms", "type": "integer", "title": "Number of VMs", "description": "Number of virtual machines to be deployed in the workspace", "default": 0 } } } } ### Register workspace service template (admin) POST {{baseUrl}}/workspace-service-templates Accept: {{contentType}} Authorization: Bearer {{token}} Content-Type: {{contentType}} { "name": "my-tre-workspace-service", "version": "0.0.1", "current": "true", "json_schema": { "$schema": "http://json-schema.org/draft-07/schema", "$id": "https://github.com/microsoft/AzureTRE/templates/workspaces/myworkspace/workspace_service.json", "type": "object", "title": "My Workspace Service Template", "description": "My Workspace Service Template", "required": [], "properties": {} } } ### Register user resource template (admin) POST {{baseUrl}}/workspace-service-templates/my-tre-workspace-service/user-resource-templates Accept: {{contentType}} Authorization: Bearer {{token}} Content-Type: {{contentType}} { "name": "my-tre-user-resource", "version": "0.0.1", "current": "true", "json_schema": { "$schema": "http://json-schema.org/draft-07/schema", "$id": "https://github.com/microsoft/AzureTRE/templates/workspaces/myworkspace/user_resource.json", "type": "object", "title": "My User Resource Template", "description": "My User Resource Template", "required": [], "properties": {} } } ### # ----- ADMIN CREATES A WORKSPACE WITH A WORKSPACE SERVICE ----- ### Create a workspace (admin) POST {{baseUrl}}/workspaces Accept: {{contentType}} Authorization: Bearer {{token}} Content-Type: {{contentType}} { "templateName": "{{workspaceTemplate}}", "properties": { "display_name": "my workspace", "description": "my workspace", "app_id": "{{appId}}", "vm_size": "Standard_A1", "no_of_vms": 2 } } # ----- WORKSPACE OWNER CREATES A WORKSPACE WITH A WORKSPACE SERVICE ----- ### Create a workspace service (workspace owner) POST {{baseUrl}}/workspaces/{{workspaceId}}/workspace-services Accept: {{contentType}} Authorization: Bearer {{token}} Content-Type: {{contentType}} { "templateName": "{{workspaceServiceTemplate}}", "properties": { "display_name": "my workspace service", "description": "my workspace service" } } ### # ----- RESEARCHER CREATES A USER RESOURCE ----- ### Create a user resource (researcher) POST {{baseUrl}}/workspaces/{{workspaceId}}/workspace-services/{{workspaceServiceId}}/user-resources Accept: {{contentType}} Authorization: Bearer {{token}} Content-Type: {{contentType}} { "templateName": "{{userResourceTemplate}}", "properties": { "display_name": "my user resource", "description": "my user resource" } } ### # ----- RESEARCHER OR WORKSPACE OWNER DELETES THE USER RESOURCE ----- # ----- WORKSPACE OWNER DELETES THE WORKSPACE SERVICE 
-----

### Disable a workspace service (workspace owner)
PATCH {{baseUrl}}/workspaces/{{workspaceId}}/workspace-services/{{workspaceServiceId}}
Accept: {{contentType}}
Authorization: Bearer {{token}}
Content-Type: {{contentType}}

{
  "enabled": false
}

### Delete a workspace service (workspace owner)
DELETE {{baseUrl}}/workspaces/{{workspaceId}}/workspace-services/{{workspaceServiceId}}
Accept: {{contentType}}
Authorization: Bearer {{token}}

# ----- ADMIN DELETES THE WORKSPACE -----

### Disable a workspace (admin)
PATCH {{baseUrl}}/workspaces/{{workspaceId}}
Accept: {{contentType}}
Authorization: Bearer {{token}}
Content-Type: {{contentType}}

{
  "enabled": false
}

### Delete a workspace (admin)
DELETE {{baseUrl}}/workspaces/{{workspaceId}}
Accept: {{contentType}}
Authorization: Bearer {{token}}
AzureTRE/api_http_requests/API User Journey.http/0
{ "file_path": "AzureTRE/api_http_requests/API User Journey.http", "repo_id": "AzureTRE", "token_count": 1707 }
101
import json
import click
import logging

import jwt

from tre.api_client import ApiClient
from tre.commands.workspaces.workspace import workspace_id_completion
from tre.output import output_result, output_option, query_option


@click.command(name="get-token", help="Get an access token")
@click.option('--scope', required=False, help='The scope to get the token for (defaults to root scope)')
@click.option('--workspace', required=False, help='The workspace to get the token for (cannot be used with --scope)', shell_complete=workspace_id_completion)
@click.option('--decode', is_flag=True, help='Decode the JWT token')
@output_option()
@query_option()
def get_token(scope, workspace, decode, output_format, query):
    log = logging.getLogger(__name__)

    client = ApiClient.get_api_client_from_config()
    if workspace is not None:
        if scope is not None:
            raise click.ClickException("Cannot use --scope and --workspace")
        else:
            scope = client.get_workspace_scope(log, workspace)

    token = client.get_auth_token(log, scope)
    if decode:
        # Skip signature verification otherwise we get:
        # 'Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an
        #  unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).'
        click.echo("Warning: the signature of the token is not verified", err=True)
        decoded_token = jwt.decode(token, algorithms=['RS256'], options={"verify_signature": False})
        output_result(json.dumps(decoded_token), output_format=output_format, query=query)
    else:
        if output_format:
            click.echo("Ignoring --output for non-decoded token", err=True)
        if query:
            click.echo("Ignoring --query for non-decoded token", err=True)
        click.echo(token)
AzureTRE/cli/tre/commands/get_token.py/0
{ "file_path": "AzureTRE/cli/tre/commands/get_token.py", "repo_id": "AzureTRE", "token_count": 745 }
102
import click class WorkspaceServiceTemplateContext(object): def __init__(self, template_name: str): self.template_name = template_name pass_workspace_service_template_context = click.make_pass_decorator(WorkspaceServiceTemplateContext)
AzureTRE/cli/tre/commands/workspace_service_templates/contexts.py/0
{ "file_path": "AzureTRE/cli/tre/commands/workspace_service_templates/contexts.py", "repo_id": "AzureTRE", "token_count": 77 }
103
import click class WorkspaceContext(object): def __init__(self, workspace_id: str): self.workspace_id = workspace_id pass_workspace_context = click.make_pass_decorator(WorkspaceContext) class WorkspaceOperationContext(object): def __init__(self, workspace_id: str, operation_id: str): self.workspace_id = workspace_id self.operation_id = operation_id @staticmethod def add_operation_id_to_context_obj(ctx: click.Context, operation_id: str) -> "WorkspaceOperationContext": workspace_context = ctx.find_object(WorkspaceContext) return WorkspaceOperationContext(workspace_context.workspace_id, operation_id) pass_workspace_operation_context = click.make_pass_decorator(WorkspaceOperationContext)
AzureTRE/cli/tre/commands/workspaces/contexts.py/0
{ "file_path": "AzureTRE/cli/tre/commands/workspaces/contexts.py", "repo_id": "AzureTRE", "token_count": 258 }
104
import click import json import logging from tre.api_client import ApiClient from tre.commands.operation import default_operation_table_query_single, operation_show from tre.output import output, output_option, query_option @click.group(help="List/add workspaces") def workspaces() -> None: pass @click.command(name="list", help="List workspaces") @output_option() @query_option() def workspaces_list(output_format, query): log = logging.getLogger(__name__) client = ApiClient.get_api_client_from_config() response = client.call_api(log, 'GET', '/api/workspaces') output( response, output_format=output_format, query=query, default_table_query=r"workspaces[].{id:id, display_name:properties.display_name, deployment_status:deploymentStatus, workspace_url:workspaceURL}") return response.text @click.command(name="new", help="Create a new workspace") @click.option('--definition', help='JSON definition for the workspace', required=False) @click.option('--definition-file', help='File containing JSON definition for the workspace', required=False, type=click.File("r")) @click.option('--no-wait', flag_value=True, default=False) @output_option() @query_option() @click.pass_context def workspaces_create(ctx, definition, definition_file, no_wait, output_format, query): log = logging.getLogger(__name__) if definition is None: if definition_file is None: raise click.UsageError('Please specify either a definition or a definition file') definition = definition_file.read() definition_dict = json.loads(definition) client = ApiClient.get_api_client_from_config() click.echo("Creating workspace...", err=True) response = client.call_api(log, 'POST', '/api/workspaces', json_data=definition_dict) if no_wait: output(response, output_format=output_format, query=query, default_table_query=default_operation_table_query_single()) return response.text else: operation_url = response.headers['location'] operation_show(log, operation_url, no_wait=False, output_format=output_format, query=query) workspaces.add_command(workspaces_list) workspaces.add_command(workspaces_create)
AzureTRE/cli/tre/commands/workspaces/workspaces.py/0
{ "file_path": "AzureTRE/cli/tre/commands/workspaces/workspaces.py", "repo_id": "AzureTRE", "token_count": 770 }
105
data "local_file" "api_app_version" { filename = "${path.root}/../../api_app/_version.py" } locals { version = replace(replace(replace(data.local_file.api_app_version.content, "__version__ = \"", ""), "\"", ""), "\n", "") } resource "azurerm_service_plan" "core" { name = "plan-${var.tre_id}" resource_group_name = azurerm_resource_group.core.name location = azurerm_resource_group.core.location os_type = "Linux" sku_name = var.core_app_service_plan_sku tags = local.tre_core_tags worker_count = 1 lifecycle { ignore_changes = [tags] } } resource "azurerm_linux_web_app" "api" { name = "api-${var.tre_id}" resource_group_name = azurerm_resource_group.core.name location = azurerm_resource_group.core.location service_plan_id = azurerm_service_plan.core.id https_only = true key_vault_reference_identity_id = azurerm_user_assigned_identity.id.id virtual_network_subnet_id = module.network.web_app_subnet_id tags = local.tre_core_tags app_settings = { "APPLICATIONINSIGHTS_CONNECTION_STRING" = module.azure_monitor.app_insights_connection_string "APPLICATIONINSIGHTS_STATSBEAT_DISABLED_ALL" = "True" "ApplicationInsightsAgent_EXTENSION_VERSION" = "~3" "XDT_MicrosoftApplicationInsights_Mode" = "default" "WEBSITES_PORT" = "8000" "STATE_STORE_ENDPOINT" = azurerm_cosmosdb_account.tre_db_account.endpoint "COSMOSDB_ACCOUNT_NAME" = azurerm_cosmosdb_account.tre_db_account.name "SERVICE_BUS_FULLY_QUALIFIED_NAMESPACE" = local.service_bus_namespace_fqdn "EVENT_GRID_STATUS_CHANGED_TOPIC_ENDPOINT" = module.airlock_resources.event_grid_status_changed_topic_endpoint "EVENT_GRID_AIRLOCK_NOTIFICATION_TOPIC_ENDPOINT" = module.airlock_resources.event_grid_airlock_notification_topic_endpoint "SERVICE_BUS_RESOURCE_REQUEST_QUEUE" = azurerm_servicebus_queue.workspacequeue.name "SERVICE_BUS_DEPLOYMENT_STATUS_UPDATE_QUEUE" = azurerm_servicebus_queue.service_bus_deployment_status_update_queue.name "SERVICE_BUS_STEP_RESULT_QUEUE" = module.airlock_resources.service_bus_step_result_queue "MANAGED_IDENTITY_CLIENT_ID" = azurerm_user_assigned_identity.id.client_id "TRE_ID" = var.tre_id "RESOURCE_LOCATION" = azurerm_resource_group.core.location "ENABLE_SWAGGER" = var.enable_swagger "SWAGGER_UI_CLIENT_ID" = var.swagger_ui_client_id "AAD_TENANT_ID" = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.auth_tenant_id.id})" "API_CLIENT_ID" = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.api_client_id.id})" "API_CLIENT_SECRET" = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.api_client_secret.id})" "RESOURCE_GROUP_NAME" = azurerm_resource_group.core.name "SUBSCRIPTION_ID" = data.azurerm_subscription.current.subscription_id CORE_ADDRESS_SPACE = var.core_address_space TRE_ADDRESS_SPACE = var.tre_address_space ARM_ENVIRONMENT = var.arm_environment AAD_AUTHORITY_URL = module.terraform_azurerm_environment_configuration.active_directory_endpoint RESOURCE_MANAGER_ENDPOINT = module.terraform_azurerm_environment_configuration.resource_manager_endpoint MICROSOFT_GRAPH_URL = module.terraform_azurerm_environment_configuration.microsoft_graph_endpoint STORAGE_ENDPOINT_SUFFIX = module.terraform_azurerm_environment_configuration.storage_suffix LOGGING_LEVEL = var.logging_level OTEL_RESOURCE_ATTRIBUTES = "service.name=api,service.version=${local.version}" OTEL_EXPERIMENTAL_RESOURCE_DETECTORS = "azure_app_service" } identity { type = "UserAssigned" identity_ids = [azurerm_user_assigned_identity.id.id] } lifecycle { ignore_changes = [ tags, ] } site_config { http2_enabled = true vnet_route_all_enabled = true 
container_registry_use_managed_identity = true container_registry_managed_identity_client_id = azurerm_user_assigned_identity.id.client_id minimum_tls_version = "1.2" ftps_state = "Disabled" application_stack { docker_image = "${local.docker_registry_server}/${var.api_image_repository}" docker_image_tag = local.version } cors { allowed_origins = [ var.enable_local_debugging ? "http://localhost:3000" : "" ] } } logs { application_logs { file_system_level = "Information" } http_logs { file_system { retention_in_days = 7 retention_in_mb = 100 } } } depends_on = [ module.airlock_resources ] } resource "azurerm_private_endpoint" "api_private_endpoint" { name = "pe-api-${var.tre_id}" resource_group_name = azurerm_resource_group.core.name location = azurerm_resource_group.core.location subnet_id = module.network.shared_subnet_id tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } private_service_connection { private_connection_resource_id = azurerm_linux_web_app.api.id name = "psc-api-${var.tre_id}" subresource_names = ["sites"] is_manual_connection = false } private_dns_zone_group { name = module.terraform_azurerm_environment_configuration.private_links["privatelink.azurewebsites.net"] private_dns_zone_ids = [module.network.azurewebsites_dns_zone_id] } } resource "azurerm_monitor_diagnostic_setting" "webapp_api" { name = "diag-${var.tre_id}" target_resource_id = azurerm_linux_web_app.api.id log_analytics_workspace_id = module.azure_monitor.log_analytics_workspace_id dynamic "enabled_log" { for_each = setintersection(data.azurerm_monitor_diagnostic_categories.api.log_category_types, local.api_diagnostic_categories_enabled) content { category = enabled_log.value } } metric { category = "AllMetrics" enabled = true } lifecycle { ignore_changes = [log_analytics_destination_type] } }
AzureTRE/core/terraform/api-webapp.tf/0
{ "file_path": "AzureTRE/core/terraform/api-webapp.tf", "repo_id": "AzureTRE", "token_count": 3532 }
106
#!/bin/bash
set -e

# This script is used to uninstall the bundle directly without having to interact with Porter

# if the two plan files are not provided, print usage and exit
if [[ $# -ne 2 || -z $1 || -z $2 ]]; then
  echo "Usage: $0 <left_plan_file> <right_plan_file>"
  exit 1
fi

left_tfplan=$1
right_tfplan=$2

echo "Comparing ${left_tfplan} to ${right_tfplan}..."

# Keep only the resource changes that are not "no-op" or "read" actions
function plan_change() {
  terraform show -json "$1" | jq -r '.resource_changes[] | select(.change.actions[] | contains("no-op") or contains("read") | not)' > "$1_filtered.json"
}

plan_change "${left_tfplan}"
plan_change "${right_tfplan}"

diff <(jq --sort-keys . "${left_tfplan}"_filtered.json) <(jq --sort-keys . "${right_tfplan}"_filtered.json)
AzureTRE/core/terraform/compare_plans.sh/0
{ "file_path": "AzureTRE/core/terraform/compare_plans.sh", "repo_id": "AzureTRE", "token_count": 243 }
107
resource "azurerm_virtual_network" "core" { name = "vnet-${var.tre_id}" location = var.location resource_group_name = var.resource_group_name address_space = [var.core_address_space] tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } } resource "azurerm_subnet" "bastion" { name = "AzureBastionSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.bastion_subnet_address_prefix] } resource "azurerm_subnet" "azure_firewall" { name = "AzureFirewallSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.firewall_subnet_address_space] depends_on = [azurerm_subnet.bastion] } resource "azurerm_subnet" "app_gw" { name = "AppGwSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.app_gw_subnet_address_prefix] private_endpoint_network_policies_enabled = false private_link_service_network_policies_enabled = true depends_on = [azurerm_subnet.azure_firewall] } resource "azurerm_subnet" "web_app" { name = "WebAppSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.web_app_subnet_address_prefix] private_endpoint_network_policies_enabled = false private_link_service_network_policies_enabled = true depends_on = [azurerm_subnet.app_gw] delegation { name = "delegation" service_delegation { name = "Microsoft.Web/serverFarms" actions = ["Microsoft.Network/virtualNetworks/subnets/action"] } } } resource "azurerm_subnet" "shared" { name = "SharedSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.shared_services_subnet_address_prefix] # notice that private endpoints do not adhere to NSG rules private_endpoint_network_policies_enabled = false depends_on = [azurerm_subnet.web_app] } resource "azurerm_subnet" "resource_processor" { name = "ResourceProcessorSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.resource_processor_subnet_address_prefix] # notice that private endpoints do not adhere to NSG rules private_endpoint_network_policies_enabled = false depends_on = [azurerm_subnet.shared] } resource "azurerm_subnet" "airlock_processor" { name = "AirlockProcessorSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.airlock_processor_subnet_address_prefix] # notice that private endpoints do not adhere to NSG rules private_endpoint_network_policies_enabled = false depends_on = [azurerm_subnet.resource_processor] delegation { name = "delegation" service_delegation { name = "Microsoft.Web/serverFarms" actions = ["Microsoft.Network/virtualNetworks/subnets/action"] } } # Todo: needed as we want to open the fw for this subnet in some of the airlock storages (export inprogress) # https://github.com/microsoft/AzureTRE/issues/2098 service_endpoints = ["Microsoft.Storage"] } resource "azurerm_subnet" "airlock_notification" { name = "AirlockNotifiactionSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.airlock_notifications_subnet_address_prefix] # notice that private endpoints do not adhere to NSG rules private_endpoint_network_policies_enabled = false 
depends_on = [azurerm_subnet.airlock_processor] delegation { name = "delegation" service_delegation { name = "Microsoft.Web/serverFarms" actions = ["Microsoft.Network/virtualNetworks/subnets/action"] } } } resource "azurerm_subnet" "airlock_storage" { name = "AirlockStorageSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.airlock_storage_subnet_address_prefix] # notice that private endpoints do not adhere to NSG rules private_endpoint_network_policies_enabled = false depends_on = [azurerm_subnet.airlock_notification] } resource "azurerm_subnet" "airlock_events" { name = "AirlockEventsSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.airlock_events_subnet_address_prefix] # notice that private endpoints do not adhere to NSG rules private_endpoint_network_policies_enabled = false depends_on = [azurerm_subnet.airlock_storage] # Eventgrid CAN'T send messages over private endpoints, hence we need to allow service endpoints to the service bus # We are using service endpoints + managed identity to send these messaages # https://docs.microsoft.com/en-us/azure/event-grid/consume-private-endpoints service_endpoints = ["Microsoft.ServiceBus"] } resource "azurerm_subnet" "firewall_management" { name = "AzureFirewallManagementSubnet" virtual_network_name = azurerm_virtual_network.core.name resource_group_name = var.resource_group_name address_prefixes = [local.firewall_management_subnet_address_prefix] depends_on = [azurerm_subnet.airlock_events] } resource "azurerm_ip_group" "resource_processor" { name = "ipg-resource-processor" location = var.location resource_group_name = var.resource_group_name cidrs = [local.resource_processor_subnet_address_prefix] tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } } resource "azurerm_ip_group" "shared" { name = "ipg-shared" location = var.location resource_group_name = var.resource_group_name cidrs = [local.shared_services_subnet_address_prefix] tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } } resource "azurerm_ip_group" "webapp" { name = "ipg-web-app" location = var.location resource_group_name = var.resource_group_name cidrs = [local.web_app_subnet_address_prefix] tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } } resource "azurerm_ip_group" "airlock_processor" { name = "ipg-airlock-processor" location = var.location resource_group_name = var.resource_group_name cidrs = [local.airlock_processor_subnet_address_prefix] tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } } module "terraform_azurerm_environment_configuration" { source = "git::https://github.com/microsoft/terraform-azurerm-environment-configuration.git?ref=0.2.0" arm_environment = var.arm_environment }
AzureTRE/core/terraform/network/network.tf/0
{ "file_path": "AzureTRE/core/terraform/network/network.tf", "repo_id": "AzureTRE", "token_count": 3316 }
108
resource "azurerm_storage_account" "stg" { name = lower(replace("stg-${var.tre_id}", "-", "")) resource_group_name = azurerm_resource_group.core.name location = azurerm_resource_group.core.location account_tier = "Standard" account_replication_type = "LRS" allow_nested_items_to_be_public = false tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } } resource "azurerm_storage_share" "storage_state_path" { name = "cnab-state" storage_account_name = azurerm_storage_account.stg.name quota = 50 } resource "azurerm_private_endpoint" "blobpe" { name = "pe-blob-${var.tre_id}" location = azurerm_resource_group.core.location resource_group_name = azurerm_resource_group.core.name subnet_id = module.network.shared_subnet_id tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } private_dns_zone_group { name = "private-dns-zone-group-blobcore" private_dns_zone_ids = [module.network.blob_core_dns_zone_id] } private_service_connection { name = "psc-stg-${var.tre_id}" private_connection_resource_id = azurerm_storage_account.stg.id is_manual_connection = false subresource_names = ["Blob"] } # private endpoints in serial depends_on = [ azurerm_private_endpoint.kvpe ] } resource "azurerm_private_endpoint" "filepe" { name = "pe-file-${var.tre_id}" location = azurerm_resource_group.core.location resource_group_name = azurerm_resource_group.core.name subnet_id = module.network.shared_subnet_id tags = local.tre_core_tags lifecycle { ignore_changes = [tags] } private_dns_zone_group { name = "private-dns-zone-group-filecore" private_dns_zone_ids = [module.network.file_core_dns_zone_id] } private_service_connection { name = "psc-filestg-${var.tre_id}" private_connection_resource_id = azurerm_storage_account.stg.id is_manual_connection = false subresource_names = ["file"] } # private endpoints in serial depends_on = [ azurerm_private_endpoint.blobpe ] }
AzureTRE/core/terraform/storage.tf/0
{ "file_path": "AzureTRE/core/terraform/storage.tf", "repo_id": "AzureTRE", "token_count": 1148 }
109
#!/bin/bash set -o errexit set -o pipefail set -o nounset # Get the directory that this script is in DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )" pushd "$DIR/../../ui/app" ui_version=$(jq -r '.version' package.json) activeDirectoryUri="$(az cloud show --query endpoints.activeDirectory --output tsv)" # replace the values in the config file jq --arg rootClientId "${SWAGGER_UI_CLIENT_ID}" \ --arg rootTenantId "${AAD_TENANT_ID}" \ --arg treApplicationId "api://${API_CLIENT_ID}" \ --arg treUrl "/api" \ --arg treId "${TRE_ID}" \ --arg version "${ui_version}" \ --arg activeDirectoryUri "${activeDirectoryUri}" \ '.rootClientId = $rootClientId | .rootTenantId = $rootTenantId | .treApplicationId = $treApplicationId | .treUrl = $treUrl | .treId = $treId | .version = $version | .activeDirectoryUri = $activeDirectoryUri' ./src/config.source.json > ./src/config.json # build and deploy the app yarn install yarn build popd CONTENT_DIR="$DIR/../../ui/app/build" "$DIR/upload_static_web.sh"
AzureTRE/devops/scripts/build_deploy_ui.sh/0
{ "file_path": "AzureTRE/devops/scripts/build_deploy_ui.sh", "repo_id": "AzureTRE", "token_count": 382 }
110
#!/bin/bash set -o errexit set -o pipefail set -o nounset # set -o xtrace # # Usage: # load_and_validate_env.sh # # shellcheck disable=SC1091 source "${DIR}"/construct_tre_url.sh # shellcheck disable=SC1091 source "${DIR}"/convert_azure_env_to_arm_env.sh if [ ! -f "config.yaml" ]; then if [ -z "${USE_ENV_VARS_NOT_FILES:-}" ]; then echo -e "\e[31m»»» 💥 Unable to find config.yaml file, please create file and try again!\e[0m" #exit fi else # Validate no duplicate keys in config has_dupes=$(yq e '.. | select(. == "*") | {(path | .[-1]): .}| keys' config.yaml | sort| uniq -d) if [ -n "${has_dupes:-}" ]; then echo -e "\e[31m»»» 💥 There are duplicate keys in your config, please fix and try again!\e[0m" exit 1 fi # Validate config schema if [[ $(pajv validate -s "$DIR/../../config_schema.json" -d config.yaml) != *valid* ]]; then echo -e "\e[31m»»» ⚠️ Your config.yaml is invalid 😥 Please fix the errors and retry." exit 1 fi # Get any default entries from config schema and export. Any values in config.yaml will override these defaults DEFAULT_VALUES=$(yq '[... |select(has("default"))| {"":path | .[-1] | upcase , " ": .default }| to_entries| map("=" + .value)|join("") ]' --output-format=yaml "$DIR/../../config_schema.json") # Format env string DEFAULT_VALUES=${DEFAULT_VALUES//"- ="} # Catch if no default values have been declared if [ ${#DEFAULT_VALUES} -gt 2 ]; then # Export default values for item in $DEFAULT_VALUES do # Export as UPPERCASE keys env vars # shellcheck disable=SC2163 export "$item" # TF_VAR requires the key in lowercase IFS='=' read -ra arr <<< "$item" tfkey=$(echo "${arr[0]}" | tr '[:upper:]' '[:lower:]') tfvar="TF_VAR_$tfkey=${arr[1]}" # shellcheck disable=SC2163 export "$tfvar" done fi # Get leaf keys yq query GET_LEAF_KEYS=".. | select(. == \"*\") | {(path | .[-1]): .}" # Map keys to uppercase yq query UPCASE_KEYS="with_entries(.key |= upcase)" # Prefix keys with TF_VAR_ yq query TF_KEYS="with_entries(.key |= \"TF_VAR_\" + .)" # Yq query to format the output to be in form: key=value FORMAT_FOR_ENV_EXPORT="to_entries| map(.key + \"=\" + .value)|join(\" \")" # Export as UPPERCASE keys env vars # shellcheck disable=SC2046 export $(yq e "$GET_LEAF_KEYS|$UPCASE_KEYS| $FORMAT_FOR_ENV_EXPORT" config.yaml) # Export as Terraform keys env vars # shellcheck disable=SC2046 export $(yq e "$GET_LEAF_KEYS|$TF_KEYS| $FORMAT_FOR_ENV_EXPORT" config.yaml) # Source AZURE_ENVIRONMENT and setup the ARM_ENVIRONMENT based on it AZURE_ENVIRONMENT=$(az cloud show --query name --output tsv) export AZURE_ENVIRONMENT # The ARM Environment is required by terraform to indicate the destination cloud. ARM_ENVIRONMENT=$(convert_azure_env_to_arm_env "${AZURE_ENVIRONMENT}") export ARM_ENVIRONMENT export TF_VAR_arm_environment="${ARM_ENVIRONMENT}" TRE_URL=$(construct_tre_url "${TRE_ID}" "${LOCATION}" "${AZURE_ENVIRONMENT}") export TRE_URL fi set +o nounset
AzureTRE/devops/scripts/load_and_validate_env.sh/0
{ "file_path": "AzureTRE/devops/scripts/load_and_validate_env.sh", "repo_id": "AzureTRE", "token_count": 1352 }
111
terraform destroy -auto-approve
AzureTRE/devops/terraform/destroy.sh/0
{ "file_path": "AzureTRE/devops/terraform/destroy.sh", "repo_id": "AzureTRE", "token_count": 9 }
112
# Azure TRE Overview

## What is Azure TRE?

Across the health industry, be it a pharmaceutical company interrogating clinical trial results, or a public health provider analyzing electronic health records, there is the need to enable researchers, analysts, and developers to work with sensitive data sets.

Trusted Research Environments (TREs) enable organisations to provide research teams secure access to these data sets alongside tooling to ensure researchers can be productive. Further information on TREs in general can be found in many places; one good resource is [HDR UK](https://www.hdruk.ac.uk/access-to-health-data/trusted-research-environments/).

The Azure Trusted Research Environment project is an accelerator to assist Microsoft customers and partners who want to build out Trusted Research environments on Azure. This project enables authorized users to deploy and configure secure workspaces and researcher tooling without a dependency on IT teams.

This project is typically implemented alongside a data platform that provides research ready datasets to TRE workspaces:

![Concepts](assets/TRE_Overview.png)

TREs are not “one size fits all”, so although the Azure TRE has a number of out-of-the-box features, the project has been built to be extensible, and is therefore tooling and data platform agnostic.

Core features include:

- Self-service for administrators – workspace creation and administration
- Self-service for research teams – research tooling creation and administration
- Package and repository mirroring
- Extensible architecture - build your own service templates as required
- Microsoft Entra ID integration
- Airlock
- Cost reporting
- Ready to go workspace templates including:
    - Restricted with data exfiltration control
    - Unrestricted for open data
- Ready to go workspace service templates including:
    - Virtual Desktops: Windows, Linux
    - AzureML (Jupyter, R Studio, VS Code)
    - ML Flow, Gitea
AzureTRE/docs/index.md/0
{ "file_path": "AzureTRE/docs/index.md", "repo_id": "AzureTRE", "token_count": 436 }
113
# Pipelines

The [AzureTRE deployment repository](https://github.com/microsoft/AzureTRE-Deployment) contains the following GitHub workflows:

1. Build Validation - validates the code by running the linter and Terraform validation.
1. Clean Validation Environments - a periodic workflow to clean unused AzureTRE environments.
1. Deploy Azure TRE (branch) - This workflow is intended to be used to test workflow changes. It deploys AzureTRE using the workflows defined on the branch.
1. Deploy Azure TRE - This workflow is the integration build run for pushes to the main branch. It also runs on a schedule, serving as the nightly build to keep the main AzureTRE environment in sync.
1. Deploy Azure TRE Reusable - responsible for deploying AzureTRE. It is referenced in other Azure TRE deployment workflows.

## Setup GitHub Environment

The workflows use a GitHub environment to source their environment variables. Follow [this guide](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment#creating-an-environment) to define it in your GitHub repository and provide it as an input for the workflows.

The following environment variables should be defined in your GitHub environment:

1. [Auth env vars](../../tre-admins/auth.md##create_authentication_assets)
1. [Core and Devops env vars](../../tre-admins/environment-variables.md)

Once all the environment variables are set in the GitHub environment, the next step is to use it in your pipelines:

In the AzureTRE deployment repository you will find all the pipelines under the folder `.github/workflows`. At the top of each workflow there is a workflow inputs section where the GitHub environment name is set; make sure to update it with yours, for example:

![Setup env in pipeline](../../assets/using-tre/pipelines_set_env.png)

## Publish Custom Templates in Pipelines

If you have created custom AzureTRE templates you can publish and register them as part of the CI/CD pipelines.

To do so, go to the `.github/workflows/deploy_tre_reusable.yml` workflow and add your template under the following jobs:

1. publish_bundles

    ![Publish bundle](../../assets/using-tre/push_bundles_step.png)

1. register_bundles

    ![Register bundle](../../assets/using-tre/register_bundles.png)

1. If it is a user resource, add it also under register_user_resource_bundles

    ![Register user resource step](../../assets/using-tre/register_user_resource.png)

## How to Contribute to our Documentation

If you have any comments or suggestions about our documentation then you can visit our GitHub project and either raise a new issue, or comment on one of the existing ones.

You can find our existing documentation issues on GitHub by clicking on the link below:

[Existing Documentation Issues](https://github.com/microsoft/AzureTRE/issues?q=is%3Aissue+is%3Aopen+label%3Adocumentation)

Or, you can raise a new issue by clicking on this link:

[Report an Issue or Make a Suggestion](https://github.com/microsoft/AzureTRE/issues/new/choose)

**Thank you for your patience and support!**
AzureTRE/docs/tre-admins/setup-instructions/cicd-deployment.md/0
{ "file_path": "AzureTRE/docs/tre-admins/setup-instructions/cicd-deployment.md", "repo_id": "AzureTRE", "token_count": 808 }
114
# Tear-down To remove the Azure TRE and its resources from your Azure subscription run: ```cmd make tre-destroy ``` Alternatively, you can delete the resource groups in Azure Portal or using the CLI: ```cmd az group delete --name <resource group name> ``` Finally, delete the app registrations in Azure Portal or using the CLI: ```cmd az ad app delete --id <application client ID> ```
AzureTRE/docs/tre-admins/tear-down.md/0
{ "file_path": "AzureTRE/docs/tre-admins/tear-down.md", "repo_id": "AzureTRE", "token_count": 108 }
115
# Azure CycleCloud Shared Service

Azure CycleCloud is an enterprise-friendly tool for orchestrating and managing High Performance Computing (HPC) environments on Azure. This shared service deploys a single CycleCloud server, which can be used by a TRE Administrator to create and manage multiple HPC clusters.

![CycleCloud UI](../..//assets/cyclecloud-create-cluster.jpg)

Used "as is", this shared service is only appropriate for proof of concept work and small projects; however, it can be used as a starting point for more advanced scenarios.

Using the CycleCloud cluster properties, the TRE Administrator can choose which virtual network the cluster will be deployed into, and hence the workspace the cluster can be accessed from.

At present there is no self-service cluster creation for research teams, and as such costs are not attributed to individual workspaces; however, this could be added in the future, and is tracked in this issue: <https://github.com/microsoft/AzureTRE/issues/2230>.

## Deployment and Configuration

The CycleCloud shared service template needs registering with the TRE as per <../../tre-admins/registering-templates/>.

The templates can be found at `templates/shared_services/cyclecloud`.

Prior to deploying the CycleCloud server, the license terms for any Azure VM marketplace images used by CycleCloud must be accepted. This can be done by running the following commands while logged into the Azure CLI:

```shell
az vm image terms accept --urn azurecyclecloud:azure-cyclecloud:cyclecloud8:latest
az vm image terms accept --urn almalinux:almalinux-hpc:8_5-hpc:latest
```

Deploy the CycleCloud server using the UI or API.

The TRE Administrator must connect to the CycleCloud server from the administration jumpbox. Use Azure Bastion to connect to the jumpbox with the username `admin` and the password located in your core KeyVault.

Connect to the CycleCloud server at the URL: `https://cyclecloud-{TRE_ID}.{LOCATION}.cloudapp.azure.com/`.

- Provide a name for the CycleCloud server instance.
- Review the terms and conditions and hit next.
- Provide your user details, including your SSH key.
- Hit Done, and wait for the add subscription dialog. Select the region your TRE is deployed into, leave the resource group as the default `<Create New Per Cluster>` and select the storage account beginning `stgcc`. This should look similar to:

![Add Subscription](../../assets/cyclecloud-4.jpg)

- Hit Save, and then "Back to Clusters"

## Create a Cluster

- Before you start creating the cluster, retrieve the last 4 digits of the workspace ID that you want to deploy the cluster into.
- Create a user in CycleCloud as per <https://docs.microsoft.com/en-us/azure/cyclecloud/concepts/user-management?view=cyclecloud-8#adding-new-users-to-cyclecloud>. The SSH key for the user will need to be created within the workspace and the public key exported. We suggest using the 4 digits retrieved in step 1 as part of the user account.
- Select your cluster type; we have tested Slurm and Grid Engine using the methods documented here.
- Give the cluster a name - again we suggest using the last 4 digits of the workspace ID as part of the name. Click Next.
- Select your required settings. In the **Subnet ID** box, choose the `ServicesSubnet` in the resource group and virtual network containing the 4 digit workspace ID. Click Next.
- Configure any storage settings and click Next.
- Under advanced settings, under advanced networking, uncheck Return Proxy and Public Head node. Click Next.
- Under *cloud init*, paste the script below, with the appropriate values for TRE ID and Region, into each of the nodes to ensure the package mirror is used.

```shell
#!/bin/sh
TRE_ID="mrtredemo2"
REGION="westeurope"

ls /etc/yum.repos.d/*.repo | xargs sed -i 's/mirrorlist/# mirrorlist/g'
ls /etc/yum.repos.d/*.repo | xargs sed -i "s,# baseurl=https://repo.almalinux.org/,baseurl=https://nexus-$TRE_ID.$REGION.cloudapp.azure.com/repository/almalinux/,g"

yum -y install epel-release

ls /etc/yum.repos.d/*.repo | xargs sed -i 's/metalink/# metalink/g'
ls /etc/yum.repos.d/*.repo | xargs sed -i "s,#baseurl=https://download.fedoraproject.org/,baseurl=https://nexus-$TRE_ID.$REGION.cloudapp.azure.com/repository/fedoraproject/,g"

yum -y install python3 python3-pip

sudo tee /etc/pip.conf <<EOF
[global]
index = https://nexus-$TRE_ID.$REGION.cloudapp.azure.com/repository/pypi/pypi
index-url = https://nexus-$TRE_ID.$REGION.cloudapp.azure.com/repository/pypi/simple
trusted-host = https://nexus-$TRE_ID.$REGION.cloudapp.azure.com
EOF

sudo cat > /etc/yum.repos.d/cyclecloud.repo <<EOF
[cyclecloud]
name=cyclecloud
baseurl=https://nexus-$TRE_ID.$REGION.cloudapp.azure.com/repository/microsoft-yumrepos/cyclecloud
gpgcheck=1
gpgkey=https://nexus-$TRE_ID.$REGION.cloudapp.azure.com/repository/microsoft-keys/microsoft.asc
EOF
```

- Click Save.
- Under the new cluster, click *Access*, add the user created earlier and configure node access.
- Start the cluster, ensure it starts successfully, and provide the user's connection details as detailed here: <https://docs.microsoft.com/en-us/azure/cyclecloud/how-to/connect-to-node?view=cyclecloud-8>
AzureTRE/docs/tre-templates/shared-services/cyclecloud.md/0
{ "file_path": "AzureTRE/docs/tre-templates/shared-services/cyclecloud.md", "repo_id": "AzureTRE", "token_count": 1485 }
116
# MLflow Workspace Service

See: <https://www.mlflow.org>

## Prerequisites

- [A base workspace deployed](https://microsoft.github.io/AzureTRE/tre-templates/workspaces/base/)

## MLflow Workspace VM Configuration

Each MLflow server deployment creates a PowerShell script (for Windows) and a shell script (for Linux) with the same name as the MLflow server, in the shared storage mounted on the researcher VMs. These scripts will configure the researcher VMs (by installing the required packages and setting up the environment variables) to communicate with the MLflow tracking server.

!!! note
    Please ensure that the [Nexus repository](https://microsoft.github.io/AzureTRE/tre-admins/setup-instructions/configuring-shared-services/) is configured before running the above scripts.

## MLflow set tracking URI

Researchers will be required to set the remote tracking URI in their scripts:

```python
remote_server_uri = "https://xxxxxxx.azurewebsites.net/"
mlflow.set_tracking_uri(remote_server_uri)
```

## Using with Conda-Forge

If working with Conda-Forge, you need to ensure the user resource you are using is configured correctly and is using the channels available via the [Nexus repository](../shared-services/nexus/).

If the user resource you have deployed used one of the pre-existing Guacamole user resource templates and has conda installed by default, conda will already be configured to use the correct channels via Nexus.

If not, and conda has been manually deployed on the user resource, the following script can be used to configure conda:

```shell
conda config --add channels ${nexus_proxy_url}/repository/conda/  --system
conda config --add channels ${nexus_proxy_url}/repository/conda-forge/  --system
conda config --remove channels defaults --system
conda config --set channel_alias ${nexus_proxy_url}/repository/conda/  --system
```

### conda.yml

When using a `conda.yml` file to configure your MLflow environment, you must specify the channels to use.
As the traditional channels (conda-forge, defaults, etc.) have been replaced with Nexus channels, you must ensure that the Nexus channels are specified here instead.
To retrieve these channels, run `conda config --show channels` once conda has been configured to use Nexus.

!!! note
    When logging models using sklearn, an optional parameter `conda_env` can be passed as either JSON or YML. If this is not passed, a default `conda.yml` will be generated for the model, targeting the channel `conda-forge` and causing any subsequent environments created from the model to fail.
    See the official documentation [here](https://www.mlflow.org/docs/latest/python_api/mlflow.sklearn.html) for the full details.
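As an illustration of the note above, the snippet below is a minimal sketch (not taken from the AzureTRE repository) of passing an explicit `conda_env` dictionary when logging a scikit-learn model, so that the environment generated for the model references the Nexus channels rather than `conda-forge`. The Nexus proxy URL and tracking URI are placeholders; substitute the channel values returned by `conda config --show channels` on your configured user resource.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Placeholders - replace with your TRE's Nexus proxy URL and MLflow tracking server URI
NEXUS_PROXY_URL = "https://nexus-mytre.westeurope.cloudapp.azure.com"
remote_server_uri = "https://xxxxxxx.azurewebsites.net/"

# Conda environment that references the Nexus channels instead of conda-forge/defaults
conda_env = {
    "name": "mlflow-env",
    "channels": [
        f"{NEXUS_PROXY_URL}/repository/conda/",
        f"{NEXUS_PROXY_URL}/repository/conda-forge/",
    ],
    "dependencies": [
        "python=3.8",
        "pip",
        {"pip": ["mlflow", "scikit-learn"]},
    ],
}

mlflow.set_tracking_uri(remote_server_uri)

# Train a small example model
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # Passing conda_env explicitly avoids the default conda.yml that targets conda-forge
    mlflow.sklearn.log_model(model, artifact_path="model", conda_env=conda_env)
```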
AzureTRE/docs/tre-templates/workspace-services/mlflow.md/0
{ "file_path": "AzureTRE/docs/tre-templates/workspace-services/mlflow.md", "repo_id": "AzureTRE", "token_count": 705 }
117
# Checking the Virtual Machine Scale Set (VMSS) instance running resource processor If you see messages hanging in the service bus queue then the resource processor is not up and running. Verify that the VMSS instance is up and healthy. ![VMSS Running](../assets/vmss_running.png) The processor runs in a VNET, and you cannot connect to it directly. 1. Connect to the instance using Bastion. Bastion is already deployed, and you can use the username `adminuser`. The password is stored in the keyvault under the secret `resource-processor-vmss-password` !!! info You cannot see secrets unless you are added to a suitable access policy for the Key Vault. ![VMSS Password](../assets/vmss_password.png) ![Bastion](../assets/bastion.png "Bastion") 1. After logging in you should check the status of **cloud-init** which is used to bootstrap the machine with docker and start the processor. Log files for cloud init are: - `/var/log/cloud-init.log` - `/var/log/cloud-init-output.log` If the Docker container is pulled as shown in logs then the resource processor should start. 1. Check the status of the container using `docker ps` If you see nothing (and the container was pulled) then the processor has either not started yet or it has crashed. 1. Check the status of all Docker processes using `docker ps -a` which should show you if the container terminated prematurely. 1. Get the logs from the container using `docker logs <container_id>` command. To start a processor container manually: 1. Find the **runner_image:tag** by running ``docker ps`` 1. Execute the following command from the root (/) of the file system ```cmd docker run -v /var/run/docker.sock:/var/run/docker.sock --env-file .env --name resource_processor_vmss_porter_debug [runner_image:tag] ``` ## Logs All logs from the resource processor are transferred to the App Insights instance, so it is not usually necessary to follow the progress by logging into the instance. Logging into the instance and starting a container manually however, is helpful in live debugging. When doing so, you can use the following aliases to monitor progress: * **rpstatus** - a split screen with `docker ps` to show what containers are running (a bundle action run in its own container), the Resource Processor logs, and a _free_ section for you to type any other command you wish (see below). * **dlf** - runs `docker logs --since 1m --follow`, you should use with the name/id of the container you want to view, e.g. `dlf my_container` * **dlf1** - same as `dlf` but will auto select the last container in the `docker ps` list (usually the last one started). ## Updating the running container If you start a container manually you will probably want to install software, for example, an editor. However, the firewall blocks all ingress traffic, so you cannot run `sudo apt update`. You need to add an override rule in the firewall to allow the traffic. !!! caution Remember to remove this rule when debugging is done.
AzureTRE/docs/troubleshooting-faq/troubleshooting-rp.md/0
{ "file_path": "AzureTRE/docs/troubleshooting-faq/troubleshooting-rp.md", "repo_id": "AzureTRE", "token_count": 785 }
118
from starlette.config import Config try: config = Config('.env') # Workaround needed until FastAPI uses Starlette >= 3.7.1 except FileNotFoundError: config = Config() # Resource Info RESOURCE_LOCATION: str = config("RESOURCE_LOCATION", default="") TRE_ID: str = config("TRE_ID", default="") TRE_URL: str = config("TRE_URL", default="") API_CLIENT_ID: str = config("API_CLIENT_ID", default="") TEST_USER_NAME: str = config("TEST_USER_NAME", default="") TEST_USER_PASSWORD: str = config("TEST_USER_PASSWORD", default="") TEST_APP_ID: str = config("TEST_APP_ID", default="") AAD_TENANT_ID: str = config("AAD_TENANT_ID", default="") TEST_ACCOUNT_CLIENT_ID: str = config("TEST_ACCOUNT_CLIENT_ID", default="") TEST_ACCOUNT_CLIENT_SECRET: str = config("TEST_ACCOUNT_CLIENT_SECRET", default="") TEST_WORKSPACE_APP_ID: str = config("TEST_WORKSPACE_APP_ID", default="") TEST_WORKSPACE_APP_SECRET: str = config("TEST_WORKSPACE_APP_SECRET", default="") TEST_WORKSPACE_APP_PLAN: str = config("WORKSPACE_APP_SERVICE_PLAN_SKU", default="") # Set workspace id of an existing workspace to skip creation of a workspace during E2E tests TEST_WORKSPACE_ID: str = config("TEST_WORKSPACE_ID", default="") TEST_WORKSPACE_SERVICE_ID: str = config("TEST_WORKSPACE_SERVICE_ID", default="") TEST_AAD_WORKSPACE_ID: str = config("TEST_AAD_WORKSPACE_ID", default="") TEST_AIRLOCK_IMPORT_REVIEW_WORKSPACE_ID: str = config("TEST_AIRLOCK_IMPORT_REVIEW_WORKSPACE_ID", default="") TEST_AIRLOCK_IMPORT_REVIEW_WORKSPACE_SERVICE_ID: str = config("TEST_AIRLOCK_IMPORT_REVIEW_WORKSPACE_SERVICE_ID", default="")
AzureTRE/e2e_tests/config.py/0
{ "file_path": "AzureTRE/e2e_tests/config.py", "repo_id": "AzureTRE", "token_count": 598 }
119
import pytest import logging from e2e_tests.conftest import disable_and_delete_tre_resource from datetime import date from resources.resource import post_resource from helpers import get_shared_service_by_name from resources import strings from helpers import get_admin_token LOGGER = logging.getLogger(__name__) @pytest.mark.shared_services async def test_patch_firewall(verify): template_name = strings.FIREWALL_SHARED_SERVICE patch_payload = { "properties": { "display_name": "TEST", "rule_collections": [ { "name": "e2e-rule-collection-1", "action": "Allow", "rules": [ { "name": "e2e test rule 1", "description": "desc here", "protocols": [{"port": "5555", "type": "Http"}], "target_fqdns": [ "one.two.three.microsoft.com", "two.three.microsoft.com" ], "source_addresses": ["172.196.0.0"] } ] }, { "name": "e2e-rule-collection-2", "action": "Allow", "rules": [ { "name": "e2e test rule 1", "description": "desc here", "protocols": [{"port": "5556", "type": "Http"}], "target_fqdns": [ "one.two.microsoft.com", "two.microsoft.com" ], "source_addresses": ["172.196.0.1"] } ] }, { "name": "e2e-rule-collection-3", "action": "Allow", "priority": 501, "rules": [ { "name": "e2e test rule 1", "description": "desc here", "protocols": [{"port": "5557", "type": "Http"}], "target_fqdns": [ "one.two.three.microsoft.com.uk" ], "source_addresses": ["172.196.0.2"] } ] } ], } } admin_token = await get_admin_token(verify) shared_service_firewall = await get_shared_service_by_name( template_name, verify, admin_token ) if shared_service_firewall: shared_service_path = f'/shared-services/{shared_service_firewall["id"]}' await post_resource( payload=patch_payload, endpoint=f"/api{shared_service_path}", access_token=admin_token, verify=verify, method="PATCH", etag=shared_service_firewall['_etag'], ) shared_service_templates_to_create = [ strings.GITEA_SHARED_SERVICE, strings.ADMIN_VM_SHARED_SERVICE, # Tested in test_create_certs_nexus_shared_service # strings.NEXUS_SHARED_SERVICE, strings.AIRLOCK_NOTIFIER_SHARED_SERVICE, # TODO: fix cyclecloud and enable this # strings.CYCLECLOUD_SHARED_SERVICE, ] create_airlock_notifier_properties = { "smtp_server_address": "10.1.2.3", "smtp_username": "smtp_user", "smtpPassword": "abcdefg01234567890", "smtp_from_email": "[email protected]", } @pytest.mark.shared_services @pytest.mark.timeout(50 * 60) @pytest.mark.parametrize("template_name", shared_service_templates_to_create) async def test_create_shared_service(template_name, verify) -> None: await disable_and_delete_shared_service_if_exists(template_name, verify) post_payload = { "templateName": template_name, "properties": { "display_name": f"Shared service {template_name}", "description": f"{template_name} deployed via e2e tests", }, } if template_name == strings.AIRLOCK_NOTIFIER_SHARED_SERVICE: post_payload["properties"].update(create_airlock_notifier_properties) admin_token = await get_admin_token(verify) shared_service_path, _ = await post_resource( payload=post_payload, endpoint="/api/shared-services", access_token=admin_token, verify=verify, ) await disable_and_delete_tre_resource(shared_service_path, verify) @pytest.mark.shared_services @pytest.mark.timeout(60 * 60) @pytest.mark.skipif(date.today().weekday() in [5, 6], reason="LetsEncrypt limits to 5 times a week. 
Skipping on SAT & SUN.") async def test_create_certs_nexus_shared_service(verify) -> None: await disable_and_delete_shared_service_if_exists(strings.NEXUS_SHARED_SERVICE, verify) await disable_and_delete_shared_service_if_exists(strings.CERTS_SHARED_SERVICE, verify) cert_domain = "nexus" cert_name = "nexus-ssl" certs_post_payload = { "templateName": strings.CERTS_SHARED_SERVICE, "properties": { "display_name": f"Shared service {strings.CERTS_SHARED_SERVICE}", "description": f"{strings.CERTS_SHARED_SERVICE} deployed via e2e tests", "domain_prefix": cert_domain, "cert_name": cert_name, }, } nexus_post_payload = { "templateName": strings.NEXUS_SHARED_SERVICE, "properties": { "display_name": f"Shared service {strings.NEXUS_SHARED_SERVICE}", "description": f"{strings.NEXUS_SHARED_SERVICE} deployed via e2e tests", "ssl_cert_name": cert_name, }, } admin_token = await get_admin_token(verify) certs_shared_service_path, _ = await post_resource( payload=certs_post_payload, endpoint="/api/shared-services", access_token=admin_token, verify=verify, ) nexus_shared_service_path, _ = await post_resource( payload=nexus_post_payload, endpoint="/api/shared-services", access_token=admin_token, verify=verify, ) await disable_and_delete_tre_resource(nexus_shared_service_path, verify) await disable_and_delete_tre_resource(certs_shared_service_path, verify) async def disable_and_delete_shared_service_if_exists(shared_service_name, verify) -> None: admin_token = await get_admin_token(verify) # Check that the shared service hasn't already been created shared_service = await get_shared_service_by_name( shared_service_name, verify, admin_token ) if shared_service: id = shared_service["id"] LOGGER.info( f"Shared service {shared_service_name} already exists (id {id}), deleting it first..." ) await disable_and_delete_tre_resource(f"/shared-services/{id}", verify)
AzureTRE/e2e_tests/test_shared_services.py/0
{ "file_path": "AzureTRE/e2e_tests/test_shared_services.py", "repo_id": "AzureTRE", "token_count": 3575 }
120
# Resource Status
RESOURCE_STATUS_AWAITING_DEPLOYMENT = "awaiting_deployment"
RESOURCE_STATUS_DEPLOYING = "deploying"
RESOURCE_STATUS_DEPLOYED = "deployed"
RESOURCE_STATUS_DEPLOYMENT_FAILED = "deployment_failed"

RESOURCE_STATUS_AWAITING_DELETION = "awaiting_deletion"
RESOURCE_STATUS_DELETING = "deleting"
RESOURCE_STATUS_DELETED = "deleted"
RESOURCE_STATUS_DELETING_FAILED = "deleting_failed"

RESOURCE_STATUS_AWAITING_UPDATE = "awaiting_update"
RESOURCE_STATUS_UPDATING = "updating"
RESOURCE_STATUS_UPDATED = "updated"
RESOURCE_STATUS_UPDATING_FAILED = "updating_failed"

# Resource Action Status
RESOURCE_STATUS_AWAITING_ACTION = "awaiting_action"
RESOURCE_ACTION_STATUS_INVOKING = "invoking_action"
RESOURCE_ACTION_STATUS_SUCCEEDED = "action_succeeded"
RESOURCE_ACTION_STATUS_FAILED = "action_failed"

# Pipeline (multi-step) deployments
RESOURCE_ACTION_STATUS_PIPELINE_RUNNING = "pipeline_running"
RESOURCE_ACTION_STATUS_PIPELINE_FAILED = "pipeline_failed"
RESOURCE_ACTION_STATUS_PIPELINE_SUCCEEDED = "pipeline_succeeded"

# General info messages
MESSAGE_PROCESSED = "Message processed"
WAITING_FOR_RUNNER = "Waiting for Porter bundle to execute"

# General errors
UNKNOWN_EXCEPTION = "Unknown exception"
AzureTRE/resource_processor/resources/strings.py/0
{ "file_path": "AzureTRE/resource_processor/resources/strings.py", "repo_id": "AzureTRE", "token_count": 468 }
121
{ "schemaType": "CredentialSet", "schemaVersion": "1.0.1", "namespace": "", "name": "aad_auth", "credentials": [ { "name": "auth_tenant_id", "source": { "env": "AAD_TENANT_ID" } }, { "name": "auth_client_id", "source": { "env": "APPLICATION_ADMIN_CLIENT_ID" } }, { "name": "auth_client_secret", "source": { "env": "APPLICATION_ADMIN_CLIENT_SECRET" } } ] }
AzureTRE/resource_processor/vmss_porter/aad_auth_local_debugging.json/0
{ "file_path": "AzureTRE/resource_processor/vmss_porter/aad_auth_local_debugging.json", "repo_id": "AzureTRE", "token_count": 261 }
122
# Azure Provider source and version being used terraform { required_providers { azurerm = { source = "hashicorp/azurerm" version = "=3.23.0" } random = { source = "hashicorp/random" version = "=3.4.3" } } backend "azurerm" {} } provider "azurerm" { features { key_vault { # Don't purge on destroy (this would fail due to purge protection being enabled on keyvault) purge_soft_delete_on_destroy = false purge_soft_deleted_secrets_on_destroy = false purge_soft_deleted_certificates_on_destroy = false purge_soft_deleted_keys_on_destroy = false # When recreating an environment, recover any previously soft deleted secrets - set to true by default recover_soft_deleted_key_vaults = true recover_soft_deleted_secrets = true recover_soft_deleted_certificates = true recover_soft_deleted_keys = true } } }
AzureTRE/templates/shared_services/admin-vm/terraform/main.tf/0
{ "file_path": "AzureTRE/templates/shared_services/admin-vm/terraform/main.tf", "repo_id": "AzureTRE", "token_count": 404 }
123
#!/bin/bash set -e # This script is used to uninstall the bundle directly without having to interact with Porter # This script assumes you have created an .env from the sample and the variables # will come from there. # shellcheck disable=SC2154 terraform init -reconfigure -input=false -backend=true \ -backend-config="resource_group_name=${TF_VAR_mgmt_resource_group_name}" \ -backend-config="storage_account_name=${TF_VAR_mgmt_storage_account_name}" \ -backend-config="container_name=${TF_VAR_terraform_state_container_name}" \ -backend-config="key=${TF_VAR_tre_resource_id}-shared-airlock-notifier" terraform destroy -auto-approve
AzureTRE/templates/shared_services/airlock_notifier/terraform/destroy.sh/0
{ "file_path": "AzureTRE/templates/shared_services/airlock_notifier/terraform/destroy.sh", "repo_id": "AzureTRE", "token_count": 220 }
124
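The script wires the Terraform azurerm backend from `TF_VAR_*` environment variables and then destroys the deployment. A minimal Python sketch of the same wiring, assuming the same variables are exported (illustrative only; the bundle itself uses the shell script above):

```python
import os
import subprocess

# Assumes the same TF_VAR_* variables as the shell script; the state key suffix is copied from it.
backend = {
    "resource_group_name": os.environ["TF_VAR_mgmt_resource_group_name"],
    "storage_account_name": os.environ["TF_VAR_mgmt_storage_account_name"],
    "container_name": os.environ["TF_VAR_terraform_state_container_name"],
    "key": f"{os.environ['TF_VAR_tre_resource_id']}-shared-airlock-notifier",
}

init_cmd = ["terraform", "init", "-reconfigure", "-input=false", "-backend=true"]
init_cmd += [f"-backend-config={key}={value}" for key, value in backend.items()]

# Initialise against the remote state, then destroy without prompting.
subprocess.run(init_cmd, check=True)
subprocess.run(["terraform", "destroy", "-auto-approve"], check=True)
```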
locals {
  core_resource_group_name = "rg-${var.tre_id}"
  core_vnet                = "vnet-${var.tre_id}"
  short_service_id         = substr(var.tre_resource_id, -4, -1)
  vm_name                  = "cyclecloud-${local.short_service_id}"
  storage_name             = lower(replace("stgcc${var.tre_id}${local.short_service_id}", "-", ""))
  tre_shared_service_tags = {
    tre_id                = var.tre_id
    tre_shared_service_id = var.tre_resource_id
  }
}
AzureTRE/templates/shared_services/cyclecloud/terraform/locals.tf/0
{ "file_path": "AzureTRE/templates/shared_services/cyclecloud/terraform/locals.tf", "repo_id": "AzureTRE", "token_count": 234 }
125
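The locals derive resource names from the TRE ID plus the last four characters of the shared service resource ID (`substr(var.tre_resource_id, -4, -1)`), and strip dashes from the storage account name. A small sketch reproducing the same naming convention in Python, handy for predicting the generated names (the example IDs below are made up):

```python
def cyclecloud_names(tre_id: str, tre_resource_id: str) -> dict:
    # Terraform's substr(var.tre_resource_id, -4, -1) yields the last four characters.
    short_service_id = tre_resource_id[-4:]
    return {
        "vm_name": f"cyclecloud-{short_service_id}",
        # Storage account names cannot contain dashes and must be lowercase.
        "storage_name": f"stgcc{tre_id}{short_service_id}".replace("-", "").lower(),
    }


# Example with made-up IDs:
print(cyclecloud_names("mytre", "2f4c8a10-0000-0000-0000-9c1de533bd2e"))
# {'vm_name': 'cyclecloud-bd2e', 'storage_name': 'stgccmytrebd2e'}
```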
resource "azurerm_virtual_network" "ws" { name = local.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = local.resource_group_name address_space = local.address_space tags = local.tre_shared_service_tags lifecycle { ignore_changes = [tags] } } resource "azurerm_subnet" "host" { name = local.host_subnet_name resource_group_name = local.resource_group_name virtual_network_name = azurerm_virtual_network.ws.name address_prefixes = [local.host_subnet_address_space] delegation { name = "db-host-vnet-integration" service_delegation { actions = [ "Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action", "Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action", ] name = "Microsoft.Databricks/workspaces" } } } resource "azurerm_subnet" "container" { name = local.container_subnet_name resource_group_name = local.resource_group_name virtual_network_name = azurerm_virtual_network.ws.name address_prefixes = [local.container_subnet_address_space] delegation { name = "db-container-vnet-integration" service_delegation { actions = [ "Microsoft.Network/virtualNetworks/subnets/join/action", "Microsoft.Network/virtualNetworks/subnets/prepareNetworkPolicies/action", "Microsoft.Network/virtualNetworks/subnets/unprepareNetworkPolicies/action", ] name = "Microsoft.Databricks/workspaces" } } } resource "azurerm_network_security_group" "nsg" { name = local.network_security_group_name location = azurerm_resource_group.rg.location resource_group_name = local.resource_group_name tags = local.tre_shared_service_tags lifecycle { ignore_changes = [tags] } security_rule { name = "AllowInboundDatabricksWorkerNodesToCluster" description = "Required for worker nodes communication within a cluster." priority = 100 direction = "Inbound" access = "Allow" protocol = "*" source_port_range = "*" destination_port_range = "*" source_address_prefix = "VirtualNetwork" destination_address_prefix = "VirtualNetwork" } security_rule { name = "AllowOutboundDatabricksWorkerNodesToControlPlain" description = "Required for workers communication with Databricks Webapp." priority = 100 direction = "Outbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "443" source_address_prefix = "VirtualNetwork" destination_address_prefix = "AzureDatabricks" } security_rule { name = "AllowOutboundDatabricksWorkerNodesToAzureSQLServices" description = "Required for workers communication with Azure SQL services." priority = 101 direction = "Outbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "3306" source_address_prefix = "VirtualNetwork" destination_address_prefix = "Sql" } security_rule { name = "AllowOutboundDatabricksWorkerNodesToAzureStorage" description = "Required for workers communication with Azure Storage services." priority = 102 direction = "Outbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "443" source_address_prefix = "VirtualNetwork" destination_address_prefix = "Storage" } security_rule { name = "AllowOutboundDatabricksWorkerNodesWithinACluster" description = "Required for worker nodes communication within a cluster." 
priority = 103 direction = "Outbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "*" source_address_prefix = "VirtualNetwork" destination_address_prefix = "VirtualNetwork" } security_rule { name = "AllowOutboundWorkerNodesToAzureEventhub" description = "Required for worker communication with Azure Eventhub services." priority = 104 direction = "Outbound" access = "Allow" protocol = "Tcp" source_port_range = "*" destination_port_range = "9093" source_address_prefix = "VirtualNetwork" destination_address_prefix = "EventHub" } } resource "azurerm_subnet_network_security_group_association" "container" { subnet_id = azurerm_subnet.container.id network_security_group_id = azurerm_network_security_group.nsg.id } resource "azurerm_subnet_network_security_group_association" "host" { subnet_id = azurerm_subnet.host.id network_security_group_id = azurerm_network_security_group.nsg.id } resource "azurerm_private_endpoint" "databricks_auth_private_endpoint" { name = "pe-adb-auth-${local.service_resource_name_suffix}" location = azurerm_resource_group.rg.location resource_group_name = local.resource_group_name subnet_id = data.azurerm_subnet.services.id tags = local.tre_shared_service_tags lifecycle { ignore_changes = [tags] } private_service_connection { name = "private-service-connection-databricks-auth-${local.service_resource_name_suffix}" private_connection_resource_id = azurerm_databricks_workspace.databricks.id is_manual_connection = false subresource_names = ["browser_authentication"] } private_dns_zone_group { name = "private-dns-zone-group-databricks-auth-${local.service_resource_name_suffix}" private_dns_zone_ids = [data.azurerm_private_dns_zone.databricks.id] } }
AzureTRE/templates/shared_services/databricks-auth/terraform/network.tf/0
{ "file_path": "AzureTRE/templates/shared_services/databricks-auth/terraform/network.tf", "repo_id": "AzureTRE", "token_count": 2959 }
126
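The NSG above codifies the inbound and outbound flows Azure Databricks worker nodes need from the delegated subnets. As a quick reference, and as a hypothetical sketch rather than anything shipped with the template, the rule set can be captured as data and sanity-checked for duplicate priorities within a direction (NSG priorities must be unique per direction):

```python
from collections import Counter

# Transcribed from the security_rule blocks above: (direction, priority, destination, port).
rules = [
    ("Inbound", 100, "VirtualNetwork", "*"),
    ("Outbound", 100, "AzureDatabricks", "443"),
    ("Outbound", 101, "Sql", "3306"),
    ("Outbound", 102, "Storage", "443"),
    ("Outbound", 103, "VirtualNetwork", "*"),
    ("Outbound", 104, "EventHub", "9093"),
]

# A duplicate (direction, priority) pair would be rejected at deployment time.
duplicates = [
    key for key, count in Counter((d, p) for d, p, *_ in rules).items() if count > 1
]
print("duplicate (direction, priority) pairs:", duplicates or "none")
```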
#!/bin/bash

# This script works together with the import_state.sh script to manually remove the firewall state from the core deployment
# and import it into the firewall deployment. It's used for migration purposes only and will be removed when clients are all
# using the shared services model

echo "REMOVING STATE FOR FIREWALL..."

set -e

terraform init -input=false -backend=true -reconfigure -upgrade \
  -backend-config="resource_group_name=${TF_VAR_mgmt_resource_group_name}" \
  -backend-config="storage_account_name=${TF_VAR_mgmt_storage_account_name}" \
  -backend-config="container_name=${TF_VAR_terraform_state_container_name}" \
  -backend-config="key=${TRE_ID}"

tf_state_list="$(terraform state list)"

function remove_if_present() {
  echo -n "Checking $1 ..."
  found=$(echo "$tf_state_list" | grep -q ^$1$; echo $?)

  if [[ $found -eq 0 ]]; then
    echo " removing"
    terraform state rm $1
  else
    echo " not present"
  fi
}

remove_if_present azurerm_route_table.rt
remove_if_present azurerm_subnet_route_table_association.rt_resource_processor_subnet_association
remove_if_present azurerm_subnet_route_table_association.rt_shared_subnet_association
remove_if_present azurerm_subnet_route_table_association.rt_web_app_subnet_association
remove_if_present module.firewall
remove_if_present module.firewall.azurerm_public_ip.fwpip
remove_if_present module.firewall.azurerm_monitor_diagnostic_setting.firewall
remove_if_present module.firewall.azurerm_firewall_network_rule_collection.web_app_subnet
remove_if_present module.firewall.azurerm_firewall_network_rule_collection.resource_processor_subnet
remove_if_present module.firewall.azurerm_firewall_network_rule_collection.general
remove_if_present module.firewall.azurerm_firewall_application_rule_collection.web_app_subnet
remove_if_present module.firewall.azurerm_firewall_application_rule_collection.shared_subnet
remove_if_present module.firewall.azurerm_firewall_application_rule_collection.resource_processor_subnet
remove_if_present module.firewall.azurerm_firewall.fw
AzureTRE/templates/shared_services/firewall/terraform/remove_state.sh/0
{ "file_path": "AzureTRE/templates/shared_services/firewall/terraform/remove_state.sh", "repo_id": "AzureTRE", "token_count": 679 }
127
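The key pattern here is checking `terraform state list` before calling `terraform state rm`, so re-running the migration is harmless. A hedged Python equivalent of the `remove_if_present` helper, shown only to illustrate that pattern (the migration itself uses the shell script above):

```python
import subprocess


def remove_if_present(address: str, state_list: set) -> None:
    """Remove a resource address from Terraform state only if it is actually tracked."""
    print(f"Checking {address} ...", end="")
    if address in state_list:
        print(" removing")
        subprocess.run(["terraform", "state", "rm", address], check=True)
    else:
        print(" not present")


if __name__ == "__main__":
    # Snapshot the current state once, then test each address against it.
    state = set(
        subprocess.run(
            ["terraform", "state", "list"], check=True, capture_output=True, text=True
        ).stdout.splitlines()
    )
    remove_if_present("module.firewall", state)
```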
resource "random_password" "gitea_passwd" { length = 20 min_upper = 2 min_lower = 2 min_numeric = 2 min_special = 2 } # we have to use user-assigned to break a cycle in the dependencies: app identity, kv-policy, secrets in app settings resource "azurerm_user_assigned_identity" "gitea_id" { resource_group_name = local.core_resource_group_name location = data.azurerm_resource_group.rg.location tags = local.tre_shared_service_tags name = "id-gitea-${var.tre_id}" lifecycle { ignore_changes = [tags] } } resource "azurerm_linux_web_app" "gitea" { name = local.webapp_name resource_group_name = local.core_resource_group_name location = data.azurerm_resource_group.rg.location service_plan_id = data.azurerm_service_plan.core.id https_only = true key_vault_reference_identity_id = azurerm_user_assigned_identity.gitea_id.id virtual_network_subnet_id = data.azurerm_subnet.web_app.id tags = local.tre_shared_service_tags app_settings = { WEBSITES_PORT = "3000" WEBSITES_ENABLE_APP_SERVICE_STORAGE = false GITEA_USERNAME = "giteaadmin" GITEA_PASSWD = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.gitea_password.id})" GITEA_EMAIL = "[email protected]" GITEA__server__ROOT_URL = "https://${local.webapp_name}.azurewebsites.net/" GITEA__server__LFS_START_SERVER = "true" GITEA__lfs__PATH = "/data/lfs" GITEA__lfs__STORAGE_TYPE = "local" GITEA__log_0x2E_console__COLORIZE = "false" # Azure monitor doens't show colors, so this is easier to read. GITEA__picture__DISABLE_GRAVATAR = "true" # external avaters are not available due to network restrictions GITEA__security__INSTALL_LOCK = true GITEA__service__DISABLE_REGISTRATION = true GITEA__database__SSL_MODE = "true" GITEA__database__DB_TYPE = "mysql" GITEA__database__HOST = azurerm_mysql_flexible_server.gitea.fqdn GITEA__database__NAME = azurerm_mysql_flexible_database.gitea.name GITEA__database__USER = azurerm_mysql_flexible_server.gitea.administrator_login GITEA__database__PASSWD = "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault_secret.db_password.id})" } lifecycle { ignore_changes = [tags] } identity { type = "UserAssigned" identity_ids = [azurerm_user_assigned_identity.gitea_id.id] } site_config { container_registry_use_managed_identity = true container_registry_managed_identity_client_id = azurerm_user_assigned_identity.gitea_id.client_id ftps_state = "Disabled" always_on = true minimum_tls_version = "1.2" vnet_route_all_enabled = true application_stack { docker_image = "${data.azurerm_container_registry.mgmt_acr.login_server}/microsoft/azuretre/gitea" docker_image_tag = local.version } } storage_account { name = "gitea-data" type = "AzureFiles" account_name = data.azurerm_storage_account.gitea.name access_key = data.azurerm_storage_account.gitea.primary_access_key share_name = azurerm_storage_share.gitea.name mount_path = "/data" } logs { application_logs { file_system_level = "Information" } http_logs { file_system { retention_in_days = 7 retention_in_mb = 100 } } } depends_on = [ azurerm_key_vault_secret.gitea_password ] } resource "azurerm_private_endpoint" "gitea_private_endpoint" { name = "pe-${local.webapp_name}" resource_group_name = local.core_resource_group_name location = data.azurerm_resource_group.rg.location subnet_id = data.azurerm_subnet.shared.id tags = local.tre_shared_service_tags private_service_connection { private_connection_resource_id = azurerm_linux_web_app.gitea.id name = "psc-${local.webapp_name}" subresource_names = ["sites"] is_manual_connection = false } private_dns_zone_group { name = 
module.terraform_azurerm_environment_configuration.private_links["privatelink.azurewebsites.net"] private_dns_zone_ids = [data.azurerm_private_dns_zone.azurewebsites.id] } lifecycle { ignore_changes = [tags] } } resource "azurerm_monitor_diagnostic_setting" "webapp_gitea" { name = "diag-${var.tre_id}" target_resource_id = azurerm_linux_web_app.gitea.id log_analytics_workspace_id = data.azurerm_log_analytics_workspace.tre.id dynamic "log" { for_each = data.azurerm_monitor_diagnostic_categories.webapp.log_category_types content { category = log.value enabled = contains(local.webapp_diagnostic_categories_enabled, log.value) ? true : false } } metric { category = "AllMetrics" enabled = true } } resource "azurerm_key_vault_access_policy" "gitea_policy" { key_vault_id = data.azurerm_key_vault.keyvault.id tenant_id = azurerm_user_assigned_identity.gitea_id.tenant_id object_id = azurerm_user_assigned_identity.gitea_id.principal_id secret_permissions = ["Get", "List", ] } resource "azurerm_key_vault_secret" "gitea_password" { name = "${local.webapp_name}-administrator-password" value = random_password.gitea_passwd.result key_vault_id = data.azurerm_key_vault.keyvault.id tags = local.tre_shared_service_tags depends_on = [ azurerm_key_vault_access_policy.gitea_policy ] lifecycle { ignore_changes = [tags] } } resource "azurerm_storage_share" "gitea" { name = "gitea-data" storage_account_name = data.azurerm_storage_account.gitea.name quota = var.gitea_storage_limit } resource "azurerm_role_assignment" "gitea_acrpull_role" { scope = data.azurerm_container_registry.mgmt_acr.id role_definition_name = "AcrPull" principal_id = azurerm_user_assigned_identity.gitea_id.principal_id }
AzureTRE/templates/shared_services/gitea/terraform/gitea-webapp.tf/0
{ "file_path": "AzureTRE/templates/shared_services/gitea/terraform/gitea-webapp.tf", "repo_id": "AzureTRE", "token_count": 2930 }
128
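Secrets such as `GITEA_PASSWD` are never placed in app settings directly; instead the web app uses App Service Key Vault references of the form `@Microsoft.KeyVault(SecretUri=...)`, resolved through the user-assigned identity. A small, purely illustrative helper that builds such a reference string from a secret URI (the vault and secret names in the example are made up):

```python
def key_vault_reference(secret_uri: str) -> str:
    """Build an App Service Key Vault reference string (illustrative helper)."""
    return f"@Microsoft.KeyVault(SecretUri={secret_uri})"


# Example with made-up vault and secret names:
print(key_vault_reference(
    "https://kv-mytre.vault.azure.net/secrets/gitea-mytre-administrator-password"
))
```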
{ "name": "conda-mirror", "online": true, "storage": { "blobStoreName": "default", "strictContentTypeValidation": true, "write_policy": "ALLOW" }, "proxy": { "remoteUrl": "https://conda.anaconda.org/", "contentMaxAge": 1440, "metadataMaxAge": 1440 }, "negativeCache": { "enabled": true, "timeToLive": 1440 }, "httpClient": { "blocked": false, "autoBlock": false, "connection": { "retries": 0, "userAgentSuffix": "string", "timeout": 60, "enableCircularRedirects": false, "enableCookies": false, "useTrustStore": false } }, "baseType": "conda", "repoType": "proxy" }
AzureTRE/templates/shared_services/sonatype-nexus-vm/scripts/nexus_repos_config/conda_mirror_proxy_conf.json/0
{ "file_path": "AzureTRE/templates/shared_services/sonatype-nexus-vm/scripts/nexus_repos_config/conda_mirror_proxy_conf.json", "repo_id": "AzureTRE", "token_count": 295 }
129
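This proxy definition is applied to Nexus by `configure_nexus_repos.sh`. Before it is pushed, a quick local sanity check can catch missing fields; the following is a hypothetical validation sketch (the file path and required-key list are assumptions), not something shipped with the template:

```python
import json

# Keys assumed to be required for a proxy repository definition (illustrative only).
REQUIRED_TOP_LEVEL = {"name", "online", "storage", "proxy", "baseType", "repoType"}

# Path assumed for illustration.
with open("conda_mirror_proxy_conf.json", encoding="utf-8") as f:
    conf = json.load(f)

missing = REQUIRED_TOP_LEVEL - conf.keys()
assert not missing, f"missing keys: {missing}"
assert conf["proxy"]["remoteUrl"].startswith("https://"), "proxy remoteUrl should be HTTPS"
print(f"{conf['name']}: {conf['repoType']} repository for {conf['baseType']} looks well-formed")
```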
---
#cloud-config
package_upgrade: true
package_update: true

apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
      keyserver: hkp://keyserver.ubuntu.com:80
    azure-cli.list:
      source: deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $RELEASE main
      keyid: BC528686B50D79E339D3721CEB3E94ADBE1229CF
      keyserver: hkp://keyserver.ubuntu.com:80

packages:
  - docker-ce
  - docker-ce-cli
  - containerd.io
  - docker-compose
  - gnupg2
  - pass
  - azure-cli
  - default-jre
  - xmlstarlet
  - jq

# create the docker group
groups:
  - docker

# Add default auto created user to docker group
system_info:
  default_user:
    groups: [docker]

runcmd:
  - export DEBIAN_FRONTEND=noninteractive
  # Give the Nexus process write permissions on the folder mounted as persistent volume
  - chown -R 200 /etc/nexus-data
  # Deploy Nexus by pulling and running the container
  - bash /etc/nexus-data/scripts/deploy_nexus_container.sh
  # Reset the admin password of Nexus to the one created by TF and stored in Key Vault
  - bash /etc/nexus-data/scripts/reset_nexus_password.sh "${NEXUS_ADMIN_PASSWORD}"
  # Invoke Nexus SSL configuration (which will also be run as CRON daily to renew cert)
  - bash /etc/cron.daily/configure_nexus_ssl
  # Configure Nexus repositories
  - bash /etc/nexus-data/scripts/configure_nexus_repos.sh "${NEXUS_ADMIN_PASSWORD}"
AzureTRE/templates/shared_services/sonatype-nexus-vm/terraform/cloud-config.yaml/0
{ "file_path": "AzureTRE/templates/shared_services/sonatype-nexus-vm/terraform/cloud-config.yaml", "repo_id": "AzureTRE", "token_count": 563 }
130
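The `runcmd` section encodes the boot-time ordering: fix data-folder permissions, deploy the Nexus container, reset the admin password, configure SSL, then configure repositories. A short sketch that parses a cloud-config file and prints the run steps in order, useful when reviewing changes to that ordering; it assumes PyYAML is available and the file path is illustrative:

```python
import yaml  # assumes PyYAML is installed

# Path assumed for illustration; the real file is rendered by Terraform with templatefile().
with open("cloud-config.yaml", encoding="utf-8") as f:
    cloud_config = yaml.safe_load(f)

for step, command in enumerate(cloud_config.get("runcmd", []), start=1):
    print(f"{step}. {command}")
```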
resource "azurerm_container_registry" "acr" { name = local.acr_name location = data.azurerm_resource_group.ws.location resource_group_name = data.azurerm_resource_group.ws.name sku = "Premium" admin_enabled = false public_network_access_enabled = false tags = local.tre_workspace_service_tags lifecycle { ignore_changes = [tags] } } data "azurerm_private_dns_zone" "azurecr" { name = module.terraform_azurerm_environment_configuration.private_links["privatelink.azurecr.io"] resource_group_name = local.core_resource_group_name } resource "azurerm_private_endpoint" "acrpe" { name = "acrpe-${local.service_resource_name_suffix}" location = data.azurerm_resource_group.ws.location resource_group_name = data.azurerm_resource_group.ws.name subnet_id = azurerm_subnet.aml.id tags = local.tre_workspace_service_tags lifecycle { ignore_changes = [tags] } private_dns_zone_group { name = "private-dns-zone-group" private_dns_zone_ids = [data.azurerm_private_dns_zone.azurecr.id] } private_service_connection { name = "acrpesc-${local.service_resource_name_suffix}" private_connection_resource_id = azurerm_container_registry.acr.id is_manual_connection = false subresource_names = ["registry"] } }
AzureTRE/templates/workspace_services/azureml/terraform/acr.tf/0
{ "file_path": "AzureTRE/templates/workspace_services/azureml/terraform/acr.tf", "repo_id": "AzureTRE", "token_count": 721 }
131
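The registry is created with public network access disabled, so it is only reachable through the private endpoint and the `privatelink.azurecr.io` DNS zone. A tiny helper, illustrative only, that mirrors the naming convention used above so the expected resource names can be predicted from a suffix (the example suffix is made up):

```python
def acr_private_endpoint_names(service_resource_name_suffix: str) -> dict:
    """Mirror the naming convention in acr.tf (illustrative helper, not part of the template)."""
    return {
        "private_endpoint": f"acrpe-{service_resource_name_suffix}",
        "private_service_connection": f"acrpesc-{service_resource_name_suffix}",
        "private_dns_zone": "privatelink.azurecr.io",
        "subresource": "registry",
    }


# Example with a made-up suffix:
print(acr_private_endpoint_names("a1b2-svc-9f3a"))
```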
{ "schemaType": "ParameterSet", "schemaVersion": "1.0.1", "namespace": "", "name": "tre-user-resource-aml-compute-instance", "parameters": [ { "name": "id", "source": { "env": "ID" } }, { "name": "parent_service_id", "source": { "env": "PARENT_SERVICE_ID" } }, { "name": "workspace_id", "source": { "env": "WORKSPACE_ID" } }, { "name": "tre_id", "source": { "env": "TRE_ID" } }, { "name": "vm_size", "source": { "env": "VM_SIZE" } }, { "name": "user_object_id", "source": { "env": "USER_OBJECT_ID" } }, { "name": "tfstate_container_name", "source": { "env": "TERRAFORM_STATE_CONTAINER_NAME" } }, { "name": "tfstate_resource_group_name", "source": { "env": "MGMT_RESOURCE_GROUP_NAME" } }, { "name": "tfstate_storage_account_name", "source": { "env": "MGMT_STORAGE_ACCOUNT_NAME" } }, { "name": "arm_environment", "source": { "env": "ARM_ENVIRONMENT" } } ] }
AzureTRE/templates/workspace_services/azureml/user_resources/aml_compute/parameters.json/0
{ "file_path": "AzureTRE/templates/workspace_services/azureml/user_resources/aml_compute/parameters.json", "repo_id": "AzureTRE", "token_count": 686 }
132
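Each Porter parameter in this set is sourced from an environment variable at deploy time. A small sketch, assuming only the JSON above (the file path is illustrative), that turns the ParameterSet into a `.env`-style checklist of the variables the bundle expects:

```python
import json

# Path assumed for illustration; point it at the ParameterSet file above.
with open("parameters.json", encoding="utf-8") as f:
    parameter_set = json.load(f)

# Emit a .env-style checklist of the variables the bundle expects at deploy time.
print(f"# Environment variables required by '{parameter_set['name']}'")
for parameter in parameter_set["parameters"]:
    print(f"{parameter['source']['env']}=")
```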