| modelId<br/>_string (5–139 chars)_ | author<br/>_string (2–42 chars)_ | last_modified<br/>_timestamp[us, tz=UTC] (2020-02-15 11:33:14 – 2025-09-02 06:30:45)_ | downloads<br/>_int64 (0 – 223M)_ | likes<br/>_int64 (0 – 11.7k)_ | library_name<br/>_string (533 classes)_ | tags<br/>_list (1 – 4.05k items)_ | pipeline_tag<br/>_string (55 classes)_ | createdAt<br/>_timestamp[us, tz=UTC] (2022-03-02 23:29:04 – 2025-09-02 06:30:39)_ | card<br/>_string (11 – 1.01M chars)_ |
---|---|---|---|---|---|---|---|---|---|
north/t5_base_NCC
|
north
| 2022-10-13T13:53:50Z | 5 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-21T11:45:48Z |
---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
The North-T5 models are a set of Norwegian and Scandinavian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|✔|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/norwegian_NCC_plus_English_t5x_base/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel at classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Nevertheless, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10,000 steps, while the rest were trained for 5,000 steps. A fixed learning rate was used (no decay), with no early stopping. Nor was the recommended rank classification used. We use a maximum sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500,000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For tasks such as translation and NLI, it is well documented that a step of unsupervised LM training before fine-tuning gives a clear benefit.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
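As a rough illustration of such a fine-tuning run, here is a minimal sketch using the Transformers `Seq2SeqTrainer`. The CSV file and the `source`/`target` column names are placeholders for your own task data, not something shipped with this model; the fixed 1e-3 learning rate follows the recommendation above.
```python
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments, DataCollatorForSeq2Seq)
from datasets import load_dataset

model_name = "north/t5_base_NCC"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Placeholder dataset with "source" and "target" text columns.
dataset = load_dataset("csv", data_files={"train": "train.csv"})

def preprocess(batch):
    inputs = tokenizer(batch["source"], max_length=512, truncation=True)
    targets = tokenizer(text_target=batch["target"], max_length=128, truncation=True)
    inputs["labels"] = targets["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="north-t5-finetuned",
    learning_rate=1e-3,              # fixed learning rate suggested above
    per_device_train_batch_size=8,
    num_train_epochs=3,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```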
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be fine-tuned to solve specific tasks. This fine-tuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the largest models will almost always give the best results, they are also both more difficult and more expensive to fine-tune. I strongly recommend starting by fine-tuning a base model. The base models can easily be fine-tuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on a TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to fine-tune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when fine-tuning. The large models still require access to significant hardware, even for fine-tuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both fine-tuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for fine-tuning or inference in Flax, PyTorch and TensorFlow formats.
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
north/t5_base_NCC_lm
|
north
| 2022-10-13T13:53:23Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-21T11:45:24Z |
---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
The North-T5 models are a set of Norwegian and Scandinavian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|✔|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/base/norwegian_NCC_plus_English_pluss100k_lm_t5x_base/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel at classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Nevertheless, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10,000 steps, while the rest were trained for 5,000 steps. A fixed learning rate was used (no decay), with no early stopping. Nor was the recommended rank classification used. We use a maximum sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500,000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For tasks such as translation and NLI, it is well documented that a step of unsupervised LM training before fine-tuning gives a clear benefit.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be fine-tuned to solve specific tasks. This fine-tuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the largest models will almost always give the best results, they are also both more difficult and more expensive to fine-tune. I strongly recommend starting by fine-tuning a base model. The base models can easily be fine-tuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on a TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to fine-tune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when fine-tuning. The large models still require access to significant hardware, even for fine-tuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both fine-tuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for fine-tuning or inference in Flax, PyTorch and TensorFlow formats.
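As a small illustration (not part of the original card), the PyTorch weights can be loaded through Transformers and used to fill the sentinel tokens, reusing one of the widget prompts above:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("north/t5_base_NCC_lm")
model = AutoModelForSeq2SeqLM.from_pretrained("north/t5_base_NCC_lm")

# One of the mask-filling widget prompts from this card.
prompt = ("På <extra_id_0> kan man <extra_id_1> en bok, "
          "og man kan også <extra_id_2> seg ned og lese den.")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```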
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
north/t5_small_NCC
|
north
| 2022-10-13T13:53:07Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-21T11:45:05Z |
---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
The North-T5 models are a set of Norwegian and Scandinavian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|✔|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|[🤗](https://huggingface.co/north/t5_small_NCC_lm)|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/small/norwegian_NCC_plus_English_t5x_small/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel at classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Nevertheless, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10,000 steps, while the rest were trained for 5,000 steps. A fixed learning rate was used (no decay), with no early stopping. Nor was the recommended rank classification used. We use a maximum sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500,000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For tasks such as translation and NLI, it is well documented that a step of unsupervised LM training before fine-tuning gives a clear benefit.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be fine-tuned to solve specific tasks. This fine-tuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the largest models will almost always give the best results, they are also both more difficult and more expensive to fine-tune. I strongly recommend starting by fine-tuning a base model. The base models can easily be fine-tuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on a TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to fine-tune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when fine-tuning. The large models still require access to significant hardware, even for fine-tuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both fine-tuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for fine-tuning or inference in Flax, PyTorch and TensorFlow formats.
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
north/t5_small_NCC_lm
|
north
| 2022-10-13T13:52:45Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"no",
"nn",
"sv",
"dk",
"is",
"en",
"dataset:nbailab/NCC",
"dataset:mc4",
"dataset:wikipedia",
"arxiv:2104.09617",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-05-21T11:44:41Z |
---
language:
- no
- nn
- sv
- dk
- is
- en
datasets:
- nbailab/NCC
- mc4
- wikipedia
widget:
- text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsråd på <extra_id_1>. Dette organet er øverste <extra_id_2> i Norge. For at møtet skal være <extra_id_3>, må over halvparten av regjeringens <extra_id_4> være til stede.
- text: På <extra_id_0> kan man <extra_id_1> en bok, og man kan også <extra_id_2> seg ned og lese den.
license: apache-2.0
---
The North-T5 models are a set of Norwegian and Scandinavian sequence-to-sequence models. They build upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to translation.
| |**Small** <br />_60M_|**Base** <br />_220M_|**Large** <br />_770M_|**XL** <br />_3B_|**XXL** <br />_11B_|
|:-----------|:------------:|:------------:|:------------:|:------------:|:------------:|
|North-T5‑NCC|[🤗](https://huggingface.co/north/t5_small_NCC)|[🤗](https://huggingface.co/north/t5_base_NCC)|[🤗](https://huggingface.co/north/t5_large_NCC)|[🤗](https://huggingface.co/north/t5_xl_NCC)|[🤗](https://huggingface.co/north/t5_xxl_NCC)||
|North-T5‑NCC‑lm|✔|[🤗](https://huggingface.co/north/t5_base_NCC_lm)|[🤗](https://huggingface.co/north/t5_large_NCC_lm)|[🤗](https://huggingface.co/north/t5_xl_NCC_lm)|[🤗](https://huggingface.co/north/t5_xxl_NCC_lm)||
## T5X Checkpoint
The original T5X checkpoint is also available for this model in the [Google Cloud Bucket](gs://north-t5x/pretrained_models/small/norwegian_NCC_plus_English_pluss100k_lm_t5x_small/).
## Performance
A thorough evaluation of the North-T5 models is planned, and I strongly encourage external researchers to make their own evaluations. The main advantage of the T5 models is their flexibility. Traditionally, encoder-only models (like BERT) excel at classification tasks, while seq-2-seq models are easier to train for tasks like translation and Q&A. Nevertheless, here are the results from using North-T5 on the political classification task explained [here](https://arxiv.org/abs/2104.09617).
|**Model:** | **F1** |
|:-----------|:------------|
|mT5-base|73.2 |
|mBERT-base|78.4 |
|NorBERT-base|78.2 |
|North-T5-small|80.5 |
|nb-bert-base|81.8 |
|North-T5-base|85.3 |
|North-T5-large|86.7 |
|North-T5-xl|88.7 |
|North-T5-xxl|91.8|
These are preliminary results. The [results](https://arxiv.org/abs/2104.09617) from the BERT models are based on the test results of the best model after 10 runs with early stopping and a decaying learning rate. The T5 results are the average of five runs on the evaluation set. The small model was trained for 10,000 steps, while the rest were trained for 5,000 steps. A fixed learning rate was used (no decay), with no early stopping. Nor was the recommended rank classification used. We use a maximum sequence length of 512. This method simplifies the test setup and gives results that are easy to interpret. However, the results from the T5 models might actually be a bit sub-optimal.
## Sub-versions of North-T5
The following sub-versions are available. More versions will be made available shortly.
|**Model** | **Description** |
|:-----------|:-------|
|**North‑T5‑NCC** |This is the main version. It is trained for an additional 500,000 steps from the mT5 checkpoint. The training corpus is based on [the Norwegian Colossal Corpus (NCC)](https://huggingface.co/datasets/NbAiLab/NCC). In addition, data from MC4 and English Wikipedia are added.|
|**North‑T5‑NCC‑lm**|The model is pretrained for an additional 100k steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). In a way this turns a masked language model into an autoregressive model. It also prepares the model for some tasks. For tasks such as translation and NLI, it is well documented that a step of unsupervised LM training before fine-tuning gives a clear benefit.|
## Fine-tuned versions
As explained below, the model really needs to be fine-tuned for specific tasks. This procedure is relatively simple, and the models are not very sensitive to the hyper-parameters used. Usually a decent result can be obtained with a fixed learning rate of 1e-3. Smaller versions of the model typically need to be trained for a longer time. It is easy to train the base models in a Google Colab.
Since some people really want to see what the models are capable of, without going through the training procedure, I provide a couple of test models. These models are by no means optimised, and are just for demonstrating how the North-T5 models can be used.
* Nynorsk Translator. Translates any text from Norwegian Bokmål to Norwegian Nynorsk. Please test the [Streamlit-demo](https://huggingface.co/spaces/north/Nynorsk) and the [HuggingFace repo](https://huggingface.co/north/demo-nynorsk-base)
* DeUnCaser. The model adds punctuation, spaces and capitalisation back into the text. The input needs to be in Norwegian but does not have to be divided into sentences or have proper capitalisation of words. You can even remove the spaces from the text, and make the model reconstruct it. It can be tested with the [Streamlit-demo](https://huggingface.co/spaces/north/DeUnCaser) and directly on the [HuggingFace repo](https://huggingface.co/north/demo-deuncaser-base)
## Training details
All models are built using the Flax-based T5X codebase, and all models are initialised with the mT5 pretrained weights. The models are trained using the T5.1.1 training regime, where they are only trained on an unsupervised masking task. This also means that the models (contrary to the original T5) need to be fine-tuned to solve specific tasks. This fine-tuning is, however, usually not very compute intensive, and in most cases it can be performed even with free online training resources.
All the main model versions are trained for 500,000 steps after the mT5 checkpoint (1,000,000 steps). They are trained mainly on a 75GB corpus, consisting of NCC, Common Crawl and some additional high-quality English text (Wikipedia). The corpus is roughly 80% Norwegian text. Additional languages are added to retain some of the multilingual capabilities, making the model both more robust to new words/concepts and also more suited as a basis for translation tasks.
While the largest models will almost always give the best results, they are also both more difficult and more expensive to fine-tune. I strongly recommend starting by fine-tuning a base model. The base models can easily be fine-tuned on a standard graphics card or a free TPU through Google Colab.
All models were trained on TPUs. The largest XXL model was trained on a TPU v4-64, the XL model on a TPU v4-32, the Large model on a TPU v4-16 and the rest on a TPU v4-8. Since it is possible to reduce the batch size during fine-tuning, it is also possible to fine-tune on slightly smaller hardware. The rule of thumb is that you can go "one step down" when fine-tuning. The large models still require access to significant hardware, even for fine-tuning.
## Formats
All models are trained using the Flax-based T5X library. The original checkpoints are available in T5X format and can be used for both fine-tuning and inference. All models, except the XXL model, are also converted to Transformers/HuggingFace. In this framework, the models can be loaded for fine-tuning or inference in Flax, PyTorch and TensorFlow formats.
## Future
I will continue to train and release additional models in this set. Which models are added depends on the feedback from users.
## Thanks
This release would not have been possible without the support and hardware from the [TPU Research Cloud](https://sites.research.google/trc/about/) at Google Research. Both the TPU Research Cloud Team and the T5X Team have provided extremely useful support for getting this running.
Freddy Wetjen at the National Library of Norway has been of tremendous help in generating the original NCC corpus, and has also contributed to generating the collated corpus used for this training. In addition he has been a discussion partner in the creation of these models.
Also thanks to Stefan Schweter for writing the [script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py) for converting these models from T5X to HuggingFace and to Javier de la Rosa for writing the dataloader for reading the HuggingFace Datasets in T5X.
## Warranty
Use at your own risk. The models have not yet been thoroughly tested, and may contain both errors and biases.
## Contact/About
These models were trained by Per E Kummervold. Please contact me on [email protected].
|
huggingtweets/boredapeyc-garyvee-opensea
|
huggingtweets
| 2022-10-13T12:52:04Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-13T12:48:37Z |
---
language: en
thumbnail: http://www.huggingtweets.com/boredapeyc-garyvee-opensea/1665665519153/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1493524673962852353/qRxbC9Xq_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1544105652330631168/ZuvjfGkT_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1446569222352654344/Uc-tml-6_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Gary Vaynerchuk & OpenSea & Bored Ape Yacht Club</div>
<div style="text-align: center; font-size: 14px;">@boredapeyc-garyvee-opensea</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Gary Vaynerchuk & OpenSea & Bored Ape Yacht Club.
| Data | Gary Vaynerchuk | OpenSea | Bored Ape Yacht Club |
| --- | --- | --- | --- |
| Tweets downloaded | 3249 | 3239 | 3243 |
| Retweets | 723 | 1428 | 3014 |
| Short tweets | 838 | 410 | 11 |
| Tweets kept | 1688 | 1401 | 218 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/8ylc2l06/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @boredapeyc-garyvee-opensea's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2t159hph) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2t159hph/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/boredapeyc-garyvee-opensea')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jinofcoolnes/sksjinxmerge
|
jinofcoolnes
| 2022-10-13T11:03:07Z | 0 | 11 | null |
[
"region:us"
] | null | 2022-10-12T16:25:28Z |
Use "sks jinx" or just "jinx" should work
|
shed-e/ner_peoples_daily
|
shed-e
| 2022-10-13T10:26:38Z | 138 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:peoples_daily_ner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-13T09:49:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- peoples_daily_ner
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_peoples_daily
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: peoples_daily_ner
type: peoples_daily_ner
config: peoples_daily_ner
split: train
args: peoples_daily_ner
metrics:
- name: Precision
type: precision
value: 0.9205354599829109
- name: Recall
type: recall
value: 0.9365401332946972
- name: F1
type: f1
value: 0.9284688307957485
- name: Accuracy
type: accuracy
value: 0.9929549534505072
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_peoples_daily
This model is a fine-tuned version of [hfl/rbt6](https://huggingface.co/hfl/rbt6) on the peoples_daily_ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0249
- Precision: 0.9205
- Recall: 0.9365
- F1: 0.9285
- Accuracy: 0.9930
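As a quick usage sketch (not part of the auto-generated card), the checkpoint can be loaded through the token-classification pipeline; the example sentence is only an illustrative placeholder:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="shed-e/ner_peoples_daily",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)
print(ner("人民日报社位于北京。"))  # placeholder Chinese sentence
```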
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3154 | 1.0 | 164 | 0.0410 | 0.8258 | 0.8684 | 0.8466 | 0.9868 |
| 0.0394 | 2.0 | 328 | 0.0287 | 0.8842 | 0.9070 | 0.8954 | 0.9905 |
| 0.0293 | 3.0 | 492 | 0.0264 | 0.8978 | 0.9168 | 0.9072 | 0.9916 |
| 0.02 | 4.0 | 656 | 0.0254 | 0.9149 | 0.9226 | 0.9188 | 0.9923 |
| 0.016 | 5.0 | 820 | 0.0250 | 0.9167 | 0.9281 | 0.9224 | 0.9927 |
| 0.0124 | 6.0 | 984 | 0.0252 | 0.9114 | 0.9328 | 0.9220 | 0.9928 |
| 0.0108 | 7.0 | 1148 | 0.0249 | 0.9169 | 0.9339 | 0.9254 | 0.9928 |
| 0.0097 | 8.0 | 1312 | 0.0249 | 0.9205 | 0.9365 | 0.9285 | 0.9930 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
osanseviero/titanic_mlconsole
|
osanseviero
| 2022-10-13T09:56:26Z | 0 | 1 |
mlconsole
|
[
"mlconsole",
"tabular-classification",
"dataset:train.csv",
"license:unknown",
"model-index",
"region:us"
] |
tabular-classification
| 2022-10-13T09:56:23Z |
---
license: unknown
inference: false
tags:
- mlconsole
- tabular-classification
library_name: mlconsole
metrics:
- accuracy
- loss
datasets:
- train.csv
model-index:
- name: titanic_mlconsole
results:
- task:
type: tabular-classification
name: tabular-classification
dataset:
type: train.csv
name: train.csv
metrics:
- type: accuracy
name: Accuracy
value: 0.792792797088623
- type: loss
name: Model loss
value: 0.5146282911300659
---
# train.csv (#0)
Trained on [ML Console](https://mlconsole.com).
[Load the model on ML Console](https://mlconsole.com/model/hf/osanseviero/titanic_mlconsole).
|
sd-concepts-library/masyanya
|
sd-concepts-library
| 2022-10-13T09:35:28Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-13T09:35:25Z |
---
license: mit
---
### Masyanya on Stable Diffusion
This is the `<masyanya>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:




|
sanjeev498/vit-base-beans
|
sanjeev498
| 2022-10-13T07:04:07Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:beans",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-13T06:38:50Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- beans
metrics:
- accuracy
model-index:
- name: vit-base-beans
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: beans
type: beans
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0189
- Accuracy: 1.0
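As a quick usage sketch (not part of the auto-generated card), the model can be used with the image-classification pipeline; the image path is a placeholder:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="sanjeev498/vit-base-beans")
# Placeholder path to a local bean-leaf photo.
print(classifier("bean_leaf.jpg"))
```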
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0568 | 1.54 | 100 | 0.0299 | 1.0 |
| 0.0135 | 3.08 | 200 | 0.0189 | 1.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
sxxyxn/kogpt_reduced_vocab
|
sxxyxn
| 2022-10-13T06:56:45Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"KakaoBrain",
"KoGPT",
"GPT",
"GPT3",
"ko",
"arxiv:2104.09864",
"arxiv:2109.04650",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-13T02:04:31Z |
---
language: ko
tags:
- KakaoBrain
- KoGPT
- GPT
- GPT3
license: cc-by-nc-4.0
---
# KoGPT
KakaoBrain's Pre-Trained Language Models.
* KoGPT (Korean Generative Pre-trained Transformer)
* [https://github.com/kakaobrain/kogpt](https://github.com/kakaobrain/kogpt)
* [https://huggingface.co/kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt)
## Model Descriptions
### KoGPT6B-ryan1.5b
* [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b)
* [\[huggingface\]\[kakaobrain/kogpt\]\[KoGPT6B-ryan1.5b-float16\]](https://huggingface.co/kakaobrain/kogpt/tree/KoGPT6B-ryan1.5b-float16)
| Hyperparameter | Value |
|:---------------------|--------------:|
| \\(n_{parameters}\\) | 6,166,502,400 |
| \\(n_{layers}\\) | 28 |
| \\(d_{model}\\) | 4,096 |
| \\(d_{ff}\\) | 16,384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2,048 |
| \\(n_{vocab}\\) | 64,512 |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | 64 |
## Hardware requirements
### KoGPT6B-ryan1.5b
#### GPU
The following is the recommended minimum GPU hardware for running the example KoGPT.
* a minimum of `32GB GPU RAM` is required
### KoGPT6B-ryan1.5b-float16
#### GPU
The following is the recommended minimum GPU hardware for running the example KoGPT.
* half precision requires NVIDIA GPUs based on Volta, Turing or Ampere
* a minimum of `16GB GPU RAM` is required
## Usage
### prompt
```bash
python -m kogpt --help
usage: KoGPT inference [-h] [--model MODEL] [--revision {KoGPT6B-ryan1.5b}]
[--device {cpu,cuda}] [-d]
KakaoBrain Korean(hangul) Generative Pre-Training Model
optional arguments:
-h, --help show this help message and exit
--model MODEL huggingface repo (default:kakaobrain/kogpt)
--revision {KoGPT6B-ryan1.5b}
--device {cpu,cuda} (default:cuda)
-d, --debug
```
```bash
python -m kogpt
prompt> 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던
temperature(0.8)>
max_length(128)> 64
인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상
prompt>
...
```
### python
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained(
'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b
bos_token='[BOS]', eos_token='[EOS]', unk_token='[UNK]', pad_token='[PAD]', mask_token='[MASK]'
)
model = AutoModelForCausalLM.from_pretrained(
'kakaobrain/kogpt', revision='KoGPT6B-ryan1.5b-float16', # or float32 version: revision=KoGPT6B-ryan1.5b
pad_token_id=tokenizer.eos_token_id,
torch_dtype='auto', low_cpu_mem_usage=True
).to(device='cuda', non_blocking=True)
_ = model.eval()
prompt = '인간처럼 생각하고, 행동하는 \'지능\'을 통해 인류가 이제까지 풀지 못했던'
with torch.no_grad():
tokens = tokenizer.encode(prompt, return_tensors='pt').to(device='cuda', non_blocking=True)
gen_tokens = model.generate(tokens, do_sample=True, temperature=0.8, max_length=64)
generated = tokenizer.batch_decode(gen_tokens)[0]
print(generated) # print: 인간처럼 생각하고, 행동하는 '지능'을 통해 인류가 이제까지 풀지 못했던 문제의 해답을 찾을 수 있을 것이다. 과학기술이 고도로 발달한 21세기를 살아갈 우리 아이들에게 가장 필요한 것은 사고력 훈련이다. 사고력 훈련을 통해, 세상
```
## Experiments
### In-context Few-Shots
| Models | #params | NSMC (Acc.) | YNAT (F1) | KLUE-STS (F1) |
|:--------------|--------:|------------:|----------:|--------------:|
| HyperCLOVA[1] | 1.3B | 83.9 | 58.7 | 60.9 |
| HyperCLOVA[1] | 6.9B | 83.8 | 67.5 | 59.3 |
| HyperCLOVA[1] | 13.0B | 87.9 | 67.9 | 60.0 |
| HyperCLOVA[1] | 39.0B | 88.0 | 71.4 | 61.6 |
| HyperCLOVA[1] | 82.0B | **88.2** | 72.7 | **65.1** |
| **Ours** | 6.0B | 87.8 | **78.0** | 64.3 |
### Finetuning / P-Tuning
Issues with our downstream evaluation have been reported (https://github.com/kakaobrain/kogpt/issues/17).
The previously published performance evaluation table was removed because it could not be considered a fair comparison: the algorithms being compared were different and the performance measurement method could not be confirmed.
You can refer to the above issue link for the existing performance evaluation table and troubleshooting results.
## Limitations
KakaoBrain `KoGPT` was trained on the `rayn dataset`, a dataset known to contain profanity, lewd, politically charged, and other harsh language.
Therefore, `KoGPT` can generate socially unacceptable texts. As with all language models, it is difficult to predict in advance how `KoGPT` will respond to particular prompts, and it may produce offensive content without warning.
Primarily Korean: `KoGPT` is primarily trained on Korean texts, and is best for classifying, searching, summarizing or generating such texts.
`KoGPT` by default performs worse on inputs that are different from the data distribution it was trained on, including non-Korean text as well as specific dialects of Korean that are not well represented in the training data.
[comment]: <> (If abnormal or socially unacceptable text is generated during testing, please send a "prompt" and the "generated text" to [[email protected]](mailto:[email protected]). )
카카오브레인 `KoGPT`는 욕설, 음란, 정치적 내용 및 기타 거친 언어에 대한 처리를 하지 않은 `rayn dataset`으로 학습하였습니다.
따라서 `KoGPT`는 사회적으로 용인되지 않은 텍스트를 생성할 수 있습니다. 다른 언어 모델과 마찬가지로 특정 프롬프트와 공격적인 콘텐츠에 어떠한 결과를 생성할지 사전에 파악하기 어렵습니다.
`KoGPT`는 주로 한국어 텍스트로 학습을 하였으며 이러한 텍스트를 분류, 검색, 요약 또는 생성하는데 가장 적합합니다.
기본적으로 `KoGPT`는 학습 데이터에 잘 나타나지 않는 방언뿐만아니라 한국어가 아닌 경우와 같이 학습 데이터에서 발견하기 어려운 입력에서 좋지 않은 성능을 보입니다.
[comment]: <> (테스트중에 발생한 비정상적인 혹은 사회적으로 용인되지 않는 텍스트가 생성된 경우 [[email protected]](mailto:[email protected])로 "prompt"와 "생성된 문장"을 함께 보내주시기 바랍니다.)
## Citation
If you apply this library or model to any project and research, please cite our code:
```
@misc{kakaobrain2021kogpt,
title = {KoGPT: KakaoBrain Korean(hangul) Generative Pre-trained Transformer},
author = {Ildoo Kim and Gunsoo Han and Jiyeon Ham and Woonhyuk Baek},
year = {2021},
howpublished = {\url{https://github.com/kakaobrain/kogpt}},
}
```
## Contact
This is released as open source in the hope that it will be helpful to many research institutes and startups for research purposes. We look forward to hearing from anyone, anywhere, who wishes to cooperate with us.
[[email protected]](mailto:[email protected])
## License
The `source code` of KakaoBrain `KoGPT` is licensed under the [Apache 2.0](LICENSE.apache-2.0) License.
The `pretrained weights` of KakaoBrain `KoGPT` are licensed under the [CC-BY-NC-ND 4.0 License](https://creativecommons.org/licenses/by-nc-nd/4.0/).
카카오브레인 `KoGPT`의 `소스코드(source code)`는 [Apache 2.0](LICENSE.apache-2.0) 라이선스 하에 공개되어 있습니다.
카카오브레인 `KoGPT`의 `사전학습된 가중치(pretrained weights)`는 [CC-BY-NC-ND 4.0 라이선스](https://creativecommons.org/licenses/by-nc-nd/4.0/) 라이선스 하에 공개되어 있습니다.
모델 및 코드, 사전학습된 가중치를 사용할 경우 라이선스 내용을 준수해 주십시오. 라이선스 전문은 [Apache 2.0](LICENSE.apache-2.0), [LICENSE.cc-by-nc-nd-4.0](LICENSE.cc-by-nc-nd-4.0) 파일에서 확인하실 수 있습니다.
## References
[1] [HyperCLOVA](https://arxiv.org/abs/2109.04650): Kim, Boseop, et al. "What changes can large-scale language models bring? intensive study on hyperclova: Billions-scale korean generative pretrained transformers." arXiv preprint arXiv:2109.04650 (2021).
|
luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2
|
luomingshuang
| 2022-10-13T06:43:37Z | 0 | 3 | null |
[
"onnx",
"region:us"
] | null | 2022-05-19T14:32:27Z |
Note: This recipe is trained with the codes from this PR https://github.com/k2-fsa/icefall/pull/349
# Pre-trained Transducer-Stateless2 models for the WenetSpeech dataset with icefall.
The model was trained on the L subset of WenetSpeech with the scripts in [icefall](https://github.com/k2-fsa/icefall) based on the latest version k2.
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. I think the latest versions should be fine. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit referenced above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/wenetspeech/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
./pruned_transducer_stateless2/train.py \
--world-size 8 \
--num-epochs 15 \
--start-epoch 0 \
--exp-dir pruned_transducer_stateless2/exp \
--lang-dir data/lang_char \
--max-duration 180 \
--valid-interval 3000 \
--model-warm-step 3000 \
--save-every-n 8000 \
--training-subset L
```
## Evaluation results
The decoding results (WER%) on WenetSpeech (dev, test-net and test-meeting) are listed below; we obtained these results by averaging the models from epochs 9 and 10.
The WERs are
| | dev | test-net | test-meeting | comment |
|------------------------------------|-------|----------|--------------|------------------------------------------|
| greedy search | 7.80 | 8.75 | 13.49 | --epoch 10, --avg 2, --max-duration 100 |
| modified beam search (beam size 4) | 7.76 | 8.71 | 13.41 | --epoch 10, --avg 2, --max-duration 100 |
| fast beam search (1best) | 7.94 | 8.74 | 13.80 | --epoch 10, --avg 2, --max-duration 1500 |
| fast beam search (nbest) | 9.82 | 10.98 | 16.37 | --epoch 10, --avg 2, --max-duration 600 |
| fast beam search (nbest oracle) | 6.88 | 7.18 | 11.77 | --epoch 10, --avg 2, --max-duration 600 |
| fast beam search (nbest LG) | 14.94 | 16.14 | 22.93 | --epoch 10, --avg 2, --max-duration 600 |
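For reference, a greedy-search decoding run matching the first row of the table might look roughly like the following; the exact flag names can differ between icefall versions, so please check `./pruned_transducer_stateless2/decode.py --help` before running.
```
./pruned_transducer_stateless2/decode.py \
  --epoch 10 \
  --avg 2 \
  --exp-dir pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 100 \
  --decoding-method greedy_search
```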
|
SaurabhKaushik/distilbert-base-uncased-finetuned-cola
|
SaurabhKaushik
| 2022-10-13T06:25:21Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-23T08:08:06Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: SaurabhKaushik/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# SaurabhKaushik/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1919
- Validation Loss: 0.5520
- Train Matthews Correlation: 0.5123
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5232 | 0.4685 | 0.4651 | 0 |
| 0.3245 | 0.4620 | 0.4982 | 1 |
| 0.1919 | 0.5520 | 0.5123 | 2 |
### Framework versions
- Transformers 4.22.1
- TensorFlow 2.10.0
- Datasets 2.5.1
- Tokenizers 0.12.1
|
azad-wolf-se/FExGAN
|
azad-wolf-se
| 2022-10-13T06:01:13Z | 0 | 0 | null |
[
"Computer Vision",
"Machine Learning",
"Deep Learning",
"en",
"arxiv:2201.09061",
"region:us"
] | null | 2022-10-13T05:27:57Z |
---
language: en
tags:
- Computer Vision
- Machine Learning
- Deep Learning
---
# Explore the Expression: Facial Expression Generation using Auxiliary Classifier Generative Adversarial Network

This is the implementation of the FExGAN proposed in the following article:
[Explore the Expression: Facial Expression Generation using Auxiliary Classifier Generative Adversarial Network](https://arxiv.org/abs/2201.09061)
FExGAN takes as input an image and a vector of the desired affect (e.g. angry, disgust, sad, surprise, joy, neutral, or fear) and converts the input image to the desired emotion while preserving the identity of the original image.

# Requirements
In order to run this you need the following:
* Python >= 3.7
* Tensorflow >= 2.6
* CUDA enabled GPU (e.g. GTX1070/GTX1080)
# Usage Code
https://www.github.com/azadlab/FExGAN
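The repository above contains the actual usage code. Purely as a rough illustration of the idea described earlier, a generator call could be sketched as follows; the checkpoint name, input size, and model signature below are assumptions, not the real FExGAN API.
```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch: names and shapes are assumptions; see the GitHub repo for the real code.
generator = tf.keras.models.load_model("FExGAN_generator.h5")  # assumed checkpoint name

image = tf.keras.utils.load_img("face.jpg", target_size=(128, 128))  # assumed input size
image = np.expand_dims(np.array(image) / 127.5 - 1.0, axis=0)        # scale pixels to [-1, 1]

# One-hot vector over the seven affects (angry, disgust, sad, surprise, joy, neutral, fear)
target_affect = np.zeros((1, 7), dtype="float32")
target_affect[0, 4] = 1.0  # request "joy"

generated = generator.predict([image, target_affect])
```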
# Citation
If you use any part of this code or use ideas mentioned in the paper, please cite the following article.
```
@article{Siddiqui_FExGAN_2022,
author = {{Siddiqui}, J. Rafid},
title = {{Explore the Expression: Facial Expression Generation using Auxiliary Classifier Generative Adversarial Network}},
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
keywords = {Deep Learning, GAN, Facial Expressions},
year = {2022},
url = {http://arxiv.org/abs/2201.09061},
}
```
|
g30rv17ys/ddpm-geeve-dme-10k-200ep
|
g30rv17ys
| 2022-10-13T05:07:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-12T14:38:43Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-geeve-dme-10k-200ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
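Until the snippet above is filled in, a minimal way to sample from this pipeline with 🤗 Diffusers would look roughly like the following untested sketch (it assumes the repository contains a standard `DDPMPipeline` checkpoint):
```python
from diffusers import DDPMPipeline

# Load the pipeline from this repository and draw one sample
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-geeve-dme-10k-200ep")
image = pipeline().images[0]
image.save("sample.png")
```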
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-geeve-dme-10k-200ep/tensorboard?#scalars)
|
format37/PPO-MountainCar-v0
|
format37
| 2022-10-13T04:43:04Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"MountainCar-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-13T04:42:45Z |
---
library_name: stable-baselines3
tags:
- MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -151.80 +/- 16.12
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: MountainCar-v0
type: MountainCar-v0
---
# **PPO** Agent playing **MountainCar-v0**
This is a trained model of a **PPO** agent playing **MountainCar-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
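A typical loading and evaluation snippet (not part of the original card) could look like the sketch below; the checkpoint filename is an assumption, so check the repository's file list for the actual archive name.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename assumed)
checkpoint = load_from_hub(
    repo_id="format37/PPO-MountainCar-v0",
    filename="PPO-MountainCar-v0.zip",
)
model = PPO.load(checkpoint)

# Evaluate the agent on a fresh environment
env = gym.make("MountainCar-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```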
|
fathan/ijelid-bert-base-multilingual
|
fathan
| 2022-10-13T03:51:51Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-05T06:00:07Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ijelid-bert-base-multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ijelid-bert-base-multilingual
This model is a fine-tuned version of [BERT multilingual base model (cased)](https://huggingface.co/bert-base-multilingual-cased) on the Indonesian-Javanese-English code-mixed Twitter dataset.
Label IDs and their corresponding names:
| Label ID | Label Name |
|:---------------:|:------------------------------------------:|
| LABEL_0 | English (EN) |
| LABEL_1 | Indonesian (ID) |
| LABEL_2 | Javanese (JV) |
| LABEL_3 | Mixed Indonesian-English (MIX-ID-EN) |
| LABEL_4 | Mixed Indonesian-Javanese (MIX-ID-JV) |
| LABEL_5 | Mixed Javanese-English (MIX-JV-EN) |
| LABEL_6 | Other (O) |
It achieves the following results on the evaluation set:
- Loss: 0.3553
- Precision: 0.9189
- Recall: 0.9188
- F1: 0.9187
- Accuracy: 0.9451
It achieves the following results on the test set:
- Overall Precision: 0.9249
- Overall Recall: 0.9251
- Overall F1: 0.925
- Overall Accuracy: 0.951
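A minimal usage sketch with the 🤗 Transformers pipeline (not part of the original card; the example sentence is made up for illustration):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="fathan/ijelid-bert-base-multilingual",
    aggregation_strategy="simple",  # merge word pieces into word-level predictions
)

# Code-mixed Indonesian/Javanese/English input (illustrative only)
print(tagger("Aku lagi sinau deep learning, seru banget!"))
```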
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 386 | 0.2340 | 0.8956 | 0.8507 | 0.8715 | 0.9239 |
| 0.3379 | 2.0 | 772 | 0.2101 | 0.9057 | 0.8904 | 0.8962 | 0.9342 |
| 0.1603 | 3.0 | 1158 | 0.2231 | 0.9252 | 0.8896 | 0.9065 | 0.9367 |
| 0.1079 | 4.0 | 1544 | 0.2013 | 0.9272 | 0.8902 | 0.9070 | 0.9420 |
| 0.1079 | 5.0 | 1930 | 0.2179 | 0.9031 | 0.9179 | 0.9103 | 0.9425 |
| 0.0701 | 6.0 | 2316 | 0.2330 | 0.9075 | 0.9165 | 0.9114 | 0.9435 |
| 0.051 | 7.0 | 2702 | 0.2433 | 0.9117 | 0.9190 | 0.9150 | 0.9432 |
| 0.0384 | 8.0 | 3088 | 0.2545 | 0.9001 | 0.9167 | 0.9078 | 0.9439 |
| 0.0384 | 9.0 | 3474 | 0.2629 | 0.9164 | 0.9159 | 0.9158 | 0.9444 |
| 0.0293 | 10.0 | 3860 | 0.2881 | 0.9263 | 0.9096 | 0.9178 | 0.9427 |
| 0.022 | 11.0 | 4246 | 0.2882 | 0.9167 | 0.9222 | 0.9191 | 0.9450 |
| 0.0171 | 12.0 | 4632 | 0.3028 | 0.9203 | 0.9152 | 0.9177 | 0.9447 |
| 0.0143 | 13.0 | 5018 | 0.3236 | 0.9155 | 0.9167 | 0.9158 | 0.9440 |
| 0.0143 | 14.0 | 5404 | 0.3301 | 0.9237 | 0.9163 | 0.9199 | 0.9444 |
| 0.0109 | 15.0 | 5790 | 0.3290 | 0.9187 | 0.9154 | 0.9169 | 0.9442 |
| 0.0092 | 16.0 | 6176 | 0.3308 | 0.9213 | 0.9178 | 0.9194 | 0.9448 |
| 0.0075 | 17.0 | 6562 | 0.3501 | 0.9273 | 0.9142 | 0.9206 | 0.9445 |
| 0.0075 | 18.0 | 6948 | 0.3520 | 0.9200 | 0.9184 | 0.9190 | 0.9447 |
| 0.0062 | 19.0 | 7334 | 0.3524 | 0.9238 | 0.9183 | 0.9210 | 0.9458 |
| 0.0051 | 20.0 | 7720 | 0.3553 | 0.9189 | 0.9188 | 0.9187 | 0.9451 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.7.1
- Datasets 2.5.1
- Tokenizers 0.12.1
|
masusuka/wav2vec2-large-xls-r-300m-tr-colab
|
masusuka
| 2022-10-13T03:44:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-10T18:43:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tr-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4316
- Wer: 0.2905
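A minimal transcription sketch with the 🤗 Transformers pipeline (not part of the original card; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="masusuka/wav2vec2-large-xls-r-300m-tr-colab",
)

# Any 16 kHz mono audio file should work; the path below is a placeholder
print(asr("turkish_sample.wav")["text"])
```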
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.9953 | 3.67 | 400 | 0.7024 | 0.7226 |
| 0.4046 | 7.34 | 800 | 0.4342 | 0.5343 |
| 0.201 | 11.01 | 1200 | 0.4396 | 0.5290 |
| 0.1513 | 14.68 | 1600 | 0.4319 | 0.4108 |
| 0.1285 | 18.35 | 2000 | 0.4422 | 0.3864 |
| 0.1086 | 22.02 | 2400 | 0.4568 | 0.3796 |
| 0.0998 | 25.69 | 2800 | 0.4687 | 0.3732 |
| 0.0863 | 29.36 | 3200 | 0.4726 | 0.3803 |
| 0.0809 | 33.03 | 3600 | 0.4479 | 0.3601 |
| 0.0747 | 36.7 | 4000 | 0.4624 | 0.3525 |
| 0.0692 | 40.37 | 4400 | 0.4366 | 0.3435 |
| 0.0595 | 44.04 | 4800 | 0.4204 | 0.3510 |
| 0.0584 | 47.71 | 5200 | 0.4202 | 0.3402 |
| 0.0545 | 51.38 | 5600 | 0.4366 | 0.3343 |
| 0.0486 | 55.05 | 6000 | 0.4492 | 0.3678 |
| 0.0444 | 58.72 | 6400 | 0.4471 | 0.3301 |
| 0.0406 | 62.39 | 6800 | 0.4382 | 0.3318 |
| 0.0341 | 66.06 | 7200 | 0.4295 | 0.3258 |
| 0.0297 | 69.72 | 7600 | 0.4336 | 0.3205 |
| 0.0295 | 73.39 | 8000 | 0.4240 | 0.3199 |
| 0.0261 | 77.06 | 8400 | 0.4316 | 0.3143 |
| 0.0247 | 80.73 | 8800 | 0.4300 | 0.3165 |
| 0.0207 | 84.4 | 9200 | 0.4380 | 0.3111 |
| 0.0203 | 88.07 | 9600 | 0.4218 | 0.2998 |
| 0.0174 | 91.74 | 10000 | 0.4271 | 0.2973 |
| 0.015 | 95.41 | 10400 | 0.4330 | 0.2939 |
| 0.0144 | 99.08 | 10800 | 0.4316 | 0.2905 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.5.2
- Tokenizers 0.13.1
|
caotouchan/distilbert_uncase_emo
|
caotouchan
| 2022-10-13T03:23:35Z | 59 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-13T03:23:28Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_uncase_emo
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_uncase_emo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4588
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 0.4588 | 0 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.5.2
- Tokenizers 0.13.1
|
alibaba-pai/pai-ckbert-huge-zh
|
alibaba-pai
| 2022-10-13T01:42:48Z | 141 | 3 |
transformers
|
[
"transformers",
"pytorch",
"megatron-bert",
"bert",
"fill-mask",
"zh",
"arxiv:2205.00258",
"arxiv:2210.05287",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-11T03:27:18Z |
---
language: zh
pipeline_tag: fill-mask
widget:
- text: "巴黎是[MASK]国的首都。"
- text: "生活的真谛是[MASK]。"
tags:
- bert
license: apache-2.0
---
## Chinese Knowledge-enhanced BERT (CKBERT)
Knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations by learning from structured relations in knowledge graphs and/or linguistic knowledge from syntactic or dependency analysis. Unlike for English, there is a lack of high-performing open-source Chinese KEPLMs in the natural language processing (NLP) community to support various language understanding applications.
For Chinese natural language processing, we provide three **Chinese Knowledge-enhanced BERT (CKBERT)** models named **pai-ckbert-base-zh**, **pai-ckbert-large-zh** and **pai-ckbert-huge-zh**, from our **EMNLP 2022** paper **Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training**.
This repository is developed based on the EasyNLP framework: [https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP )
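A minimal fill-mask sketch with the 🤗 Transformers pipeline (not part of the original card; it reuses one of the widget examples above):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-ckbert-huge-zh")

# One of the widget examples from this card
for prediction in fill_mask("巴黎是[MASK]国的首都。"):
    print(prediction["token_str"], prediction["score"])
```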
## Citation
If you find the resource is useful, please cite the following papers in your work.
- For the EasyNLP framework:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
publisher = {arXiv},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
```
- For CKBERT:
```
@article{ckbert,
title = {Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training},
author = {Zhang, Taolin and Dong, Junwei and Wang, Jianing and Wang, Chengyu and Wang, An and Liu, Yinghui and Huang, Jun and Li, Yong and He, Xiaofeng},
publisher = {EMNLP},
url = {https://arxiv.org/abs/2210.05287},
year = {2022}
}
```
|
alibaba-pai/pai-ckbert-large-zh
|
alibaba-pai
| 2022-10-13T01:42:12Z | 191 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2205.00258",
"arxiv:2210.05287",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-01T08:26:30Z |
---
language: zh
pipeline_tag: fill-mask
tags:
- bert
license: apache-2.0
---
## Chinese Knowledge-enhanced BERT (CKBERT)
Knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations by learning from structured relations in knowledge graphs and/or linguistic knowledge from syntactic or dependency analysis. Unlike for English, there is a lack of high-performing open-source Chinese KEPLMs in the natural language processing (NLP) community to support various language understanding applications.
For Chinese natural language processing, we provide three **Chinese Knowledge-enhanced BERT (CKBERT)** models named **pai-ckbert-base-zh**, **pai-ckbert-large-zh** and **pai-ckbert-huge-zh**, from our **EMNLP 2022** paper **Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training**.
This repository is developed based on the EasyNLP framework: [https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP )
## Citation
If you find the resource is useful, please cite the following papers in your work.
- For the EasyNLP framework:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
publisher = {arXiv},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
```
- For CKBERT:
```
@article{ckbert,
title = {Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training},
author = {Zhang, Taolin and Dong, Junwei and Wang, Jianing and Wang, Chengyu and Wang, An and Liu, Yinghui and Huang, Jun and Li, Yong and He, Xiaofeng},
publisher = {EMNLP},
url = {https://arxiv.org/abs/2210.05287},
year = {2022}
}
```
|
alibaba-pai/pai-ckbert-base-zh
|
alibaba-pai
| 2022-10-13T01:41:01Z | 165 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"zh",
"arxiv:2205.00258",
"arxiv:2210.05287",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-30T01:41:01Z |
---
language: zh
pipeline_tag: fill-mask
tags:
- bert
license: apache-2.0
---
## Chinese Knowledge-enhanced BERT (CKBERT)
Knowledge-enhanced pre-trained language models (KEPLMs) improve context-aware representations by learning from structured relations in knowledge graphs and/or linguistic knowledge from syntactic or dependency analysis. Unlike for English, there is a lack of high-performing open-source Chinese KEPLMs in the natural language processing (NLP) community to support various language understanding applications.
For Chinese natural language processing, we provide three **Chinese Knowledge-enhanced BERT (CKBERT)** models named **pai-ckbert-base-zh**, **pai-ckbert-large-zh** and **pai-ckbert-huge-zh**, from our **EMNLP 2022** paper **Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training**.
This repository is developed based on the EasyNLP framework: [https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP )
## Citation
If you find the resource is useful, please cite the following papers in your work.
- For the EasyNLP framework:
```
@article{easynlp,
title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
publisher = {arXiv},
url = {https://arxiv.org/abs/2205.00258},
year = {2022}
}
```
- For CKBERT:
```
@article{ckbert,
title = {Revisiting and Advancing Chinese Natural Language Understanding with Accelerated Heterogeneous Knowledge Pre-training},
author = {Zhang, Taolin and Dong, Junwei and Wang, Jianing and Wang, Chengyu and Wang, An and Liu, Yinghui and Huang, Jun and Li, Yong and He, Xiaofeng},
publisher = {EMNLP},
url = {https://arxiv.org/abs/2210.05287},
year = {2022}
}
```
|
Super-McTea/DialoGPT-small-McTea
|
Super-McTea
| 2022-10-13T00:22:05Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-12T13:17:28Z |
---
tags:
- conversational
---
# DialoGPT model based on the text patterns of Discord user Super McTea
|
jhakaran1/bert-trainer
|
jhakaran1
| 2022-10-13T00:19:46Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T16:33:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1889
- Accuracy: 0.6437
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.751 | 1.0 | 3677 | 0.7828 | 0.6592 |
| 0.6364 | 2.0 | 7354 | 0.8904 | 0.6374 |
| 0.4125 | 3.0 | 11031 | 1.1889 | 0.6437 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
tiagoblima/punctuation-finetune-mec
|
tiagoblima
| 2022-10-13T00:05:42Z | 114 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T23:42:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: punctuation-finetune-mec
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# punctuation-finetune-mec
This model is a fine-tuned version of [tiagoblima/punctuation-taboa-bert](https://huggingface.co/tiagoblima/punctuation-taboa-bert) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 411 | 0.1356 | 0.9791 | 0.7083 | 0.8220 | 0.9553 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
stevhliu/my_awesome_eli5_mlm_model
|
stevhliu
| 2022-10-12T22:42:23Z | 536 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-12T22:30:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_mlm_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_mlm_model
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1706
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 20 | 2.2325 |
| No log | 2.0 | 40 | 2.1603 |
| No log | 3.0 | 60 | 2.2368 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
sd-concepts-library/rahkshi-bionicle
|
sd-concepts-library
| 2022-10-12T21:52:04Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-12T21:31:24Z |
---
license: mit
---
### rahkshi bionicle on Stable Diffusion
This is the `<rahkshi-bionicle>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). To use it in the Stable Diffusion web UI, simply put the `learned_embeds.bin` file in the `embeddings` folder (renaming the file to something like `rahkshi.bin` is recommended). Then, in the prompt box, just add the embedding's file name at the end of your prompt.
Images are from the [Biomedia Project](https://biomediaproject.com/bmp/pictures/advance-art/y2003/sets/rahkshi/)
Here is the new concept you will be able to use as an `object`:






Example output for the prompt "a rahkshi bionicle in times square":

Use this embedding for whatever you like, no need to credit.
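For 🤗 Diffusers users, recent versions of the library can also load the embedding directly from this repository; a rough sketch (the base model choice and float16 settings are assumptions, and `load_textual_inversion` requires a diffusers release that supports it):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the <rahkshi-bionicle> token from this concept repository
pipe.load_textual_inversion("sd-concepts-library/rahkshi-bionicle")

image = pipe("a <rahkshi-bionicle> in times square").images[0]
image.save("rahkshi.png")
```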
|
juanarturovargas/t5-small-finetuned-xsum
|
juanarturovargas
| 2022-10-12T21:41:54Z | 69 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-12T16:29:14Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: juanarturovargas/t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juanarturovargas/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.6991
- Validation Loss: 2.3063
- Train Rouge1: 12.4391
- Train Rouge2: 4.7412
- Train Rougel: 12.2704
- Train Rougelsum: 12.2483
- Train Gen Len: 8.8482
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.6991 | 2.3063 | 12.4391 | 4.7412 | 12.2704 | 12.2483 | 8.8482 | 0 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.8.2
- Datasets 2.5.2
- Tokenizers 0.13.1
|
soleimanian/financial-roberta-large-sentiment
|
soleimanian
| 2022-10-12T21:04:39Z | 8,579 | 39 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"Sentiment",
"RoBERTa",
"Financial Statements",
"Accounting",
"Finance",
"Business",
"ESG",
"CSR Reports",
"Financial News",
"Earnings Call Transcripts",
"Sustainability",
"Corporate governance",
"eng",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-16T04:09:10Z |
---
license: apache-2.0
language:
- eng
tags:
- text-classification
- Sentiment
- RoBERTa
- Financial Statements
- Accounting
- Finance
- Business
- ESG
- CSR Reports
- Financial News
- Earnings Call Transcripts
- Sustainability
- Corporate governance
---
<!DOCTYPE html>
<html>
<body>
<h1><b>Financial-RoBERTa</b></h1>
<p><b>Financial-RoBERTa</b> is a pre-trained NLP model to analyze sentiment of financial text including:</p>
<ul style="PADDING-LEFT: 40px">
<li>Financial Statements,</li>
<li>Earnings Announcements,</li>
<li>Earnings Call Transcripts,</li>
<li>Corporate Social Responsibility (CSR) Reports,</li>
<li>Environmental, Social, and Governance (ESG) News,</li>
<li>Financial News,</li>
<li>Etc.</li>
</ul>
<p>Financial-RoBERTa is built by further training and fine-tuning the RoBERTa Large language model using a large corpus created from 10-K, 10-Q, and 8-K filings, Earnings Call Transcripts, CSR Reports, ESG News, and Financial News text.</p>
<p>The model will give softmax outputs for three labels: <b>Positive</b>, <b>Negative</b> or <b>Neutral</b>.</p>
<p><b>How to perform sentiment analysis:</b></p>
<p>The easiest way to use the model for single predictions is Hugging Face's sentiment analysis pipeline, which only needs a couple of lines of code, as shown in the following example:</p>
<pre>
<code>
from transformers import pipeline
sentiment_analysis = pipeline("sentiment-analysis",model="soleimanian/financial-roberta-large-sentiment")
print(sentiment_analysis("In fiscal 2021, we generated a net yield of approximately 4.19% on our investments, compared to approximately 5.10% in fiscal 2020."))
</code>
</pre>
<p>I provide an example script via <a href="https://colab.research.google.com/drive/11RGWU3UDtxnjan8Ug6dyX82m9fBV6CGo?usp=sharing" target="_blank">Google Colab</a>. You can load your data to a Google Drive and run the script for free on a Colab.</p>
<p><b>Citation and contact:</b></p>
<p>Please cite <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4115943" target="_blank">this paper</a> when you use the model. Feel free to reach out to [email protected] with any questions or feedback you may have.</p>
</body>
</html>
|
EddyGiusepe/bert-finetuned-ner
|
EddyGiusepe
| 2022-10-12T20:00:09Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T19:36:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
- Precision: 0.9335
- Recall: 0.9517
- F1: 0.9425
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0685 | 0.9208 | 0.9367 | 0.9287 | 0.9829 |
| 0.0336 | 2.0 | 3512 | 0.0612 | 0.9281 | 0.9495 | 0.9387 | 0.9856 |
| 0.0181 | 3.0 | 5268 | 0.0602 | 0.9335 | 0.9517 | 0.9425 | 0.9864 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.1
|
siddharth963/vit-base-patch16-224-in21k-finetuned-cassava3
|
siddharth963
| 2022-10-12T19:37:08Z | 216 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:image_folder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-12T17:13:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- image_folder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-cassava3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: image_folder
type: image_folder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8855140186915887
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-cassava3
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3419
- Accuracy: 0.8855
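A minimal usage sketch with the 🤗 Transformers pipeline (not part of the original card; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="siddharth963/vit-base-patch16-224-in21k-finetuned-cassava3",
)

# Path to a cassava leaf photo (placeholder)
print(classifier("cassava_leaf.jpg"))
```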
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5624 | 0.99 | 133 | 0.5866 | 0.8166 |
| 0.4717 | 1.99 | 266 | 0.4245 | 0.8692 |
| 0.4105 | 2.99 | 399 | 0.3708 | 0.8811 |
| 0.3753 | 3.99 | 532 | 0.3646 | 0.8787 |
| 0.2997 | 4.99 | 665 | 0.3655 | 0.8780 |
| 0.3176 | 5.99 | 798 | 0.3545 | 0.8822 |
| 0.2849 | 6.99 | 931 | 0.3441 | 0.8850 |
| 0.2931 | 7.99 | 1064 | 0.3419 | 0.8855 |
| 0.27 | 8.99 | 1197 | 0.3419 | 0.8848 |
| 0.2927 | 9.99 | 1330 | 0.3403 | 0.8853 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
dsilin/detok-deberta-xl
|
dsilin
| 2022-10-12T18:56:34Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"token-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language: en
widget:
- text: "They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,"
- text: "\" We 'll talk go by now ; \" says Shucksmith ;"
- text: "\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said ."
---
This model can be used to more accurately detokenize output of the Moses tokenizer (it does a better job with certain lossy quotes and similar artifacts).
Batched usage:
```python
sentences = [
"They 're a young team . they have great players and amazing freshmen coming in , so think they 'll grow into themselves next year ,",
"\" We 'll talk go by now ; \" says Shucksmith ;",
"He 'll enjoy it more now that this he be dead , if put 'll pardon the expression .",
"I think you 'll be amazed at this way it finds ,",
"Michigan voters ^ are so frightened of fallen in permanent economic collapse that they 'll grab onto anything .",
"You 'll finding outs episode 4 .",
"\" Warren Gatland is a professional person and it wasn 't a case of 's I 'll phone my mate Rob up to if he wants a coaching job ' , he would done a fair amount of homework about , \" Howley air said .",
"You can look at the things I 'm saying about my record and about the events of campaign and history and you 'll find if now and and then I miss a words or I get something slightly off , I 'll correct it , acknowledge where it are wrong .",
"Wonder if 'll alive to see .",
"We 'll have to combine and a numbered of people ."
]
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Model/tokenizer setup was not shown in the original snippet; this is an assumed,
# minimal way to load them for this repository.
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("dsilin/detok-deberta-xl")
model = AutoModelForTokenClassification.from_pretrained("dsilin/detok-deberta-xl").to(device)
model.eval()


def sentences_to_input_tokens(sentences):
    """Tokenize a batch of sentences and pad them to a common length."""
    all_tokens = []
    max_length = 0
    sents_tokens = []
    iids = tokenizer(sentences)

    for sent_tokens in iids["input_ids"]:
        sents_tokens.append(sent_tokens)
        if len(sent_tokens) > max_length:
            max_length = len(sent_tokens)

        attention_mask = [1] * len(sent_tokens)
        pos_ids = list(range(len(sent_tokens)))
        encoding = {
            "iids": sent_tokens,
            "am": attention_mask,
            "pos": pos_ids,
        }
        all_tokens.append(encoding)

    # Pad every sentence to the length of the longest one in the batch
    input_ids = []
    attention_masks = []
    position_ids = []
    for i in range(len(all_tokens)):
        encoding = all_tokens[i]
        pad_len = max_length - len(encoding["iids"])
        attention_masks.append(encoding["am"] + [0] * pad_len)
        position_ids.append(encoding["pos"] + [0] * pad_len)
        input_ids.append(encoding["iids"] + [tokenizer.pad_token_id] * pad_len)

    encoding = {
        "input_ids": torch.tensor(input_ids).to(device),
        "attention_mask": torch.tensor(attention_masks).to(device),
        "position_ids": torch.tensor(position_ids).to(device),
    }
    return encoding, sents_tokens


def run_token_predictor_sentences(sentences):
    """Predict, for each sentencepiece, whether the preceding space should be kept."""
    encoding, sents_tokens = sentences_to_input_tokens(sentences)
    with torch.no_grad():
        predictions = model(**encoding)[0].cpu().tolist()

    outstrs = []
    for i in range(len(predictions)):
        outstr = ""
        # Skip the special tokens at the beginning and end of each sequence
        for p in zip(tokenizer.convert_ids_to_tokens(sents_tokens[i][1:-1]), predictions[i][1:-1]):
            if "▁" not in p[0]:
                outstr += p[0]
            else:
                # A higher score for label 0 keeps the preceding space; otherwise the piece is glued on
                if p[1][0] > p[1][1]:
                    outstr += p[0].replace("▁", " ")
                else:
                    outstr += p[0].replace("▁", "")
        outstrs.append(outstr.strip())
    return outstrs


outs = run_token_predictor_sentences(sentences)
for p in zip(outs, sentences):
    print(p[1])
    print(p[0])
    print('\n------\n')
```
|
r-moret/albert-humor-classification
|
r-moret
| 2022-10-12T18:54:21Z | 67 | 1 |
transformers
|
[
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T18:54:04Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: albert-humor-classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# albert-humor-classification
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.10.0
- Tokenizers 0.13.1
|
stevhliu/my_awesome_swag_model
|
stevhliu
| 2022-10-12T18:53:50Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-10-12T17:52:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: my_awesome_swag_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_swag_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5192
- Accuracy: 0.7981
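A minimal multiple-choice sketch (not part of the original card; the prompt and candidate endings are made up for illustration):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("stevhliu/my_awesome_swag_model")
model = AutoModelForMultipleChoice.from_pretrained("stevhliu/my_awesome_swag_model")

prompt = "The chef puts the pizza in the oven and"
candidates = [
    "waits for it to bake.",
    "rides a bicycle away.",
    "paints the ceiling.",
    "sings the alphabet.",
]

# Each candidate is paired with the prompt; the model scores the four pairs jointly.
inputs = tokenizer([prompt] * len(candidates), candidates, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()})  # add the choices dimension

predicted = outputs.logits.argmax(-1).item()
print(candidates[predicted])
```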
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6873 | 1.0 | 4597 | 0.5192 | 0.7981 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
hatemnoaman/distilbert-base-uncased-finetuned-emotion
|
hatemnoaman
| 2022-10-12T18:04:46Z | 106 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T17:38:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9259822329831515
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
- Accuracy: 0.926
- F1: 0.9260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8098 | 1.0 | 250 | 0.3095 | 0.9015 | 0.8975 |
| 0.2416 | 2.0 | 500 | 0.2149 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
amoux/roberta-cord19-1M7k
|
amoux
| 2022-10-12T17:59:56Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.githubassets.com/images/icons/emoji/unicode/2695.png
widget:
- text: Lung infiltrates cause significant morbidity and mortality in immunocompromised
<mask>.
- text: Tuberculosis appears to be an important <mask> in endemic regions especially
in the non-HIV, non-hematologic malignancy group.
- text: For vector-transmitted diseases this places huge significance on vector mortality
rates as vectors usually don't <mask> an infection and instead remain infectious
for life.
- text: The lung lesions were characterized by bronchointerstitial pneumonia with
accumulation of neutrophils, macrophages and necrotic debris in <mask> and bronchiolar
lumens and peribronchiolar/perivascular infiltration of inflammatory cells.
---
# roberta-cord19-1M7k

> This model is based on ***RoBERTa*** and was pre-trained on 1.7 million sentences.
The training corpus was papers taken from *Semantic Scholar*'s CORD-19 historical releases. Corpus size is `13k` papers, `~60M` tokens. I used the full-text `"body_text"` of the papers in training (details below).
#### Usage
```python
from transformers import pipeline
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")
fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients."
masked_text = text.replace("patients", tokenizer.mask_token)
predictions = fillmask(masked_text, top_k=3)
```
- Predicted tokens
```bash
[{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>',
'score': 0.6273621320724487,
'token': 660,
'token_str': 'Ġpatients'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>',
'score': 0.19800445437431335,
'token': 1868,
'token_str': 'Ġindividuals'},
{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>',
'score': 0.022069649770855904,
'token': 1471,
'token_str': 'Ġanimals'}]
```
## Dataset
- About
- name: *CORD-19: The Covid-19 Open Research Dataset*
- date: *2020-03-18*
- md5 | sha1: `a36fe181 | 8fbea927`
- text-key: `body_text`
- subsets (*total*: `13,202`):
- *biorxiv_medrxiv*: `803`
- *comm_use_subset*: `9000`
- *pmc_custom_license*: `1426`
- *noncomm_use_subset*: `1973`
- Splits (*ratio: 0.9*)
- sentences used for training: `1,687,124`
- sentences used for evaluation: `187,459`
- Total training steps: `210,890`
- Total evaluation steps: `23,433`
## Parameters
- Data
- block_size: `256`
- Training
- per_device_train_batch_size: `8`
- per_device_eval_batch_size: `8`
- gradient_accumulation_steps: `2`
- learning_rate: `5e-5`
- num_train_epochs: `2`
- fp16: `True`
- fp16_opt_level: `'01'`
- seed: `42`
- Output
- global_step: `210890`
- training_loss: `3.5964575726682155`
## Evaluation
- Perplexity: `17.469366079957922`
### Citation
> Allen Institute CORD-19 [Historical Releases](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html)
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
```
|
asparius/bert-fine-grained
|
asparius
| 2022-10-12T17:47:10Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T10:48:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-fine-grained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-grained
This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1308
- Accuracy: 0.5761
- F1: 0.5655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0917 | 1.0 | 2188 | 1.0668 | 0.5789 | 0.5448 |
| 0.9603 | 2.0 | 4376 | 1.0620 | 0.5868 | 0.5718 |
| 0.8047 | 3.0 | 6564 | 1.1308 | 0.5761 | 0.5655 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
sd-concepts-library/el-salvador-style-style
|
sd-concepts-library
| 2022-10-12T16:31:26Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-12T16:31:23Z |
---
license: mit
---
### <el-salvador-style> style on Stable Diffusion
This is the `<el-salvador-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:



|
huggingtweets/ugroyp
|
huggingtweets
| 2022-10-12T16:08:18Z | 128 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-12T15:48:53Z |
---
language: en
thumbnail: http://www.huggingtweets.com/ugroyp/1665590893363/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579799738987290627/UY1qNBWl_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BdU</div>
<div style="text-align: center; font-size: 14px;">@ugroyp</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BdU.
| Data | BdU |
| --- | --- |
| Tweets downloaded | 2693 |
| Retweets | 587 |
| Short tweets | 181 |
| Tweets kept | 1925 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1jie4hq9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ugroyp's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/u2hl6e4v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/u2hl6e4v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/ugroyp')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
rgoldstein/autotrain-movie-rationales-1734060527
|
rgoldstein
| 2022-10-12T14:34:03Z | 99 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:rgoldstein/autotrain-data-movie-rationales",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T14:30:47Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- rgoldstein/autotrain-data-movie-rationales
co2_eq_emissions:
emissions: 5.912842155368309
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1734060527
- CO2 Emissions (in grams): 5.9128
## Validation Metrics
- Loss: 0.198
- Accuracy: 0.934
- Precision: 0.937
- Recall: 0.931
- AUC: 0.983
- F1: 0.934
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/rgoldstein/autotrain-movie-rationales-1734060527
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("rgoldstein/autotrain-movie-rationales-1734060527", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rgoldstein/autotrain-movie-rationales-1734060527", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
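The snippet above returns raw logits. A minimal sketch for turning them into a label and a probability, assuming the usual AutoTrain `id2label` mapping is present in the model config:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("rgoldstein/autotrain-movie-rationales-1734060527", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("rgoldstein/autotrain-movie-rationales-1734060527", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit to its label name and probability
probs = logits.softmax(dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], round(float(probs[0, pred_id]), 3))
```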
| ntaka/xlm-roberta-base-finetuned-panx-de | ntaka | 2022-10-12T13:32:08Z | 103 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-10-08T13:42:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.863677639046538
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1343
- F1: 0.8637
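No usage example is provided in this card; the following is a minimal inference sketch with the token-classification pipeline (the example sentence and the aggregation strategy are illustrative choices, not documented settings):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ntaka/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Angela Merkel besuchte das Brandenburger Tor in Berlin."))
```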
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
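For reference, a hedged sketch of how these settings map onto `TrainingArguments` (the output directory is a placeholder and data loading is omitted):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder
args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-de",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```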
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2578 | 1.0 | 525 | 0.1562 | 0.8273 |
| 0.1297 | 2.0 | 1050 | 0.1330 | 0.8474 |
| 0.0809 | 3.0 | 1575 | 0.1343 | 0.8637 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
| gsarti/it5-efficient-small-el32-ilgiornale-to-repubblica | gsarti | 2022-10-12T13:19:18Z | 105 | 0 | transformers | ["transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "newspaper", "efficient", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-04-28T14:12:35Z |
---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- efficient
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-efficient-small-el32-ilgiornale-to-repubblica
results:
- task:
type: headline-style-transfer-ilgiornale-to-repubblica
name: "Headline style transfer (Il Giornale to Repubblica)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.286
name: "Test Rouge1"
- type: rouge2
value: 0.099
name: "Test Rouge2"
- type: rougeL
value: 0.253
name: "Test RougeL"
- type: bertscore
value: 0.422
name: "Test BERTScore"
- type: headline-headline-consistency-classifier
value: 0.836
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.763
name: "Test Headline-Article Consistency Accuracy"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Il Giornale to Repubblica) 🗞️➡️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Il Giornale to Repubblica direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Repubblica from the full body of an article written in the style of Il Giornale. Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
g2r = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-ilgiornale-to-repubblica')
g2r("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-ilgiornale-to-repubblica")
```
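Continuing from the classes loaded above, a minimal sketch of explicit generation without the pipeline wrapper (the article text is truncated and the generation settings are illustrative):
```python
# Continues from the snippet above: encode an article, generate a headline, decode it
article = "Arriva dal Partito nazionalista basco (Pnv) la conferma ..."  # truncated article body
inputs = tokenizer(article, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64, num_beams=4)  # illustrative settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```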
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| troesy/distilbert-label-all-tokens | troesy | 2022-10-12T13:17:05Z | 125 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-10-12T13:08:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-label-all-tokens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-label-all-tokens
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1709
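Since the label set is not documented here, the sketch below only shows the mechanics of running inference and mapping predictions through the model's own config:
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("troesy/distilbert-label-all-tokens")
model = AutoModelForTokenClassification.from_pretrained("troesy/distilbert-label-all-tokens")

inputs = tokenizer("Hello from Berlin", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# One predicted class id per (sub)token, mapped through the model's own label names
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [model.config.id2label[i] for i in pred_ids])))
```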
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.1769 |
| No log | 2.0 | 348 | 0.1705 |
| 0.188 | 3.0 | 522 | 0.1709 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
| gsarti/it5-efficient-small-el32-repubblica-to-ilgiornale | gsarti | 2022-10-12T13:16:45Z | 105 | 0 | transformers | ["transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "efficient", "newspaper", "ilgiornale", "repubblica", "style-transfer", "it", "dataset:gsarti/change_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-04-28T14:15:00Z |
---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- efficient
- newspaper
- ilgiornale
- repubblica
- style-transfer
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
- headline-headline-consistency-classifier
- headline-article-consistency-classifier
model-index:
- name: it5-efficient-small-el32-repubblica-to-ilgiornale
results:
- task:
type: headline-style-transfer-repubblica-to-ilgiornale
name: "Headline style transfer (Repubblica to Il Giornale)"
dataset:
type: gsarti/change_it
name: "CHANGE-IT"
metrics:
- type: rouge1
value: 0.269
name: "Test Rouge1"
- type: rouge2
value: 0.087
name: "Test Rouge2"
- type: rougeL
value: 0.235
name: "Test RougeL"
- type: bertscore
value: 0.395
name: "Test BERTScore"
- type: headline-headline-consistency-classifier
value: 0.808
name: "Test Headline-Headline Consistency Accuracy"
- type: headline-article-consistency-classifier
value: 0.810
name: "Test Headline-Article Consistency Accuracy"
thumbnail: https://gsarti.com/publication/it5/featured.png
---
# IT5 Cased Small Efficient EL32 for News Headline Style Transfer (Repubblica to Il Giornale) 🗞️➡️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline style transfer in the Repubblica to Il Giornale direction on the Italian CHANGE-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
The model is trained to generate a headline in the style of Il Giornale from the full body of an article written in the style of Repubblica. Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
r2g = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-repubblica-to-ilgiornale')
r2g("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
```
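Standard generation arguments are also forwarded by the pipeline to `model.generate()`; a hedged example with illustrative values:
```python
from transformers import pipeline

r2g = pipeline("text2text-generation",
               model="it5/it5-efficient-small-el32-repubblica-to-ilgiornale")
article = "Arriva dal Partito nazionalista basco (Pnv) la conferma ..."  # truncated article body
# num_beams, max_length and early_stopping are passed through to model.generate()
print(r2g(article, max_length=64, num_beams=4, early_stopping=True))
```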
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| gsarti/it5-efficient-small-el32-informal-to-formal | gsarti | 2022-10-12T13:12:49Z | 107 | 1 | transformers | ["transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "style-transfer", "efficient", "formality-style-transfer", "it", "dataset:yahoo/xformal_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-04-28T13:48:32Z |
---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- efficient
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "maronn qualcuno mi spieg' CHECCOSA SUCCEDE?!?!"
- text: "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!"
- text: "nn capisco xke tt i ragazzi lo fanno"
- text: "IT5 è SUPERMEGA BRAVISSIMO a capire tt il vernacolo italiano!!!"
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-informal-to-formal
results:
- task:
type: formality-style-transfer
name: "Informal-to-formal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.430
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.221
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.408
name: "Avg. Test RougeL"
- type: bertscore
value: 0.630
name: "Avg. Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Informal-to-formal Style Transfer 🧐
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on Informal-to-formal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
i2f = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-informal-to-formal')
i2f("nn capisco xke tt i ragazzi lo fanno")
>>> [{"generated_text": "non comprendo perché tutti i ragazzi agiscono così"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
```
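Several sentences can also be processed in one padded batch with the classes above; a minimal sketch (the `max_length` value is illustrative):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-informal-to-formal")

# Pad several informal sentences into a single batch
sentences = ["nn capisco xke tt i ragazzi lo fanno",
             "wellaaaaaaa, ma fraté sei proprio troppo simpatiko, grazieeee!!"]
batch = tokenizer(sentences, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(**batch, max_length=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```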
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| gsarti/it5-efficient-small-el32-formal-to-informal | gsarti | 2022-10-12T13:11:05Z | 103 | 0 | transformers | ["transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "style-transfer", "efficient", "formality-style-transfer", "it", "dataset:yahoo/xformal_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-04-28T13:29:43Z |
---
language:
- it
license: apache-2.0
tags:
- italian
- sequence-to-sequence
- style-transfer
- efficient
- formality-style-transfer
datasets:
- yahoo/xformal_it
widget:
- text: "Questa performance è a dir poco spiacevole."
- text: "In attesa di un Suo cortese riscontro, Le auguriamo un piacevole proseguimento di giornata."
- text: "Questa visione mi procura una goduria indescrivibile."
- text: "qualora ciò possa interessarti, ti pregherei di contattarmi."
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-formal-to-informal
results:
- task:
type: formality-style-transfer
name: "Formal-to-informal Style Transfer"
dataset:
type: xformal_it
name: "XFORMAL (Italian Subset)"
metrics:
- type: rouge1
value: 0.459
name: "Avg. Test Rouge1"
- type: rouge2
value: 0.244
name: "Avg. Test Rouge2"
- type: rougeL
value: 0.435
name: "Avg. Test RougeL"
- type: bertscore
value: 0.739
name: "Avg. Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Formal-to-informal Style Transfer 🤗
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32)
model fine-tuned on Formal-to-informal style transfer on the Italian subset of the XFORMAL dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
f2i = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-formal-to-informal')
f2i("Vi ringrazio infinitamente per vostra disponibilità")
>>> [{"generated_text": "e grazie per la vostra disponibilità!"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5-efficient-small-el32-formal-to-informal")
model = AutoModelForSeq2SeqLM.from_pretrained("it5-efficient-small-el32-formal-to-informal")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| doyoungkim/bert-base-uncased-finetuned-sst2 | doyoungkim | 2022-10-12T13:09:42Z | 117 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: glue
      type: glue
      args: sst2
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.926605504587156
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2716
- Accuracy: 0.9266
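A minimal inference sketch for this checkpoint (the example sentence is illustrative; the returned label names come from the model config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="doyoungkim/bert-base-uncased-finetuned-sst2")
print(classifier("This movie was an absolute delight."))
```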
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1666 | 1.0 | 2105 | 0.2403 | 0.9232 |
| 0.1122 | 2.0 | 4210 | 0.2716 | 0.9266 |
| 0.0852 | 3.0 | 6315 | 0.3150 | 0.9232 |
| 0.056 | 4.0 | 8420 | 0.3209 | 0.9163 |
| 0.0344 | 5.0 | 10525 | 0.3740 | 0.9243 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.8.1
- Datasets 1.11.0
- Tokenizers 0.10.1
| gsarti/it5-efficient-small-el32-question-generation | gsarti | 2022-10-12T13:09:07Z | 106 | 0 | transformers | ["transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "Italian", "efficient", "sequence-to-sequence", "question-generation", "squad_it", "it", "dataset:squad_it", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-04-28T14:12:07Z |
---
language:
- it
license: apache-2.0
datasets:
- squad_it
tags:
- Italian
- efficient
- sequence-to-sequence
- question-generation
- squad_it
- text2text-generation
widget:
- text: "Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una \"grande pestilenza nell' aria\". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola \"peste\" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia"
- text: "Il 14 aprile 2011, ABC ha annullato le lunghe opere di sapone All My Children e One Life to Live dopo 41 e 43 anni in onda, rispettivamente (in seguito al contraccolpo dei tifosi, ABC ha venduto i diritti ad entrambi gli spettacoli a Prospect Park, che alla fine ha rilanciato i saponi su Hulu per un' ulteriore stagione nel 2013 e con entrambe le società che si citano in giudizio per accuse di interferenza con il processo di rilancio degli spettacoli, mancato pagamento delle tasse di licenza. Il talk/lifestyle show che ha sostituito One Life to Live, The Revolution, non è riuscito a generare giudizi soddisfacenti ed è stato a sua volta annullato dopo soli sette mesi. La stagione 2011-12 ha visto l' ABC cadere al quarto posto nel 18-49 demografico nonostante rinnovando una manciata di nuovi spettacoli (compresi i drammi matricole Scandal, Revenge e Once Upon a Time) per la seconda stagione. Risposta: Hulu"
- text: "L' American Broadcasting Company (ABC) (stlized nel suo logo come abc dal 1957) è una rete televisiva commerciale americana trasmissione televisiva che è di proprietà del Disney-ABC Television Group, una controllata della divisione Disney Media Networks di The Walt Disney Company. La rete fa parte delle grandi reti televisive Big Three. La rete ha sede a Columbus Avenue e West 66th Street a Manhattan, con ulteriori uffici e stabilimenti di produzione a New York City, Los Angeles e Burbank, California. Risposta: Manhattan"
- text: "La disobbedienza civile non rivoluzionaria è una semplice disobbedienza delle leggi sulla base del fatto che sono giudicate \"sbagliate\" da una coscienza individuale, o come parte di uno sforzo per rendere alcune leggi inefficaci, per causarne l' abrogazione, o per esercitare pressioni per ottenere i propri desideri politici su qualche altra questione. La disobbedienza civile rivoluzionaria è più che altro un tentativo attivo di rovesciare un governo (o di cambiare le tradizioni culturali, i costumi sociali, le credenze religiose, ecc. La rivoluzione non deve necessariamente essere politica, cioè \"rivoluzione culturale\", implica semplicemente un cambiamento radicale e diffuso in una sezione del tessuto sociale). Gli atti di Gandhi sono stati descritti come disobbedienza civile rivoluzionaria. È stato affermato che gli ungheresi sotto Ferenc Deák hanno diretto una disobbedienza civile rivoluzionaria contro il governo austriaco. Thoreau ha anche scritto di disobbedienza civile realizzando \"rivoluzione pacifica\". Howard Zinn, Harvey Wheeler e altri hanno identificato il diritto sposato nella Dichiarazione d' Indipendenza di \"alterare o abolire\" un governo ingiusto come principio di disobbedienza civile. Risposta: Ferenc Deák"
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-question-generation
results:
- task:
type: question-generation
name: "Question generation"
dataset:
type: squad_it
name: "SQuAD-IT"
metrics:
- type: rouge1
value: 0.382
name: "Test Rouge1"
- type: rouge2
value: 0.201
name: "Test Rouge2"
- type: rougeL
value: 0.357
name: "Test RougeL"
- type: bertscore
value: 0.517
name: "Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for Question Generation 💭 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on question generation on the [SQuAD-IT corpus](https://huggingface.co/datasets/squad_it) as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 replaces the original encoder from the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for use in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
qg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-question-generation')
qg("Le conoscenze mediche erano stagnanti durante il Medioevo. Il resoconto più autorevole di allora è venuto dalla facoltà di medicina di Parigi in un rapporto al re di Francia che ha incolpato i cieli, sotto forma di una congiunzione di tre pianeti nel 1345 che causò una "grande pestilenza nell\' aria". Questa relazione è diventata la prima e più diffusa di una serie di casi di peste che cercava di dare consigli ai malati. Che la peste fosse causata dalla cattiva aria divenne la teoria più accettata. Oggi, questo è conosciuto come la teoria di Miasma. La parola "peste" non aveva un significato particolare in questo momento, e solo la ricorrenza dei focolai durante il Medioevo gli diede il nome che è diventato il termine medico. Risposta: re di Francia")
>>> [{"generated_text": "Per chi è stato redatto il referto medico?"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-generation")
```
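As the widget examples suggest, the input is the context followed by the answer after a `Risposta:` marker. The helper below is an assumption inferred from those examples, and the generation setting is illustrative:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-question-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-question-generation")

def make_input(context: str, answer: str) -> str:
    # Format inferred from the widget examples: context, then "Risposta: <answer>"
    return f"{context} Risposta: {answer}"

text = make_input("La rete ha sede a Columbus Avenue e West 66th Street a Manhattan.", "Manhattan")
inputs = tokenizer(text, return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_length=64)  # illustrative setting
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```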
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7.0
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
| gsarti/it5-efficient-small-el32-news-summarization | gsarti | 2022-10-12T13:07:40Z | 141 | 4 | transformers | ["transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "italian", "sequence-to-sequence", "fanpage", "efficient", "ilpost", "summarization", "it", "dataset:ARTeLab/fanpage", "dataset:ARTeLab/ilpost", "arxiv:2203.03759", "arxiv:2109.10686", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | summarization | 2022-04-28T14:11:32Z |
---
language:
- it
license: apache-2.0
datasets:
- ARTeLab/fanpage
- ARTeLab/ilpost
tags:
- italian
- sequence-to-sequence
- fanpage
- efficient
- ilpost
- summarization
widget:
- text: "Non lo vuole sposare. E’ quanto emerge all’interno dell’ultima intervista di Raffaella Fico che, ringraziando Mancini per i buoni consigli elargiti al suo fidanzato, rimanda l’idea del matrimonio per qualche anno ancora. La soubrette, che è stata recentemente protagonista di una dedica di Supermario, non ha ancora intenzione di accasarsi perché è sicura che per mettersi la fede al dito ci sia ancora tempo. Nonostante il suo Mario sia uno degli sportivi più desiderati al mondo, l’ex protagonista del Grande Fratello non ha alcuna intenzione di cedere seriamente alla sua corte. Solo qualche giorno fa, infatti, dopo l’ultima bravata di Balotelli, Mancini gli aveva consigliato di sposare la sua Raffaella e di mettere la testa a posto. Chi pensava che sarebbe stato Mario a rispondere, però, si è sbagliato. A mettere le cose bene in chiaro è la Fico che, intervistata dall’emittente radiofonica Rtl 102.5, dice: È presto per sposarsi, siamo ancora molto giovani. È giusto che prima uno si realizzi nel proprio lavoro. E poi successivamente perché no, ci si può anche pensare. Quando si è giovani capita di fare qualche pazzia, quindi ci sta. Comunque i tabloid inglesi sono totalmente accaniti sulla sua vita privata quando poi dovrebbero interessarsi di più di quello che fa sul campo. Lui non fa le cose con cattiveria, ma quando si è giovani si fanno determinate cose senza stare a pensare se sono giuste o sbagliate. Mario ha gli obiettivi puntati addosso: più per la sua vita privata che come giocatore. Per me può anche andare in uno strip club, se non fa niente di male, con gli amici, però devo dire che alla fine torna sempre da me, sono la sua preferita."
- text: "Valerio è giovanissimo ma già una star. Fuori dall’Ariston ragazzine e meno ragazzine passano ore anche sotto la pioggia per vederlo. Lui è forte del suo talento e sicuro. Partecipa in gara tra i “big” di diritto, per essere arrivato in finalissima nel programma Amici di Maria De Filippi e presenta il brano Per tutte le volte che scritta per lui da Pierdavide Carone. Valerio Scanu è stato eliminato. Ma non è detta l'ultima parola: il duetto di questa sera con Alessandra Amoroso potrebbe risollevarlo e farlo rientrare in gara. Che cosa è successo alla giuria visto che sei stato eliminato anche se l’esibizione era perfetta? Nn lo so. Sono andate bene le esibizioni, ero emozionato ma tranquillo. Ero contento ma ho cantato bene. Non sono passato e stasera ci sarà il ballottaggio… Quali sono le differenze tra Amici e Sanremo? Sono due cose diverse. Amici ti prepara a salire sul palco di amici. A Sanremo ci devi arrivare… ho fatto più di sessanta serate nel tour estivo, poi promozione del secondo disco. Una bella palestra. Sono cresciuto anche umanamente. Sono riuscito a percepire quello che il pubblico trasmette. L’umiltà? Prima di tutto. Sennò non sarei qui."
- text: "L’azienda statunitense Broadcom, uno dei più grandi produttori di semiconduttori al mondo, ha presentato un’offerta per acquisire Qualcomm, altra grande società degli Stati Uniti conosciuta soprattutto per la sua produzione di microprocessori Snapdragon (ARM), utilizzati in centinaia di milioni di smartphone in giro per il mondo. Broadcom ha proposto di acquistare ogni azione di Qualcomm al prezzo di 70 dollari, per un valore complessivo di circa 105 miliardi di dollari (130 miliardi se si comprendono 25 miliardi di debiti netti) . Se l’operazione dovesse essere approvata, sarebbe una delle più grandi acquisizioni di sempre nella storia del settore tecnologico degli Stati Uniti. Broadcom ha perfezionato per mesi la sua proposta di acquisto e, secondo i media statunitensi, avrebbe già preso contatti con Qualcomm per trovare un accordo. Secondo gli analisti, Qualcomm potrebbe comunque opporsi all’acquisizione perché il prezzo offerto è di poco superiore a quello dell’attuale valore delle azioni dell’azienda. Ci potrebbero essere inoltre complicazioni sul piano dell’antitrust da valutare, prima di un’eventuale acquisizione."
- text: "Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente."
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-news-summarization
results:
- task:
type: news-summarization
name: "News Summarization"
dataset:
type: newssum-it
name: "NewsSum-IT"
metrics:
- type: rouge1
value: 0.354
name: "Test Rouge1"
- type: rouge2
value: 0.172
name: "Test Rouge2"
- type: rougeL
value: 0.278
name: "Test RougeL"
- type: bertscore
value: 0.410
name: "Avg. Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for News Summarization ✂️🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/it5-efficient-small-el32-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-news-summarization")
```
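For completeness, here is a minimal generation sketch using the objects loaded above; the article placeholder and the generation settings (`max_length`, `num_beams`) are illustrative assumptions, not the values used in the paper:
```python
article = "..."  # an Italian news article, e.g. the ITsART example shown above
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=96, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```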
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
Kasturi135/finetuning-sentiment-model-3000-samples
|
Kasturi135
| 2022-10-12T13:05:39Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T08:50:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8666666666666667
- name: F1
type: f1
value: 0.8666666666666667
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3357
- Accuracy: 0.8667
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
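These settings map roughly onto the following `Trainer` sketch; the model, tokenizer, and dataset-subsampling lines are illustrative assumptions reconstructed from the card, not the original training script:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

imdb = load_dataset("imdb")
tokenized = imdb.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="finetuning-sentiment-model-3000-samples",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",  # the Adam betas/epsilon listed above are the defaults
)

trainer = Trainer(
    model=model,
    args=args,
    # The 3000/300 subsample is an assumption based on the model name.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(3000)),
    eval_dataset=tokenized["test"].shuffle(seed=42).select(range(300)),
    tokenizer=tokenizer,
)
trainer.train()
```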
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
gsarti/it5-efficient-small-el32-headline-generation
|
gsarti
| 2022-10-12T12:59:39Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tensorboard",
"t5",
"text2text-generation",
"italian",
"sequence-to-sequence",
"newspaper",
"ilgiornale",
"repubblica",
"efficient",
"headline-generation",
"it",
"dataset:gsarti/change_it",
"arxiv:2203.03759",
"arxiv:2109.10686",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-28T14:11:12Z |
---
language:
- it
license: apache-2.0
datasets:
- gsarti/change_it
tags:
- italian
- sequence-to-sequence
- newspaper
- ilgiornale
- repubblica
- efficient
- headline-generation
widget:
- text: "WASHINGTON - La Corea del Nord torna dopo nove anni nella blacklist Usa degli Stati considerati sponsor del terrorismo. Come Iran, Siria e Sudan. Lo ha deciso Donald Trump , che ha preferito dare l'annuncio non durante il suo recente viaggio in Asia ma ieri, in una riunione del governo alla Casa Bianca. 'Oggi gli Stati Uniti designeranno la Corea del nord come uno stato sponsor del terrorismo', ha tuonato il tycoon, anticipando che sarà formalizzata oggi dal dipartimento di stato e sarà accompagnata da nuove e più severe sanzioni. 'Il livello più alto' mai imposto a Pyongyang, ha promesso. 'Avrebbe dovuto succedere molto tempo fa', ha aggiunto, scaricando per l'ennesima volta la responsabilità dell'attuale crisi sull'amministrazione Obama. Poi si è scagliato contro un 'regime assassino' che 'deve mettere fine allo sviluppo del suo programma illegale nucleare e balistico'. Per giustificare la svolta, Trump ha accusato Pyongyang non solo di 'minacciare il mondo con una devastazione nucleare' ma anche di aver 'ripetutamente sostenuto atti di terrorismo internazionale', compreso omicidi in suolo straniero. Il riferimento è all' uccisione all'aeroporto della capitale malese di Kim Jong Nam , il fratellastro del leader nordcoreano Kim Jong Un , ma non ci sono altri episodi noti. Tanto che alcuni esperti, come pure dirigenti Usa coperti dall'anonimato, dubitano che Pyongyang risponda ai criteri per una tale designazione. La mossa appare altamente simbolica, dato che la Corea del Nord è già pesantemente sanzionata a livello internazionale. Per il segretario di stato Rex Tillerson è solo l'ultima di una serie di passi per rafforzare la pressione su Pyongyang e costringerla a sedersi ad un tavolo perché gli Usa hanno sempre 'speranza nella diplomazia'. Ma nello stesso tempo è un monito per 'fermare e dissuadere' altri Paesi dal sostenere la Corea del Nord, finita nella blacklist 'anche per l'uso di armi chimiche'. Ma la mossa potrebbe anche essere controproducente, provocando una risposta di Kim o minando gli sforzi per sollecitare Pechino ad una maggiore pressione su Pyongyang. In ogni caso non aiuta il dialogo diretto tra Usa e Corea del Nord, che sembrava essere stato avviato in modo riservato. Come non aiutano gli scambi di insulti fra Trump e Kim. Nord Corea, Trump: 'Cerco di essere amico di Kim, sarebbe una bella cosa per il mondo'. Pyongyang era stata messa nella lista Usa degli Stati sponsor del terrorismo per aver fatto esplodere nel 1987 un volo della Korean Air uccidendo tutti i 115 passeggeri a bordo. Ma l'amministrazione di George W. Bush l'aveva rimossa sperando di far avanzare i negoziati sulla denuclearizzazione della penisola coreana. Il governo giapponese sostiene la decisione degli Stati Uniti di inserire la Corea del Nord nella lista degli stati che sponsorizzano il terrorismo, pur riconoscendo che l'annuncio potrebbe provocare una reazione immediata del regime di Pyongyang. Il premier Shinzo Abe ha accolto con consenso il comunicato Usa e ha detto alla stampa che servirà a incrementare la pressione sulla Corea del Nord. Il ministro della Difesa Itsunori Onodera , pur valutando positivamente la notifica, ha spiegato che si attendono azioni provocatorie dallo stato eremita, ribadendo che è vitale rimanere vigili. Secondo la stampa nipponica Abe aveva richiesto al dipartimento di Stato Usa di mettere la Corea del Nord sulla lista durante l'incontro col presidente Usa Donald Trump a Tokyo a inizio mese. 
L'ultimo lancio di missile balistico condotto da Pyongyang nell'oceano Pacifico, sorvolando il mare del Giappone, risale allo scorso settembre."
- text: "ROMA - Una nuova droga killer è stata sequestrata per la prima volta in Europa dagli investigatori del Nas. Si tratta di una nuova \"miscela psicoattiva altamente tossica\" per la prima volta individuata da forze di polizia, simile all'eroina sintetica, ma molto più economica e letale. Tanto che i 20 grammi scoperti sarebbero stati sufficienti per fabbricare ben 20.000 dosi e lo stesso contatto attraverso la pelle può provocare intossicazione. Individuata per la prima volta, la nuova droga presenta una struttura simile al farmaco sedativo Fentanyl ma con effetti molto più devastanti per l'organismo. Proveniva dell'estero ed era contenuta in un plico postale indirizzato in una città del centro Italia: è stata intercettata tramite accertamenti sul web grazie a un'operazione di intelligence che ha visto come protagonisti i militari della Sezione operativa centrale del Comando carabinieri per la Tutela della salute (Nas). Economica e letale, secondo gli investigatori \"in confronto l'eroina è quasi 'acqua fresca', anzi, proprio per la sua economicità, in alcuni casi viene venduta dai pusher a giovani conviti di comprare eroina\". La diffusione di nuove droghe sintetiche che continuamente appaiono sui mercati necessita di un'attività investigativa costante e complessa. Si tratta infatti di sostanze dalla struttura molecolare molto simile a quella del Fentanyl ma ogni volta leggermente diversa. Di qui la difficoltà di individuarle e l'importanza del nuovo sequestro. \"La chiamano impropriamente 'eroina sintetica' - spiega il comandante dei Nas, generale Adelmo Lusi - per il tipo di effetto psicotropo simile, ma dal punto di vista della tossicità è molto peggio: con 25 milligrammi di eroina ci si sballa, con 25mg di simil-fentanyl, come quello appena sequestrato, si muore\". Le indagini sono partite da ricoveri per overdose in ospedale, in cui arrivavano ragazzi che non rispondevano al trattamento disintossicante per l'eroina. La nuova sostanza verrà ora segnalata per l'inserimento tra le tabelle ministeriali degli stupefacenti prevista dal Dpr 309/1990."
- text: "Fragile come il burro. Il nostro territorio è precario. Ne sanno qualcosa i comuni che sono stati investititi dal maltempo . Il dissesto idrogeologico imperversa su tutto il territorio. Infatti, oltre 6.600 comuni , pari all’82% del totale, sono in aree ad elevato rischio idrogeologico, pari al 10% della sua superficie. La popolazione potenzialmente esposta è stimata in 5,8 milioni di persone. I dati emergono dalle recenti analisi fatte da Legambiente e Protezione civile, che mettono in evidenza come in 10 anni in Italia sia raddoppiata l’area dei territori colpiti da alluvioni e frane , passando da una media di quattro regioni all’anno a otto regioni. Nella classifica delle regioni a maggior rischio idrogeologico prima è la Calabria con il 100% dei comuni esposti; al 100% ci sono anche la provincia di Trento, il Molise, la Basilicata, l’Umbria, la Valle d’Aosta. Poi Marche, Liguria al 99%; Lazio, Toscana al 98%; Abruzzo (96%), Emilia-Romagna (95%), Campania e Friuli Venezia Giulia al 92%, Piemonte (87%), Sardegna (81%), Puglia (78%), Sicilia (71%), Lombardia (60%), provincia di Bolzano (59%), Veneto (56%). Tra le cause che condizionano ed amplificano il rischio idrogeologico c’è l’azione dell’uomo (abbandono e degrado, cementificazione, consumo di suolo, abusivismo, disboscamento e incendi). Ma anche e soprattutto la mancanza di una seria manutenzione ordinaria e non ad una organica politica di prevenzione."
- text: "Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\"."
metrics:
- rouge
- bertscore
model-index:
- name: it5-efficient-small-el32-headline-generation
results:
- task:
type: headline-generation
name: "Headline generation"
dataset:
type: headgen_it
name: "HeadGen-IT"
metrics:
- type: rouge1
value: 0.299
name: "Test Rouge1"
- type: rouge2
value: 0.108
name: "Test Rouge2"
- type: rougeL
value: 0.264
name: "Test RougeL"
- type: bertscore
value: 0.427
name: "Test BERTScore"
---
# IT5 Cased Small Efficient EL32 for News Headline Generation 🗞️ 🇮🇹
*Shout-out to [Stefan Schweter](https://github.com/stefan-it) for contributing the pre-trained efficient model!*
This repository contains the checkpoint for the [IT5 Cased Small Efficient EL32](https://huggingface.co/it5/it5-efficient-small-el32) model fine-tuned on news headline generation on the Italian HeadGen-IT dataset as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
Efficient IT5 models differ from the standard ones by adopting a different vocabulary that enables cased text generation and an [optimized model architecture](https://arxiv.org/abs/2109.10686) to improve performance while reducing parameter count. The Small-EL32 variant replaces the original encoder of the T5 Small architecture with a 32-layer deep encoder, showing improved performance over the base model.
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in TensorFlow, PyTorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
hg = pipeline("text2text-generation", model='it5/it5-efficient-small-el32-headline-generation')
hg("Arriva dal Partito nazionalista basco (Pnv) la conferma che i cinque deputati che siedono in parlamento voteranno la sfiducia al governo guidato da Mariano Rajoy. Pochi voti, ma significativi quelli della formazione politica di Aitor Esteban, che interverrà nel pomeriggio. Pur con dimensioni molto ridotte, il partito basco si è trovato a fare da ago della bilancia in aula. E il sostegno alla mozione presentata dai Socialisti potrebbe significare per il primo ministro non trovare quei 176 voti che gli servono per continuare a governare. \" Perché dovrei dimettermi io che per il momento ho la fiducia della Camera e quella che mi è stato data alle urne \", ha detto oggi Rajoy nel suo intervento in aula, mentre procedeva la discussione sulla mozione di sfiducia. Il voto dei baschi ora cambia le carte in tavola e fa crescere ulteriormente la pressione sul premier perché rassegni le sue dimissioni. La sfiducia al premier, o un'eventuale scelta di dimettersi, porterebbe alle estreme conseguenze lo scandalo per corruzione che ha investito il Partito popolare. Ma per ora sembra pensare a tutt'altro. \"Non ha intenzione di dimettersi - ha detto il segretario generale del Partito popolare , María Dolores de Cospedal - Non gioverebbe all'interesse generale o agli interessi del Pp\".")
>>> [{"generated_text": "il nazionalista rajoy: 'voteremo la sfiducia'"}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/it5-efficient-small-el32-headline-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/it5-efficient-small-el32-headline-generation")
```
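Beyond the single-headline example above, the `hg` pipeline from the first snippet can also return several candidate headlines via beam search; the article placeholder and generation settings below are illustrative assumptions:
```python
candidates = hg(
    "Testo di un articolo di esempio...",  # placeholder article
    max_length=64,
    num_beams=5,
    num_return_sequences=3,
)
for candidate in candidates:
    print(candidate["generated_text"])
```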
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
```
|
seonghyeonye/flipped_11B
|
seonghyeonye
| 2022-10-12T12:53:58Z | 16 | 20 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:bigscience/P3",
"arxiv:2210.02969",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-11T06:16:21Z |
---
datasets:
- bigscience/P3
language: en
license: apache-2.0
---
**Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning)
# Model Description
FLIPPED uses a unique meta-learning method to show zero-shot task generalization on classification natural language prompts, outperforming GPT-3 and T0-11B on many tasks with a 4x smaller scale.
It is a series of encoder-decoder models trained on numerous classification datasets. We show FLIPPED the inputs and corresponding outputs of each instance in each dataset, and train it to generate a plausible instruction. We add an unlikelihood loss so that the model does **not** generate the instruction when given the same input but a wrong output. To obtain FLIPPED, we fine-tune a T5 model at a given scale on a multitask mixture covering many different classification NLP tasks.
# Intended uses
You can use the models to perform inference on tasks by specifying your input-output NLP query in the form "input: {input}\noutput: {output}", and the model will predict the instruction. For example, you can try
*"input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\noutput: Positive"*
as an input, and the model will hopefully generate *"Title: Review:"*.
# How to use
A full explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](https://huggingface.co/seonghyeonye/flipped_11B) checkpoint as it leads (on average) to the best performance on a variety of NLP tasks.
|Model|Number of parameters|
|-|-|
|[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion|
|[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion|
Here is how to download the model in PyTorch:
```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_11B")
tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_11B")
```
If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`.
We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can run inference with our method.
**Note: the model was trained with bfloat16 activations. As such, we highly discourage running inference with fp16.**
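For illustration, here is a minimal inference sketch (not from the official repository) that follows the "input: {input}\noutput: {output}" query format described above; the decoding settings are assumptions, and running the 11B checkpoint this way requires a correspondingly large amount of memory:
```python
query = (
    "input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\n"
    "output: Positive"
)
inputs = tokenizer(query, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
# Expected to resemble an instruction such as "Title: Review:"
```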
# Training procedure
FLIPPED models are based on [T5](https://huggingface.co/google/t5-v1_1-xxl), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on [C4](https://huggingface.co/datasets/c4).
At a high level, the input text along with the output label is fed to the encoder, and the instruction text is produced by the decoder. The model is fine-tuned to autoregressively generate the target. We also feed the input text along with a wrong output, adding an unlikelihood loss so that the model does not produce the proper instruction in that case. Here are our training details.
Training details:
- Fine-tuning steps: 5'000
- Input sequence length: 384
- Target sequence length: 64
- Batch size: 240
- Optimizer: Adafactor
- Learning rate: 5e-5
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (any dataset with over 500'000 examples was randomly subsampled to at most 500'000 examples. Also, we randomly choose which instruction to generate at each training step, so ideally each instruction appears *num_examples/num_templates* times during training.)
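To make the unlikelihood objective described above concrete, here is a schematic sketch of how the two loss terms could be combined; it is an illustrative reconstruction, not the authors' implementation, and the exact weighting and masking details live in the official repository:
```python
import torch
import torch.nn.functional as F

def flipped_loss(model, enc_correct, enc_wrong, instruction_labels):
    # Likelihood term: generate the instruction from (input, correct output).
    lm_loss = model(**enc_correct, labels=instruction_labels).loss

    # Unlikelihood term: discourage generating the same instruction
    # from (input, wrong output).
    logits = model(**enc_wrong, labels=instruction_labels).logits
    logprobs = F.log_softmax(logits, dim=-1)
    tok_logp = logprobs.gather(-1, instruction_labels.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    mask = (instruction_labels != -100).float()
    probs = tok_logp.exp().clamp(max=1 - 1e-6)
    ul_loss = -(torch.log1p(-probs) * mask).sum() / mask.sum()

    return lm_loss + ul_loss
```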
# Training data
We trained different variants of FLIPPED with different mixtures of datasets.
|Model|Training datasets|
|--|--|
|FLIPPED-11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|FLIPPED_3B|Same as FLIPPED-11B|
We only choose prompt examples that have output labels, which can be found on the dataset page.
# Evaluation data
We evaluate our models on the following datasets:
|Task category|Datasets|
|-|-|
|Natural language inference|ANLI(R1, R2, R3), CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
|QA|PIQA, ARC-Challenge, OpenbookQA|
We also evaluate FLIPPED on a subset of [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
# Label generalization
We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969).
|Task category|(Datasets, Template name)|
|-|-|
|Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)|
|Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning) |
The template name we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates).
# BibTeX entry and citation info
```bibtex
@article{ye2022guess,
title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners},
author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon},
journal={arXiv preprint arXiv:2210.02969},
year={2022}
}
```
|
csam/finetuning-sentiment-model-3000-samples
|
csam
| 2022-10-12T11:14:28Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-12T11:01:08Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.88
- name: F1
type: f1
value: 0.880794701986755
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2913
- Accuracy: 0.88
- F1: 0.8808
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
Linus4Lyf/my-awesome-setfit-model
|
Linus4Lyf
| 2022-10-12T11:00:56Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-12T11:00:44Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
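Since the model is meant for sentence similarity, a short follow-up sketch (not part of the original card) shows how to score the two example sentences with cosine similarity over the embeddings computed above:
```python
import torch.nn.functional as F

normalized = F.normalize(sentence_embeddings, p=2, dim=1)
similarity_matrix = normalized @ normalized.T
print("Cosine similarity:", similarity_matrix[0, 1].item())
```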
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
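For reference, the parameters above correspond roughly to the following `sentence-transformers` training sketch; the base checkpoint and the toy labelled pairs are assumptions standing in for the undocumented training data:
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

# Assumed base checkpoint (the architecture below shows an MPNet encoder).
model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# Toy labelled pairs standing in for the actual training data.
train_examples = [
    InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8),
    InputExample(texts=["This is an example sentence", "An unrelated statement"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=4,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```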
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Guihss/TypeBERT
|
Guihss
| 2022-10-12T10:40:25Z | 131 | 0 |
transformers
|
[
"transformers",
"pytorch",
"type_bert",
"text-classification",
"custom_code",
"license:openrail",
"autotrain_compatible",
"region:us"
] |
text-classification
| 2022-10-12T09:45:16Z |
---
license: openrail
widget:
- text: "a form of locomotion or movement in which an organism or non-living (e.g., robotic) mechanical system propels itself through the air along a ballistic trajectory."
---
|
troesy/bert-base-uncased-hatexplain-label-all-tokens-True-3epoch
|
troesy
| 2022-10-12T10:35:44Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T10:22:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-hatexplain-label-all-tokens-True-3epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-hatexplain-label-all-tokens-True-3epoch
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2139
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.2211 |
| No log | 2.0 | 348 | 0.2089 |
| 0.2165 | 3.0 | 522 | 0.2139 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
Farhan11/bert-finetuned-ner
|
Farhan11
| 2022-10-12T10:29:27Z | 68 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T09:19:08Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Farhan11/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Farhan11/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0272
- Validation Loss: 0.0522
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1799 | 0.0605 | 0 |
| 0.0479 | 0.0561 | 1 |
| 0.0272 | 0.0522 | 2 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.8.2
- Datasets 2.5.2
- Tokenizers 0.13.1
|
Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer
|
Yoshiki
| 2022-10-12T10:09:17Z | 7 | 0 |
espnet
|
[
"espnet",
"audio",
"speech-enhancement-recognition",
"en",
"dataset:chime4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-10-12T09:15:17Z |
---
tags:
- espnet
- audio
- speech-enhancement-recognition
language: en
datasets:
- chime4
license: cc-by-4.0
---
## ESPnet2 EnhS2T model
### `Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer`
This model was trained by Yoshiki using the chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 8ed83f45d5aa2ca6b3635e44b9c29afb9b5fb600
pip install -e .
cd egs2/chime4/enh_asr1
./run.sh --skip_data_prep false --skip_train true --download_model Yoshiki/chime4_enh_asr1_wpd_wavlm_conformer
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Oct 11 02:40:53 UTC 2022`
- python version: `3.7.4 (default, Aug 13 2019, 20:35:49) [GCC 7.3.0]`
- espnet version: `espnet 202207`
- pytorch version: `pytorch 1.10.1+cu111`
- Git hash: ``
- Commit date: ``
## enh_asr_train_enh_asr_wpd_init_noenhloss_wavlm_conformer_raw_en_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_real_isolated_6ch_track|1640|27119|98.8|0.9|0.2|0.2|1.3|16.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_simu_isolated_6ch_track|1640|27120|98.9|0.9|0.2|0.1|1.3|15.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_real_isolated_6ch_track|1320|21409|98.4|1.4|0.2|0.2|1.8|20.6|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_simu_isolated_6ch_track|1320|21416|98.9|1.0|0.2|0.1|1.2|15.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_real_isolated_6ch_track|1640|160390|99.7|0.1|0.2|0.2|0.5|16.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/dt05_simu_isolated_6ch_track|1640|160400|99.7|0.1|0.2|0.1|0.5|15.2|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_real_isolated_6ch_track|1320|126796|99.5|0.2|0.3|0.2|0.7|20.6|
|decode_asr_transformer_largelm_normalize_output_wavtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_enh_asr_model_valid.acc.ave_10best/et05_simu_isolated_6ch_track|1320|126812|99.7|0.2|0.2|0.1|0.5|15.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## EnhS2T config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_asr_wpd_init_noenhloss_wavlm_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_asr_train_enh_asr_wpd_init_noenhloss_wavlm_conformer_raw_en_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 31
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
- - train
- loss
- min
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 1
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../enh1/exp/enh_train_enh_beamformer_wpd_ci_sdr_shorttap_raw/valid.loss.best.pth:separator:enh_model.separator
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:frontend:s2t_model.frontend
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:preencoder:s2t_model.preencoder
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:encoder:s2t_model.encoder
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:ctc:s2t_model.ctc
- ../asr1/exp/asr_train_asr_conformer_wavlm2_raw_en_char/valid.acc.best.pth:decoder:s2t_model.decoder
ignore_init_mismatch: false
freeze_param:
- s2t_model.frontend.upstream
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_asr_stats_raw_en_char/train/speech_shape
- exp/enh_asr_stats_raw_en_char/train/speech_ref1_shape
- exp/enh_asr_stats_raw_en_char/train/text_spk1_shape.char
valid_shape_file:
- exp/enh_asr_stats_raw_en_char/valid/speech_shape
- exp/enh_asr_stats_raw_en_char/valid/speech_ref1_shape
- exp/enh_asr_stats_raw_en_char/valid/text_spk1_shape.char
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_multi_isolated_6ch_track/wav.scp
- speech
- sound
- - dump/raw/tr05_multi_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_multi_isolated_6ch_track/text_spk1
- text_spk1
- text
valid_data_path_and_name_and_type:
- - dump/raw/dt05_multi_isolated_6ch_track/wav.scp
- speech
- sound
- - dump/raw/dt05_multi_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_multi_isolated_6ch_track/text_spk1
- text_spk1
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: sgd
optim_conf:
lr: 0.001
momentum: 0.9
scheduler: null
scheduler_conf: {}
token_list: data/en_token_list/char/tokens.txt
src_token_list: null
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
enh_criterions:
- name: ci_sdr
conf:
filter_length: 512
wrapper: fixed_order
wrapper_conf:
weight: 0.1
diar_num_spk: null
diar_input_size: null
enh_model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
asr_model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
st_model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: null
diar_model_conf:
diar_weight: 1.0
attractor_weight: 1.0
subtask_series:
- enh
- asr
model_conf:
calc_enh_loss: false
bypass_enh_prob: 0.0
use_preprocessor: true
token_type: char
bpemodel: null
src_token_type: bpe
src_bpemodel: null
non_linguistic_symbols: data/nlsyms.txt
cleaner: null
g2p: null
text_name:
- text_spk1
enh_encoder: stft
enh_encoder_conf:
n_fft: 512
win_length: 400
hop_length: 128
use_builtin_complex: false
enh_separator: wpe_beamformer
enh_separator_conf:
num_spk: 1
loss_type: spectrum
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 512
wprojs: 512
wdropout_rate: 0.0
taps: 3
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 4
use_noise_mask: true
beamformer_type: wpd_souden
bdropout_rate: 0.0
enh_decoder: stft
enh_decoder_conf:
n_fft: 512
win_length: 400
hop_length: 128
enh_mask_module: multi_mask
enh_mask_module_conf: {}
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 100
num_freq_mask: 4
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
asr_preencoder: linear
asr_preencoder_conf:
input_size: 1024
output_size: 80
asr_encoder: conformer
asr_encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d2
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
asr_postencoder: null
asr_postencoder_conf: {}
asr_decoder: transformer
asr_decoder_conf:
input_layer: embed
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
st_preencoder: null
st_preencoder_conf: {}
st_encoder: rnn
st_encoder_conf: {}
st_postencoder: null
st_postencoder_conf: {}
st_decoder: rnn
st_decoder_conf: {}
st_extra_asr_decoder: rnn
st_extra_asr_decoder_conf: {}
st_extra_mt_decoder: rnn
st_extra_mt_decoder_conf: {}
diar_frontend: default
diar_frontend_conf: {}
diar_specaug: null
diar_specaug_conf: {}
diar_normalize: utterance_mvn
diar_normalize_conf: {}
diar_encoder: transformer
diar_encoder_conf: {}
diar_decoder: linear
diar_decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
diar_attractor: null
diar_attractor_conf: {}
required:
- output_dir
version: '202207'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
troesy/toxicbert-hatexplain-label-all-tokens-False-3epoch
|
troesy
| 2022-10-12T09:58:31Z | 124 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T09:45:24Z |
---
tags:
- generated_from_trainer
model-index:
- name: toxicbert-hatexplain-label-all-tokens-False-3epoch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# toxicbert-hatexplain-label-all-tokens-False-3epoch
This model is a fine-tuned version of [unitary/toxic-bert](https://huggingface.co/unitary/toxic-bert) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 174 | 0.1816 |
| No log | 2.0 | 348 | 0.1751 |
| 0.1869 | 3.0 | 522 | 0.1779 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
anton-l/common_voice_generator
|
anton-l
| 2022-10-12T09:40:09Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-04-29T15:53:56Z |
## Common voice release generator
1. Copy the latest release id from the `RELEASES` dict in https://github.com/common-voice/common-voice/blob/main/web/src/components/pages/datasets/releases.ts
to the `VERSIONS` variable in `generate_datasets.py`.
2. Copy the languages from https://github.com/common-voice/common-voice/blob/release-v1.78.0/web/locales/en/messages.ftl
(replacing `release-v1.78.0` with the latest version tag) to the `languages.ftl` file.
3. Run `python generate_datasets.py` to generate the dataset repos.
4. `cd ..`
5. `huggingface-cli repo create --type dataset --organization mozilla-foundation common_voice_11_0`
6. `git clone https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0`
7. `cd common_voice_11_0`
8. `cp ../common_voice_generator/common_voice_11_0/* ./`
9. `git add . && git commit -m "Release" && git push`
|
farzeen/q-Taxi-v3
|
farzeen
| 2022-10-12T09:38:35Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-11T02:27:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="farzeen/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
farzeen/q-FrozenLake-v0-4x4-noSlippery
|
farzeen
| 2022-10-12T09:35:35Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-12T01:35:32Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v0-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="farzeen/q-FrozenLake-v0-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
din0s/bart-large-asqa-cb
|
din0s
| 2022-10-12T09:27:35Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-12T08:51:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-large-asqa-cb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-asqa-cb
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4791
- Rougelsum: 38.2862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:---------:|
| 3.347 | 1.0 | 545 | 2.5353 | 37.3812 |
| 2.7829 | 2.0 | 1090 | 2.5087 | 37.6431 |
| 2.6973 | 3.0 | 1635 | 2.4906 | 37.9194 |
| 2.6125 | 4.0 | 2180 | 2.4812 | 38.1180 |
| 2.5697 | 5.0 | 2725 | 2.4762 | 38.1616 |
| 2.5086 | 6.0 | 3270 | 2.4773 | 38.1370 |
| 2.4678 | 7.0 | 3815 | 2.4831 | 37.9346 |
| 2.4404 | 8.0 | 4360 | 2.4896 | 38.1150 |
| 2.3866 | 9.0 | 4905 | 2.4775 | 38.2222 |
| 2.3791 | 10.0 | 5450 | 2.4791 | 38.2862 |
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
osanseviero/q-FrozenLake-v1-4x4-noSlippery-works
|
osanseviero
| 2022-10-12T07:57:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-12T07:57:11Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-works
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="osanseviero/q-FrozenLake-v1-4x4-noSlippery-works", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
kiddothe2b/longformer-base-4096
|
kiddothe2b
| 2022-10-12T07:51:14Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"fill-mask",
"long-documents",
"en",
"dataset:c4",
"arxiv:2004.05150",
"arxiv:2210.05529",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-10T15:10:09Z |
---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
arxiv: 2210.05529
language: en
tags:
- long-documents
datasets:
- c4
model-index:
- name: kiddothe2b/longformer-base-4096
results: []
---
# Longformer / longformer-base-4096
## Model description
[Longformer](https://arxiv.org/abs/2004.05150) is a transformer model for long documents. This version of Longformer is the one presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).
The model has been warm-started by re-using the weights of RoBERTa (Liu et al., 2019), and further pre-trained for MLM on long sequences following the paradigm of the original Longformer released by Beltagy et al. (2020). It supports sequences of length up to 4,096.
Longformer uses a combination of a sliding window (local) attention and global attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=longformer) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification or question answering.
## How to use
You can use this model directly with a pipeline for masked language modeling:
```python
from transformers import pipeline
mlm_model = pipeline('fill-mask', model='kiddothe2b/longformer-base-4096', trust_remote_code=True)
mlm_model("Hello I'm a <mask> model.")
```
You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice down-stream tasks:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/longformer-base-4096", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/longformer-base-4096", trust_remote_code=True)
```
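Because global attention is user-configured per task (see the model description), a classification forward pass typically marks at least the first token as global. A minimal sketch, assuming the checkpoint exposes the standard Longformer `global_attention_mask` argument:
```python
import torch

inputs = tokenizer("A very long document ...", return_tensors="pt")
# Standard Longformer convention: give the first (<s>/CLS) token global attention.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1
outputs = doc_classifier(**inputs, global_attention_mask=global_attention_mask)
print(outputs.logits)  # the classification head is untrained until fine-tuning
```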
## Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training procedure
### Training and evaluation data
The model has been warm-started from the [roberta-base](https://huggingface.co/roberta-base) checkpoint and further pre-trained for an additional 50k steps on long sequences (> 1024 subwords) of [C4](https://huggingface.co/datasets/c4) (Raffel et al., 2020).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.7067 | 0.2 | 10000 | 1.5923 | 0.6714 |
| 1.6532 | 0.4 | 20000 | 1.5494 | 0.6784 |
| 1.622 | 0.6 | 30000 | 1.5208 | 0.6830 |
| 1.588 | 0.8 | 40000 | 1.4880 | 0.6876 |
| 1.5682 | 1.0 | 50000 | 1.4680 | 0.6908 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0
- Datasets 2.0.0
- Tokenizers 0.11.6
## Citing
If you use HAT in your research, please cite:
[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).
```
@misc{chalkidis-etal-2022-hat,
url = {https://arxiv.org/abs/2210.05529},
author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
publisher = {arXiv},
year = {2022},
}
```
Also cite the original work: [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150).
```
@article{Beltagy2020Longformer,
title={Longformer: The Long-Document Transformer},
author={Iz Beltagy and Matthew E. Peters and Arman Cohan},
journal={arXiv:2004.05150},
year={2020},
}
```
|
kiddothe2b/hierarchical-transformer-LC1-mini-1024
|
kiddothe2b
| 2022-10-12T07:46:58Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"hierarchical-transformer",
"fill-mask",
"long-documents",
"custom_code",
"en",
"dataset:wikipedia",
"arxiv:2210.05529",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2022-10-11T09:02:48Z |
---
license: cc-by-sa-4.0
pipeline_tag: fill-mask
language: en
arxiv: 2210.05529
tags:
- long-documents
datasets:
- wikipedia
model-index:
- name: kiddothe2b/hierarchical-transformer-LC1-mini-1024
results: []
---
# Hierarchical Attention Transformer (HAT) / hierarchical-transformer-LC1-mini-1024
## Model description
This is a Hierarchical Attention Transformer (HAT) model as presented in [An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification (Chalkidis et al., 2022)](https://arxiv.org/abs/2210.05529).
The model has been warm-started by re-using the weights of a miniature BERT (Turc et al., 2019), and further pre-trained for MLM following the paradigm of Longformer released by Beltagy et al. (2020). It supports sequences of length up to 1,024.
HAT uses hierarchical attention, which is a combination of segment-wise and cross-segment attention operations. You can think of segments as paragraphs or sentences.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?other=hierarchical-transformer) to look for other versions of HAT, or fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole document to make decisions, such as document classification, sequential sentence classification, or question answering.
## How to use
You can use this model directly for masked language modeling:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
mlm_model = AutoModelForMaskedLM.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
```
You can also fine-tune it for SequenceClassification, SequentialSentenceClassification, and MultipleChoice downstream tasks:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
doc_classifier = AutoModelForSequenceClassification.from_pretrained("kiddothe2b/hierarchical-transformer-LC1-mini-1024", trust_remote_code=True)
```
## Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from
neutral. Therefore, the model can have biased predictions.
## Training procedure
### Training and evaluation data
The model has been warm-started from the [google/bert_uncased_L-6_H-256_A-4](https://huggingface.co/google/bert_uncased_L-6_H-256_A-4) checkpoint and further pre-trained for an additional 50k steps on English [Wikipedia](https://huggingface.co/datasets/wikipedia).
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: tpu
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 50000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3959 | 0.2 | 10000 | 2.2258 |
| 2.3395 | 0.4 | 20000 | 2.1738 |
| 2.3082 | 0.6 | 30000 | 2.1404 |
| 2.273 | 0.8 | 40000 | 2.1145 |
| 2.262 | 1.14 | 50000 | 2.1004 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
## Citing
If you use HAT in your research, please cite:
[An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification](https://arxiv.org/abs/2210.05529). Ilias Chalkidis, Xiang Dai, Manos Fergadiotis, Prodromos Malakasiotis, and Desmond Elliott. 2022. arXiv:2210.05529 (Preprint).
```
@misc{chalkidis-etal-2022-hat,
url = {https://arxiv.org/abs/2210.05529},
author = {Chalkidis, Ilias and Dai, Xiang and Fergadiotis, Manos and Malakasiotis, Prodromos and Elliott, Desmond},
title = {An Exploration of Hierarchical Attention Transformers for Efficient Long Document Classification},
publisher = {arXiv},
year = {2022},
}
```
|
Ferro/xlm-roberta-base-finetuned-panx-fr
|
Ferro
| 2022-10-12T07:44:35Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T07:25:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8346456692913387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2763
- F1: 0.8346
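As a quick sanity check, the checkpoint should load with the standard token-classification pipeline; the snippet below is a minimal sketch, and the example sentence is illustrative.
```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned PAN-X French NER model on a sample sentence.
ner = pipeline(
    "token-classification",
    model="Ferro/xlm-roberta-base-finetuned-panx-fr",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Emmanuel Macron a prononcé un discours à Paris."))
```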
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5779 | 1.0 | 191 | 0.3701 | 0.7701 |
| 0.2735 | 2.0 | 382 | 0.2908 | 0.8254 |
| 0.1769 | 3.0 | 573 | 0.2763 | 0.8346 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Ferro/xlm-roberta-base-finetuned-panx-de-it
|
Ferro
| 2022-10-12T07:24:50Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-12T06:59:01Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1475
- F1: 0.8589
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2761 | 1.0 | 595 | 0.1644 | 0.8164 |
| 0.1333 | 2.0 | 1190 | 0.1473 | 0.8525 |
| 0.0848 | 3.0 | 1785 | 0.1475 | 0.8589 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingtweets/deepleffen-the_dealersh1p
|
huggingtweets
| 2022-10-12T05:24:36Z | 130 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-12T05:09:22Z |
---
language: en
thumbnail: http://www.huggingtweets.com/deepleffen-the_dealersh1p/1665552272191/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1211158441504456704/dCNSnY4k_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1241879678455078914/e2EdZIrr_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">『 』『dan』『 』 & Deep Leffen Bot</div>
<div style="text-align: center; font-size: 14px;">@deepleffen-the_dealersh1p</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from 『 』『dan』『 』 & Deep Leffen Bot.
| Data | 『 』『dan』『 』 | Deep Leffen Bot |
| --- | --- | --- |
| Tweets downloaded | 2673 | 608 |
| Retweets | 1336 | 14 |
| Short tweets | 235 | 27 |
| Tweets kept | 1102 | 567 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2xu780cl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @deepleffen-the_dealersh1p's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3w2qdw30) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3w2qdw30/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/deepleffen-the_dealersh1p')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
MuhammadIqbalBazmi/wav2vec2-large-960h-lv60-self-intent-classification-ori
|
MuhammadIqbalBazmi
| 2022-10-12T05:12:19Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-10-12T04:48:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-960h-lv60-self-intent-classification-ori
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-960h-lv60-self-intent-classification-ori
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1985
- Accuracy: 0.5417
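For inference, the checkpoint should load with the standard audio-classification pipeline; the snippet below is a minimal sketch, and the audio path is a placeholder (16 kHz mono audio is the usual expectation for wav2vec 2.0 models).
```python
from transformers import pipeline

# Minimal sketch: predict the intent label of a speech clip.
classifier = pipeline(
    "audio-classification",
    model="MuhammadIqbalBazmi/wav2vec2-large-960h-lv60-self-intent-classification-ori",
)
print(classifier("path/to/utterance.wav"))  # placeholder path
```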
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2033 | 1.0 | 14 | 2.2126 | 0.0833 |
| 2.2006 | 2.0 | 28 | 2.2026 | 0.0833 |
| 2.1786 | 3.0 | 42 | 2.1758 | 0.3333 |
| 2.1712 | 4.0 | 56 | 2.1436 | 0.3333 |
| 2.1495 | 5.0 | 70 | 2.1120 | 0.3333 |
| 2.1326 | 6.0 | 84 | 2.0909 | 0.3333 |
| 2.1039 | 7.0 | 98 | 2.0966 | 0.3333 |
| 2.0931 | 8.0 | 112 | 2.0355 | 0.3333 |
| 2.1144 | 9.0 | 126 | 2.0082 | 0.3333 |
| 2.0258 | 10.0 | 140 | 1.9901 | 0.375 |
| 2.0028 | 11.0 | 154 | 1.9429 | 0.3958 |
| 1.9737 | 12.0 | 168 | 1.9538 | 0.3958 |
| 1.9023 | 13.0 | 182 | 1.8824 | 0.375 |
| 1.9226 | 14.0 | 196 | 1.8607 | 0.3958 |
| 1.8521 | 15.0 | 210 | 1.8065 | 0.3958 |
| 1.7752 | 16.0 | 224 | 1.8153 | 0.4167 |
| 1.8391 | 17.0 | 238 | 1.7470 | 0.4375 |
| 1.7041 | 18.0 | 252 | 1.7419 | 0.4167 |
| 1.7075 | 19.0 | 266 | 1.6644 | 0.4375 |
| 1.6845 | 20.0 | 280 | 1.6340 | 0.4375 |
| 1.6275 | 21.0 | 294 | 1.6271 | 0.4167 |
| 1.4586 | 22.0 | 308 | 1.5640 | 0.4375 |
| 1.4987 | 23.0 | 322 | 1.5279 | 0.4583 |
| 1.5513 | 24.0 | 336 | 1.4873 | 0.4792 |
| 1.4828 | 25.0 | 350 | 1.4887 | 0.4583 |
| 1.4711 | 26.0 | 364 | 1.4613 | 0.4583 |
| 1.371 | 27.0 | 378 | 1.4062 | 0.4792 |
| 1.3789 | 28.0 | 392 | 1.4038 | 0.4792 |
| 1.3579 | 29.0 | 406 | 1.4031 | 0.4792 |
| 1.2771 | 30.0 | 420 | 1.3637 | 0.5 |
| 1.3417 | 31.0 | 434 | 1.3655 | 0.5 |
| 1.231 | 32.0 | 448 | 1.3698 | 0.5 |
| 1.2367 | 33.0 | 462 | 1.3394 | 0.5 |
| 1.2933 | 34.0 | 476 | 1.3448 | 0.4792 |
| 1.1631 | 35.0 | 490 | 1.2867 | 0.5417 |
| 1.165 | 36.0 | 504 | 1.2624 | 0.5417 |
| 1.2431 | 37.0 | 518 | 1.2252 | 0.5625 |
| 1.1731 | 38.0 | 532 | 1.2082 | 0.5625 |
| 1.1734 | 39.0 | 546 | 1.2062 | 0.5417 |
| 1.1631 | 40.0 | 560 | 1.2034 | 0.5417 |
| 1.0963 | 41.0 | 574 | 1.1973 | 0.5417 |
| 1.2157 | 42.0 | 588 | 1.1988 | 0.5625 |
| 1.1467 | 43.0 | 602 | 1.2018 | 0.5417 |
| 1.1503 | 44.0 | 616 | 1.1986 | 0.5417 |
| 1.0945 | 45.0 | 630 | 1.1985 | 0.5417 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MuhammadIqbalBazmi/wav2vec2-xls-r-300m-intent-classification-ori
|
MuhammadIqbalBazmi
| 2022-10-12T04:13:26Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-10-11T22:05:35Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-xls-r-300m-intent-classification-ori
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-intent-classification-ori
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3107
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1982 | 1.0 | 14 | 2.1951 | 0.0625 |
| 2.2021 | 2.0 | 28 | 2.1847 | 0.1458 |
| 2.1819 | 3.0 | 42 | 2.1661 | 0.3333 |
| 2.1789 | 4.0 | 56 | 2.1413 | 0.3333 |
| 2.164 | 5.0 | 70 | 2.1183 | 0.3333 |
| 2.1484 | 6.0 | 84 | 2.0974 | 0.3333 |
| 2.1199 | 7.0 | 98 | 2.0939 | 0.3333 |
| 2.1343 | 8.0 | 112 | 2.0829 | 0.3333 |
| 2.1397 | 9.0 | 126 | 2.0654 | 0.3333 |
| 2.1045 | 10.0 | 140 | 2.0553 | 0.3333 |
| 2.1083 | 11.0 | 154 | 2.0255 | 0.3333 |
| 2.0914 | 12.0 | 168 | 2.0065 | 0.3333 |
| 2.0434 | 13.0 | 182 | 1.9696 | 0.3333 |
| 2.0687 | 14.0 | 196 | 1.9231 | 0.4167 |
| 2.0237 | 15.0 | 210 | 1.8679 | 0.4167 |
| 1.9562 | 16.0 | 224 | 1.8184 | 0.4167 |
| 2.0361 | 17.0 | 238 | 1.8803 | 0.3958 |
| 1.888 | 18.0 | 252 | 1.7802 | 0.4167 |
| 1.899 | 19.0 | 266 | 1.7662 | 0.4167 |
| 1.8959 | 20.0 | 280 | 1.7076 | 0.4167 |
| 1.8368 | 21.0 | 294 | 1.6566 | 0.4375 |
| 1.7358 | 22.0 | 308 | 1.6283 | 0.5 |
| 1.7877 | 23.0 | 322 | 1.6411 | 0.4583 |
| 1.7311 | 24.0 | 336 | 1.5525 | 0.5208 |
| 1.7079 | 25.0 | 350 | 1.5163 | 0.5 |
| 1.6496 | 26.0 | 364 | 1.5458 | 0.5 |
| 1.6374 | 27.0 | 378 | 1.5211 | 0.5 |
| 1.6048 | 28.0 | 392 | 1.4533 | 0.5417 |
| 1.5927 | 29.0 | 406 | 1.4319 | 0.5 |
| 1.4987 | 30.0 | 420 | 1.4579 | 0.5208 |
| 1.5745 | 31.0 | 434 | 1.4167 | 0.6042 |
| 1.4632 | 32.0 | 448 | 1.4471 | 0.5417 |
| 1.4686 | 33.0 | 462 | 1.4116 | 0.5625 |
| 1.5368 | 34.0 | 476 | 1.3872 | 0.6042 |
| 1.4327 | 35.0 | 490 | 1.3491 | 0.5833 |
| 1.3978 | 36.0 | 504 | 1.3325 | 0.5833 |
| 1.4509 | 37.0 | 518 | 1.3236 | 0.6042 |
| 1.3881 | 38.0 | 532 | 1.3426 | 0.5833 |
| 1.39 | 39.0 | 546 | 1.3137 | 0.6042 |
| 1.4153 | 40.0 | 560 | 1.3123 | 0.625 |
| 1.3635 | 41.0 | 574 | 1.3224 | 0.6042 |
| 1.403 | 42.0 | 588 | 1.3111 | 0.6042 |
| 1.3763 | 43.0 | 602 | 1.3197 | 0.5833 |
| 1.3539 | 44.0 | 616 | 1.3077 | 0.6042 |
| 1.306 | 45.0 | 630 | 1.3107 | 0.625 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Michael02/q-Taxi-v3
|
Michael02
| 2022-10-12T02:25:38Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-12T02:25:30Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helpers from the Hugging Face Deep RL course notebook.
model = load_from_hub(repo_id="Michael02/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
sd-concepts-library/kirby
|
sd-concepts-library
| 2022-10-12T01:09:05Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-12T01:09:00Z |
---
license: mit
---
### Kirby on Stable Diffusion
This is the `<kirby>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
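Recent versions of 🤗 Diffusers can also load the embedding directly; the snippet below is a minimal sketch, assuming access to a Stable Diffusion base checkpoint (the base model and prompt are illustrative).
```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load a base checkpoint, then the <kirby> textual-inversion embedding.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/kirby")

image = pipe("a plush toy of <kirby> sitting on a shelf").images[0]
image.save("kirby.png")
```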
Here is the new concept you will be able to use as an `object`:









|
rhhast/chan
|
rhhast
| 2022-10-12T01:08:09Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-10-12T01:08:09Z |
---
license: bigscience-openrail-m
---
|
stevhliu/my_awesome_opus_books_model
|
stevhliu
| 2022-10-11T22:42:17Z | 109 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-11T21:34:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: opus_books
type: opus_books
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 5.9186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5550
- Bleu: 5.9186
- Gen Len: 17.5809
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 1.7524 | 1.0 | 6355 | 1.5696 | 5.8485 | 17.5793 |
| 1.7452 | 2.0 | 12710 | 1.5550 | 5.9186 | 17.5809 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
jglaser/affinity_pred_regex_2
|
jglaser
| 2022-10-11T22:23:10Z | 0 | 0 | null |
[
"pytorch",
"license:bsd-3-clause",
"region:us"
] | null | 2022-10-11T16:44:44Z |
---
license: bsd-3-clause
---
Copyright 2018-2022, UT-Battelle
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
jglaser/affinity_pred_regex_1
|
jglaser
| 2022-10-11T22:22:59Z | 0 | 0 | null |
[
"pytorch",
"license:bsd-3-clause",
"region:us"
] | null | 2022-10-11T16:31:49Z |
---
license: bsd-3-clause
---
Copyright 2018-2022, UT-Battelle
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
jglaser/affinity_pred_bert_4
|
jglaser
| 2022-10-11T22:22:25Z | 0 | 0 | null |
[
"pytorch",
"license:bsd-3-clause",
"region:us"
] | null | 2022-10-11T18:26:19Z |
---
license: bsd-3-clause
---
Copyright 2018-2022, UT-Battelle
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
jglaser/affinity_pred_bert_3
|
jglaser
| 2022-10-11T22:21:38Z | 0 | 0 | null |
[
"pytorch",
"license:bsd-3-clause",
"region:us"
] | null | 2022-10-11T18:22:37Z |
---
license: bsd-3-clause
---
Copyright 2018-2022, UT-Battelle
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
MuhammadIqbalBazmi/wav2vec2-xlsr-53-espeak-cv-ft-intent-classification-ori
|
MuhammadIqbalBazmi
| 2022-10-11T21:53:06Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-10-11T21:27:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-intent-classification-ori
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-intent-classification-ori
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9124
- Accuracy: 0.625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1894 | 1.0 | 14 | 2.1812 | 0.3333 |
| 2.1795 | 2.0 | 28 | 2.1553 | 0.3333 |
| 2.144 | 3.0 | 42 | 2.1066 | 0.3333 |
| 2.1175 | 4.0 | 56 | 2.0283 | 0.3542 |
| 2.0542 | 5.0 | 70 | 1.9253 | 0.3958 |
| 2.0007 | 6.0 | 84 | 1.8468 | 0.4167 |
| 1.8891 | 7.0 | 98 | 1.7655 | 0.4583 |
| 1.8484 | 8.0 | 112 | 1.6695 | 0.4792 |
| 1.8256 | 9.0 | 126 | 1.5920 | 0.5 |
| 1.6832 | 10.0 | 140 | 1.5331 | 0.5 |
| 1.6149 | 11.0 | 154 | 1.4763 | 0.5 |
| 1.5853 | 12.0 | 168 | 1.4453 | 0.5 |
| 1.4357 | 13.0 | 182 | 1.3588 | 0.5 |
| 1.4789 | 14.0 | 196 | 1.3238 | 0.4792 |
| 1.3886 | 15.0 | 210 | 1.2822 | 0.4792 |
| 1.313 | 16.0 | 224 | 1.2609 | 0.5 |
| 1.3559 | 17.0 | 238 | 1.2191 | 0.5208 |
| 1.1937 | 18.0 | 252 | 1.1936 | 0.5 |
| 1.1847 | 19.0 | 266 | 1.1547 | 0.5417 |
| 1.197 | 20.0 | 280 | 1.1390 | 0.5417 |
| 1.1057 | 21.0 | 294 | 1.1310 | 0.5208 |
| 1.0291 | 22.0 | 308 | 1.1086 | 0.5417 |
| 1.0768 | 23.0 | 322 | 1.1075 | 0.5417 |
| 1.0249 | 24.0 | 336 | 1.0654 | 0.5625 |
| 1.0433 | 25.0 | 350 | 1.0390 | 0.5625 |
| 0.9974 | 26.0 | 364 | 1.0086 | 0.6458 |
| 0.9578 | 27.0 | 378 | 0.9939 | 0.625 |
| 0.916 | 28.0 | 392 | 0.9938 | 0.625 |
| 0.9187 | 29.0 | 406 | 0.9843 | 0.625 |
| 0.8759 | 30.0 | 420 | 0.9755 | 0.625 |
| 0.9199 | 31.0 | 434 | 0.9822 | 0.6042 |
| 0.8791 | 32.0 | 448 | 0.9522 | 0.6458 |
| 0.8436 | 33.0 | 462 | 0.9414 | 0.6458 |
| 0.8692 | 34.0 | 476 | 0.9510 | 0.625 |
| 0.8201 | 35.0 | 490 | 0.9208 | 0.6667 |
| 0.8284 | 36.0 | 504 | 0.9398 | 0.6458 |
| 0.8761 | 37.0 | 518 | 0.9438 | 0.6458 |
| 0.7948 | 38.0 | 532 | 0.9253 | 0.6667 |
| 0.8339 | 39.0 | 546 | 0.9250 | 0.6458 |
| 0.8002 | 40.0 | 560 | 0.9145 | 0.6458 |
| 0.7791 | 41.0 | 574 | 0.9062 | 0.6667 |
| 0.7944 | 42.0 | 588 | 0.9077 | 0.6667 |
| 0.7777 | 43.0 | 602 | 0.9069 | 0.6458 |
| 0.7943 | 44.0 | 616 | 0.9118 | 0.625 |
| 0.7573 | 45.0 | 630 | 0.9124 | 0.625 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
MuhammadIqbalBazmi/wav2vec2-large-960h-intent-classification-ori
|
MuhammadIqbalBazmi
| 2022-10-11T21:18:11Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-10-11T20:59:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-960h-intent-classification-ori
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-960h-intent-classification-ori
This model is a fine-tuned version of [facebook/wav2vec2-large-960h](https://huggingface.co/facebook/wav2vec2-large-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6013
- Accuracy: 0.7708
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1937 | 1.0 | 14 | 2.1715 | 0.3333 |
| 2.1764 | 2.0 | 28 | 2.1317 | 0.3333 |
| 2.1358 | 3.0 | 42 | 2.0794 | 0.3333 |
| 2.1227 | 4.0 | 56 | 2.0367 | 0.3333 |
| 2.1157 | 5.0 | 70 | 1.9845 | 0.3333 |
| 2.0776 | 6.0 | 84 | 1.9369 | 0.3333 |
| 2.0126 | 7.0 | 98 | 1.8021 | 0.375 |
| 1.88 | 8.0 | 112 | 1.6825 | 0.4167 |
| 1.8818 | 9.0 | 126 | 1.4638 | 0.5 |
| 1.7044 | 10.0 | 140 | 1.4188 | 0.5208 |
| 1.5442 | 11.0 | 154 | 1.3421 | 0.5625 |
| 1.5045 | 12.0 | 168 | 1.3971 | 0.5 |
| 1.3369 | 13.0 | 182 | 1.1602 | 0.5833 |
| 1.4017 | 14.0 | 196 | 1.3510 | 0.5417 |
| 1.2565 | 15.0 | 210 | 1.0978 | 0.5625 |
| 1.1056 | 16.0 | 224 | 1.0847 | 0.5833 |
| 1.2006 | 17.0 | 238 | 1.0262 | 0.625 |
| 0.9235 | 18.0 | 252 | 0.9532 | 0.7083 |
| 0.9528 | 19.0 | 266 | 1.0212 | 0.6042 |
| 0.8195 | 20.0 | 280 | 0.8442 | 0.7083 |
| 0.7518 | 21.0 | 294 | 0.8379 | 0.6875 |
| 0.6017 | 22.0 | 308 | 0.9422 | 0.7292 |
| 0.7697 | 23.0 | 322 | 0.7353 | 0.75 |
| 0.5367 | 24.0 | 336 | 0.8685 | 0.6875 |
| 0.5655 | 25.0 | 350 | 0.7440 | 0.7708 |
| 0.5116 | 26.0 | 364 | 0.7572 | 0.75 |
| 0.4297 | 27.0 | 378 | 0.7518 | 0.75 |
| 0.4928 | 28.0 | 392 | 0.5988 | 0.7917 |
| 0.4424 | 29.0 | 406 | 0.7240 | 0.75 |
| 0.3313 | 30.0 | 420 | 0.6173 | 0.7708 |
| 0.3854 | 31.0 | 434 | 0.7375 | 0.75 |
| 0.4131 | 32.0 | 448 | 0.7026 | 0.7708 |
| 0.2899 | 33.0 | 462 | 0.6516 | 0.7708 |
| 0.3644 | 34.0 | 476 | 0.6201 | 0.7917 |
| 0.2316 | 35.0 | 490 | 0.6111 | 0.7708 |
| 0.2589 | 36.0 | 504 | 0.5518 | 0.7917 |
| 0.3778 | 37.0 | 518 | 0.5512 | 0.7708 |
| 0.2426 | 38.0 | 532 | 0.5779 | 0.7917 |
| 0.304 | 39.0 | 546 | 0.7771 | 0.75 |
| 0.1833 | 40.0 | 560 | 0.5839 | 0.7708 |
| 0.1649 | 41.0 | 574 | 0.5699 | 0.7708 |
| 0.2529 | 42.0 | 588 | 0.6190 | 0.75 |
| 0.2121 | 43.0 | 602 | 0.5992 | 0.75 |
| 0.2736 | 44.0 | 616 | 0.6011 | 0.7917 |
| 0.2446 | 45.0 | 630 | 0.6013 | 0.7708 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
cduncanja/emotion_model
|
cduncanja
| 2022-10-11T20:26:12Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-09T21:30:47Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- f1
model-index:
- name: emotion_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: F1
type: f1
value: 0.14545454545454545
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_model
This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7815
- F1: 0.1455
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.7968 | 1.0 | 2 | 1.7804 | 0.2286 |
| 1.7918 | 2.0 | 4 | 1.7812 | 0.2286 |
| 1.7867 | 3.0 | 6 | 1.7822 | 0.08 |
| 1.7884 | 4.0 | 8 | 1.7816 | 0.08 |
| 1.7833 | 5.0 | 10 | 1.7815 | 0.1455 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.11.0
|
Tritkoman/RussiantoChukchi
|
Tritkoman
| 2022-10-11T20:05:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"translation",
"en",
"nl",
"dataset:Tritkoman/autotrain-data-kkakkakqa",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-10-11T19:02:52Z |
---
tags:
- autotrain
- translation
language:
- en
- nl
datasets:
- Tritkoman/autotrain-data-kkakkakqa
co2_eq_emissions:
emissions: 96.54051975402358
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 1726160287
- CO2 Emissions (in grams): 96.5405
## Validation Metrics
- Loss: 0.151
- SacreBLEU: 51.859
- Gen len: 14.625
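A minimal usage sketch, assuming the exported checkpoint loads with the standard translation pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

# Minimal sketch: load the AutoTrain seq2seq checkpoint for translation.
translator = pipeline("translation", model="Tritkoman/RussiantoChukchi")
print(translator("Пример предложения для перевода."))
```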
|
MuhammadIqbalBazmi/wav2vec2-large-xlsr-53-intent-classification-ori
|
MuhammadIqbalBazmi
| 2022-10-11T19:55:06Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-10-11T17:50:12Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-large-xlsr-53-intent-classification-ori
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-intent-classification-ori
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7682
- Accuracy: 0.4167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 45
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2017 | 1.0 | 14 | 2.2113 | 0.1042 |
| 2.2013 | 2.0 | 28 | 2.2078 | 0.0625 |
| 2.197 | 3.0 | 42 | 2.1981 | 0.0625 |
| 2.1917 | 4.0 | 56 | 2.1760 | 0.3125 |
| 2.1759 | 5.0 | 70 | 2.1534 | 0.3333 |
| 2.1713 | 6.0 | 84 | 2.1305 | 0.3333 |
| 2.1431 | 7.0 | 98 | 2.1044 | 0.3333 |
| 2.1366 | 8.0 | 112 | 2.0838 | 0.3333 |
| 2.1482 | 9.0 | 126 | 2.0649 | 0.3333 |
| 2.1074 | 10.0 | 140 | 2.0532 | 0.3333 |
| 2.1066 | 11.0 | 154 | 2.0506 | 0.3333 |
| 2.089 | 12.0 | 168 | 2.0624 | 0.3333 |
| 2.0625 | 13.0 | 182 | 2.0580 | 0.3333 |
| 2.1106 | 14.0 | 196 | 2.0419 | 0.3333 |
| 2.0714 | 15.0 | 210 | 2.0350 | 0.3333 |
| 2.0256 | 16.0 | 224 | 2.0333 | 0.3333 |
| 2.1226 | 17.0 | 238 | 2.0286 | 0.3333 |
| 2.0451 | 18.0 | 252 | 2.0195 | 0.3333 |
| 2.0822 | 19.0 | 266 | 1.9968 | 0.3333 |
| 2.0991 | 20.0 | 280 | 1.9883 | 0.3333 |
| 2.0537 | 21.0 | 294 | 1.9767 | 0.3333 |
| 1.973 | 22.0 | 308 | 1.9524 | 0.3333 |
| 2.0429 | 23.0 | 322 | 1.9432 | 0.3333 |
| 2.0091 | 24.0 | 336 | 1.9402 | 0.3333 |
| 2.0309 | 25.0 | 350 | 1.9295 | 0.3333 |
| 2.0261 | 26.0 | 364 | 1.9167 | 0.3333 |
| 2.0081 | 27.0 | 378 | 1.9083 | 0.3333 |
| 2.023 | 28.0 | 392 | 1.9013 | 0.3333 |
| 2.0 | 29.0 | 406 | 1.8623 | 0.375 |
| 1.936 | 30.0 | 420 | 1.8483 | 0.3958 |
| 1.9809 | 31.0 | 434 | 1.8344 | 0.3958 |
| 1.9645 | 32.0 | 448 | 1.8428 | 0.4167 |
| 1.9788 | 33.0 | 462 | 1.8372 | 0.3958 |
| 1.9484 | 34.0 | 476 | 1.8246 | 0.3958 |
| 1.9553 | 35.0 | 490 | 1.7941 | 0.4167 |
| 1.9321 | 36.0 | 504 | 1.7824 | 0.4167 |
| 1.9759 | 37.0 | 518 | 1.7884 | 0.3958 |
| 1.9424 | 38.0 | 532 | 1.7875 | 0.3958 |
| 1.9592 | 39.0 | 546 | 1.7901 | 0.3958 |
| 1.9425 | 40.0 | 560 | 1.7812 | 0.3958 |
| 1.8899 | 41.0 | 574 | 1.7736 | 0.3958 |
| 1.9361 | 42.0 | 588 | 1.7711 | 0.4167 |
| 1.9023 | 43.0 | 602 | 1.7711 | 0.4167 |
| 1.9203 | 44.0 | 616 | 1.7688 | 0.4167 |
| 1.8921 | 45.0 | 630 | 1.7682 | 0.4167 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ghadeermobasher/BC4CHEMD-Original-128-PubMedBERT-Trial-latest-general
|
ghadeermobasher
| 2022-10-11T19:39:04Z | 112 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-11T18:22:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: BC4CHEMD-Original-128-PubMedBERT-Trial-latest-general
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BC4CHEMD-Original-128-PubMedBERT-Trial-latest-general
This model is a fine-tuned version of [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
- Precision: 0.9678
- Recall: 0.9892
- F1: 0.9784
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.10.3
|
shuojiang/PPO-LunarLander-v2-Tuned
|
shuojiang
| 2022-10-11T19:00:34Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-11T19:00:13Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 272.92 +/- 19.82
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
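A minimal sketch of what that usage code could look like; the checkpoint filename below is an assumption, so check the repository's file list for the actual name.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is an assumption; verify it against the files in this repo.
checkpoint = load_from_hub(repo_id="shuojiang/PPO-LunarLander-v2-Tuned", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```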
|
microsoft/bloom-deepspeed-inference-fp16
|
microsoft
| 2022-10-11T18:28:26Z | 13 | 12 |
transformers
|
[
"transformers",
"bloom",
"feature-extraction",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-08-17T21:01:05Z |
---
license: bigscience-bloom-rail-1.0
---
This is a copy of the original [BLOOM weights](https://huggingface.co/bigscience/bloom) that is more efficient to use with the [DeepSpeed-MII](https://github.com/microsoft/deepspeed-mii) and [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) libraries. In this repo the original tensors are split into 8 shards to target 8 GPUs, which allows the user to run the model with DeepSpeed-Inference tensor parallelism.
For specific details about the BLOOM model itself, please see the [original BLOOM model card](https://huggingface.co/bigscience/bloom).
For examples on using this repo please see the following:
* https://github.com/huggingface/transformers-bloom-inference
* https://github.com/microsoft/DeepSpeed-MII
|
stevhliu/my_awesome_billsum_model
|
stevhliu
| 2022-10-11T18:23:16Z | 699 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-11T18:04:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.176
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4290
- Rouge1: 0.176
- Rouge2: 0.0773
- Rougel: 0.1454
- Rougelsum: 0.1455
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.5195 | 0.1478 | 0.0528 | 0.1197 | 0.1194 | 19.0 |
| No log | 2.0 | 124 | 2.4660 | 0.1572 | 0.06 | 0.1288 | 0.1287 | 19.0 |
| No log | 3.0 | 186 | 2.4366 | 0.1691 | 0.0719 | 0.1394 | 0.1396 | 19.0 |
| No log | 4.0 | 248 | 2.4290 | 0.176 | 0.0773 | 0.1454 | 0.1455 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.1
|
Nakul24/RoBERTa-Goemotions-6
|
Nakul24
| 2022-10-11T18:12:22Z | 112 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-10T00:09:23Z |
Enter your thoughts in chat.
The output would be probability of your current mental state.
|
nvidia/nemo-megatron-t5-3B
|
nvidia
| 2022-10-11T17:45:19Z | 25 | 9 |
nemo
|
[
"nemo",
"pytorch",
"seq2seq",
"masked language modeling",
"en",
"dataset:the_pile",
"arxiv:1910.10683",
"arxiv:1909.08053",
"arxiv:2101.00027",
"license:cc-by-4.0",
"region:us"
] | null | 2022-09-20T20:57:01Z |
---
language:
- en
library_name: nemo
datasets:
- the_pile
tags:
- pytorch
- seq2seq
- masked language modeling
license: cc-by-4.0
---
# NeMo Megatron-T5 3B
<style>
img {
display: inline;
}
</style>
|[](#model-architecture)|[](#model-architecture)|[](#datasets)
## Model Description
NeMo Megatron-T5 3B is a transformer-based masked language model. [T5](https://arxiv.org/abs/1910.10683) [1] is a class of encoder-decoder models trained with a span-based masked language modeling objective. We follow the [T5v1.1](https://huggingface.co/docs/transformers/model_doc/t5v1.1) approach of pre-training using only the masked language modeling objective. It has Tensor Parallelism (TP) of 2, Pipeline Parallelism (PP) of 1 and should fit on a single NVIDIA GPU for inference and 2 A100 80G GPUs for finetuning.
This model was trained with [NeMo Megatron](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html).
## Getting started
### Step 1: Install NeMo and dependencies
You will need to install NVIDIA Apex and NeMo.
```
git clone https://github.com/ericharper/apex.git
cd apex
git checkout nm_v1.11.0
pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
```
```
pip install nemo_toolkit['nlp']==1.11.0
```
Alternatively, you can use NeMo Megatron training docker container with all dependencies pre-installed - [https://developer.nvidia.com/nemo-megatron-open-beta?nvid=nv-int-tblg-249896](https://developer.nvidia.com/nemo-megatron-open-beta)
### Step 2: Run inference
**Note.** The model has been trained with Tensor Parallelism (TP) of 2 and Pipeline Parallelism (PP) of 1, but it should be possible to run inference with tensor parallel size 1 on most NVIDIA GPUs
```
git clone https://github.com/NVIDIA/NeMo.git
cd NeMo/examples/nlp/language_modeling
git checkout v1.11.0
python megatron_t5_eval.py \
--model_file /raid/Data/NMT/Models/t5_3b/nemo_megatron_t5_3b_bf16_tp2.nemo \
--prompt '<mask> was the first person to set foot on the moon. When he did, he uttered the phrase <mask> for man, one <mask> for mankind which is still a popular quote today.' \
--tensor_model_parallel_size 2
```
The script will automatically replace all \<mask\> tokens with the appropriate sentinel tokens used while pre-training and attempt to fill them in autoregressively with greedy decoding.
*Expected Response*:
```
{
'prompt': '<mask> was the first person to set foot on the moon. When he did, he uttered the phrase <mask> for man, one <mask> for mankind which is still a popular quote today.',
'completion':
{
'text': '[CLS] <extra_id_0> Neil Armstrong <extra_id_1> one small step <extra_id_2> giant leap',
'tokens': [(101, '[CLS]', -2.9802276912960224e-06), (28996, '<extra_id_0>', -0.1492447555065155), (6003, 'Neil', -0.0015669699059799314), (8800, 'Armstrong', -0.013404252007603645), (28997, '<extra_id_1>', -0.9019092917442322), (1141, 'one', -0.7962003350257874), (1353, 'small', -0.006306509021669626), (2585, 'step', -1.9073468138230965e-06), (28998, '<extra_id_2>', -0.0026884861290454865), (4994, 'giant', -0.1679367572069168), (13660, 'leap', -5.960462772236497e-07)]
},
'masked_input': '<extra_id_0> was the first person to set foot on the moon . When he did , he uttered the phrase <extra_id_1> for man , one <extra_id_2> for mankind which is still a popular quote today .'
}
```
- prompt: The provided raw prompt as input
- completion:
- text: The final generated text from the model along with special/sentinel tokens besides \</s\>
- tokens: Each individual subword that is generated along with its log-probability.
- masked_input: The original raw prompt with <mask> replaced with appropriate sentinel tokens.
## Training Data
The model was trained on ["The Pile" dataset prepared by Eleuther.AI](https://pile.eleuther.ai/). [4]
## Evaluation results
*Fine-tuned Performance* on downstream *validation* sets for different tasks
| MNLI-M | MNLI-MM | SST-2 | STS-B (Spearman) |
| -------| --------| ------| -----------------|
| 90.62 | 90.61 | 97.2 | 91.5 |
## Limitations
The model was trained on the data originally crawled from the Internet. This data contains toxic language and societal biases. Therefore, the model may amplify those biases and return toxic responses especially when prompted with toxic prompts.
## References
[1] [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683)
[2] [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/pdf/1909.08053.pdf)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[4] [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
## License
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
BigSalmon/FormalInformalConcise-FIM-NeoX-1.3B
|
BigSalmon
| 2022-10-11T17:30:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-11T15:58:04Z |
data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
Trained on this model: https://huggingface.co/CarperAI/FIM-NeoX-1.3B, which is geared toward filling in the blank. Check out their model and give them a like!
```
from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast
tokenizer = GPTNeoXTokenizerFast.from_pretrained("CarperAI/FIM-NeoX-1.3B")
model = GPTNeoXForCausalLM.from_pretrained("BigSalmon/FormalInformalConcise-FIM-NeoX-1.3B")
```
To load the model, you may need to install `transformers` from source:
```
pip install git+https://github.com/huggingface/transformers
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/GPT2Mask
```
```
prompt = """<|SUF|> into relaxation <|PRE|> music before bedtime <|MID|>"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
                         max_length=10 + input_ids.shape[1],  # budget ~10 new tokens beyond the prompt
                         temperature=1.0,
                         top_k=50,
                         top_p=0.95,
                         do_sample=True,
                         num_return_sequences=5,
                         early_stopping=True)
for i in range(5):
    print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
import torch

prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
# Assumption: run on GPU when available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput.to(device)
logits, past_key_values = model(myinput, past_key_values=past_key_values, return_dict=False)
logits = logits[0, -1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
# Inspect the 250 most likely next tokens and their probabilities
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
print(best_words)
```
How To Make Prompts:
Infill Phrase Masking In-Fill
```
<|SUF|> into relaxation <|PRE|> music before bedtime <|MID|>
```
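For convenience, a prompt of this shape can be assembled with a small helper (a sketch; the `<|SUF|>` / `<|PRE|>` / `<|MID|>` marker order is taken from the example above):
```python
def fim_prompt(prefix: str, suffix: str) -> str:
    """Build a fill-in-the-middle prompt: the model generates the span after <|MID|>."""
    return f"<|SUF|> {suffix} <|PRE|> {prefix} <|MID|>"

print(fim_prompt(prefix="music before bedtime", suffix="into relaxation"))
# <|SUF|> into relaxation <|PRE|> music before bedtime <|MID|>
```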
Informal To Formal
```
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government whereby governmental powers are separated into different branches, each with its own set of powers, to prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {D} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
|
araffin/tqc-donkey-warren-track-v0
|
araffin
| 2022-10-11T15:28:18Z | 11 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"donkey-warren-track-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-19T11:37:04Z |
---
library_name: stable-baselines3
tags:
- donkey-warren-track-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 175.85 +/- 2.78
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: donkey-warren-track-v0
type: donkey-warren-track-v0
---
# **TQC** Agent playing **donkey-warren-track-v0**
This is a trained model of a **TQC** agent playing **donkey-warren-track-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/>
Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/>
RL Zoo branch: `feat/gym-donkeycar`
**Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/match_ae-32_monaco_warren.pkl
```
export AE_PATH=/path/to/match_ae-32_monaco_warren.pkl
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env donkey-warren-track-v0 -orga araffin -f logs/
python enjoy.py --algo tqc --env donkey-warren-track-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env donkey-warren-track-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env donkey-warren-track-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 200000),
('callback',
[{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}},
'rl_zoo3.callbacks.LapTimeCallback']),
('ent_coef', 'auto'),
('env_wrapper',
[{'gym.wrappers.time_limit.TimeLimit': {'max_episode_steps': 10000}},
'ae.wrapper.AutoencoderWrapper',
{'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]),
('gamma', 0.99),
('gradient_steps', 256),
('learning_rate', 0.00073),
('learning_starts', 500),
('n_timesteps', 2000000.0),
('normalize', "{'norm_obs': True, 'norm_reward': False}"),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, '
'use_expln=True)'),
('sde_sample_freq', 16),
('tau', 0.02),
('train_freq', 200),
('use_sde', True),
('use_sde_at_warmup', True),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
# Environment Arguments
```python
{'conf': {'cam_resolution': (120, 160, 3),
'car_config': {'body_rgb': (226, 112, 18),
'body_style': 'donkey',
'car_name': 'Toni',
'font_size': 40},
'frame_skip': 1,
'host': 'localhost',
'level': 'warren',
'log_level': 20,
'max_cte': 8,
'port': 9091,
'start_delay': 5.0},
'min_throttle': -0.2,
'steer': 0.8}
```
|
araffin/tqc-RocketLander-v0
|
araffin
| 2022-10-11T15:28:13Z | 22 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"RocketLander-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-06T20:48:41Z |
---
library_name: stable-baselines3
tags:
- RocketLander-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -0.00 +/- 0.16
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: RocketLander-v0
type: RocketLander-v0
---
# **TQC** Agent playing **RocketLander-v0**
This is a trained model of a **TQC** agent playing **RocketLander-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Gym env: https://github.com/sdsubhajitdas/Rocket_Lander_Gym
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env RocketLander-v0 -orga araffin -f logs/
python enjoy.py --algo tqc --env RocketLander-v0 -f logs/
```
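The checkpoint can also be loaded directly in Python, similar to the PPO LunarLander example elsewhere in this collection (a sketch: the zip filename is assumed from the RL Zoo naming convention, and the custom `RocketLander-v0` gym environment must still be installed to actually run the agent):
```python
from huggingface_sb3 import load_from_hub
from sb3_contrib import TQC  # TQC is provided by sb3-contrib

# Download the checkpoint from the Hub (filename assumed from the RL Zoo convention)
checkpoint = load_from_hub("araffin/tqc-RocketLander-v0", "tqc-RocketLander-v0.zip")

# Load the trained agent; interacting with it requires the RocketLander-v0 env linked above
model = TQC.load(checkpoint)
print(model.policy)
```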
## Training (with the RL Zoo)
```
python train.py --algo tqc --env RocketLander-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env RocketLander-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('env_wrapper',
[{'rl_zoo3.wrappers.FrameSkip': {'skip': 4}},
{'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]),
('n_timesteps', 3000000.0),
('policy', 'MlpPolicy'),
('normalize', False)])
```
|
araffin/tqc-donkey-avc-sparkfun-v0
|
araffin
| 2022-10-11T15:28:04Z | 18 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"donkey-avc-sparkfun-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-03T20:43:54Z |
---
library_name: stable-baselines3
tags:
- donkey-avc-sparkfun-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: 552.57 +/- 285.35
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: donkey-avc-sparkfun-v0
type: donkey-avc-sparkfun-v0
---
# **TQC** Agent playing **donkey-avc-sparkfun-v0**
This is a trained model of a **TQC** agent playing **donkey-avc-sparkfun-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Autoencoder: https://github.com/araffin/aae-train-donkeycar branch: `feat/race_june` <br/>
Gym env: https://github.com/araffin/gym-donkeycar-1 branch: `feat/race_june` <br/>
RL Zoo branch: `feat/gym-donkeycar`
**Pretrained autoencoder** can be downloaded here: https://github.com/araffin/aae-train-donkeycar/releases/download/live-twitch-2/ae-32_avc.pkl
```
# Export path to autoencoder
export AE_PATH=/path/to/ae-32_avc.pkl
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env donkey-avc-sparkfun-v0 -orga araffin -f logs/
python enjoy.py --algo tqc --env donkey-avc-sparkfun-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env donkey-avc-sparkfun-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env donkey-avc-sparkfun-v0 -f logs/ -orga araffin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 200000),
('callback',
[{'rl_zoo3.callbacks.ParallelTrainCallback': {'gradient_steps': 200}},
'rl_zoo3.callbacks.LapTimeCallback']),
('ent_coef', 'auto'),
('env_wrapper',
['ae.wrapper.AutoencoderWrapper',
{'rl_zoo3.wrappers.HistoryWrapper': {'horizon': 2}}]),
('gamma', 0.99),
('gradient_steps', 256),
('learning_rate', 0.00073),
('learning_starts', 500),
('n_timesteps', 2000000.0),
('normalize', "{'norm_obs': True, 'norm_reward': False}"),
('policy', 'MlpPolicy'),
('policy_kwargs',
'dict(log_std_init=-3, net_arch=[256, 256], n_critics=2, '
'use_expln=True)'),
('sde_sample_freq', 16),
('tau', 0.02),
('train_freq', 200),
('use_sde', True),
('use_sde_at_warmup', True),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
# Environment Arguments
```python
{'conf': {'cam_resolution': (120, 160, 3),
'car_config': {'body_rgb': (226, 112, 18),
'body_style': 'donkey',
'car_name': 'Toni',
'font_size': 40},
'frame_skip': 1,
'host': 'localhost',
'level': 'sparkfun_avc',
'log_level': 20,
'max_cte': 16,
'port': 9091,
'start_delay': 5.0},
'min_throttle': -0.2,
'steer': 0.3}
```
|
sd-concepts-library/ayush-spider-spr
|
sd-concepts-library
| 2022-10-11T15:24:27Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-11T15:24:20Z |
---
license: mit
---
### Ayush Spider SPR on Stable Diffusion
This is the `<spr-mn>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
araffin/ppo-LunarLander-v2
|
araffin
| 2022-10-11T15:23:43Z | 11 | 17 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-05-04T22:21:46Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 283.49 +/- 13.74
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy
# Download checkpoint
checkpoint = load_from_hub("araffin/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
# Load the model
model = PPO.load(checkpoint)
env = make_vec_env("LunarLander-v2", n_envs=1)
# Evaluate
print("Evaluating model")
mean_reward, std_reward = evaluate_policy(
model,
env,
n_eval_episodes=20,
deterministic=True,
)
print(f"Mean reward = {mean_reward:.2f} +/- {std_reward:.2f}")
# Start a new episode
obs = env.reset()
try:
    while True:
        action, _states = model.predict(obs, deterministic=True)
        obs, rewards, dones, info = env.step(action)
        env.render()
except KeyboardInterrupt:
    pass
```
## Training code (with SB3)
```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.callbacks import EvalCallback
# Create the environment
env_id = "LunarLander-v2"
n_envs = 16
env = make_vec_env(env_id, n_envs=n_envs)
# Create the evaluation envs
eval_envs = make_vec_env(env_id, n_envs=5)
# Adjust evaluation interval depending on the number of envs
eval_freq = int(1e5)
eval_freq = max(eval_freq // n_envs, 1)
# Create evaluation callback to save best model
# and monitor agent performance
eval_callback = EvalCallback(
eval_envs,
best_model_save_path="./logs/",
eval_freq=eval_freq,
n_eval_episodes=10,
)
# Instantiate the agent
# Hyperparameters from https://github.com/DLR-RM/rl-baselines3-zoo
model = PPO(
"MlpPolicy",
env,
n_steps=1024,
batch_size=64,
gae_lambda=0.98,
gamma=0.999,
n_epochs=4,
ent_coef=0.01,
verbose=1,
)
# Train the agent (you can stop it early with ctrl+c)
try:
    model.learn(total_timesteps=int(5e6), callback=eval_callback)
except KeyboardInterrupt:
    pass
# Load best model
model = PPO.load("logs/best_model.zip")
```
|
sb3/tqc-FetchPush-v1
|
sb3
| 2022-10-11T15:19:44Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"FetchPush-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-02T20:55:29Z |
---
library_name: stable-baselines3
tags:
- FetchPush-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -11.60 +/- 6.20
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchPush-v1
type: FetchPush-v1
---
# **TQC** Agent playing **FetchPush-v1**
This is a trained model of a **TQC** agent playing **FetchPush-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchPush-v1 -orga sb3 -f logs/
python enjoy.py --algo tqc --env FetchPush-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env FetchPush-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchPush-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('learning_rate', 0.001),
('n_timesteps', 1000000.0),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4, max_episode_length=100 )'),
('tau', 0.005),
('normalize', False)])
```
|
sb3/tqc-FetchReach-v1
|
sb3
| 2022-10-11T15:19:40Z | 5 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"FetchReach-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-02T20:54:49Z |
---
library_name: stable-baselines3
tags:
- FetchReach-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: TQC
results:
- metrics:
- type: mean_reward
value: -2.40 +/- 0.49
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FetchReach-v1
type: FetchReach-v1
---
# **TQC** Agent playing **FetchReach-v1**
This is a trained model of a **TQC** agent playing **FetchReach-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo tqc --env FetchReach-v1 -orga sb3 -f logs/
python enjoy.py --algo tqc --env FetchReach-v1 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo tqc --env FetchReach-v1 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo tqc --env FetchReach-v1 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('ent_coef', 'auto'),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.95),
('learning_rate', 0.001),
('learning_starts', 1000),
('n_timesteps', 20000.0),
('normalize', True),
('policy', 'MultiInputPolicy'),
('policy_kwargs', 'dict(net_arch=[64, 64], n_critics=1)'),
('replay_buffer_class', 'HerReplayBuffer'),
('replay_buffer_kwargs',
"dict( online_sampling=True, goal_selection_strategy='future', "
'n_sampled_goal=4 )'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
|
sb3/ddpg-Walker2DBulletEnv-v0
|
sb3
| 2022-10-11T15:19:35Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"Walker2DBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-02T20:43:12Z |
---
library_name: stable-baselines3
tags:
- Walker2DBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: 1495.73 +/- 612.27
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Walker2DBulletEnv-v0
type: Walker2DBulletEnv-v0
---
# **DDPG** Agent playing **Walker2DBulletEnv-v0**
This is a trained model of a **DDPG** agent playing **Walker2DBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env Walker2DBulletEnv-v0 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env Walker2DBulletEnv-v0 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env Walker2DBulletEnv-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env Walker2DBulletEnv-v0 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('buffer_size', 1000000),
('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.0007),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
|
sb3/ddpg-BipedalWalker-v3
|
sb3
| 2022-10-11T15:19:30Z | 16 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-02T20:42:17Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DDPG
results:
- metrics:
- type: mean_reward
value: 153.32 +/- 164.43
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
---
# **DDPG** Agent playing **BipedalWalker-v3**
This is a trained model of a **DDPG** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ddpg --env BipedalWalker-v3 -orga sb3 -f logs/
python enjoy.py --algo ddpg --env BipedalWalker-v3 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ddpg --env BipedalWalker-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ddpg --env BipedalWalker-v3 -f logs/ -orga sb3
```
## Hyperparameters
```python
OrderedDict([('buffer_size', 200000),
('gamma', 0.98),
('gradient_steps', -1),
('learning_rate', 0.001),
('learning_starts', 10000),
('n_timesteps', 1000000.0),
('noise_std', 0.1),
('noise_type', 'normal'),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[400, 300])'),
('train_freq', [1, 'episode']),
('normalize', False)])
```
|