pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1 to 900k) | metadata (stringlengths 2 to 438k) | id (stringlengths 5 to 122) | last_modified (null) | tags (listlengths 1 to 1.84k) | sha (null) | created_at (stringlengths 25 to 25) | arxiv (listlengths 0 to 201) | languages (listlengths 0 to 1.83k) | tags_str (stringlengths 17 to 9.34k) | text_str (stringlengths 0 to 389k) | text_lists (listlengths 0 to 722) | processed_texts (listlengths 1 to 723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xls-r-300m-ia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1452
- Wer: 0.1253
## Training Procedure
Training was conducted in Google Colab; the training notebook is provided in the repository.
## Training and evaluation data
The language model was built from the processed sentences in the train and validation splits of the dataset (Common Voice 8.0 for Interlingua).
Evaluation was conducted in a notebook; see "notebook_evaluation_wav2vec2_ia.ipynb" in the repository.
Test results without LM:
- WER = 20.1776 %
- CER = 4.7205 %
Test results with LM:
- WER = 8.6074 %
- CER = 2.4147 %
Evaluation using eval.py:
```bash
huggingface-cli login # log in to Hugging Face to get the auth token needed to access Common Voice v8
# running with the language model
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-ia --dataset mozilla-foundation/common_voice_8_0 --config ia --split test
# running without the language model (greedy decoding)
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-ia --dataset mozilla-foundation/common_voice_8_0 --config ia --split test --greedy
```
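For a quick local check without eval.py, a minimal greedy-decoding sketch with transformers is shown below. This is not the card author's script; the audio file name is a placeholder, and resampling to 16 kHz via torchaudio is an assumption about the input data.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "ayameRushia/wav2vec2-large-xls-r-300m-ia"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a clip and resample it to the 16 kHz rate the model expects.
speech, sampling_rate = torchaudio.load("sample.wav")  # hypothetical audio file
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy (argmax) decoding, i.e. without the language model.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```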
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.432 | 1.87 | 400 | 2.9636 | 1.0 |
| 2.6922 | 3.74 | 800 | 2.2111 | 0.9977 |
| 1.2581 | 5.61 | 1200 | 0.4864 | 0.4028 |
| 0.6232 | 7.48 | 1600 | 0.2807 | 0.2413 |
| 0.4479 | 9.35 | 2000 | 0.2219 | 0.1885 |
| 0.3654 | 11.21 | 2400 | 0.1886 | 0.1606 |
| 0.323 | 13.08 | 2800 | 0.1716 | 0.1444 |
| 0.2935 | 14.95 | 3200 | 0.1687 | 0.1443 |
| 0.2707 | 16.82 | 3600 | 0.1632 | 0.1382 |
| 0.2559 | 18.69 | 4000 | 0.1507 | 0.1337 |
| 0.2433 | 20.56 | 4400 | 0.1572 | 0.1358 |
| 0.2338 | 22.43 | 4800 | 0.1489 | 0.1305 |
| 0.2258 | 24.3 | 5200 | 0.1485 | 0.1278 |
| 0.2218 | 26.17 | 5600 | 0.1470 | 0.1272 |
| 0.2169 | 28.04 | 6000 | 0.1470 | 0.1270 |
| 0.2117 | 29.91 | 6400 | 0.1452 | 0.1253 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["ia"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-ia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ia"}, "metrics": [{"type": "wer", "value": 8.6074, "name": "Test WER using LM"}, {"type": "cer", "value": 2.4147, "name": "Test CER using LM"}]}]}]}
|
ayameRushia/wav2vec2-large-xls-r-300m-ia
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"ia",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ia"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #mozilla-foundation/common_voice_8_0 #ia #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-ia
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1452
* Wer: 0.1253
Training Procedure
------------------
Training is conducted in Google Colab, the training notebook provided in the repo
Training and evaluation data
----------------------------
Language Model Created from texts from processed sentence in train + validation split of dataset (common voice 8.0 for Interlingua)
Evaluation is conducted in Notebook, you can see within the repo "notebook\_evaluation\_wav2vec2\_ia.ipynb"
Test WER without LM
wer = 20.1776 %
cer = 4.7205 %
Test WER using
wer = 8.6074 %
cer = 2.4147 %
evaluation using URL
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 400
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #mozilla-foundation/common_voice_8_0 #ia #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 400\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# XLS-R-300M - Indonesia
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ID dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3975
- Wer: 0.2633
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
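For orientation, these settings map roughly onto the transformers `TrainingArguments` shown below. This is a sketch, not the author's actual training script; the output directory name is a placeholder.
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above (not the original training script).
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-id",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 32 * 2 = 64
    warmup_steps=500,
    num_train_epochs=30.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,  # "Native AMP" mixed-precision training
)
```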
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 0.78 | 100 | 4.5645 | 1.0 |
| No log | 1.55 | 200 | 2.9016 | 1.0 |
| No log | 2.33 | 300 | 2.2666 | 1.0982 |
| No log | 3.1 | 400 | 0.6079 | 0.6376 |
| 3.2188 | 3.88 | 500 | 0.4985 | 0.5008 |
| 3.2188 | 4.65 | 600 | 0.4477 | 0.4469 |
| 3.2188 | 5.43 | 700 | 0.3953 | 0.3915 |
| 3.2188 | 6.2 | 800 | 0.4319 | 0.3921 |
| 3.2188 | 6.98 | 900 | 0.4171 | 0.3698 |
| 0.2193 | 7.75 | 1000 | 0.3957 | 0.3600 |
| 0.2193 | 8.53 | 1100 | 0.3730 | 0.3493 |
| 0.2193 | 9.3 | 1200 | 0.3780 | 0.3348 |
| 0.2193 | 10.08 | 1300 | 0.4133 | 0.3568 |
| 0.2193 | 10.85 | 1400 | 0.3984 | 0.3193 |
| 0.1129 | 11.63 | 1500 | 0.3845 | 0.3174 |
| 0.1129 | 12.4 | 1600 | 0.3882 | 0.3162 |
| 0.1129 | 13.18 | 1700 | 0.3982 | 0.3008 |
| 0.1129 | 13.95 | 1800 | 0.3902 | 0.3198 |
| 0.1129 | 14.73 | 1900 | 0.4082 | 0.3237 |
| 0.0765 | 15.5 | 2000 | 0.3732 | 0.3126 |
| 0.0765 | 16.28 | 2100 | 0.3893 | 0.3001 |
| 0.0765 | 17.05 | 2200 | 0.4168 | 0.3083 |
| 0.0765 | 17.83 | 2300 | 0.4193 | 0.3044 |
| 0.0765 | 18.6 | 2400 | 0.4006 | 0.3013 |
| 0.0588 | 19.38 | 2500 | 0.3836 | 0.2892 |
| 0.0588 | 20.16 | 2600 | 0.3761 | 0.2903 |
| 0.0588 | 20.93 | 2700 | 0.3895 | 0.2930 |
| 0.0588 | 21.71 | 2800 | 0.3885 | 0.2791 |
| 0.0588 | 22.48 | 2900 | 0.3902 | 0.2891 |
| 0.0448 | 23.26 | 3000 | 0.4200 | 0.2849 |
| 0.0448 | 24.03 | 3100 | 0.4013 | 0.2799 |
| 0.0448 | 24.81 | 3200 | 0.4039 | 0.2731 |
| 0.0448 | 25.58 | 3300 | 0.3970 | 0.2647 |
| 0.0448 | 26.36 | 3400 | 0.4081 | 0.2690 |
| 0.0351 | 27.13 | 3500 | 0.4090 | 0.2674 |
| 0.0351 | 27.91 | 3600 | 0.3953 | 0.2663 |
| 0.0351 | 28.68 | 3700 | 0.4044 | 0.2650 |
| 0.0351 | 29.46 | 3800 | 0.3969 | 0.2646 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["id"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "XLS-R-300M - Indonesia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "sv-SE"}, "metrics": [{"type": "wer", "value": 38.098, "name": "Test WER"}, {"type": "cer", "value": 14.261, "name": "Test CER"}]}]}]}
|
ayameRushia/wav2vec2-large-xls-r-300m-id
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - ID dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3975
* Wer: 0.2633
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.17.1.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #mozilla-foundation/common_voice_8_0 #generated_from_trainer #robust-speech-event #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.17.1.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xls-r-300m-mn
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - MN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5502
- Wer: 0.4042
## Training and evaluation data
Evaluation was conducted in a notebook; see "notebook_evaluation_wav2vec2_mn.ipynb" in the repository.
Test results without LM:
- WER = 58.2171 %
- CER = 16.0670 %
Test results with LM:
- WER = 31.3919 %
- CER = 10.2565 %
How to use eval.py:
```bash
huggingface-cli login # log in to Hugging Face to get the auth token needed to access Common Voice v8
# running with the language model
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test
# running without the language model (greedy decoding)
python eval.py --model_id ayameRushia/wav2vec2-large-xls-r-300m-mn --dataset mozilla-foundation/common_voice_8_0 --config mn --split test --greedy
```
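The "with LM" numbers imply that an n-gram language model is used at decode time. Assuming the repository ships the LM files alongside the acoustic model (the card does not say so explicitly), decoding through transformers could look like the sketch below; pyctcdecode and kenlm need to be installed.
```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "ayameRushia/wav2vec2-large-xls-r-300m-mn"
# Requires the pyctcdecode/kenlm files to be present in the model repository (assumption).
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

def transcribe_with_lm(speech):
    # `speech` is assumed to be a 16 kHz mono waveform as a 1-D numpy array.
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    # Beam-search decoding against the n-gram LM operates on the raw logits.
    return processor.batch_decode(logits.numpy()).text[0]
```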
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 6.35 | 400 | 0.9380 | 0.7902 |
| 3.2674 | 12.7 | 800 | 0.5794 | 0.5309 |
| 0.7531 | 19.05 | 1200 | 0.5749 | 0.4815 |
| 0.5382 | 25.4 | 1600 | 0.5530 | 0.4447 |
| 0.4293 | 31.75 | 2000 | 0.5709 | 0.4237 |
| 0.4293 | 38.1 | 2400 | 0.5476 | 0.4059 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["mn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-large-xls-r-300m-mn", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "mn"}, "metrics": [{"type": "wer", "value": 31.3919, "name": "Test WER using LM"}, {"type": "cer", "value": 10.2565, "name": "Test CER using LM"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "mn"}, "metrics": [{"type": "wer", "value": 65.26, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "mn"}, "metrics": [{"type": "wer", "value": 63.09, "name": "Test WER"}]}]}]}
|
ayameRushia/wav2vec2-large-xls-r-300m-mn
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"mn",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mn"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #mozilla-foundation/common_voice_8_0 #mn #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - MN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5502
* Wer: 0.4042
Training and evaluation data
----------------------------
Evaluation is conducted in Notebook, you can see within the repo "notebook\_evaluation\_wav2vec2\_mn.ipynb"
Test WER without LM
wer = 58.2171 %
cer = 16.0670 %
Test WER using
wer = 31.3919 %
cer = 10.2565 %
How to use URL
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 40.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.1+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 40.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #mozilla-foundation/common_voice_8_0 #mn #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 40.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.1+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference with greedy (argmax) decoding and collect the predictions.
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER = 20.072720 %
## Training
The model was trained on the Common Voice Indonesian dataset.
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesia by Ayame Rushia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": "???", "name": "Test WER"}]}]}]}
|
ayameRushia/wav2vec2-large-xlsr-indo-base
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Indonesia using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the {language} test data of Common Voice.
Test Result:
WER = 20.072720 %
## Training
Training using common voice dataset
|
[
"# Wav2Vec2-Large-XLSR-53-Indonesia\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Indonesia using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\nTest Result: \nWER = 20.072720 %",
"## Training\nTraining using common voice dataset"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Indonesia\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Indonesia using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\nTest Result: \nWER = 20.072720 %",
"## Training\nTraining using common voice dataset"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Indonesian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "id", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Indonesian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "id", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model = Wav2Vec2ForCTC.from_pretrained("ayameRushia/wav2vec2-large-xlsr-indonesia-demo")
model.to("cuda")
chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
resampler = torchaudio.transforms.Resample(sampling_rate, 16_000)
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference with greedy (argmax) decoding and collect the predictions.
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER = 19.830319 %
## Training
The model was trained on the Common Voice Indonesian dataset.
|
{"language": "id", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Indonesia by Ayame Rushia", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice id", "type": "common_voice", "args": "id"}, "metrics": [{"type": "wer", "value": 19.830319, "name": "Test WER"}]}]}]}
|
ayameRushia/wav2vec2-large-xlsr-indonesia
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"id",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"id"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Indonesia
Fine-tuned facebook/wav2vec2-large-xlsr-53 in Indonesia using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the {language} test data of Common Voice.
Test Result:
WER = 19.830319 %
## Training
Training using common voice dataset
|
[
"# Wav2Vec2-Large-XLSR-53-Indonesia\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Indonesia using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\nTest Result: \nWER = 19.830319 %",
"## Training\nTraining using common voice dataset"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #id #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Indonesia\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Indonesia using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the {language} test data of Common Voice.\n\nTest Result: \nWER = 19.830319 %",
"## Training\nTraining using common voice dataset"
] |
fill-mask
|
transformers
|
# `false-positives-scancode-bert-base-uncased-L8-1`
## Intended Use
This model is intended to be used for sentence classification, which is used for results
analysis in [`scancode-results-analyzer`](https://github.com/nexB/scancode-results-analyzer).
`scancode-results-analyzer` helps detect faulty scans in [`scancode-toolkit`](https://github.com/nexB/scancode-toolkit) by using statistics and NLP modeling, among other tools,
to make ScanCode better.
#### How to use
Refer to the [quickstart](https://github.com/nexB/scancode-results-analyzer#quickstart---local-machine) section of the `scancode-results-analyzer` documentation for installation and getting started.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
In the `NLPModelsPredict` class, the function `predict_basic_false_positive` uses this classifier to
predict sentences as either valid license tags or false positives.
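If you want to query the checkpoint directly through transformers rather than through `scancode-results-analyzer`, a minimal sketch is shown below. It assumes the uploaded TF weights include the two-label classification head and that the label order distinguishes "false positive" from "license tag"; both are assumptions, so prefer the project's own `NLPModelsPredict` helpers for real use.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "ayansinha/false-positives-scancode-bert-base-uncased-L8-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# The sentence length during fine-tuning was 8 tokens (see Training procedure below).
sentences = ["licensed under the MIT license"]  # hypothetical input
inputs = tokenizer(sentences, truncation=True, max_length=8, padding=True, return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs.numpy())  # which index means "license tag" vs. "false positive" is an assumption
```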
#### Limitations and bias
Since this model is a fine-tuned version of the [`bert-base-uncased`](https://huggingface.co/bert-base-uncased) model,
it inherits the same biases. However, because the task it is fine-tuned for is a very narrow domain
(license tags vs. false positives) that does not touch those biases, it is reasonable to assume
they have little practical effect here.
## Training and Fine-Tuning Data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
Then this `bert-base-uncased` model was fine-tuned on Scancode Rule texts, specifically
trained in the context of sentence classification, where the two classes are
- License Tags
- False Positives of License Tags
## Training procedure
For the fine-tuning procedure and training, refer to the `scancode-results-analyzer` code.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
In the `NLPModelsTrain` class, the function `prepare_input_data_false_positive` prepares the
training data.
In the `NLPModelsTrain` class, the function `train_basic_false_positive_classifier` fine-tunes
this classifier.
1. Model - [BertBaseUncased](https://huggingface.co/bert-base-uncased) (Weights 0.5 GB)
2. Sentence Length - 8
3. Labels - 2 (False Positive/License Tag)
4. After 4-6 Epochs of Fine-Tuning with learning rate 2e-5 (6 secs each on an RTX 2060)
Note: The classes aren't balanced.
## Eval results
- Accuracy on the training data (90%) : 0.99 (+- 0.005)
- Accuracy on the validation data (10%) : 0.96 (+- 0.015)
The errors have lower confidence scores, so applying a threshold on the confidence score
makes it almost a perfect classifier, as the classification task is comparatively easy.
Results are stable, in the sense that the fine-tuning accuracy is easily reproduced every
time. More training epochs make the model overfit, i.e. the training loss
decreases while the validation loss increases, although the accuracies remain stable
even under overfitting.
|
{"language": "en", "license": "apache-2.0", "tags": ["license", "sentence-classification", "scancode", "license-compliance"], "datasets": ["bookcorpus", "wikipedia", "scancode-rules"], "version": 1.0}
|
ayansinha/false-positives-scancode-bert-base-uncased-L8-1
| null |
[
"transformers",
"tf",
"bert",
"fill-mask",
"license",
"sentence-classification",
"scancode",
"license-compliance",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:scancode-rules",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #tf #bert #fill-mask #license #sentence-classification #scancode #license-compliance #en #dataset-bookcorpus #dataset-wikipedia #dataset-scancode-rules #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# 'false-positives-scancode-bert-base-uncased-L8-1'
## Intended Use
This model is intended to be used for Sentence Classification which is used for results
analysis in 'scancode-results-analyzer'.
'scancode-results-analyzer' helps detect faulty scans in 'scancode-toolkit' by using statistics and nlp modeling, among other tools,
to make Scancode better.
#### How to use
Refer quickstart section in 'scancode-results-analyzer' documentation, for installing and getting started.
- Link to Code
Then in 'NLPModelsPredict' class, function 'predict_basic_false_positive' uses this classifier to
predict sentances as either valid license tags or false positives.
#### Limitations and bias
As this model is a fine-tuned version of the 'bert-base-uncased' model,
it has the same biases, but as the task it is fine-tuned to is a very specific field
(license tags vs false positives) without those intended biases, it's safe to assume
those don't apply at all here.
## Training and Fine-Tuning Data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
Then this 'bert-base-uncased' model was fine-tuned on Scancode Rule texts, specifically
trained in the context of sentence classification, where the two classes are
- License Tags
- False Positives of License Tags
## Training procedure
For fine-tuning procedure and training, refer 'scancode-results-analyzer' code.
- Link to Code
In 'NLPModelsTrain' class, function 'prepare_input_data_false_positive' prepares the
training data.
In 'NLPModelsTrain' class, function 'train_basic_false_positive_classifier' fine-tunes
this classifier.
1. Model - BertBaseUncased (Weights 0.5 GB)
2. Sentence Length - 8
3. Labels - 2 (False Positive/License Tag)
4. After 4-6 Epochs of Fine-Tuning with learning rate 2e-5 (6 secs each on an RTX 2060)
Note: The classes aren't balanced.
## Eval results
- Accuracy on the training data (90%) : 0.99 (+- 0.005)
- Accuracy on the validation data (10%) : 0.96 (+- 0.015)
The errors have lower confidence scores using thresholds on confidence scores almost
makes it a perfect classifier as the classification task is comparatively easier.
Results are stable, in the sence fine-tuning accuracy is very easily achieved every
time, though more learning epochs makes the data overfit, i.e. the training loss
decreases, but the validation loss increases, even though accuracies are very stable
even on overfitting.
|
[
"# 'false-positives-scancode-bert-base-uncased-L8-1'",
"## Intended Use\n\nThis model is intended to be used for Sentence Classification which is used for results\nanalysis in 'scancode-results-analyzer'.\n\n'scancode-results-analyzer' helps detect faulty scans in 'scancode-toolkit' by using statistics and nlp modeling, among other tools,\nto make Scancode better.",
"#### How to use\n\nRefer quickstart section in 'scancode-results-analyzer' documentation, for installing and getting started.\n\n- Link to Code\n\nThen in 'NLPModelsPredict' class, function 'predict_basic_false_positive' uses this classifier to\npredict sentances as either valid license tags or false positives.",
"#### Limitations and bias\n\nAs this model is a fine-tuned version of the 'bert-base-uncased' model,\nit has the same biases, but as the task it is fine-tuned to is a very specific field\n(license tags vs false positives) without those intended biases, it's safe to assume\nthose don't apply at all here.",
"## Training and Fine-Tuning Data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nThen this 'bert-base-uncased' model was fine-tuned on Scancode Rule texts, specifically\ntrained in the context of sentence classification, where the two classes are\n\n\t- License Tags \n\t- False Positives of License Tags",
"## Training procedure\n\nFor fine-tuning procedure and training, refer 'scancode-results-analyzer' code.\n\n- Link to Code\n\nIn 'NLPModelsTrain' class, function 'prepare_input_data_false_positive' prepares the\ntraining data.\n\nIn 'NLPModelsTrain' class, function 'train_basic_false_positive_classifier' fine-tunes\nthis classifier.\n\n1. Model - BertBaseUncased (Weights 0.5 GB)\n2. Sentence Length - 8\n3. Labels - 2 (False Positive/License Tag)\n4. After 4-6 Epochs of Fine-Tuning with learning rate 2e-5 (6 secs each on an RTX 2060)\n\nNote: The classes aren't balanced.",
"## Eval results\n\n- Accuracy on the training data (90%) : 0.99 (+- 0.005) \n- Accuracy on the validation data (10%) : 0.96 (+- 0.015)\n\nThe errors have lower confidence scores using thresholds on confidence scores almost\nmakes it a perfect classifier as the classification task is comparatively easier.\n\nResults are stable, in the sence fine-tuning accuracy is very easily achieved every\ntime, though more learning epochs makes the data overfit, i.e. the training loss \ndecreases, but the validation loss increases, even though accuracies are very stable\neven on overfitting."
] |
[
"TAGS\n#transformers #tf #bert #fill-mask #license #sentence-classification #scancode #license-compliance #en #dataset-bookcorpus #dataset-wikipedia #dataset-scancode-rules #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# 'false-positives-scancode-bert-base-uncased-L8-1'",
"## Intended Use\n\nThis model is intended to be used for Sentence Classification which is used for results\nanalysis in 'scancode-results-analyzer'.\n\n'scancode-results-analyzer' helps detect faulty scans in 'scancode-toolkit' by using statistics and nlp modeling, among other tools,\nto make Scancode better.",
"#### How to use\n\nRefer quickstart section in 'scancode-results-analyzer' documentation, for installing and getting started.\n\n- Link to Code\n\nThen in 'NLPModelsPredict' class, function 'predict_basic_false_positive' uses this classifier to\npredict sentances as either valid license tags or false positives.",
"#### Limitations and bias\n\nAs this model is a fine-tuned version of the 'bert-base-uncased' model,\nit has the same biases, but as the task it is fine-tuned to is a very specific field\n(license tags vs false positives) without those intended biases, it's safe to assume\nthose don't apply at all here.",
"## Training and Fine-Tuning Data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nThen this 'bert-base-uncased' model was fine-tuned on Scancode Rule texts, specifically\ntrained in the context of sentence classification, where the two classes are\n\n\t- License Tags \n\t- False Positives of License Tags",
"## Training procedure\n\nFor fine-tuning procedure and training, refer 'scancode-results-analyzer' code.\n\n- Link to Code\n\nIn 'NLPModelsTrain' class, function 'prepare_input_data_false_positive' prepares the\ntraining data.\n\nIn 'NLPModelsTrain' class, function 'train_basic_false_positive_classifier' fine-tunes\nthis classifier.\n\n1. Model - BertBaseUncased (Weights 0.5 GB)\n2. Sentence Length - 8\n3. Labels - 2 (False Positive/License Tag)\n4. After 4-6 Epochs of Fine-Tuning with learning rate 2e-5 (6 secs each on an RTX 2060)\n\nNote: The classes aren't balanced.",
"## Eval results\n\n- Accuracy on the training data (90%) : 0.99 (+- 0.005) \n- Accuracy on the validation data (10%) : 0.96 (+- 0.015)\n\nThe errors have lower confidence scores using thresholds on confidence scores almost\nmakes it a perfect classifier as the classification task is comparatively easier.\n\nResults are stable, in the sence fine-tuning accuracy is very easily achieved every\ntime, though more learning epochs makes the data overfit, i.e. the training loss \ndecreases, but the validation loss increases, even though accuracies are very stable\neven on overfitting."
] |
fill-mask
|
transformers
|
# `lic-class-scancode-bert-base-cased-L32-1`
## Intended Use
This model is intended to be used for sentence classification, which is used for results
analysis in [`scancode-results-analyzer`](https://github.com/nexB/scancode-results-analyzer).
`scancode-results-analyzer` helps detect faulty scans in [`scancode-toolkit`](https://github.com/nexB/scancode-toolkit) by using statistics and NLP modeling, among other tools,
to make ScanCode better.
## How to Use
Refer to the [quickstart](https://github.com/nexB/scancode-results-analyzer#quickstart---local-machine) section of the `scancode-results-analyzer` documentation for installation and getting started.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
In the `NLPModelsPredict` class, the function `predict_basic_lic_class` uses this classifier to
predict sentences as one of the four license rule classes (text, notice, tag, or reference).
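As with the companion false-positives model, the checkpoint can also be queried directly through transformers. The sketch below assumes the uploaded TF weights include the four-label head and that the label order matches the class list in the training section; the card guarantees neither, so treat it as illustrative only.
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_name = "ayansinha/lic-class-scancode-bert-base-cased-L32-1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name)

# The sentence length during fine-tuning was 32 tokens (see Training Procedure below).
texts = ["Licensed under the Apache License, Version 2.0"]  # hypothetical input
inputs = tokenizer(texts, truncation=True, max_length=32, padding=True, return_tensors="tf")
pred = tf.argmax(model(**inputs).logits, axis=-1)
print(pred.numpy())  # mapping of indices to text/notice/tag/reference is an assumption
```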
## Limitations and Bias
Since this model is a fine-tuned version of the [`bert-base-cased`](https://huggingface.co/bert-base-cased) model,
it inherits the same biases. However, because the task it is fine-tuned for is a very narrow one
(license text/notice/tag/reference) that does not touch those biases, it is reasonable to assume
they have little practical effect here.
## Training and Fine-Tuning Data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
Then this `bert-base-cased` model was fine-tuned on Scancode Rule texts, specifically
trained in the context of sentence classification, where the four classes are
- License Text
- License Notice
- License Tag
- License Reference
## Training Procedure
For the fine-tuning procedure and training, refer to the `scancode-results-analyzer` code.
- [Link to Code](https://github.com/nexB/scancode-results-analyzer/blob/master/src/results_analyze/nlp_models.py)
In the `NLPModelsTrain` class, the function `prepare_input_data_false_positive` prepares the
training data.
In the `NLPModelsTrain` class, the function `train_basic_false_positive_classifier` fine-tunes
this classifier.
1. Model - [BertBaseCased](https://huggingface.co/bert-base-cased) (Weights 0.5 GB)
2. Sentence Length - 32
3. Labels - 4 (License Text/Notice/Tag/Reference)
4. After 4 Epochs of Fine-Tuning with learning rate 2e-5 (60 secs each on an RTX 2060)
Note: The classes aren't balanced.
## Eval Results
- Accuracy on the training data (90%) : 0.98 (+- 0.01)
- Accuracy on the validation data (10%) : 0.84 (+- 0.01)
## Further Work
1. Applying Splitting/Aggregation Strategies
2. Data Augmentation according to Validation Errors
3. Bigger/Better Suited Models
|
{"language": "en", "license": "apache-2.0", "tags": ["license", "sentence-classification", "scancode", "license-compliance"], "datasets": ["bookcorpus", "wikipedia", "scancode-rules"], "version": 1.0}
|
ayansinha/lic-class-scancode-bert-base-cased-L32-1
| null |
[
"transformers",
"tf",
"bert",
"fill-mask",
"license",
"sentence-classification",
"scancode",
"license-compliance",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"dataset:scancode-rules",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #tf #bert #fill-mask #license #sentence-classification #scancode #license-compliance #en #dataset-bookcorpus #dataset-wikipedia #dataset-scancode-rules #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# 'lic-class-scancode-bert-base-cased-L32-1'
## Intended Use
This model is intended to be used for Sentence Classification which is used for results
analysis in 'scancode-results-analyzer'.
'scancode-results-analyzer' helps detect faulty scans in 'scancode-toolkit' by using statistics and nlp modeling, among other tools,
to make Scancode better.
## How to Use
Refer quickstart section in 'scancode-results-analyzer' documentation, for installing and getting started.
- Link to Code
Then in 'NLPModelsPredict' class, function 'predict_basic_lic_class' uses this classifier to
predict sentances as either valid license tags or false positives.
## Limitations and Bias
As this model is a fine-tuned version of the 'bert-base-cased' model,
it has the same biases, but as the task it is fine-tuned to is a very specific task
(license text/notice/tag/referance) without those intended biases, it's safe to assume
those don't apply at all here.
## Training and Fine-Tuning Data
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).
Then this 'bert-base-cased' model was fine-tuned on Scancode Rule texts, specifically
trained in the context of sentence classification, where the four classes are
- License Text
- License Notice
- License Tag
- License Referance
## Training Procedure
For fine-tuning procedure and training, refer 'scancode-results-analyzer' code.
- Link to Code
In 'NLPModelsTrain' class, function 'prepare_input_data_false_positive' prepares the
training data.
In 'NLPModelsTrain' class, function 'train_basic_false_positive_classifier' fine-tunes
this classifier.
1. Model - BertBaseCased (Weights 0.5 GB)
2. Sentence Length - 32
3. Labels - 4 (License Text/Notice/Tag/Referance)
4. After 4 Epochs of Fine-Tuning with learning rate 2e-5 (60 secs each on an RTX 2060)
Note: The classes aren't balanced.
## Eval Results
- Accuracy on the training data (90%) : 0.98 (+- 0.01)
- Accuracy on the validation data (10%) : 0.84 (+- 0.01)
## Further Work
1. Apllying Splitting/Aggregation Strategies
2. Data Augmentation according to Vaalidation Errors
3. Bigger/Better Suited Models
|
[
"# 'lic-class-scancode-bert-base-cased-L32-1'",
"## Intended Use\n\nThis model is intended to be used for Sentence Classification which is used for results\nanalysis in 'scancode-results-analyzer'.\n\n'scancode-results-analyzer' helps detect faulty scans in 'scancode-toolkit' by using statistics and nlp modeling, among other tools,\nto make Scancode better.",
"## How to Use\n\nRefer quickstart section in 'scancode-results-analyzer' documentation, for installing and getting started.\n\n- Link to Code\n\nThen in 'NLPModelsPredict' class, function 'predict_basic_lic_class' uses this classifier to\npredict sentances as either valid license tags or false positives.",
"## Limitations and Bias\n\nAs this model is a fine-tuned version of the 'bert-base-cased' model,\nit has the same biases, but as the task it is fine-tuned to is a very specific task\n(license text/notice/tag/referance) without those intended biases, it's safe to assume\nthose don't apply at all here.",
"## Training and Fine-Tuning Data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nThen this 'bert-base-cased' model was fine-tuned on Scancode Rule texts, specifically\ntrained in the context of sentence classification, where the four classes are\n\n\t- License Text\n\t- License Notice\n\t- License Tag\n\t- License Referance",
"## Training Procedure\n\nFor fine-tuning procedure and training, refer 'scancode-results-analyzer' code.\n\n- Link to Code\n\nIn 'NLPModelsTrain' class, function 'prepare_input_data_false_positive' prepares the\ntraining data.\n\nIn 'NLPModelsTrain' class, function 'train_basic_false_positive_classifier' fine-tunes\nthis classifier.\n\n1. Model - BertBaseCased (Weights 0.5 GB)\n2. Sentence Length - 32\n3. Labels - 4 (License Text/Notice/Tag/Referance)\n4. After 4 Epochs of Fine-Tuning with learning rate 2e-5 (60 secs each on an RTX 2060)\n\nNote: The classes aren't balanced.",
"## Eval Results\n\n- Accuracy on the training data (90%) : 0.98 (+- 0.01) \n- Accuracy on the validation data (10%) : 0.84 (+- 0.01)",
"## Further Work\n\n1. Apllying Splitting/Aggregation Strategies\n2. Data Augmentation according to Vaalidation Errors\n3. Bigger/Better Suited Models"
] |
[
"TAGS\n#transformers #tf #bert #fill-mask #license #sentence-classification #scancode #license-compliance #en #dataset-bookcorpus #dataset-wikipedia #dataset-scancode-rules #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# 'lic-class-scancode-bert-base-cased-L32-1'",
"## Intended Use\n\nThis model is intended to be used for Sentence Classification which is used for results\nanalysis in 'scancode-results-analyzer'.\n\n'scancode-results-analyzer' helps detect faulty scans in 'scancode-toolkit' by using statistics and nlp modeling, among other tools,\nto make Scancode better.",
"## How to Use\n\nRefer quickstart section in 'scancode-results-analyzer' documentation, for installing and getting started.\n\n- Link to Code\n\nThen in 'NLPModelsPredict' class, function 'predict_basic_lic_class' uses this classifier to\npredict sentances as either valid license tags or false positives.",
"## Limitations and Bias\n\nAs this model is a fine-tuned version of the 'bert-base-cased' model,\nit has the same biases, but as the task it is fine-tuned to is a very specific task\n(license text/notice/tag/referance) without those intended biases, it's safe to assume\nthose don't apply at all here.",
"## Training and Fine-Tuning Data\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books and English Wikipedia (excluding lists, tables and headers).\n\nThen this 'bert-base-cased' model was fine-tuned on Scancode Rule texts, specifically\ntrained in the context of sentence classification, where the four classes are\n\n\t- License Text\n\t- License Notice\n\t- License Tag\n\t- License Referance",
"## Training Procedure\n\nFor fine-tuning procedure and training, refer 'scancode-results-analyzer' code.\n\n- Link to Code\n\nIn 'NLPModelsTrain' class, function 'prepare_input_data_false_positive' prepares the\ntraining data.\n\nIn 'NLPModelsTrain' class, function 'train_basic_false_positive_classifier' fine-tunes\nthis classifier.\n\n1. Model - BertBaseCased (Weights 0.5 GB)\n2. Sentence Length - 32\n3. Labels - 4 (License Text/Notice/Tag/Referance)\n4. After 4 Epochs of Fine-Tuning with learning rate 2e-5 (60 secs each on an RTX 2060)\n\nNote: The classes aren't balanced.",
"## Eval Results\n\n- Accuracy on the training data (90%) : 0.98 (+- 0.01) \n- Accuracy on the validation data (10%) : 0.84 (+- 0.01)",
"## Further Work\n\n1. Apllying Splitting/Aggregation Strategies\n2. Data Augmentation according to Vaalidation Errors\n3. Bigger/Better Suited Models"
] |
text-classification
|
transformers
|
# bert-base-cased trained on TREC 6-class task
## Model description
A simple base BERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "aychang/bert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use pipeline ("text-classification" is the same pipeline as the "sentiment-analysis" alias)
from transformers import pipeline
nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier

model_name = "aychang/bert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]

classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
Preprocessing, hardware used, and hyperparameters are summarized below.
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
save_steps=3000
)
```
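For context, a rough sketch of how these arguments could be wired into a `Trainer` run on the trec dataset is shown below. The tokenization, column names, and label handling are assumptions rather than the author's original script; in particular, the coarse label column has been renamed across datasets versions.
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer

base_model = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=6)

dataset = load_dataset("trec")
# Recent datasets versions call the coarse label "coarse_label"
# (older versions used "label-coarse"); adjust if needed.
dataset = dataset.rename_column("coarse_label", "labels")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,  # the TrainingArguments defined above
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
```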
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.974,
'eval_f1': array([0.98181818, 0.94444444, 1. , 0.99236641, 0.96995708,
0.98159509]),
'eval_loss': 0.138086199760437,
'eval_precision': array([0.98540146, 0.98837209, 1. , 0.98484848, 0.94166667,
0.97560976]),
'eval_recall': array([0.97826087, 0.90425532, 1. , 1. , 1. ,
0.98765432]),
'eval_runtime': 1.6132,
'eval_samples_per_second': 309.943}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-classification"], "datasets": ["trec"], "model-index": [{"name": "aychang/bert-base-cased-trec-coarse", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "trec", "type": "trec", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.974, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUwZTU1ZGU5YTRiMzNhNmQyMjNlY2M5YjAwN2RlMmYxODI2MjFkY2Q3NWFjZDg3Zjg5ZDk1Y2I1MTUxYjFhMCIsInZlcnNpb24iOjF9.GJkxJOFhsO4UaoHpHH1136Qj_fu9UQ9o3DThtT46hvMduswkgobl9iz6ICYQ7IdYKFbh3zRTlsZzjnAlzGqdBA"}, {"type": "precision", "value": 0.9793164100816639, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTMxMjI3NWZhOGZkODJmYzkxYzdhZWIwMTBkZTg4YWZiNjcwNTVmM2RjYmQ3ZmNhZjM2MWQzYTUzNzFlMjQzOCIsInZlcnNpb24iOjF9.n45s1_gW040u5f2y-zfVx_5XU-J97dcuWlmaIZsJsCetcHtrjsbHut2gAcPxErl8UPTXSq1XDg5WWug4FPM8CQ"}, {"type": "precision", "value": 0.974, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTY5ZTZiNmYzZDQzYWZiZDdlNDllZWQ4NTVjZWZlYWJkZDgyNGNhZjAzOTZjZDc0NDUwMTE3ODVlMjFjNTIxZCIsInZlcnNpb24iOjF9.4lR7MgvxxTblEV4LZGbko-ylIeFjcjNM5P21iYH6vkNkjItIfiXmKbL55_Zeab4oGJ5ytWz0rIdlpNnmmV29Cw"}, {"type": "precision", "value": 0.9746805065928548, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDEzYmZmZDIyNDFmNzJmODQ2ODdhYTUyYzQyZjEzZTdhMjg3MTllOGFkNGRlMDFhYzI4ZGE5OTExNjk1ZTI5OSIsInZlcnNpb24iOjF9.Ti5gL3Tk9hCpriIUhB8ltdKRibSilvRZOxAlLCgAkrhg0dXGE5f4n8almCAjbRJEaPW6H6581PhuUfjgMqceBw"}, {"type": "recall", "value": 0.9783617516169679, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWUwMGUwYmY3MWQwOTcwYjI2Yjc3Yzc1YWQ1YjU2ODY3MzAyMDdkNmM3MmFhZmMxZWFhMTUxNzZlNzViMDA0ZiIsInZlcnNpb24iOjF9.IWhPl9xS5pqEaFHKsBZj6JRtJRpQZQqJhQYW6zmtPi2F3speRsKc0iksfHkmPjm678v-wKUJ4zyGfRs-63HmBg"}, {"type": "recall", "value": 0.974, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjlhMDY0MmI2NzBiMWY5NTcwYjZlYzE5ODg0ODk1ZTBjZDI4YmZiY2RmZWVlZGUxYzk2MDQ4NjRkMTQ4ZTEzZiIsInZlcnNpb24iOjF9.g5p5b0BqyZxb7Hk9DayRndhs5F0r44h8TXMJDaP6IoFdYzlBfEcZv7UkCu6s6laz9-F-hhZHUZii2ljtYasVAA"}, {"type": "recall", "value": 0.974, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJjNTE2ZWFjMGYyZGUzOWI3MDRhM2I2MTRjZGNkOWZkZDJhNzQ4OTYwOTQ2NDY5OGNjZTZhOWU2MzlhNTY5YyIsInZlcnNpb24iOjF9.JnRFkZ-v-yRhCf6di7ONcy_8Tv0rNXQir1TVw-cU9fNY1c4vKRmGaKmLGeR7TxpmKzEQtikb6mFwRwhIAhl8AA"}, {"type": "f1", "value": 0.9783635353409951, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjM2NDY3MmUyMmEyZjg5MWZhNjllOGRlNWVkYzgyYmM5ZDBmMDdhYmY5NDAxZmYwMjA0YTkzNTI2MjU0NTRlZiIsInZlcnNpb24iOjF9.HlbHjJa-bpYPjujWODpvfLVMtCnNQMDBCYpLGokfBoXibZGKfIzXcgNdXLdJ-DkmMUriX3wVZtGcRvA2ErUeDw"}, {"type": "f1", "value": 0.974, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjMxNDE4MTBmYzU2MTllMjlhNTcwYWJhMzRkNTE2ZGFiNmQ0ZTEyOWJhMmU2ZDliYTIzNDExYTM5MTAxYjcxNSIsInZlcnNpb24iOjF9.B7G9Gs74MosZPQ16QH2k-zrmlE8KCtIFu3BcrgObYiuqOz1aFURS3IPoOynVFLp1jnJtgQAmQRY_GDumSS-oDg"}, {"type": "f1", "value": 0.97377371266232, "name": "F1 Weighted", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmEyNjRlYmE5M2U1OWY0OGY2YjQyN2E0NmQxNjY0NTY3N2JiZmMwOWQ1ZTMzZDcwNTdjNWYwNTRiNTljNjMxMiIsInZlcnNpb24iOjF9.VryHh8G_ZvoiSm1SZRMw4kheGWuI3rQ6GUVqm2uf-kkaSU20rYMW20-VKCtwayLcrIHJ92to6YvvW7yI0Le5DA"}, {"type": "loss", "value": 0.13812002539634705, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk4MDQ5NGRiNTExYmE3NGU1ZmQ1YjUzMTQ4NzUwNWViYzFiODEzMjc2MDA2MzYyOGNjNjYxYzliNDM4Y2U0ZSIsInZlcnNpb24iOjF9.u68ogPOH6-_pb6ZVulzMVfHIfFlLwBeDp8H4iqgfBadjwj2h-aO0jzc4umWFWtzWespsZvnlDjklbhhgrd1vCQ"}]}]}]}
|
aychang/bert-base-cased-trec-coarse
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"dataset:trec",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #text-classification #en #dataset-trec #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-cased trained on TREC 6-class task
## Model description
A simple base BERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
##### AdaptNLP
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC URL
## Training procedure
Preprocessing, hardware used, hyperparameters...
#### Hardware
One V100
#### Hyperparameters and Training Args
## Eval results
|
[
"# bert-base-cased trained on TREC 6-class task",
"## Model description\n\nA simple base BERT model trained on the \"trec\" dataset.",
"## Intended uses & limitations",
"#### How to use",
"##### Transformers",
"##### AdaptNLP",
"#### Limitations and bias\n\nThis is minimal language model trained on a benchmark dataset.",
"## Training data\n\nTREC URL",
"## Training procedure\n\nPreprocessing, hardware used, hyperparameters...",
"#### Hardware\nOne V100",
"#### Hyperparameters and Training Args",
"## Eval results"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #text-classification #en #dataset-trec #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-cased trained on TREC 6-class task",
"## Model description\n\nA simple base BERT model trained on the \"trec\" dataset.",
"## Intended uses & limitations",
"#### How to use",
"##### Transformers",
"##### AdaptNLP",
"#### Limitations and bias\n\nThis is minimal language model trained on a benchmark dataset.",
"## Training data\n\nTREC URL",
"## Training procedure\n\nPreprocessing, hardware used, hyperparameters...",
"#### Hardware\nOne V100",
"#### Hyperparameters and Training Args",
"## Eval results"
] |
question-answering
| null |
# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad
## Model description
A serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
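For local experimentation (outside Triton), the serialized model can be loaded with `torch.jit.load`. This is only a sketch: the file name `model.pt`, the traced forward signature, and the returned tuple are assumptions that depend on how the artifact was exported.
```python
# Local-inference sketch only (not the Triton path).
# Assumes the repo ships a traced file named "model.pt" whose forward takes
# (input_ids, attention_mask) and returns (start_logits, end_logits).
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-large-cased-whole-word-masking-finetuned-squad")
model = torch.jit.load("model.pt")
model.eval()

question = "Who wrote the report?"
context = "The report was written by the data engineering team in 2020."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    start_logits, end_logits = model(inputs["input_ids"], inputs["attention_mask"])

start = start_logits.argmax(-1).item()
end = end_logits.argmax(-1).item()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```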
|
{"language": ["en"], "license": "mit", "tags": ["question-answering", "torchscript", "FastNN"], "datasets": ["squad"]}
|
aychang/bert-large-cased-whole-word-masking-finetuned-squad
| null |
[
"question-answering",
"torchscript",
"FastNN",
"en",
"dataset:squad",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#question-answering #torchscript #FastNN #en #dataset-squad #license-mit #region-us
|
# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad
## Model description
A serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a URL for deployment using NVIDIA Triton Inference Server.
|
[
"# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad",
"## Model description\n\nA serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a URL for deployment using NVIDIA Triton Inference Server."
] |
[
"TAGS\n#question-answering #torchscript #FastNN #en #dataset-squad #license-mit #region-us \n",
"# TorchScript model of bert-large-cased-whole-word-masking-finetuned-squad",
"## Model description\n\nA serialized torchscript model of bert-large-cased-whole-word-masking-finetuned-squad with a URL for deployment using NVIDIA Triton Inference Server."
] |
text-classification
|
transformers
|
# TREC 6-class Task: distilbert-base-cased
## Model description
A simple base distilBERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "aychang/distilbert-base-cased-trec-coarse"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use pipeline
from transformers import pipeline
nlp = pipeline("text-classification", model=model_name, tokenizer=model_name)
results = nlp(["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/distilbert-base-cased-trec-coarse"
texts = ["Where did the queen go?", "Why did the Queen hire 1000 ML Engineers?"]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC https://huggingface.co/datasets/trec
## Training procedure
Preprocessing, hardware used, hyperparameters...
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=500,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.97,
'eval_f1': array([0.98220641, 0.91620112, 1. , 0.97709924, 0.98678414,
0.97560976]),
'eval_loss': 0.14275787770748138,
'eval_precision': array([0.96503497, 0.96470588, 1. , 0.96969697, 0.98245614,
0.96385542]),
'eval_recall': array([1. , 0.87234043, 1. , 0.98461538, 0.99115044,
0.98765432]),
'eval_runtime': 0.9731,
'eval_samples_per_second': 513.798}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-classification"], "datasets": ["trec"], "model-index": [{"name": "aychang/distilbert-base-cased-trec-coarse", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "trec", "type": "trec", "config": "default", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.97, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGNmZTQ1Mjk3YTQ0NTdiZmY2NGM2NDM2Yzc2OTI4NGNiZDg4MmViN2I0ZGZiYWJlMTg1ZDU0MTc2ZTg1NjcwZiIsInZlcnNpb24iOjF9.4x_Ze9S5MbAeIHZ4p1EFmWev8RLkAIYWKqouAzYOxTNqdfFN0HnqULiM19EMP42v658vl_fR3-Ig0xG45DioCA"}, {"type": "precision", "value": 0.9742915631870833, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjA2MWVjMDc3MDYyY2M3NzY4NGNhY2JlNzJjMGQzZDUzZjE3ZWI1MjVmMzc4ODM2ZTQ4YmRhOTVkZDU0MzJiNiIsInZlcnNpb24iOjF9.EfmXJ6w5_7dK6ys03hpADP9h_sWuPAHgxpltUtCkJP4Ys_Gh8Ak4pGS149zt5AdP_zkvsWlXwAvx5BDMEoB2AA"}, {"type": "precision", "value": 0.97, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDVjOGFjM2RkMDMxZTFiMzE1ZDM4OTRjMzkwOWE2NTJmMmUwMDdiZDg5ZjExYmFmZjg2Y2Y5NzcxZWVkODkwZSIsInZlcnNpb24iOjF9.BtO7DqJsUhSXE-_tJZJOPPd421VmZ3KR9-KkrhJkLNenoV2Xd6Pu6i5y6HZQhFB-9WfEhU9cCsIPQ1ioZ7dyDA"}, {"type": "precision", "value": 0.9699546283251607, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGQ0Mzc2MTE2YjkwNGY1MDEzNWQwYmNlZDMzZjBmNWM0ODExYjM1OTQyZGJkNjI2OTA5MDczZjFmOGM5MmMzMyIsInZlcnNpb24iOjF9.fGi2qNpOjWd1ci3p_E1p80nOqabiKiQqpQIxtk5aWxe_Nzqh3XiOCBF8vswCRvX8qTKdCc2ZEJ4s8dZMeltfCA"}, {"type": "recall", "value": 0.972626762268805, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQwMWZiYjIyMGVhN2M1ZDE5M2EzZmQ1ODRlYzE0MzJhZmU3ZTM1MmIyNTg5ZjBlMDcyMmQ0NmYzZjFmMmM4NSIsInZlcnNpb24iOjF9.SYDxsRw0xoQuQhei0YBdUbBxG891gqLafVFLdPMCJtQIktqCTrPW0sMKtis7GA-FEbNQVu8lp92znvlryNiFCw"}, {"type": "recall", "value": 0.97, "name": "Recall Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjQ0MjczYjFhZDdiMjdkMWVlZTAzYWU0ODVhNjkxN2I1N2Y1Y2IyOTNlYWQxM2UxODIyNDZhZDM3MWIwMTgzZCIsInZlcnNpb24iOjF9.C5cfDTz_H4Y7nEO4Eq_XFy92CSbo3IBuL5n8wBKkTuB6hSgctTHOdOJzV8gWyMJ9gRcNqxp_yVU4BEB_I_0KAA"}, {"type": "recall", "value": 0.97, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDZmYWM3OWExZWI1ZjRiZjczYWQwOWI5NWQzNDNkODcyMjBhMmVkYjY0MGZjYzlhNWQ0Y2MyMjc3OWEyZjY4NCIsInZlcnNpb24iOjF9.65WM5ihNfbKOCNZ6apX7iVAC2Ge_cwz9Xwa5oJHFq3Ci97eBFqK-qtADdB_SFRcSQUoNodaBeIhNfe0hVddxCA"}, {"type": "f1", "value": 0.9729834427867218, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWQyZGZmYjU4NjE4M2YzMTUxOWVkYjU0YTFmYzE3MmQ2NjhmNDY1MGRmNGQ1MWZjYjM1Mzg5Y2RmNTk5YmZiMSIsInZlcnNpb24iOjF9.WIF-fmV0SZ6-lcg3Rz6TjbVl7nLvy_ftDi8PPhDIP1V61jgR1AcjLFeEgeZLxSFMdmU9yqG2DWYubF0luK0jCg"}, {"type": "f1", "value": 0.97, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDM0NDY0YzI2ZTBjYWVmZmVkOTI4ODkzM2RhNWM2ZjkwYTU3N2FjNjA4NjUwYWVjODNhMGEwMzdhYmE2YmIwYyIsInZlcnNpb24iOjF9.sihEhcsOeg8dvpuGgC-KCp1PsRNyguAif2uTBv5ELtRnM5KmMaHzRqpdpdc88Dj_DeuY6Y6qPQJt_dGk2q1rDQ"}, {"type": "f1", "value": 0.9694196751375908, "name": "F1 Weighted", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ5ZjdiM2NiNDNkZTY5ZjNjNWUzZmI1MzgwMjhhNDEzMTEzZjFiNDhmZDllYmI0NjIwYjY0ZjcxM2M0ODE3NSIsInZlcnNpb24iOjF9.x4oR_PL0ALHYl-s4S7cPNPm4asSX3s3h30m-TKe7wpyZs0x6jwOqF-Tb1kgd4IMLl23pzsezmh72e_PmBFpRCg"}, {"type": "loss", "value": 0.14272506535053253, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODU3NGFiMzIxYWI4NzYxMzUxZGE5ZTZkYTlkN2U5MTI1NzA5NTBiNGM3Y2Q5YmVmZjU0MmU5MjJlZThkZTllMCIsInZlcnNpb24iOjF9.3QeWbECpJ0MHV5gC0_ES6PpwplLsCHPKuToErB1MSG69xNWVyMjKu1-1YEWZOU6dGfwKGh_HvwucY5kC9qwWBQ"}]}]}]}
|
aychang/distilbert-base-cased-trec-coarse
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:trec",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #en #dataset-trec #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# TREC 6-class Task: distilbert-base-cased
## Model description
A simple base distilBERT model trained on the "trec" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
##### AdaptNLP
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
TREC URL
## Training procedure
Preprocessing, hardware used, hyperparameters...
#### Hardware
One V100
#### Hyperparameters and Training Args
## Eval results
|
[
"# TREC 6-class Task: distilbert-base-cased",
"## Model description\n\nA simple base distilBERT model trained on the \"trec\" dataset.",
"## Intended uses & limitations",
"#### How to use",
"##### Transformers",
"##### AdaptNLP",
"#### Limitations and bias\n\nThis is minimal language model trained on a benchmark dataset.",
"## Training data\n\nTREC URL",
"## Training procedure\n\nPreprocessing, hardware used, hyperparameters...",
"#### Hardware\nOne V100",
"#### Hyperparameters and Training Args",
"## Eval results"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #en #dataset-trec #license-mit #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# TREC 6-class Task: distilbert-base-cased",
"## Model description\n\nA simple base distilBERT model trained on the \"trec\" dataset.",
"## Intended uses & limitations",
"#### How to use",
"##### Transformers",
"##### AdaptNLP",
"#### Limitations and bias\n\nThis is minimal language model trained on a benchmark dataset.",
"## Training data\n\nTREC URL",
"## Training procedure\n\nPreprocessing, hardware used, hyperparameters...",
"#### Hardware\nOne V100",
"#### Hyperparameters and Training Args",
"## Eval results"
] |
question-answering
| null |
# TorchScript model of distilbert-squad
## Model description
A serialized torchscript model of distilbert-squad with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
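As a sketch of the Triton deployment path, a request could be sent to a running server with the official HTTP client. The tensor names, datatypes, and shapes below are assumptions and must be replaced with whatever the model's config.pbtxt actually declares.
```python
# Hypothetical Triton client call; names, dtypes and shapes must match the config.pbtxt.
import numpy as np
import tritonclient.http as httpclient
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
enc = tokenizer("Who wrote it?", "It was written by the platform team.",
                padding="max_length", max_length=128, return_tensors="np")

client = httpclient.InferenceServerClient(url="localhost:8000")
inputs = []
for name in ("input_ids", "attention_mask"):  # assumed input names
    tensor = httpclient.InferInput(name, list(enc[name].shape), "INT64")
    tensor.set_data_from_numpy(enc[name].astype(np.int64))
    inputs.append(tensor)

result = client.infer(model_name="distilbert-squad", inputs=inputs)
start_logits = result.as_numpy("start_logits")  # assumed output names
end_logits = result.as_numpy("end_logits")
```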
|
{"language": ["en"], "license": "mit", "tags": ["question-answering", "torchscript", "FastNN"], "datasets": ["squad"]}
|
aychang/distilbert-squad
| null |
[
"question-answering",
"torchscript",
"FastNN",
"en",
"dataset:squad",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#question-answering #torchscript #FastNN #en #dataset-squad #license-mit #region-us
|
# TorchScript model of distilbert-squad
## Model description
A serialized torchscript model of distilbert-squad with a URL for deployment using NVIDIA Triton Inference Server.
|
[
"# TorchScript model of distilbert-squad",
"## Model description\n\nA serialized torchscript model of distilbert-squad with a URL for deployment using NVIDIA Triton Inference Server."
] |
[
"TAGS\n#question-answering #torchscript #FastNN #en #dataset-squad #license-mit #region-us \n",
"# TorchScript model of distilbert-squad",
"## Model description\n\nA serialized torchscript model of distilbert-squad with a URL for deployment using NVIDIA Triton Inference Server."
] |
object-detection
| null |
# TorchScript model of faster-rcnn
## Model description
A serialized torchscript model of [faster-rcnn](https://pytorch.org/vision/stable/models.html#faster-r-cnn) with a config.pbtxt for deployment using NVIDIA Triton Inference Server.
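A minimal sketch of running the serialized detector locally follows; the file name `model.pt` is an assumption, and the output format assumes the model was scripted from torchvision, which returns a `(losses, detections)` tuple.
```python
# Sketch: load the serialized detector and run it on a dummy image.
# "model.pt" is an assumed file name; scripted torchvision detection models
# return a (losses, detections) tuple even in eval mode.
import torch

model = torch.jit.load("model.pt", map_location="cpu")
model.eval()

image = torch.rand(3, 480, 640)  # detection models take a list of CHW float tensors in [0, 1]
with torch.no_grad():
    losses, detections = model([image])

boxes = detections[0]["boxes"]    # (N, 4) xyxy boxes
scores = detections[0]["scores"]  # confidence per box; COCO label ids are in detections[0]["labels"]
print(boxes.shape, scores[:5])
```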
|
{"language": ["en"], "license": "mit", "tags": ["object-detection", "torchscript", "FastNN"], "datasets": ["coco"]}
|
aychang/fasterrcnn-resnet50-cpu
| null |
[
"object-detection",
"torchscript",
"FastNN",
"en",
"dataset:coco",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#object-detection #torchscript #FastNN #en #dataset-coco #license-mit #region-us
|
# TorchScript model of faster-rcnn
## Model description
A serialized torchscript model of faster-rcnn with a URL for deployment using NVIDIA Triton Inference Server.
|
[
"# TorchScript model of faster-rcnn",
"## Model description\n\nA serialized torchscript model of faster-rcnn with a URL for deployment using NVIDIA Triton Inference Server."
] |
[
"TAGS\n#object-detection #torchscript #FastNN #en #dataset-coco #license-mit #region-us \n",
"# TorchScript model of faster-rcnn",
"## Model description\n\nA serialized torchscript model of faster-rcnn with a URL for deployment using NVIDIA Triton Inference Server."
] |
text-classification
|
transformers
|
# IMDB Sentiment Task: roberta-base
## Model description
A simple base roBERTa model trained on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
```python
# Load model and tokenizer
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model_name = "aychang/roberta-base-imdb"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Use pipeline
from transformers import pipeline
nlp = pipeline("sentiment-analysis", model=model_name, tokenizer=model_name)
results = nlp(["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."])
```
##### AdaptNLP
```python
from adaptnlp import EasySequenceClassifier
model_name = "aychang/roberta-base-imdb"
texts = ["I didn't really like it because it was so terrible.", "I love how easy it is to watch and get good results."]
classifier = EasySequenceClassifier()
results = classifier.tag_text(text=texts, model_name_or_path=model_name, mini_batch_size=2)
```
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
IMDB https://huggingface.co/datasets/imdb
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
```python
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./models',
overwrite_output_dir=False,
num_train_epochs=2,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
warmup_steps=500,
weight_decay=0.01,
evaluation_strategy="steps",
logging_dir='./logs',
fp16=False,
eval_steps=800,
save_steps=300000
)
```
## Eval results
```
{'epoch': 2.0,
'eval_accuracy': 0.94668,
'eval_f1': array([0.94603457, 0.94731017]),
'eval_loss': 0.2578844428062439,
'eval_precision': array([0.95762642, 0.93624502]),
'eval_recall': array([0.93472, 0.95864]),
'eval_runtime': 244.7522,
'eval_samples_per_second': 102.144}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-classification"], "datasets": ["imdb"]}
|
aychang/roberta-base-imdb
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"en",
"dataset:imdb",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #roberta #text-classification #en #dataset-imdb #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# IMDB Sentiment Task: roberta-base
## Model description
A simple base roBERTa model trained on the "imdb" dataset.
## Intended uses & limitations
#### How to use
##### Transformers
##### AdaptNLP
#### Limitations and bias
This is a minimal language model trained on a benchmark dataset.
## Training data
IMDB URL
## Training procedure
#### Hardware
One V100
#### Hyperparameters and Training Args
## Eval results
|
[
"# IMDB Sentiment Task: roberta-base",
"## Model description\n\nA simple base roBERTa model trained on the \"imdb\" dataset.",
"## Intended uses & limitations",
"#### How to use",
"##### Transformers",
"##### AdaptNLP",
"#### Limitations and bias\n\nThis is minimal language model trained on a benchmark dataset.",
"## Training data\n\nIMDB URL",
"## Training procedure",
"#### Hardware\nOne V100",
"#### Hyperparameters and Training Args",
"## Eval results"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #text-classification #en #dataset-imdb #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# IMDB Sentiment Task: roberta-base",
"## Model description\n\nA simple base roBERTa model trained on the \"imdb\" dataset.",
"## Intended uses & limitations",
"#### How to use",
"##### Transformers",
"##### AdaptNLP",
"#### Limitations and bias\n\nThis is minimal language model trained on a benchmark dataset.",
"## Training data\n\nIMDB URL",
"## Training procedure",
"#### Hardware\nOne V100",
"#### Hyperparameters and Training Args",
"## Eval results"
] |
text-generation
|
transformers
|
# My Awesome Model
|
{"tags": ["conversational"]}
|
aydin/DialoGPT-medium-michael
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# My Awesome Model
|
[
"# My Awesome Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# My Awesome Model"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-imdb
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [imdb](https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
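As a minimal usage sketch, the model can be loaded with the text-generation pipeline; the prompt and sampling settings below are illustrative, not settings documented by the author.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="aypan17/distilgpt2-imdb")
print(generator("This movie was", max_length=60, do_sample=True, top_p=0.95)[0]["generated_text"])
```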
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-imdb", "results": []}]}
|
aypan17/distilgpt2-imdb
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# distilgpt2-imdb
This model is a fine-tuned version of distilgpt2 on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# distilgpt2-imdb\n\nThis model is a fine-tuned version of distilgpt2 on the imdb dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# distilgpt2-imdb\n\nThis model is a fine-tuned version of distilgpt2 on the imdb dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-med-imdb
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "gpt2-med-imdb", "results": []}]}
|
aypan17/gpt2-med-imdb
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# gpt2-med-imdb
This model is a fine-tuned version of gpt2-medium on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# gpt2-med-imdb\n\nThis model is a fine-tuned version of gpt2-medium on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# gpt2-med-imdb\n\nThis model is a fine-tuned version of gpt2-medium on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.17.0.dev0\n- Pytorch 1.10.2\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
text-classification
|
transformers
|
TrainingArgs:
lr=2e-5,
train-batch-size=16,
eval-batch-size=16,
num-train-epochs=5,
weight-decay=0.01,
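A sketch of how these values might map onto `transformers.TrainingArguments`; the `output_dir` and any options not listed above are assumptions, not the author's actual configuration.
```python
from transformers import TrainingArguments

# Illustrative mapping of the listed values; output_dir and unlisted options are assumptions.
training_args = TrainingArguments(
    output_dir="./models",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    weight_decay=0.01,
)
```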
|
{"license": "mit"}
|
aypan17/roberta-base-imdb
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #roberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
TrainingArgs:
lr=2e-5,
train-batch-size=16,
eval-batch-size=16,
num-train-epochs=5,
weight-decay=0.01,
|
[] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #license-mit #autotrain_compatible #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# RudeRick discord bot
|
{"tags": ["conversational"]}
|
ayush19/rick-sanchez
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# RudeRick discord bot
|
[
"# RudeRick discord bot"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# RudeRick discord bot"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbert-finetuned-azerbaijani-ner
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1385
- Precision: 0.8899
- Recall: 0.9154
- F1: 0.9025
- Accuracy: 0.9669
## Model description
More information needed
## Intended uses & limitations
More information needed
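As a minimal inference sketch (the example sentence and aggregation strategy are illustrative, not part of the original card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azizbarank/mbert-finetuned-azerbaijani-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Bakı Azərbaycanın paytaxtıdır."))
```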
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2928 | 1.0 | 625 | 0.1415 | 0.8584 | 0.8918 | 0.8748 | 0.9595 |
| 0.1254 | 2.0 | 1250 | 0.1335 | 0.8875 | 0.9119 | 0.8996 | 0.9637 |
| 0.077 | 3.0 | 1875 | 0.1385 | 0.8899 | 0.9154 | 0.9025 | 0.9669 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wikiann"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "mbert-finetuned-azerbaijani-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "wikiann", "type": "wikiann", "args": "az"}, "metrics": [{"type": "precision", "value": 0.8898541731306236, "name": "Precision"}, {"type": "recall", "value": 0.915416533673795, "name": "Recall"}, {"type": "f1", "value": 0.9024543738200126, "name": "F1"}, {"type": "accuracy", "value": 0.966948310139165, "name": "Accuracy"}]}]}]}
|
azizbarank/mbert-finetuned-azerbaijani-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
mbert-finetuned-azerbaijani-ner
===============================
This model is a fine-tuned version of bert-base-multilingual-cased on the wikiann dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1385
* Precision: 0.8899
* Recall: 0.9154
* F1: 0.9025
* Accuracy: 0.9669
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.6
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #dataset-wikiann #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.6"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-gn-demo
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7426
- Wer: 0.7256
## Model description
More information needed
## Intended uses & limitations
More information needed
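As a minimal transcription sketch (the audio path is a placeholder; 16 kHz mono input is assumed, as is standard for wav2vec2 models):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="azuur/wav2vec2-base-gn-demo")
print(asr("guarani_sample_16khz.wav")["text"])  # placeholder file name
```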
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 50
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 4.0 | 100 | 0.7045 | 0.7409 |
| No log | 8.0 | 200 | 0.7200 | 0.75 |
| No log | 12.0 | 300 | 0.7400 | 0.7439 |
| No log | 16.0 | 400 | 0.7677 | 0.7515 |
| 0.0846 | 20.0 | 500 | 0.7765 | 0.7271 |
| 0.0846 | 24.0 | 600 | 0.7821 | 0.7287 |
| 0.0846 | 28.0 | 700 | 0.7671 | 0.7180 |
| 0.0846 | 32.0 | 800 | 0.7594 | 0.7180 |
| 0.0846 | 36.0 | 900 | 0.7500 | 0.7165 |
| 0.0713 | 40.0 | 1000 | 0.7351 | 0.7287 |
| 0.0713 | 44.0 | 1100 | 0.7361 | 0.7241 |
| 0.0713 | 48.0 | 1200 | 0.7389 | 0.7378 |
| 0.0713 | 52.0 | 1300 | 0.7424 | 0.7210 |
| 0.0713 | 56.0 | 1400 | 0.7425 | 0.7256 |
| 0.0669 | 60.0 | 1500 | 0.7426 | 0.7256 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"language": ["gn"], "license": "apache-2.0", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice", "mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-base-gn-demo", "results": []}]}
|
azuur/wav2vec2-base-gn-demo
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"hf-asr-leaderboard",
"gn",
"dataset:common_voice",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"gn"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_8_0 #robust-speech-event #hf-asr-leaderboard #gn #dataset-common_voice #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-base-gn-demo
=====================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7426
* Wer: 0.7256
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine\_with\_restarts
* lr\_scheduler\_warmup\_steps: 50
* num\_epochs: 60
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #mozilla-foundation/common_voice_8_0 #robust-speech-event #hf-asr-leaderboard #gn #dataset-common_voice #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Ragnar Lothbrok DialoGPT Model
|
{"tags": ["conversational"]}
|
b0shakk/DialoGPT-small-Ragnar
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Ragnar Lothbrok DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
image-classification
|
transformers
|
# shirt_identifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
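A minimal inference sketch for this checkpoint (the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="b25mayank3/shirt_identifier")
print(classifier("shirt_photo.jpg"))  # placeholder path; returns a list of {label, score} dicts
```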
## Example Images
#### Big Check shirt

#### Formal Shirt

#### casual shirt

#### denim shirt

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
b25mayank3/shirt_identifier
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
|
# shirt_identifier
Autogenerated by HuggingPics️
Create your own image classifier for anything by running the demo on Google Colab.
Report any issues with the demo at the github repo.
## Example Images
#### Big Check shirt
!Big Check shirt
#### Formal Shirt
!Formal Shirt
#### casual shirt
!casual shirt
#### denim shirt
!denim shirt
|
[
"# shirt_identifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Big Check shirt\n\n!Big Check shirt",
"#### Formal Shirt\n\n!Formal Shirt",
"#### casual shirt\n\n!casual shirt",
"#### denim shirt\n\n!denim shirt"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #vit #image-classification #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"# shirt_identifier\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.",
"## Example Images",
"#### Big Check shirt\n\n!Big Check shirt",
"#### Formal Shirt\n\n!Formal Shirt",
"#### casual shirt\n\n!casual shirt",
"#### denim shirt\n\n!denim shirt"
] |
text-generation
|
transformers
|
# GPT-Neo 125M finetuned with beer recipes
## Model Description
GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture (https://huggingface.co/EleutherAI/gpt-neo-125M).
It generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.
## Training data
This model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following
styles of beer:
* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='b3ck1/gpt-neo-125M-finetuned-beer-recipes')
>>> generator("style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", do_sample=True, min_length=50, max_length=500)
>>> print(output[0]['generated_text'])
style: Pilsner
batch_size: 20
efficiency: 70
boil_size: 24
boil_time: 60
fermentables:
- name: Pale Ale
type: Grain
amount: 6.5
hops:
- name: Saaz
alpha: 3.5
use: Boil
time: 60
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 30
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 10
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 0
amount: 0.06
yeasts:
- name: Safale - American Ale Yeast US-05
amount: 0.11
min_temperature: 12
max_temperature: 25
primary_temp: null
mash_steps:
- step_temp: 65
step_time: 60
miscs: []
```
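Because the output is YAML-like, it can usually be parsed directly. The snippet below is a sketch using PyYAML and assumes the sampled text is well-formed YAML, which may occasionally require cleanup:
```python
import yaml  # pip install pyyaml
from transformers import pipeline

generator = pipeline("text-generation", model="b3ck1/gpt-neo-125M-finetuned-beer-recipes")
output = generator("style: IPA\nbatch_size: 20\nefficiency: 75\nboil_size:",
                   do_sample=True, max_length=500)
recipe = yaml.safe_load(output[0]["generated_text"])  # may fail if a sample isn't clean YAML
print(recipe["style"], [h["name"] for h in recipe.get("hops", [])])
```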
### See this model in action
This model was used to build https://beerai.net.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text generation", "pytorch", "causal-lm"], "datasets": ["custom"], "widget": [{"text": "style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", "example_title": "Pilsener"}, {"text": "style: IPA\nbatch_size: 20\nefficiency: 75\nboil_size:", "example_title": "IPA"}, {"text": "style: Scottish Ale\nbatch_size: 20\nefficiency: 75\nboil_size:", "example_title": "Scottish Ale"}], "inference": {"parameters": {"do_sample": true, "top_k": 10, "top_p": 0.99, "max_length": 500}}}
|
b3ck1/gpt-neo-125M-finetuned-beer-recipes
| null |
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"text generation",
"causal-lm",
"en",
"dataset:custom",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt_neo #text-generation #text generation #causal-lm #en #dataset-custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# GPT-Neo 125M finetuned with beer recipes
## Model Description
GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture URL
It generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.
## Training data
This model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following
styles of beer:
* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:
### See this model in action
This model was used to build URL.
|
[
"# GPT-Neo 125M finetuned with beer recipes",
"## Model Description\n\nGPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture URL\nIt generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.",
"## Training data\n\nThis model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following \nstyles of beer:\n\n* Strong American Ale \n* Pale American Ale\n* India Pale Ale (IPA)\n* Standard American Beer\n* Stout\n* English Pale Ale\n* IPA\n* American Porter and Stout\n* Sour Ale\n* Irish Beer\n* Strong British Ale\n* Belgian and French Ale\n* German Wheat and Rye Beer\n* Czech Lager\n* Spice/Herb/Vegetable Beer\n* Specialty Beer\n* American Ale\n* Pilsner\n* Belgian Ale\n* Strong Belgian Ale\n* Bock\n* Brown British Beer\n* German Wheat Beer\n* Fruit Beer\n* Amber Malty European Lager\n* Pale Malty European Lager\n* British Bitter\n* Amber and Brown American Beer\n* Light Hybrid Beer\n* Pale Commonwealth Beer\n* American Wild Ale\n* European Amber Lager\n* Belgian Strong Ale\n* International Lager\n* Amber Bitter European Lager\n* Light Lager\n* Scottish and Irish Ale\n* European Sour Ale\n* Trappist Ale\n* Strong European Beer\n* Porter\n* Historical Beer\n* Pale Bitter European Beer\n* Amber Hybrid Beer\n* Smoke Flavored/Wood-Aged Beer\n* Spiced Beer\n* Dark European Lager\n* Alternative Fermentables Beer\n* Mead\n* Strong Ale\n* Dark British Beer\n* Scottish Ale\n* Smoked Beer\n* English Brown Ale\n* Dark Lager\n* Cider or Perry\n* Wood Beer",
"### How to use\n\nYou can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:",
"### See this model in action\n\nThis model was used to build URL."
] |
[
"TAGS\n#transformers #pytorch #gpt_neo #text-generation #text generation #causal-lm #en #dataset-custom #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# GPT-Neo 125M finetuned with beer recipes",
"## Model Description\n\nGPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture URL\nIt generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.",
"## Training data\n\nThis model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following \nstyles of beer:\n\n* Strong American Ale \n* Pale American Ale\n* India Pale Ale (IPA)\n* Standard American Beer\n* Stout\n* English Pale Ale\n* IPA\n* American Porter and Stout\n* Sour Ale\n* Irish Beer\n* Strong British Ale\n* Belgian and French Ale\n* German Wheat and Rye Beer\n* Czech Lager\n* Spice/Herb/Vegetable Beer\n* Specialty Beer\n* American Ale\n* Pilsner\n* Belgian Ale\n* Strong Belgian Ale\n* Bock\n* Brown British Beer\n* German Wheat Beer\n* Fruit Beer\n* Amber Malty European Lager\n* Pale Malty European Lager\n* British Bitter\n* Amber and Brown American Beer\n* Light Hybrid Beer\n* Pale Commonwealth Beer\n* American Wild Ale\n* European Amber Lager\n* Belgian Strong Ale\n* International Lager\n* Amber Bitter European Lager\n* Light Lager\n* Scottish and Irish Ale\n* European Sour Ale\n* Trappist Ale\n* Strong European Beer\n* Porter\n* Historical Beer\n* Pale Bitter European Beer\n* Amber Hybrid Beer\n* Smoke Flavored/Wood-Aged Beer\n* Spiced Beer\n* Dark European Lager\n* Alternative Fermentables Beer\n* Mead\n* Strong Ale\n* Dark British Beer\n* Scottish Ale\n* Smoked Beer\n* English Brown Ale\n* Dark Lager\n* Cider or Perry\n* Wood Beer",
"### How to use\n\nYou can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:",
"### See this model in action\n\nThis model was used to build URL."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
{"language": ["ab"], "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": []}]}
|
baaastien/xls-r-ab-test
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"ab",
"dataset:common_voice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ab"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us
|
#
This model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.
It achieves the following results on the evaluation set:
- Loss: 133.5167
- Wer: 18.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
[
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 133.5167\n- Wer: 18.9286",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #ab #dataset-common_voice #endpoints_compatible #region-us \n",
"# \n\nThis model is a fine-tuned version of hf-test/xls-r-dummy on the COMMON_VOICE - AB dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 133.5167\n- Wer: 18.9286",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 2.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.1.dev0\n- Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-timit_asr-oogway
This model is a fine-tuned version of [OthmaneJ/distil-wav2vec2](https://huggingface.co/OthmaneJ/distil-wav2vec2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-timit_asr-oogway", "results": []}]}
|
baby-oogway/wav2vec2-timit_asr-oogway
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-timit_asr-oogway
This model is a fine-tuned version of OthmaneJ/distil-wav2vec2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-timit_asr-oogway\n\nThis model is a fine-tuned version of OthmaneJ/distil-wav2vec2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-timit_asr-oogway\n\nThis model is a fine-tuned version of OthmaneJ/distil-wav2vec2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
null |
transformers
|
"hello"
|
{}
|
bada/test
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"pretraining",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #pretraining #endpoints_compatible #region-us
|
"hello"
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #pretraining #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# Genji-python 6B
For example usage or to easily use the model you can check our colab notebook:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Model Description
Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size.
The split model has its checkpoints split up, which makes it use less system RAM while loading and makes it faster to load.
This model needs more effort to set up, as you need to install git-lfs and pull the repo.
| Hyperparameter | Value |
|-------------------|--------|
| n_parameters | 6,053,381,344 |
| n_layers | 28* |
| d_model | 4,096 |
| d_ff | 16,384 |
| n_heads | 16 |
| d_head | 256 |
| n_ctx | 2,048 |
| n_vocab | 50,400 (same tokenizer as GPT-2/3) |
| position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
`*` each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
## Training data
GPT-J 6B was pretrained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it was finetuned on the Python code taken from the Pile.
## Training procedure
Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.
## Intended Use
This model is trained to assist with writing Python code and for having fun trying weird stuff with it.
### How to use
This model is only usable with our fork because GPT-J is not merged into the main transformers repo yet. When it is merged, we will make this model easily loadable.
For now, you need to use this fork:
[Fork](https://github.com/finetuneanon/transformers)
to install with pip:
```bash
pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b
```
**git-lfs** also needs to be installed; on Ubuntu:
```bash
apt install git-lfs
```
after it's installed, initialize git-lfs:
```bash
git lfs install
```
then clone this repo:
```bash
git clone https://huggingface.co/NovelAI/genji-python-6B-split
```
Now we can load the model.
We recommend using the model in FP16; that way, it fits on 16 GB VRAM cards.
How to use:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GPTNeoForCausalLM,
)
model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
text = '''def print_customer_name'''
tokens = tokenizer(text, return_tensors="pt").input_ids
generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id)
last_tokens = generated_tokens[0][len(tokens[0]):]
generated_text = tokenizer.decode(last_tokens)
print("Generation:\n" + generated_text)
```
When run, this code generates:
```python
Prompt:
def print_customer_name
Generation:
(self, customer):
"""Print the name of a customer."""
if not self.is_valid():
return
print("Customer: {}".format(customer))
```
For example usage, you can see our colab notebook as well:
[Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing)
## Eval results
TBD
## Acknowledgements
This project was possible because of the compute provided by the
[TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B.
Thanks to everyone who contributed to this project:
- [Aero](https://github.com/AeroScripts)
- [Finetune](https://github.com/finetuneanon)
- [Kurumuz](https://github.com/kurumuz)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["the Pile"]}
|
baffo32/genji-python-6B-split
| null |
[
"transformers",
"gpt_neo",
"text-generation",
"pytorch",
"causal-lm",
"en",
"arxiv:2104.09864",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864"
] |
[
"en"
] |
TAGS
#transformers #gpt_neo #text-generation #pytorch #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
Genji-python 6B
===============
For example usage or to easily use the model you can check our colab notebook:
Notebook
Model Description
-----------------
Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size.
The split model has its checkpoints split up, which makes it use less system RAM while loading and makes it faster to load.
This model needs more effort to set up, as you need to install git-lfs and pull the repo.
'\*' each layer consists of one feedforward block and one self attention block
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
Training data
-------------
GPT-J 6B was pretrained on the Pile, a large-scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it was finetuned on the Python code taken from the Pile.
Training procedure
------------------
Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06.
Intended Use
------------
This model is trained to assist with writing Python code and for having fun trying weird stuff with it.
### How to use
This model is only usable with our fork because GPT-J is not merged into the main transformers repo yet. When it is merged, we will make this model easily loadable.
For now, you need to use this fork:
Fork
to install with pip:
git-lfs also needs to be installed, on ubuntu:
after it's installed, initialize git-lfs:
then clone this repo:
Now we can load the model.
We recommend using the model in FP16; that way, it fits on 16 GB VRAM cards.
How to use:
When run, this code generates:
For example usage, you can see our colab notebook as well:
Notebook
Eval results
------------
TBD
Acknowledgements
----------------
This project was possible because of the compute provided by the
TPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B.
Thanks to everyone who contributed to this project:
* Aero
* Finetune
* Kurumuz
|
[
"### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\ngit-lfs also needs to be installed, on ubuntu:\n\n\nafter it's installed, initialize git-lfs:\n\n\nthen clone this repo:\n\n\nNow we can load the model.\n\n\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project:\n\n\n* Aero\n* Finetune\n* Kurumuz"
] |
[
"TAGS\n#transformers #gpt_neo #text-generation #pytorch #causal-lm #en #arxiv-2104.09864 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nThis model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable.\nFor now, you need to use this fork:\nFork\n\n\nto install with pip:\n\n\ngit-lfs also needs to be installed, on ubuntu:\n\n\nafter it's installed, initialize git-lfs:\n\n\nthen clone this repo:\n\n\nNow we can load the model.\n\n\nWe recommend the usage of the model as FP16. That way, it fits in 16GB VRAM cards.\n\n\nHow to use:\n\n\nWhen ran, this code generates:\n\n\nFor example usage, you can see our colab notebook as well:\nNotebook\n\n\nEval results\n------------\n\n\nTBD\n\n\nAcknowledgements\n----------------\n\n\nThis project was possible because of the compute provided by the\nTPU Research Cloud and EleutherAI for pretraining of the GPT-J 6B.\n\n\nThanks to everyone who contributed to this project:\n\n\n* Aero\n* Finetune\n* Kurumuz"
] |
text-generation
|
transformers
|
# GPT-J 6B
## Model Description
GPT-J 6B is a transformer model trained using Ben Wang's [Mesh Transformer JAX](https://github.com/kingoflolz/mesh-transformer-jax/). "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
<figure>
| Hyperparameter | Value |
|----------------------|------------|
| \\(n_{parameters}\\) | 6053381344 |
| \\(n_{layers}\\) | 28* |
| \\(d_{model}\\) | 4096 |
| \\(d_{ff}\\) | 16384 |
| \\(n_{heads}\\) | 16 |
| \\(d_{head}\\) | 256 |
| \\(n_{ctx}\\) | 2048 |
| \\(n_{vocab}\\) | 50257/50400† (same tokenizer as GPT-2/3) |
| Positional Encoding | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864) |
| RoPE Dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |
<figcaption><p><strong>*</strong> Each layer consists of one feedforward block and one self attention block.</p>
<p><strong>†</strong> Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.</p></figcaption></figure>
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
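As a hedged illustration (assuming the publicly released `EleutherAI/gpt-j-6B` checkpoint and the standard `GPTJConfig` field names in Transformers), these dimensions can be read directly off the model configuration:
```python
from transformers import AutoConfig

# Sketch only: attribute names follow the GPTJConfig convention in Transformers.
config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
print(config.n_layer)                  # 28 layers
print(config.n_embd)                   # model dimension of 4096
print(config.n_head)                   # 16 attention heads
print(config.n_embd // config.n_head)  # 256 dimensions per head
print(config.rotary_dim)               # RoPE applied to 64 dimensions
```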
## Training data
GPT-J 6B was trained on [the Pile](https://pile.eleuther.ai), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai).
## Training procedure
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
## Intended Use and Limitations
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the `AutoModelForCausalLM` functionality:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
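A short generation example can follow on from the snippet above. The prompt and sampling settings here are illustrative only and are not taken from the original card:
```python
inputs = tokenizer("The Pile is a large-scale curated dataset", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=True,                       # sample instead of greedy decoding
    temperature=0.8,
    max_new_tokens=40,
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad-token warning
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```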
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See [Sections 5 and 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
## Evaluation results
<figure>
| Model | Public | Training FLOPs | LAMBADA PPL ↓ | LAMBADA Acc ↑ | Winogrande ↑ | Hellaswag ↑ | PIQA ↑ | Dataset Size (GB) |
|--------------------------|-------------|----------------|--- |--- |--- |--- |--- |-------------------|
| Random Chance | ✓ | 0 | ~a lot | ~0% | 50% | 25% | 25% | 0 |
| GPT-3 Ada‡ | ✗ | ----- | 9.95 | 51.6% | 52.9% | 43.4% | 70.5% | ----- |
| GPT-2 1.5B | ✓ | ----- | 10.63 | 51.21% | 59.4% | 50.9% | 70.8% | 40 |
| GPT-Neo 1.3B‡ | ✓ | 3.0e21 | 7.50 | 57.2% | 55.0% | 48.9% | 71.1% | 825 |
| Megatron-2.5B* | ✗ | 2.4e21 | ----- | 61.7% | ----- | ----- | ----- | 174 |
| GPT-Neo 2.7B‡ | ✓ | 6.8e21 | 5.63 | 62.2% | 56.5% | 55.8% | 73.0% | 825 |
| GPT-3 1.3B*‡ | ✗ | 2.4e21 | 5.44 | 63.6% | 58.7% | 54.7% | 75.1% | ~800 |
| GPT-3 Babbage‡ | ✗ | ----- | 5.58 | 62.4% | 59.0% | 54.5% | 75.5% | ----- |
| Megatron-8.3B* | ✗ | 7.8e21 | ----- | 66.5% | ----- | ----- | ----- | 174 |
| GPT-3 2.7B*‡ | ✗ | 4.8e21 | 4.60 | 67.1% | 62.3% | 62.8% | 75.6% | ~800 |
| Megatron-11B† | ✓ | 1.0e22 | ----- | ----- | ----- | ----- | ----- | 161 |
| **GPT-J 6B‡** | **✓** | **1.5e22** | **3.99** | **69.7%** | **65.3%** | **66.1%** | **76.5%** | **825** |
| GPT-3 6.7B*‡ | ✗ | 1.2e22 | 4.00 | 70.3% | 64.5% | 67.4% | 78.0% | ~800 |
| GPT-3 Curie‡ | ✗ | ----- | 4.00 | 69.3% | 65.6% | 68.5% | 77.9% | ----- |
| GPT-3 13B*‡ | ✗ | 2.3e22 | 3.56 | 72.5% | 67.9% | 70.9% | 78.5% | ~800 |
| GPT-3 175B*‡ | ✗ | 3.1e23 | 3.00 | 76.2% | 70.2% | 78.9% | 81.0% | ~800 |
| GPT-3 Davinci‡ | ✗ | ----- | 3.0 | 75% | 72% | 78% | 80% | ----- |
<figcaption><p>Models roughly sorted by performance, or by FLOPs if not available.</p>
<p><strong>*</strong> Evaluation numbers reported by their respective authors. All other numbers are provided by
running <a href="https://github.com/EleutherAI/lm-evaluation-harness/"><code>lm-evaluation-harness</code></a> either with released
weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See <a href="https://blog.eleuther.ai/gpt3-model-sizes/">this blog post</a> for more
details.</p>
<p><strong>†</strong> Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations. (see <a href="https://github.com/huggingface/transformers/pull/10301">1</a>
<a href="https://github.com/pytorch/fairseq/issues/2358">2</a> <a href="https://github.com/pytorch/fairseq/issues/2719">3</a>)
Thus, evaluation was not attempted.</p>
<p><strong>‡</strong> These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are
trained on the Pile, which has not been deduplicated against any test sets.</p></figcaption></figure>
## Citation and Related Information
### BibTeX entry
To cite this model:
```bibtex
@misc{gpt-j,
author = {Wang, Ben and Komatsuzaki, Aran},
title = {{GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
author = {Wang, Ben},
title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
year = 2021,
month = May
}
```
If you use this model, we would love to hear about it! Reach out on [GitHub](https://github.com/kingoflolz/mesh-transformer-jax), Discord, or shoot Ben an email.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/), as well as the Cloud TPU team for providing early access to the [Cloud TPU VM](https://cloud.google.com/blog/products/compute/introducing-cloud-tpu-vms) Alpha.
Thanks to everyone who has helped out one way or another (listed alphabetically):
- [James Bradbury](https://twitter.com/jekbradbury) for valuable assistance with debugging JAX issues.
- [Stella Biderman](https://www.stellabiderman.com), [Eric Hallahan](https://twitter.com/erichallahan), [Kurumuz](https://github.com/kurumuz/), and [Finetune](https://github.com/finetuneanon/) for converting the model to be compatible with the `transformers` package.
- [Leo Gao](https://twitter.com/nabla_theta) for running zero shot evaluations for the baseline models for the table.
- [Laurence Golding](https://github.com/researcher2/) for adding some features to the web demo.
- [Aran Komatsuzaki](https://twitter.com/arankomatsuzaki) for advice with experiment design and writing the blog posts.
- [Janko Prester](https://github.com/jprester/) for creating the web demo frontend.
|
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["The Pile"]}
|
baffo32/gpt-j-6B-ptmap
| null |
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"en",
"arxiv:2104.09864",
"arxiv:2101.00027",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09864",
"2101.00027"
] |
[
"en"
] |
TAGS
#transformers #pytorch #gptj #text-generation #causal-lm #en #arxiv-2104.09864 #arxiv-2101.00027 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
GPT-J 6B
========
Model Description
-----------------
GPT-J 6B is a transformer model trained using Ben Wang's Mesh Transformer JAX. "GPT-J" refers to the class of model, while "6B" represents the number of trainable parameters.
**\*** Each layer consists of one feedforward block and one self attention block.
**†** Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer.
The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model
dimension is split into 16 heads, each with a dimension of 256. Rotary Position Embedding (RoPE) is applied to 64
dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as
GPT-2/GPT-3.
Training data
-------------
GPT-J 6B was trained on the Pile, a large-scale curated dataset created by EleutherAI.
Training procedure
------------------
This model was trained for 402 billion tokens over 383,500 steps on TPU v3-256 pod. It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token correctly.
Intended Use and Limitations
----------------------------
GPT-J learns an inner representation of the English language that can be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating text from a prompt.
### How to use
This model can be easily loaded using the 'AutoModelForCausalLM' functionality:
### Limitations and Biases
The core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon GPT-J to produce factually accurate output.
GPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.
As with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.
Evaluation results
------------------
Models roughly sorted by performance, or by FLOPs if not available.
**\*** Evaluation numbers reported by their respective authors. All other numbers are provided by
running 'lm-evaluation-harness' either with released weights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these
might not be directly comparable. See URL for more details.
**†** Megatron-11B provides no comparable metrics, and several implementations using the released weights do not
reproduce the generation quality and evaluations (see URL). Thus, evaluation was not attempted.
**‡** These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models
failed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one are
trained on the Pile, which has not been deduplicated against any test sets.
Citation and Related Information
### BibTeX entry
To cite this model:
To cite the codebase that trained this model:
If you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email.
Acknowledgements
----------------
This project would not have been possible without compute generously provided by Google through the
TPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.
Thanks to everyone who has helped out one way or another (listed alphabetically):
* James Bradbury for valuable assistance with debugging JAX issues.
* Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the 'transformers' package.
* Leo Gao for running zero shot evaluations for the baseline models for the table.
* Laurence Golding for adding some features to the web demo.
* Aran Komatsuzaki for advice with experiment design and writing the blog posts.
* Janko Prester for creating the web demo frontend.
|
[
"### How to use\n\n\nThis model can be easily loaded using the 'AutoModelForCausalLM' functionality:",
"### Limitations and Biases\n\n\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\n\n\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEvaluation results\n------------------\n\n\n\n\nModels roughly sorted by performance, or by FLOPs if not available.\n\n\n**\\*** Evaluation numbers reported by their respective authors. All other numbers are provided by\nrunning [for more\ndetails.](URL either with released\nweights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these\nmight not be directly comparable. See <a href=)\n\n\n**†** Megatron-11B provides no comparable metrics, and several implementations using the released weights do not\nreproduce the generation quality and evaluations. (see <a href=\"URL\n<a href=\"URL <a href=\"URL\nThus, evaluation was not attempted.</p>\n**‡** These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models\nfailed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is\ntrained on the Pile, which has not been deduplicated against any test sets.\n\n\n\n\nand Related Information",
"### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nIf you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.\n\n\nThanks to everyone who have helped out one way or another (listed alphabetically):\n\n\n* James Bradbury for valuable assistance with debugging JAX issues.\n* Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the 'transformers' package.\n* Leo Gao for running zero shot evaluations for the baseline models for the table.\n* Laurence Golding for adding some features to the web demo.\n* Aran Komatsuzaki for advice with experiment design and writing the blog posts.\n* Janko Prester for creating the web demo frontend."
] |
[
"TAGS\n#transformers #pytorch #gptj #text-generation #causal-lm #en #arxiv-2104.09864 #arxiv-2101.00027 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nThis model can be easily loaded using the 'AutoModelForCausalLM' functionality:",
"### Limitations and Biases\n\n\nThe core functionality of GPT-J is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting GPT-J it is important to remember that the statistically most likely next token is often not the token that produces the most \"accurate\" text. Never depend upon GPT-J to produce factually accurate output.\n\n\nGPT-J was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case GPT-J may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\n\n\nAs with all language models, it is hard to predict in advance how GPT-J will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.\n\n\nEvaluation results\n------------------\n\n\n\n\nModels roughly sorted by performance, or by FLOPs if not available.\n\n\n**\\*** Evaluation numbers reported by their respective authors. All other numbers are provided by\nrunning [for more\ndetails.](URL either with released\nweights or with API access. Due to subtle implementation differences as well as different zero shot task framing, these\nmight not be directly comparable. See <a href=)\n\n\n**†** Megatron-11B provides no comparable metrics, and several implementations using the released weights do not\nreproduce the generation quality and evaluations. (see <a href=\"URL\n<a href=\"URL <a href=\"URL\nThus, evaluation was not attempted.</p>\n**‡** These models have been trained with data which contains possible test set contamination. The OpenAI GPT-3 models\nfailed to deduplicate training data for certain test sets, while the GPT-Neo models as well as this one is\ntrained on the Pile, which has not been deduplicated against any test sets.\n\n\n\n\nand Related Information",
"### BibTeX entry\n\n\nTo cite this model:\n\n\nTo cite the codebase that trained this model:\n\n\nIf you use this model, we would love to hear about it! Reach out on GitHub, Discord, or shoot Ben an email.\n\n\nAcknowledgements\n----------------\n\n\nThis project would not have been possible without compute generously provided by Google through the\nTPU Research Cloud, as well as the Cloud TPU team for providing early access to the Cloud TPU VM Alpha.\n\n\nThanks to everyone who have helped out one way or another (listed alphabetically):\n\n\n* James Bradbury for valuable assistance with debugging JAX issues.\n* Stella Biderman, Eric Hallahan, Kurumuz, and Finetune for converting the model to be compatible with the 'transformers' package.\n* Leo Gao for running zero shot evaluations for the baseline models for the table.\n* Laurence Golding for adding some features to the web demo.\n* Aran Komatsuzaki for advice with experiment design and writing the blog posts.\n* Janko Prester for creating the web demo frontend."
] |
text-generation
|
transformers
|
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token `i` only uses the inputs from `1` to `i` but not the future tokens.
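As a hedged sketch of that objective (this snippet is not part of the original card): with `GPT2LMHeadModel` the labels are simply the inputs themselves, because the one-token shift is applied inside the model:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

enc = tokenizer("Hello, I'm a language model,", return_tensors='pt')
# For causal language modeling, the labels are the input ids themselves;
# the model shifts them internally so that position i predicts token i+1.
out = model(**enc, labels=enc['input_ids'])
print(out.loss)  # cross-entropy over next-token predictions
```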
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
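As a small, hedged sanity check (not part of the original card), both numbers can be read off the released tokenizer:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(len(tokenizer))              # 50257 vocabulary entries
print(tokenizer.model_max_length)  # 1024-token context window
# Byte-level BPE covers any unicode string without needing an <unk> token
print(tokenizer.tokenize("café ☕"))
```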
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
|          | 35.13   | 45.99   | 87.65  | 83.4   | 29.41     | 65.85  | 1.16    | 1.17   | 37.50       | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": "en", "license": "mit", "tags": ["exbert"]}
|
baffo32/gpt2-ptmap
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #tflite #rust #gpt2 #text-generation #exbert #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GPT-2
=====
Test the whole generation capabilities here: URL
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
this paper
and first released at this page.
Disclaimer: The team releasing GPT-2 also wrote a
model card for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
Model description
-----------------
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence,
shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the
predictions for the token 'i' only uses the inputs from '1' to 'i' but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
Intended uses & limitations
---------------------------
You can use the raw model for text generation or fine-tune it to a downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
model card:
>
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
>
>
>
Here's an example of how the model can have biased predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
here.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
Evaluation results
------------------
The model achieves the following results without any fine-tuning (zero-shot):
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
|
[
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #tflite #rust #gpt2 #text-generation #exbert #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we\nset a seed for reproducibility:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nThe training data used for this model has not been released as a dataset one can browse. We know it contains a lot of\nunfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their\nmodel card:\n\n\n\n> \n> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases\n> that require the generated text to be true.\n> \n> \n> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do\n> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a\n> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,\n> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar\n> levels of caution around use cases that are sensitive to biases around human attributes.\n> \n> \n> \n\n\nHere's an example of how the model can have biased predictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web\npages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from\nthis dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weights\n40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText\nhere.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a\nvocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.\n\n\nThe larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact\ndetails of training.\n\n\nEvaluation results\n------------------\n\n\nThe model achieves the following results without any fine-tuning (zero-shot):",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
text2text-generation
|
transformers
|
# ByT5 - Base
ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base).
ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task.
ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).
Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
## Example Inference
ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:
```python
from transformers import T5ForConditionalGeneration
import torch
model = T5ForConditionalGeneration.from_pretrained('google/byt5-base')
input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3 # add 3 for special tokens
labels = torch.tensor([list("La vie est comme une boîte de chocolat.".encode("utf-8"))]) + 3 # add 3 for special tokens
loss = model(input_ids, labels=labels).loss # forward pass
```
For batched inference & training, however, it is recommended to use a tokenizer class for padding:
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
model = T5ForConditionalGeneration.from_pretrained('google/byt5-base')
tokenizer = AutoTokenizer.from_pretrained('google/byt5-base')
model_inputs = tokenizer(["Life is like a box of chocolates.", "Today is Monday."], padding="longest", return_tensors="pt")
labels = tokenizer(["La vie est comme une boîte de chocolat.", "Aujourd'hui c'est lundi."], padding="longest", return_tensors="pt").input_ids
loss = model(**model_inputs, labels=labels).loss # forward pass
```
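As a further hedged sketch (not in the original card), the byte/ID mapping also works in the decoding direction: the first three IDs are reserved for special tokens (hence the `+3` offset above), IDs 3–258 map back to UTF-8 bytes via `id - 3`, and higher IDs are sentinel tokens. Note that this raw pre-trained checkpoint is not fine-tuned, so the generated text itself will not be meaningful; the point is only the byte round-trip:
```python
from transformers import T5ForConditionalGeneration
import torch

model = T5ForConditionalGeneration.from_pretrained('google/byt5-base')

input_ids = torch.tensor([list("Life is like a box of chocolates.".encode("utf-8"))]) + 3
generated = model.generate(input_ids, max_new_tokens=20)[0]

# Keep only byte IDs (3-258): drop pad/eos/unk (0-2) and sentinel tokens (>= 259),
# then undo the +3 offset to recover the raw UTF-8 bytes.
raw_bytes = bytes(i - 3 for i in generated.tolist() if 2 < i < 259)
print(raw_bytes.decode("utf-8", errors="ignore"))
```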
## Abstract
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.

|
{"language": "multilingual", "license": "apache-2.0", "datasets": ["mc4"]}
|
baffo32/pyc2py_alpha2
| null |
[
"transformers",
"jax",
"t5",
"text2text-generation",
"multilingual",
"dataset:mc4",
"arxiv:1907.06292",
"arxiv:2105.13626",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1907.06292",
"2105.13626"
] |
[
"multilingual"
] |
TAGS
#transformers #jax #t5 #text2text-generation #multilingual #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ByT5 - Base
ByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.
ByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.
ByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-base' significantly outperforms mt5-base on TweetQA.
Paper: ByT5: Towards a token-free future with pre-trained byte-to-byte models
Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*
## Example Inference
ByT5 works on raw UTF-8 bytes and can be used without a tokenizer:
For batched inference & training it is however recommended using a tokenizer class for padding:
## Abstract
Most widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.
!model image
|
[
"# ByT5 - Base\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-base' significantly outperforms mt5-base on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*",
"## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:",
"## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image"
] |
[
"TAGS\n#transformers #jax #t5 #text2text-generation #multilingual #dataset-mc4 #arxiv-1907.06292 #arxiv-2105.13626 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ByT5 - Base\n\nByT5 is a tokenizer-free version of Google's T5 and generally follows the architecture of MT5.\n\nByT5 was only pre-trained on mC4 excluding any supervised training with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is useable on a downstream task.\n\nByT5 works especially well on noisy text data,*e.g.*, 'google/byt5-base' significantly outperforms mt5-base on TweetQA.\n\nPaper: ByT5: Towards a token-free future with pre-trained byte-to-byte models\n\nAuthors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*",
"## Example Inference\n\nByT5 works on raw UTF-8 bytes and can be used without a tokenizer:\n\n\n\nFor batched inference & training it is however recommended using a tokenizer class for padding:",
"## Abstract\n\nMost widely-used pre-trained language models operate on sequences of tokens corresponding to word or subword units. Encoding text as a sequence of tokens requires a tokenizer, which is typically created as an independent artifact from the model. Token-free models that instead operate directly on raw text (bytes or characters) have many benefits: they can process text in any language out of the box, they are more robust to noise, and they minimize technical debt by removing complex and error-prone text preprocessing pipelines. Since byte or character sequences are longer than token sequences, past work on token-free models has often introduced new model architectures designed to amortize the cost of operating directly on raw text. In this paper, we show that a standard Transformer architecture can be used with minimal modifications to process byte sequences. We carefully characterize the trade-offs in terms of parameter count, training FLOPs, and inference speed, and show that byte-level models are competitive with their token-level counterparts. We also demonstrate that byte-level models are significantly more robust to noise and perform better on tasks that are sensitive to spelling and pronunciation. As part of our contribution, we release a new set of pre-trained byte-level Transformer models based on the T5 architecture, as well as all code and data used in our experiments.\n\n!model image"
] |
translation
|
transformers
|
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
Pretraining Dataset: [C4](https://huggingface.co/datasets/c4)
Other Community Checkpoints: [here](https://huggingface.co/models?search=t5)
Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.

|
{"language": ["en", "fr", "ro", "de"], "license": "apache-2.0", "tags": ["summarization", "translation"], "datasets": ["c4"]}
|
baffo32/t5-base-ptmap
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"t5",
"text2text-generation",
"summarization",
"translation",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"arxiv:1910.10683",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.10683"
] |
[
"en",
"fr",
"ro",
"de"
] |
TAGS
#transformers #pytorch #tf #jax #rust #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Google's T5
Pretraining Dataset: C4
Other Community Checkpoints: here
Paper: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
## Abstract
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.
!model image
|
[
"## Abstract\n\nTransfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.\n\n!model image"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #rust #t5 #text2text-generation #summarization #translation #en #fr #ro #de #dataset-c4 #arxiv-1910.10683 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Abstract\n\nTransfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new “Colossal Clean Crawled Corpus”, we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code.\n\n!model image"
] |
text-generation
|
transformers
|
# Model name
Indian Political Tweets LM
## Model description
Note: this model is based on GPT2. If you want a bigger model based on GPT2-medium and fine-tuned on the same data, please take a look at the [IndianPoliticalTweetsLMMedium](https://huggingface.co/bagdaebhishek/IndianPoliticalTweetsLMMedium) model.
This is a GPT2 language model with an LM head, fine-tuned on tweets crawled from handles which belong predominantly to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from Twitter handles which are not very influential, but since the data is not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2 model from the Huggingface transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is mentioned in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated the tweets from these handles based on their language and trained the LM with the English tweets from both clusters.
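The centrality-based pruning described above can be sketched roughly as follows; this is my own illustration rather than the author's code, and the graph construction, edge list, and threshold value are all assumptions:
```python
# Illustrative sketch of centrality-based pruning (not the author's original code).
# Assumes an undirected interaction graph between handles built with networkx;
# the 0.01 threshold is a hypothetical placeholder for the manually tuned value.
import networkx as nx

def prune_low_influence_handles(edge_list, threshold=0.01):
    """Return the handles whose eigenvector centrality clears the threshold."""
    graph = nx.Graph(edge_list)
    centrality = nx.eigenvector_centrality(graph, max_iter=1000)
    return {handle for handle, score in centrality.items() if score >= threshold}

# Toy usage on a tiny triangle graph of handles
kept = prune_low_influence_handles([("@a", "@b"), ("@b", "@c"), ("@c", "@a")])
print(kept)
```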
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
{"language": "en", "license": "apache-2.0", "tags": ["India", "politics", "tweets", "BJP", "Congress", "AAP", "pytorch", "gpt2", "lm-head", "text-generation"], "datasets": ["Twitter", "IndianPolitics"], "thumbnail": "https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg"}
|
bagdaebhishek/IndianPoliticalTweetsLM
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"lm-head",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #India #politics #tweets #BJP #Congress #AAP #lm-head #en #dataset-Twitter #dataset-IndianPolitics #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model name
Indian Political Tweets LM
## Model description
Note: This model is based on GPT2, if you want a bigger model based on GPT2-medium and finetuned on the same data please take a look at the IndianPoliticalTweetsLMMedium model.
This is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this blog post.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2 model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a blog post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
[
"# Model name\nIndian Political Tweets LM",
"## Model description\nNote: This model is based on GPT2, if you want a bigger model based on GPT2-medium and finetuned on the same data please take a look at the IndianPoliticalTweetsLMMedium model. \n\nThis is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this blog post.",
"## Intended uses & limitations\n This finetuned model can be used to generate tweets which are related to Indian politics.",
"#### How to use",
"#### Limitations and bias\n1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate \"Hinglish\" text and hence no assumptions should be made about the language of the generated text.\n2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like \"-sent via NamoApp\" etc.\n3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.",
"## Training data\nI used the pre-trained gpt2 model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a blog post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.",
"## Training procedure\n\nFor pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.\n\nI then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.",
"### Hardware\n1. GPU: GTX 1080Ti\n2. CPU: Ryzen 3900x\n3. RAM: 32GB\n\nThis model took roughly 36 hours to fine-tune."
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #India #politics #tweets #BJP #Congress #AAP #lm-head #en #dataset-Twitter #dataset-IndianPolitics #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model name\nIndian Political Tweets LM",
"## Model description\nNote: This model is based on GPT2, if you want a bigger model based on GPT2-medium and finetuned on the same data please take a look at the IndianPoliticalTweetsLMMedium model. \n\nThis is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this blog post.",
"## Intended uses & limitations\n This finetuned model can be used to generate tweets which are related to Indian politics.",
"#### How to use",
"#### Limitations and bias\n1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate \"Hinglish\" text and hence no assumptions should be made about the language of the generated text.\n2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like \"-sent via NamoApp\" etc.\n3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.",
"## Training data\nI used the pre-trained gpt2 model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a blog post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.",
"## Training procedure\n\nFor pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.\n\nI then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.",
"### Hardware\n1. GPU: GTX 1080Ti\n2. CPU: Ryzen 3900x\n3. RAM: 32GB\n\nThis model took roughly 36 hours to fine-tune."
] |
text-generation
|
transformers
|
# Model name
Indian Political Tweets LM Medium (Based on GPT2-Medium)
## Model description
This is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
This model is fine-tuned from GPT2-medium instead of the vanilla GPT2 implementation. It has more parameters, but it is able to model language slightly better.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline
# load the GPT2-medium fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLMMedium")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLMMedium")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from Twitter handles which are not very influential, but since the data is not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2-medium model from the Huggingface transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is mentioned in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated the tweets from these handles based on their language and trained the LM with the English tweets from both clusters.
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
{"language": "en", "license": "apache-2.0", "tags": ["India", "politics", "tweets", "BJP", "Congress", "AAP", "pytorch", "gpt2", "lm-head", "text-generation"], "datasets": ["Twitter", "IndianPolitics"], "thumbnail": "https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg"}
|
bagdaebhishek/IndianPoliticalTweetsLMMedium
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"lm-head",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #India #politics #tweets #BJP #Congress #AAP #lm-head #en #dataset-Twitter #dataset-IndianPolitics #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model name
Indian Political Tweets LM Medium (Based on GPT2-Medium)
## Model description
This is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this blog post.
This model is finetuned using GPT2-medium instead of the vanilla GPT2 implementation. This model has more parameters but it is able to model language slightly better.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2-medium model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a blog post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
[
"# Model name\nIndian Political Tweets LM Medium (Based on GPT2-Medium)",
"## Model description\n\nThis is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this blog post. \n\nThis model is finetuned using GPT2-medium instead of the vanilla GPT2 implementation. This model has more parameters but it is able to model language slightly better.",
"## Intended uses & limitations\n This finetuned model can be used to generate tweets which are related to Indian politics.",
"#### How to use",
"#### Limitations and bias\n1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate \"Hinglish\" text and hence no assumptions should be made about the language of the generated text.\n2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like \"-sent via NamoApp\" etc.\n3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.",
"## Training data\nI used the pre-trained gpt2-medium model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a blog post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.",
"## Training procedure\n\nFor pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.\n\nI then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.",
"### Hardware\n1. GPU: GTX 1080Ti\n2. CPU: Ryzen 3900x\n3. RAM: 32GB\n\nThis model took roughly 36 hours to fine-tune."
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #India #politics #tweets #BJP #Congress #AAP #lm-head #en #dataset-Twitter #dataset-IndianPolitics #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model name\nIndian Political Tweets LM Medium (Based on GPT2-Medium)",
"## Model description\n\nThis is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this blog post. \n\nThis model is finetuned using GPT2-medium instead of the vanilla GPT2 implementation. This model has more parameters but it is able to model language slightly better.",
"## Intended uses & limitations\n This finetuned model can be used to generate tweets which are related to Indian politics.",
"#### How to use",
"#### Limitations and bias\n1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate \"Hinglish\" text and hence no assumptions should be made about the language of the generated text.\n2. I've taken enough care to remove tweets from twitter handles which are not very influential but since it's not curated by hand there might be some artefacts like \"-sent via NamoApp\" etc.\n3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.",
"## Training data\nI used the pre-trained gpt2-medium model from Huggingface transformers repository and fine-tuned it on custom data set crawled from twitter. The method used to identify the political handles is mentioned in detail in a blog post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.",
"## Training procedure\n\nFor pre-processing, I removed tweets from handles which are not very influential in their cluster. I removed them by calculating Eigenvector centrality on the twitter graph and pruning handles which have this measure below a certain threshold. This threshold was set manually after experimenting with different values.\n\nI then separated tweets by these handles based on their language. I trained the LM with English tweets from both handles.",
"### Hardware\n1. GPU: GTX 1080Ti\n2. CPU: Ryzen 3900x\n3. RAM: 32GB\n\nThis model took roughly 36 hours to fine-tune."
] |
fill-mask
|
transformers
|
hello
|
{}
|
baicuya/bert_cn
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
hello
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Sinai Voice Arabic Speech Recognition Model
# نموذج **صوت سيناء** للتعرف على الأصوات العربية الفصحى و تحويلها إلى نصوص
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - AR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2141
- Wer: 0.1808
Detailed evaluation metrics:
- eval_loss = 0.2141
- eval_samples = 10388
- eval_wer = 0.181
- eval_cer = 0.049
#### Evaluation Commands
1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
```bash
python eval.py --model_id bakrianoo/sinai-voice-ar-stt --dataset mozilla-foundation/common_voice_8_0 --config ar --split test
```
### Inference Without LM
```python
from transformers import (Wav2Vec2Processor, Wav2Vec2ForCTC)
import torchaudio
import torch
def speech_file_to_array_fn(voice_path, resampling_to=16000):
    # load the audio file and resample it to the target sampling rate
    speech_array, sampling_rate = torchaudio.load(voice_path)
    resampler = torchaudio.transforms.Resample(sampling_rate, resampling_to)
    return resampler(speech_array)[0].numpy(), resampling_to
# load the model
cp = "bakrianoo/sinai-voice-ar-stt"
processor = Wav2Vec2Processor.from_pretrained(cp)
model = Wav2Vec2ForCTC.from_pretrained(cp)
# recognize the text in a sample sound file
sound_path = './my_voice.mp3'
sample, sr = speech_file_to_array_fn(sound_path)
inputs = processor([sample], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
```
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 10
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 256
- total_eval_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.354 | 0.64 | 1000 | 0.4109 | 0.4493 |
| 0.5886 | 1.28 | 2000 | 0.2798 | 0.3099 |
| 0.4977 | 1.92 | 3000 | 0.2387 | 0.2673 |
| 0.4253 | 2.56 | 4000 | 0.2266 | 0.2523 |
| 0.3942 | 3.2 | 5000 | 0.2171 | 0.2437 |
| 0.3619 | 3.84 | 6000 | 0.2076 | 0.2253 |
| 0.3245 | 4.48 | 7000 | 0.2088 | 0.2186 |
| 0.308 | 5.12 | 8000 | 0.2086 | 0.2206 |
| 0.2881 | 5.76 | 9000 | 0.2089 | 0.2105 |
| 0.2557 | 6.4 | 10000 | 0.2015 | 0.2004 |
| 0.248 | 7.04 | 11000 | 0.2044 | 0.1953 |
| 0.2251 | 7.68 | 12000 | 0.2058 | 0.1932 |
| 0.2052 | 8.32 | 13000 | 0.2117 | 0.1878 |
| 0.1976 | 8.96 | 14000 | 0.2104 | 0.1825 |
| 0.1845 | 9.6 | 15000 | 0.2156 | 0.1821 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"language": ["ar"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "widget": [{"example_title": "Example 1", "src": "https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19077324.mp3"}, {"example_title": "Example 2", "src": "https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19205138.mp3"}, {"example_title": "Example 3", "src": "https://huggingface.co/bakrianoo/sinai-voice-ar-stt/raw/main/examples/common_voice_ar_19331711.mp3"}], "model-index": [{"name": "Sinai Voice Arabic Speech Recognition Model", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "mozilla-foundation/common_voice_8_0", "args": "ar"}, "metrics": [{"type": "wer", "value": 0.181, "name": "Test WER"}, {"type": "cer", "value": 0.049, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 93.03, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ar"}, "metrics": [{"type": "wer", "value": 90.79, "name": "Test WER"}]}]}]}
|
bakrianoo/sinai-voice-ar-stt
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"robust-speech-event",
"ar",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #ar #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us
|
Sinai Voice Arabic Speech Recognition Model
===========================================
نموذج صوت سيناء للتعرف على الأصوات العربية الفصحى و تحويلها إلى نصوص
====================================================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - AR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2141
* Wer: 0.1808
It achieves the following results on the evaluation set:
* eval\_loss = 0.2141
* eval\_samples = 10388
* eval\_wer = 0.181
* eval\_cer = 0.049
#### Evaluation Commands
1. To evaluate on 'mozilla-foundation/common\_voice\_8\_0' with split 'test'
### Inference Without LM
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 32
* eval\_batch\_size: 10
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 8
* total\_train\_batch\_size: 256
* total\_eval\_batch\_size: 80
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2+cu113
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference Without LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 10\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 80\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #hf-asr-leaderboard #robust-speech-event #ar #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #has_space #region-us \n",
"#### Evaluation Commands\n\n\n1. To evaluate on 'mozilla-foundation/common\\_voice\\_8\\_0' with split 'test'",
"### Inference Without LM",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 10\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 256\n* total\\_eval\\_batch\\_size: 80\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu113\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
text2text-generation
|
transformers
|
## Arabic T5 Base Model
A customized T5 model for Arabic and English tasks. It could be used as an alternative to the `google/mt5-base` model, as it's much smaller and only targets Arabic- and English-based tasks.
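A minimal loading sketch is shown below. This is my own illustration rather than part of the original card; it assumes the checkpoint exposes the standard T5 interface in `transformers`, and the translation-style prompt is only a hypothetical example:
```python
# Minimal sketch, assuming the standard T5 interface applies to this checkpoint.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("bakrianoo/t5-arabic-base")
model = T5ForConditionalGeneration.from_pretrained("bakrianoo/t5-arabic-base")

# Hypothetical prompt; the task prefixes supported by this checkpoint are not documented here.
inputs = tokenizer("translate Arabic to English: كيف حالك؟", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```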
### About T5
```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```
[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
|
{"language": "Arabic", "license": "apache-2.0", "datasets": ["mc4"]}
|
bakrianoo/t5-arabic-base
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"Arabic"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #dataset-mc4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Arabic T5 Base Model
A customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-base' model, as it's much smaller and only targets Arabic and English based tasks.
### About T5
Read More
|
[
"## Arabic T5 Base Model\n\nA customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-base' model, as it's much smaller and only targets Arabic and English based tasks.",
"### About T5\n\n\n\nRead More"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #dataset-mc4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Arabic T5 Base Model\n\nA customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-base' model, as it's much smaller and only targets Arabic and English based tasks.",
"### About T5\n\n\n\nRead More"
] |
text2text-generation
|
transformers
|
## Arabic T5 Large Model
A customized T5 model for Arabic and English tasks. It could be used as an alternative to the `google/mt5-large` model, as it's much smaller and only targets Arabic- and English-based tasks.
### About T5
```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```
[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
|
{"language": "Arabic", "license": "apache-2.0", "datasets": ["mc4"]}
|
bakrianoo/t5-arabic-large
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"Arabic"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #dataset-mc4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Arabic T5 Large Model
A customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-large' model, as it's much smaller and only targets Arabic and English based tasks.
### About T5
Read More
|
[
"## Arabic T5 Large Model\n\nA customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-large' model, as it's much smaller and only targets Arabic and English based tasks.",
"### About T5\n\n\n\nRead More"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #dataset-mc4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Arabic T5 Large Model\n\nA customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-large' model, as it's much smaller and only targets Arabic and English based tasks.",
"### About T5\n\n\n\nRead More"
] |
text2text-generation
|
transformers
|
## Arabic T5 Small Model
A customized T5 model for Arabic and English tasks. It could be used as an alternative to the `google/mt5-small` model, as it's much smaller and only targets Arabic- and English-based tasks.
### About T5
```
T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which each task is converted into a text-to-text format.
The T5 model was presented in Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.
```
[Read More](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html)
|
{"language": "Arabic", "license": "apache-2.0", "datasets": ["mc4"]}
|
bakrianoo/t5-arabic-small
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"Arabic"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #dataset-mc4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Arabic T5 Small Model
A customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-small' model, as it's much smaller and only targets Arabic and English based tasks.
### About T5
Read More
|
[
"## Arabic T5 Small Model\n\nA customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-small' model, as it's much smaller and only targets Arabic and English based tasks.",
"### About T5\n\n\n\nRead More"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #dataset-mc4 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Arabic T5 Small Model\n\nA customized T5 Model for Arabic and English Task. It could be used as an alternative for 'google/mt5-small' model, as it's much smaller and only targets Arabic and English based tasks.",
"### About T5\n\n\n\nRead More"
] |
null | null |
The main card for Saturday’s Manny Pacquiao vs Yordenis Ugas fight gets underway at T-Mobile Arena in Las Vegas at 9 p.m. ET and the main event is expected to start sometime around 11:30 p.m. This is going to air on FOX Sports PPV and YouTube PPV. The card will cost
https://web.sites.google.com/view/ppv-livemanny-pacquiao-vs-yord/home
https://web.sites.google.com/view/freevasyl-manny-pacquiao-vs-yo/home
https://web.sites.google.com/view/ppvlivestreammannypacquiaovsyo/home
https://web.sites.google.com/view/watchtv-manny-pacquiao-vs-yord/home
https://web.sites.google.com/view/heresmannypacquiaovsyordenisug/home
https://web.sites.google.com/view/mannypacquiaovsyordenislive/home
https://web.sites.google.com/view/free-2021-manny-pacquiao-vs-yo/home
LIVE::Watch Full Fight Live Here
LIVE::Watch Full Fight Live Here
https://goodavail.com/boxing/
The most intriguing storyline for this fight is the belt itself that is on the line. Pacquiao won the Super version of the WBA’s welterweight title in July 2019 when he beat Keith Thurman in a split decision. The WBA stripped Pacquiao of the title this past January due to inactivity. The organizing body then promoted Ugas into the Super belt. Ugas won the WBA’s Regular title in September 2020 when he beat Abel Ramos in a split decision.
That means Ugas won a title last held by Pacquiao without having to beat Pacquiao. Pacquiao-Spence would have been a significantly bigger fight for welterweight supremacy, but this is still interesting. Pacquiao was an underdog against Spence, but comes into this fight as a -360 favorite at DraftKings Sportsbook.
How to watch Manny Pacquiao vs. Yordenis Ugas
TV channel: FOX Sports PPV
|
{}
|
balalsahabi/fdgdfg
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
The main card for Saturday’s Manny Pacquiao vs Yordenis Ugas fight gets underway at T-Mobile Arena in Las Vegas at 9 p.m. ET and the main event is expected to start sometime around 11:30 p.m. This is going to air on FOX Sports PPV and YouTube PPV. The card will cost
URL
URL
URL
URL
URL
URL
URL
LIVE::Watch Full Fight Live Here
LIVE::Watch Full Fight Live Here
URL
The most intriguing storyline for this fight is the belt itself that is on the line. Pacquiao won the Super version of the WBA’s welterweight title in July 2019 when he beat Keith Thurman in a split decision. The WBA stripped Pacquiao of the title this past January due to inactivity. The organizing body then promoted Ugas into the Super belt. Ugas won the WBA’s Regular title in September 2020 when he beat Abel Ramos in a split decision.
That means Ugas won a title last held by Pacquiao without having to beat Pacquiao. Pacquiao-Spence would have been a significantly bigger fight for welterweight supremacy, but this is still interesting. Pacquiao was an underdog against Spence, but comes into this fight as a -360 favorite at DraftKings Sportsbook.
How to watch Manny Pacquiao vs. Yordenis Ugas
TV channel: FOX Sports PPV
|
[] |
[
"TAGS\n#region-us \n"
] |
token-classification
|
transformers
|
# Named Entity Recognition using Transformers
This is a fine-tuned version of BERT using HuggingFace Transformers to perform Named Entity Recognition on text data. BERT is a state-of-the-art model with an attention mechanism as its underlying architecture, trained with masked-language-modeling and next-sentence-prediction objectives and used for various tasks including question answering systems and text summarization; it can also perform token classification tasks such as NER with great performance.
# Dataset
**CoNLL-2003** :
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.<br><br>
**Link** : https://huggingface.co/datasets/conll2003
# Using this fine-tuned version
From Python, download the whole pipeline and use it instantly with the following code:
```
from transformers import pipeline
# Loading the pipeline from hub
# Pipeline handles the preprocessing and post processing steps
model_checkpoint = "balamurugan1603/bert-finetuned-ner"
namedEntityRecogniser = pipeline(
"token-classification", model=model_checkpoint, aggregation_strategy="simple"
)
```
Reference for using this pipeline to find NER tags can be found in this <a href="https://github.com/balamurugan1603/Named-Entity-Recognition-using-Tranformers/blob/main/named-entity-recognition-using-transfer-learning.ipynb">notebook</a>.
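As a quick illustration (the sentence is arbitrary and the printed fields reflect the usual output format of a `token-classification` pipeline with `aggregation_strategy="simple"`):
```python
# Example call: the pipeline returns grouped entities as dicts with
# "entity_group", "word", "score", "start" and "end" keys.
results = namedEntityRecogniser("Sundar Pichai is the CEO of Google, headquartered in California.")
for entity in results:
    print(entity["entity_group"], "->", entity["word"])
```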
|
{}
|
balamurugan1603/bert-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #bert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Named Entity Recognition using Transformers
This is a Fine-tuned version of BERT using HuggingFace transformers to perform Named Entity Recognition on Text data. BERT is a state-of-the-art model with attention mechanism as underlying architecture trained with masked-language-modeling and next-sentence-prediction objectives, used for various tasks including Question answering systems, Text Summarization, etc... which can also perform token classification tasks such as NER with great performance.
# Dataset
CoNLL-2003 :
The shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.<br><br>
Link : URL
# Using this fine-tuned version
From python, download the whole pipeline and use it instantly using the following code :
Reference for using this pipeline to find NER tags can be found in this <a href="URL
|
[
"# Named Entity Recognition using Transformers\nThis is a Fine-tuned version of BERT using HuggingFace transformers to perform Named Entity Recognition on Text data. BERT is a state-of-the-art model with attention mechanism as underlying architecture trained with masked-language-modeling and next-sentence-prediction objectives, used for various tasks including Question answering systems, Text Summarization, etc... which can also perform token classification tasks such as NER with great performance.",
"# Dataset\nCoNLL-2003 :\nThe shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.<br><br>\nLink : URL",
"# Using this fine-tuned version\n\nFrom python, download the whole pipeline and use it instantly using the following code :\n\n\nReference for using this pipeline to find NER tags can be found in this <a href=\"URL"
] |
[
"TAGS\n#transformers #pytorch #tf #bert #token-classification #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Named Entity Recognition using Transformers\nThis is a Fine-tuned version of BERT using HuggingFace transformers to perform Named Entity Recognition on Text data. BERT is a state-of-the-art model with attention mechanism as underlying architecture trained with masked-language-modeling and next-sentence-prediction objectives, used for various tasks including Question answering systems, Text Summarization, etc... which can also perform token classification tasks such as NER with great performance.",
"# Dataset\nCoNLL-2003 :\nThe shared task of CoNLL-2003 concerns language-independent named entity recognition. We will concentrate on four types of named entities: persons, locations, organizations, and names of miscellaneous entities that do not belong to the previous three groups.<br><br>\nLink : URL",
"# Using this fine-tuned version\n\nFrom python, download the whole pipeline and use it instantly using the following code :\n\n\nReference for using this pipeline to find NER tags can be found in this <a href=\"URL"
] |
text-generation
|
transformers
|
# Test Bot DialoGPT Model
|
{"tags": ["conversational"]}
|
balta/DialoGPT-small-TestBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Test Bot DialoGTP Model
|
[
"# Test Bot DialoGTP Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Test Bot DialoGTP Model"
] |
text-generation
|
transformers
|
TRIGGER WARNING
---------------
This model was created by training GPT2-medium on a custom dataset containing tens of thousands of blog posts about people's experiences living with mental illnesses. As such, the texts that this model generates may be triggering and/or NSFW. Please explore at your own discretion.
The blog posts that were compiled were specifically about six different mental health conditions: depression, PTSD, CPTSD, borderline personality disorder, bipolar disorder (non-specific), and dissociation. These are very serious illnesses, so please treat this with respect, and I encourage everyone to learn more about these conditions.
Thank you, and enjoy!
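As a rough usage sketch that is not part of the original description, the model can be sampled with the standard `text-generation` pipeline; the prompt below mirrors one of the card's widget examples, and the sampling settings are illustrative assumptions.
```python
from transformers import pipeline

# Illustrative sampling settings; adjust as needed.
generator = pipeline("text-generation", model="banalyst/wonder-egg")
output = generator("I feel ", max_length=60, do_sample=True, top_p=0.95)
print(output[0]["generated_text"])
```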
|
{"language": "en", "widget": [{"text": "I feel "}, {"text": "I want "}, {"text": "I believe "}]}
|
banalyst/wonder-egg
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
TRIGGER WARNING
---------------
This model was created by training GPT2-medium on a custom dataset containing tens of thousands of blog posts about people's experiences living with mental illnesses. As such, the texts that this model generates may be triggering and/or NSFW. Please explore at your own discretion.
The blog posts that were compiled were specifically about 6 different mental health conditions: depression, ptsd, cptsd, borderline personality disorder, bipolar (non-specific), and dissociation. These are very serious illnesses so please treat this with respect, and I encourage everyone to learn more about these conditions.
Thank you, and enjoy!
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Rick Sanchez DialoGPT Model
|
{"tags": ["conversational"]}
|
banden/DialoGPT-medium-RickBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model
|
[
"# Rick Sanchez DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
text-generation
|
transformers
|
# Loki DialoGPT Model
|
{"tags": ["conversational"]}
|
banden/DialoGPT-small-LokiBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Loki DialoGPT Model
|
[
"# Loki DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Loki DialoGPT Model"
] |
text-classification
|
transformers
|
## Overview
This model was trained with data from https://registry.opendata.aws/helpful-sentences-from-reviews/ to predict how "helpful" a review is.
The model was fine-tuned from the `distilbert-base-uncased` model.
### Labels
LABEL_0 - Not helpful
LABEL_1 - Helpful
### How to use
The following code shows how to make a prediction with this model
```python
from transformers import (
AutoTokenizer,
AutoModelForSequenceClassification,
TextClassificationPipeline,
)
tokenizer = AutoTokenizer.from_pretrained("banjtheman/distilbert-base-uncased-helpful-amazon")
model = AutoModelForSequenceClassification.from_pretrained(
"banjtheman/distilbert-base-uncased-helpful-amazon"
)
pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer)
result = pipe("This was a Christmas gift for my grandson.")
print(result)
#[{'label': 'LABEL_0', 'score': 0.998775064945221}]
# This is NOT A HELPFUL comment
```
|
{"license": "apache-2.0"}
|
banjtheman/distilbert-base-uncased-helpful-amazon
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
## Overview
This model was trained with data from URL to predict how "helpful" a review is.
The model was fine-tuned from the 'distilbert-base-uncased' model
### Labels
LABEL_0 - Not helpful
LABEL_1 - Helpful
### How to use
The following code shows how to make a prediction with this model
|
[
"## Overview\r\n\r\nThis model was trained with data from URL to predict how \"helpful\" a review is.\r\n\r\nThe model was fine-tuned from the 'distilbert-base-uncased' model",
"### Labels\r\nLABEL_0 - Not helpful \r\nLABEL_1 - Helpful",
"### How to use\r\n\r\nThe following code shows how to make a prediction with this model"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"## Overview\r\n\r\nThis model was trained with data from URL to predict how \"helpful\" a review is.\r\n\r\nThe model was fine-tuned from the 'distilbert-base-uncased' model",
"### Labels\r\nLABEL_0 - Not helpful \r\nLABEL_1 - Helpful",
"### How to use\r\n\r\nThe following code shows how to make a prediction with this model"
] |
text-generation
|
transformers
|
Model based on [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3small_based_on_gpt2) for generating songs.
Tuned on lyrics collected from [genius](https://genius.com/).
Examples of artists whose lyrics were used (a minimal generation sketch follows the list):
* [Oxxxymiron](https://genius.com/artists/Oxxxymiron)
* [Моргенштерн](https://genius.com/artists/Morgenshtern)
* [ЛСП](https://genius.com/artists/Lsp)
* [Гражданская оборона](https://genius.com/artists/Civil-defense)
* [Король и Шут](https://genius.com/artists/The-king-and-the-jester)
* etc
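Below is a minimal generation sketch that is not part of the original card; the prompt is taken from the card's widget examples, and the sampling parameters mirror the card's inference settings (temperature 0.9, top_k 50, top_p 0.95).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bankholdup/rugpt3_song_writer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prompt taken from the card's widget examples.
prompt = "Батя возвращается трезвый, в руке буханка"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    top_k=50,
    top_p=0.95,
    max_length=200,  # the card's widget uses a longer length; shortened here
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```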
|
{"language": ["ru"], "tags": ["PyTorch", "Transformers"], "widget": [{"text": "\u0411\u0430\u0442\u044f \u0432\u043e\u0437\u0432\u0440\u0430\u0449\u0430\u0435\u0442\u0441\u044f \u0442\u0440\u0435\u0437\u0432\u044b\u0439, \u0432 \u0440\u0443\u043a\u0435 \u0431\u0443\u0445\u0430\u043d\u043a\u0430", "example_title": "Example 1"}, {"text": "\u041a\u0430\u043a \u0434\u0435\u043b\u0430? \u041a\u0430\u043a \u0434\u0435\u043b\u0430? \u042d\u0442\u043e \u043d\u043e\u0432\u044b\u0439 \u043a\u0430\u0434\u0438\u043b\u043b\u0430\u043a", "example_title": "Example 2"}, {"text": "4:20 \u043d\u0430 \u0447\u0430\u0441\u0430\u0445 \u0438 \u044f \u0434\u0440\u043e\u0447\u0443 \u043d\u0430 \u0442\u0432\u043e\u0451 \u0444\u043e\u0442\u043e", "example_title": "Example 3"}], "inference": {"parameters": {"temperature": 0.9, "k": 50, "p": 0.95, "length": 1500}}}
|
bankholdup/rugpt3_song_writer
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"PyTorch",
"Transformers",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #PyTorch #Transformers #ru #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
Model based on ruGPT-3 for generating songs.
Tuned on lyrics collected from genius.
Examples of used artists:
* Oxxxymiron
* Моргенштерн
* ЛСП
* Гражданская оборона
* Король и Шут
* etc
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #PyTorch #Transformers #ru #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7523
- Matthews Correlation: 0.5259
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
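As an illustration only, and not part of the original card, these hyperparameters correspond roughly to the following `TrainingArguments`; the output directory and evaluation strategy are assumptions.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption, matching the per-epoch results table
)
```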
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.533 | 1.0 | 535 | 0.5318 | 0.3887 |
| 0.3562 | 2.0 | 1070 | 0.5145 | 0.5100 |
| 0.2429 | 3.0 | 1605 | 0.6558 | 0.4888 |
| 0.1831 | 4.0 | 2140 | 0.7523 | 0.5259 |
| 0.1352 | 5.0 | 2675 | 0.8406 | 0.5182 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5258663312307151, "name": "Matthews Correlation"}]}]}]}
|
banri/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on the glue dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7523
* Matthews Correlation: 0.5259
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #dataset-glue #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# Multi-dialect-Arabic-BERT
This is a repository of Multi-dialect Arabic BERT model.
By [Mawdoo3-AI](https://ai.mawdoo3.com/).
<p align="center">
<br>
<img src="https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png" alt="Background reference: http://www.qfi.org/wp-content/uploads/2018/02/Qfi_Infographic_Mother-Language_Final.pdf" width="500"/>
<br>
</p>
### About our Multi-dialect-Arabic-BERT model
Instead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT) and trained it on 10M Arabic tweets from the unlabeled data of [The Nuanced Arabic Dialect Identification (NADI) shared task](https://sites.google.com/view/nadi-shared-task).
### To cite this work
```
@misc{talafha2020multidialect,
title={Multi-Dialect Arabic BERT for Country-Level Dialect Identification},
author={Bashar Talafha and Mohammad Ali and Muhy Eddin Za'ter and Haitham Seelawi and Ibraheem Tuffaha and Mostafa Samir and Wael Farhan and Hussein T. Al-Natsheh},
year={2020},
eprint={2007.05612},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Usage
The model weights can be loaded using the `transformers` library by HuggingFace.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic")
model = AutoModel.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic")
```
Example using `pipeline`:
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="bashar-talafha/multi-dialect-bert-base-arabic",
tokenizer="bashar-talafha/multi-dialect-bert-base-arabic"
)
fill_mask(" سافر الرحالة من مطار [MASK] ")
```
```
[{'sequence': '[CLS] سافر الرحالة من مطار الكويت [SEP]', 'score': 0.08296813815832138, 'token': 3226},
{'sequence': '[CLS] سافر الرحالة من مطار دبي [SEP]', 'score': 0.05123933032155037, 'token': 4747},
{'sequence': '[CLS] سافر الرحالة من مطار مسقط [SEP]', 'score': 0.046838656067848206, 'token': 13205},
{'sequence': '[CLS] سافر الرحالة من مطار القاهرة [SEP]', 'score': 0.03234650194644928, 'token': 4003},
{'sequence': '[CLS] سافر الرحالة من مطار الرياض [SEP]', 'score': 0.02606341242790222, 'token': 2200}]
```
### Repository
Please check the [original repository](https://github.com/mawdoo3/Multi-dialect-Arabic-BERT) for more information.
|
{"language": "ar", "datasets": ["nadi"], "thumbnail": "https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png"}
|
bashar-talafha/multi-dialect-bert-base-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"ar",
"dataset:nadi",
"arxiv:2007.05612",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2007.05612"
] |
[
"ar"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #ar #dataset-nadi #arxiv-2007.05612 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Multi-dialect-Arabic-BERT
This is a repository of Multi-dialect Arabic BERT model.
By Mawdoo3-AI.
<p align="center">
<br>
<img src="URL alt="Background reference: URL width="500"/>
<br>
<p>
### About our Multi-dialect-Arabic-BERT model
Instead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using Arabic-BERT and trained it on 10M arabic tweets from the unlabled data of The Nuanced Arabic Dialect Identification (NADI) shared task.
### To cite this work
### Usage
The model weights can be loaded using 'transformers' library by HuggingFace.
Example using 'pipeline':
### Repository
Please check the original repository for more information.
|
[
"# Multi-dialect-Arabic-BERT\nThis is a repository of Multi-dialect Arabic BERT model.\n\nBy Mawdoo3-AI. \n\n<p align=\"center\">\n <br>\n <img src=\"URL alt=\"Background reference: URL width=\"500\"/>\n <br>\n<p>",
"### About our Multi-dialect-Arabic-BERT model\nInstead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using Arabic-BERT and trained it on 10M arabic tweets from the unlabled data of The Nuanced Arabic Dialect Identification (NADI) shared task.",
"### To cite this work",
"### Usage\nThe model weights can be loaded using 'transformers' library by HuggingFace.\n\n\n\nExample using 'pipeline':",
"### Repository\nPlease check the original repository for more information."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #ar #dataset-nadi #arxiv-2007.05612 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Multi-dialect-Arabic-BERT\nThis is a repository of Multi-dialect Arabic BERT model.\n\nBy Mawdoo3-AI. \n\n<p align=\"center\">\n <br>\n <img src=\"URL alt=\"Background reference: URL width=\"500\"/>\n <br>\n<p>",
"### About our Multi-dialect-Arabic-BERT model\nInstead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using Arabic-BERT and trained it on 10M arabic tweets from the unlabled data of The Nuanced Arabic Dialect Identification (NADI) shared task.",
"### To cite this work",
"### Usage\nThe model weights can be loaded using 'transformers' library by HuggingFace.\n\n\n\nExample using 'pipeline':",
"### Repository\nPlease check the original repository for more information."
] |
text-classification
|
transformers
|
# BatteryBERT-cased for Battery Abstract Classification
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.29,
"Test accuracy": 96.85,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/batterybert-cased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatteryBERT-cased for Battery Abstract Classification
Language model: batterybert-cased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryBERT-cased for Battery Abstract Classification \r\nLanguage model: batterybert-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatteryBERT-cased for Battery Abstract Classification \r\nLanguage model: batterybert-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BatteryBERT-cased for QA
**Language model:** batterybert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 4
base_LM_model = "batterybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.54,
"f1": 89.16,
```
Evaluated on the battery device dataset.
```
"precision": 70.74,
"recall": 84.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/batterybert-cased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BatteryBERT-cased for QA
Language model: batterybert-cased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryBERT-cased for QA \r\nLanguage model: batterybert-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BatteryBERT-cased for QA \r\nLanguage model: batterybert-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
fill-mask
|
transformers
|
# BatteryBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is case-sensitive: it makes a difference between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after being initialized from the [bert-base-cased](https://huggingface.co/bert-base-cased) weights. The paper corpus contains a total of 400,366 battery research papers that were published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 28,996. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following (an illustrative sketch follows the list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
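The sketch below is illustrative only and is not part of the original card: it shows one way to apply the 15% / 80-10-10 masking rule above to a single tokenized sequence, following the logic of the standard masked-language-modeling data collator. Function and variable names are assumptions for the example.
```python
import torch
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("batterydata/batterybert-cased")

def mask_tokens(input_ids: torch.Tensor, mlm_probability: float = 0.15):
    """Apply the 15% / 80-10-10 masking rule to a 1-D tensor of token ids."""
    labels = input_ids.clone()

    # Select 15% of the non-special tokens as prediction targets.
    probability_matrix = torch.full(labels.shape, mlm_probability)
    special_tokens = torch.tensor(
        tokenizer.get_special_tokens_mask(labels.tolist(), already_has_special_tokens=True),
        dtype=torch.bool,
    )
    probability_matrix.masked_fill_(special_tokens, value=0.0)
    masked_indices = torch.bernoulli(probability_matrix).bool()
    labels[~masked_indices] = -100  # loss is only computed on masked positions

    # 80% of the selected tokens become [MASK].
    replaced = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked_indices
    input_ids[replaced] = tokenizer.mask_token_id

    # 10% become a random token; the remaining 10% are left unchanged.
    randomized = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked_indices & ~replaced
    input_ids[randomized] = torch.randint(len(tokenizer), labels.shape)[randomized]

    return input_ids, labels
```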
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
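For illustration only, and not part of the original card, the stated optimizer settings map roughly onto the following PyTorch setup; AdamW is an assumption for "Adam with weight decay", and the warmup/decay schedule uses the transformers helper.
```python
import torch
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

# Starting point per the card: the bert-base-cased weights.
model = BertForMaskedLM.from_pretrained("bert-base-cased")

optimizer = torch.optim.AdamW(
    model.parameters(), lr=2e-5, betas=(0.9, 0.999), weight_decay=0.01
)
total_steps = 1_000_000
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=total_steps
)
```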
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batterybert-cased')
>>> unmasker("Hello I'm a [MASK] model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-cased')
model = BertModel.from_pretrained('batterydata/batterybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-cased')
model = TFBertModel.from_pretrained('batterydata/batterybert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 0.9609.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]}
|
batterydata/batterybert-cased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatteryBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the bert-base-cased weights. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it makes a difference between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the bert-base-cased weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the bert-base-cased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 28,996. The inputs of the model are
then of the form:
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Evaluation results
Final loss: 0.9609.
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryBERT-uncased model\r\n\r\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the bert-base-cased weights. It was introduced in\r\nthis paper and first released in\r\nthis repository. This model is case-sensitive: it makes a difference between english and English.",
"## Model description\r\n\r\nBatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the bert-base-cased weights. This means\r\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\r\npublicly available data) with an automatic process to generate inputs and labels from those texts. \r\n\r\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\r\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\r\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\r\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\r\nlearn a bidirectional representation of the sentence.\r\n\r\nThis way, the model learns an inner representation of the English language that can then be used to extract features\r\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\r\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\r\n\r\nThe BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the bert-base-cased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\r\n\r\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 28,996. The inputs of the model are\r\nthen of the form:\r\n\r\n\r\n\r\nThe details of the masking procedure for each sentence are the following:\r\n- 15% of the tokens are masked.\r\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\r\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\r\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\r\n\r\n\r\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\r\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\r\n\r\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\r\nSee the model hub to look for fine-tuned versions on a task that\r\ninterests you.\r\n\r\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\r\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\r\ngeneration you should look at model like GPT2.",
"### How to use\r\n\r\nYou can use this model directly with a pipeline for masked language modeling:\r\n\r\n\r\n\r\nHere is how to use this model to get the features of a given text in PyTorch:\r\n\r\n\r\n\r\nand in TensorFlow:",
"## Evaluation results\r\n\r\nFinal loss: 0.9609.",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatteryBERT-uncased model\r\n\r\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the bert-base-cased weights. It was introduced in\r\nthis paper and first released in\r\nthis repository. This model is case-sensitive: it makes a difference between english and English.",
"## Model description\r\n\r\nBatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the bert-base-cased weights. This means\r\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\r\npublicly available data) with an automatic process to generate inputs and labels from those texts. \r\n\r\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\r\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\r\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\r\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\r\nlearn a bidirectional representation of the sentence.\r\n\r\nThis way, the model learns an inner representation of the English language that can then be used to extract features\r\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\r\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\r\n\r\nThe BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the bert-base-cased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\r\n\r\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 28,996. The inputs of the model are\r\nthen of the form:\r\n\r\n\r\n\r\nThe details of the masking procedure for each sentence are the following:\r\n- 15% of the tokens are masked.\r\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\r\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\r\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\r\n\r\n\r\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\r\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\r\n\r\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\r\nSee the model hub to look for fine-tuned versions on a task that\r\ninterests you.\r\n\r\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\r\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\r\ngeneration you should look at model like GPT2.",
"### How to use\r\n\r\nYou can use this model directly with a pipeline for masked language modeling:\r\n\r\n\r\n\r\nHere is how to use this model to get the features of a given text in PyTorch:\r\n\r\n\r\n\r\nand in TensorFlow:",
"## Evaluation results\r\n\r\nFinal loss: 0.9609.",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BatteryBERT-uncased for Battery Abstract Classification
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batterybert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.10,
"Test accuracy": 96.94,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/batterybert-uncased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatteryBERT-uncased for Battery Abstract Classification
Language model: batterybert-uncased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryBERT-uncased for Battery Abstract Classification \r\nLanguage model: batterybert-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatteryBERT-uncased for Battery Abstract Classification \r\nLanguage model: batterybert-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BatteryBERT-uncased for QA
**Language model:** batterybert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batterybert-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.08,
"f1": 88.41,
```
Evaluated on the battery device dataset.
```
"precision": 68.27,
"recall": 80.88,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batterybert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/batterybert-uncased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BatteryBERT-uncased for QA
Language model: batterybert-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryBERT-uncased for QA \r\nLanguage model: batterybert-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BatteryBERT-uncased for QA \r\nLanguage model: batterybert-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
fill-mask
|
transformers
|
# BatteryBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference
between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after being initialized from the [bert-base-uncased](https://huggingface.co/bert-base-uncased) weights. The paper corpus contains a total of 400,366 battery research papers that were published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batterybert-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
```
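A battery-flavoured prompt works the same way (shown for illustration only; the model's predictions are not reproduced here):
```python
>>> unmasker("The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of [MASK] in linear and cyclic carbonates.")
```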
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-uncased')
model = BertModel.from_pretrained('batterydata/batterybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batterybert-uncased')
model = TFBertModel.from_pretrained('batterydata/batterybert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.0317.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]}
|
batterydata/batterybert-uncased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatteryBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the bert-base-uncased weights. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
## Model description
BatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the bert-base-uncased weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the bert-base-uncased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are
then of the form:
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Evaluation results
Final loss: 1.0317.
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryBERT-uncased model\n\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the bert-base-uncased weights. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.",
"## Model description\n\nBatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the bert-base-uncased weights. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\nlearn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\n\nThe BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the bert-base-uncased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are\nthen of the form:\n\n\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Evaluation results\n\nFinal loss: 1.0317.",
"## Authors\nShu Huang: 'sh2009 [at] URL'\n\nJacqueline Cole: 'jmc61 [at] URL'\n\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatteryBERT-uncased model\n\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the bert-base-uncased weights. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.",
"## Model description\n\nBatteryBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the bert-base-uncased weights. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\nlearn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\n\nThe BatteryBERT model was pretrained on the full text of battery papers only, after initialized from the bert-base-uncased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,522. The inputs of the model are\nthen of the form:\n\n\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Evaluation results\n\nFinal loss: 1.0317.",
"## Authors\nShu Huang: 'sh2009 [at] URL'\n\nJacqueline Cole: 'jmc61 [at] URL'\n\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BatteryOnlyBERT-cased for Battery Abstract Classification
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryonlybert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.33,
"Test accuracy": 97.34,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'  # a single abstract sentence passed as a plain string
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
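The pipeline returns a list of dictionaries with `label` and `score` fields; a minimal way to inspect the prediction (the concrete label names come from the fine-tuned config and are not listed on this card):
```python
for pred in res:
    print(pred['label'], round(pred['score'], 4))
```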
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/batteryonlybert-cased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatteryOnlyBERT-cased for Battery Abstract Classification
Language model: batteryonlybert-cased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryOnlyBERT-cased for Battery Abstract Classification \r\nLanguage model: batteryonlybert-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatteryOnlyBERT-cased for Battery Abstract Classification \r\nLanguage model: batteryonlybert-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BatteryOnlyBERT-cased for QA
**Language model:** batteryonlybert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 3
base_LM_model = "batteryonlybert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride = 128
max_query_length = 64
```
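For context, `max_seq_len` and `doc_stride` correspond to the standard sliding-window tokenization of long contexts in extractive QA; the sketch below is illustrative (the question and context strings are placeholders), not the training code itself:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("batterydata/batteryonlybert-cased-squad-v1")
encoded = tokenizer(
    "What is the electrolyte?",                                             # question
    "The typical non-aqueous electrolyte for commercial Li-ion cells ...",  # (possibly long) context
    max_length=386,            # max_seq_len
    stride=128,                # doc_stride: token overlap between consecutive windows
    truncation="only_second",  # truncate only the context, never the question
    return_overflowing_tokens=True,
)
print(len(encoded["input_ids"]))  # number of overlapping windows produced
```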
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.61,
"f1": 87.30,
```
Evaluated on the battery device dataset.
```
"precision": 64.28,
"recall": 82.72,
```
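The SQuAD-style exact-match and F1 scores above can be computed with the Hugging Face `evaluate` package; the snippet below only illustrates the expected format (the id and answer are made-up placeholders, not dev-set entries):
```python
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "example-0", "prediction_text": "LiPF6"}]
references = [{"id": "example-0",
               "answers": {"text": ["LiPF6"], "answer_start": [81]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```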
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/batteryonlybert-cased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BatteryOnlyBERT-cased for QA
Language model: batteryonlybert-cased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryOnlyBERT-cased for QA \r\nLanguage model: batteryonlybert-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BatteryOnlyBERT-cased for QA \r\nLanguage model: batteryonlybert-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BatteryOnlyBERT-uncased for Battery Abstract Classification
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 13
base_LM_model = "batteryonlybert-uncased"
learning_rate = 3e-5
```
## Performance
```
"Validation accuracy": 97.18,
"Test accuracy": 97.08,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'  # a single abstract sentence passed as a plain string
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/batteryonlybert-uncased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatteryOnlyBERT-uncased for Battery Abstract Classification
Language model: batteryonlybert-uncased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryOnlyBERT-uncased for Battery Abstract Classification \r\nLanguage model: batteryonlybert-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatteryOnlyBERT-uncased for Battery Abstract Classification \r\nLanguage model: batteryonlybert-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BatteryOnlyBERT-uncased for QA
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 2
base_LM_model = "batteryonlybert-uncased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride = 128
max_query_length = 64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.53,
"f1": 87.22,
```
Evaluated on the battery device dataset.
```
"precision": 67.20,
"recall": 83.82,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/batteryonlybert-uncased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BatteryOnlyBERT-uncased for QA
Language model: batteryonlybert-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatteryOnlyBERT-uncased for QA \r\nLanguage model: batteryonlybert-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BatteryOnlyBERT-uncased for QA \r\nLanguage model: batteryonlybert-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BatterySciBERT-cased for Battery Abstract Classification
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batteryscibert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.06,
"Test accuracy": 97.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'  # a single abstract sentence passed as a plain string
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/batteryscibert-cased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatterySciBERT-cased for Battery Abstract Classification
Language model: batteryscibert-cased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatterySciBERT-cased for Battery Abstract Classification \r\nLanguage model: batteryscibert-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatterySciBERT-cased for Battery Abstract Classification \r\nLanguage model: batteryscibert-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BatterySciBERT-cased for QA
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-cased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride = 128
max_query_length = 64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.66,
"f1": 87.43,
```
Evaluated on the battery device dataset.
```
"precision": 65.09,
"recall": 84.56,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/batteryscibert-cased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BatterySciBERT-cased for QA
Language model: batteryscibert-cased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatterySciBERT-cased for QA \r\nLanguage model: batteryscibert-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BatterySciBERT-cased for QA \r\nLanguage model: batteryscibert-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
fill-mask
|
transformers
|
# BatterySciBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [SciBERT-cased](https://huggingface.co/allenai/scibert_scivocab_cased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is case-sensitive: it makes a difference between english and English.
## Model description
BatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [SciBERT-cased](https://huggingface.co/allenai/scibert_scivocab_cased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatterySciBERT model was pretrained on the full text of battery papers only, after being initialized from the [SciBERT-cased](https://huggingface.co/allenai/scibert_scivocab_cased) weights. The paper corpus contains a total of 400,366 battery research papers published between 2000 and June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [GitHub](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 31,116. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
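As an illustrative check (not part of the original card), the tokenizer produces this sentence-pair format directly:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-cased')
encoded = tokenizer("Sentence A", "Sentence B")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# -> ['[CLS]', <tokens of sentence A>, '[SEP]', <tokens of sentence B>, '[SEP]']
```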
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batteryscibert-cased')
>>> unmasker("Hello I'm a [MASK] model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-cased')
model = BertModel.from_pretrained('batterydata/batteryscibert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-cased')
model = TFBertModel.from_pretrained('batterydata/batteryscibert-cased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.0505.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]}
|
batterydata/batteryscibert-cased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatterySciBERT-cased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the SciBERT-cased weights. It was introduced in
this paper and first released in
this repository. This model is case-sensitive: it makes a difference between english and English.
## Model description
BatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the SciBERT-cased weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatterySciBERT model was pretrained on the full text of battery papers only, after initialized from the SciBERT-cased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.
## Training procedure
### Preprocessing
The texts are tokenized using WordPiece and a vocabulary size of 31,116. The inputs of the model are
then of the form:
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Evaluation results
Final loss: 1.0505.
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatterySciBERT-cased model\n\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the SciBERT-cased weights. It was introduced in\nthis paper and first released in\nthis repository. This model is case-sensitive: it makes a difference between english and English.",
"## Model description\n\nBatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the SciBERT-cased weights. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\nlearn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\n\nThe BatterySciBERT model was pretrained on the full text of battery papers only, after initialized from the SciBERT-cased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\n\nThe texts are tokenized using WordPiece and a vocabulary size of 31,116. The inputs of the model are\nthen of the form:\n\n\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Evaluation results\n\nFinal loss: 1.0505.",
"## Authors\nShu Huang: 'sh2009 [at] URL'\n\nJacqueline Cole: 'jmc61 [at] URL'\n\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatterySciBERT-cased model\n\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the SciBERT-cased weights. It was introduced in\nthis paper and first released in\nthis repository. This model is case-sensitive: it makes a difference between english and English.",
"## Model description\n\nBatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the SciBERT-cased weights. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\nlearn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\n\nThe BatterySciBERT model was pretrained on the full text of battery papers only, after initialized from the SciBERT-cased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\n\nThe texts are tokenized using WordPiece and a vocabulary size of 31,116. The inputs of the model are\nthen of the form:\n\n\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Evaluation results\n\nFinal loss: 1.0505.",
"## Authors\nShu Huang: 'sh2009 [at] URL'\n\nJacqueline Cole: 'jmc61 [at] URL'\n\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BatterySciBERT-uncased for Battery Abstract Classification
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 14
base_LM_model = "batteryscibert-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.12,
"Test accuracy": 97.47,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input_text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'  # a single abstract sentence passed as a plain string
res = nlp(input_text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/batteryscibert-uncased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatterySciBERT-uncased for Battery Abstract Classification
Language model: batteryscibert-uncased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatterySciBERT-uncased for Battery Abstract Classification \r\nLanguage model: batteryscibert-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatterySciBERT-uncased for Battery Abstract Classification \r\nLanguage model: batteryscibert-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BatterySciBERT-uncased for QA
**Language model:** batteryscibert-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "batteryscibert-uncased"
max_seq_len = 386
learning_rate = 2e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 79.81,
"f1": 87.66,
```
Evaluated on the battery device dataset.
```
"precision": 66.65,
"recall": 85.29,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/batteryscibert-uncased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BatterySciBERT-uncased for QA
Language model: batteryscibert-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatterySciBERT-uncased for QA \r\nLanguage model: batteryscibert-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BatterySciBERT-uncased for QA \r\nLanguage model: batteryscibert-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
fill-mask
|
transformers
|
# BatterySciBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the [SciBERT-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) weights. It was introduced in
[this paper](paper_link) and first released in
[this repository](https://github.com/ShuHuang/batterybert). This model is uncased: it does not make a difference
between english and English.
## Model description
BatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the [SciBERT-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatterySciBERT model was pretrained on the full text of battery papers only, after being initialized from the [SciBERT-uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) weights. The paper corpus contains a total of 400,366 battery research papers published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on [Github](https://github.com/ShuHuang/batterybert/blob/main/corpus.txt).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 31,090. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
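This 15% / 80-10-10 masking scheme matches the dynamic masking implemented by `DataCollatorForLanguageModeling` in the `transformers` library, so it can be reproduced on arbitrary text with a few lines. The snippet below is only an illustrative sketch (the example sentence is arbitrary), not the authors' preprocessing code:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("batterydata/batteryscibert-uncased")
# mlm_probability=0.15 selects 15% of the tokens; of those, 80% become [MASK],
# 10% are swapped for a random token and 10% are left unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

encoding = tokenizer("The typical electrolyte is a solution of LiPF6 in carbonates.")
batch = collator([encoding])
print(tokenizer.decode(batch["input_ids"][0]))  # some tokens are now replaced by [MASK]
```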
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
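For reference, the optimizer and schedule described above correspond to a standard AdamW plus linear-warmup configuration. A minimal sketch of an equivalent setup (not the authors' actual training script) could look like this:
```python
from torch.optim import AdamW
from transformers import BertForMaskedLM, get_linear_schedule_with_warmup

model = BertForMaskedLM.from_pretrained("batterydata/batteryscibert-uncased")

# Adam with learning rate 2e-5, betas (0.9, 0.999) and weight decay 0.01, as described above
optimizer = AdamW(model.parameters(), lr=2e-5, betas=(0.9, 0.999), weight_decay=0.01)

# 10,000 warmup steps followed by linear decay over the 1,000,000 total training steps
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```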
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the [model hub](https://huggingface.co/models?filter=batterybert) to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='batterydata/batteryscibert-uncased')
>>> unmasker("Hello I'm a <mask> model.")
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-uncased')
model = BertModel.from_pretrained('batterydata/batteryscibert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('batterydata/batteryscibert-uncased')
model = TFBertModel.from_pretrained('batterydata/batteryscibert-uncased')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
## Evaluation results
Final loss: 1.095.
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["batterypapers"]}
|
batterydata/batteryscibert-uncased
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:batterypapers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BatterySciBERT-uncased model
Pretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the SciBERT-uncased weights. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
## Model description
BatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the SciBERT-uncased weights. This means
it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model
randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict
the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one
after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to
learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Training data
The BatterySciBERT model was pretrained on the full text of battery papers only, after being initialized from the SciBERT-uncased weights. The paper corpus contains a total of 400,366 battery research papers published from 2000 to June 2021 by the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found on Github.
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 31,090. The inputs of the model are
then of the form:
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by '[MASK]'.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.
See the model hub to look for fine-tuned versions on a task that
interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
## Evaluation results
Final loss: 1.095.
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BatterySciBERT-uncased model\n\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the SciBERT-uncased weights. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.",
"## Model description\n\nBatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the SciBERT-uncased weights. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\nlearn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\n\nThe BatterySciBERT model was pretrained on the full text of battery papers only, after initialized from the SciBERT-uncased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 31,090. The inputs of the model are\nthen of the form:\n\n\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Evaluation results\n\nFinal loss: 1.095.",
"## Authors\nShu Huang: 'sh2009 [at] URL'\n\nJacqueline Cole: 'jmc61 [at] URL'\n\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #fill-mask #exbert #en #dataset-batterypapers #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BatterySciBERT-uncased model\n\nPretrained model on a large corpus of battery research papers using a masked language modeling (MLM) objective, starting with the SciBERT-uncased weights. It was introduced in\nthis paper and first released in\nthis repository. This model is uncased: it does not make a difference\nbetween english and English.",
"## Model description\n\nBatterySciBERT is a transformers model pretrained on a large corpus of battery research papers in a self-supervised fashion, starting with the SciBERT-uncased weights. This means\nit was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of\npublicly available data) with an automatic process to generate inputs and labels from those texts. \n\nMore precisely, it was pretrained with the Masked language modeling (MLM) objective. Taking a sentence, the model\nrandomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict\nthe masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one\nafter the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to\nlearn a bidirectional representation of the sentence.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features\nuseful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard\nclassifier using the features produced by the BERT model as inputs.",
"## Training data\n\nThe BatterySciBERT model was pretrained on the full text of battery papers only, after initialized from the SciBERT-uncased weights. The paper corpus contains a total of 400,366 battery research papers that are published from 2000 to June 2021, from the publishers Royal Society of Chemistry (RSC), Elsevier, and Springer. The list of DOIs can be found at Github.",
"## Training procedure",
"### Preprocessing\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 31,090. The inputs of the model are\nthen of the form:\n\n\n\nThe details of the masking procedure for each sentence are the following:\n- 15% of the tokens are masked.\n- In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n- In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 8 NVIDIA DGX A100 GPUs for 1,000,000 steps with a batch size of 256. The sequence length was limited to 512 tokens. The optimizer used is Adam with a learning rate of 2e-5, \\\\(\\beta_{1} = 0.9\\\\) and \\\\(\\beta_{2} = 0.999\\\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.",
"## Intended uses & limitations\n\nYou can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task.\nSee the model hub to look for fine-tuned versions on a task that\ninterests you.\n\nNote that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)\nto make decisions, such as sequence classification, token classification or question answering. For tasks such as text\ngeneration you should look at model like GPT2.",
"### How to use\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\n\nand in TensorFlow:",
"## Evaluation results\n\nFinal loss: 1.095.",
"## Authors\nShu Huang: 'sh2009 [at] URL'\n\nJacqueline Cole: 'jmc61 [at] URL'\n\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BERT-base-cased for Battery Abstract Classification
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 15
base_LM_model = "bert-base-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.84,
"Test accuracy": 96.83,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text_input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text_input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/bert-base-cased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT-base-cased for Battery Abstract Classification
Language model: bert-base-cased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BERT-base-cased for Battery Abstract Classification \r\nLanguage model: bert-base-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT-base-cased for Battery Abstract Classification \r\nLanguage model: bert-base-cased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BERT-base-cased for QA
**Language model:** bert-base-cased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 2
base_LM_model = "bert-base-cased"
max_seq_len = 386
learning_rate = 5e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 81.30,
"f1": 88.58,
```
Evaluated on the battery device dataset.
```
"precision": 67.02,
"recall": 80.15,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-cased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/bert-base-cased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BERT-base-cased for QA
Language model: bert-base-cased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BERT-base-cased for QA \r\nLanguage model: bert-base-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BERT-base-cased for QA \r\nLanguage model: bert-base-cased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
text-classification
|
transformers
|
# BERT-base-uncased for Battery Abstract Classification
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 13
base_LM_model = "bert-base-uncased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 96.79,
"Test accuracy": 96.29,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text_input = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text_input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "Text Classification", "datasets": ["batterydata/paper-abstracts"], "metrics": "glue"}
|
batterydata/bert-base-uncased-abstract
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"Text Classification",
"en",
"dataset:batterydata/paper-abstracts",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# BERT-base-uncased for Battery Abstract Classification
Language model: bert-base-uncased
Language: English
Downstream-task: Text Classification
Training data: training\_data.csv
Eval data: val\_data.csv
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BERT-base-uncased for Battery Abstract Classification \r\nLanguage model: bert-base-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #Text Classification #en #dataset-batterydata/paper-abstracts #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# BERT-base-uncased for Battery Abstract Classification \r\nLanguage model: bert-base-uncased\r\nLanguage: English \r\nDownstream-task: Text Classification\r\nTraining data: training\\_data.csv\r\nEval data: val\\_data.csv\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
question-answering
|
transformers
|
# BERT-base-uncased for QA
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD v1
**Eval data:** SQuAD v1
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride=128
max_query_length=64
```
## Performance
Evaluated on the SQuAD v1.0 dev set.
```
"exact": 80.93,
"f1": 88.20,
```
Evaluated on the battery device dataset.
```
"precision": 62.19,
"recall": 75.00,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
{"language": "en", "license": "apache-2.0", "tags": "question answering", "datasets": ["squad", "batterydata/battery-device-data-qa"], "metrics": "squad"}
|
batterydata/bert-base-uncased-squad-v1
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"question answering",
"en",
"dataset:squad",
"dataset:batterydata/battery-device-data-qa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us
|
# BERT-base-uncased for QA
Language model: bert-base-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: See example
Infrastructure: 8x DGX A100
## Hyperparameters
## Performance
Evaluated on the SQuAD v1.0 dev set.
Evaluated on the battery device dataset.
## Usage
### In Transformers
## Authors
Shu Huang: 'sh2009 [at] URL'
Jacqueline Cole: 'jmc61 [at] URL'
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement
|
[
"# BERT-base-cased for QA \r\nLanguage model: bert-base-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #question answering #en #dataset-squad #dataset-batterydata/battery-device-data-qa #license-apache-2.0 #endpoints_compatible #region-us \n",
"# BERT-base-cased for QA \r\nLanguage model: bert-base-uncased\r\nLanguage: English \r\nDownstream-task: Extractive QA \r\nTraining data: SQuAD v1\r\nEval data: SQuAD v1\r\nCode: See example \r\nInfrastructure: 8x DGX A100",
"## Hyperparameters",
"## Performance\r\nEvaluated on the SQuAD v1.0 dev set.\r\n\r\nEvaluated on the battery device dataset.",
"## Usage",
"### In Transformers",
"## Authors\r\nShu Huang: 'sh2009 [at] URL'\r\n\r\nJacqueline Cole: 'jmc61 [at] URL'\r\n\r\nBatteryBERT: A Pre-trained Language Model for Battery Database Enhancement"
] |
fill-mask
|
transformers
|
# ALBERT-Mongolian
[pretraining repo link](https://github.com/bayartsogt-ya/albert-mongolian)
## Model description
Here we provide a pretrained ALBERT model and a trained SentencePiece model for Mongolian text. The training data is the Mongolian Wikipedia corpus from Wikipedia Downloads and the Mongolian News corpus.
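As a quick check, the checkpoint can be loaded with the `transformers` fill-mask pipeline. This is only a minimal usage sketch; the example sentence and mask position are illustrative:
```python
from transformers import pipeline

# load the pretrained ALBERT-Mongolian checkpoint for masked-token prediction
unmasker = pipeline("fill-mask", model="bayartsogt/albert-mongolian")

# predict the masked token in a short Mongolian sentence
print(unmasker("Би Монгол улсын нийслэл [MASK] хотод амьдардаг."))
```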
## Evaluation Result:
```
loss = 1.7478163
masked_lm_accuracy = 0.6838185
masked_lm_loss = 1.6687671
sentence_order_accuracy = 0.998125
sentence_order_loss = 0.007942731
```
## Fine-tuning Result on Eduge Dataset:
```
precision recall f1-score support
байгал орчин 0.85 0.83 0.84 999
боловсрол 0.80 0.80 0.80 873
спорт 0.98 0.98 0.98 2736
технологи 0.88 0.93 0.91 1102
улс төр 0.92 0.85 0.89 2647
урлаг соёл 0.93 0.94 0.94 1457
хууль 0.89 0.87 0.88 1651
эдийн засаг 0.83 0.88 0.86 2509
эрүүл мэнд 0.89 0.92 0.90 1159
accuracy 0.90 15133
macro avg 0.89 0.89 0.89 15133
weighted avg 0.90 0.90 0.90 15133
```
## Reference
1. [ALBERT - official repo](https://github.com/google-research/albert)
2. [WikiExtractor](https://github.com/attardi/wikiextractor)
3. [Mongolian BERT](https://github.com/tugstugi/mongolian-bert)
4. [ALBERT - Japanese](https://github.com/alinear-corp/albert-japanese)
5. [Mongolian Text Classification](https://github.com/sharavsambuu/mongolian-text-classification)
6. [You's paper](https://arxiv.org/abs/1904.00962)
## Citation
```
@misc{albert-mongolian,
author = {Bayartsogt Yadamsuren},
title = {ALBERT Pretrained Model on Mongolian Datasets},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/bayartsogt-ya/albert-mongolian/}}
}
```
## For More Information
Please contact [email protected]
|
{"language": "mn"}
|
bayartsogt/albert-mongolian
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"mn",
"arxiv:1904.00962",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.00962"
] |
[
"mn"
] |
TAGS
#transformers #pytorch #tf #safetensors #albert #fill-mask #mn #arxiv-1904.00962 #autotrain_compatible #endpoints_compatible #region-us
|
# ALBERT-Mongolian
pretraining repo link
## Model description
Here we provide a pretrained ALBERT model and a trained SentencePiece model for Mongolian text. The training data is the Mongolian Wikipedia corpus from Wikipedia Downloads and the Mongolian News corpus.
## Evaluation Result:
## Fine-tuning Result on Eduge Dataset:
## Reference
1. ALBERT - official repo
2. WikiExtractor
3. Mongolian BERT
4. ALBERT - Japanese
5. Mongolian Text Classification
6. You's paper
## For More Information
Please contact by bayartsogtyadamsuren@URL
|
[
"# ALBERT-Mongolian\npretraining repo link",
"## Model description\nHere we provide pretrained ALBERT model and trained SentencePiece model for Mongolia text. Training data is the Mongolian wikipedia corpus from Wikipedia Downloads and Mongolian News corpus.",
"## Evaluation Result:",
"## Fine-tuning Result on Eduge Dataset:",
"## Reference\n1. ALBERT - official repo\n2. WikiExtrator\n3. Mongolian BERT\n4. ALBERT - Japanese\n5. Mongolian Text Classification\n6. You's paper",
"## For More Information\nPlease contact by bayartsogtyadamsuren@URL"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #albert #fill-mask #mn #arxiv-1904.00962 #autotrain_compatible #endpoints_compatible #region-us \n",
"# ALBERT-Mongolian\npretraining repo link",
"## Model description\nHere we provide pretrained ALBERT model and trained SentencePiece model for Mongolia text. Training data is the Mongolian wikipedia corpus from Wikipedia Downloads and Mongolian News corpus.",
"## Evaluation Result:",
"## Fine-tuning Result on Eduge Dataset:",
"## Reference\n1. ALBERT - official repo\n2. WikiExtrator\n3. Mongolian BERT\n4. ALBERT - Japanese\n5. Mongolian Text Classification\n6. You's paper",
"## For More Information\nPlease contact by bayartsogtyadamsuren@URL"
] |
null | null |
|fold|accuracy|
|-|-|
| fold 0 | 0.974197247706422 |
| fold 1 | 0.9627293577981652 |
| fold 2 | 0.9724770642201835 |
| fold 3 | 0.9696100917431193 |
| fold 4 | 0.9684633027522935 |
| OOF Acc | 0.9694954128440367 |
|
{}
|
bayartsogt/mlub-bert-base-uncased-tr5meaning
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
[] |
[
"TAGS\n#region-us \n"
] |
|
null | null |
|fold|accuracy|
|-|-|
| fold 0 | 0.9730504587155964 |
| fold 1 | 0.9690366972477065 |
| fold 2 | 0.970756880733945 |
| fold 3 | 0.9684633027522935 |
| fold 4 | 0.9719036697247706 |
| OOF Acc | 0.9706422018348624 |
|
{}
|
bayartsogt/mlub-bert-large-cased-tr5do30ep25s42
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
[] |
[
"TAGS\n#region-us \n"
] |
|
null | null |
|fold|accuracy|
|-|-|
| fold 0 | 0.9753440366972477 |
| fold 1 | 0.9678899082568807 |
| fold 2 | 0.9747706422018348 |
| fold 3 | 0.9690366972477065 |
| fold 4 | 0.9759174311926605 |
| OOF Acc | 0.9725917431192661 |
|
{}
|
bayartsogt/mlub-bert-large-uncased-tr5do20ep25s42
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
[] |
[
"TAGS\n#region-us \n"
] |
|
null | null |
|fold|accuracy|
|-|-|
| fold 0 | 0.974197247706422 |
| fold 1 | 0.9678899082568807 |
| fold 2 | 0.9724770642201835 |
| fold 3 | 0.9701834862385321 |
| fold 4 | 0.9736238532110092 |
| OOF Acc | 0.9716743119266055 |
```
synset_word
ав 1.000000
ам 0.931507
баг 0.980000
байр 0.943548
бараа 0.964789
гар 0.950210
гол 0.938731
гүн 0.912088
зах 0.946667
зуу 0.995798
зүрх 0.918367
мөнгө 0.973333
нуруу 0.968750
нүд 1.000000
нүүр 0.987805
салбар 0.963636
сар 0.996627
сум 0.816667
тэрэг 0.822581
түүх 0.980237
төр 0.998428
хий 0.993077
хураа 0.858268
хэлбэр 0.727273
хөндий 1.000000
шат 1.000000
эм 1.000000
эрүүл 1.000000
dtype: float64
```
|
{}
|
bayartsogt/mlub-bert-large-uncased-tr5do30ep25
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
[] |
[
"TAGS\n#region-us \n"
] |
|
fill-mask
|
transformers
|
# StructBERT: Un-Official Copy
Official Repository Link: https://github.com/alibaba/AliceMind/tree/main/StructBERT
**Claimer**
* This model card is not produced by [AliceMind Team](https://github.com/alibaba/AliceMind/)
## Reproduce HFHub models:
Download model/tokenizer vocab
```bash
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/large_bert_config.json && mv large_bert_config.json config.json
wget https://raw.githubusercontent.com/alibaba/AliceMind/main/StructBERT/config/vocab.txt
wget https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model && mv en_model pytorch_model.bin
```
```python
from transformers import AutoConfig, AutoModelForMaskedLM, AutoTokenizer
config = AutoConfig.from_pretrained("./config.json")
model = AutoModelForMaskedLM.from_pretrained(".", config=config)
tokenizer = AutoTokenizer.from_pretrained(".", config=config)
model.push_to_hub("structbert-large")
tokenizer.push_to_hub("structbert-large")
```
[https://arxiv.org/abs/1908.04577](https://arxiv.org/abs/1908.04577)
# StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
## Introduction
We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training.
Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential
order of words and sentences, which leverage language structures at the word and sentence levels,
respectively.
## Pre-trained models
|Model | Description | #params | Download |
|------------------------|-------------------------------------------|------|------|
|structbert.en.large | StructBERT using the BERT-large architecture | 340M | [structbert.en.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/en_model) |
|structroberta.en.large | StructRoBERTa continued training from RoBERTa | 355M | Coming soon |
|structbert.ch.large | Chinese StructBERT; BERT-large architecture | 330M | [structbert.ch.large](https://alice-open.oss-cn-zhangjiakou.aliyuncs.com/StructBERT/ch_model) |
## Results
The results of GLUE & CLUE tasks can be reproduced using the hyperparameters listed in the following "Example usage" section.
#### structbert.en.large
[GLUE benchmark](https://gluebenchmark.com/leaderboard)
|Model| MNLI | QNLIv2 | QQP | SST-2 | MRPC |
|--------------------|-------|-------|-------|-------|-------|
|structbert.en.large |86.86% |93.04% |91.67% |93.23% |86.51% |
#### structbert.ch.large
[CLUE benchmark](https://www.cluebenchmarks.com/)
|Model | CMNLI | OCNLI | TNEWS | AFQMC |
|--------------------|-------|-------|-------|-------|
|structbert.ch.large |84.47% |81.28% |68.67% |76.11% |
## Example usage
#### Requirements and Installation
* [PyTorch](https://pytorch.org/) version >= 1.0.1
* Install other libraries via
```
pip install -r requirements.txt
```
* For faster training install NVIDIA's [apex](https://github.com/NVIDIA/apex) library
#### Finetune MNLI
```
python run_classifier_multi_task.py \
--task_name MNLI \
--do_train \
--do_eval \
--do_test \
--amp_type O1 \
--lr_decay_factor 1 \
--dropout 0.1 \
--do_lower_case \
--detach_index -1 \
--core_encoder bert \
--data_dir path_to_glue_data \
--vocab_file config/vocab.txt \
--bert_config_file config/large_bert_config.json \
--init_checkpoint path_to_pretrained_model \
--max_seq_length 128 \
--train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--fast_train \
--gradient_accumulation_steps 1 \
--output_dir path_to_output_dir
```
## Citation
If you use our work, please cite:
```
@article{wang2019structbert,
title={Structbert: Incorporating language structures into pre-training for deep language understanding},
author={Wang, Wei and Bi, Bin and Yan, Ming and Wu, Chen and Bao, Zuyi and Xia, Jiangnan and Peng, Liwei and Si, Luo},
journal={arXiv preprint arXiv:1908.04577},
year={2019}
}
```
|
{}
|
bayartsogt/structbert-large
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:1908.04577",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1908.04577"
] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #arxiv-1908.04577 #autotrain_compatible #endpoints_compatible #region-us
|
StructBERT: Un-Official Copy
============================
Official Repository Link: URL
Claimer
* This model card is not produced by AliceMind Team
Reproduce HFHub models:
-----------------------
Download model/tokenizer vocab
URL
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
===============================================================================================
Introduction
------------
We extend BERT to a new model, StructBERT, by incorporating language structures into pre-training.
Specifically, we pre-train StructBERT with two auxiliary tasks to make the most of the sequential
order of words and sentences, which leverage language structures at the word and sentence levels,
respectively.
Pre-trained models
------------------
Results
-------
The results of GLUE & CLUE tasks can be reproduced using the hyperparameters listed in the following "Example usage" section.
#### URL
GLUE benchmark
#### URL
CLUE benchmark
Example usage
-------------
#### Requirements and Installation
* PyTorch version >= 1.0.1
* Install other libraries via
* For faster training install NVIDIA's apex library
#### Finetune MNLI
If you use our work, please cite:
|
[
"#### URL\n\n\nGLUE benchmark",
"#### URL\n\n\nCLUE benchmark\n\n\n\nExample usage\n-------------",
"#### Requirements and Installation\n\n\n* PyTorch version >= 1.0.1\n* Install other libraries via\n* For faster training install NVIDIA's apex library",
"#### Finetune MNLI\n\n\nIf you use our work, please cite:"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #arxiv-1908.04577 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### URL\n\n\nGLUE benchmark",
"#### URL\n\n\nCLUE benchmark\n\n\n\nExample usage\n-------------",
"#### Requirements and Installation\n\n\n* PyTorch version >= 1.0.1\n* Install other libraries via\n* For faster training install NVIDIA's apex library",
"#### Finetune MNLI\n\n\nIf you use our work, please cite:"
] |
text-to-speech
|
fairseq
|
# tts_transformer-mn-mbspeech
[Transformer](https://arxiv.org/abs/1809.08895) text-to-speech model from fairseq S^2 ([paper](https://arxiv.org/abs/2109.06912)/[code](https://github.com/pytorch/fairseq/tree/main/examples/speech_synthesis)):
- Mongolian
- Single-speaker male voice
- Trained on [MBSpeech](https://github.com/tugstugi/mongolian-nlp/blob/master/datasets/MBSpeech-1.0-csv.zip)
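A minimal usage sketch via the generic fairseq S^2 hub interface follows; the vocoder choice and other overrides are assumptions and may need adjusting for this checkpoint:
```python
from fairseq.checkpoint_utils import load_model_ensemble_and_task_from_hf_hub
from fairseq.models.text_to_speech.hub_interface import TTSHubInterface

# load the checkpoint, task config and (assumed) HiFi-GAN vocoder from the Hub
models, cfg, task = load_model_ensemble_and_task_from_hf_hub(
    "bayartsogt/tts_transformer-mn-mbspeech",
    arg_overrides={"vocoder": "hifigan", "fp16": False},
)
model = models[0]
TTSHubInterface.update_cfg_with_data_cfg(cfg, task.data_cfg)
generator = task.build_generator([model], cfg)

# synthesize a short Mongolian sentence ("my name is Bayartsogt")
text = "миний нэрийг баярцогт гэдэг"
sample = TTSHubInterface.get_model_input(task, text)
wav, rate = TTSHubInterface.get_prediction(task, model, generator, sample)
```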
|
{"language": "mn", "library_name": "fairseq", "tags": ["fairseq", "audio", "text-to-speech"], "datasets": ["mbspeech"], "task": "text-to-speech", "widget": [{"text": "\u043c\u0438\u043d\u0438\u0439 \u043d\u044d\u0440\u0438\u0439\u0433 \u0431\u0430\u044f\u0440\u0446\u043e\u0433\u0442 \u0433\u044d\u0434\u044d\u0433", "example_title": "Say my name!"}, {"text": "\u0431\u0438 \u043c\u043e\u043d\u0433\u043e\u043b \u0443\u043b\u0441\u044b\u043d \u043d\u0438\u0439\u0441\u043b\u044d\u043b, \u0443\u043b\u0430\u0430\u043d\u0431\u0430\u0430\u0442\u0430\u0440 \u0445\u043e\u0442\u043e\u0434 \u0430\u043c\u044c\u0434\u0430\u0440\u0434\u0430\u0433", "example_title": "Where I am from?"}, {"text": "\u044d\u043d\u044d\u0445\u04af\u04af \u04e9\u0433\u04e9\u0433\u0434\u043b\u0438\u0439\u0433 \u043d\u044d\u044d\u043b\u0442\u0442\u044d\u0439 \u0431\u043e\u043b\u0433\u043e\u0441\u043e\u043d, \u0431\u043e\u043b\u043e\u0440 \u0441\u043e\u043e\u0444\u0442\u044b\u043d\u0445\u043e\u043d\u0434 \u0431\u0430\u044f\u0440\u043b\u0430\u043b\u0430\u0430", "example_title": "Thank you!"}, {"text": "\u044d\u043d\u044d\u0445\u04af\u04af \u0430\u0436\u043b\u044b\u043d \u0438\u0445\u044d\u043d\u0445 \u0445\u044d\u0441\u0433\u0438\u0439\u0433, \u0442\u04e9\u0433\u04e9\u043b\u0434\u04e9\u0440 \u0430\u0445 \u0445\u0438\u0439\u0441\u044d\u043d \u0431\u043e\u043b\u043d\u043e", "example_title": "Shout out to original creater"}]}
|
bayartsogt/tts_transformer-mn-mbspeech
| null |
[
"fairseq",
"audio",
"text-to-speech",
"mn",
"dataset:mbspeech",
"arxiv:1809.08895",
"arxiv:2109.06912",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1809.08895",
"2109.06912"
] |
[
"mn"
] |
TAGS
#fairseq #audio #text-to-speech #mn #dataset-mbspeech #arxiv-1809.08895 #arxiv-2109.06912 #region-us
|
# tts_transformer-mn-mbspeech
Transformer text-to-speech model from fairseq S^2 (paper/code):
- Mongolian
- Single-speaker male voice
- Trained on MBSpeech
|
[
"# tts_transformer-mn-mbspeech\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Mongolian\n- Single-speaker male voice\n- Trained on MBSpeech"
] |
[
"TAGS\n#fairseq #audio #text-to-speech #mn #dataset-mbspeech #arxiv-1809.08895 #arxiv-2109.06912 #region-us \n",
"# tts_transformer-mn-mbspeech\nTransformer text-to-speech model from fairseq S^2 (paper/code):\n- Mongolian\n- Single-speaker male voice\n- Trained on MBSpeech"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Mongolian-v1
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model.to("cuda")
chars_to_ignore_regex = '[\!\"\'\,\.\«\»\?\-]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run inference over the preprocessed audio.
# We decode the predicted token ids back to text for the WER computation.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.64 %
## Training
The Common Voice `train` dataset was used for training as well as ... and ...
|
{"language": "mn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice mn"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Mongolian V1 by Bayartsogt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mn", "type": "common_voice", "args": "mn"}, "metrics": [{"type": "wer", "value": 34.64, "name": "Test WER"}]}]}]}
|
bayartsogt/wav2vec2-large-xlsr-mongolian-v1
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mn"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mn #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Mongolian-v1
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
Test Result: 34.64 %
## Training
The Common Voice 'train' dataset was used for training as well as ... and ...
|
[
"# Wav2Vec2-Large-XLSR-53-Mongolian-v1\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Mongolian test data of Common Voice.\n\n\n\n\nTest Result: 34.64 %",
"## Training\n\nThe Common Voice 'train' dataset was used for training as well as ... and ..."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mn #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Mongolian-v1\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice.\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Mongolian test data of Common Voice.\n\n\n\n\nTest Result: 34.64 %",
"## Training\n\nThe Common Voice 'train' dataset was used for training as well as ... and ..."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'h\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Evaluating the model.
# We run batched inference and decode the predicted strings
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.82%
## Training
❌ The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
❌ The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
|
{"language": "mn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Mongolian by Bayartsogt", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice mn", "type": "common_voice", "args": "mn"}, "metrics": [{"type": "wer", "value": 45.82, "name": "Test WER"}]}]}]}
|
bayartsogt/wav2vec2-large-xlsr-mongolian
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"mn",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"mn"
] |
TAGS
#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mn #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
Test Result: 45.82%
## Training
The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
The script used for training can be found here # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
|
[
"# Wav2Vec2-Large-XLSR-53-Mongolian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Mongolian test data of Common Voice.\n\n\n\nTest Result: 45.82%",
"## Training\n\n The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.\n\n The script used for training can be found here # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #mn #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Mongolian\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Mongolian using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Mongolian test data of Common Voice.\n\n\n\nTest Result: 45.82%",
"## Training\n\n The Common Voice 'train', 'validation', and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.\n\n The script used for training can be found here # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here."
] |
sentence-similarity
|
sentence-transformers
|
# bchan007/fnctech
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bchan007/fnctech')
embeddings = model.encode(sentences)
print(embeddings)
```
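For the clustering and semantic search use cases mentioned above, the embeddings can be compared with cosine similarity. A minimal sketch using the `util` module from sentence-transformers (the query sentence is a placeholder):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bchan007/fnctech')

# Corpus sentences from the example above; the query is a placeholder
corpus = ["This is an example sentence", "Each sentence is converted"]
query = "An example sentence"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
print(util.cos_sim(query_embedding, corpus_embeddings))
```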
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bchan007/fnctech')
model = AutoModel.from_pretrained('bchan007/fnctech')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bchan007/fnctech)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
|
bchan007/fnctech
| null |
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us
|
# bchan007/fnctech
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
## Evaluation Results
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL
## Full Model Architecture
## Citing & Authors
|
[
"# bchan007/fnctech\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
[
"TAGS\n#sentence-transformers #pytorch #mpnet #feature-extraction #sentence-similarity #transformers #endpoints_compatible #region-us \n",
"# bchan007/fnctech\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.",
"## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:",
"## Usage (HuggingFace Transformers)\nWithout sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.",
"## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL",
"## Full Model Architecture",
"## Citing & Authors"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
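In the meantime, a minimal summarization sketch with this checkpoint (the input article and generation lengths are placeholders):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a summarization pipeline
summarizer = pipeline("summarization", model="bdwjaya/t5-small-finetuned-xsum")

article = "Placeholder article text to be summarized ..."
print(summarizer(article, max_length=60, min_length=10, do_sample=False))
```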
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["xsum"], "model-index": [{"name": "t5-small-finetuned-xsum", "results": []}]}
|
bdwjaya/t5-small-finetuned-xsum
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-small-finetuned-xsum
This model is a fine-tuned version of t5-small on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
[
"# t5-small-finetuned-xsum\n\nThis model is a fine-tuned version of t5-small on the xsum dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-xsum #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-small-finetuned-xsum\n\nThis model is a fine-tuned version of t5-small on the xsum dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.9.0+cu111\n- Datasets 1.13.3\n- Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
RICK!!!
|
{"tags": ["conversational"]}
|
beatajackowska/DialoGPT-RickBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
RICK!!!
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask
|
transformers
|
# DiLBERT (Disease Language BERT)
The objective of this model was to obtain a specialized disease-related language model, trained **from scratch**. <br>
We created a pre-training corpus starting from **ICD-11** entities and enriched it with documents from **PubMed** and **Wikipedia** related to the same entities. <br>
Results of finetuning show that DiLBERT leads to comparable or higher accuracy scores on various classification tasks compared with other general-purpose or in-domain models (e.g., BioClinicalBERT, RoBERTa, XLNet).
Model released with the paper "**DiLBERT: Cheap Embeddings for Disease Related Medical NLP**". <br>
To summarize the practical implications of our work: we pre-trained and fine-tuned a domain-specific BERT model on a small corpus, with comparable or better performance than state-of-the-art models.
This approach may also simplify the development of models for languages other than English, due to the small amount of data needed for training.
### Composition of the pretraining corpus
| Source | Documents | Words |
|---|---:|---:|
| ICD-11 descriptions | 34,676 | 1.0 million |
| PubMed Title and Abstracts | 852,550 | 184.6 million |
| Wikipedia pages | 37,074 | 6.1 million |
### Main repository
For more details check the main repo https://github.com/KevinRoitero/dilbert
# Usage
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("beatrice-portelli/DiLBERT")
model = AutoModelForMaskedLM.from_pretrained("beatrice-portelli/DiLBERT")
```
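As a quick check, the checkpoint can also be queried through the fill-mask pipeline (a minimal sketch; the example sentence is an assumption, and `[MASK]` is the standard BERT mask token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="beatrice-portelli/DiLBERT")

# Placeholder disease-related sentence with a masked token
print(fill_mask("The patient was diagnosed with type 2 [MASK]."))
```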
# How to cite
```
@article{roitero2021dilbert,
title={{DilBERT}: Cheap Embeddings for Disease Related Medical NLP},
author={Roitero, Kevin and Portelli, Beatrice and Popescu, Mihai Horia and Della Mea, Vincenzo},
journal={IEEE Access},
volume={},
pages={},
year={2021},
publisher={IEEE},
note = {In Press}
}
```
|
{"language": ["en"], "tags": ["medical", "disease", "classification"]}
|
beatrice-portelli/DiLBERT
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"fill-mask",
"medical",
"disease",
"classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #fill-mask #medical #disease #classification #en #autotrain_compatible #endpoints_compatible #region-us
|
DiLBERT (Disease Language BERT)
===============================
The objective of this model was to obtain a specialized disease-related language, trained from scratch.
We created a pre-training corpora starting from ICD-11 entities, and enriched it with documents from PubMed and Wikipedia related to the same entities.
Results of finetuning show that DiLBERT leads to comparable or higher accuracy scores on various classification tasks compared with other general-purpose or in-domain models (e.g., BioClinicalBERT, RoBERTa, XLNet).
Model released with the paper "DiLBERT: Cheap Embeddings for Disease Related Medical NLP".
To summarize the practical implications of our work: we pre-trained and fine-tuned a domain specific BERT model on a small corpora, with comparable or better performance than state-of-the-art models.
### Composition of the pretraining corpus
### Main repository
For more details check the main repo URL
Usage
=====
How to cite
===========
|
[
"### Composition of the pretraining corpus",
"### Main repository\n\n\nFor more details check the main repo URL\n\n\nUsage\n=====\n\n\nHow to cite\n==========="
] |
[
"TAGS\n#transformers #pytorch #tf #bert #fill-mask #medical #disease #classification #en #autotrain_compatible #endpoints_compatible #region-us \n",
"### Composition of the pretraining corpus",
"### Main repository\n\n\nFor more details check the main repo URL\n\n\nUsage\n=====\n\n\nHow to cite\n==========="
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned", "results": []}]}
|
begar/distilgpt2-finetuned
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# distilgpt2-finetuned
This model is a fine-tuned version of distilgpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# distilgpt2-finetuned\n\nThis model is a fine-tuned version of distilgpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# distilgpt2-finetuned\n\nThis model is a fine-tuned version of distilgpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 32\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0276
- Mae: 0.5310
## Model description
More information needed
## Intended uses & limitations
More information needed
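Pending more details, a minimal sketch with the generic text-classification pipeline (the example review is a placeholder; the returned label names depend on the fine-tuning label mapping):
```python
from transformers import pipeline

# Generic text-classification pipeline over the fine-tuned checkpoint
classifier = pipeline("text-classification", model="begar/xlm-roberta-base-finetuned-marc")

# Placeholder product review
print(classifier("The product arrived broken and the seller never replied."))
```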
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1582 | 1.0 | 308 | 1.0625 | 0.5221 |
| 1.0091 | 2.0 | 616 | 1.0276 | 0.5310 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "model-index": [{"name": "xlm-roberta-base-finetuned-marc", "results": []}]}
|
begar/xlm-roberta-base-finetuned-marc
| null |
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
xlm-roberta-base-finetuned-marc
===============================
This model is a fine-tuned version of xlm-roberta-base on the amazon\_reviews\_multi dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0276
* Mae: 0.5310
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #xlm-roberta #text-classification #generated_from_trainer #dataset-amazon_reviews_multi #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
null | null |
from transformers import pipeline
import json
import requests
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
headers = {"Authorization": "Bearer api_hwKbAMoHAzOVDdCxgfpPxMjjcrdKHMakhg"}
def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))
data = query("Can you please let us know more details about your ")
|
{}
|
begimayk/try1
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
from transformers import pipeline
import json
import requests
API_URL = "URL
headers = {"Authorization": "Bearer api_hwKbAMoHAzOVDdCxgfpPxMjjcrdKHMakhg"}
def query(payload):
\tdata = URL(payload)
\tresponse = requests.request("POST", API_URL, headers=headers, data=data)
\treturn URL(URL("utf-8"))
data = query("Can you please let us know more details about your ")
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# DaddyBen DialoGPT Model
|
{"tags": ["conversational"]}
|
benajtil/DialoGPT-small-Daddyben
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# DaddyBen DialoGPT Model
|
[
"# DaddyBen DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# DaddyBen DialoGPT Model"
] |
text-generation
|
transformers
|
# Rick And Morty Scripts DialoGPT Model
|
{"tags": ["conversational"]}
|
benajtil/DialoGPT-small-RickAndMortyScripts
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick And Morty Scripts DialoGPT Model
|
[
"# Rick And Morty Scripts DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick And Morty Scripts DialoGPT Model"
] |
text-generation
|
transformers
|
# GerPT2
German large and small versions of GPT2:
- https://huggingface.co/benjamin/gerpt2
- https://huggingface.co/benjamin/gerpt2-large
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
| | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | __16.08__ | __23.26__ |
| | | |
See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

# max_length: desired output length in tokens (set before running)
output = model.generate(
    # during training an EOS token was used to mark the beginning of each text
    # so it can help to insert it at the start
    torch.tensor(
        [tokenizer.eos_token_id] + tokenizer.encode(prompt)
    ).unsqueeze(0),
    do_sample=True,
    # try setting bad_words_ids=[[0]] to disallow generating an EOS token; without this the model is
    # prone to ending generation early because a significant number of texts from the training corpus
    # is quite short
    bad_words_ids=[[0]],
    max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Training details
GerPT2-large is trained on the entire German data from the [CC-100 Corpus](http://data.statmt.org/cc-100/) and weights were initialized from the [English GPT2 model](https://huggingface.co/gpt2-large).
GerPT2-large was trained with:
- a batch size of 256
- using OneCycle learning rate with a maximum of 5e-3
- with AdamW with a weight decay of 0.01
- for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
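For illustration only, the schedule above corresponds roughly to the following PyTorch setup; this is not the actual training script, and the total step count and single-process wiring are assumptions:
```python
import torch
from transformers import GPT2LMHeadModel

# English GPT2-large weights used as initialization (as described above)
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

# AdamW with weight decay 0.01 and a OneCycle schedule peaking at 5e-3
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3, weight_decay=0.01)

total_steps = 100_000  # assumption: depends on corpus size, batch size 256 and 2 epochs
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=5e-3, total_steps=total_steps)
```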
To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2):
0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. g.:
```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```
This helped a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the `wte_path` to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights. A simplified sketch of this token-mapping idea is shown after the step list below.
3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
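The sketch below illustrates the general idea behind step 2 (a simplified illustration, not the actual `prepare/generate_aligned_wte.py`; the `aligned_vec` lookup for cross-lingually aligned static word vectors and the choice of `k` are assumptions):
```python
import numpy as np

def init_german_wte(english_wte, en_tokens, de_tokens, aligned_vec, k=10):
    """Initialize German token embeddings from English ones via aligned word vectors.

    english_wte: (|V_en|, d) input embedding matrix of the English GPT2
    aligned_vec: maps a token string to its vector in a shared cross-lingual space
    """
    en_static = np.stack([aligned_vec(t) for t in en_tokens])   # (|V_en|, d_static)
    en_static /= np.linalg.norm(en_static, axis=1, keepdims=True)

    german_wte = np.empty((len(de_tokens), english_wte.shape[1]))
    for i, tok in enumerate(de_tokens):
        v = aligned_vec(tok)
        v = v / np.linalg.norm(v)
        sims = en_static @ v                    # cosine similarity to every English token
        nearest = np.argsort(-sims)[:k]         # k most similar English tokens
        weights = sims[nearest] / sims[nearest].sum()
        german_wte[i] = weights @ english_wte[nearest]   # similarity-weighted average
    return german_wte
```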
## License
GerPT2 is licensed under the MIT License.
## Citing
Please cite GerPT2 as follows:
```
@misc{Minixhofer_GerPT2_German_large_2020,
author = {Minixhofer, Benjamin},
doi = {10.5281/zenodo.5509984},
month = {12},
title = {{GerPT2: German large and small versions of GPT2}},
url = {https://github.com/bminixhofer/gerpt2},
year = {2020}
}
```
## Acknowledgements
Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
|
{"language": "de", "license": "mit", "widget": [{"text": "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einh\u00f6rner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."}]}
|
benjamin/gerpt2-large
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GerPT2
======
German large and small versions of GPT2:
* URL
* URL
See the GPT2 model card for considerations on limitations and bias. See the GPT2 documentation for details on GPT2.
Comparison to dbmdz/german-gpt2
-------------------------------
I evaluated both GerPT2-large and the other German GPT2, dbmdz/german-gpt2 on the CC-100 dataset and on the German Wikipedia:
CC-100 (PPL): dbmdz/german-gpt2, Wikipedia (PPL): 49.47
CC-100 (PPL): GerPT2, Wikipedia (PPL): 24.78
CC-100 (PPL): GerPT2-large, Wikipedia (PPL): **16.08**
CC-100 (PPL): , Wikipedia (PPL):
See the script 'URL' in the GerPT2 Github repository for the code.
Usage
-----
Also, two tricks might improve the generated text:
Training details
----------------
GerPT2-large is trained on the entire German data from the CC-100 Corpus and weights were initialized from the English GPT2 model.
GerPT2-large was trained with:
* a batch size of 256
* using OneCycle learning rate with a maximum of 5e-3
* with AdamW with a weight decay of 0.01
* for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
To train GerPT2-large, follow these steps. Scripts are located in the Github repository:
0. Download and unzip training data from URL
1. Train a tokenizer using 'prepare/train\_tokenizer.py'. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with 'prepare/generate\_aligned\_wte.py'. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. g.:
This helps a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the 'wte\_path' to the training script. Credit to this blogpost for the idea of initializing GPT2 from English weights.
3. Tokenize the corpus using 'prepare/tokenize\_text.py'. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script 'URL'! 'URL' shows how this was executed for the full run with config 'configs/tpu\_large.json'.
License
-------
GerPT2 is licensed under the MIT License.
Citing
------
Please cite GerPT2 as follows:
Acknowledgements
----------------
Thanks to Hugging Face for awesome tools and infrastructure.
Huge thanks to Artus Krohn-Grimberghe at LYTiQ for making this possible by sponsoring the resources used for training.
|
[] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# GerPT2
German large and small versions of GPT2:
- https://huggingface.co/benjamin/gerpt2
- https://huggingface.co/benjamin/gerpt2-large
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
| | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | __16.08__ | __23.26__ |
| | | |
See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

# max_length: desired output length in tokens (set before running)
output = model.generate(
    # during training an EOS token was used to mark the beginning of each text
    # so it can help to insert it at the start
    torch.tensor(
        [tokenizer.eos_token_id] + tokenizer.encode(prompt)
    ).unsqueeze(0),
    do_sample=True,
    # try setting bad_words_ids=[[0]] to disallow generating an EOS token; without this the model is
    # prone to ending generation early because a significant number of texts from the training corpus
    # is quite short
    bad_words_ids=[[0]],
    max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Training details
GerPT2-large is trained on the entire German data from the [CC-100 Corpus](http://data.statmt.org/cc-100/) and weights were initialized from the [English GPT2 model](https://huggingface.co/gpt2-large).
GerPT2-large was trained with:
- a batch size of 256
- using OneCycle learning rate with a maximum of 5e-3
- with AdamW with a weight decay of 0.01
- for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2):
0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. g.:
```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```
This helps a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the `wte_path` to the training script. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.
3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
## License
GerPT2 is licensed under the MIT License.
## Citing
Please cite GerPT2 as follows:
```
@misc{Minixhofer_GerPT2_German_large_2020,
author = {Minixhofer, Benjamin},
doi = {10.5281/zenodo.5509984},
month = {12},
title = {{GerPT2: German large and small versions of GPT2}},
url = {https://github.com/bminixhofer/gerpt2},
year = {2020}
}
```
## Acknowledgements
Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training.
|
{"language": "de", "license": "mit", "widget": [{"text": "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einh\u00f6rner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."}]}
|
benjamin/gerpt2
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
GerPT2
======
German large and small versions of GPT2:
* URL
* URL
See the GPT2 model card for considerations on limitations and bias. See the GPT2 documentation for details on GPT2.
Comparison to dbmdz/german-gpt2
-------------------------------
I evaluated both GerPT2-large and the other German GPT2, dbmdz/german-gpt2 on the CC-100 dataset and on the German Wikipedia:
CC-100 (PPL): dbmdz/german-gpt2, Wikipedia (PPL): 49.47
CC-100 (PPL): GerPT2, Wikipedia (PPL): 24.78
CC-100 (PPL): GerPT2-large, Wikipedia (PPL): **16.08**
CC-100 (PPL): , Wikipedia (PPL):
See the script 'URL' in the GerPT2 Github repository for the code.
Usage
-----
Also, two tricks might improve the generated text:
Training details
----------------
GerPT2-large is trained on the entire German data from the CC-100 Corpus and weights were initialized from the English GPT2 model.
GerPT2-large was trained with:
* a batch size of 256
* using OneCycle learning rate with a maximum of 5e-3
* with AdamW with a weight decay of 0.01
* for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
To train GerPT2-large, follow these steps. Scripts are located in the Github repository:
0. Download and unzip training data from URL
1. Train a tokenizer using 'prepare/train\_tokenizer.py'. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with 'prepare/generate\_aligned\_wte.py'. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings. E. g.:
This helps a lot on a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix it can be passed via the 'wte\_path' to the training script. Credit to this blogpost for the idea of initializing GPT2 from English weights.
3. Tokenize the corpus using 'prepare/tokenize\_text.py'. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script 'URL'! 'URL' shows how this was executed for the full run with config 'configs/tpu\_large.json'.
License
-------
GerPT2 is licensed under the MIT License.
Citing
------
Please cite GerPT2 as follows:
Acknowledgements
----------------
Thanks to Hugging Face for awesome tools and infrastructure.
Huge thanks to Artus Krohn-Grimberghe at LYTiQ for making this possible by sponsoring the resources used for training.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #jax #safetensors #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# gpt2-wechsel-chinese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
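The checkpoint loads like any GPT-2 model in `transformers`; a minimal generation sketch (the prompt and sampling settings are illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="benjamin/gpt2-wechsel-chinese")

# Placeholder Chinese prompt; sampling settings are illustrative
print(generator("人工智能的未来", max_length=50, do_sample=True, top_p=0.95))
```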
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "zh", "license": "mit"}
|
benjamin/gpt2-wechsel-chinese
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"zh",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #zh #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-wechsel-chinese
====================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #zh #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
text-generation
|
transformers
|
# gpt2-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
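For context, the perplexity of a causal LM such as `benjamin/gpt2-wechsel-french` can be estimated on held-out text roughly as follows (an illustrative single-chunk sketch; the evaluation text is a placeholder and the paper's exact protocol may differ):
```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("benjamin/gpt2-wechsel-french")
model = AutoModelForCausalLM.from_pretrained("benjamin/gpt2-wechsel-french")
model.eval()

text = "Texte d'évaluation ..."  # placeholder held-out text
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # labels=input_ids makes the model return the mean next-token cross-entropy loss
    loss = model(**enc, labels=enc["input_ids"]).loss

print("Perplexity:", math.exp(loss.item()))
```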
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "fr", "license": "mit"}
|
benjamin/gpt2-wechsel-french
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #fr #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-wechsel-french
===================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #fr #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
text-generation
|
transformers
|
# gpt2-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
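The perplexities above are the published numbers; a rough way to estimate perplexity on your own German text is sketched below (the example sentence is arbitrary, and the original evaluation's handling of long texts may differ):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("benjamin/gpt2-wechsel-german")
model = AutoModelForCausalLM.from_pretrained("benjamin/gpt2-wechsel-german")
model.eval()

text = "Der schnelle braune Fuchs springt über den faulen Hund."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels equal to input_ids makes the model return the mean cross-entropy loss
    outputs = model(**inputs, labels=inputs["input_ids"])

print(f"Perplexity: {torch.exp(outputs.loss).item():.2f}")
```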
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "de", "license": "mit"}
|
benjamin/gpt2-wechsel-german
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-wechsel-german
===================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #de #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
text-generation
|
transformers
|
# gpt2-wechsel-swahili
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "sw", "license": "mit"}
|
benjamin/gpt2-wechsel-swahili
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"sw",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sw"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #sw #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
gpt2-wechsel-swahili
====================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #sw #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
fill-mask
|
transformers
|
# roberta-base-wechsel-chinese
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
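Since this is a masked language model, it can be queried with the fill-mask pipeline. The example below is a minimal sketch; the Chinese sentence is arbitrary, and the tokenizer's own mask token is used rather than hard-coding one:
```python
from transformers import pipeline, AutoTokenizer

model_id = "benjamin/roberta-base-wechsel-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_id)
unmasker = pipeline("fill-mask", model=model_id)

# Build the prompt with the tokenizer's mask token (usually "<mask>" for RoBERTa-style models)
text = f"今天天气很{tokenizer.mask_token}。"
print(unmasker(text))
```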
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "zh", "license": "mit"}
|
benjamin/roberta-base-wechsel-chinese
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"zh",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #zh #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-wechsel-chinese
============================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #zh #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
fill-mask
|
transformers
|
# roberta-base-wechsel-french
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "fr", "license": "mit"}
|
benjamin/roberta-base-wechsel-french
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-wechsel-french
===========================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #fr #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
fill-mask
|
transformers
|
# roberta-base-wechsel-german
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "de", "license": "mit"}
|
benjamin/roberta-base-wechsel-german
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-wechsel-german
===========================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #de #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
fill-mask
|
transformers
|
# roberta-base-wechsel-swahili
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: https://github.com/CPJKU/wechsel
And the paper here: https://aclanthology.org/2022.naacl-main.293/
## Performance
### RoBERTa
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-french` | **82.43** | **90.88** | **86.65** |
| `camembert-base` | 80.88 | 90.26 | 85.57 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-german` | **81.79** | **89.72** | **85.76** |
| `deepset/gbert-base` | 78.64 | 89.46 | 84.05 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-chinese` | **78.32** | 80.55 | **79.44** |
| `bert-base-chinese` | 76.55 | **82.05** | 79.30 |
| Model | NLI Score | NER Score | Avg Score |
|---|---|---|---|
| `roberta-base-wechsel-swahili` | **75.05** | **87.39** | **81.22** |
| `xlm-roberta-base` | 69.18 | 87.37 | 78.28 |
### GPT2
| Model | PPL |
|---|---|
| `gpt2-wechsel-french` | **19.71** |
| `gpt2` (retrained from scratch) | 20.47 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-german` | **26.8** |
| `gpt2` (retrained from scratch) | 27.63 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-chinese` | **51.97** |
| `gpt2` (retrained from scratch) | 52.98 |
| Model | PPL |
|---|---|
| `gpt2-wechsel-swahili` | **10.14** |
| `gpt2` (retrained from scratch) | 10.58 |
See our paper for details.
## Citation
Please cite WECHSEL as
```
@inproceedings{minixhofer-etal-2022-wechsel,
title = "{WECHSEL}: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models",
author = "Minixhofer, Benjamin and
Paischer, Fabian and
Rekabsaz, Navid",
booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jul,
year = "2022",
address = "Seattle, United States",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.naacl-main.293",
pages = "3992--4006",
abstract = "Large pretrained language models (LMs) have become the central building block of many NLP applications. Training these models requires ever more computational resources and most of the existing models are trained on English text only. It is exceedingly expensive to train these models in other languages. To alleviate this problem, we introduce a novel method {--} called WECHSEL {--} to efficiently and effectively transfer pretrained LMs to new languages. WECHSEL can be applied to any model which uses subword-based tokenization and learns an embedding for each subword. The tokenizer of the source model (in English) is replaced with a tokenizer in the target language and token embeddings are initialized such that they are semantically similar to the English tokens by utilizing multilingual static word embeddings covering English and the target language. We use WECHSEL to transfer the English RoBERTa and GPT-2 models to four languages (French, German, Chinese and Swahili). We also study the benefits of our method on very low-resource languages. WECHSEL improves over proposed methods for cross-lingual parameter transfer and outperforms models of comparable size trained from scratch with up to 64x less training effort. Our method makes training large language models for new languages more accessible and less damaging to the environment. We make our code and models publicly available.",
}
```
|
{"language": "sw", "license": "mit"}
|
benjamin/roberta-base-wechsel-swahili
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"sw",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sw"
] |
TAGS
#transformers #pytorch #safetensors #roberta #fill-mask #sw #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
roberta-base-wechsel-swahili
============================
Model trained with WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
See the code here: URL
And the paper here: URL
Performance
-----------
### RoBERTa
### GPT2
See our paper for details.
Please cite WECHSEL as
|
[
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
[
"TAGS\n#transformers #pytorch #safetensors #roberta #fill-mask #sw #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### RoBERTa",
"### GPT2\n\n\n\n\n\n\nSee our paper for details.\n\n\nPlease cite WECHSEL as"
] |
text-generation
|
transformers
|
Still figuring out how to properly write model cards.
WIP.
|
{"language": ["en"], "license": "mit", "tags": ["conversational", "pytorch", "transformers", "gpt2"], "datasets": ["empathetic dialogues"]}
|
benjaminbeilharz/dialoGPT-small-empatheticdialogues-generation
| null |
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"conversational",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tensorboard #gpt2 #text-generation #conversational #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Still figuring out how to properly write model cards.
WIP.
|
[] |
[
"TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #conversational #en #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
# Misato Katsuragi DialoGPT Model
---
|
{"tags": ["conversational"]}
|
benmrtnz27/DialoGPT-small-misato
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Misato Katsuragi DialoGPT Model
---
|
[
"# Misato Katsuragi DialoGPT Model\n---"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Misato Katsuragi DialoGPT Model\n---"
] |
text-generation
|
transformers
|
# GPTCartman
|
{"tags": ["conversational"]}
|
bensuydam/CartmanBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPTCartman
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
fill-mask
|
transformers
|
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")
[{'sequence': '[CLS] the man worked as a carpenter. [SEP]',
'score': 0.09747550636529922,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the man worked as a waiter. [SEP]',
'score': 0.0523831807076931,
'token': 15610,
'token_str': 'waiter'},
{'sequence': '[CLS] the man worked as a barber. [SEP]',
'score': 0.04962705448269844,
'token': 13362,
'token_str': 'barber'},
{'sequence': '[CLS] the man worked as a mechanic. [SEP]',
'score': 0.03788609802722931,
'token': 15893,
'token_str': 'mechanic'},
{'sequence': '[CLS] the man worked as a salesman. [SEP]',
'score': 0.037680890411138535,
'token': 18968,
'token_str': 'salesman'}]
>>> unmasker("The woman worked as a [MASK].")
[{'sequence': '[CLS] the woman worked as a nurse. [SEP]',
'score': 0.21981462836265564,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the woman worked as a waitress. [SEP]',
'score': 0.1597415804862976,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the woman worked as a maid. [SEP]',
'score': 0.1154729500412941,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the woman worked as a prostitute. [SEP]',
'score': 0.037968918681144714,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the woman worked as a cook. [SEP]',
'score': 0.03042375110089779,
'token': 5660,
'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The BERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
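For illustration only, the 80/10/10 corruption rule above can be written out roughly as follows; this is a simplified sketch that ignores special tokens and whole-word masking, and the helper name is made up:
```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Apply a BERT-style masking rule to a list of token ids (simplified sketch)."""
    labels = [-100] * len(token_ids)            # -100 = position ignored by the loss
    corrupted = list(token_ids)
    for i, token_id in enumerate(token_ids):
        if random.random() < mlm_probability:   # select ~15% of the tokens
            labels[i] = token_id                # the model must predict the original token
            r = random.random()
            if r < 0.8:                         # 80%: replace with [MASK]
                corrupted[i] = mask_token_id
            elif r < 0.9:                       # 10%: replace with a random token
                corrupted[i] = random.randrange(vocab_size)
            # remaining 10%: keep the token unchanged
    return corrupted, labels
```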
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
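As a rough modern equivalent of that setup, the optimizer and schedule could be configured as below; this is only a sketch with the hyperparameters from the card, not the original TPU training code:
```python
from torch.optim import AdamW
from transformers import BertForPreTraining, get_linear_schedule_with_warmup

model = BertForPreTraining.from_pretrained("bert-base-uncased")

# Adam with weight decay, lr 1e-4, betas (0.9, 0.999), as described above
optimizer = AdamW(model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)

# 1M steps total, 10k warmup steps, then linear decay of the learning rate
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=10_000, num_training_steps=1_000_000
)
```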
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| | 84.6/83.4 | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=bert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
{"language": "en", "license": "apache-2.0", "tags": ["exbert"], "datasets": ["bookcorpus", "wikipedia"]}
|
benyong/testmodel
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1810.04805"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #rust #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
BERT base model (uncased)
=========================
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
this paper and first released in
this repository. This model is uncased: it does not make a difference
between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by
the Hugging Face team.
Model description
-----------------
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
* Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
Intended uses & limitations
---------------------------
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the model hub to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
Here is how to use this model to get the features of a given text in PyTorch:
and in TensorFlow:
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions:
This bias will also affect all fine-tuned versions of this model.
Training data
-------------
The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038
unpublished books and English Wikipedia (excluding lists, tables and
headers).
Training procedure
------------------
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by '[MASK]'.
* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.
* In the 10% remaining cases, the masked tokens are left as is.
### Pretraining
The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size
of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer
used is Adam with a learning rate of 1e-4, \(\beta\_{1} = 0.9\) and \(\beta\_{2} = 0.999\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
Evaluation results
------------------
When fine-tuned on downstream tasks, this model achieves the following results:
Glue test results:
### BibTeX entry and citation info
<a href="URL
<img width="300px" src="URL
|
[
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #rust #bert #fill-mask #exbert #en #dataset-bookcorpus #dataset-wikipedia #arxiv-1810.04805 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nHere is how to use this model to get the features of a given text in PyTorch:\n\n\nand in TensorFlow:",
"### Limitations and bias\n\n\nEven if the training data used for this model could be characterized as fairly neutral, this model can have biased\npredictions:\n\n\nThis bias will also affect all fine-tuned versions of this model.\n\n\nTraining data\n-------------\n\n\nThe BERT model was pretrained on BookCorpus, a dataset consisting of 11,038\nunpublished books and English Wikipedia (excluding lists, tables and\nheaders).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are\nthen of the form:\n\n\nWith probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in\nthe other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a\nconsecutive span of text usually longer than a single sentence. The only constrain is that the result with the two\n\"sentences\" has a combined length of less than 512 tokens.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by '[MASK]'.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.",
"### Pretraining\n\n\nThe model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size\nof 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer\nused is Adam with a learning rate of 1e-4, \\(\\beta\\_{1} = 0.9\\) and \\(\\beta\\_{2} = 0.999\\), a weight decay of 0.01,\nlearning rate warmup for 10,000 steps and linear decay of the learning rate after.\n\n\nEvaluation results\n------------------\n\n\nWhen fine-tuned on downstream tasks, this model achieves the following results:\n\n\nGlue test results:",
"### BibTeX entry and citation info\n\n\n<a href=\"URL\n<img width=\"300px\" src=\"URL"
] |