pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (listlengths 0-201) | languages (listlengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (listlengths 0-722) | processed_texts (listlengths 1-723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text2text-generation
|
transformers
|
A simple question-generation model trained on the SQuAD 2.0 dataset.
Example use:
```python
from transformers import T5Config, T5ForConditionalGeneration, T5Tokenizer
model_name = "allenai/t5-small-squad2-question-generation"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
def run_model(input_string, **generator_args):
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)
    return output
run_model("shrouds herself in white and walks penitentially disguised as brotherly love through factories and parliaments; offers help, but desires power;")
run_model("He thanked all fellow bloggers and organizations that showed support.")
run_model("Races are held between April and December at the Veliefendi Hippodrome near Bakerky, 15 km (9 miles) west of Istanbul.")
```
which should result in the following:
```
['What is the name of the man who is a brotherly love?']
['What did He thank all fellow bloggers and organizations that showed support?']
['Where is the Veliefendi Hippodrome located?']
```
|
{"language": "en"}
|
allenai/t5-small-squad2-question-generation
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
A simple question-generation model trained on the SQuAD 2.0 dataset.
Example use:
which should result in the following:
|
[] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# Tailor
## Model description
This is a ported version of [Tailor](https://homes.cs.washington.edu/~wtshuang/static/papers/2021-arxiv-tailor.pdf), the general-purpose counterfactual generator.
For the full code release, please refer to [this GitHub page](https://github.com/allenai/tailor).
#### How to use
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
model_path = "allenai/tailor"
generator = pipeline("text2text-generation",
                     model=AutoModelForSeq2SeqLM.from_pretrained(model_path),
                     tokenizer=AutoTokenizer.from_pretrained(model_path),
                     framework="pt", device=0)
prompt_text = "[VERB+active+past: comfort | AGENT+complete: the doctor | PATIENT+partial: athlete | LOCATIVE+partial: in] <extra_id_0> , <extra_id_1> <extra_id_2> <extra_id_3> ."
generator(prompt_text, max_length=200)
```
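To make the control-code format concrete, here is a minimal sketch (the `build_prompt` helper is hypothetical, not part of the Tailor release; the prompt layout is inferred from the example above):
```python
# Hypothetical helper: assemble a Tailor-style prompt from (control, keyword) pairs.
# Assumes the "[ctrl: kw | ...] <extra_id_0> ... ." layout of the example above.
def build_prompt(controls, num_blanks=4):
    header = " | ".join(f"{role}: {kw}" for role, kw in controls)
    blanks = " ".join(f"<extra_id_{i}>" for i in range(num_blanks))
    return f"[{header}] {blanks} ."

prompt = build_prompt([
    ("VERB+active+past", "comfort"),
    ("AGENT+complete", "the doctor"),
    ("PATIENT+partial", "athlete"),
    ("LOCATIVE+partial", "in"),
])
generator(prompt, max_length=200)
```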
### BibTeX entry and citation info
```bibtex
@misc{ross2021tailor,
      title={Tailor: Generating and Perturbing Text with Semantic Controls},
      author={Alexis Ross and Tongshuang Wu and Hao Peng and Matthew E. Peters and Matt Gardner},
      year={2021},
      eprint={2107.07150},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2107.07150},
}
```
|
{"language": "en", "tags": ["controlled generation", "perturbation"], "widget": [{"text": "[VERB+passive+past: break | PATIENT+partial: cup] <extra_id_0> <extra_id_1> <extra_id_2> ."}, {}]}
|
allenai/tailor
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"controlled generation",
"perturbation",
"en",
"arxiv:2107.07150",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2107.07150"
] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #controlled generation #perturbation #en #arxiv-2107.07150 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Tailor
## Model description
This is a ported version of Tailor, the general-purpose counterfactual generator.
For the full code release, please refer to this GitHub page.
#### How to use
### BibTeX entry and citation info
|
[
"# Tailor",
"## Model description\n\nThis is a ported version of Tailor, the general-purpose counterfactual generator.\nFor more code release, please refer to this github page.",
"#### How to use",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #controlled generation #perturbation #en #arxiv-2107.07150 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Tailor",
"## Model description\n\nThis is a ported version of Tailor, the general-purpose counterfactual generator.\nFor more code release, please refer to this github page.",
"#### How to use",
"### BibTeX entry and citation info"
] |
question-answering
|
allennlp
|
A reading comprehension model patterned after the proposed model in Devlin et al., with improvements borrowed from the SQuAD model in the transformers project.
The model implements a reading comprehension model patterned after the model proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018), with improvements borrowed from the SQuAD model in the transformers project. It predicts start and end tokens with a linear layer on top of word-piece embeddings.
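A minimal usage sketch (not from the original card; it assumes an AllenNLP version that can resolve `hf://` URLs via Hugging Face Hub support, and the standard reading-comprehension predictor output keys):
```python
from allennlp.predictors.predictor import Predictor

# Assumption: AllenNLP can resolve hf:// URLs to Hub-hosted model archives.
predictor = Predictor.from_path("hf://allenai/transformer_qa")
result = predictor.predict(
    passage="The Matrix is a 1999 science fiction action film.",
    question="When was The Matrix released?",
)
print(result["best_span_str"])  # expected answer span: "1999"
```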
|
{"language": "en", "tags": ["allennlp", "question-answering"]}
|
allenai/transformer_qa
| null |
[
"allennlp",
"tensorboard",
"question-answering",
"en",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#allennlp #tensorboard #question-answering #en #region-us
|
A reading comprehension model patterned after the proposed model in Devlin et al., with improvements borrowed from the SQuAD model in the transformers project.
The model implements a reading comprehension model patterned after the model proposed in BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Devlin et al., 2018), with improvements borrowed from the SQuAD model in the transformers project. It predicts start and end tokens with a linear layer on top of word-piece embeddings.
|
[] |
[
"TAGS\n#allennlp #tensorboard #question-answering #en #region-us \n"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
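A minimal usage sketch (not part of the card; the `run_model` helper and the lowercased `question \n context` input format follow the conventions documented in the repository above):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "allenai/unifiedqa-v2-t5-11b-1251000"
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def run_model(input_string, **generator_args):
    # UnifiedQA takes a single "question \n context" string as input.
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    return tokenizer.batch_decode(res, skip_special_tokens=True)

print(run_model("which is best conductor? \\n (a) iron (b) feather"))
```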
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-11b-1251000
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-11b-1363200
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-3b-1251000
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-3b-1363200
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-base-1251000
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-base-1363200
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-large-1251000
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-large-1363200
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-small-1251000
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
text2text-generation
|
transformers
|
# Further details: https://github.com/allenai/unifiedqa
|
{"language": "en"}
|
allenai/unifiedqa-v2-t5-small-1363200
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Further details: URL
|
[
"# Further details: URL"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #en #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Further details: URL"
] |
translation
|
transformers
|
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-12-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-12-1 | 26.9 | 25.75
The score is slightly below the score reported in the paper, as the researchers don't use `sacrebleu` and measure the score on tokenized outputs. The `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
      title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
      author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
      year={2020},
      eprint={2006.10369},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
{"language": ["en", "de"], "license": "apache-2.0", "tags": ["translation", "wmt16", "allenai"], "datasets": ["wmt16"], "metrics": ["bleu"]}
|
allenai/wmt16-en-de-12-1
| null |
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt16",
"allenai",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.10369"
] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #fsmt #text2text-generation #translation #wmt16 #allenai #en #de #dataset-wmt16 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
FSMT
====
Model description
-----------------
This is a ported version of fairseq-based wmt16 transformer for en-de.
For more details, please see Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation.
All 3 models are available:
* wmt16-en-de-dist-12-1
* wmt16-en-de-dist-6-1
* wmt16-en-de-12-1
Intended uses & limitations
---------------------------
#### How to use
#### Limitations and bias
Training data
-------------
Pretrained weights were left identical to the original model released by allenai. For more details, please see the paper.
Eval results
------------
Here are the BLEU scores:
model: wmt16-en-de-12-1, fairseq: 26.9, transformers: 25.75
The score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. The 'transformers' score was measured using 'sacrebleu' on detokenized outputs.
The score was calculated using this code:
Data Sources
------------
* training, etc.
* test set
### BibTeX entry and citation info
|
[
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\nmodel: wmt16-en-de-12-1, fairseq: 26.9, transformers: 25.75\n\n\nThe score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. 'transformers' score was measured using 'sacrebleu' on detokenized outputs.\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #fsmt #text2text-generation #translation #wmt16 #allenai #en #de #dataset-wmt16 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\nmodel: wmt16-en-de-12-1, fairseq: 26.9, transformers: 25.75\n\n\nThe score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. 'transformers' score was measured using 'sacrebleu' on detokenized outputs.\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
translation
|
transformers
|
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-dist-12-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-dist-12-1 | 28.3 | 27.52
The score is slightly below the score reported in the paper, as the researchers don't use `sacrebleu` and measure the score on tokenized outputs. The `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-12-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
      title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
      author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
      year={2020},
      eprint={2006.10369},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
{"language": ["en", "de"], "license": "apache-2.0", "tags": ["translation", "wmt16", "allenai"], "datasets": ["wmt16"], "metrics": ["bleu"]}
|
allenai/wmt16-en-de-dist-12-1
| null |
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt16",
"allenai",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.10369"
] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #fsmt #text2text-generation #translation #wmt16 #allenai #en #de #dataset-wmt16 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
FSMT
====
Model description
-----------------
This is a ported version of fairseq-based wmt16 transformer for en-de.
For more details, please see Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation.
All 3 models are available:
* wmt16-en-de-dist-12-1
* wmt16-en-de-dist-6-1
* wmt16-en-de-12-1
Intended uses & limitations
---------------------------
#### How to use
#### Limitations and bias
Training data
-------------
Pretrained weights were left identical to the original model released by allenai. For more details, please see the paper.
Eval results
------------
Here are the BLEU scores:
model: wmt16-en-de-dist-12-1, fairseq: 28.3, transformers: 27.52
The score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. The 'transformers' score was measured using 'sacrebleu' on detokenized outputs.
The score was calculated using this code:
Data Sources
------------
* training, etc.
* test set
### BibTeX entry and citation info
|
[
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\nmodel: wmt16-en-de-dist-12-1, fairseq: 28.3, transformers: 27.52\n\n\nThe score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. 'transformers' score was measured using 'sacrebleu' on detokenized outputs.\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #fsmt #text2text-generation #translation #wmt16 #allenai #en #de #dataset-wmt16 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\nmodel: wmt16-en-de-dist-12-1, fairseq: 28.3, transformers: 27.52\n\n\nThe score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. 'transformers' score was measured using 'sacrebleu' on detokenized outputs.\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
translation
|
transformers
|
# FSMT
## Model description
This is a ported version of fairseq-based [wmt16 transformer](https://github.com/jungokasai/deep-shallow/) for en-de.
For more details, please see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
All 3 models are available:
* [wmt16-en-de-dist-12-1](https://huggingface.co/allenai/wmt16-en-de-dist-12-1)
* [wmt16-en-de-dist-6-1](https://huggingface.co/allenai/wmt16-en-de-dist-6-1)
* [wmt16-en-de-12-1](https://huggingface.co/allenai/wmt16-en-de-12-1)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt16-en-de-dist-6-1"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Maschinelles Lernen ist großartig, nicht wahr?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | fairseq | transformers
-------|---------|----------
wmt16-en-de-dist-6-1 | 27.4 | 27.11
The score is slightly below the score reported in the paper, as the researchers don't use `sacrebleu` and measure the score on tokenized outputs. The `transformers` score was measured using `sacrebleu` on detokenized outputs.
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=en-de
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt16 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt16 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt16-en-de-dist-6-1 $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt16/)
- [test set](http://matrix.statmt.org/test_sets/newstest2016.tgz?1504722372)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
      title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
      author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
      year={2020},
      eprint={2006.10369},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
{"language": ["en", "de"], "license": "apache-2.0", "tags": ["translation", "wmt16", "allenai"], "datasets": ["wmt16"], "metrics": ["bleu"]}
|
allenai/wmt16-en-de-dist-6-1
| null |
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt16",
"allenai",
"en",
"de",
"dataset:wmt16",
"arxiv:2006.10369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.10369"
] |
[
"en",
"de"
] |
TAGS
#transformers #pytorch #fsmt #text2text-generation #translation #wmt16 #allenai #en #de #dataset-wmt16 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
FSMT
====
Model description
-----------------
This is a ported version of fairseq-based wmt16 transformer for en-de.
For more details, please see Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation.
All 3 models are available:
* wmt16-en-de-dist-12-1
* wmt16-en-de-dist-6-1
* wmt16-en-de-12-1
Intended uses & limitations
---------------------------
#### How to use
#### Limitations and bias
Training data
-------------
Pretrained weights were left identical to the original model released by allenai. For more details, please see the paper.
Eval results
------------
Here are the BLEU scores:
model: wmt16-en-de-dist-6-1, fairseq: 27.4, transformers: 27.11
The score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. The 'transformers' score was measured using 'sacrebleu' on detokenized outputs.
The score was calculated using this code:
Data Sources
------------
* training, etc.
* test set
### BibTeX entry and citation info
|
[
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\nmodel: wmt16-en-de-dist-6-1, fairseq: 27.4, transformers: 27.11\n\n\nThe score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. 'transformers' score was measured using 'sacrebleu' on detokenized outputs.\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #fsmt #text2text-generation #translation #wmt16 #allenai #en #de #dataset-wmt16 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\nmodel: wmt16-en-de-dist-6-1, fairseq: 27.4, transformers: 27.11\n\n\nThe score is slightly below the score reported in the paper, as the researchers don't use 'sacrebleu' and measure the score on tokenized outputs. 'transformers' score was measured using 'sacrebleu' on detokenized outputs.\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
translation
|
transformers
|
# FSMT
## Model description
This is a ported version of fairseq-based [wmt19 transformer](https://github.com/jungokasai/deep-shallow/) for de-en.
For more details, please see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
2 models are available:
* [wmt19-de-en-6-6-big](https://huggingface.co/allenai/wmt19-de-en-6-6-big)
* [wmt19-de-en-6-6-base](https://huggingface.co/allenai/wmt19-de-en-6-6-base)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt19-de-en-6-6-base"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Maschinelles Lernen ist großartig, nicht wahr?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Machine learning is great, isn't it?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | transformers
-------|---------
wmt19-de-en-6-6-base | 38.37
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt19-de-en-6-6-base $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt19/)
- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
      title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
      author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
      year={2020},
      eprint={2006.10369},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
{"language": ["de", "en"], "license": "apache-2.0", "tags": ["translation", "wmt19", "allenai"], "datasets": ["wmt19"], "metrics": ["bleu"]}
|
allenai/wmt19-de-en-6-6-base
| null |
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt19",
"allenai",
"de",
"en",
"dataset:wmt19",
"arxiv:2006.10369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.10369"
] |
[
"de",
"en"
] |
TAGS
#transformers #pytorch #fsmt #text2text-generation #translation #wmt19 #allenai #de #en #dataset-wmt19 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
FSMT
====
Model description
-----------------
This is a ported version of fairseq-based wmt19 transformer for de-en.
For more details, please see Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation.
2 models are available:
* wmt19-de-en-6-6-big
* wmt19-de-en-6-6-base
Intended uses & limitations
---------------------------
#### How to use
#### Limitations and bias
Training data
-------------
Pretrained weights were left identical to the original model released by allenai. For more details, please see the paper.
Eval results
------------
Here are the BLEU scores:
The score was calculated using this code:
Data Sources
------------
* training, etc.
* test set
### BibTeX entry and citation info
|
[
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #fsmt #text2text-generation #translation #wmt19 #allenai #de #en #dataset-wmt19 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
translation
|
transformers
|
# FSMT
## Model description
This is a ported version of fairseq-based [wmt19 transformer](https://github.com/jungokasai/deep-shallow/) for de-en.
For more details, please see [Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation](https://arxiv.org/abs/2006.10369).
2 models are available:
* [wmt19-de-en-6-6-big](https://huggingface.co/allenai/wmt19-de-en-6-6-big)
* [wmt19-de-en-6-6-base](https://huggingface.co/allenai/wmt19-de-en-6-6-base)
## Intended uses & limitations
#### How to use
```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer
mname = "allenai/wmt19-de-en-6-6-big"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Maschinelles Lernen ist großartig, nicht wahr?"
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Machine learning is great, isn't it?
```
#### Limitations and bias
## Training data
Pretrained weights were left identical to the original model released by allenai. For more details, please see the [paper](https://arxiv.org/abs/2006.10369).
## Eval results
Here are the BLEU scores:
model | transformers
-------|---------
wmt19-de-en-6-6-big | 39.9
The score was calculated using this code:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
export PAIR=de-en
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
export BS=8
export NUM_BEAMS=5
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref > $DATA_DIR/val.target
echo $PAIR
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py allenai/wmt19-de-en-6-6-big $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation --num_beams $NUM_BEAMS
```
## Data Sources
- [training, etc.](http://www.statmt.org/wmt19/)
- [test set](http://matrix.statmt.org/test_sets/newstest2019.tgz?1556572561)
### BibTeX entry and citation info
```
@misc{kasai2020deep,
      title={Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation},
      author={Jungo Kasai and Nikolaos Pappas and Hao Peng and James Cross and Noah A. Smith},
      year={2020},
      eprint={2006.10369},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
{"language": ["de", "en"], "license": "apache-2.0", "tags": ["translation", "wmt19", "allenai"], "datasets": ["wmt19"], "metrics": ["bleu"]}
|
allenai/wmt19-de-en-6-6-big
| null |
[
"transformers",
"pytorch",
"fsmt",
"text2text-generation",
"translation",
"wmt19",
"allenai",
"de",
"en",
"dataset:wmt19",
"arxiv:2006.10369",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2006.10369"
] |
[
"de",
"en"
] |
TAGS
#transformers #pytorch #fsmt #text2text-generation #translation #wmt19 #allenai #de #en #dataset-wmt19 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
FSMT
====
Model description
-----------------
This is a ported version of fairseq-based wmt19 transformer for de-en.
For more details, please see Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation.
2 models are available:
* wmt19-de-en-6-6-big
* wmt19-de-en-6-6-base
Intended uses & limitations
---------------------------
#### How to use
#### Limitations and bias
Training data
-------------
Pretrained weights were left identical to the original model released by allenai. For more details, please see the paper.
Eval results
------------
Here are the BLEU scores:
The score was calculated using this code:
Data Sources
------------
* training, etc.
* test set
### BibTeX entry and citation info
|
[
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #fsmt #text2text-generation #translation #wmt19 #allenai #de #en #dataset-wmt19 #arxiv-2006.10369 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"#### How to use",
"#### Limitations and bias\n\n\nTraining data\n-------------\n\n\nPretrained weights were left identical to the original model released by allenai. For more details, please, see the paper.\n\n\nEval results\n------------\n\n\nHere are the BLEU scores:\n\n\n\nThe score was calculated using this code:\n\n\nData Sources\n------------\n\n\n* training, etc.\n* test set",
"### BibTeX entry and citation info"
] |
null |
transformers
|
# Model name
Chinese-bert-wwm-electrical-health-records-ner-question-answering-sequence-labeling
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-qasl")
model = AutoModelForTokenClassification.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-qasl")
```
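A hedged inference sketch (not from the card; the sentence below is a made-up example, and the label names come from whatever `id2label` mapping ships with the checkpoint config):
```python
import torch

inputs = tokenizer("病人目前無發燒或咳嗽。", return_tensors="pt")  # made-up EHR sentence
with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[i]) for tok, i in zip(tokens, pred_ids)])
```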
|
{"language": "zh-tw"}
|
allenyummy/chinese-bert-wwm-ehr-ner-qasl
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh-tw"
] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
# Model name
Chinese-bert-wwm-electrical-health-records-ner-question-answering-sequence-labeling
#### How to use
|
[
"# Model name\nChinese-bert-wwm-electrical-health-records-ner-question-answering-sequence-labeling",
"#### How to use"
] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n",
"# Model name\nChinese-bert-wwm-electrical-health-records-ner-question-answering-sequence-labeling",
"#### How to use"
] |
null |
transformers
|
# Model name
Chinese-bert-wwm-electrical-health-records-ner-sequence-labeling
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-sl")
model = AutoModelForTokenClassification.from_pretrained("allenyummy/chinese-bert-wwm-ehr-ner-sl")
```
|
{"language": "zh-tw"}
|
allenyummy/chinese-bert-wwm-ehr-ner-sl
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh-tw"
] |
TAGS
#transformers #pytorch #bert #endpoints_compatible #region-us
|
# Model name
Chinese-bert-wwm-electrical-health-records-ner-sequence-labeling
#### How to use
|
[
"# Model name\nChinese-bert-wwm-electrical-health-records-ner-sequence-labeling",
"#### How to use"
] |
[
"TAGS\n#transformers #pytorch #bert #endpoints_compatible #region-us \n",
"# Model name\nChinese-bert-wwm-electrical-health-records-ner-sequence-labeling",
"#### How to use"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Swahili
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swahili using the following datasets:
- [ALFFA](http://www.openslr.org/25/)
- [Gamayun](https://gamayun.translatorswb.org/download/gamayun-5k-english-swahili/)
- [IWSLT](https://iwslt.org/2021/low-resource)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
processor = Wav2Vec2Processor.from_pretrained("alokmatta/wav2vec2-large-xlsr-53-sw")
model = Wav2Vec2ForCTC.from_pretrained("alokmatta/wav2vec2-large-xlsr-53-sw").to("cuda")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)

def load_file_to_data(file):
    batch = {}
    speech, _ = torchaudio.load(file)
    batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
    batch["sampling_rate"] = resampler.new_freq
    return batch

def predict(data):
    features = processor(data["speech"], sampling_rate=data["sampling_rate"], padding=True, return_tensors="pt")
    input_values = features.input_values.to("cuda")
    attention_mask = features.attention_mask.to("cuda")
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)
predict(load_file_to_data('./demo.wav'))
```
**Test Result (WER)**: 40 %
## Training
The script used for training can be found [here](https://colab.research.google.com/drive/1_RL6TQv_Yiu_xbWXu4ycbzdCdXCqEQYU?usp=sharing)
|
{"language": "sw", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["ALFFA,Gamayun & IWSLT"], "metrics": ["wer"]}
|
alokmatta/wav2vec2-large-xlsr-53-sw
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"sw",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sw"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sw #license-apache-2.0 #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Swahili
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Swahili using the following datasets:
- ALFFA
- Gamayun
- IWSLT
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
Test Result (WER): 40 %
## Training
The script used for training can be found here
|
[
"# Wav2Vec2-Large-XLSR-53-Swahili \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Swahili using the following datasets:\n- ALFFA,\n- Gamayun \n- IWSLT\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\n\n\nTest Result: 40 %",
"## Training\n\n\nThe script used for training can be found here"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #sw #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Swahili \n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Swahili using the following datasets:\n- ALFFA,\n- Gamayun \n- IWSLT\n\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:\n\n\n\nTest Result: 40 %",
"## Training\n\n\nThe script used for training can be found here"
] |
question-answering
|
transformers
|
# bert-base-multilingual-uncased for multilingual QA
# Overview
**Language Model**: bert-base-multilingual-uncased \
**Downstream task**: Extractive QA \
**Training data**: [XQuAD](https://github.com/deepmind/xquad) \
**Testing Data**: [XQuAD](https://github.com/deepmind/xquad)
# Hyperparameters
```python
batch_size = 48
n_epochs = 6
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
# Performance
Evaluated on held-out test set from XQuAD
```python
"exact_match": 64.6067415730337,
"f1": 79.52043478874286,
"test_samples": 2384
```
# Usage
## In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "alon-albalak/bert-base-multilingual-xquad"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer
model_name = "alon-albalak/bert-base-multilingual-xquad"
# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
## In Haystack
```python
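# Not part of the original card: the import path depends on the installed
# Haystack version, e.g. "from haystack.nodes import FARMReader, TransformersReader".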
reader = FARMReader(model_name_or_path="alon-albalak/bert-base-multilingual-xquad")
# or
reader = TransformersReader(model="alon-albalak/bert-base-multilingual-xquad", tokenizer="alon-albalak/bert-base-multilingual-xquad")
```
Usage instructions for FARM and Haystack were adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2
|
{"tags": ["multilingual"], "datasets": ["xquad"]}
|
alon-albalak/bert-base-multilingual-xquad
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"multilingual",
"dataset:xquad",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #bert #question-answering #multilingual #dataset-xquad #endpoints_compatible #region-us
|
# bert-base-multilingual-uncased for multilingual QA
# Overview
Language Model: bert-base-multilingual-uncased \
Downstream task: Extractive QA \
Training data: XQuAD \
Testing Data: XQuAD
# Hyperparameters
# Performance
Evaluated on held-out test set from XQuAD
# Usage
## In Transformers
## In FARM
## In Haystack
Usage instructions for FARM and Haystack were adapted from URL
|
[
"# bert-base-multilingual-uncased for multilingual QA",
"# Overview\nLanguage Model: bert-base-multilingual-uncased \\\nDownstream task: Extractive QA \\\nTraining data: XQuAD \\\nTesting Data: XQuAD",
"# Hyperparameters",
"# Performance\n\nEvaluated on held-out test set from XQuAD",
"# Usage",
"## In Transformers",
"## In FARM",
"## In Haystack\n\n\n\nUsage instructions for FARM and Haystack were adopted from URL"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bert #question-answering #multilingual #dataset-xquad #endpoints_compatible #region-us \n",
"# bert-base-multilingual-uncased for multilingual QA",
"# Overview\nLanguage Model: bert-base-multilingual-uncased \\\nDownstream task: Extractive QA \\\nTraining data: XQuAD \\\nTesting Data: XQuAD",
"# Hyperparameters",
"# Performance\n\nEvaluated on held-out test set from XQuAD",
"# Usage",
"## In Transformers",
"## In FARM",
"## In Haystack\n\n\n\nUsage instructions for FARM and Haystack were adopted from URL"
] |
question-answering
|
transformers
|
# xlm-roberta-base for multilingual QA
# Overview
**Language Model**: xlm-roberta-base \
**Downstream task**: Extractive QA \
**Training data**: [XQuAD](https://github.com/deepmind/xquad)\
**Testing Data**: [XQuAD](https://github.com/deepmind/xquad)
# Hyperparameters
```python
batch_size = 40
n_epochs = 10
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
# Performance
Evaluated on held-out test set from XQuAD
```python
"exact_match": 79.44756554307116,
"f1": 89.79318021513376,
"test_samples": 2307
```
# Usage
## In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "alon-albalak/xlm-roberta-base-xquad"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
    'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer
model_name = "alon-albalak/xlm-roberta-base-xquad"
# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
             "text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
## In Haystack
```python
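# Not part of the original card: the import path depends on the installed
# Haystack version, e.g. "from haystack.nodes import FARMReader, TransformersReader".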
reader = FARMReader(model_name_or_path="alon-albalak/xlm-roberta-base-xquad")
# or
reader = TransformersReader(model="alon-albalak/xlm-roberta-base-xquad", tokenizer="alon-albalak/xlm-roberta-base-xquad")
```
Usage instructions for FARM and Haystack were adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2
|
{"tags": ["multilingual"], "datasets": ["xquad"]}
|
alon-albalak/xlm-roberta-base-xquad
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"multilingual",
"dataset:xquad",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #question-answering #multilingual #dataset-xquad #endpoints_compatible #region-us
|
# xlm-roberta-base for multilingual QA
# Overview
Language Model: xlm-roberta-base \
Downstream task: Extractive QA \
Training data: XQuAD\
Testing Data: XQuAD
# Hyperparameters
# Performance
Evaluated on held-out test set from XQuAD
# Usage
## In Transformers
## In FARM
## In Haystack
Usage instructions for FARM and Haystack were adapted from URL
|
[
"# xlm-roberta-base for multilingual QA",
"# Overview\nLanguage Model: xlm-roberta-base \\\nDownstream task: Extractive QA \\\nTraining data: XQuAD\\\nTesting Data: XQuAD",
"# Hyperparameters",
"# Performance\nEvaluated on held-out test set from XQuAD",
"# Usage",
"## In Transformers",
"## In FARM",
"## In Haystack\n\n\n\nUsage instructions for FARM and Haystack were adopted from URL"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #question-answering #multilingual #dataset-xquad #endpoints_compatible #region-us \n",
"# xlm-roberta-base for multilingual QA",
"# Overview\nLanguage Model: xlm-roberta-base \\\nDownstream task: Extractive QA \\\nTraining data: XQuAD\\\nTesting Data: XQuAD",
"# Hyperparameters",
"# Performance\nEvaluated on held-out test set from XQuAD",
"# Usage",
"## In Transformers",
"## In FARM",
"## In Haystack\n\n\n\nUsage instructions for FARM and Haystack were adopted from URL"
] |
question-answering
|
transformers
|
# xlm-roberta-large for multilingual QA
# Overview
**Language Model**: xlm-roberta-large \
**Downstream task**: Extractive QA \
**Training data**: [XQuAD](https://github.com/deepmind/xquad) \
**Testing Data**: [XQuAD](https://github.com/deepmind/xquad)
# Hyperparameters
```python
batch_size = 48
n_epochs = 13
max_seq_len = 384
doc_stride = 128
learning_rate = 3e-5
```
# Performance
Evaluated on held-out test set from XQuAD
```
"exact_match": 87.12546816479401,
"f1": 94.77703248802527,
"test_samples": 2307
```
# Usage
## In Transformers
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "alon-albalak/xlm-roberta-large-xquad"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
    'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## In FARM
```python
from farm.modeling.adaptive_model import AdaptiveModel
from farm.modeling.tokenization import Tokenizer
from farm.infer import QAInferencer
model_name = "alon-albalak/xlm-roberta-large-xquad"
# a) Get predictions
nlp = QAInferencer.load(model_name)
QA_input = [{"questions": ["Why is model conversion important?"],
"text": "The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks."}]
res = nlp.inference_from_dicts(dicts=QA_input, rest_api_schema=True)
# b) Load model & tokenizer
model = AdaptiveModel.convert_from_transformers(model_name, device="cpu", task_type="question_answering")
tokenizer = Tokenizer.load(model_name)
```
## In Haystack
```python
# Reader imports are not shown in the original card; in Haystack 1.x they live in haystack.nodes
from haystack.nodes import FARMReader, TransformersReader

reader = FARMReader(model_name_or_path="alon-albalak/xlm-roberta-large-xquad")
# or
reader = TransformersReader(model="alon-albalak/xlm-roberta-large-xquad",tokenizer="alon-albalak/xlm-roberta-large-xquad")
```
Usage instructions for FARM and Haystack were adapted from https://huggingface.co/deepset/xlm-roberta-large-squad2
|
{"tags": ["multilingual"], "datasets": ["xquad"]}
|
alon-albalak/xlm-roberta-large-xquad
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"question-answering",
"multilingual",
"dataset:xquad",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #question-answering #multilingual #dataset-xquad #endpoints_compatible #has_space #region-us
|
# xlm-roberta-large for multilingual QA
# Overview
Language Model: xlm-roberta-large \
Downstream task: Extractive QA \
Training data: XQuAD \
Testing Data: XQuAD
# Hyperparameters
# Performance
Evaluated on held-out test set from XQuAD
# Usage
## In Transformers
## In FARM
## In Haystack
Usage instructions for FARM and Haystack were adapted from URL
|
[
"# xlm-roberta-large for multilingual QA",
"# Overview\nLanguage Model: xlm-roberta-large \\\nDownstream task: Extractive QA \\\nTraining data: XQuAD \\\nTesting Data: XQuAD",
"# Hyperparameters",
"# Performance\n\nEvaluated on held-out test set from XQuAD",
"# Usage",
"## In Transformers",
"## In FARM",
"## In Haystack\n\n\n\nUsage instructions for FARM and Haystack were adopted from URL"
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #question-answering #multilingual #dataset-xquad #endpoints_compatible #has_space #region-us \n",
"# xlm-roberta-large for multilingual QA",
"# Overview\nLanguage Model: xlm-roberta-large \\\nDownstream task: Extractive QA \\\nTraining data: XQuAD \\\nTesting Data: XQuAD",
"# Hyperparameters",
"# Performance\n\nEvaluated on held-out test set from XQuAD",
"# Usage",
"## In Transformers",
"## In FARM",
"## In Haystack\n\n\n\nUsage instructions for FARM and Haystack were adopted from URL"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 536415182
- CO2 Emissions (in grams): 1.268309634217171
## Validation Metrics
- Loss: 0.44733062386512756
- Accuracy: 0.8873239436619719
- Macro F1: 0.8859416445623343
- Micro F1: 0.8873239436619719
- Weighted F1: 0.8864646766540891
- Macro Precision: 0.8848522167487685
- Micro Precision: 0.8873239436619719
- Weighted Precision: 0.8883299798792756
- Macro Recall: 0.8908045977011494
- Micro Recall: 0.8873239436619719
- Weighted Recall: 0.8873239436619719
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alperiox/autonlp-user-review-classification-536415182
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alperiox/autonlp-user-review-classification-536415182", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["alperiox/autonlp-data-user-review-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 1.268309634217171}
|
alperiox/autonlp-user-review-classification-536415182
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:alperiox/autonlp-data-user-review-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-alperiox/autonlp-data-user-review-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 536415182
- CO2 Emissions (in grams): 1.268309634217171
## Validation Metrics
- Loss: 0.44733062386512756
- Accuracy: 0.8873239436619719
- Macro F1: 0.8859416445623343
- Micro F1: 0.8873239436619719
- Weighted F1: 0.8864646766540891
- Macro Precision: 0.8848522167487685
- Micro Precision: 0.8873239436619719
- Weighted Precision: 0.8883299798792756
- Macro Recall: 0.8908045977011494
- Micro Recall: 0.8873239436619719
- Weighted Recall: 0.8873239436619719
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 536415182\n- CO2 Emissions (in grams): 1.268309634217171",
"## Validation Metrics\n\n- Loss: 0.44733062386512756\n- Accuracy: 0.8873239436619719\n- Macro F1: 0.8859416445623343\n- Micro F1: 0.8873239436619719\n- Weighted F1: 0.8864646766540891\n- Macro Precision: 0.8848522167487685\n- Micro Precision: 0.8873239436619719\n- Weighted Precision: 0.8883299798792756\n- Macro Recall: 0.8908045977011494\n- Micro Recall: 0.8873239436619719\n- Weighted Recall: 0.8873239436619719",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-alperiox/autonlp-data-user-review-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 536415182\n- CO2 Emissions (in grams): 1.268309634217171",
"## Validation Metrics\n\n- Loss: 0.44733062386512756\n- Accuracy: 0.8873239436619719\n- Macro F1: 0.8859416445623343\n- Micro F1: 0.8873239436619719\n- Weighted F1: 0.8864646766540891\n- Macro Precision: 0.8848522167487685\n- Micro Precision: 0.8873239436619719\n- Weighted Precision: 0.8883299798792756\n- Macro Recall: 0.8908045977011494\n- Micro Recall: 0.8873239436619719\n- Weighted Recall: 0.8873239436619719",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
token-classification
|
spacy
|
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.1.0,<3.2.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `awarded` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 99.44 |
| `ENTS_P` | 99.63 |
| `ENTS_R` | 99.25 |
| `TOK2VEC_LOSS` | 37454.98 |
| `NER_LOSS` | 9266.72 |
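A minimal usage sketch, assuming the packaged pipeline from this repository has been installed as a Python package (e.g. via the wheel produced by `spacy-huggingface-hub`); the example sentence is illustrative.
```python
import spacy

# Assumes `en_pipeline` is installed as a package from this repo's wheel.
nlp = spacy.load("en_pipeline")
doc = nlp("She was awarded the Nobel Prize in Physics in 1903.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # the only NER label in this pipeline is `awarded`
```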
|
{"language": ["en"], "tags": ["spacy", "token-classification"]}
|
alphai/en_pipeline
| null |
[
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #token-classification #en #model-index #region-us
|
### Label Scheme
View label scheme (1 labels for 1 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (1 labels for 1 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #en #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (1 labels for 1 components)",
"### Accuracy"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
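The card provides no usage snippet, so below is a hedged sketch using the standard DialoGPT chat pattern (append the EOS token to each user turn); the prompt is illustrative.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aluserhuggingface/DialoGPT-small-harrypotter")
model = AutoModelForCausalLM.from_pretrained("aluserhuggingface/DialoGPT-small-harrypotter")

# One chat turn: encode the user message plus EOS, then decode only the new reply.
input_ids = tokenizer.encode("Hello, who are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```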
|
{"tags": ["conversational"]}
|
aluserhuggingface/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
token-classification
|
transformers
|
BioBERT model fine-tuned for NER on the BC5CDR-chemicals and BC4CHEMD corpora.
It was fine-tuned for use in a BioNER/BioNEN system, available at: https://github.com/librairy/bio-ner
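A minimal inference sketch (not part of the original card), using a token-classification pipeline with this repository's model id; `aggregation_strategy` assumes a reasonably recent `transformers` release, and the sentence is illustrative.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="alvaroalon2/biobert_chemical_ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("The patient received 500 mg of paracetamol and cisplatin."))
```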
|
{"language": "en", "license": "apache-2.0", "tags": ["token-classification", "NER", "Biomedical", "Chemicals"], "datasets": ["BC5CDR-chemicals", "BC4CHEMD"]}
|
alvaroalon2/biobert_chemical_ner
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"NER",
"Biomedical",
"Chemicals",
"en",
"dataset:BC5CDR-chemicals",
"dataset:BC4CHEMD",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #bert #token-classification #NER #Biomedical #Chemicals #en #dataset-BC5CDR-chemicals #dataset-BC4CHEMD #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
BioBERT model fine-tuned for NER on the BC5CDR-chemicals and BC4CHEMD corpora.
It was fine-tuned for use in a BioNER/BioNEN system, available at: URL
|
[] |
[
"TAGS\n#transformers #pytorch #tf #bert #token-classification #NER #Biomedical #Chemicals #en #dataset-BC5CDR-chemicals #dataset-BC4CHEMD #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
token-classification
|
transformers
|
BioBERT model fine-tuned for NER on the BC5CDR-diseases and NCBI-diseases corpora.
It was fine-tuned for use in a BioNER/BioNEN system, available at: https://github.com/librairy/bio-ner
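A hedged sketch of loading the checkpoint directly rather than through a pipeline; the example sentence and decoding loop are illustrative, not from the card.
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("alvaroalon2/biobert_diseases_ner")
model = AutoModelForTokenClassification.from_pretrained("alvaroalon2/biobert_diseases_ner")

inputs = tokenizer("Patients with type 2 diabetes were enrolled.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map each token to its predicted label id, then to the label string.
pred_ids = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[i.item()]) for tok, i in zip(tokens, pred_ids)])
```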
|
{"language": "en", "license": "apache-2.0", "tags": ["token-classification", "NER", "Biomedical", "Diseases"], "datasets": ["BC5CDR-diseases", "ncbi_disease"]}
|
alvaroalon2/biobert_diseases_ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"Biomedical",
"Diseases",
"en",
"dataset:BC5CDR-diseases",
"dataset:ncbi_disease",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #Biomedical #Diseases #en #dataset-BC5CDR-diseases #dataset-ncbi_disease #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
BioBERT model fine-tuned for NER on the BC5CDR-diseases and NCBI-diseases corpora.
It was fine-tuned for use in a BioNER/BioNEN system, available at: URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #Biomedical #Diseases #en #dataset-BC5CDR-diseases #dataset-ncbi_disease #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
token-classification
|
transformers
|
BioBERT model fine-tuned for NER on the JNLPBA and BC2GM corpora for genetic-class entities.
It was fine-tuned for use in a BioNER/BioNEN system, available at: https://github.com/librairy/bio-ner
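An illustrative sketch (not from the card) of running the three librairy BioBERT taggers side by side, which is conceptually what the linked BioNER system does; the model ids are the three repositories in this series, and the sentence is made up.
```python
from transformers import pipeline

model_ids = {
    "CHEMICAL": "alvaroalon2/biobert_chemical_ner",
    "DISEASE": "alvaroalon2/biobert_diseases_ner",
    "GENETIC": "alvaroalon2/biobert_genetic_ner",
}
text = "BRCA1 mutations raise breast cancer risk; tamoxifen is often prescribed."
for entity_type, model_id in model_ids.items():
    tagger = pipeline("token-classification", model=model_id, aggregation_strategy="simple")
    print(entity_type, tagger(text))
```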
|
{"language": "en", "license": "apache-2.0", "tags": ["token-classification", "NER", "Biomedical", "Genetics"], "datasets": ["JNLPBA", "BC2GM"]}
|
alvaroalon2/biobert_genetic_ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"NER",
"Biomedical",
"Genetics",
"en",
"dataset:JNLPBA",
"dataset:BC2GM",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #token-classification #NER #Biomedical #Genetics #en #dataset-JNLPBA #dataset-BC2GM #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
BioBERT model fine-tuned for NER on the JNLPBA and BC2GM corpora for genetic-class entities.
It was fine-tuned for use in a BioNER/BioNEN system, available at: URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #token-classification #NER #Biomedical #Genetics #en #dataset-JNLPBA #dataset-BC2GM #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
null | null |
Hi!
|
{}
|
alvinhou/model_test
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
Hi!
|
[] |
[
"TAGS\n#region-us \n"
] |
text-generation
|
transformers
|
# Frank Talks DialoGPT Model
|
{"tags": ["conversational"]}
|
alvinkobe/DialoGPT-medium-steve_biko
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Frank Talks DialoGPT Model
|
[
"# Frank Talks DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Frank Talks DialoGPT Model"
] |
text-generation
|
transformers
|
# PANAFRICAN DialoGPT
|
{"tags": ["conversational"]}
|
alvinkobe/DialoGPT-small-KST
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# PANAFRICAN DialoGPT
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 34318169
- CO2 Emissions (in grams): 8.612473981829835
## Validation Metrics
- Loss: 1.3520570993423462
- Accuracy: 0.6083916083916084
- Macro F1: 0.5420169617715481
- Micro F1: 0.6083916083916084
- Weighted F1: 0.5963328136975058
- Macro Precision: 0.5864033493660455
- Micro Precision: 0.6083916083916084
- Weighted Precision: 0.6364793882921277
- Macro Recall: 0.5545405576555766
- Micro Recall: 0.6083916083916084
- Weighted Recall: 0.6083916083916084
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alvp/autonlp-alberti-stanza-names-34318169
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alvp/autonlp-alberti-stanza-names-34318169", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["alvp/autonlp-data-alberti-stanza-names"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 8.612473981829835}
|
alvp/alberti-stanzas
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:alvp/autonlp-data-alberti-stanza-names",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-alvp/autonlp-data-alberti-stanza-names #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 34318169
- CO2 Emissions (in grams): 8.612473981829835
## Validation Metrics
- Loss: 1.3520570993423462
- Accuracy: 0.6083916083916084
- Macro F1: 0.5420169617715481
- Micro F1: 0.6083916083916084
- Weighted F1: 0.5963328136975058
- Macro Precision: 0.5864033493660455
- Micro Precision: 0.6083916083916084
- Weighted Precision: 0.6364793882921277
- Macro Recall: 0.5545405576555766
- Micro Recall: 0.6083916083916084
- Weighted Recall: 0.6083916083916084
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 34318169\n- CO2 Emissions (in grams): 8.612473981829835",
"## Validation Metrics\n\n- Loss: 1.3520570993423462\n- Accuracy: 0.6083916083916084\n- Macro F1: 0.5420169617715481\n- Micro F1: 0.6083916083916084\n- Weighted F1: 0.5963328136975058\n- Macro Precision: 0.5864033493660455\n- Micro Precision: 0.6083916083916084\n- Weighted Precision: 0.6364793882921277\n- Macro Recall: 0.5545405576555766\n- Micro Recall: 0.6083916083916084\n- Weighted Recall: 0.6083916083916084",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-alvp/autonlp-data-alberti-stanza-names #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 34318169\n- CO2 Emissions (in grams): 8.612473981829835",
"## Validation Metrics\n\n- Loss: 1.3520570993423462\n- Accuracy: 0.6083916083916084\n- Macro F1: 0.5420169617715481\n- Micro F1: 0.6083916083916084\n- Weighted F1: 0.5963328136975058\n- Macro Precision: 0.5864033493660455\n- Micro Precision: 0.6083916083916084\n- Weighted Precision: 0.6364793882921277\n- Macro Recall: 0.5545405576555766\n- Micro Recall: 0.6083916083916084\n- Weighted Recall: 0.6083916083916084",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57426955
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4779
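The card's usage sections are empty, so here is a minimal fill-mask sketch; the model id is taken from this repository and the example sentence is illustrative.
```python
from transformers import pipeline

# BERT-style checkpoints use [MASK] as the mask token.
unmasker = pipeline("fill-mask", model="am-shb/bert-base-multilingual-cased-finetuned")
print(unmasker("Paris is the capital of [MASK]."))
```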
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` appears after the list):
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
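A hedged sketch of how these values map onto the standard `TrainingArguments` API; `output_dir` and anything not listed above are assumptions rather than settings taken from the card, and the stated Adam betas/epsilon are the Trainer defaults.
```python
# Sketch only: the listed hyperparameters expressed via the Trainer API.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mlm-finetuned",        # assumption; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=16,
    seed=1337,
    gradient_accumulation_steps=2,     # 12 x 2 = 24 total train batch size
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
```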
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "57426955", "results": []}]}
|
am-shb/bert-base-multilingual-cased-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# 57426955
This model is a fine-tuned version of bert-base-multilingual-cased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4779
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
[
"# 57426955\n\nThis model is a fine-tuned version of bert-base-multilingual-cased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4779",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 12\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# 57426955\n\nThis model is a fine-tuned version of bert-base-multilingual-cased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4779",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 12\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 57463134
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "57463134", "results": []}]}
|
am-shb/bert-base-multilingual-uncased-finetuned
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# 57463134
This model is a fine-tuned version of bert-base-multilingual-uncased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6137
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 24
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
[
"# 57463134\n\nThis model is a fine-tuned version of bert-base-multilingual-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.6137",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 12\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# 57463134\n\nThis model is a fine-tuned version of bert-base-multilingual-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.6137",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 12\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 24\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-multilingual-uncased", "results": []}]}
|
am-shb/bert-base-multilingual-uncased-pretrained
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# bert-base-multilingual-uncased
This model is a fine-tuned version of bert-base-multilingual-uncased on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
[
"# bert-base-multilingual-uncased\n\nThis model is a fine-tuned version of bert-base-multilingual-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2198",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# bert-base-multilingual-uncased\n\nThis model is a fine-tuned version of bert-base-multilingual-uncased on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.2198",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4144
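As with the BERT checkpoints above, a minimal fill-mask sketch (not part of the original card); note that RoBERTa-style tokenizers use `<mask>` rather than `[MASK]`.
```python
from transformers import pipeline

unmasker = pipeline("fill-mask", model="am-shb/xlm-roberta-base-pretrained")
print(unmasker("The capital of France is <mask>."))
```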
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "model-index": [{"name": "roberta", "results": []}]}
|
am-shb/xlm-roberta-base-pretrained
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
# roberta
This model is a fine-tuned version of xlm-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4144
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 4
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
[
"# roberta\n\nThis model is a fine-tuned version of xlm-roberta-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4144",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 12\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 48\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta\n\nThis model is a fine-tuned version of xlm-roberta-base on the None dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.4144",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 12\n- eval_batch_size: 16\n- seed: 1337\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 48\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.2\n- Pytorch 1.10.0\n- Datasets 1.8.0\n- Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36789092
- CO2 Emissions (in grams): 1.4280361775467445
## Validation Metrics
- Loss: 0.5255328416824341
- Accuracy: 0.7666078777189889
- Precision: 0.6913123844731978
- Recall: 0.6192052980132451
- AUC: 0.7893359070795125
- F1: 0.6532751091703057
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/am4nsolanki/autonlp-text-hateful-memes-36789092
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("am4nsolanki/autonlp-text-hateful-memes-36789092", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["am4nsolanki/autonlp-data-text-hateful-memes"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 1.4280361775467445}
|
am4nsolanki/autonlp-text-hateful-memes-36789092
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:am4nsolanki/autonlp-data-text-hateful-memes",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-am4nsolanki/autonlp-data-text-hateful-memes #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 36789092
- CO2 Emissions (in grams): 1.4280361775467445
## Validation Metrics
- Loss: 0.5255328416824341
- Accuracy: 0.7666078777189889
- Precision: 0.6913123844731978
- Recall: 0.6192052980132451
- AUC: 0.7893359070795125
- F1: 0.6532751091703057
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 36789092\n- CO2 Emissions (in grams): 1.4280361775467445",
"## Validation Metrics\n\n- Loss: 0.5255328416824341\n- Accuracy: 0.7666078777189889\n- Precision: 0.6913123844731978\n- Recall: 0.6192052980132451\n- AUC: 0.7893359070795125\n- F1: 0.6532751091703057",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-am4nsolanki/autonlp-data-text-hateful-memes #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 36789092\n- CO2 Emissions (in grams): 1.4280361775467445",
"## Validation Metrics\n\n- Loss: 0.5255328416824341\n- Accuracy: 0.7666078777189889\n- Precision: 0.6913123844731978\n- Recall: 0.6192052980132451\n- AUC: 0.7893359070795125\n- F1: 0.6532751091703057",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
fill-mask
|
transformers
|
# RoBERTa base model for Hindi language
Pretrained model on the Hindi language using a masked language modeling (MLM) objective. [A more interactive demo with model comparisons is available here](https://huggingface.co/spaces/flax-community/roberta-hindi).
> This is part of the
[Flax/Jax Community Week](https://discuss.huggingface.co/t/pretrain-roberta-from-scratch-in-hindi/7091), organized by [Hugging Face](https://huggingface.co/) and TPU usage sponsored by Google.
## Model description
RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data (a combination of the **mc4, oscar and indic-nlp** datasets).
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi')
>>> unmasker("हम आपके सुखद <mask> की कामना करते हैं")
[{'score': 0.3310680091381073,
'sequence': 'हम आपके सुखद सफर की कामना करते हैं',
'token': 1349,
'token_str': ' सफर'},
{'score': 0.15317578613758087,
'sequence': 'हम आपके सुखद पल की कामना करते हैं',
'token': 848,
'token_str': ' पल'},
{'score': 0.07826550304889679,
'sequence': 'हम आपके सुखद समय की कामना करते हैं',
'token': 453,
'token_str': ' समय'},
{'score': 0.06304813921451569,
'sequence': 'हम आपके सुखद पहल की कामना करते हैं',
'token': 404,
'token_str': ' पहल'},
{'score': 0.058322224766016006,
'sequence': 'हम आपके सुखद अवसर की कामना करते हैं',
'token': 857,
'token_str': ' अवसर'}]
```
## Training data
The RoBERTa Hindi model was pretrained on a combination of the following datasets:
- [OSCAR](https://huggingface.co/datasets/oscar) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
- [mC4](https://huggingface.co/datasets/mc4) is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus.
- [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark.
- [Samanantar](https://indicnlp.ai4bharat.org/samanantar/) is a parallel corpus collection for Indic languages.
- [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summary collected from Hindi News Websites.
- [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines collected from Hindi News Websites.
- [Old Newspapers Hindi](https://www.kaggle.com/crazydiv/oldnewspapershindi) is a cleaned subset of HC Corpora newspapers.
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with `<s>` and the end of one by `</s>`.
- We had to clean up the **mC4** and **oscar** datasets by removing all non-Hindi (non-Devanagari) characters.
- We tried to filter the WikiNER evaluation set of the [IndicGlue](https://indicnlp.ai4bharat.org/indic-glue/) benchmark by [manually labelling](https://github.com/amankhandelia/roberta_hindi/blob/master/wikiner_incorrect_eval_set.csv) examples whose labels were incorrect and modifying the [downstream evaluation dataset](https://github.com/amankhandelia/roberta_hindi/blob/master/utils.py).
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch rather than being fixed); a minimal collator sketch is shown below.
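The dynamic masking described above matches what `transformers`' `DataCollatorForLanguageModeling` does by default. The sketch below assumes that collator; the actual training scripts are not shown in this card, so this is illustrative rather than the authors' exact code.
```python
# Illustrative only: dynamic MLM masking with the standard collator, which
# masks 15% of tokens per batch using the 80/10/10 mask/random/keep split.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("flax-community/roberta-hindi")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

example = tokenizer("हम आपके सुखद सफर की कामना करते हैं")
batch = collator([example])
print(batch["input_ids"])   # re-masked differently on every call
print(batch["labels"])      # -100 everywhere except the masked positions
```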
### Pretraining
The model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores). A randomized shuffle of the combined **mC4, oscar** and other datasets listed above was used to train the model. Training logs are available on [wandb](https://wandb.ai/wandb/hf-flax-roberta-hindi).
## Evaluation Results
RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below.
| Task | Task Type | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi |
|-------------------------|----------------------|-----------|------------|-------------------------------|-----------------------|---------------|
| BBC News Classification | Genre Classification | **76.44** | 66.86 | **77.6** | 64.9 | 73.67 |
| WikiNER | Token Classification | - | 90.68 | **95.09** | 89.61 | **92.76** |
| IITP Product Reviews | Sentiment Analysis | **78.01** | 73.23 | **78.39** | 66.16 | 75.53 |
| IITP Movie Reviews | Sentiment Analysis | 60.97 | 52.26 | **70.65** | 49.35 | **61.29** |
## Team Members
- Aman K ([amankhandelia](https://huggingface.co/amankhandelia))
- Haswanth Aekula ([hassiahk](https://huggingface.co/hassiahk))
- Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv))
- Prateek Agrawal ([prateekagrawal](https://huggingface.co/prateekagrawal))
- Rahul Dev ([mlkorra](https://huggingface.co/mlkorra))
## Credits
Huge thanks to Hugging Face 🤗 & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to [Suraj Patil](https://huggingface.co/valhalla) & [Patrick von Platen](https://huggingface.co/patrickvonplaten) for mentoring during the whole week.
<img src="https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:medium">
|
{"widget": [{"text": "\u092e\u0941\u091d\u0947 \u0909\u0928\u0938\u0947 \u092c\u093e\u0924 \u0915\u0930\u0928\u093e <mask> \u0905\u091a\u094d\u091b\u093e \u0932\u0917\u093e"}, {"text": "\u0939\u092e \u0906\u092a\u0915\u0947 \u0938\u0941\u0916\u0926 <mask> \u0915\u0940 \u0915\u093e\u092e\u0928\u093e \u0915\u0930\u0924\u0947 \u0939\u0948\u0902"}, {"text": "\u0938\u092d\u0940 \u0905\u091a\u094d\u091b\u0940 \u091a\u0940\u091c\u094b\u0902 \u0915\u093e \u090f\u0915 <mask> \u0939\u094b\u0924\u093e \u0939\u0948"}]}
|
amankhandelia/panini
| null |
[
"transformers",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
RoBERTa base model for Hindi language
=====================================
Pretrained model on the Hindi language using a masked language modeling (MLM) objective. A more interactive demo with model comparisons is available here.
> This is part of the
> Flax/Jax Community Week, organized by Hugging Face and TPU usage sponsored by Google.
Model description
-----------------
RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data (a combination of the mc4, oscar and indic-nlp datasets).
### How to use
You can use this model directly with a pipeline for masked language modeling:
Training data
-------------
The RoBERTa Hindi model was pretrained on a combination of the following datasets:
* OSCAR is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
* mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus.
* IndicGLUE is a natural language understanding benchmark.
* Samanantar is a parallel corpus collection for Indic languages.
* Hindi Text Short and Large Summarization Corpus is a collection of ~180k articles with their headlines and summary collected from Hindi News Websites.
* Hindi Text Short Summarization Corpus is a collection of ~330k articles with their headlines collected from Hindi News Websites.
* Old Newspapers Hindi is a cleaned subset of HC Corpora newspapers.
Training procedure
------------------
### Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50265. The inputs of
the model take pieces of 512 contiguous tokens that may span over documents. The beginning of a new document is marked
with '~~' and the end of one by '~~'.
* We had to clean up the mC4 and oscar datasets by removing all non-Hindi (non-Devanagari) characters.
* We tried to filter the WikiNER evaluation set of the IndicGlue benchmark by manually labelling examples whose labels were incorrect and modifying the downstream evaluation dataset.
The details of the masking procedure for each sentence are the following:
* 15% of the tokens are masked.
* In 80% of the cases, the masked tokens are replaced by ''.
* In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
* In the 10% remaining cases, the masked tokens are left as is.
Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch rather than being fixed).
### Pretraining
The model was trained on a Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores). A randomized shuffle of the combined mC4, oscar and other datasets listed above was used to train the model. Training logs are available on wandb.
Evaluation Results
------------------
RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below.
Team Members
------------
* Aman K (amankhandelia)
* Haswanth Aekula (hassiahk)
* Kartik Godawat (dk-crazydiv)
* Prateek Agrawal (prateekagrawal)
* Rahul Dev (mlkorra)
Credits
-------
Huge thanks to Hugging Face & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to Suraj Patil & Patrick von Platen for mentoring during the whole week.
<img src=URL
|
[
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nTraining data\n-------------\n\n\nThe RoBERTa Hindi model was pretrained on the reunion of the following datasets:\n\n\n* OSCAR is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n* mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus.\n* IndicGLUE is a natural language understanding benchmark.\n* Samanantar is a parallel corpora collection for Indic language.\n* Hindi Text Short and Large Summarization Corpus is a collection of ~180k articles with their headlines and summary collected from Hindi News Websites.\n* Hindi Text Short Summarization Corpus is a collection of ~330k articles with their headlines collected from Hindi News Websites.\n* Old Newspapers Hindi is a cleaned subset of HC Corpora newspapers.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'.\n\n\n* We had to perform cleanup of mC4 and oscar datasets by removing all non hindi (non Devanagari) characters from the datasets.\n* We tried to filter out evaluation set of WikiNER of IndicGlue benchmark by manual labelling where the actual labels were not correct and modifying the downstream evaluation dataset.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores).A randomized shuffle of combined dataset of mC4, oscar and other datasets listed above was used to train the model. Training logs are present in wandb.\n\n\nEvaluation Results\n------------------\n\n\nRoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below.\n\n\n\nTeam Members\n------------\n\n\n* Aman K (amankhandelia)\n* Haswanth Aekula (hassiahk)\n* Kartik Godawat (dk-crazydiv)\n* Prateek Agrawal (prateekagrawal)\n* Rahul Dev (mlkorra)\n\n\nCredits\n-------\n\n\nHuge thanks to Hugging Face & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to Suraj Patil & Patrick von Platen for mentoring during the whole week.\n\n\n<img src=URL"
] |
[
"TAGS\n#transformers #roberta #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"### How to use\n\n\nYou can use this model directly with a pipeline for masked language modeling:\n\n\nTraining data\n-------------\n\n\nThe RoBERTa Hindi model was pretrained on the reunion of the following datasets:\n\n\n* OSCAR is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.\n* mC4 is a multilingual colossal, cleaned version of Common Crawl's web crawl corpus.\n* IndicGLUE is a natural language understanding benchmark.\n* Samanantar is a parallel corpora collection for Indic language.\n* Hindi Text Short and Large Summarization Corpus is a collection of ~180k articles with their headlines and summary collected from Hindi News Websites.\n* Hindi Text Short Summarization Corpus is a collection of ~330k articles with their headlines collected from Hindi News Websites.\n* Old Newspapers Hindi is a cleaned subset of HC Corpora newspapers.\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nThe texts are tokenized using a byte version of Byte-Pair Encoding (BPE) and a vocabulary size of 50265. The inputs of\nthe model take pieces of 512 contiguous token that may span over documents. The beginning of a new document is marked\nwith '~~' and the end of one by '~~'.\n\n\n* We had to perform cleanup of mC4 and oscar datasets by removing all non hindi (non Devanagari) characters from the datasets.\n* We tried to filter out evaluation set of WikiNER of IndicGlue benchmark by manual labelling where the actual labels were not correct and modifying the downstream evaluation dataset.\n\n\nThe details of the masking procedure for each sentence are the following:\n\n\n* 15% of the tokens are masked.\n* In 80% of the cases, the masked tokens are replaced by ''.\n* In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace.\n* In the 10% remaining cases, the masked tokens are left as is.\nContrary to BERT, the masking is done dynamically during pretraining (e.g., it changes at each epoch and is not fixed).",
"### Pretraining\n\n\nThe model was trained on Google Cloud Engine TPUv3-8 machine (with 335 GB of RAM, 1000 GB of hard drive, 96 CPU cores).A randomized shuffle of combined dataset of mC4, oscar and other datasets listed above was used to train the model. Training logs are present in wandb.\n\n\nEvaluation Results\n------------------\n\n\nRoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below.\n\n\n\nTeam Members\n------------\n\n\n* Aman K (amankhandelia)\n* Haswanth Aekula (hassiahk)\n* Kartik Godawat (dk-crazydiv)\n* Prateek Agrawal (prateekagrawal)\n* Rahul Dev (mlkorra)\n\n\nCredits\n-------\n\n\nHuge thanks to Hugging Face & Google Jax/Flax team for such a wonderful community week, especially for providing such massive computing resources. Big thanks to Suraj Patil & Patrick von Platen for mentoring during the whole week.\n\n\n<img src=URL"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 20114061
- CO2 Emissions (in grams): 3.651199395353127
## Validation Metrics
- Loss: 0.5046541690826416
- Accuracy: 0.8036219581211093
- Macro F1: 0.807095210403678
- Micro F1: 0.8036219581211093
- Weighted F1: 0.8039634739225368
- Macro Precision: 0.8076842795233988
- Micro Precision: 0.8036219581211093
- Weighted Precision: 0.8052135235094771
- Macro Recall: 0.8075241470527056
- Micro Recall: 0.8036219581211093
- Weighted Recall: 0.8036219581211093
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
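Continuing the snippet above, here is a minimal sketch for turning the raw logits into a predicted class. It relies on the `id2label` mapping stored in the model config; the exact label names are whatever the checkpoint defines:
```python
import torch

# Softmax over the logits gives class probabilities.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))

# id2label maps class indices to the label names used during training.
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```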
|
{"language": "en", "tags": "autonlp", "datasets": ["amansolanki/autonlp-data-Tweet-Sentiment-Extraction"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 3.651199395353127}
|
amansolanki/autonlp-Tweet-Sentiment-Extraction-20114061
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:amansolanki/autonlp-data-Tweet-Sentiment-Extraction",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-amansolanki/autonlp-data-Tweet-Sentiment-Extraction #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 20114061
- CO2 Emissions (in grams): 3.651199395353127
## Validation Metrics
- Loss: 0.5046541690826416
- Accuracy: 0.8036219581211093
- Macro F1: 0.807095210403678
- Micro F1: 0.8036219581211093
- Weighted F1: 0.8039634739225368
- Macro Precision: 0.8076842795233988
- Micro Precision: 0.8036219581211093
- Weighted Precision: 0.8052135235094771
- Macro Recall: 0.8075241470527056
- Micro Recall: 0.8036219581211093
- Weighted Recall: 0.8036219581211093
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 20114061\n- CO2 Emissions (in grams): 3.651199395353127",
"## Validation Metrics\n\n- Loss: 0.5046541690826416\n- Accuracy: 0.8036219581211093\n- Macro F1: 0.807095210403678\n- Micro F1: 0.8036219581211093\n- Weighted F1: 0.8039634739225368\n- Macro Precision: 0.8076842795233988\n- Micro Precision: 0.8036219581211093\n- Weighted Precision: 0.8052135235094771\n- Macro Recall: 0.8075241470527056\n- Micro Recall: 0.8036219581211093\n- Weighted Recall: 0.8036219581211093",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #autonlp #en #dataset-amansolanki/autonlp-data-Tweet-Sentiment-Extraction #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 20114061\n- CO2 Emissions (in grams): 3.651199395353127",
"## Validation Metrics\n\n- Loss: 0.5046541690826416\n- Accuracy: 0.8036219581211093\n- Macro F1: 0.807095210403678\n- Micro F1: 0.8036219581211093\n- Weighted F1: 0.8039634739225368\n- Macro Precision: 0.8076842795233988\n- Micro Precision: 0.8036219581211093\n- Weighted Precision: 0.8052135235094771\n- Macro Recall: 0.8075241470527056\n- Micro Recall: 0.8036219581211093\n- Weighted Recall: 0.8036219581211093",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
fill-mask
|
transformers
|
⚠️ **Disclaimer** ⚠️
This model is community-contributed, and not supported by Amazon, Inc.
## BORT
[Amazon's BORT](https://www.amazon.science/blog/a-version-of-the-bert-language-model-thats-20-times-as-fast)
BORT is a highly compressed version of [bert-large](https://huggingface.co/bert-large-uncased) that is up to 10 times faster at inference.
The model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.
[Paper](https://arxiv.org/abs/2010.10499)
**Abstract**
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
The original model can be found under:
https://github.com/alexa/bort
**IMPORTANT**
BORT requires a very unique fine-tuning algorithm, called [Agora](https://adewynter.github.io/notes/bort_algorithms_and_applications.html) which is not open-sourced yet.
Standard fine-tuning has not shown to work well in initial experiments, so stay tuned for updates!
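Until Agora is released, the pretrained checkpoint can still be exercised for masked-token prediction. Below is a minimal sketch (standard fill-mask usage only, not the Agora fine-tuning procedure):
```python
from transformers import pipeline

# Loads the compressed BORT checkpoint together with its tokenizer.
fill_mask = pipeline("fill-mask", model="amazon/bort")

# Use the tokenizer's own mask token so the snippet works regardless of the vocabulary.
masked = f"The capital of France is {fill_mask.tokenizer.mask_token}."
print(fill_mask(masked))
```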
|
{}
|
amazon/bort
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"arxiv:2010.10499",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.10499"
] |
[] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2010.10499 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Disclaimer
This model is community-contributed, and not supported by Amazon, Inc.
## BORT
Amazon's BORT
BORT is a highly compressed version of bert-large that is up to 10 times faster at inference.
The model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.
Paper
Abstract
We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
The original model can be found under:
URL
IMPORTANT
BORT requires a very unique fine-tuning algorithm, called Agora which is not open-sourced yet.
Standard fine-tuning has not shown to work well in initial experiments, so stay tuned for updates!
|
[
"## BORT\n\nAmazon's BORT\n\nBORT is a highly compressed version of bert-large that is up to 10 times faster at inference. \nThe model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.\n\nPaper\n\nAbstract\n\nWe extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as \"Bort\", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.\n\nThe original model can be found under:\nURL\n\nIMPORTANT\n\nBORT requires a very unique fine-tuning algorithm, called Agora which is not open-sourced yet. \nStandard fine-tuning has not shown to work well in initial experiments, so stay tuned for updates!"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #arxiv-2010.10499 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"## BORT\n\nAmazon's BORT\n\nBORT is a highly compressed version of bert-large that is up to 10 times faster at inference. \nThe model is an optimal sub-architecture of *bert-large* that was found using neural architecture search.\n\nPaper\n\nAbstract\n\nWe extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as \"Bort\", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.\n\nThe original model can be found under:\nURL\n\nIMPORTANT\n\nBORT requires a very unique fine-tuning algorithm, called Agora which is not open-sourced yet. \nStandard fine-tuning has not shown to work well in initial experiments, so stay tuned for updates!"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# encoder_decoder_es
This model is a fine-tuned version of [](https://huggingface.co/) on the cc_news_es_titles dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8773
- Rouge2 Precision: 0.002
- Rouge2 Recall: 0.0116
- Rouge2 Fmeasure: 0.0034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
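For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` in Transformers 4.12; the `output_dir` value is an assumption, and the Adam betas/epsilon above are the library defaults:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="encoder_decoder_es",  # assumed; not stated in this card
    learning_rate=3e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=4,
    fp16=True,  # Native AMP mixed precision
)
```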
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 7.8807 | 1.0 | 5784 | 7.8976 | 0.0023 | 0.012 | 0.0038 |
| 7.8771 | 2.0 | 11568 | 7.8873 | 0.0018 | 0.0099 | 0.003 |
| 7.8588 | 3.0 | 17352 | 7.8819 | 0.0015 | 0.0085 | 0.0025 |
| 7.8507 | 4.0 | 23136 | 7.8773 | 0.002 | 0.0116 | 0.0034 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["cc_news_es_titles"], "model-index": [{"name": "encoder_decoder_es", "results": []}]}
|
amazon-sagemaker-community/encoder_decoder_es
| null |
[
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cc_news_es_titles",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #encoder-decoder #text2text-generation #generated_from_trainer #dataset-cc_news_es_titles #autotrain_compatible #endpoints_compatible #has_space #region-us
|
encoder\_decoder\_es
====================
This model is a fine-tuned version of [](URL) on the cc\_news\_es\_titles dataset.
It achieves the following results on the evaluation set:
* Loss: 7.8773
* Rouge2 Precision: 0.002
* Rouge2 Recall: 0.0116
* Rouge2 Fmeasure: 0.0034
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.003
* train\_batch\_size: 32
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #encoder-decoder #text2text-generation #generated_from_trainer #dataset-cc_news_es_titles #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.003\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-en-ru-emoji-v2
This model is a fine-tuned version of [DeepPavlov/xlm-roberta-large-en-ru](https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3356
- Accuracy: 0.3102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.4 | 200 | 3.0592 | 0.1204 |
| No log | 0.81 | 400 | 2.5356 | 0.2480 |
| 2.6294 | 1.21 | 600 | 2.4570 | 0.2569 |
| 2.6294 | 1.62 | 800 | 2.3332 | 0.2832 |
| 1.9286 | 2.02 | 1000 | 2.3354 | 0.2803 |
| 1.9286 | 2.42 | 1200 | 2.3610 | 0.2881 |
| 1.9286 | 2.83 | 1400 | 2.3004 | 0.2973 |
| 1.7312 | 3.23 | 1600 | 2.3619 | 0.3026 |
| 1.7312 | 3.64 | 1800 | 2.3596 | 0.3032 |
| 1.5816 | 4.04 | 2000 | 2.2972 | 0.3072 |
| 1.5816 | 4.44 | 2200 | 2.3077 | 0.3073 |
| 1.5816 | 4.85 | 2400 | 2.3356 | 0.3102 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
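A minimal inference sketch (the checkpoint is an emoji classifier per its name; the exact label set comes from the model config and is not documented in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2",
)

# The base model covers both English and Russian.
print(classifier("I love this song!"))
print(classifier("Мне очень нравится эта песня!"))
```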
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "DeepPavlov/xlm-roberta-large-en-ru", "model-index": [{"name": "xlm-roberta-en-ru-emoji-v2", "results": []}]}
|
amazon-sagemaker-community/xlm-roberta-en-ru-emoji-v2
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:DeepPavlov/xlm-roberta-large-en-ru",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-DeepPavlov/xlm-roberta-large-en-ru #autotrain_compatible #endpoints_compatible #has_space #region-us
|
xlm-roberta-en-ru-emoji-v2
==========================
This model is a fine-tuned version of DeepPavlov/xlm-roberta-large-en-ru on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 2.3356
* Accuracy: 0.3102
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 96
* eval\_batch\_size: 96
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.9.1
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 96\n* eval\\_batch\\_size: 96\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-DeepPavlov/xlm-roberta-large-en-ru #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 96\n* eval\\_batch\\_size: 96\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.9.1\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# Passage Reranking Multilingual BERT 🔃 🌍
## Model description
**Input:** Supports over 100 Languages. See [List of supported languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for all available.
**Purpose:** This module takes a search query [1] and a passage [2] and calculates if the passage matches the query.
It can be used as an improvement for Elasticsearch Results and boosts the relevancy by up to 100%.
**Architecture:** On top of BERT there is a densely connected NN which takes the 768-dimensional [CLS] token as input and provides the output ([Arxiv](https://arxiv.org/abs/1901.04085)).
**Output:** A single value between -10 and 10. Better-matching query-passage pairs tend to have a higher score.
## Intended uses & limitations
Both query [1] and passage [2] have to fit in 512 tokens.
As you normally want to rerank the first dozens of search results, keep in mind the inference time of approximately 300 ms/query.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```
This Model can be used as a drop-in replacement in the [Nboost Library](https://github.com/koursaros-ai/nboost)
Through this you can directly improve your Elasticsearch Results without any coding.
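Continuing the snippet above, a minimal sketch for scoring a single query-passage pair. The assumption here is a two-class sequence classification head whose second logit tracks relevance, so a higher value means a better match:
```python
import torch

query = "What is a corporation?"
passage = (
    "A company is incorporated in a specific nation, often within the bounds "
    "of a smaller subset of that nation, such as a state or province."
)

# Encode query and passage as one sequence pair; together they must fit in 512 tokens.
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: index 1 is the "relevant" class; its logit serves as the ranking score.
print(logits[0, 1].item())
```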
## Training data
This model is trained using the [**Microsoft MS Marco Dataset**](https://microsoft.github.io/msmarco/ "Microsoft MS Marco"). This training dataset contains approximately 400M tuples of a query, relevant and non-relevant passages. All datasets used for training and evaluating are listed in this [table](https://github.com/microsoft/MSMARCO-Passage-Ranking#data-information-and-formating). The dataset used for training is called *Train Triples Large*, while the evaluation was made on *Top 1000 Dev*. There are 6,900 queries in total in the development dataset, where each query is mapped to the top 1,000 passages retrieved using BM25 from the MS MARCO corpus.
## Training procedure
The training is performed the same way as stated in this [README](https://github.com/nyu-dl/dl4marco-bert "NYU Github"). See their excellent [paper on Arxiv](https://arxiv.org/abs/1901.04085).
We changed the BERT model from an English-only one to the default multilingual uncased BERT model from [Google](https://huggingface.co/bert-base-multilingual-uncased).
Training was done for 400,000 steps. This equaled 12 hours on a TPU V3-8.
## Eval results
We see nearly the same performance as the English-only model on the English [Bing Queries Dataset](http://www.msmarco.org/). Although the training data is English only, internal tests on private data showed a far higher accuracy in German than all other available models.
Fine-tuned Models | Dependency | Eval Set | Search Boost<a href='#benchmarks'> | Speed on GPU
----------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------ | ----------------------------------------------------- | ----------------------------------
**`amberoad/Multilingual-uncased-MSMARCO`** (This Model) | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-blue"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+61%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query <a href='#footnotes'>
`nboost/pt-tinybert-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+45%** <sub><sup>(0.26 vs 0.18)</sup></sub> | ~50ms/query <a href='#footnotes'>
`nboost/pt-bert-base-uncased-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+62%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query<a href='#footnotes'>
`nboost/pt-bert-large-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+77%** <sub><sup>(0.32 vs 0.18)</sup></sub> | -
`nboost/pt-biobert-base-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='https://github.com/naver/biobert-pretrained'>biomed</a> | **+66%** <sub><sup>(0.17 vs 0.10)</sup></sub> | ~300 ms/query<a href='#footnotes'>
This table is taken from [nboost](https://github.com/koursaros-ai/nboost) and extended by the first line.
## Contact Infos

Amberoad is a company focusing on Search and Business Intelligence.
We provide you:
* Advanced Internal Company Search Engines through NLP
* External Search Engines: Find Competitors, Customers, Suppliers
**Get in Contact now to benefit from our Expertise:**
The training and evaluation was performed by [**Philipp Reissel**](https://reissel.eu/) and [**Igli Manaj**](https://github.com/iglimanaj)
[Linkedin](https://de.linkedin.com/company/amberoad) | [Homepage](https://de.linkedin.com/company/amberoad) | [Email]([email protected])
|
{"language": ["multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "hr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo"], "license": "apache-2.0", "tags": ["msmarco", "multilingual", "passage reranking"], "datasets": ["msmarco"], "metrics": ["MRR"], "thumbnail": "https://amberoad.de/images/logo_text.png", "widget": [{"query": "What is a corporation?", "passage": "A company is incorporated in a specific nation, often within the bounds of a smaller subset of that nation, such as a state or province. The corporation is then governed by the laws of incorporation in that state. A corporation may issue stock, either private or public, or may be classified as a non-stock corporation. If stock is issued, the corporation will usually be governed by its shareholders, either directly or indirectly."}]}
|
amberoad/bert-multilingual-passage-reranking-msmarco
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"msmarco",
"multilingual",
"passage reranking",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:msmarco",
"arxiv:1901.04085",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1901.04085"
] |
[
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"hr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo"
] |
TAGS
#transformers #pytorch #tf #jax #bert #text-classification #msmarco #multilingual #passage reranking #af #sq #ar #an #hy #ast #az #ba #eu #bar #be #bn #inc #bs #br #bg #my #ca #ceb #ce #zh #cv #hr #cs #da #nl #en #et #fi #fr #gl #ka #de #el #gu #ht #he #hi #hu #is #io #id #ga #it #ja #jv #kn #kk #ky #ko #la #lv #lt #roa #nds #lm #mk #mg #ms #ml #mr #min #ne #new #nb #nn #oc #fa #pms #pl #pt #pa #ro #ru #sco #sr #scn #sk #sl #aze #es #su #sw #sv #tl #tg #ta #tt #te #tr #uk #ud #uz #vi #vo #war #cy #fry #pnb #yo #dataset-msmarco #arxiv-1901.04085 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Passage Reranking Multilingual BERT
===================================
Model description
-----------------
Input: Supports over 100 Languages. See List of supported languages for all available.
Purpose: This module takes a search query [1] and a passage [2] and calculates if the passage matches the query.
It can be used as an improvement for Elasticsearch Results and boosts the relevancy by up to 100%.
Architecture: On top of BERT there is a densely connected NN which takes the 768-dimensional [CLS] token as input and provides the output (Arxiv).
Output: A single value between -10 and 10. Better-matching query-passage pairs tend to have a higher score.
Intended uses & limitations
---------------------------
Both query [1] and passage [2] have to fit in 512 tokens.
As you normally want to rerank the first dozens of search results, keep in mind the inference time of approximately 300 ms/query.
#### How to use
This Model can be used as a drop-in replacement in the Nboost Library
Through this you can directly improve your Elasticsearch Results without any coding.
Training data
-------------
This model is trained using the Microsoft MS Marco Dataset. This training dataset contains approximately 400M tuples of a query, relevant and non-relevant passages. All datasets used for training and evaluating are listed in this table. The dataset used for training is called *Train Triples Large*, while the evaluation was made on *Top 1000 Dev*. There are 6,900 queries in total in the development dataset, where each query is mapped to the top 1,000 passages retrieved using BM25 from the MS MARCO corpus.
Training procedure
------------------
The training is performed the same way as stated in this README. See their excellent paper on Arxiv.
We changed the BERT model from an English-only one to the default multilingual uncased BERT model from Google.
Training was done for 400,000 steps. This equaled 12 hours on a TPU V3-8.
Eval results
------------
We see nearly the same performance as the English-only model on the English Bing Queries Dataset. Although the training data is English only, internal tests on private data showed a far higher accuracy in German than all other available models.
This table is taken from nboost and extended by the first line.
Contact Infos
-------------
 that handles only 5 languages (en, fr, es, de and zh) instead of 104.
The model is therefore 30% smaller than the original one (124M parameters instead of 178M) but gives exactly the same representations for the above cited languages.
Starting from `bert-base-5lang-cased` will facilitate the deployment of your model on public cloud platforms while keeping similar results.
For instance, Google Cloud Platform requires that the model size on disk should be lower than 500 MB for serverless deployments (Cloud Functions / Cloud ML), which is not the case of the original `bert-base-multilingual-cased`.
For more information about the model's size, memory footprint and loading time, please refer to the table below:
| Model | Num parameters | Size | Memory | Loading time |
| ---------------------------- | -------------- | -------- | -------- | ------------ |
| bert-base-multilingual-cased | 178 million | 714 MB | 1400 MB | 4.2 sec |
| bert-base-5lang-cased | 124 million | 495 MB | 950 MB | 3.6 sec |
These measurements have been computed on a [Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB)](https://cloud.google.com/compute/docs/machine-types\#n1_machine_type).
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("amine/bert-base-5lang-cased")
model = AutoModel.from_pretrained("amine/bert-base-5lang-cased")
```
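Since the point of this model is that its representations match the multilingual original, here is a short sketch of extracting them; this is a standard `AutoModel` forward pass, nothing below is specific to this checkpoint:
```python
import torch

text = "Paris est la capitale de la France."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings; for en/fr/es/de/zh these are identical to
# those produced by bert-base-multilingual-cased.
embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
print(embeddings.shape)
```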
### How to cite
```bibtex
@inproceedings{smallermbert,
  title={Load What You Need: Smaller Versions of Multilingual BERT},
  author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire},
  booktitle={SustaiNLP / EMNLP},
  year={2020}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
{"language": ["en", "fr", "es", "de", "zh", "multilingual"], "license": "apache-2.0", "tags": ["pytorch", "bert", "multilingual", "en", "fr", "es", "de", "zh"], "datasets": "wikipedia", "inference": false}
|
amine/bert-base-5lang-cased
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"multilingual",
"en",
"fr",
"es",
"de",
"zh",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"fr",
"es",
"de",
"zh",
"multilingual"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #en #fr #es #de #zh #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #region-us
|
bert-base-5lang-cased
=====================
This is a smaller version of bert-base-multilingual-cased that handles only 5 languages (en, fr, es, de and zh) instead of 104.
The model is therefore 30% smaller than the original one (124M parameters instead of 178M) but gives exactly the same representations for the above cited languages.
Starting from 'bert-base-5lang-cased' will facilitate the deployment of your model on public cloud platforms while keeping similar results.
For instance, Google Cloud Platform requires that the model size on disk should be lower than 500 MB for serverless deployments (Cloud Functions / Cloud ML), which is not the case of the original 'bert-base-multilingual-cased'.
For more information about the model's size, memory footprint and loading time, please refer to the table below:
These measurements have been computed on a Google Cloud n1-standard-1 machine (1 vCPU, 3.75 GB).
How to use
----------
### How to cite
Contact
-------
Please contact amine@URL for any question, feedback or request.
|
[
"### How to cite\n\n\nContact\n-------\n\n\nPlease contact amine@URL for any question, feedback or request."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #multilingual #en #fr #es #de #zh #dataset-wikipedia #license-apache-2.0 #autotrain_compatible #region-us \n",
"### How to cite\n\n\nContact\n-------\n\n\nPlease contact amine@URL for any question, feedback or request."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pft-clf-finetuned
This model is a fine-tuned version of [HooshvareLab/bert-fa-zwnj-base](https://huggingface.co/HooshvareLab/bert-fa-zwnj-base) on the "FarsNews1398" dataset. This dataset contains a collection of news gathered from the Farsnews website, a news agency in Iran. You can download the dataset from [here](https://www.kaggle.com/amirhossein76/farsnews1398). I used the category, abstract, and paragraphs of each news item for text classification: the "abstract" and "paragraphs" were concatenated together, and the "category" was used as the classification target.
The notebook used for fine-tuning can be found [here](https://colab.research.google.com/drive/1jC2dfKRASxCY-b6bJSPkhEJfQkOA30O0?usp=sharing). I've reported loss and Matthews correlation criteria on the validation set.
It achieves the following results on the evaluation set:
- Loss: 0.0617
- Matthews Correlation: 0.9830
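A minimal inference sketch; the input should mirror training, i.e. the news abstract concatenated with the body text (the example below reuses the first widget sentence from this card):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="amirhossein1376/pft-clf-finetuned")

# Abstract and body concatenated into one string, as during training.
news = "امروز دربی دو تیم پرسپولیس و استقلال در ورزشگاه آزادی تهران برگزار می‌شود."
print(classifier(news))
```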
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 6
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0634 | 1.0 | 20276 | 0.0617 | 0.9830 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"language": "fa", "license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "widget": [{"text": "\u0627\u0645\u0631\u0648\u0632 \u062f\u0631\u0628\u06cc \u062f\u0648 \u062a\u06cc\u0645 \u067e\u0631\u0633\u067e\u0648\u0644\u06cc\u0633 \u0648 \u0627\u0633\u062a\u0642\u0644\u0627\u0644 \u062f\u0631 \u0648\u0631\u0632\u0634\u06af\u0627\u0647 \u0622\u0632\u0627\u062f\u06cc \u062a\u0647\u0631\u0627\u0646 \u0628\u0631\u06af\u0632\u0627\u0631 \u0645\u06cc\u200c\u0634\u0648\u062f."}, {"text": "\u0648\u0632\u06cc\u0631 \u0627\u0645\u0648\u0631 \u062e\u0627\u0631\u062c\u0647 \u0627\u0631\u062f\u0646 \u062a\u0627\u06a9\u06cc\u062f \u06a9\u0631\u062f \u06a9\u0647 \u0647\u0645\u0647 \u06a9\u0634\u0648\u0631\u0647\u0627\u06cc \u0639\u0631\u0628\u06cc \u062e\u0648\u0627\u0647\u0627\u0646 \u0631\u0648\u0627\u0628\u0637 \u062e\u0648\u0628 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0647\u0633\u062a\u0646\u062f.\n\u0628\u0647 \u06af\u0632\u0627\u0631\u0634 \u0627\u06cc\u0633\u0646\u0627 \u0628\u0647 \u0646\u0642\u0644 \u0627\u0632 \u0634\u0628\u06a9\u0647 \u0641\u0631\u0627\u0646\u0633 \u06f2\u06f4\u060c \u0627\u06cc\u0645\u0646 \u0627\u0644\u0635\u0641\u062f\u06cc \u0645\u0639\u0627\u0648\u0646 \u0646\u062e\u0633\u062a\u200c\u0648\u0632\u06cc\u0631 \u0648 \u0648\u0632\u06cc\u0631 \u0627\u0645\u0648\u0631 \u062e\u0627\u0631\u062c\u0647 \u0627\u0631\u062f\u0646 \u067e\u0633 \u0627\u0632 \u06a9\u0646\u0641\u0631\u0627\u0646\u0633 \u0644\u06cc\u0628\u06cc \u062f\u0631 \u067e\u0627\u0631\u06cc\u0633 \u062f\u0631 \u06af\u0641\u062a\u200c\u0648\u06af\u0648\u06cc\u06cc \u0628\u0627 \u0641\u0631\u0627\u0646\u0633 \u06f2\u06f4 \u062a\u0627\u06a9\u06cc\u062f \u06a9\u0631\u062f: \u0645\u0648\u0636\u0639 \u0627\u0631\u062f\u0646 \u0631\u0648\u0634\u0646 \u0627\u0633\u062a\u060c \u0645\u0627 \u062e\u0648\u0627\u0633\u062a\u0627\u0631 \u0631\u0648\u0627\u0628\u0637 \u0645\u0646\u0637\u0642\u0647\u200c\u0627\u06cc \u0645\u0628\u062a\u0646\u06cc \u0628\u0631 \u062d\u0633\u0646 \u0647\u0645\u062c\u0648\u0627\u0631\u06cc \u0648 \u0639\u062f\u0645 \u0645\u062f\u0627\u062e\u0644\u0647 \u062f\u0631 \u0627\u0645\u0648\u0631 \u062f\u0627\u062e\u0644\u06cc \u0647\u0633\u062a\u06cc\u0645. 
\u0628\u0633\u06cc\u0627\u0631\u06cc \u0627\u0632 \u0645\u0633\u0627\u0626\u0644 \u0648 \u0645\u0634\u06a9\u0644\u0627\u062a \u0645\u0646\u0637\u0642\u0647 \u0646\u06cc\u0627\u0632 \u0628\u0647 \u0631\u0633\u06cc\u062f\u06af\u06cc \u0627\u0632 \u0637\u0631\u06cc\u0642 \u06af\u0641\u062a\u200c\u0648\u06af\u0648 \u062f\u0627\u0631\u062f.\n\n\u0627\u0644\u0635\u0641\u062f\u06cc \u0647\u0631\u06af\u0648\u0646\u0647 \u06af\u0641\u062a\u200c\u0648\u06af\u0648\u06cc \u0628\u0627 \u0648\u0627\u0633\u0637\u0647 \u0627\u0631\u062f\u0646 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0631\u0627 \u0631\u062f \u06a9\u0631\u062f\u0647 \u0648 \u06af\u0641\u062a: \u0645\u0627 \u0628\u0627 \u0646\u0645\u0627\u06cc\u0646\u062f\u06af\u0627\u0646 \u0647\u06cc\u0686\u200c\u06a9\u0633 \u0635\u062d\u0628\u062a \u0646\u0645\u06cc\u200c\u06a9\u0646\u06cc\u0645 \u0648 \u0632\u0645\u0627\u0646\u06cc \u06a9\u0647 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0635\u062d\u0628\u062a \u0645\u06cc\u200c\u06a9\u0646\u06cc\u0645 \u0645\u0633\u062a\u0642\u06cc\u0645\u0627\u064b \u0628\u0627 \u062f\u0648\u0644\u062a \u0627\u06cc\u0646 \u06a9\u0634\u0648\u0631 \u0628\u0648\u062f\u0647 \u0648 \u0627\u0632 \u0637\u0631\u06cc\u0642 \u062a\u0645\u0627\u0633 \u062a\u0644\u0641\u0646\u06cc \u0648\u0632\u06cc\u0631 \u0627\u0645\u0648\u0631 \u062e\u0627\u0631\u062c\u0647 \u062f\u0648 \u06a9\u0634\u0648\u0631.\n\u0648\u06cc \u062a\u0627\u06a9\u06cc\u062f \u06a9\u0631\u062f: \u0647\u0645\u0647 \u062f\u0631 \u0645\u0646\u0637\u0642\u0647 \u0639\u0631\u0628\u06cc \u062e\u0648\u0627\u0633\u062a\u0627\u0631 \u0631\u0648\u0627\u0628\u0637 \u062e\u0648\u0628 \u0628\u0627 \u0627\u06cc\u0631\u0627\u0646 \u0647\u0633\u062a\u0646\u062f\u060c \u0627\u0645\u0627 \u0628\u0631\u0627\u06cc \u062a\u062d\u0642\u0642 \u0627\u06cc\u0646 \u0627\u0645\u0631 \u0628\u0627\u06cc\u062f \u0631\u0648\u0627\u0628\u0637 \u0628\u0631 \u0627\u0633\u0627\u0633 \u0634\u0641\u0627\u0641\u06cc\u062a \u0648 \u0628\u0631 \u0627\u0633\u0627\u0633 \u0627\u0635\u0648\u0644 \u0627\u062d\u062a\u0631\u0627\u0645 \u0628\u0647 \u0647\u0645\u0633\u0627\u06cc\u06af\u06cc \u0648 \u0639\u062f\u0645 \u0645\u062f\u0627\u062e\u0644\u0647 \u062f\u0631 \u0627\u0645\u0648\u0631 \u062f\u0627\u062e\u0644\u06cc \u0628\u0627\u0634\u062f. "}], "model-index": [{"name": "pft-clf-finetuned", "results": []}]}
|
amirhossein1376/pft-clf-finetuned
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"fa",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fa"
] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
pft-clf-finetuned
=================
This model is a fine-tuned version of HooshvareLab/bert-fa-zwnj-base on the "FarsNews1398" dataset. This dataset contains a collection of news gathered from the Farsnews website, a news agency in Iran. You can download the dataset from here. I used the category, abstract, and paragraphs of each news item for text classification: the "abstract" and "paragraphs" were concatenated together, and the "category" was used as the classification target.
The notebook used for fine-tuning can be found here. I've reported loss and Matthews correlation criteria on the validation set.
It achieves the following results on the evaluation set:
* Loss: 0.0617
* Matthews Correlation: 0.9830
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 6
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu111
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #fa #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
fill-mask
|
transformers
|
# nepbert
## Model description
Roberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.
## Intended uses & limitations
#### How to use
```python
from transformers import pipeline
pipe = pipeline(
    "fill-mask",
    model="amitness/nepbert",
    tokenizer="amitness/nepbert"
)
print(pipe(u"तिमीलाई कस्तो <mask>?"))
```
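The pipeline returns a ranked list of candidate fills; a minimal sketch of reading the top one (field names as defined by the transformers fill-mask pipeline):
```python
results = pipe(u"तिमीलाई कस्तो <mask>?")

# Each result carries the filled sequence, the candidate token and its score.
top = results[0]
print(top["token_str"], top["score"])
```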
## Training data
The data was taken from the Nepali language subset of the CC-100 dataset.
## Training procedure
The model was trained on Google Colab using `1x Tesla V100`.
|
{"language": ["ne"], "license": "mit", "tags": ["roberta", "nepali-laguage-model"], "datasets": ["cc100"], "widget": [{"text": "\u0924\u093f\u092e\u0940\u0932\u093e\u0908 \u0915\u0938\u094d\u0924\u094b <mask>?"}]}
|
amitness/roberta-base-ne
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"nepali-laguage-model",
"ne",
"dataset:cc100",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ne"
] |
TAGS
#transformers #pytorch #jax #safetensors #roberta #fill-mask #nepali-laguage-model #ne #dataset-cc100 #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
# nepbert
## Model description
Roberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.
## Intended uses & limitations
#### How to use
## Training data
The data was taken from the Nepali language subset of the CC-100 dataset.
## Training procedure
The model was trained on Google Colab using '1x Tesla V100'.
|
[
"# nepbert",
"## Model description\n\nRoberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.",
"## Intended uses & limitations",
"#### How to use",
"## Training data\n\nThe data was taken from the nepali language subset of CC-100 dataset.",
"## Training procedure\nThe model was trained on Google Colab using '1x Tesla V100'."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #roberta #fill-mask #nepali-laguage-model #ne #dataset-cc100 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"# nepbert",
"## Model description\n\nRoberta trained from scratch on the Nepali CC-100 dataset with 12 million sentences.",
"## Intended uses & limitations",
"#### How to use",
"## Training data\n\nThe data was taken from the nepali language subset of CC-100 dataset.",
"## Training procedure\nThe model was trained on Google Colab using '1x Tesla V100'."
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Kannada
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Kannada using the [OpenSLR SLR79](http://openslr.org/79/) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada `sentence` and `path` fields:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For a sample, see the Colab link in Training Section.
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
resampler = torchaudio.transforms.Resample(48_000, 16_000) # The original data was with 48,000 sampling rate. You can change it according to your input.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on 10% of the Kannada data on OpenSLR.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# test_dataset = #TODO: WRITE YOUR CODE TO LOAD THE TEST DATASET. For sample see the Colab link in Training Section.
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model = Wav2Vec2ForCTC.from_pretrained("amoghsgopadi/wav2vec2-large-xlsr-kn")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\–\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"),
                       attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 27.08 %
## Training
90% of the OpenSLR Kannada dataset was used for training.
The colab notebook used for training can be found [here](https://colab.research.google.com/github/amoghgopadi/wav2vec2-xlsr-kannada/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Kannada_ASR.ipynb).
|
{"language": "kn", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["openslr"], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Large 53 Kannada by Amogh Gopadi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "OpenSLR kn", "type": "openslr"}, "metrics": [{"type": "wer", "value": 27.08, "name": "Test WER"}]}]}]}
|
amoghsgopadi/wav2vec2-large-xlsr-kn
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"kn",
"dataset:openslr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"kn"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #kn #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Kannada
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Kannada using the OpenSLR SLR79 dataset. When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada 'sentence' and 'path' fields:
## Evaluation
The model can be evaluated as follows on 10% of the Kannada data on OpenSLR.
Test Result: 27.08 %
## Training
90% of the OpenSLR Kannada dataset was used for training.
The colab notebook used for training can be found here.
|
[
"# Wav2Vec2-Large-XLSR-53-Kannada\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Kannada using the OpenSLR SLR79 dataset. When using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada 'sentence' and 'path' fields:",
"## Evaluation\n\nThe model can be evaluated as follows on 10% of the Kannada data on OpenSLR.\n\n\n\nTest Result: 27.08 %",
"## Training\n\n90% of the OpenSLR Kannada dataset was used for training.\n\nThe colab notebook used for training can be found here."
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #kn #dataset-openslr #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Kannada\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Kannada using the OpenSLR SLR79 dataset. When using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows, assuming you have a dataset with Kannada 'sentence' and 'path' fields:",
"## Evaluation\n\nThe model can be evaluated as follows on 10% of the Kannada data on OpenSLR.\n\n\n\nTest Result: 27.08 %",
"## Training\n\n90% of the OpenSLR Kannada dataset was used for training.\n\nThe colab notebook used for training can be found here."
] |
fill-mask
|
transformers
|
# roberta-cord19-1M7k

> This model is based on ***RoBERTa*** and was pre-trained on 1.7 million sentences.
The training corpus was papers taken from *Semantic Scholar*'s CORD-19 historical releases. Corpus size is `13k` papers, `~60M` tokens. I used the full-text `"body_text"` of the papers in training (details below).
#### Usage
```python
from transformers import pipeline
from transformers import RobertaTokenizerFast, RobertaForMaskedLM
tokenizer = RobertaTokenizerFast.from_pretrained("amoux/roberta-cord19-1M7k")
model = RobertaForMaskedLM.from_pretrained("amoux/roberta-cord19-1M7k")
fillmask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = "Lung infiltrates cause significant morbidity and mortality in immunocompromised patients."
masked_text = text.replace("patients", tokenizer.mask_token)
predictions = fillmask(masked_text, top_k=3)
```
- Predicted tokens
```python
[{'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised patients.</s>',
  'score': 0.6273621320724487,
  'token': 660,
  'token_str': 'Ġpatients'},
 {'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised individuals.</s>',
  'score': 0.19800445437431335,
  'token': 1868,
  'token_str': 'Ġindividuals'},
 {'sequence': '<s>Lung infiltrates cause significant morbidity and mortality in immunocompromised animals.</s>',
  'score': 0.022069649770855904,
  'token': 1471,
  'token_str': 'Ġanimals'}]
```
## Dataset
- About
- name: *CORD-19: The Covid-19 Open Research Dataset*
- date: *2020-03-18*
- md5 | sha1: `a36fe181 | 8fbea927`
- text-key: `body_text`
- subsets (*total*: `13,202`):
- *biorxiv_medrxiv*: `803`
- *comm_use_subset*: `9000`
- *pmc_custom_license*: `1426`
- *noncomm_use_subset*: `1973`
- Splits (*ratio: 0.9*)
- sentences used for training: `1,687,124`
- sentences used for evaluation: `187,459`
- Total training steps: `210,890`
- Total evaluation steps: `23,433`
## Parameters
- Data
- block_size: `256`
- Training
- per_device_train_batch_size: `8`
- per_device_eval_batch_size: `8`
- gradient_accumulation_steps: `2`
- learning_rate: `5e-5`
- num_train_epochs: `2`
- fp16: `True`
  - fp16_opt_level: `'O1'`
- seed: `42`
- Output
- global_step: `210890`
- training_loss: `3.5964575726682155`
## Evaluation
- Perplexity: `17.469366079957922`
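For reference, perplexity here is just the exponential of the evaluation cross-entropy loss, so the implied eval loss can be recovered as a quick sanity check (a minimal sketch; the card does not report the eval loss directly):

```python
import math

perplexity = 17.469366079957922  # value reported above
implied_eval_loss = math.log(perplexity)  # perplexity = exp(cross-entropy loss)
print(round(implied_eval_loss, 2))  # ~2.86
```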
### Citation
> Allen Institute CORD-19 [Historical Releases](https://ai2-semanticscholar-cord-19.s3-us-west-2.amazonaws.com/historical_releases.html)
```
@article{Wang2020CORD19TC,
title={CORD-19: The Covid-19 Open Research Dataset},
author={Lucy Lu Wang and Kyle Lo and Yoganand Chandrasekhar and Russell Reas and Jiangjiang Yang and Darrin Eide and K. Funk and Rodney Michael Kinney and Ziyang Liu and W. Merrill and P. Mooney and D. Murdick and Devvret Rishi and Jerry Sheehan and Zhihong Shen and B. Stilson and A. Wade and K. Wang and Christopher Wilhelm and Boya Xie and D. Raymond and Daniel S. Weld and Oren Etzioni and Sebastian Kohlmeier},
journal={ArXiv},
year={2020}
}
```
|
{"language": "en", "thumbnail": "https://github.githubassets.com/images/icons/emoji/unicode/2695.png", "widget": [{"text": "Lung infiltrates cause significant morbidity and mortality in immunocompromised <mask>."}, {"text": "Tuberculosis appears to be an important <mask> in endemic regions especially in the non-HIV, non-hematologic malignancy group."}, {"text": "For vector-transmitted diseases this places huge significance on vector mortality rates as vectors usually don't <mask> an infection and instead remain infectious for life."}, {"text": "The lung lesions were characterized by bronchointerstitial pneumonia with accumulation of neutrophils, macrophages and necrotic debris in <mask> and bronchiolar lumens and peribronchiolar/perivascular infiltration of inflammatory cells."}]}
|
amoux/roberta-cord19-1M7k
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"roberta",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #roberta #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us
|
# roberta-cord19-1M7k
.
#### Usage
- Predicted tokens
## Dataset
- About
- name: *CORD-19: The Covid-19 Open Research Dataset*
- date: *2020-03-18*
- md5 | sha1: 'a36fe181 | 8fbea927'
- text-key: 'body_text'
- subsets (*total*: '13,202'):
- *biorxiv_medrxiv*: '803'
- *comm_use_subset*: '9000'
- *pmc_custom_license*: '1426'
- *noncomm_use_subset*: '1973'
- Splits (*ratio: 0.9*)
- sentences used for training: '1,687,124'
- sentences used for evaluation: '187,459'
- Total training steps: '210,890'
- Total evaluation steps: '23,433'
## Parameters
- Data
- block_size: '256'
- Training
- per_device_train_batch_size: '8'
- per_device_eval_batch_size: '8'
- gradient_accumulation_steps: '2'
- learning_rate: '5e-5'
- num_train_epochs: '2'
- fp16: 'True'
- fp16_opt_level: ''01''
- seed: '42'
- Output
- global_step: '210890'
- training_loss: '3.5964575726682155'
## Evaluation
- Perplexity: '17.469366079957922'
> Allen Institute CORD-19 Historical Releases
|
[
"# roberta-cord19-1M7k\n\n.",
"#### Usage\n\n\n\n- Predicted tokens",
"## Dataset\n\n- About\n\t- name: *CORD-19: The Covid-19 Open Research Dataset*\n\t- date: *2020-03-18*\n\t- md5 | sha1: 'a36fe181 | 8fbea927'\n\t- text-key: 'body_text'\n\t- subsets (*total*: '13,202'):\n\t - *biorxiv_medrxiv*: '803'\n\t - *comm_use_subset*: '9000'\n\t - *pmc_custom_license*: '1426'\n\t - *noncomm_use_subset*: '1973'\n- Splits (*ratio: 0.9*)\n\t- sentences used for training: '1,687,124'\n\t- sentences used for evaluation: '187,459'\n- Total training steps: '210,890'\n- Total evaluation steps: '23,433'",
"## Parameters\n\n- Data\n\t- block_size: '256'\n- Training\n\t- per_device_train_batch_size: '8'\n\t- per_device_eval_batch_size: '8'\n\t- gradient_accumulation_steps: '2'\n\t- learning_rate: '5e-5'\n\t- num_train_epochs: '2'\n\t- fp16: 'True'\n\t- fp16_opt_level: ''01''\n\t- seed: '42'\n- Output\n - global_step: '210890'\n - training_loss: '3.5964575726682155'",
"## Evaluation\n\n- Perplexity: '17.469366079957922'\n\n> Allen Institute CORD-19 Historical Releases"
] |
[
"TAGS\n#transformers #pytorch #tf #jax #roberta #fill-mask #en #autotrain_compatible #endpoints_compatible #region-us \n",
"# roberta-cord19-1M7k\n\n.",
"#### Usage\n\n\n\n- Predicted tokens",
"## Dataset\n\n- About\n\t- name: *CORD-19: The Covid-19 Open Research Dataset*\n\t- date: *2020-03-18*\n\t- md5 | sha1: 'a36fe181 | 8fbea927'\n\t- text-key: 'body_text'\n\t- subsets (*total*: '13,202'):\n\t - *biorxiv_medrxiv*: '803'\n\t - *comm_use_subset*: '9000'\n\t - *pmc_custom_license*: '1426'\n\t - *noncomm_use_subset*: '1973'\n- Splits (*ratio: 0.9*)\n\t- sentences used for training: '1,687,124'\n\t- sentences used for evaluation: '187,459'\n- Total training steps: '210,890'\n- Total evaluation steps: '23,433'",
"## Parameters\n\n- Data\n\t- block_size: '256'\n- Training\n\t- per_device_train_batch_size: '8'\n\t- per_device_eval_batch_size: '8'\n\t- gradient_accumulation_steps: '2'\n\t- learning_rate: '5e-5'\n\t- num_train_epochs: '2'\n\t- fp16: 'True'\n\t- fp16_opt_level: ''01''\n\t- seed: '42'\n- Output\n - global_step: '210890'\n - training_loss: '3.5964575726682155'",
"## Evaluation\n\n- Perplexity: '17.469366079957922'\n\n> Allen Institute CORD-19 Historical Releases"
] |
token-classification
|
flair
|
#### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5)
7-class NER English model using [Flair TransformerWordEmbeddings - distilroberta-base](https://github.com/flairNLP/flair/).
| **tag** | **meaning** |
|---------------------------------|-----------|
| nb_rounds | Number of rounds |
| duration_br_sd | Duration between rounds in seconds |
| duration_br_min | Duration between rounds in minutes |
| duration_br_hr | Duration between rounds in hours |
| duration_wt_sd | Workout duration in seconds |
| duration_wt_min | Workout duration in minutes |
| duration_wt_hr | Workout duration in hours |
---
The dataset was created manually (and could be improved). Example sentences:
```
19 sets of 3 minutes 21 minutes between sets
start 7 sets of 32 seconds
create 13 sets of 26 seconds
init 8 series of 3 hours
2 sets of 30 seconds 35 minutes between each cycle
...
```
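A minimal inference sketch, assuming the checkpoint loads through Flair's standard `SequenceTagger` interface from the Hugging Face Hub:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the tagger from the Hugging Face Hub
tagger = SequenceTagger.load("amtam0/timer-ner-en")

# Tag one of the example sentences above
sentence = Sentence("12 sets of 2 minutes 38 minutes between each set")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```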
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "12 sets of 2 minutes 38 minutes between each set"}]}
|
amtam0/timer-ner-en
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #en #region-us
|
#### This model is used in the Speech Interval Timer app
7-class NER English model using Flair TransformerWordEmbeddings - distilroberta-base.
---
The dataset was created manually (and could be improved). Example sentences :
|
[
"#### This model is used in the Speech Interval Timer app\n\n\n7-class NER English model using Flair TransformerWordEmbeddings - distilroberta-base.\n\n\n\n\n\n---\n\n\nThe dataset was created manually (perfectible). Sentences example :"
] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #en #region-us \n",
"#### This model is used in the Speech Interval Timer app\n\n\n7-class NER English model using Flair TransformerWordEmbeddings - distilroberta-base.\n\n\n\n\n\n---\n\n\nThe dataset was created manually (perfectible). Sentences example :"
] |
token-classification
|
flair
|
#### This model is used in the [Speech Interval Timer app](https://medium.com/@amtam0/speech-interval-timer-app-using-transformers-1df8fa3821d5)
7-class NER French model using [Flair TransformerWordEmbeddings - camembert-base](https://github.com/flairNLP/flair/).
| **tag** | **meaning** |
|---------------------------------|-----------|
| nb_rounds | Number of rounds |
| duration_br_sd | Duration between rounds in seconds |
| duration_br_min | Duration between rounds in minutes |
| duration_br_hr | Duration between rounds in hours |
| duration_wt_sd | Workout duration in seconds |
| duration_wt_min | Workout duration in minutes |
| duration_wt_hr | Workout duration in hours |
---
A synthetic dataset was used (and could be improved). Example sentences are shown in the widget.
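Usage mirrors the English model; a minimal sketch, assuming the checkpoint loads through Flair's standard `SequenceTagger` interface:
```python
from flair.data import Sentence
from flair.models import SequenceTagger

tagger = SequenceTagger.load("amtam0/timer-ner-fr")
sentence = Sentence("initie 17 sets de 44 secondes 297 minutes entre séries")
tagger.predict(sentence)
print(sentence.to_tagged_string())
```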
|
{"language": "fr", "tags": ["flair", "token-classification", "sequence-tagger-model"], "widget": [{"text": "g\u00e9n\u00e8re 27 s\u00e9ries de 54 seconde "}, {"text": " 9 cycles de 17 minute "}, {"text": "initie 17 sets de 44 secondes 297 minutes entre s\u00e9ries"}, {"text": " 13 sets de 88 secondes 225 minutes 49 entre chaque s\u00e9rie"}, {"text": "g\u00e9n\u00e8re 39 s\u00e9ries de 19 minute 21 minute 45 entre s\u00e9ries"}, {"text": "d\u00e9bute 47 sets de 6 heures "}, {"text": "d\u00e9bute 1 cycle de 25 minutes 48 23 minute 32 entre chaque s\u00e9rie"}, {"text": "commence 23 s\u00e9ries de 18 heure et demi 25 minutes 41 entre s\u00e9ries"}, {"text": " 13 cycles de 52 secondes "}, {"text": "cr\u00e9e 31 s\u00e9rie de 60 secondes "}, {"text": " 7 set de 36 secondes 139 minutes 34 entre s\u00e9ries"}, {"text": "commence 37 sets de 51 minute 25 295 minute entre chaque s\u00e9rie"}, {"text": "cr\u00e9e 11 cycles de 72 seconde 169 minute 15 entre chaque s\u00e9rie"}, {"text": "initie 5 s\u00e9rie de 33 minutes 48 "}, {"text": "cr\u00e9e 23 set de 1 minute 46 279 minutes 50 entre chaque s\u00e9rie"}, {"text": "g\u00e9n\u00e8re 41 s\u00e9rie de 35 minutes 55 "}, {"text": "lance 11 cycles de 4 heures "}, {"text": "cr\u00e9e 47 cycle de 28 heure moins quart 243 minutes 45 entre chaque s\u00e9rie"}, {"text": "initie 23 set de 36 secondes "}, {"text": "commence 37 sets de 24 heures et quart "}]}
|
amtam0/timer-ner-fr
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"fr",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#flair #pytorch #token-classification #sequence-tagger-model #fr #region-us
|
#### This model is used in the Speech Interval Timer app
7-class NER French model using Flair TransformerWordEmbeddings - camembert-base.
---
A synthetic dataset was used (and could be improved). Example sentences are shown in the widget.
|
[
"#### This model is used in the Speech Interval Timer app\n\n\n7-class NER French model using Flair TransformerWordEmbeddings - camembert-base.\n\n\n\n\n\n---\n\n\nSynthetic dataset has been used (perfectible). Sentences example in the widget."
] |
[
"TAGS\n#flair #pytorch #token-classification #sequence-tagger-model #fr #region-us \n",
"#### This model is used in the Speech Interval Timer app\n\n\n7-class NER French model using Flair TransformerWordEmbeddings - camembert-base.\n\n\n\n\n\n---\n\n\nSynthetic dataset has been used (perfectible). Sentences example in the widget."
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
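Since the card omits a usage snippet, here is a minimal inference sketch, assuming the checkpoint exposes the standard `Wav2Vec2ForCTC` interface and 16 kHz input (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="anan0329/wav2vec2-base-timit-demo-colab")
print(asr("sample.wav"))  # placeholder path to a 16 kHz mono WAV file
```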
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
|
anan0329/wav2vec2-base-timit-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"# wav2vec2-base-timit-demo-colab\n\nThis model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 2\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1713
- Accuracy: 0.9460
- F1: 0.9509
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.323 | 1.0 | 96 | 0.2699 | 0.9026 | 0.9085 |
| 0.2003 | 2.0 | 192 | 0.2005 | 0.9234 | 0.9300 |
| 0.1808 | 3.0 | 288 | 0.1780 | 0.9377 | 0.9438 |
| 0.1537 | 4.0 | 384 | 0.1673 | 0.9441 | 0.9488 |
| 0.1135 | 5.0 | 480 | 0.1713 | 0.9460 | 0.9509 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
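For completeness, a minimal inference sketch for the adult/child classifier, assuming the checkpoint works with the standard audio-classification pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

clf = pipeline("audio-classification", model="anantoj/wav2vec2-adult-child-cls")
print(clf("speaker_sample.wav"))  # placeholder path; returns label/score pairs
```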
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-adult-child-cls", "results": []}]}
|
anantoj/wav2vec2-adult-child-cls
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
wav2vec2-adult-child-cls
========================
This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1713
* Accuracy: 0.9460
* F1: 0.9509
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1755
- Accuracy: 0.9432
- F1: 0.9472
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.368 | 1.0 | 383 | 0.2560 | 0.9072 | 0.9126 |
| 0.2013 | 2.0 | 766 | 0.1959 | 0.9321 | 0.9362 |
| 0.22 | 3.0 | 1149 | 0.1755 | 0.9432 | 0.9472 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
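A minimal sketch of how the hyperparameters above would map onto `TrainingArguments` (illustrative only; the actual training script is not included in this card, and `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-adult-child-cls",  # placeholder
    learning_rate=4e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    seed=42,
)
```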
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-xls-r-300m-adult-child-cls", "results": []}]}
|
anantoj/wav2vec2-large-xlsr-53-adult-child-cls
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
wav2vec2-xls-r-300m-adult-child-cls
===================================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1755
* Accuracy: 0.9432
* F1: 0.9472
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 4e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 4e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the KRESNIK/ZEROTH_KOREAN - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0639
- Wer: 0.0449
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.603 | 0.72 | 500 | 4.6572 | 0.9985 |
| 2.6314 | 1.44 | 1000 | 2.0424 | 0.9256 |
| 2.2708 | 2.16 | 1500 | 0.9889 | 0.6989 |
| 2.1769 | 2.88 | 2000 | 0.8366 | 0.6312 |
| 2.1142 | 3.6 | 2500 | 0.7555 | 0.5998 |
| 2.0084 | 4.32 | 3000 | 0.7144 | 0.6003 |
| 1.9272 | 5.04 | 3500 | 0.6311 | 0.5461 |
| 1.8687 | 5.75 | 4000 | 0.6252 | 0.5430 |
| 1.8186 | 6.47 | 4500 | 0.5491 | 0.4988 |
| 1.7364 | 7.19 | 5000 | 0.5463 | 0.4959 |
| 1.6809 | 7.91 | 5500 | 0.4724 | 0.4484 |
| 1.641 | 8.63 | 6000 | 0.4679 | 0.4461 |
| 1.572 | 9.35 | 6500 | 0.4387 | 0.4236 |
| 1.5256 | 10.07 | 7000 | 0.3970 | 0.4003 |
| 1.5044 | 10.79 | 7500 | 0.3690 | 0.3893 |
| 1.4563 | 11.51 | 8000 | 0.3752 | 0.3875 |
| 1.394 | 12.23 | 8500 | 0.3386 | 0.3567 |
| 1.3641 | 12.95 | 9000 | 0.3290 | 0.3467 |
| 1.2878 | 13.67 | 9500 | 0.2893 | 0.3135 |
| 1.2602 | 14.39 | 10000 | 0.2723 | 0.3029 |
| 1.2302 | 15.11 | 10500 | 0.2603 | 0.2989 |
| 1.1865 | 15.83 | 11000 | 0.2440 | 0.2794 |
| 1.1491 | 16.55 | 11500 | 0.2500 | 0.2788 |
| 1.093 | 17.27 | 12000 | 0.2279 | 0.2629 |
| 1.0367 | 17.98 | 12500 | 0.2076 | 0.2443 |
| 0.9954 | 18.7 | 13000 | 0.1844 | 0.2259 |
| 0.99 | 19.42 | 13500 | 0.1794 | 0.2179 |
| 0.9385 | 20.14 | 14000 | 0.1765 | 0.2122 |
| 0.8952 | 20.86 | 14500 | 0.1706 | 0.1974 |
| 0.8841 | 21.58 | 15000 | 0.1791 | 0.1969 |
| 0.847 | 22.3 | 15500 | 0.1780 | 0.2060 |
| 0.8669 | 23.02 | 16000 | 0.1608 | 0.1862 |
| 0.8066 | 23.74 | 16500 | 0.1447 | 0.1626 |
| 0.7908 | 24.46 | 17000 | 0.1457 | 0.1655 |
| 0.7459 | 25.18 | 17500 | 0.1350 | 0.1445 |
| 0.7218 | 25.9 | 18000 | 0.1276 | 0.1421 |
| 0.703 | 26.62 | 18500 | 0.1177 | 0.1302 |
| 0.685 | 27.34 | 19000 | 0.1147 | 0.1305 |
| 0.6811 | 28.06 | 19500 | 0.1128 | 0.1244 |
| 0.6444 | 28.78 | 20000 | 0.1120 | 0.1213 |
| 0.6323 | 29.5 | 20500 | 0.1137 | 0.1166 |
| 0.5998 | 30.22 | 21000 | 0.1051 | 0.1107 |
| 0.5706 | 30.93 | 21500 | 0.1035 | 0.1037 |
| 0.5555 | 31.65 | 22000 | 0.1031 | 0.0927 |
| 0.5389 | 32.37 | 22500 | 0.0997 | 0.0900 |
| 0.5201 | 33.09 | 23000 | 0.0920 | 0.0912 |
| 0.5146 | 33.81 | 23500 | 0.0929 | 0.0947 |
| 0.515 | 34.53 | 24000 | 0.1000 | 0.0953 |
| 0.4743 | 35.25 | 24500 | 0.0922 | 0.0892 |
| 0.4707 | 35.97 | 25000 | 0.0852 | 0.0808 |
| 0.4456 | 36.69 | 25500 | 0.0855 | 0.0779 |
| 0.443 | 37.41 | 26000 | 0.0843 | 0.0738 |
| 0.4388 | 38.13 | 26500 | 0.0816 | 0.0699 |
| 0.4162 | 38.85 | 27000 | 0.0752 | 0.0645 |
| 0.3979 | 39.57 | 27500 | 0.0761 | 0.0621 |
| 0.3889 | 40.29 | 28000 | 0.0771 | 0.0625 |
| 0.3923 | 41.01 | 28500 | 0.0755 | 0.0598 |
| 0.3693 | 41.73 | 29000 | 0.0730 | 0.0578 |
| 0.3642 | 42.45 | 29500 | 0.0739 | 0.0598 |
| 0.3532 | 43.17 | 30000 | 0.0712 | 0.0553 |
| 0.3513 | 43.88 | 30500 | 0.0762 | 0.0516 |
| 0.3349 | 44.6 | 31000 | 0.0731 | 0.0504 |
| 0.3305 | 45.32 | 31500 | 0.0725 | 0.0507 |
| 0.3285 | 46.04 | 32000 | 0.0709 | 0.0489 |
| 0.3179 | 46.76 | 32500 | 0.0667 | 0.0467 |
| 0.3158 | 47.48 | 33000 | 0.0653 | 0.0494 |
| 0.3033 | 48.2 | 33500 | 0.0638 | 0.0456 |
| 0.3023 | 48.92 | 34000 | 0.0644 | 0.0464 |
| 0.2975 | 49.64 | 34500 | 0.0643 | 0.0455 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
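The reported WER can be computed with the era-appropriate `datasets` metric API; a minimal sketch (the prediction and reference lists are placeholders):
```python
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["안녕하세요"]  # placeholder model transcriptions
references = ["안녕하세요"]   # placeholder ground-truth transcripts
print(wer_metric.compute(predictions=predictions, references=references))
```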
|
{"language": "ko", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["kresnik/zeroth_korean"], "model-index": [{"name": "Wav2Vec2 XLS-R 1B Korean", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ko"}, "metrics": [{"type": "wer", "value": 82.07, "name": "Test WER"}, {"type": "cer", "value": 42.12, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ko"}, "metrics": [{"type": "wer", "value": 82.09, "name": "Test WER"}]}]}]}
|
anantoj/wav2vec2-xls-r-1b-korean
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ko",
"dataset:kresnik/zeroth_korean",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ko #dataset-kresnik/zeroth_korean #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-1b on the KRESNIK/ZEROTH\_KOREAN - CLEAN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0639
* Wer: 0.0449
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 50.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ko #dataset-kresnik/zeroth_korean #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
audio-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-adult-child-cls
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1770
- Accuracy: 0.9404
- F1: 0.9440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.25 | 1.0 | 383 | 0.2516 | 0.9077 | 0.9106 |
| 0.2052 | 2.0 | 766 | 0.2138 | 0.9321 | 0.9353 |
| 0.1901 | 3.0 | 1149 | 0.1770 | 0.9404 | 0.9440 |
| 0.2255 | 4.0 | 1532 | 0.1794 | 0.9404 | 0.9440 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "wav2vec2-xls-r-300m-adult-child-cls", "results": []}]}
|
anantoj/wav2vec2-xls-r-300m-adult-child-cls
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-adult-child-cls
===================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1770
* Accuracy: 0.9404
* F1: 0.9440
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 4
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #audio-classification #generated_from_trainer #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 4",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - ZH-CN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8122
- Wer: 0.8392
- Cer: 0.2059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 69.215 | 0.74 | 500 | 74.9751 | 1.0 | 1.0 |
| 8.2109 | 1.48 | 1000 | 7.0617 | 1.0 | 1.0 |
| 6.4277 | 2.22 | 1500 | 6.3811 | 1.0 | 1.0 |
| 6.3513 | 2.95 | 2000 | 6.3061 | 1.0 | 1.0 |
| 6.2522 | 3.69 | 2500 | 6.2147 | 1.0 | 1.0 |
| 5.9757 | 4.43 | 3000 | 5.7906 | 1.1004 | 0.9924 |
| 5.0642 | 5.17 | 3500 | 4.2984 | 1.7729 | 0.8214 |
| 4.6346 | 5.91 | 4000 | 3.7129 | 1.8946 | 0.7728 |
| 4.267 | 6.65 | 4500 | 3.2177 | 1.7526 | 0.6922 |
| 3.9964 | 7.39 | 5000 | 2.8337 | 1.8055 | 0.6546 |
| 3.8035 | 8.12 | 5500 | 2.5726 | 2.1851 | 0.6992 |
| 3.6273 | 8.86 | 6000 | 2.3391 | 2.1029 | 0.6511 |
| 3.5248 | 9.6 | 6500 | 2.1944 | 2.3617 | 0.6859 |
| 3.3683 | 10.34 | 7000 | 1.9827 | 2.1014 | 0.6063 |
| 3.2411 | 11.08 | 7500 | 1.8610 | 1.6160 | 0.5135 |
| 3.1299 | 11.82 | 8000 | 1.7446 | 1.5948 | 0.4946 |
| 3.0574 | 12.56 | 8500 | 1.6454 | 1.1291 | 0.4051 |
| 2.985 | 13.29 | 9000 | 1.5919 | 1.0673 | 0.3893 |
| 2.9573 | 14.03 | 9500 | 1.4903 | 1.0604 | 0.3766 |
| 2.8897 | 14.77 | 10000 | 1.4614 | 1.0059 | 0.3653 |
| 2.8169 | 15.51 | 10500 | 1.3997 | 1.0030 | 0.3550 |
| 2.8155 | 16.25 | 11000 | 1.3444 | 0.9980 | 0.3441 |
| 2.7595 | 16.99 | 11500 | 1.2911 | 0.9703 | 0.3325 |
| 2.7107 | 17.72 | 12000 | 1.2462 | 0.9565 | 0.3227 |
| 2.6358 | 18.46 | 12500 | 1.2466 | 0.9955 | 0.3333 |
| 2.5801 | 19.2 | 13000 | 1.2059 | 1.0010 | 0.3226 |
| 2.5554 | 19.94 | 13500 | 1.1919 | 1.0094 | 0.3223 |
| 2.5314 | 20.68 | 14000 | 1.1703 | 0.9847 | 0.3156 |
| 2.509 | 21.42 | 14500 | 1.1733 | 0.9896 | 0.3177 |
| 2.4391 | 22.16 | 15000 | 1.1811 | 0.9723 | 0.3164 |
| 2.4631 | 22.89 | 15500 | 1.1382 | 0.9698 | 0.3059 |
| 2.4414 | 23.63 | 16000 | 1.0893 | 0.9644 | 0.2972 |
| 2.3771 | 24.37 | 16500 | 1.0930 | 0.9505 | 0.2954 |
| 2.3658 | 25.11 | 17000 | 1.0756 | 0.9609 | 0.2926 |
| 2.3215 | 25.85 | 17500 | 1.0512 | 0.9614 | 0.2890 |
| 2.3327 | 26.59 | 18000 | 1.0627 | 1.1984 | 0.3282 |
| 2.3055 | 27.33 | 18500 | 1.0582 | 0.9520 | 0.2841 |
| 2.299 | 28.06 | 19000 | 1.0356 | 0.9480 | 0.2817 |
| 2.2673 | 28.8 | 19500 | 1.0305 | 0.9367 | 0.2771 |
| 2.2166 | 29.54 | 20000 | 1.0139 | 0.9223 | 0.2702 |
| 2.2378 | 30.28 | 20500 | 1.0095 | 0.9268 | 0.2722 |
| 2.2168 | 31.02 | 21000 | 1.0001 | 0.9085 | 0.2691 |
| 2.1766 | 31.76 | 21500 | 0.9884 | 0.9050 | 0.2640 |
| 2.1715 | 32.5 | 22000 | 0.9730 | 0.9505 | 0.2719 |
| 2.1104 | 33.23 | 22500 | 0.9752 | 0.9362 | 0.2656 |
| 2.1158 | 33.97 | 23000 | 0.9720 | 0.9263 | 0.2624 |
| 2.0718 | 34.71 | 23500 | 0.9573 | 1.0005 | 0.2759 |
| 2.0824 | 35.45 | 24000 | 0.9609 | 0.9525 | 0.2643 |
| 2.0591 | 36.19 | 24500 | 0.9662 | 0.9570 | 0.2667 |
| 2.0768 | 36.93 | 25000 | 0.9528 | 0.9574 | 0.2646 |
| 2.0893 | 37.67 | 25500 | 0.9810 | 0.9169 | 0.2612 |
| 2.0282 | 38.4 | 26000 | 0.9556 | 0.8877 | 0.2528 |
| 1.997 | 39.14 | 26500 | 0.9523 | 0.8723 | 0.2501 |
| 2.0209 | 39.88 | 27000 | 0.9542 | 0.8773 | 0.2503 |
| 1.987 | 40.62 | 27500 | 0.9427 | 0.8867 | 0.2500 |
| 1.9663 | 41.36 | 28000 | 0.9546 | 0.9065 | 0.2546 |
| 1.9945 | 42.1 | 28500 | 0.9431 | 0.9119 | 0.2536 |
| 1.9604 | 42.84 | 29000 | 0.9367 | 0.9030 | 0.2490 |
| 1.933 | 43.57 | 29500 | 0.9071 | 0.8916 | 0.2432 |
| 1.9227 | 44.31 | 30000 | 0.9048 | 0.8882 | 0.2428 |
| 1.8784 | 45.05 | 30500 | 0.9106 | 0.8991 | 0.2437 |
| 1.8844 | 45.79 | 31000 | 0.8996 | 0.8758 | 0.2379 |
| 1.8776 | 46.53 | 31500 | 0.9028 | 0.8798 | 0.2395 |
| 1.8372 | 47.27 | 32000 | 0.9047 | 0.8778 | 0.2379 |
| 1.832 | 48.01 | 32500 | 0.9016 | 0.8941 | 0.2393 |
| 1.8154 | 48.74 | 33000 | 0.8915 | 0.8916 | 0.2372 |
| 1.8072 | 49.48 | 33500 | 0.8781 | 0.8872 | 0.2365 |
| 1.7489 | 50.22 | 34000 | 0.8738 | 0.8956 | 0.2340 |
| 1.7928 | 50.96 | 34500 | 0.8684 | 0.8872 | 0.2323 |
| 1.7748 | 51.7 | 35000 | 0.8723 | 0.8718 | 0.2321 |
| 1.7355 | 52.44 | 35500 | 0.8760 | 0.8842 | 0.2331 |
| 1.7167 | 53.18 | 36000 | 0.8746 | 0.8817 | 0.2324 |
| 1.7479 | 53.91 | 36500 | 0.8762 | 0.8753 | 0.2281 |
| 1.7428 | 54.65 | 37000 | 0.8733 | 0.8699 | 0.2277 |
| 1.7058 | 55.39 | 37500 | 0.8816 | 0.8649 | 0.2263 |
| 1.7045 | 56.13 | 38000 | 0.8733 | 0.8689 | 0.2297 |
| 1.709 | 56.87 | 38500 | 0.8648 | 0.8654 | 0.2232 |
| 1.6799 | 57.61 | 39000 | 0.8717 | 0.8580 | 0.2244 |
| 1.664 | 58.35 | 39500 | 0.8653 | 0.8723 | 0.2259 |
| 1.6488 | 59.08 | 40000 | 0.8637 | 0.8803 | 0.2271 |
| 1.6298 | 59.82 | 40500 | 0.8553 | 0.8768 | 0.2253 |
| 1.6185 | 60.56 | 41000 | 0.8512 | 0.8718 | 0.2240 |
| 1.574 | 61.3 | 41500 | 0.8579 | 0.8773 | 0.2251 |
| 1.6192 | 62.04 | 42000 | 0.8499 | 0.8743 | 0.2242 |
| 1.6275 | 62.78 | 42500 | 0.8419 | 0.8758 | 0.2216 |
| 1.5697 | 63.52 | 43000 | 0.8446 | 0.8699 | 0.2222 |
| 1.5384 | 64.25 | 43500 | 0.8462 | 0.8580 | 0.2200 |
| 1.5115 | 64.99 | 44000 | 0.8467 | 0.8674 | 0.2214 |
| 1.5547 | 65.73 | 44500 | 0.8505 | 0.8669 | 0.2204 |
| 1.5597 | 66.47 | 45000 | 0.8421 | 0.8684 | 0.2192 |
| 1.505 | 67.21 | 45500 | 0.8485 | 0.8619 | 0.2187 |
| 1.5101 | 67.95 | 46000 | 0.8489 | 0.8649 | 0.2204 |
| 1.5199 | 68.69 | 46500 | 0.8407 | 0.8619 | 0.2180 |
| 1.5207 | 69.42 | 47000 | 0.8379 | 0.8496 | 0.2163 |
| 1.478 | 70.16 | 47500 | 0.8357 | 0.8595 | 0.2163 |
| 1.4817 | 70.9 | 48000 | 0.8346 | 0.8496 | 0.2151 |
| 1.4827 | 71.64 | 48500 | 0.8362 | 0.8624 | 0.2169 |
| 1.4513 | 72.38 | 49000 | 0.8355 | 0.8451 | 0.2137 |
| 1.4988 | 73.12 | 49500 | 0.8325 | 0.8624 | 0.2161 |
| 1.4267 | 73.85 | 50000 | 0.8396 | 0.8481 | 0.2157 |
| 1.4421 | 74.59 | 50500 | 0.8355 | 0.8491 | 0.2122 |
| 1.4311 | 75.33 | 51000 | 0.8358 | 0.8476 | 0.2118 |
| 1.4174 | 76.07 | 51500 | 0.8289 | 0.8451 | 0.2101 |
| 1.4349 | 76.81 | 52000 | 0.8372 | 0.8580 | 0.2140 |
| 1.3959 | 77.55 | 52500 | 0.8325 | 0.8436 | 0.2116 |
| 1.4087 | 78.29 | 53000 | 0.8351 | 0.8446 | 0.2105 |
| 1.415 | 79.03 | 53500 | 0.8363 | 0.8476 | 0.2123 |
| 1.4122 | 79.76 | 54000 | 0.8310 | 0.8481 | 0.2112 |
| 1.3969 | 80.5 | 54500 | 0.8239 | 0.8446 | 0.2095 |
| 1.361 | 81.24 | 55000 | 0.8282 | 0.8427 | 0.2091 |
| 1.3611 | 81.98 | 55500 | 0.8282 | 0.8407 | 0.2092 |
| 1.3677 | 82.72 | 56000 | 0.8235 | 0.8436 | 0.2084 |
| 1.3361 | 83.46 | 56500 | 0.8231 | 0.8377 | 0.2069 |
| 1.3779 | 84.19 | 57000 | 0.8206 | 0.8436 | 0.2070 |
| 1.3727 | 84.93 | 57500 | 0.8204 | 0.8392 | 0.2065 |
| 1.3317 | 85.67 | 58000 | 0.8207 | 0.8436 | 0.2065 |
| 1.3332 | 86.41 | 58500 | 0.8186 | 0.8357 | 0.2055 |
| 1.3299 | 87.15 | 59000 | 0.8193 | 0.8417 | 0.2075 |
| 1.3129 | 87.89 | 59500 | 0.8183 | 0.8431 | 0.2065 |
| 1.3352 | 88.63 | 60000 | 0.8151 | 0.8471 | 0.2062 |
| 1.3026 | 89.36 | 60500 | 0.8125 | 0.8486 | 0.2067 |
| 1.3468 | 90.1 | 61000 | 0.8124 | 0.8407 | 0.2058 |
| 1.3028 | 90.84 | 61500 | 0.8122 | 0.8461 | 0.2051 |
| 1.2884 | 91.58 | 62000 | 0.8086 | 0.8427 | 0.2048 |
| 1.3005 | 92.32 | 62500 | 0.8110 | 0.8387 | 0.2055 |
| 1.2996 | 93.06 | 63000 | 0.8126 | 0.8328 | 0.2057 |
| 1.2707 | 93.8 | 63500 | 0.8098 | 0.8402 | 0.2047 |
| 1.3026 | 94.53 | 64000 | 0.8097 | 0.8402 | 0.2050 |
| 1.2546 | 95.27 | 64500 | 0.8111 | 0.8402 | 0.2055 |
| 1.2426 | 96.01 | 65000 | 0.8088 | 0.8372 | 0.2059 |
| 1.2869 | 96.75 | 65500 | 0.8093 | 0.8397 | 0.2048 |
| 1.2782 | 97.49 | 66000 | 0.8099 | 0.8412 | 0.2049 |
| 1.2457 | 98.23 | 66500 | 0.8134 | 0.8412 | 0.2062 |
| 1.2967 | 98.97 | 67000 | 0.8115 | 0.8382 | 0.2055 |
| 1.2817 | 99.7 | 67500 | 0.8128 | 0.8392 | 0.2063 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3.dev0
- Tokenizers 0.11.0
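Both WER and CER are reported above; a minimal sketch of computing CER with the `datasets` metric API, assuming the `cer` metric is available in this `datasets` version (lists are placeholders):
```python
from datasets import load_metric

cer_metric = load_metric("cer")
predictions = ["你好世界"]  # placeholder model transcriptions
references = ["你好世界"]   # placeholder ground-truth transcripts
print(cer_metric.compute(predictions=predictions, references=references))
```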
|
{"language": ["zh-CN"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "sv"], "datasets": ["common_voice"], "model-index": [{"name": "", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "zh-CN"}, "metrics": [{"type": "cer", "value": 66.22, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "zh-CN"}, "metrics": [{"type": "cer", "value": 37.51, "name": "Test CER"}]}]}]}
|
anantoj/wav2vec2-xls-r-300m-zh-CN
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"sv",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh-CN"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #sv #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the COMMON\_VOICE - ZH-CN dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8122
* Wer: 0.8392
* Cer: 0.2059
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 7.5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 2000
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.3.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #sv #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 7.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.3.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice Corpus 4](https://commonvoice.mozilla.org/en/datasets) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ar", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ar", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model = Wav2Vec2ForCTC.from_pretrained("anas/wav2vec2-large-xlsr-arabic")
model.to("cuda")
chars_to_ignore_regex = '[\,\؟\.\!\-\;\\:\'\"\☭\«\»\؛\—\ـ\_\،\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    batch["sentence"] = re.sub('[a-z]', '', batch["sentence"])
    batch["sentence"] = re.sub("[إأٱآا]", "ا", batch["sentence"])
    noise = re.compile(""" ّ | # Tashdid
                           َ | # Fatha
                           ً | # Tanwin Fath
                           ُ | # Damma
                           ٌ | # Tanwin Damm
                           ِ | # Kasra
                           ٍ | # Tanwin Kasr
                           ْ | # Sukun
                           ـ   # Tatwil/Kashida
                       """, re.VERBOSE)
    batch["sentence"] = re.sub(noise, '', batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 52.18 %
## Training
The Common Voice Corpus 4 `train` and `validation` splits were used for training.
The script used for training can be found [here](https://github.com/anashas/Fine-Tuning-of-XLSR-Wav2Vec2-on-Arabic)
Twitter: [here](https://twitter.com/hasnii_anas)
Email: [email protected]
|
{"language": "ar", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": [{"common_voice": "Common Voice Corpus 4"}], "metrics": ["wer"], "model-index": [{"name": "Hasni XLSR Wav2Vec2 Large 53", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ar", "type": "common_voice", "args": "ar"}, "metrics": [{"type": "wer", "value": 52.18, "name": "Test WER"}]}]}]}
|
anas/wav2vec2-large-xlsr-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"ar",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ar #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# Wav2Vec2-Large-XLSR-53-Arabic
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the Common Voice Corpus 4 dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice.
Test Result: 52.18 %
## Training
The Common Voice Corpus 4 'train' and 'validation' splits were used for training
The script used for training can be found here
Twitter: here
Email: anashasni146@URL
|
[
"# Wav2Vec2-Large-XLSR-53-Arabic\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the Common Voice Corpus 4 dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Arabic test data of Common Voice.\n\n\n\n\nTest Result: 52.18 %",
"## Training\n\nThe Common Voice Corpus 4 'train', 'validation', datasets were used for training\n\nThe script used for training can be found here\n\nTwitter: here\n\nEmail: anashasni146@URL"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #ar #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# Wav2Vec2-Large-XLSR-53-Arabic\n\nFine-tuned facebook/wav2vec2-large-xlsr-53 on Arabic using the Common Voice Corpus 4 dataset.\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\n\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\n\nThe model can be evaluated as follows on the Arabic test data of Common Voice.\n\n\n\n\nTest Result: 52.18 %",
"## Training\n\nThe Common Voice Corpus 4 'train', 'validation', datasets were used for training\n\nThe script used for training can be found here\n\nTwitter: here\n\nEmail: anashasni146@URL"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 40.91769157994324, 'f1': 52.89154394730339}
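The reported dict has the output format of the `squad` metric in `datasets`; a small sketch of how such numbers are computed, with purely illustrative placeholder predictions and references:

```python
from datasets import load_metric

# Illustrative only: the ids, texts, and spans below are placeholders,
# not examples from the actual evaluation run.
squad_metric = load_metric("squad")
predictions = [{"id": "ex-1", "prediction_text": "Denver Broncos"}]
references = [{"id": "ex-1",
               "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}; over the full eval set this card
#    reports {'exact_match': 40.91..., 'f1': 52.89...}.
```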
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
{'exact_match': 40.91769157994324, 'f1': 52.89154394730339}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10",
"### Training results\n\n{'exact_match': 40.91769157994324, 'f1': 52.89154394730339}",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-42\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10",
"### Training results\n\n{'exact_match': 40.91769157994324, 'f1': 52.89154394730339}",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-1024-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
{'exact_match': 12.93282876064333, 'f1': 21.98821604201723}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
{'exact_match': 12.93282876064333, 'f1': 21.98821604201723}
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results\n\n{'exact_match': 12.93282876064333, 'f1': 21.98821604201723}",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-42\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results\n\n{'exact_match': 12.93282876064333, 'f1': 21.98821604201723}",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-128-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
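The hyperparameters listed above map directly onto the `Trainer` API. Below is a sketch of the corresponding `TrainingArguments`; the `output_dir` is a placeholder, and the Adam betas/epsilon are the Trainer defaults, which already match the values in the card.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters above.
# output_dir is a placeholder; betas=(0.9, 0.999) and epsilon=1e-08 are
# the Trainer defaults, so no explicit arguments are needed for them.
args = TrainingArguments(
    output_dir="few-shot-k-16-seed-0",   # placeholder path
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                     # lr_scheduler_warmup_ratio
    max_steps=200,                        # training_steps
)
```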
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
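The card does not document how the 16-example training subset was drawn. One plausible reading, sketched below, is a seeded random sample of the SQuAD train split; the selection procedure itself is an assumption.

```python
from datasets import load_dataset

# Assumption: the k-shot subset is a seeded random sample of the SQuAD
# train split. The card does not document the actual selection procedure.
squad_train = load_dataset("squad", split="train")
few_shot = squad_train.shuffle(seed=10).select(range(16))  # k = 16, seed = 10
print(few_shot[0]["question"])
```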
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
- exact_match: 3.207190160832545
- f1: 6.680463956037787
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
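The exact-match and F1 scores above follow the standard SQuAD metric. A toy sketch of how such numbers are computed is below; the prediction/reference pair is a placeholder, not actual model output.

```python
from datasets import load_metric

# Toy sketch of the SQuAD metric behind the scores above. The
# prediction/reference pair is a placeholder, not real model output.
metric = load_metric("squad")
predictions = [{"id": "0", "prediction_text": "Paris"}]
references = [{"id": "0", "answers": {"text": ["Paris"], "answer_start": [0]}}]
print(metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```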
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
- exact_match: 3.207190160832545
- f1: 6.680463956037787
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 12\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results\n{'exact_match': 3.207190160832545, 'f1': 6.680463956037787}",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-42\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 12\n- eval_batch_size: 12\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results\n{'exact_match': 3.207190160832545, 'f1': 6.680463956037787}",
"### Framework versions\n\n- Transformers 4.16.2\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
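Note that the `seed: 42` above is the Trainer seed, while the seed in the checkpoint name (8) appears to index the sampled data subset; this reading of the naming scheme is an assumption, since the card does not spell it out. A sketch of fixing the Trainer-side RNGs:

```python
from transformers import set_seed

# set_seed fixes the Python, NumPy and PyTorch RNGs in one call,
# matching the "seed: 42" line above. Treating the name suffix (8) as a
# data-subset seed is an assumption, not documented in the card.
set_seed(42)
```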
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-16-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
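For readers who want to skip the pipeline abstraction, here is a low-level sketch of extractive QA with this checkpoint, using the standard start/end-logit argmax; the question and passage are illustrative.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Low-level sketch: standard start/end-logit argmax decoding.
# The question/passage pair below is illustrative.
name = "anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name)

inputs = tokenizer(
    "What does BERT stand for?",
    "BERT stands for Bidirectional Encoder Representations from Transformers.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```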
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
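Unlike the k-16 runs earlier in this series, the k-256 runs train for 10 epochs rather than a fixed 200 steps. A sketch of the optimizer and scheduler pairing the card describes follows; the total step count is an illustrative estimate, not a value stated in the card.

```python
from torch.optim import AdamW
from transformers import AutoModelForQuestionAnswering, get_linear_schedule_with_warmup

# Sketch of the optimizer/scheduler described above: Adam with
# betas=(0.9, 0.999), eps=1e-08 and linear decay after 10% warmup.
# total_steps is an illustrative estimate, ceil(256 / 24) * 10 epochs,
# not a number stated in the card.
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
optimizer = AdamW(model.parameters(), lr=3e-5, betas=(0.9, 0.999), eps=1e-8)

total_steps = 110
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),
    num_training_steps=total_steps,
)
```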
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-256-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
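Feature creation for BERT-style extractive QA conventionally chunks long contexts with a sliding window. A sketch of that preprocessing follows; the `max_length` and `stride` values are conventional defaults, not values documented in this card.

```python
from transformers import AutoTokenizer

# Sketch of SQuAD-style feature creation with a sliding window.
# max_length=384 and stride=128 are conventional defaults, not values
# taken from this card.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
features = tokenizer(
    "What is k in this model series?",
    "In this series of checkpoints, k denotes the number of labeled "
    "training examples used for few-shot fine-tuning.",
    max_length=384,
    truncation="only_second",
    stride=128,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)
print(len(features["input_ids"]))  # number of overlapping chunks
```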
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
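The k-32 record above differs from the epoch-based cards in one field: it caps training at `training_steps: 200` rather than a fixed epoch count. In `TrainingArguments` this corresponds to `max_steps`, which overrides `num_train_epochs` when set. A minimal sketch of that variant, under the same assumptions as before:

```python
from transformers import TrainingArguments

# Step-capped variant; output_dir is again hypothetical.
args = TrainingArguments(
    output_dir="bert-base-uncased-few-shot-k-32-squad",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=200,  # training_steps: 200 in the card
)
```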
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
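Every card in this block carries the `question-answering` pipeline tag, so any of the listed model ids can be loaded directly with the `pipeline` API. A minimal usage sketch, assuming Hub access and using an id that appears in this dump; the question and context strings are illustrative only:

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-0",
)
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-uncased "
            "on the squad dataset.",
)
print(result["answer"], result["score"])  # extracted span plus confidence
```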
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-32-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-2\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-4\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-6\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-512-finetuned-squad-seed-8\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 10.0",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-64-finetuned-squad-seed-0", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-64-finetuned-squad-seed-0
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-0
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-0\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-few-shot-k-64-finetuned-squad-seed-10", "results": []}]}
|
anas-awadalla/bert-base-uncased-few-shot-k-64-finetuned-squad-seed-10
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-10
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 200
### Training results
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"# bert-base-uncased-few-shot-k-64-finetuned-squad-seed-10\n\nThis model is a fine-tuned version of bert-base-uncased on the squad dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 3e-05\n- train_batch_size: 24\n- eval_batch_size: 24\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- training_steps: 200",
"### Training results",
"### Framework versions\n\n- Transformers 4.16.0.dev0\n- Pytorch 1.10.2+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |