| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 – 139 |
| author | string | lengths 2 – 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-09-02 06:30:45 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 533 classes |
| tags | list | lengths 1 – 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-09-02 06:30:39 |
| card | string | lengths 11 – 1.01M |

| inkoziev/rugpt_chitchat | inkoziev | 2022-10-19T07:44:11Z | 208 | 17 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "PyTorch", "Transformers", "ru", "license:unlicense", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-09-15T07:20:18Z |
---
pipeline_tag: text-generation
tags:
- PyTorch
- Transformers
- gpt2
license: unlicense
language: ru
widget:
- text: "- У Джульетты было 7 пончиков, а потом она 3 съела. Сколько у нее осталось пончиков? -"
- text: "- Поглажено 4 манула. Осталось погладить 6. Сколько всего манулов надо погладить? -"
- text: "- Для начала скажи, чему равно пятью девять? -"
- text: "- ты чё такой борзый? -"
- text: "- Привет! Как ваше ничего? -"
---
## Russian Chit-chat, Deductive and Common Sense reasoning model
The model is the core of a prototype [dialogue system](https://github.com/Koziev/chatbot) with two main functions.
The first function is **chit-chat reply generation**. The dialogue history is supplied as the prompt (the preceding few replies, from 1 to 10).
```
- Привет, как дела?
- Привет, так себе.
- <<< this is the reply we expect from the model >>>
```
The second function of the model is answering a given question by relying on additional facts or on "common sense". The relevant facts are assumed to be retrieved
from an external store (a knowledge base) by another model, for example [sbert_pq](https://huggingface.co/inkoziev/sbert_pq).
Using the supplied fact(s) and the question text, the model builds a grammatical and maximally concise answer, just as a person would
in a similar communicative situation. The relevant facts should be placed before the question text, as if
the interlocutor had said them themselves:
```
- Сегодня 15 сентября. Какой сейчас у нас месяц?
- Сентябрь
```
The model does not expect that every fact retrieved and added to the dialogue context is actually relevant to the question. The retrieval
model may therefore sacrifice precision in favor of recall and add something superfluous. In that case the chit-chat model
will itself pick the facts it needs from those added to the context and ignore the rest. The current version of the model
accepts up to 5 facts before the question. For example:
```
- Стасу 16 лет. Стас живет в Подольске. У Стаса нет своей машины. Где живет Стас?
- в Подольске
```
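For reference, below is a minimal sketch of assembling such a fact-plus-question prompt and querying the model, mirroring the prompt format of the usage example further below; the generation settings are illustrative, not the author's exact ones.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("inkoziev/rugpt_chitchat")
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained("inkoziev/rugpt_chitchat").to(device).eval()

# Retrieved facts go before the question, all in one "-" prefixed line,
# exactly as in the example above; the trailing "-" cues the model to answer.
facts = ["Стасу 16 лет.", "Стас живет в Подольске.", "У Стаса нет своей машины."]
question = "Где живет Стас?"
prompt = "<s>- " + " ".join(facts + [question]) + "\n-"

encoded = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(device)
output = model.generate(input_ids=encoded, max_length=100, pad_token_id=tokenizer.pad_token_id)
answer = tokenizer.decode(output[0].tolist())[len(prompt) + 1:]
print(answer[: answer.find("</s>")])  # the card's example expects "в Подольске"
```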
In some cases the model can perform **syllogistic inference** of the answer, relying on 2 premises that are linked to each other. The conclusion that follows from the two premises is not stated explicitly but is, *as it were*, used to derive the answer:
```
- Смертен ли Аристофан, если он был греческим философом, а все философы смертны?
- Да
```
As the examples above show, the format of the factual information fed into the model for inference is as natural and free-form as possible.
Besides logical inference, the model can also solve simple arithmetic problems at the level of grades 1-2 of primary school, with two numeric arguments:
```
- Чему равно 2+8?
- 10
```
### Model variants and metrics
The currently published model has 760 million parameters, i.e. it is on the level of sberbank-ai/rugpt3large_based_on_gpt2. Below is the
measured accuracy of solving the arithmetic problems on a held-out test set of samples:
| base model | arith. accuracy |
| --------------------------------------- | --------------- |
| sberbank-ai/rugpt3large_based_on_gpt2 | 0.91 |
| sberbank-ai/rugpt3medium_based_on_gpt2 | 0.70 |
| sberbank-ai/rugpt3small_based_on_gpt2 | 0.58 |
| tinkoff-ai/ruDialoGPT-small | 0.44 |
| tinkoff-ai/ruDialoGPT-medium | 0.69 |
The value 0.91 in the "arith. accuracy" column means that 91% of the test problems were solved completely correctly.
Any deviation of the generated answer from the reference is counted
as an error. For example, producing the answer "120" instead of "119" is also recorded as an error.
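A small sketch of this exact-match scoring rule (the function and variable names are illustrative):
```python
def exact_match_accuracy(predictions, references):
    # A generated answer counts as correct only if it matches the reference exactly.
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# "120" instead of "119" counts as an error, as described above.
print(exact_match_accuracy(["10", "120"], ["10", "119"]))  # 0.5
```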
### Usage example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "inkoziev/rugpt_chitchat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.add_special_tokens({'bos_token': '<s>', 'eos_token': '</s>', 'pad_token': '<pad>'})
model = AutoModelForCausalLM.from_pretrained(model_name)
model.to(device)
model.eval()
# Feed the model the last 2-3 replies of the dialogue. Each reply goes on its own line and starts with the "-" character
input_text = """<s>- Привет! Что делаешь?
- Привет :) В такси еду
-"""
encoded_prompt = tokenizer.encode(input_text, add_special_tokens=False, return_tensors="pt").to(device)
output_sequences = model.generate(input_ids=encoded_prompt, max_length=100, num_return_sequences=1, pad_token_id=tokenizer.pad_token_id)
text = tokenizer.decode(output_sequences[0].tolist(), clean_up_tokenization_spaces=True)[len(input_text)+1:]
text = text[: text.find('</s>')]
print(text)
```
### Contacts
If you have any questions about using this model, or suggestions for improving it, write to me at [email protected]
### Citation:
```
@MISC{rugpt_chitchat,
author = {Ilya Koziev},
title = {Russian Chit-chat with Common Sense Reasoning},
url = {https://huggingface.co/inkoziev/rugpt_chitchat},
year = 2022
}
```
|
| SalML/DETR-table-structure-recognition | SalML | 2022-10-19T07:23:28Z | 15 | 5 | transformers | ["transformers", "pytorch", "detr", "object-detection", "en", "dataset:PubTables-1M", "license:unknown", "endpoints_compatible", "region:us"] | object-detection | 2022-10-01T07:55:34Z |
---
language: en
tags:
- detr
license: unknown
datasets:
- PubTables-1M
---
# The models are taken from https://github.com/microsoft/table-transformer/
# Original model now on MSFT org: https://huggingface.co/microsoft/table-transformer-structure-recognition
I have built a HuggingFace Space: https://huggingface.co/spaces/SalML/TableTransformer2CSV
It runs an OCR on the table-transformer output image to obtain a CSV downloadable table.
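For orientation, here is a minimal inference sketch for this checkpoint using the generic `transformers` object-detection classes; this is not the Space's actual code, and the image path and threshold are illustrative.
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("SalML/DETR-table-structure-recognition")
model = AutoModelForObjectDetection.from_pretrained("SalML/DETR-table-structure-recognition")

image = Image.open("table.png").convert("RGB")  # hypothetical cropped table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits/boxes into labelled detections (table rows, columns, cells, ...).
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(c, 1) for c in box.tolist()])
```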
|
| SalML/DETR-table-detection | SalML | 2022-10-19T07:22:07Z | 5 | 2 | transformers | ["transformers", "pytorch", "detr", "object-detection", "en", "dataset:PubTables-1M", "license:unknown", "endpoints_compatible", "region:us"] | object-detection | 2022-09-09T10:49:56Z |
---
language: en
tags:
- detr
license: unknown
datasets:
- PubTables-1M
---
# The models are taken from https://github.com/microsoft/table-transformer/
# Original model now on MSFT org: https://huggingface.co/microsoft/table-transformer-detection
I have built a HuggingFace Space: https://huggingface.co/spaces/SalML/TableTransformer2CSV
It runs an OCR on the table-transformer output image to obtain a CSV downloadable table.
|
| readerbench/RoGPT2-base | readerbench | 2022-10-19T05:10:01Z | 324 | 0 | transformers | ["transformers", "pytorch", "tf", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
Model card for RoGPT2-base
---
language:
- ro
---
# RoGPT2: Romanian GPT2 for text generation
All models are available:
* [RoGPT2-base](https://huggingface.co/readerbench/RoGPT2-base)
* [RoGPT2-medium](https://huggingface.co/readerbench/RoGPT2-medium)
* [RoGPT2-large](https://huggingface.co/readerbench/RoGPT2-large)
For code and evaluation check out [GitHub](https://github.com/readerbench/RoGPT2).
#### How to use
```python
# TensorFlow
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-base')
model = TFAutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-base')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='tf')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
# PyTorch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-base')
model = AutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-base')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='pt')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
```
## Training
---
### Corpus Statistics
| Corpus | Total size | Number of words | Number of sentences |
|:------:|:----------:|:---------------:|:-------------------:|
|OSCAR| 11.54 GB | 1745M | 48.46M |
|Wiki-Ro | 0.46 GB | 68M | 1.79M |
|Debates | 0.5 GB | 73M | 3.61M |
|Books | 4.37 GB | 667M | 37.39M |
|News | 0.15 GB | 23M | 0.77M |
### Training Statistics
| Version | Number of parameters | Number of epochs | Duration of an epoch | Context size | Batch size | PPL |
|:-------:|:--------------------:|:---------------:|:--------------------:|:----------:|:----------:|:---:|
| Base | 124M | 15 | 7h | 1024 | 72 | 22.96 |
| Medium | 354M | 10 | 22h | 1024 | 24 | 17.64 |
| Large | 774M | 5 | **45h** | 512 | 16 | **16.77**|
## Evaluation
---
### 1. MOROCO
| Model | Dialect | Md to Ro | Ro to Md |
|:-----------------:|:-------:|:--------:|:--------:|
| KRR + SK | 94.06 | 67.59 | 75.47 |
| BERT-base-ro | 95.98 | 69.90 | 78.08 |
| RoBERT-small | 95.76 | 69.05 | 80.15 |
| RoBERT-base |**97.24**| 68.80 | 82.37 |
| RoBERT-large | 97.21 | 69.50 | **83.26**|
| RoGPT2-base | 96.69 | 69.82 | 77.55 |
| RoGPT2-medium | 96.42 | 69.77 | 80.51 |
| RoGPT2-large | 96.93 |**71.07** | 82.56 |
### 2. LaRoSeDa
| Model | Binary: Accuracy | Binary: F1-Score | Multi-Class: Accuracy | Multi-Class: F1-Score |
|:------------:|:----------------:|:----------------:|:---------------------:|:---------------------:|
|BERT-base-ro | 98.07 | 97.94 | - |79.61 |
| RoDiBERT |**98.40** |**98.31** | - |83.01 |
| RoBERT-small | 97.44 | 97.43 | 89.30 |84.23 |
| RoBERT-base | 98.27 | 98.26 | 90.59 |86.27 |
| RoBERT-large | 98.20 | 98.19 |**90.93** |**86.63** |
| RoGPT2-base | 97.89 | 97.88 |89.65 |84.68 |
|RoGPT2-medium | 98.03 |98.04 | 90.29 | 85.37 |
| RoGPT2-large | 98.06 |98.07 | 90.26 | 84.89 |
### 3. RoSTS
| Model | Spearman dev-set | Spearman test-set | Pearson dev-set | Pearson test-set |
|:------------:|:----------------:|:-----------------:|:---------------:|:----------------:|
|BERT-base-ro | 84.26 | 80.86 | 84.59 | 81.59 |
|RoDiBERT | 77.07 | 71.47 | 77.13 | 72.25 |
|RoBERT-small | 82.06 | 78.06 | 81.66 | 78.49 |
|RoBERT-base | 84.93 | 80.39 | 85.03 | 80.39 |
|RoBERT-large |**86.25** |**83.15** |**86.58** |**83.76** |
|RoGPT2-base | 83.51 | 79.77 | 83.74 | 80.56 |
|RoGPT2-medium | 85.75 | 82.25 | 86.04 | 83.16 |
|RoGPT2-large | 85.70 | 82.64 | 86.14 | 83.46 |
### 4. WMT16
| Model | Decoder method | Ro-En | En-Ro |
|:------------:|:--------------:|:------:|:------:|
|mBART | - |**38.5**|**38.5**|
|OpenNMT | - | - | 24.7 |
|RoGPT2-base |Greedy | 30.37 | 20.27 |
|RoGPT2-base |Beam-search-4 | 31.26 | 22.31 |
|RoGPT2-base |Beam-search-8 | 31.39 | 22.95 |
|RoGPT2-medium |Greedy | 32.48 | 22.18 |
|RoGPT2-medium |Beam-search-4 | 34.08 | 24.03 |
|RoGPT2-medium |Beam-search-8 | 34.16 | 24.13 |
|RoGPT2-large |Greedy | 33.69 | 23.31 |
|RoGPT2-large |Beam-search-4 |34.40 |24.23 |
|RoGPT2-large |Beam-search-8 |34.51 |24.32 |
### 5. XQuAD
| Model |Decoder method | EM | F1-Score |
|:------------:|:-------------:|:-----:|:--------:|
|BERT-base-ro | - | 47.89 | 63.74 |
|RoDiBERT | - | 21.76 | 34.57 |
|RoBERT-small | - | 30.84 | 45.17 |
|RoBERT-base | - | 53.52 | 70.04 |
|RoBERT-large | - | 55.46 | 69.64 |
|mBERT | - |59.9 | 72.7 |
|XLM-R Large | - |**69.7**|**83.6**|
|RoGPT2-base | Greedy | 23.69 | 35.97 |
|RoGPT2-base | Beam-search-4 | 24.11 | 35.27 |
|RoGPT2-medium | Greedy | 29.66 | 44.74 |
|RoGPT2-medium | Beam-search-4 | 31.59 | 45.32 |
|RoGPT2-large | Greedy | 29.74 | 42.98 |
|RoGPT2-large | Beam-search-4 | 29.66 | 43.05 |
|RoGPT2-base-en-ro | Greedy | 23.86 | 34.27 |
|RoGPT2-base-en-ro | Beam-search-4 | 25.04 | 34.51 |
|RoGPT2-medium-en-ro | Greedy | 27.05 | 39.75 |
|RoGPT2-medium-en-ro | Beam-search-4 | 27.64 | 39.11 |
|RoGPT2-large-en-ro | Greedy | 28.40 | 39.79 |
|RoGPT2-large-en-ro | Beam-search-4 | 28.73 | 39.71 |
|RoGPT2-large-en-ro-mask | Greedy | 31.34 | 44.71 |
|RoGPT2-large-en-ro-mask| Beam-search-4 | 31.59 | 43.53 |
### 6. Wiki-Ro: LM
| Model | PPL dev | PPL test |
|:------------:|:-------:|:--------:|
|BERT-base-ro | 29.0897 | 28.0043|
|RoGPT2-base | 34.3795 | 33.7460|
|RoGPT2-medium | 23.7879 | 23.4581|
|RoGPT2-large | **21.7491** | **21.5200** |
### 7. RoGEC
| Model | Decoder method | P | R | F<sub>0.5</sub> |
|:-----:|:--------------:|:---:|:---:|:------:|
|Transformer-tiny | Beam-search | 53.53 | 26.36 | 44.38 |
|Transformer-base Finetuning | Beam-search | 56.05 | 46.19 | 53.76 |
|Transformer-base Finetuning | Beam-search-LM | 50.68 | 45.39 | 49.52 |
|Transformer-base Finetuning | Beam-search-norm-LM | 51.06 | 45.43 | 49.83 |
|RoGPT2-base | Greedy | 59.02 | 49.35 | 56.80 |
|RoGPT2-base | Beam-search-4 | 65.23 | 49.26 | 61.26 |
|RoGPT2-base |Beam-search-8 | 65.88 | 49.64 | 61.84 |
|RoGPT2-medium | Greedy | 69.97 | 57.94 | 67.18 |
|RoGPT2-medium | Beam-search-4 | **72.46** | **57.99** | **69.01** |
|RoGPT2-medium | Beam-search-8 | 72.24 | 57.69 | 68.77 |
|RoGPT2-large | Greedy | 61.90 | 49.09 | 58.83 |
|RoGPT2-large | Beam-search-4 | 65.24 | 49.43 | 61.32 |
|RoGPT2-large | Beam-search-8 | 64.96 | 49.22 | 61.06 |
|RoGPT2-base* | Greedy | 68.67 | 49.60 | 63.77 |
|RoGPT2-base* | Beam-search-4 | 71.16 | 50.53 | 65.79 |
|RoGPT2-base* | Beam-search-8 | 71.68 | 50.65 | 66.18 |
|RoGPT2-medium* | Greedy | 58.21 | 43.32 | 54.47 |
|RoGPT2-medium* | Beam-search-4 | 68.31 | 43.78 | 61.43 |
|RoGPT2-medium* | Beam-search-8 | 68.68 | 43.99 | 61.75 |
|RoGPT2-large* | Greedy | 64.86 | 41.30 | 58.22 |
|RoGPT2-large* | Beam-search-4 | 65.57 | 41.00 | 58.55 |
|RoGPT2-large* | Beam-search-8 | 65.44 | 41.09 | 58.50 |
**Note**: models marked with * were trained on a dataset of 3,000,000 artificially generated pairs
## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
## How to cite
---
```bibtex
@inproceedings{niculescu2021rogpt2,
title={RoGPT2: Romanian GPT2 for Text Generation},
author={Niculescu, Mihai Alexandru and Ruseti, Stefan and Dascalu, Mihai},
booktitle={2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI)},
pages={1154--1161},
year={2021},
organization={IEEE}
}
```
|
| readerbench/RoGPT2-large | readerbench | 2022-10-19T05:08:30Z | 177 | 2 | transformers | ["transformers", "pytorch", "tf", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
Model card for RoGPT2-large
---
language:
- ro
---
# RoGPT2: Romanian GPT2 for text generation
All models are available:
* [RoGPT2-base](https://huggingface.co/readerbench/RoGPT2-base)
* [RoGPT2-medium](https://huggingface.co/readerbench/RoGPT2-medium)
* [RoGPT2-large](https://huggingface.co/readerbench/RoGPT2-large)
For code and evaluation check out [GitHub](https://github.com/readerbench/RoGPT2).
#### How to use
```python
# TensorFlow
from transformers import AutoTokenizer, TFAutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-large')
model = TFAutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-large')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='tf')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
# PyTorch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('readerbench/RoGPT2-large')
model = AutoModelForCausalLM.from_pretrained('readerbench/RoGPT2-large')
inputs = tokenizer.encode("Este o zi de vara", return_tensors='pt')
text = model.generate(inputs, max_length=1024, no_repeat_ngram_size=2)
print(tokenizer.decode(text[0]))
```
## Training
---
### Corpus Statistics
| Corpus | Total size | Number of words | Number of sentences |
|:------:|:----------:|:---------------:|:-------------------:|
|OSCAR| 11.54 GB | 1745M | 48.46M |
|Wiki-Ro | 0.46 GB | 68M | 1.79M |
|Debates | 0.5 GB | 73M | 3.61M |
|Books | 4.37 GB | 667M | 37.39M |
|News | 0.15 GB | 23M | 0.77M |
### Training Statistics
| Version | Number of parameters | Number of epochs | Duration of an epoch | Context size | Batch size | PPL |
|:-------:|:--------------------:|:---------------:|:--------------------:|:----------:|:----------:|:---:|
| Base | 124M | 15 | 7h | 1024 | 72 | 22.96 |
| Medium | 354M | 10 | 22h | 1024 | 24 | 17.64 |
| Large | 774M | 5 | **45h** | 512 | 16 | **16.77**|
## Evaluation
---
### 1. MOROCO
| Model | Dialect | Md to Ro | Ro to Md |
|:-----------------:|:-------:|:--------:|:--------:|
| KRR + SK | 94.06 | 67.59 | 75.47 |
| BERT-base-ro | 95.98 | 69.90 | 78.08 |
| RoBERT-small | 95.76 | 69.05 | 80.15 |
| RoBERT-base |**97.24**| 68.80 | 82.37 |
| RoBERT-large | 97.21 | 69.50 | **83.26**|
| RoGPT2-base | 96.69 | 69.82 | 77.55 |
| RoGPT2-medium | 96.42 | 69.77 | 80.51 |
| RoGPT2-large | 96.93 |**71.07** | 82.56 |
### 2. LaRoSeDa
| Model | Binary: Accuracy | Binary: F1-Score | Multi-Class: Accuracy | Multi-Class: F1-Score |
|:------------:|:----------------:|:----------------:|:---------------------:|:---------------------:|
|BERT-base-ro | 98.07 | 97.94 | - |79.61 |
| RoDiBERT |**98.40** |**98.31** | - |83.01 |
| RoBERT-small | 97.44 | 97.43 | 89.30 |84.23 |
| RoBERT-base | 98.27 | 98.26 | 90.59 |86.27 |
| RoBERT-large | 98.20 | 98.19 |**90.93** |**86.63** |
| RoGPT2-base | 97.89 | 97.88 |89.65 |84.68 |
|RoGPT2-medium | 98.03 |98.04 | 90.29 | 85.37 |
| RoGPT2-large | 98.06 |98.07 | 90.26 | 84.89 |
### 3. RoSTS
| Model | Spearman dev-set | Spearman test-set | Pearson dev-set | Pearson test-set |
|:------------:|:----------------:|:-----------------:|:---------------:|:----------------:|
|BERT-base-ro | 84.26 | 80.86 | 84.59 | 81.59 |
|RoDiBERT | 77.07 | 71.47 | 77.13 | 72.25 |
|RoBERT-small | 82.06 | 78.06 | 81.66 | 78.49 |
|RoBERT-base | 84.93 | 80.39 | 85.03 | 80.39 |
|RoBERT-large |**86.25** |**83.15** |**86.58** |**83.76** |
|RoGPT2-base | 83.51 | 79.77 | 83.74 | 80.56 |
|RoGPT2-medium | 85.75 | 82.25 | 86.04 | 83.16 |
|RoGPT2-large | 85.70 | 82.64 | 86.14 | 83.46 |
### 4. WMT16
| Model | Decoder method | Ro-En | En-Ro |
|:------------:|:--------------:|:------:|:------:|
|mBART | - |**38.5**|**38.5**|
|OpenNMT | - | - | 24.7 |
|RoGPT2-base |Greedy | 30.37 | 20.27 |
|RoGPT2-base |Beam-search-4 | 31.26 | 22.31 |
|RoGPT2-base |Beam-search-8 | 31.39 | 22.95 |
|RoGPT2-medium |Greedy | 32.48 | 22.18 |
|RoGPT2-medium |Beam-search-4 | 34.08 | 24.03 |
|RoGPT2-medium |Beam-search-8 | 34.16 | 24.13 |
|RoGPT2-large |Greedy | 33.69 | 23.31 |
|RoGPT2-large |Beam-search-4 |34.40 |24.23 |
|RoGPT2-large |Beam-search-8 |34.51 |24.32 |
### 5. XQuAD
| Model |Decoder method | EM | F1-Score |
|:------------:|:-------------:|:-----:|:--------:|
|BERT-base-ro | - | 47.89 | 63.74 |
|RoDiBERT | - | 21.76 | 34.57 |
|RoBERT-small | - | 30.84 | 45.17 |
|RoBERT-base | - | 53.52 | 70.04 |
|RoBERT-large | - | 55.46 | 69.64 |
|mBERT | - | 59.9 | 72.7 |
|XLM-R Large | - |**69.7**|**83.6**|
|RoGPT2-base | Greedy | 23.69 | 35.97 |
|RoGPT2-base | Beam-search-4 | 24.11 | 35.27 |
|RoGPT2-medium | Greedy | 29.66 | 44.74 |
|RoGPT2-medium | Beam-search-4 | 31.59 | 45.32 |
|RoGPT2-large | Greedy | 29.74 | 42.98 |
|RoGPT2-large | Beam-search-4 | 29.66 | 43.05 |
|RoGPT2-base-en-ro | Greedy | 23.86 | 34.27 |
|RoGPT2-base-en-ro | Beam-search-4 | 25.04 | 34.51 |
|RoGPT2-medium-en-ro | Greedy | 27.05 | 39.75 |
|RoGPT2-medium-en-ro | Beam-search-4 | 27.64 | 39.11 |
|RoGPT2-large-en-ro | Greedy | 28.40 | 39.79 |
|RoGPT2-large-en-ro | Beam-search-4 | 28.73 | 39.71 |
|RoGPT2-large-en-ro-mask | Greedy | 31.34 | 44.71 |
|RoGPT2-large-en-ro-mask| Beam-search-4 | 31.59 | 43.53 |
### 6. Wiki-Ro: LM
| Model | PPL dev | PPL test |
|:------------:|:-------:|:--------:|
|BERT-base-ro | 29.0897 | 28.0043|
|RoGPT2-base | 34.3795 | 33.7460|
|RoGPT2-medium | 23.7879 | 23.4581|
|RoGPT2-large | **21.7491** | **21.5200** |
### 7. RoGEC
| Model | Decoder method | P | R | F<sub>0.5</sub> |
|:-----:|:--------------:|:---:|:---:|:------:|
|Transformer-tiny | Beam-search | 53.53 | 26.36 | 44.38 |
|Transformer-base Finetuning | Beam-search | 56.05 | 46.19 | 53.76 |
|Transformer-base Finetuning | Beam-search-LM | 50.68 | 45.39 | 49.52 |
|Transformer-base Finetuning | Beam-search-norm-LM | 51.06 | 45.43 | 49.83 |
|RoGPT2-base | Greedy | 59.02 | 49.35 | 56.80 |
|RoGPT2-base | Beam-search-4 | 65.23 | 49.26 | 61.26 |
|RoGPT2-base |Beam-search-8 | 65.88 | 49.64 | 61.84 |
|RoGPT2-medium | Greedy | 69.97 | 57.94 | 67.18 |
|RoGPT2-medium | Beam-search-4 | **72.46** | **57.99** | **69.01** |
|RoGPT2-medium | Beam-search-8 | 72.24 | 57.69 | 68.77 |
|RoGPT2-large | Greedy | 61.90 | 49.09 | 58.83 |
|RoGPT2-large | Beam-search-4 | 65.24 | 49.43 | 61.32 |
|RoGPT2-large | Beam-search-8 | 64.96 | 49.22 | 61.06 |
|RoGPT2-base* | Greedy | 68.67 | 49.60 | 63.77 |
|RoGPT2-base* | Beam-search-4 | 71.16 | 50.53 | 65.79 |
|RoGPT2-base* | Beam-search-8 | 71.68 | 50.65 | 66.18 |
|RoGPT2-medium* | Greedy | 58.21 | 43.32 | 54.47 |
|RoGPT2-medium* | Beam-search-4 | 68.31 | 43.78 | 61.43 |
|RoGPT2-medium* | Beam-search-8 | 68.68 | 43.99 | 61.75 |
|RoGPT2-large* | Greedy | 64.86 | 41.30 | 58.22 |
|RoGPT2-large* | Beam-search-4 | 65.57 | 41.00 | 58.55 |
|RoGPT2-large* | Beam-search-8 | 65.44 | 41.09 | 58.50 |
**Note**: models marked with * were trained on a dataset of 3,000,000 artificially generated pairs
## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
## How to cite
---
```bibtex
@inproceedings{niculescu2021rogpt2,
title={RoGPT2: Romanian GPT2 for Text Generation},
author={Niculescu, Mihai Alexandru and Ruseti, Stefan and Dascalu, Mihai},
booktitle={2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI)},
pages={1154--1161},
year={2021},
organization={IEEE}
}
```
|
| bongsoo/mdistilbertV3.1 | bongsoo | 2022-10-19T02:19:47Z | 18 | 0 | transformers | ["transformers", "pytorch", "distilbert", "fill-mask", "en", "ko", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-10-19T01:50:21Z |
---
license: apache-2.0
pipeline_tag: fill-mask
tags:
- fill-mask
- transformers
- en
- ko
widget:
- text: 대한민국의 수도는 [MASK] 입니다.
---
# mdistilbertV3.1
- A model trained from distilbert-base-multilingual-cased with additional vocab added from the [moco-corpus-kowiki2022 corpus](https://huggingface.co/datasets/bongsoo/moco-corpus-kowiki2022) (kowiki202206 + 3.2M sentences extracted from MOCOMSYS)
- **vocab: 159,552 entries (40,004 entries added to the original bert model vocab of 119,548: 30,000 Korean words + 10,000 English words + 4 manual entries)**; a rough sketch of this kind of vocab extension is shown below
- About **7,000** more words than mdistilbertV2.1; the Korean words were **extracted with mecab**
- Trained for **12 epochs** (mdistilbertV2.1: 8 epochs)
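For orientation, the following is a rough sketch of one common way to extend a tokenizer's vocabulary before continuing MLM training; the author's actual pipeline (linked under Training below) may differ, and the word list here is a tiny stand-in for the 40,004 added entries.
```python
from transformers import AutoTokenizer, DistilBertForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased")

# Tiny stand-in for the ~40,004 Korean/English entries (extracted with mecab).
new_words = ["대한민국", "임진왜란", "코퍼스"]
num_added = tokenizer.add_tokens(new_words)

# Resize the embedding matrix so the new ids get (randomly initialised) vectors,
# which are then learned during the MLM training described below.
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, new vocab size: {len(tokenizer)}")
```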
## Usage (HuggingFace Transformers)
### 1. MASK example
```python
from transformers import AutoTokenizer, AutoModel, DistilBertForMaskedLM
import torch
import torch.nn.functional as F
tokenizer = AutoTokenizer.from_pretrained('bongsoo/mdistilbertV3.1', do_lower_case=False)
model = DistilBertForMaskedLM.from_pretrained('bongsoo/mdistilbertV3.1')
text = ['한국의 수도는 [MASK] 이다', '에펠탑은 [MASK]에 있다', '충무공 이순신은 [MASK]에 최고의 장수였다']
tokenized_input = tokenizer(text, max_length=128, truncation=True, padding='max_length', return_tensors='pt')
outputs = model(**tokenized_input)
logits = outputs.logits
mask_idx_list = []
for tokens in tokenized_input['input_ids'].tolist():
token_str = [tokenizer.convert_ids_to_tokens(s) for s in tokens]
# Find the index of [MASK] in the token_str list above
# => that index (mask_idx) is used for the output below
mask_idx = token_str.index('[MASK]')
mask_idx_list.append(mask_idx)
for idx, mask_idx in enumerate(mask_idx_list):
logits_pred=torch.argmax(F.softmax(logits[idx]), dim=1)
mask_logits_idx = int(logits_pred[mask_idx])
# Get the token corresponding to [MASK]
mask_logits_token = tokenizer.convert_ids_to_tokens(mask_logits_idx)
# Print the result
print('\n')
print('*Input: {}'.format(text[idx]))
print('*[MASK] : {} ({})'.format(mask_logits_token, mask_logits_idx))
```
- Result
```
*Input: 한국의 수도는 [MASK] 이다
*[MASK] : 서울 (48253)
*Input: 에펠탑은 [MASK]에 있다
*[MASK] : 프랑스 (47364)
*Input: 충무공 이순신은 [MASK]에 최고의 장수였다
*[MASK] : 임진왜란 (121990)
```
### 2. Embedding example
- Uses mean pooling. ([cls pooling](https://huggingface.co/sentence-transformers/bert-base-nli-cls-token), [max pooling](https://huggingface.co/sentence-transformers/bert-base-nli-max-tokens))
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/mdistilbertV3.1')
model = AutoModel.from_pretrained('bongsoo/mdistilbertV3.1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
# Compute cosine_scores using sklearn
# => the input embeddings must be 2D, e.g. (1, 768)
from sklearn.metrics.pairwise import paired_cosine_distances, paired_euclidean_distances, paired_manhattan_distances
cosine_scores = 1 - (paired_cosine_distances(sentence_embeddings[0].reshape(1,-1), sentence_embeddings[1].reshape(1,-1)))
print(f'*cosine_score:{cosine_scores[0]}')
```
- Result
```
Sentence embeddings:
tensor([[-0.1137, 0.1491, 0.6711, ..., -0.0217, 0.1839, -0.6143],
[ 0.0482, -0.0649, 0.5333, ..., 0.1424, -0.0982, -0.3414]])
*cosine_score:0.4784715175628662
```
## Training
**MLM (Masked Language Model) training**
- Input model: distilbert-base-multilingual-cased
- Corpus: training: bongsoo/moco-corpus-kowiki2022 (7.6M), evaluation: **bongsoo/moco_eval**
- Hyperparameters: **learning rate: 5e-5, epochs: 12, batch size: 32, max_token_len: 128**
- vocab: **159,552 entries** (40,004 entries added to the original bert model vocab of 119,548: 30,000 Korean words + 10,000 English words + 4 manual entries)
- Output model: mdistilbertV3.1 (size: 634MB)
- Training time: 90h on 1 GPU (24GB, 16.5GB used)
- **Training loss: 2.1154, evaluation loss: 2.5275**
- See the training code [here](https://github.com/kobongsoo/BERT/blob/master/distilbert/distilbert-MLM-Trainer-V1.2.ipynb)
<br>and the perplexity evaluation code [here](https://github.com/kobongsoo/BERT/blob/master/distilbert/distilbert-perplexity-eval-V1.2.ipynb)
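A condensed sketch of an MLM run with the hyperparameters listed above, using the standard `datasets`/`Trainer` APIs; the text column name and masking probability are assumptions, and the author's actual training notebook is linked above.
```python
from datasets import load_dataset
from transformers import (AutoTokenizer, DistilBertForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# In the actual run, the vocab-extended tokenizer/model from the step above are used.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = DistilBertForMaskedLM.from_pretrained("distilbert-base-multilingual-cased")

corpus = load_dataset("bongsoo/moco-corpus-kowiki2022", split="train")

def tokenize(batch):
    # The "text" column name is an assumption about the corpus schema.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = corpus.map(tokenize, batched=True, remove_columns=corpus.column_names)

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="mdistilbertV3.1-mlm", learning_rate=5e-5,
                         num_train_epochs=12, per_device_train_batch_size=32)
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```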
## Model Config
```
{
"_name_or_path": "",
"activation": "gelu",
"architectures": [
"DistilBertForMaskedLM"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"output_past": true,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"torch_dtype": "float32",
"transformers_version": "4.21.2",
"vocab_size": 159552
}
```
## Citing & Authors
bongsoo
|
| MoseliMotsoehli/DeepGeoPark | MoseliMotsoehli | 2022-10-19T02:07:19Z | 0 | 0 | null | ["license:openrail", "region:us"] | null | 2022-10-17T21:09:29Z |
---
license: openrail
---
# Public Parking Spot Detector Using Deep Learning
|
| craigchen/BART-139M-ecommerce-customer-service-query-to-intent-generation | craigchen | 2022-10-19T01:35:10Z | 14 | 5 | transformers | ["transformers", "pytorch", "tf", "bart", "text2text-generation", "generated_from_keras_callback", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2022-10-19T01:34:13Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: BART-139M-ecommerce-customer-service-query-to-intent-generation
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BART-139M-ecommerce-customer-service-query-to-intent-generation
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
| pierric/test-EsperBERTo-small | pierric | 2022-10-19T01:28:09Z | 13 | 0 | transformers | ["transformers", "pytorch", "jax", "roberta", "fill-mask", "eo", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-03-02T23:29:05Z |
---
language: eo
thumbnail: https://huggingface.co/blog/assets/EsperBERTo-thumbnail-v2.png
---
## EsperBERTo: RoBERTa-like Language model trained on Esperanto
**Companion model to blog post https://huggingface.co/blog/how-to-train** 🔥
### Training Details
- current checkpoint: 566000
- machine name: `galinette`
|
| dnautiyal/bert_model_reddit_tsla_tracked | dnautiyal | 2022-10-19T00:45:23Z | 5 | 0 | transformers | ["transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-10-19T00:41:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert_model_reddit_tsla_tracked
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_model_reddit_tsla_tracked
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
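For orientation, a minimal sketch of how these hyperparameters map onto `TrainingArguments`; the datasets and label set are not documented in this card, so they are stubbed out here.
```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# num_labels=2 is an assumption; the card does not state the label set.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="bert_model_reddit_tsla_tracked",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,  # Adam with the listed betas/epsilon is the Trainer default
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()  # the datasets are not documented in this card
```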
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
| Arnaudmkonan/adn-setfit-model | Arnaudmkonan | 2022-10-19T00:37:34Z | 1 | 0 | sentence-transformers | ["sentence-transformers", "pytorch", "mpnet", "feature-extraction", "sentence-similarity", "transformers", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2022-10-19T00:37:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2500 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2500,
"warmup_steps": 250,
"weight_decay": 0.01
}
```
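For orientation, a minimal sketch of reproducing this configuration with the sentence-transformers `fit()` API; the base checkpoint and the training pairs below are placeholders, since the card does not document them.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed MPNet-based starting checkpoint; the actual base model is not documented.
model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

# Placeholder similarity pairs standing in for the undocumented training data.
train_examples = [
    InputExample(texts=["A sentence", "A similar sentence"], label=0.9),
    InputExample(texts=["A sentence", "An unrelated sentence"], label=0.1),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Mirrors the fit() parameters listed above (WarmupLinear and AdamW are the defaults).
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=250,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```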
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
| emilys/hmBERT-CoNLL-cp3 | emilys | 2022-10-19T00:15:01Z | 18 | 1 | transformers | ["transformers", "pytorch", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-10-18T23:44:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: hmBERT-CoNLL-cp3
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9121408403919614
- name: Recall
type: recall
value: 0.9242679232581622
- name: F1
type: f1
value: 0.9181643400484828
- name: Accuracy
type: accuracy
value: 0.9862154900510105
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hmBERT-CoNLL-cp3
This model is a fine-tuned version of [dbmdz/bert-base-historic-multilingual-cased](https://huggingface.co/dbmdz/bert-base-historic-multilingual-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0572
- Precision: 0.9121
- Recall: 0.9243
- F1: 0.9182
- Accuracy: 0.9862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.06 | 25 | 0.4115 | 0.3643 | 0.3728 | 0.3685 | 0.9007 |
| No log | 0.11 | 50 | 0.2243 | 0.6393 | 0.6908 | 0.6641 | 0.9460 |
| No log | 0.17 | 75 | 0.1617 | 0.7319 | 0.7637 | 0.7475 | 0.9580 |
| No log | 0.23 | 100 | 0.1544 | 0.7282 | 0.7637 | 0.7455 | 0.9585 |
| No log | 0.28 | 125 | 0.1341 | 0.7595 | 0.8117 | 0.7847 | 0.9644 |
| No log | 0.34 | 150 | 0.1221 | 0.7980 | 0.8251 | 0.8114 | 0.9693 |
| No log | 0.4 | 175 | 0.1013 | 0.7968 | 0.8344 | 0.8152 | 0.9719 |
| No log | 0.46 | 200 | 0.1076 | 0.8265 | 0.8403 | 0.8333 | 0.9732 |
| No log | 0.51 | 225 | 0.0883 | 0.8453 | 0.8635 | 0.8543 | 0.9763 |
| No log | 0.57 | 250 | 0.0973 | 0.8439 | 0.8633 | 0.8535 | 0.9763 |
| No log | 0.63 | 275 | 0.0883 | 0.8497 | 0.8655 | 0.8575 | 0.9765 |
| No log | 0.68 | 300 | 0.0879 | 0.8462 | 0.8642 | 0.8551 | 0.9766 |
| No log | 0.74 | 325 | 0.0781 | 0.8592 | 0.8834 | 0.8711 | 0.9787 |
| No log | 0.8 | 350 | 0.0725 | 0.8697 | 0.8928 | 0.8811 | 0.9803 |
| No log | 0.85 | 375 | 0.0755 | 0.8687 | 0.8943 | 0.8813 | 0.9807 |
| No log | 0.91 | 400 | 0.0666 | 0.8781 | 0.9004 | 0.8891 | 0.9822 |
| No log | 0.97 | 425 | 0.0658 | 0.8877 | 0.8995 | 0.8936 | 0.9823 |
| No log | 1.03 | 450 | 0.0645 | 0.8951 | 0.9036 | 0.8993 | 0.9837 |
| No log | 1.08 | 475 | 0.0697 | 0.8864 | 0.9039 | 0.8951 | 0.9831 |
| 0.1392 | 1.14 | 500 | 0.0688 | 0.8824 | 0.8994 | 0.8908 | 0.9824 |
| 0.1392 | 1.2 | 525 | 0.0681 | 0.8950 | 0.9049 | 0.8999 | 0.9827 |
| 0.1392 | 1.25 | 550 | 0.0676 | 0.8855 | 0.8977 | 0.8915 | 0.9823 |
| 0.1392 | 1.31 | 575 | 0.0618 | 0.8940 | 0.9088 | 0.9014 | 0.9842 |
| 0.1392 | 1.37 | 600 | 0.0644 | 0.8945 | 0.9076 | 0.9010 | 0.9840 |
| 0.1392 | 1.42 | 625 | 0.0641 | 0.8936 | 0.9086 | 0.9010 | 0.9837 |
| 0.1392 | 1.48 | 650 | 0.0619 | 0.8969 | 0.9120 | 0.9044 | 0.9846 |
| 0.1392 | 1.54 | 675 | 0.0608 | 0.9045 | 0.9105 | 0.9075 | 0.9848 |
| 0.1392 | 1.59 | 700 | 0.0624 | 0.9038 | 0.9143 | 0.9091 | 0.9851 |
| 0.1392 | 1.65 | 725 | 0.0596 | 0.9062 | 0.9170 | 0.9116 | 0.9852 |
| 0.1392 | 1.71 | 750 | 0.0580 | 0.8995 | 0.9143 | 0.9069 | 0.9848 |
| 0.1392 | 1.77 | 775 | 0.0582 | 0.9082 | 0.9172 | 0.9127 | 0.9858 |
| 0.1392 | 1.82 | 800 | 0.0588 | 0.9024 | 0.9179 | 0.9101 | 0.9852 |
| 0.1392 | 1.88 | 825 | 0.0592 | 0.9020 | 0.9219 | 0.9119 | 0.9856 |
| 0.1392 | 1.94 | 850 | 0.0600 | 0.9054 | 0.9182 | 0.9118 | 0.9852 |
| 0.1392 | 1.99 | 875 | 0.0568 | 0.9068 | 0.9202 | 0.9135 | 0.9861 |
| 0.1392 | 2.05 | 900 | 0.0571 | 0.9131 | 0.9212 | 0.9171 | 0.9861 |
| 0.1392 | 2.11 | 925 | 0.0577 | 0.9110 | 0.9204 | 0.9157 | 0.9858 |
| 0.1392 | 2.16 | 950 | 0.0605 | 0.9127 | 0.9243 | 0.9185 | 0.9860 |
| 0.1392 | 2.22 | 975 | 0.0575 | 0.9109 | 0.9224 | 0.9166 | 0.9867 |
| 0.0392 | 2.28 | 1000 | 0.0572 | 0.9121 | 0.9243 | 0.9182 | 0.9862 |
| 0.0392 | 2.33 | 1025 | 0.0567 | 0.9171 | 0.9253 | 0.9212 | 0.9870 |
| 0.0392 | 2.39 | 1050 | 0.0570 | 0.9193 | 0.9295 | 0.9244 | 0.9871 |
| 0.0392 | 2.45 | 1075 | 0.0584 | 0.9155 | 0.9276 | 0.9215 | 0.9867 |
| 0.0392 | 2.51 | 1100 | 0.0591 | 0.9168 | 0.9286 | 0.9227 | 0.9867 |
| 0.0392 | 2.56 | 1125 | 0.0577 | 0.9182 | 0.9312 | 0.9246 | 0.9874 |
| 0.0392 | 2.62 | 1150 | 0.0570 | 0.9184 | 0.9283 | 0.9233 | 0.9870 |
| 0.0392 | 2.68 | 1175 | 0.0563 | 0.9191 | 0.9298 | 0.9245 | 0.9872 |
| 0.0392 | 2.73 | 1200 | 0.0565 | 0.9180 | 0.9313 | 0.9246 | 0.9872 |
| 0.0392 | 2.79 | 1225 | 0.0559 | 0.9190 | 0.9298 | 0.9244 | 0.9873 |
| 0.0392 | 2.85 | 1250 | 0.0562 | 0.9185 | 0.9293 | 0.9239 | 0.9873 |
| 0.0392 | 2.9 | 1275 | 0.0564 | 0.9175 | 0.9285 | 0.9230 | 0.9872 |
| 0.0392 | 2.96 | 1300 | 0.0563 | 0.9181 | 0.9295 | 0.9237 | 0.9873 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1
|
| crumb/midjourney-textual-inversions | crumb | 2022-10-18T23:18:25Z | 0 | 19 | null | ["license:mit", "region:us"] | null | 2022-10-18T23:12:35Z |
---
license: mit
---
These are the midjourney styles that are pre-loaded in [Whatchamacallit](https://colab.research.google.com/github/aicrumb/whatchamacallit/blob/main/Whatchamacallit.ipynb)
These are the original textual-inversion .bin embeddings, compatible with most web UIs/notebooks that support textual-inversion loading. They can easily be converted to diffusers style; Whatchamacallit already contains code for that if you need a reference.
\- midj-strong: <br>
good at that weird surreal melty almost golden sort of style, looks like clip guided diffusion in my opinion
\- midj-portrait: <br>
a bit more subtle but still very cinematic and changes the image significantly but less so than midj-strong
\- midj-anthro: <br>
was finetuned on some anthropomorphic animals (not traditional furry style, but just animals standing like humans). good on other subjects though.
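For orientation, a minimal sketch of loading one of these embeddings with diffusers' textual-inversion loader; the base pipeline, weight file name, and trigger token are illustrative assumptions, not taken from this repo's actual file listing.
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")

# Recent diffusers versions can load webui-style textual-inversion .bin files directly.
pipe.load_textual_inversion("crumb/midjourney-textual-inversions",
                            weight_name="midj-strong.bin",  # hypothetical file name
                            token="<midj-strong>")

image = pipe("a castle on a hill, <midj-strong>").images[0]
image.save("castle.png")
```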

|
| mriggs/byt5-small-finetuned-2epoch-opus_books-en-to-it | mriggs | 2022-10-18T20:50:33Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "dataset:opus_books", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2022-10-18T19:38:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: byt5-small-finetuned-2epoch-opus_books-en-to-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-finetuned-2epoch-opus_books-en-to-it
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2729 | 1.0 | 3638 | 0.9497 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
| kevinbror/test | kevinbror | 2022-10-18T20:37:35Z | 3 | 0 | transformers | ["transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-10-18T20:37:26Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: test
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6406
- Train End Logits Accuracy: 0.5766
- Train Start Logits Accuracy: 0.5397
- Validation Loss: 1.2711
- Validation End Logits Accuracy: 0.6595
- Validation Start Logits Accuracy: 0.6190
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
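For reference, a minimal sketch of rebuilding this optimizer configuration in Keras; it is purely illustrative and uses only the values logged above.
```python
import tensorflow as tf

# PolynomialDecay schedule as logged above: 2e-05 -> 0.0 over 2766 steps.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=2766,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
# model.compile(optimizer=optimizer, ...)  # the QA model itself is omitted here
```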
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6406 | 0.5766 | 0.5397 | 1.2711 | 0.6595 | 0.6190 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
| gkioxari/omni3d | gkioxari | 2022-10-18T20:15:39Z | 0 | 1 | null | ["vision", "3D", "3D object detection", "dataset:omni3d", "arxiv:2207.10660", "region:us"] | null | 2022-10-18T17:14:50Z |
---
tags:
- vision
- 3D
- 3D object detection
datasets:
- omni3d
metrics:
- AP
---
# 3D Object Detection with Cube R-CNN
3D Object Detection with Cube R-CNN is described in [**Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild**](https://arxiv.org/abs/2207.10660) and released in this [repository](https://github.com/facebookresearch/omni3d)
## Overview
A description of the model and its architecture is shown below
<img src="https://s3.amazonaws.com/moonup/production/uploads/1666115971617-634ededbd049354d7ee4b557.png" width=700px/>
## Training Data
Cube R-CNN was trained on Omni3D, a large benchmark for 3D object detection in the wild.
## Demo: Inference on Any Image
The model detects objects in 3D from a single image. There are 50 distinct object categories including *car, truck, chair, table, cabinet, books, and many more*.
The model assumes a known focal length for the image in order to predict the right metric scale.
However, users can provide any focal length and will get predictions on a "relative" scale.
For example, we can predict 3D objects from COCO images with a user-defined focal length of 4.0, as shown below
<img src="https://github.com/facebookresearch/omni3d/blob/main/.github/generalization_coco.png?raw=true" width=500px/>
The above output is produced by our demo
```bash
python demo/demo.py \
--config cubercnn://omni3d/cubercnn_DLA34_FPN.yaml \
--input-folder "datasets/image_inputs" \
--threshold 0.25 --focal 4.0 --display \
MODEL.WEIGHTS cubercnn://omni3d/cubercnn_DLA34_FPN.pth \
OUTPUT_DIR output/demo
```
## Checkpoints
You can find model checkpoints in the original [model zoo](https://github.com/facebookresearch/omni3d/blob/main/MODEL_ZOO.md).
## Intended Use and Limitations
Cube R-CNN is a data-driven method trained on an annotated dataset, Omni3D. The purpose of the project is to advance 3D computer vision and 3D object recognition. The dataset contains a *pedestrian* category, which we acknowledge as a potential issue in the case of unethical applications of our model.
The limitations of our approach are erroneous predictions, especially for faraway objects, and mistakes in predicting rotations and depth. Our evaluation reports an analysis across depths and object sizes to better understand performance.
|
| mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated | mjawadazad2321 | 2022-10-18T20:09:41Z | 43 | 2 | transformers | ["transformers", "pytorch", "tensorboard", "vision-encoder-decoder", "image-text-to-text", "generated_from_trainer", "dataset:imagefolder", "license:mit", "endpoints_compatible", "region:us"] | image-text-to-text | 2022-10-18T20:02:17Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-Medical_Handwritten_Prescriptions_Information_Extraction_updated
This model is a fine-tuned version of [mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction](https://huggingface.co/mjawadazad2321/donut-base-Medical_Handwritten_Prescriptions_Information_Extraction) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
| kevinbror/korttextbert | kevinbror | 2022-10-18T19:04:09Z | 3 | 0 | transformers | ["transformers", "tf", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us"] | question-answering | 2022-10-18T19:04:00Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: korttextbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# korttextbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7501
- Train End Logits Accuracy: 0.7864
- Train Start Logits Accuracy: 0.7557
- Validation Loss: 1.0797
- Validation End Logits Accuracy: 0.7166
- Validation Start Logits Accuracy: 0.6912
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11529, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.4538 | 0.6209 | 0.5897 | 1.1081 | 0.6964 | 0.6739 | 0 |
| 0.9285 | 0.7425 | 0.7106 | 1.0454 | 0.7147 | 0.6917 | 1 |
| 0.7501 | 0.7864 | 0.7557 | 1.0797 | 0.7166 | 0.6912 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
| huggingtweets/tvman000 | huggingtweets | 2022-10-18T18:46:34Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-10-18T18:45:39Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tvman000/1666118790144/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1313242619510689794/BO-zQyrZ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Daniel Cieslinski</div>
<div style="text-align: center; font-size: 14px;">@tvman000</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Daniel Cieslinski.
| Data | Daniel Cieslinski |
| --- | --- |
| Tweets downloaded | 214 |
| Retweets | 32 |
| Short tweets | 42 |
| Tweets kept | 140 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/xo7nzzp0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tvman000's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3gi0grtu) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3gi0grtu/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tvman000')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
| huggingtweets/__emmamme__-shell_nigeria-wef | huggingtweets | 2022-10-18T18:30:01Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-10-18T18:29:53Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/565498192171507712/r2Hb2gvX_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1582040730842841089/FGLi_5Xd_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/479362131813343232/Vl0Ow-_W_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">World Economic Forum & emma & Shell Nigeria</div>
<div style="text-align: center; font-size: 14px;">@__emmamme__-shell_nigeria-wef</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from World Economic Forum & emma & Shell Nigeria.
| Data | World Economic Forum | emma | Shell Nigeria |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 151 | 3195 |
| Retweets | 29 | 6 | 455 |
| Short tweets | 6 | 29 | 13 |
| Tweets kept | 3215 | 116 | 2727 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/11b1thr0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @__emmamme__-shell_nigeria-wef's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3tc6nf11) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3tc6nf11/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/__emmamme__-shell_nigeria-wef')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
lethebodies/ppo_lunarlander
|
lethebodies
| 2022-10-18T18:23:01Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-18T16:30:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 196.41 +/- 19.88
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
```
Use the model like this:
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
# Retrieve the model from the hub
## repo_id = id of the model repository from the Hugging Face Hub (repo_id = {organization}/{repo_name})
## filename = name of the model zip file from the repository
checkpoint = load_from_hub(repo_id="ThomasSimonini/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
# Evaluate the agent
eval_env = gym.make('LunarLander-v2')
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward}")
# Watch the agent play
obs = eval_env.reset()
for i in range(1000):
action, _state = model.predict(obs)
obs, reward, done, info = eval_env.step(action)
eval_env.render()
if done:
obs = eval_env.reset()
eval_env.close()
```
|
xh3b4sd/ppo-LunarLander-v2
|
xh3b4sd
| 2022-10-18T18:16:00Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-18T18:15:24Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 170.87 +/- 38.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
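A minimal sketch of loading and evaluating the agent is shown below; the checkpoint filename is an assumption (check the repo's file list), and the rest follows the usual `huggingface_sb3` pattern.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (filename is an assumption)
checkpoint = load_from_hub(repo_id="xh3b4sd/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent over a few episodes
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```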
|
Mihakram/AraT5-base-question-generation
|
Mihakram
| 2022-10-18T18:03:41Z | 270 | 8 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"answer-aware-question-generation",
"question-generation",
"QG",
"ar",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-08T00:33:03Z |
---
language:
- ar
tags:
- answer-aware-question-generation
- question-generation
- QG
dataset:
- arabic_question_answering
widget:
- text: "context: الثورة الجزائرية أو ثورة المليون شهيد، اندلعت في 1 نوفمبر 1954 ضد المستعمر الفرنسي ودامت 7 سنوات ونصف. استشهد فيها أكثر من مليون ونصف مليون جزائري answer: 7 سنوات ونصف </s>
"
- text: "context: اسكتلندا دولة في شمال غرب أوروبا، تعتبر جزء من الدول الأربع المكونة المملكة المتحدة. تحتل الثلث الشمالي من جزيرة بريطانيا العظمى وتحدها جنوبا إنجلترا ويحدها شرقا بحر الشمال وغربا المحيط الأطلسي. عاصمتها أدنبرة، وأهم مدنها وأكبرها مدينة غلاسكو. كانت اسكتلندا مملكة مستقلة حتى 1 مايو 1707 answer: أدنبرة </s>"
- text: "context: مات المستشار الألماني أدولف هتلر في 30 أبريل 1945 منتحرا عن طريق تناول مادة السيانيد السامة وإطلاق النار على نفسه وهي الرواية العامة المقبولة لطريقة موت الزعيم النازي answer: منتحرا </s>
"
metrics:
- bleu
model-index:
- name: Arabic-Question-Generation
results:
- task:
name: Question-Generation
type: automatic-question-generation
metrics:
- name: Bleu1
type: bleu
value: 37.62
- name: Bleu2
type: bleu
value: 27.80
- name: Bleu3
type: bleu
value: 20.89
- name: Bleu4
type: bleu
value: 15.87
- name: meteor
type: meteor
value: 33.19
- name: rougel
type: rouge
value: 43.37
---
# Arabic Question Generation Model
This model is ready to use for the **Question Generation** task: simply input a text and an answer, and the model will generate a question. It is a fine-tuned version of [AraT5-Base](https://huggingface.co/UBC-NLP/AraT5-base).
## Live Demo
Get the question from a given context and an answer: [Arabic QG Model](https://huggingface.co/spaces/Mihakram/Arabic_Question_Generation)
## Model in Action 🚀
```python
# Requirements: pip install transformers
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("Mihakram/AraT5-base-question-generation")
tokenizer = AutoTokenizer.from_pretrained("Mihakram/AraT5-base-question-generation")
def get_question(context,answer):
text="context: " +context + " " + "answer: " + answer + " </s>"
text_encoding = tokenizer.encode_plus(
text,return_tensors="pt"
)
model.eval()
generated_ids = model.generate(
input_ids=text_encoding['input_ids'],
attention_mask=text_encoding['attention_mask'],
max_length=64,
num_beams=5,
num_return_sequences=1
)
return tokenizer.decode(generated_ids[0],skip_special_tokens=True,clean_up_tokenization_spaces=True).replace('question: ',' ')
context="الثورة الجزائرية أو ثورة المليون شهيد، اندلعت في 1 نوفمبر 1954 ضد المستعمر الفرنسي ودامت 7 سنوات ونصف. استشهد فيها أكثر من مليون ونصف مليون جزائري"
answer =" 7 سنوات ونصف"
get_question(context,answer)
#output : question="كم استمرت الثورة الجزائرية؟ "
```
## Citation
If you want to cite this model you can use this:
## Contacts
**Mihoubi Akram Fawzi**: [Linkedin](https://www.linkedin.com/in/mihoubi-akram/) | [Github](https://github.com/mihoubi-akram) | <[email protected]>
**Ibrir Adel**: [Linkedin]() | [Github]() | <[email protected]>
|
huggingtweets/exxonmobil-tencentglobal-wef
|
huggingtweets
| 2022-10-18T16:36:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-18T12:03:50Z |
---
language: en
thumbnail: http://www.huggingtweets.com/exxonmobil-tencentglobal-wef/1666111008009/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/902558084064616448/YTOCYYnn_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1397133852246646784/Z4XI4oyC_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/565498192171507712/r2Hb2gvX_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ExxonMobil & Tencent 腾讯 & World Economic Forum</div>
<div style="text-align: center; font-size: 14px;">@exxonmobil-tencentglobal-wef</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ExxonMobil & Tencent 腾讯 & World Economic Forum.
| Data | ExxonMobil | Tencent 腾讯 | World Economic Forum |
| --- | --- | --- | --- |
| Tweets downloaded | 3248 | 590 | 3250 |
| Retweets | 209 | 39 | 29 |
| Short tweets | 7 | 1 | 6 |
| Tweets kept | 3032 | 550 | 3215 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/146l36xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @exxonmobil-tencentglobal-wef's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/kqpaxkc6) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/kqpaxkc6/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/exxonmobil-tencentglobal-wef')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
micole66/autotrain-mercuryorsodium-1804662320
|
micole66
| 2022-10-18T16:32:30Z | 42 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:micole66/autotrain-data-mercuryorsodium",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-18T16:32:04Z |
---
tags:
- autotrain
- vision
- image-classification
datasets:
- micole66/autotrain-data-mercuryorsodium
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.3397575484174952
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1804662320
- CO2 Emissions (in grams): 0.3398
## Validation Metrics
- Loss: 0.186
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
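The card does not include a usage section; as a minimal sketch, the model should load with the standard 🤗 `pipeline` API (the image path below is a placeholder):
```python
from transformers import pipeline

# Load the AutoTrain image classifier and run it on a local image
classifier = pipeline("image-classification", model="micole66/autotrain-mercuryorsodium-1804662320")
print(classifier("path/to/image.jpg"))  # placeholder path; replace with your own image
```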
|
mrahusain/ppo-LunarLander-v2
|
mrahusain
| 2022-10-18T16:10:21Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-18T16:09:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -238.72 +/- 303.92
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
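As a minimal sketch (the checkpoint filename below is an assumption; check the repo's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Load the checkpoint from the Hub and watch the agent play (filename assumed)
checkpoint = load_from_hub(repo_id="mrahusain/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _state = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    env.render()
    if done:
        obs = env.reset()
env.close()
```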
|
micole66/autotrain-sexy-or-ugly-1802962297
|
micole66
| 2022-10-18T15:59:45Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"en",
"dataset:micole66/autotrain-data-sexy-or-ugly",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T15:59:23Z |
---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- micole66/autotrain-data-sexy-or-ugly
co2_eq_emissions:
emissions: 0.316594943692132
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1802962297
- CO2 Emissions (in grams): 0.3166
## Validation Metrics
- Loss: 0.616
- Accuracy: 0.800
- Precision: 0.429
- Recall: 0.600
- F1: 0.500
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/micole66/autotrain-sexy-or-ugly-1802962297
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("micole66/autotrain-sexy-or-ugly-1802962297", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("micole66/autotrain-sexy-or-ugly-1802962297", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
huggingtweets/emmarkgadgets
|
huggingtweets
| 2022-10-18T14:50:31Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-10T06:24:48Z |
---
language: en
thumbnail: http://www.huggingtweets.com/emmarkgadgets/1666104626415/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1577618253312040962/_WR59faP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Emmark</div>
<div style="text-align: center; font-size: 14px;">@emmarkgadgets</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Emmark.
| Data | Emmark |
| --- | --- |
| Tweets downloaded | 3249 |
| Retweets | 200 |
| Short tweets | 2112 |
| Tweets kept | 937 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/zjdekgzp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emmarkgadgets's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/uheep0ve) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/uheep0ve/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/emmarkgadgets')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Rocketknight1/mt5-small-finetuned-amazon-en-es
|
Rocketknight1
| 2022-10-18T14:34:56Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"mt5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-23T14:34:02Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/mt5-small-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 10.2613
- Validation Loss: 4.5342
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 9672, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
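These settings correspond roughly to the following Keras setup (a sketch assuming `transformers.AdamWeightDecay` with a `PolynomialDecay` schedule as listed above, not the original training script):
```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Rebuild the optimizer configuration listed above
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=5.6e-05, decay_steps=9672, end_learning_rate=0.0, power=1.0
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule, weight_decay_rate=0.01,
    beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
tf.keras.mixed_precision.set_global_policy("mixed_float16")  # training_precision: mixed_float16
```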
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 10.2613 | 4.5342 | 0 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
philschmid/donut-base-finetuned-cord-v2
|
philschmid
| 2022-10-18T14:16:41Z | 28 | 5 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"donut",
"image-to-text",
"vision",
"endpoints-template",
"arxiv:2111.15664",
"license:mit",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2022-10-18T13:08:02Z |
---
license: mit
tags:
- donut
- image-to-text
- vision
- endpoints-template
---
# Fork of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2)
> This is a fork of [naver-clova-ix/donut-base-finetuned-cord-v2](https://huggingface.co/naver-clova-ix/donut-base-finetuned-cord-v2) implementing a custom `handler.py` as an example of how to use `donut` models with [inference-endpoints](https://hf.co/inference-endpoints)
---
# Donut (base-sized model, fine-tuned on CORD)
Donut model fine-tuned on CORD. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.
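If you just want to run the checkpoint locally (outside Inference Endpoints), a sketch with 🤗 Transformers could look like this; it assumes the `DonutProcessor`/`VisionEncoderDecoderModel` classes and the `<s_cord-v2>` task prompt used by the upstream CORD-v2 checkpoint:
```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("philschmid/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("philschmid/donut-base-finetuned-cord-v2")

image = Image.open("sample.png").convert("RGB")  # receipt image (path assumed)
pixel_values = processor(image, return_tensors="pt").pixel_values

# CORD-v2 checkpoints expect the task start token as the decoder prompt
decoder_input_ids = processor.tokenizer(
    "<s_cord-v2>", add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    bad_words_ids=[[processor.tokenizer.unk_token_id]],
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task prompt token
print(processor.token2json(sequence))
```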
# Use with Inference Endpoints
Hugging Face Inference Endpoints can work directly with binary data, which means we can send the image of our document straight to the endpoint. We are going to use `requests` to send our requests (make sure you have it installed: `pip install requests`).

## Send requests with Python
Load the sample image:
```bash
wget https://huggingface.co/philschmid/donut-base-finetuned-cord-v2/resolve/main/sample.png
```
Send a request to the endpoint:
```python
import json
import requests as r
import mimetypes
ENDPOINT_URL="" # url of your endpoint
HF_TOKEN="" # organization token where you deployed your endpoint
def predict(path_to_image:str=None):
with open(path_to_image, "rb") as i:
b = i.read()
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": mimetypes.guess_type(path_to_image)[0]
}
response = r.post(ENDPOINT_URL, headers=headers, data=b)
return response.json()
prediction = predict(path_to_image="sample.png")
print(prediction)
# {'menu': [{'nm': '0571-1854 BLUS WANITA',
# 'unitprice': '@120.000',
# 'cnt': '1',
# 'price': '120,000'},
# {'nm': '1002-0060 SHOPPING BAG', 'cnt': '1', 'price': '0'}],
# 'total': {'total_price': '120,000',
# 'changeprice': '0',
# 'creditcardprice': '120,000',
# 'menuqty_cnt': '1'}}
```
**curl example**
```bash
curl https://ak7gduay2ypyr9vp.us-east-1.aws.endpoints.huggingface.cloud \
-X POST \
--data-binary '@sample.png' \
-H "Authorization: Bearer XXX" \
-H "Content-Type: image/png"
```
|
lewtun/setfit-finetuned-sst2
|
lewtun
| 2022-10-18T13:52:14Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-18T13:52:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
vvincentt/roberta-base-squad2
|
vvincentt
| 2022-10-18T13:45:25Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-18T10:24:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-squad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/cryptoanglio
|
huggingtweets
| 2022-10-18T13:20:47Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-18T11:38:14Z |
---
language: en
thumbnail: http://www.huggingtweets.com/cryptoanglio/1666099242969/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1539914688611459074/jnZfe1Rf_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Anglio.S🟠L (33.3%)</div>
<div style="text-align: center; font-size: 14px;">@cryptoanglio</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Anglio.S🟠L (33.3%).
| Data | Anglio.S🟠L (33.3%) |
| --- | --- |
| Tweets downloaded | 3213 |
| Retweets | 634 |
| Short tweets | 562 |
| Tweets kept | 2017 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2g8dyjwv/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @cryptoanglio's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3laoj52a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3laoj52a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/cryptoanglio')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jayantapaul888/vit-base-patch16-224-finetuned-memes-v2
|
jayantapaul888
| 2022-10-18T13:09:42Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-17T08:44:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-memes-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8377125193199382
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-memes-v2
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4096
- Accuracy: 0.8377
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
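For reference, these settings map roughly onto `TrainingArguments` like this (an illustrative sketch, not the exact script; the output directory name is an assumption):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-finetuned-memes-v2",  # assumed name
    learning_rate=0.00012,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,  # effective train batch size 256
    num_train_epochs=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```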
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8643 | 0.99 | 20 | 0.6406 | 0.7720 |
| 0.4279 | 1.99 | 40 | 0.4885 | 0.8130 |
| 0.2272 | 2.99 | 60 | 0.4224 | 0.8331 |
| 0.1483 | 3.99 | 80 | 0.4096 | 0.8377 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
Keerthan/finetuning-sentiment-model-distilbert-trial2
|
Keerthan
| 2022-10-18T13:06:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-18T10:21:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-distilbert-trial2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-distilbert-trial2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5119
- Accuracy: 0.9323
- F1: 0.9339
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Rocketknight1/bert-finetuned-ner
|
Rocketknight1
| 2022-10-18T12:52:07Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T12:50:47Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Rocketknight1/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Rocketknight1/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1748
- Validation Loss: 0.0673
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1748 | 0.0673 | 0 |
### Framework versions
- Transformers 4.24.0.dev0
- TensorFlow 2.10.0
- Datasets 2.6.1
- Tokenizers 0.11.0
|
mriggs/byt5-small-finetuned-2epoch-opus_books-en-to-fr
|
mriggs
| 2022-10-18T12:17:44Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_books",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-18T08:41:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- opus_books
model-index:
- name: byt5-small-finetuned-2epoch-opus_books-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-finetuned-2epoch-opus_books-en-to-fr
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the opus_books dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.9652 | 1.0 | 14297 | 0.7181 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
hivemind/gpt-j-6B-8bit
|
hivemind
| 2022-10-18T11:49:06Z | 146 | 131 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"en",
"arxiv:2106.09685",
"arxiv:2110.02861",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- pytorch
- causal-lm
license: apache-2.0
datasets:
- The Pile
---
Note: this model was superseded by the [`load_in_8bit=True` feature in transformers](https://github.com/huggingface/transformers/pull/17901)
by Younes Belkada and Tim Dettmers. Please see [this usage example](https://colab.research.google.com/drive/1qOjXfQIAULfKvZqwCen8-MoWKGdSatZ4#scrollTo=W8tQtyjp75O).
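With a recent `transformers` plus `bitsandbytes`/`accelerate` install, that superseding feature can be used roughly like this (a sketch of the newer approach, applied to the original checkpoint rather than this legacy one):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original GPT-J in 8-bit on GPU (requires bitsandbytes and accelerate)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", device_map="auto", load_in_8bit=True)

inputs = tokenizer("A cat sat on a mat and", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```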
This legacy model was built for [transformers v4.15.0](https://github.com/huggingface/transformers/releases/tag/v4.15.0) and pytorch 1.11. Newer versions could work, but are not supported.
### Quantized EleutherAI/gpt-j-6b with 8-bit weights
This is a version of EleutherAI's GPT-J with 6 billion parameters that is modified so you can generate **and fine-tune the model in colab or equivalent desktop gpu (e.g. single 1080Ti)**.
Here's how to run it: [](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB of memory for float32 parameters alone, and that's before you account for gradients and optimizer state. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of an A6000 or A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive.
Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- using gradient checkpointing to store only one activation per layer: this uses dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)
In other words, all of the large weight-matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases).

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but the difference is not statistically significant.
Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits to each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error.
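As an illustration of the storage-only idea, here is a simplified sketch: plain linear absmax quantization per block, whereas the actual model uses bitsandbytes' nonlinear block-wise quantization.
```python
import torch

def quantize_blockwise(w: torch.Tensor, block: int = 4096):
    # Store weights as int8 plus one float scale per block (assumes numel % block == 0)
    flat = w.flatten().float().view(-1, block)
    scale = flat.abs().max(dim=1, keepdim=True).values / 127.0
    q = torch.round(flat / scale.clamp(min=1e-8)).to(torch.int8)
    return q, scale, w.shape

def dequantize_blockwise(q, scale, shape, dtype=torch.float32):
    # De-quantize just-in-time; on GPU you would use dtype=torch.float16
    return (q.float() * scale).flatten().view(shape).to(dtype)

W = torch.randn(1024, 1024)                      # a frozen weight matrix
q, scale, shape = quantize_blockwise(W)          # 8-bit storage, ~4x smaller than float32
x = torch.randn(8, 1024)
y = x @ dequantize_blockwise(q, scale, shape).T  # all computation in full precision
```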
__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of the gradient-checkpointing overhead (which is about 30%). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU.
### How should I fine-tune the model?
We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
### Where can I train for free?
You can train just fine in Colab, but if you get a K80, it's probably best to switch to other free GPU providers: [Kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [AWS SageMaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [Paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in Kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance.
### Can I use this technique with other models?
The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters.
|
ibm-research/qp-questions
|
ibm-research
| 2022-10-18T11:37:49Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-18T11:16:38Z |
The QP model from the paper [Quality Controlled Paraphrase Generation](https://aclanthology.org/2022.acl-long.45/)
Important: read [this](https://github.com/IBM/quality-controlled-paraphrase-generation/issues/5#issuecomment-1238453742) before any use.
For more details on model training and usage, see this [GitHub repo](https://github.com/IBM/quality-controlled-paraphrase-generation).
|
nayan06/binary-classifier-conversion-intent-1.0
|
nayan06
| 2022-10-18T10:43:32Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"setfit classification",
"binary_classification",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-17T10:13:25Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- setfit classification
- binary_classification
---
This is a SetFit classifier that can be used for conversion-intent detection or other binary classification tasks.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have SetFit installed:
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel, SetFitTrainer
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.0")
preds = model(["view details"])
```
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 573 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 573,
"warmup_steps": 58,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NikitaBaramiia/PPO-FrozenLake-v1
|
NikitaBaramiia
| 2022-10-18T10:22:32Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"FrozenLake-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-18T10:22:28Z |
---
library_name: stable-baselines3
tags:
- FrozenLake-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1
type: FrozenLake-v1
metrics:
- type: mean_reward
value: 0.80 +/- 0.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **FrozenLake-v1**
This is a trained model of a **PPO** agent playing **FrozenLake-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
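As a minimal sketch (the checkpoint filename is an assumption; check the repo's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download and evaluate the agent (filename assumed)
checkpoint = load_from_hub(repo_id="NikitaBaramiia/PPO-FrozenLake-v1", filename="PPO-FrozenLake-v1.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("FrozenLake-v1")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=20, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```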
|
YaYaB/yb_test_inference_endpoint_det
|
YaYaB
| 2022-10-18T10:21:20Z | 0 | 0 | null |
[
"endpoints_compatible",
"region:us"
] | null | 2022-10-18T08:03:22Z |
Please use the image `nvcr.io/nvidia/pytorch:21.11-py3` when you want to launch it.
|
Osaleh/sagemaker-bert-base-intent1018
|
Osaleh
| 2022-10-18T10:13:59Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-18T10:12:46Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sagemaker-bert-base-intent1018
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sagemaker-bert-base-intent1018
This model is a fine-tuned version of [asafaya/bert-base-arabic](https://huggingface.co/asafaya/bert-base-arabic) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0371
- Accuracy: 0.0855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 44 | 4.2225 | 0.0192 |
| No log | 2.0 | 88 | 4.0371 | 0.0855 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1
- Datasets 1.15.1
- Tokenizers 0.10.3
|
victorbahlangene/deberta-v3-small-finetuned-Disaster-Tweets-Part1
|
victorbahlangene
| 2022-10-18T10:10:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-18T10:01:57Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-small-finetuned-Disaster-Tweets-Part1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-small-finetuned-Disaster-Tweets-Part1
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4014
- Accuracy: 0.8564
- F1: 0.8557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 203 | 0.3828 | 0.8415 | 0.8414 |
| No log | 2.0 | 406 | 0.4014 | 0.8564 | 0.8557 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
malteos/bloom-350m-german
|
malteos
| 2022-10-18T09:27:53Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bloom",
"feature-extraction",
"text-generation",
"de",
"license:mit",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-17T12:54:01Z |
---
license: mit
language: de
pipeline_tag: text-generation
---
A [bloom-350m](https://huggingface.co/bigscience/bloom-350m) model trained from scratch on German data.
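A minimal usage sketch (assuming the standard 🤗 text-generation `pipeline`; the prompt is just an example):
```python
from transformers import pipeline

# Generate German text with this model
generator = pipeline("text-generation", model="malteos/bloom-350m-german")
print(generator("Der Sinn des Lebens ist", max_new_tokens=30))
```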
|
ezzouhri/vit-base-patch16-224-in21k-finetuned-eurosat
|
ezzouhri
| 2022-10-18T08:53:56Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-17T09:17:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: vit-base-patch16-224-in21k-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2695
- eval_accuracy: 0.9022
- eval_runtime: 195.5267
- eval_samples_per_second: 21.486
- eval_steps_per_second: 0.675
- epoch: 51.76
- step: 10196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 200
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Sebabrata/lmv2-g-rai_1-995-doc-10-18
|
Sebabrata
| 2022-10-18T08:37:34Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T06:05:06Z |
---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-rai_1-995-doc-10-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-rai_1-995-doc-10-18
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0163
- Dob Key Precision: 0.7638
- Dob Key Recall: 0.7638
- Dob Key F1: 0.7638
- Dob Key Number: 127
- Dob Value Precision: 0.9767
- Dob Value Recall: 0.9767
- Dob Value F1: 0.9767
- Dob Value Number: 129
- Doctor Name Key Precision: 0.6970
- Doctor Name Key Recall: 0.6866
- Doctor Name Key F1: 0.6917
- Doctor Name Key Number: 67
- Doctor Name Value Precision: 0.9275
- Doctor Name Value Recall: 0.9143
- Doctor Name Value F1: 0.9209
- Doctor Name Value Number: 70
- Patient Name Key Precision: 0.7055
- Patient Name Key Recall: 0.7357
- Patient Name Key F1: 0.7203
- Patient Name Key Number: 140
- Patient Name Value Precision: 0.9724
- Patient Name Value Recall: 0.9792
- Patient Name Value F1: 0.9758
- Patient Name Value Number: 144
- Overall Precision: 0.8460
- Overall Recall: 0.8523
- Overall F1: 0.8492
- Overall Accuracy: 0.9958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Dob Key Precision | Dob Key Recall | Dob Key F1 | Dob Key Number | Dob Value Precision | Dob Value Recall | Dob Value F1 | Dob Value Number | Doctor Name Key Precision | Doctor Name Key Recall | Doctor Name Key F1 | Doctor Name Key Number | Doctor Name Value Precision | Doctor Name Value Recall | Doctor Name Value F1 | Doctor Name Value Number | Patient Name Key Precision | Patient Name Key Recall | Patient Name Key F1 | Patient Name Key Number | Patient Name Value Precision | Patient Name Value Recall | Patient Name Value F1 | Patient Name Value Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------:|:--------------:|:----------:|:--------------:|:-------------------:|:----------------:|:------------:|:----------------:|:-------------------------:|:----------------------:|:------------------:|:----------------------:|:---------------------------:|:------------------------:|:--------------------:|:------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:----------------------------:|:-------------------------:|:---------------------:|:-------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.5034 | 1.0 | 796 | 0.0841 | 0.7143 | 0.7480 | 0.7308 | 127 | 0.7881 | 0.9225 | 0.85 | 129 | 0.0 | 0.0 | 0.0 | 67 | 0.0 | 0.0 | 0.0 | 70 | 0.5988 | 0.7143 | 0.6515 | 140 | 0.4908 | 0.9236 | 0.6410 | 144 | 0.5944 | 0.6603 | 0.6256 | 0.9887 |
| 0.0579 | 2.0 | 1592 | 0.0365 | 0.7231 | 0.7402 | 0.7315 | 127 | 0.9766 | 0.9690 | 0.9728 | 129 | 0.6462 | 0.6269 | 0.6364 | 67 | 0.9296 | 0.9429 | 0.9362 | 70 | 0.7103 | 0.7357 | 0.7228 | 140 | 0.9392 | 0.9653 | 0.9521 | 144 | 0.8282 | 0.8405 | 0.8343 | 0.9954 |
| 0.0317 | 3.0 | 2388 | 0.0297 | 0.7578 | 0.7638 | 0.7608 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.7077 | 0.6866 | 0.6970 | 67 | 0.8676 | 0.8429 | 0.8551 | 70 | 0.6474 | 0.7214 | 0.6824 | 140 | 0.8993 | 0.9306 | 0.9147 | 144 | 0.8101 | 0.8316 | 0.8207 | 0.9943 |
| 0.0233 | 4.0 | 3184 | 0.0195 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9403 | 0.9767 | 0.9582 | 129 | 0.7015 | 0.7015 | 0.7015 | 67 | 0.9718 | 0.9857 | 0.9787 | 70 | 0.6164 | 0.7 | 0.6555 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8222 | 0.8538 | 0.8377 | 0.9958 |
| 0.0189 | 5.0 | 3980 | 0.0188 | 0.7462 | 0.7638 | 0.7549 | 127 | 0.9545 | 0.9767 | 0.9655 | 129 | 0.5606 | 0.5522 | 0.5564 | 67 | 0.9565 | 0.9429 | 0.9496 | 70 | 0.6228 | 0.7429 | 0.6775 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8054 | 0.8434 | 0.8240 | 0.9955 |
| 0.0174 | 6.0 | 4776 | 0.0167 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5970 | 0.5970 | 0.5970 | 67 | 0.9714 | 0.9714 | 0.9714 | 70 | 0.6478 | 0.7357 | 0.6890 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8250 | 0.8493 | 0.8370 | 0.9956 |
| 0.0162 | 7.0 | 5572 | 0.0185 | 0.7578 | 0.7638 | 0.7608 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.4272 | 0.6567 | 0.5176 | 67 | 0.9677 | 0.8571 | 0.9091 | 70 | 0.7007 | 0.7357 | 0.7178 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7997 | 0.8434 | 0.8210 | 0.9954 |
| 0.0153 | 8.0 | 6368 | 0.0170 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5758 | 0.5672 | 0.5714 | 67 | 0.9571 | 0.9571 | 0.9571 | 70 | 0.7305 | 0.7357 | 0.7331 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8437 | 0.8449 | 0.8443 | 0.9957 |
| 0.0142 | 9.0 | 7164 | 0.0163 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6970 | 0.6866 | 0.6917 | 67 | 0.9275 | 0.9143 | 0.9209 | 70 | 0.7055 | 0.7357 | 0.7203 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8460 | 0.8523 | 0.8492 | 0.9958 |
| 0.0136 | 10.0 | 7960 | 0.0177 | 0.7405 | 0.7638 | 0.7519 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6094 | 0.5821 | 0.5954 | 67 | 0.8358 | 0.8 | 0.8175 | 70 | 0.6541 | 0.7429 | 0.6957 | 140 | 0.9589 | 0.9722 | 0.9655 | 144 | 0.8075 | 0.8301 | 0.8186 | 0.9953 |
| 0.0131 | 11.0 | 8756 | 0.0202 | 0.7402 | 0.7402 | 0.7402 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5968 | 0.5522 | 0.5736 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.7305 | 0.7357 | 0.7331 | 140 | 0.9655 | 0.9722 | 0.9689 | 144 | 0.8390 | 0.8316 | 0.8353 | 0.9954 |
| 0.0134 | 12.0 | 9552 | 0.0195 | 0.7239 | 0.7638 | 0.7433 | 127 | 0.9237 | 0.8450 | 0.8826 | 129 | 0.5846 | 0.5672 | 0.5758 | 67 | 0.9041 | 0.9429 | 0.9231 | 70 | 0.7305 | 0.7357 | 0.7331 | 140 | 0.9722 | 0.9722 | 0.9722 | 144 | 0.8193 | 0.8168 | 0.8180 | 0.9949 |
| 0.0127 | 13.0 | 10348 | 0.0169 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.7077 | 0.6866 | 0.6970 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.6211 | 0.7143 | 0.6645 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8256 | 0.8464 | 0.8359 | 0.9957 |
| 0.0119 | 14.0 | 11144 | 0.0174 | 0.7638 | 0.7638 | 0.7638 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5821 | 0.5821 | 0.5821 | 67 | 0.9437 | 0.9571 | 0.9504 | 70 | 0.6897 | 0.7143 | 0.7018 | 140 | 0.9338 | 0.9792 | 0.9559 | 144 | 0.8261 | 0.8419 | 0.8339 | 0.9955 |
| 0.013 | 15.0 | 11940 | 0.0174 | 0.6953 | 0.7008 | 0.6980 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6164 | 0.6716 | 0.6429 | 67 | 0.9706 | 0.9429 | 0.9565 | 70 | 0.6667 | 0.7143 | 0.6897 | 140 | 0.9583 | 0.9583 | 0.9583 | 144 | 0.8150 | 0.8331 | 0.8240 | 0.9950 |
| 0.0133 | 16.0 | 12736 | 0.0195 | 0.7008 | 0.7008 | 0.7008 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5823 | 0.6866 | 0.6301 | 67 | 0.9054 | 0.9571 | 0.9306 | 70 | 0.6174 | 0.6571 | 0.6367 | 140 | 0.9161 | 0.9097 | 0.9129 | 144 | 0.7860 | 0.8139 | 0.7997 | 0.9946 |
| 0.0154 | 17.0 | 13532 | 0.0239 | 0.6885 | 0.6614 | 0.6747 | 127 | 0.8623 | 0.9225 | 0.8914 | 129 | 0.5057 | 0.6567 | 0.5714 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.3727 | 0.5857 | 0.4556 | 140 | 0.9655 | 0.9722 | 0.9689 | 144 | 0.6829 | 0.7858 | 0.7308 | 0.9935 |
| 0.0163 | 18.0 | 14328 | 0.0437 | 0.6607 | 0.5827 | 0.6192 | 127 | 0.5736 | 0.8760 | 0.6933 | 129 | 0.4177 | 0.4925 | 0.4521 | 67 | 0.8243 | 0.8714 | 0.8472 | 70 | 0.4845 | 0.5571 | 0.5183 | 140 | 0.5990 | 0.7986 | 0.6845 | 144 | 0.5816 | 0.7001 | 0.6354 | 0.9887 |
| 0.0109 | 19.0 | 15124 | 0.0220 | 0.7578 | 0.7638 | 0.7608 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.7097 | 0.6567 | 0.6822 | 67 | 0.9403 | 0.9 | 0.9197 | 70 | 0.6776 | 0.7357 | 0.7055 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.8404 | 0.8479 | 0.8441 | 0.9955 |
| 0.0104 | 20.0 | 15920 | 0.0184 | 0.6093 | 0.7244 | 0.6619 | 127 | 0.976 | 0.9457 | 0.9606 | 129 | 0.6133 | 0.6866 | 0.6479 | 67 | 0.9437 | 0.9571 | 0.9504 | 70 | 0.6013 | 0.6571 | 0.6280 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7778 | 0.8272 | 0.8017 | 0.9950 |
| 0.0086 | 21.0 | 16716 | 0.0232 | 0.3889 | 0.4409 | 0.4133 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5270 | 0.5821 | 0.5532 | 67 | 0.9444 | 0.9714 | 0.9577 | 70 | 0.5245 | 0.5357 | 0.5300 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7143 | 0.7459 | 0.7298 | 0.9930 |
| 0.0085 | 22.0 | 17512 | 0.0197 | 0.7480 | 0.7480 | 0.7480 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6471 | 0.6567 | 0.6519 | 67 | 0.9189 | 0.9714 | 0.9444 | 70 | 0.6149 | 0.65 | 0.6319 | 140 | 0.9658 | 0.9792 | 0.9724 | 144 | 0.8165 | 0.8346 | 0.8254 | 0.9951 |
| 0.0083 | 23.0 | 18308 | 0.0220 | 0.7328 | 0.7559 | 0.7442 | 127 | 0.9692 | 0.9767 | 0.9730 | 129 | 0.6081 | 0.6716 | 0.6383 | 67 | 0.9571 | 0.9571 | 0.9571 | 70 | 0.6479 | 0.6571 | 0.6525 | 140 | 0.9592 | 0.9792 | 0.9691 | 144 | 0.8170 | 0.8375 | 0.8271 | 0.9952 |
| 0.0084 | 24.0 | 19104 | 0.0226 | 0.6418 | 0.6772 | 0.6590 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5 | 0.7164 | 0.5890 | 67 | 0.8919 | 0.9429 | 0.9167 | 70 | 0.5034 | 0.5286 | 0.5157 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7462 | 0.7991 | 0.7718 | 0.9942 |
| 0.0067 | 25.0 | 19900 | 0.0257 | 0.6691 | 0.7165 | 0.6920 | 127 | 0.9692 | 0.9767 | 0.9730 | 129 | 0.6267 | 0.7015 | 0.6620 | 67 | 0.9143 | 0.9143 | 0.9143 | 70 | 0.6828 | 0.7071 | 0.6947 | 140 | 0.94 | 0.9792 | 0.9592 | 144 | 0.8045 | 0.8390 | 0.8214 | 0.9949 |
| 0.0071 | 26.0 | 20696 | 0.0241 | 0.5828 | 0.6929 | 0.6331 | 127 | 0.9692 | 0.9767 | 0.9730 | 129 | 0.6029 | 0.6119 | 0.6074 | 67 | 0.8889 | 0.9143 | 0.9014 | 70 | 0.5563 | 0.5643 | 0.5603 | 140 | 0.9658 | 0.9792 | 0.9724 | 144 | 0.7602 | 0.7962 | 0.7778 | 0.9943 |
| 0.0072 | 27.0 | 21492 | 0.0222 | 0.6850 | 0.6850 | 0.6850 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.5714 | 0.6567 | 0.6111 | 67 | 0.9178 | 0.9571 | 0.9371 | 70 | 0.6370 | 0.6643 | 0.6503 | 140 | 0.9592 | 0.9792 | 0.9691 | 144 | 0.7983 | 0.8242 | 0.8110 | 0.9948 |
| 0.0057 | 28.0 | 22288 | 0.0259 | 0.5909 | 0.6142 | 0.6023 | 127 | 0.9767 | 0.9767 | 0.9767 | 129 | 0.6714 | 0.7015 | 0.6861 | 67 | 0.9275 | 0.9143 | 0.9209 | 70 | 0.5734 | 0.5857 | 0.5795 | 140 | 0.9724 | 0.9792 | 0.9758 | 144 | 0.7820 | 0.7947 | 0.7883 | 0.9943 |
| 0.0054 | 29.0 | 23084 | 0.0299 | 0.6418 | 0.6772 | 0.6590 | 127 | 0.9618 | 0.9767 | 0.9692 | 129 | 0.6216 | 0.6866 | 0.6525 | 67 | 0.8873 | 0.9 | 0.8936 | 70 | 0.5306 | 0.5571 | 0.5436 | 140 | 0.9655 | 0.9722 | 0.9689 | 144 | 0.7678 | 0.7962 | 0.7817 | 0.9937 |
| 0.0066 | 30.0 | 23880 | 0.0254 | 0.5532 | 0.6142 | 0.5821 | 127 | 0.9259 | 0.9690 | 0.9470 | 129 | 0.5938 | 0.5672 | 0.5802 | 67 | 0.9130 | 0.9 | 0.9065 | 70 | 0.6738 | 0.6786 | 0.6762 | 140 | 0.9592 | 0.9792 | 0.9691 | 144 | 0.7747 | 0.7976 | 0.7860 | 0.9943 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.2.2
- Tokenizers 0.13.1
|
micole66/autotrain-pachyderm-1799762243
|
micole66
| 2022-10-18T08:35:41Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"en",
"dataset:micole66/autotrain-data-pachyderm",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T08:34:29Z |
---
tags:
- autotrain
- token-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- micole66/autotrain-data-pachyderm
co2_eq_emissions:
emissions: 1.2406150246482144
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1799762243
- CO2 Emissions (in grams): 1.2406
## Validation Metrics
- Loss: 0.463
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/micole66/autotrain-pachyderm-1799762243
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("micole66/autotrain-pachyderm-1799762243", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("micole66/autotrain-pachyderm-1799762243", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
teacookies/autotrain-18102022_retoken-1799162225
|
teacookies
| 2022-10-18T08:01:54Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-18102022_retoken",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T07:50:22Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-18102022_retoken
co2_eq_emissions:
emissions: 20.17997164723111
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1799162225
- CO2 Emissions (in grams): 20.1800
## Validation Metrics
- Loss: 0.024
- Accuracy: 0.993
- Precision: 0.829
- Recall: 0.893
- F1: 0.860
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-18102022_retoken-1799162225
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-18102022_retoken-1799162225", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-18102022_retoken-1799162225", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
Makokokoko/AI
|
Makokokoko
| 2022-10-18T07:36:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-18T06:40:52Z |
pip install diffusers transformers nvidia-ml-py3 ftfy torch pillow
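As a minimal, hedged sketch of how the packages above are typically used together (this assumes the intent is running a Stable Diffusion pipeline via `diffusers`; the base checkpoint `runwayml/stable-diffusion-v1-5` is only a placeholder, since this repository does not name one):
```python
# Minimal sketch, not this repository's documented usage.
# Assumption: the dependencies above are meant for a Stable Diffusion workflow via diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed base checkpoint; this repo does not specify one
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("output.png")
```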
|
sd-concepts-library/beholder
|
sd-concepts-library
| 2022-10-18T07:28:26Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-18T07:28:20Z |
---
license: mit
---
### Beholder on Stable Diffusion
This is the `<beholder>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
tehnlulz/pruned_datavq__ydnj-is_phishing-classification
|
tehnlulz
| 2022-10-18T07:15:14Z | 0 | 0 |
sklearn
|
[
"sklearn",
"tabular-classification",
"baseline-trainer",
"license:apache-2.0",
"region:us"
] |
tabular-classification
| 2022-10-18T07:15:12Z |
---
license: apache-2.0
library_name: sklearn
tags:
- tabular-classification
- baseline-trainer
---
## Baseline Model trained on pruned_datavq__ydnj to apply classification on is_phishing
**Metrics of the best model (`DecisionTreeClassifier(class_weight='balanced', max_depth=1)`):**

| Metric | Value |
|:------------------|:----:|
| accuracy | 1.0 |
| average_precision | 1.0 |
| roc_auc | 1.0 |
| recall_macro | 1.0 |
| f1_macro | 1.0 |
**Best model pipeline** (the interactive scikit-learn HTML diagram is omitted here): `Pipeline(steps=[('easypreprocessor', EasyPreprocessor(...)), ('decisiontreeclassifier', DecisionTreeClassifier(class_weight='balanced', max_depth=1))])`, where the `EasyPreprocessor` detected `id` as a continuous feature and `bad_domain` as a free-string feature.
**Disclaimer:** This model is trained with dabl library as a baseline, for better results, use [AutoTrain](https://huggingface.co/autotrain).
**Logs of training** including the models tried in the process can be found in logs.txt
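For orientation, here is a rough, hedged sketch of how a dabl baseline of this kind is typically produced (the file name `pruned_datavq__ydnj.csv` and the exact dabl calls are assumptions; the actual run was performed by baseline-trainer and is recorded in logs.txt):
```python
# Rough sketch only -- not the exact baseline-trainer script.
import dabl
import pandas as pd

# Assumption: the dataset is available locally as a CSV with an is_phishing column.
df = pd.read_csv("pruned_datavq__ydnj.csv")

# SimpleClassifier runs dabl's EasyPreprocessor and a sweep of simple baseline
# models, refitting the best one (here a depth-1 decision tree).
clf = dabl.SimpleClassifier()
clf.fit(df, target_col="is_phishing")

# Score new rows with the refit best pipeline.
preds = clf.predict(df.drop(columns=["is_phishing"]).head())
print(preds)
```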
|
tkubotake/xlm-roberta-base-finetuned-panx-de
|
tkubotake
| 2022-10-18T06:51:15Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T06:26:50Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.de
split: train
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Formzu/bert-base-japanese-jsnli
|
Formzu
| 2022-10-18T03:13:20Z | 59 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"ja",
"dataset:JSNLI",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-14T07:50:13Z |
---
language:
- ja
license: cc-by-sa-4.0
tags:
- zero-shot-classification
- text-classification
- nli
- pytorch
metrics:
- accuracy
datasets:
- JSNLI
pipeline_tag: text-classification
widget:
- text: "あなたが好きです。 あなたを愛しています。"
model-index:
- name: bert-base-japanese-jsnli
results:
- task:
type: text-classification
name: Natural Language Inference
dataset:
type: snli
name: JSNLI
split: dev
metrics:
- type: accuracy
value: 0.9288
verified: false
---
# bert-base-japanese-jsnli
This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) on the [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2085
- Accuracy: 0.9288
### How to use the model
#### Simple zero-shot classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="Formzu/bert-base-japanese-jsnli")
sequence_to_classify = "いつか世界を見る。"
candidate_labels = ['旅行', '料理', '踊り']
out = classifier(sequence_to_classify, candidate_labels, hypothesis_template="この例は{}です。")
print(out)
#{'sequence': 'いつか世界を見る。',
# 'labels': ['旅行', '料理', '踊り'],
# 'scores': [0.6758995652198792, 0.22110949456691742, 0.1029909998178482]}
```
#### NLI use-case
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model_name = "Formzu/bert-base-japanese-jsnli"
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)
premise = "いつか世界を見る。"
label = '旅行'
hypothesis = f'この例は{label}です。'
input = tokenizer.encode(premise, hypothesis, return_tensors='pt').to(device)
with torch.no_grad():
logits = model(input)["logits"][0]
probs = logits.softmax(dim=-1)
print(probs.cpu().numpy(), logits.cpu().numpy())
#[0.68940836 0.29482093 0.01577068] [ 1.7791482 0.92968255 -1.998533 ]
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
| :-----------: | :---: | :---: | :-------------: | :------: |
| 0.4054 | 1.0 | 16657 | 0.2141 | 0.9216 |
| 0.3297 | 2.0 | 33314 | 0.2145 | 0.9236 |
| 0.2645 | 3.0 | 49971 | 0.2085 | 0.9288 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu116
- Datasets 2.4.0
- Tokenizers 0.12.1
|
joelb/custom-handler-tutorial
|
joelb
| 2022-10-18T02:23:12Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"emotion",
"endpoints-template",
"en",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-18T02:21:57Z |
---
language:
- en
tags:
- text-classification
- emotion
- endpoints-template
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy, F1 Score
---
# Fork of [bhadresh-savani/distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion)
|
sd-concepts-library/arq-render
|
sd-concepts-library
| 2022-10-18T02:10:58Z | 0 | 8 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-18T02:10:47Z |
---
license: mit
---
### arq render on Stable Diffusion
This is the `<arq-style>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
teacookies/autotrain-17102022-update_scope_and_date-1789062099
|
teacookies
| 2022-10-18T01:53:54Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17102022-update_scope_and_date",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-18T01:42:37Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022-update_scope_and_date
co2_eq_emissions:
emissions: 19.692537664708304
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1789062099
- CO2 Emissions (in grams): 19.6925
## Validation Metrics
- Loss: 0.029
- Accuracy: 0.992
- Precision: 0.777
- Recall: 0.826
- F1: 0.801
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022-update_scope_and_date-1789062099
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022-update_scope_and_date-1789062099", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022-update_scope_and_date-1789062099", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
althoughh/distilroberta-base-finetuned-wikitext2
|
althoughh
| 2022-10-18T01:23:37Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-18T01:13:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-wikitext2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7037
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 251 | 1.7837 |
| 2.0311 | 2.0 | 502 | 1.7330 |
| 2.0311 | 3.0 | 753 | 1.7085 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
corgi777/distilbert-base-uncased-finetuned-emotion
|
corgi777
| 2022-10-18T01:00:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-18T00:07:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9262012280043272
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2135
- Accuracy: 0.926
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.2996 | 0.915 | 0.9124 |
| No log | 2.0 | 500 | 0.2135 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
KarelDO/gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42
|
KarelDO
| 2022-10-18T00:17:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:OpenTable",
"license:mit",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-10-18T00:13:32Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- OpenTable
metrics:
- accuracy
model-index:
- name: gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: OpenTable OPENTABLE-ABSA
type: OpenTable
args: opentable-absa
metrics:
- name: Accuracy
type: accuracy
value: 0.8310893512851897
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2.CEBaB_confounding.price_food_ambiance_negative.absa.5-class.seed_42
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the OpenTable OPENTABLE-ABSA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4726
- Accuracy: 0.8311
- Macro-f1: 0.8295
- Weighted-macro-f1: 0.8313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.5.2
- Tokenizers 0.12.1
|
MrBananaHuman/re_generator
|
MrBananaHuman
| 2022-10-17T23:26:07Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-20T13:43:52Z |
important_labels = {
    "no_relation": "관계 없음",            # no relation
    "per:employee_of": "고용",             # employment
    "org:member_of": "소속",               # affiliation
    "org:place_of_headquarters": "장소",   # location
    "org:top_members/employees": "대표",   # representative
    "per:origin": "출신",                  # origin
    "per:title": "직업",                   # occupation / title
    "per:colleagues": "동료",              # colleague
    "org:members": "소속",                 # members / affiliation
    "org:alternate_names": "본명",         # real name
    "per:place_of_residence": "거주지"     # place of residence
}
https://colab.research.google.com/drive/1K3lygU6BBLsFwI99JNaX8BauH7vgUsv9?authuser=1#scrollTo=h8-68Ko_pKpJ
|
MrBananaHuman/en_ko_translator
|
MrBananaHuman
| 2022-10-17T23:24:52Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-29T04:28:10Z |
https://colab.research.google.com/drive/1AD96dq3y0s2MSzWKgCpI9-oHMpzsbyR2?authuser=1
|
MrBananaHuman/ko_en_translator
|
MrBananaHuman
| 2022-10-17T23:24:40Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-29T04:39:37Z |
https://colab.research.google.com/drive/1AD96dq3y0s2MSzWKgCpI9-oHMpzsbyR2?authuser=1
|
sd-concepts-library/ghost-style
|
sd-concepts-library
| 2022-10-17T23:08:16Z | 0 | 2 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-17T23:08:12Z |
---
license: mit
---
### GHOST style on Stable Diffusion
This is the `<ghost>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
facebook/textless_sm_et_es
|
facebook
| 2022-10-17T23:06:02Z | 1 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T01:22:43Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_de_es
|
facebook
| 2022-10-17T23:05:53Z | 2 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T01:22:09Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_cs_es
|
facebook
| 2022-10-17T23:05:44Z | 4 | 1 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T00:03:47Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/unit_hifigan_mhubert_vp_en_es_fr_it3_400k_layer11_km1000_es_css10
|
facebook
| 2022-10-17T22:56:56Z | 4 | 0 |
fairseq
|
[
"fairseq",
"audio",
"text-to-speech",
"en",
"dataset:mtedx",
"dataset:covost2",
"dataset:europarl_st",
"dataset:voxpopuli",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-speech
| 2022-10-17T22:13:09Z |
---
license: cc-by-nc-4.0
library_name: fairseq
task: text-to-speech
tags:
- fairseq
- audio
- text-to-speech
language: en
datasets:
- mtedx
- covost2
- europarl_st
- voxpopuli
---
|
facebook/textless_sm_ro_fr
|
facebook
| 2022-10-17T22:12:06Z | 3 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T01:21:43Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_fi_fr
|
facebook
| 2022-10-17T22:10:59Z | 2 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T01:20:46Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_es_fr
|
facebook
| 2022-10-17T22:10:39Z | 5 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T01:20:29Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
facebook/textless_sm_de_fr
|
facebook
| 2022-10-17T22:09:29Z | 4 | 0 |
fairseq
|
[
"fairseq",
"audio",
"audio-to-audio",
"speech-to-speech-translation",
"license:cc-by-nc-4.0",
"region:us"
] |
audio-to-audio
| 2022-10-16T01:19:56Z |
---
library_name: fairseq
task: audio-to-audio
tags:
- fairseq
- audio
- audio-to-audio
- speech-to-speech-translation
license: cc-by-nc-4.0
---
You can try out the model on the right of the page by uploading or recording.
For model usage, please refer to https://huggingface.co/facebook/textless_sm_cs_en
|
Kateryna/eva_ru_forum_headlines
|
Kateryna
| 2022-10-17T21:44:55Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-04-20T00:00:24Z |
---
language:
- ru
widget:
- text: "Цель одна - истребление как можно больше славянских народов. На очереди поляки, они тоже славяне, их тоже на утилизировать. Это Цель НАТО. Ну и заодно разрушение экономики ЕС, ну и Китай дот кучи под плинтус загнать."
- text: "Дочке 15, книг не читает, вся жизнь (вне школы) в телефоне на кровати. Любознательности ноль. Куда-то поехать в новое место, узнать что-то, найти интересные курсы - вообще не про нее. Учеба все хуже, багажа знаний уже нет, списывает и выкручивается в течение четверти, как контрольная или что-то посерьезнее, где не списать - на 2-3. При любой возможности не ходит в школу (голова болит, можно сегодня не пойду. а потом пятница, что на один день ходить...)"
- "Ребёнок учится в 8 классе. По алгебре одни тройки. Но это точно 2. Просто учитель не будет ставить в четверти 2. Она гуманитарий. Алгебра никак не идёт. Репетитор сейчас занимается, понимает только лёгкие темы. Я боюсь, что провалит ОГЭ. Там пересдать можно? А если опять 2,это второй год?"
---
# eva_ru_forum_headlines
## Model Description
The model was trained on forum topic names and first posts (100-150 words). It generates short headlines (3-5 words), in contrast to the headlines produced by models trained on newspaper articles.
"I do not know how to title this post" can be a valid headline.
"What would you do in my place?" is one of the most popular headline.
### Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
model_name = "Kateryna/eva_ru_forum_headlines"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Я влюбилась в одного парня. Каждый раз, когда он меня видит, он плюется и переходит на другую сторону улицы. Как вы думаете, он меня любит?"
input_ids = tokenizer(
[text],
max_length=150,
add_special_tokens=True,
padding="max_length",
truncation=True,
return_tensors="pt"
)["input_ids"]
output_ids = model.generate(
input_ids=input_ids,
max_length=25,
num_beams=4,
repetition_penalty=5.0,
no_repeat_ngram_size=4
)[0]
headline = tokenizer.decode(output_ids, skip_special_tokens=True)
print(headline)
```
### Training and Validation
Training dataset: https://huggingface.co/datasets/Kateryna/eva_ru_forum_headlines
From all available posts and topic names, I selected only posts with abstractive topic names, i.e. topic names that do not exactly match anything in the corresponding post.
The base model is cointegrated/rut5-base
Training parameters:
- max_source_tokens_count = 150
- max_target_tokens_count = 25
- learning_rate = 0.0007
- num_train_epochs = 3
- batch_size = 8
- gradient_accumulation_steps = 96
ROUGE and BLEU scores were not very helpful in choosing the best model.
I manually evaluated ~100 results from each candidate model.
1. The smaller the gradient_accumulation_steps, the more abstractive the headlines, but they become less and less related to the corresponding posts. The worst model, with gradient_accumulation_steps = 1, produced headlines that were all abstractive but random.
2. The source for the model is real short texts written by ordinary people without any editing. In many cases, the forum posts are not connected sentences, and it is not clear what the author wanted to say or discuss. Sometimes there is a contradiction in the text, and only the real topic name reveals what it is all about. Naturally, the model fails to produce a good headline in such cases.
https://github.com/KaterynaD/eva.ru/tree/main/Code/Notebooks/9.%20Headlines
|
WonderingNut/TheNuts
|
WonderingNut
| 2022-10-17T21:38:06Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-17T21:38:06Z |
---
license: creativeml-openrail-m
---
|
ArafatBHossain/distilbert-base-uncased_fine_tuned_sent140
|
ArafatBHossain
| 2022-10-17T20:59:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-17T20:51:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_fine_tuned_sent140
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0133
- Accuracy: 0.7674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.6699 | 0.7807 |
| 0.7334 | 2.0 | 816 | 0.7937 | 0.7781 |
| 0.3584 | 3.0 | 1224 | 1.0133 | 0.7674 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
damilare-akin/test_worm
|
damilare-akin
| 2022-10-17T20:57:03Z | 9 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2022-10-17T19:48:45Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Step 1: Write your model_id: damilare-akin/test_worm
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ArafatBHossain/debert_base_fine_tuned_sent140
|
ArafatBHossain
| 2022-10-17T20:47:44Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-17T20:21:43Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: debert_base_fine_tuned_sent140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# debert_base_fine_tuned_sent140
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9678
- Accuracy: 0.7647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.8139 | 0.7219 |
| 0.8198 | 2.0 | 816 | 0.7742 | 0.7460 |
| 0.4479 | 3.0 | 1224 | 0.9678 | 0.7647 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
bwconrad/beit-base-patch16-224-pt22k-ft22k-dafre
|
bwconrad
| 2022-10-17T20:38:52Z | 0 | 0 | null |
[
"arxiv:2101.08674",
"license:apache-2.0",
"region:us"
] | null | 2022-10-17T17:26:30Z |
---
license: apache-2.0
---
A BEiT-b/16 model fine-tuned for anime character classification on the [DAF:re dataset](https://arxiv.org/abs/2101.08674). Training code can be found [here](https://github.com/bwconrad/dafre).
## DAF:re Results
| Top-1 Val Acc | Top-5 Val Acc | Top-1 Test Acc| Top-5 Test Acc|
|:-------------:|:-------------:|:-------------:|:-------------:|
| 95.26 | 98.38 | 94.84 | 98.30 |
|
kevinbror/dead
|
kevinbror
| 2022-10-17T19:53:11Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-17T19:52:20Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dead
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dead
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.6198
- Train End Logits Accuracy: 0.5843
- Train Start Logits Accuracy: 0.5459
- Validation Loss: 1.2514
- Validation End Logits Accuracy: 0.6603
- Validation Start Logits Accuracy: 0.6255
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2766, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.6198 | 0.5843 | 0.5459 | 1.2514 | 0.6603 | 0.6255 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
heriosousa/LunarLander-v2
|
heriosousa
| 2022-10-17T19:47:50Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-17T19:44:50Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -161.34 +/- 91.29
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': '__file__'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'f': None
'repo_id': 'heriosousa/LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
heriosousa/ppo-CartPole-v1
|
heriosousa
| 2022-10-17T19:46:56Z | 0 | 0 | null |
[
"tensorboard",
"CartPole-v1",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-17T19:05:13Z |
---
tags:
- CartPole-v1
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 148.00 +/- 47.52
name: mean_reward
verified: false
---
# PPO Agent Playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1.
To learn to code your own PPO agent and train it, check out Unit 8 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit8
# Hyperparameters
```python
{'exp_name': '__file__'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'CartPole-v1'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'f': '/root/.local/share/jupyter/runtime/kernel-9c96fe8c-041c-4681-aa25-a76703c94d0d.json'
'repo_id': 'heriosousa/ppo-CartPole-v1'
'batch_size': 512
'minibatch_size': 128}
```
|
introduck/en_ner_vc_lg
|
introduck
| 2022-10-17T19:19:13Z | 0 | 2 |
spacy
|
[
"spacy",
"token-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-29T21:30:53Z |
---
language: en
license: mit
tags:
- spacy
- token-classification
---
English pipeline optimized for CPU. Components: ner.
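A minimal usage sketch (assuming the packaged pipeline has been installed into the current environment, e.g. from the wheel in this repository; the example sentence is illustrative):
```python
import spacy

# Assumes the en_ner_vc_lg package is installed; spacy.load resolves it by package name.
nlp = spacy.load("en_ner_vc_lg")

doc = nlp("Acme Ventures led a $5M seed round in ExampleAI last spring.")
for ent in doc.ents:
    print(ent.text, ent.label_)
```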
|
pfr/conditional-utilitarian-roberta-01
|
pfr
| 2022-10-17T19:08:15Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-27T21:14:11Z |
---
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Conditional Utilitarian Roberta 01
## Model description
This is a [Roberta-based](https://huggingface.co/roberta-large) model. It was first fine-tuned for computing utility estimates of experiences (see [utilitarian-roberta-01](https://huggingface.co/pfr/utilitarian-roberta-01)). It was then further fine-tuned on 160 examples of pairwise comparisons of conditional utilities.
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios, under extra contextual information.
## Limitations
The model was fine-tuned on only 160 examples, so it should be expected to have limited performance.
Further, while the base model was trained on ~10000 examples, these are still a restricted sample consisting only of first-person sentences. It does not have the capability of interpreting highly complex or unusual scenarios, and it does not have hard guarantees on its domain of accuracy.
## How to use
Given a scenario S under a context C, and the model U, one computes the estimated conditional utility with `U(f'{C} {S}') - U(C)`.
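As a hedged sketch, this computation can be done with the `transformers` text-classification pipeline; passing `function_to_apply="none"` mirrors the widget configuration in this card so the raw logit is returned, and the context/scenario strings below are made-up examples:
```python
from transformers import pipeline

scorer = pipeline("text-classification", model="pfr/conditional-utilitarian-roberta-01")

def utility(text: str) -> float:
    # Return the raw logit rather than a softmax probability.
    return scorer(text, function_to_apply="none")[0]["score"]

context = "I have been stuck indoors all week."   # C (illustrative)
scenario = "I cuddled with my dog today."         # S (illustrative)

# Estimated conditional utility: U(C + S) - U(C)
print(utility(f"{context} {scenario}") - utility(context))
```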
## Training data
The first training data is the train split from the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
The second training set consists of 160 crowdsourced triples (S, C0, C1), each containing one scenario and two possible contexts, where `U(S | C0) > U(S | C1)`.
## Training procedure
Starting from [utilitarian-roberta-01](https://huggingface.co/pfr/utilitarian-roberta-01), we fine-tune the model on the 160 training examples with a learning rate of `1e-5` and a batch size of `8` for 2 epochs.
## Evaluation results
The model achieves ~70% accuracy over 40 crowdsourced examples, from the same distribution as the training data.
|
pfr/utilitarian-roberta-01
|
pfr
| 2022-10-17T18:41:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-22T20:49:32Z |
---
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Utilitarian Roberta 01
## Model description
This is a [Roberta model](https://huggingface.co/roberta-large) fine-tuned for computing utility estimates of experiences, represented in first-person sentences. It was trained from human-annotated pairwise utility comparisons from the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios.
## Limitations
The model was only trained on a limited number of scenarios, and only on first-person sentences. It does not have the capability of interpreting highly complex or unusual scenarios, and it does not have hard guarantees on its domain of accuracy.
## How to use
The model receives a sentence describing a scenario in first-person, and outputs a scalar representing a utility estimate.
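For illustration (a minimal sketch, not an official snippet from this repo), the scalar can be read off the model's single logit, consistent with the `function_to_apply: "none"` widget setting above:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "pfr/utilitarian-roberta-01"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("I cuddled with my dog today.", return_tensors="pt")
with torch.no_grad():
    utility = model(**inputs).logits[0, 0].item()  # scalar utility estimate
print(utility)
```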
## Training data
The training data is the train split from the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Training procedure
Training can be reproduced by executing the training procedure from [`tune.py`](https://github.com/hendrycks/ethics/blob/3e4c09259a1b4022607da093e9452383fc1bb7e3/utilitarianism/tune.py) as follows:
```
python tune.py --ngpus 1 --model roberta-large --learning_rate 1e-5 --batch_size 16 --nepochs 2
```
## Evaluation results
The model achieves 90.8% accuracy on [The Moral Uncertainty Research Competition](https://moraluncertainty.mlsafety.org/), which consists of a subset of the ETHICS dataset.
|
pfr/utilitarian-deberta-01
|
pfr
| 2022-10-17T18:36:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"deberta-v3",
"arxiv:2008.02275",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-23T03:33:34Z |
---
tags:
- deberta-v3
inference:
parameters:
function_to_apply: "none"
widget:
- text: "I cuddled with my dog today."
---
# Utilitarian Deberta 01
## Model description
This is a [Deberta model](https://huggingface.co/microsoft/deberta-v3-large) fine-tuned for computing utility estimates of experiences, represented in first-person sentences. It was trained from human-annotated pairwise utility comparisons from the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Intended use
The main use case is the computation of utility estimates of first-person text scenarios.
## Limitations
The model was only trained on a limited number of scenarios, and only on first-person sentences. It does not have the capability of interpreting highly complex or unusual scenarios, and it does not have hard guarantees on its domain of accuracy.
## How to use
The model receives a sentence describing a scenario in first-person, and outputs a scalar representing a utility estimate.
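As a rough sketch (assuming the standard `transformers` text-classification pipeline applies to this checkpoint), the raw scalar can also be obtained by disabling the output activation:
```python
from transformers import pipeline

# function_to_apply="none" returns the raw logit, which this card treats as the utility estimate.
scorer = pipeline("text-classification", model="pfr/utilitarian-deberta-01")
print(scorer("I cuddled with my dog today.", function_to_apply="none"))
```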
## Training data
The training data is the train split from the Utilitarianism part of the [ETHICS dataset](https://arxiv.org/abs/2008.02275).
## Training procedure
Training can be reproduced by executing the training procedure from [`tune.py`](https://github.com/hendrycks/ethics/blob/3e4c09259a1b4022607da093e9452383fc1bb7e3/utilitarianism/tune.py) as follows:
```
python tune.py --ngpus 1 --model microsoft/deberta-v3-large --learning_rate 1e-5 --batch_size 16 --nepochs 2
```
## Evaluation results
The model achieves 92.2% accuracy on [The Moral Uncertainty Research Competition](https://moraluncertainty.mlsafety.org/), which consists of a subset of the ETHICS dataset.
|
sd-concepts-library/willy-hd
|
sd-concepts-library
| 2022-10-17T17:55:03Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-17T17:54:56Z |
---
license: mit
---
### Willy-HD on Stable Diffusion
This is the `<willy_character>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





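As a hedged sketch (not an official snippet from the concepts library; it assumes a recent `diffusers` version that provides `load_textual_inversion`, a CUDA GPU, and uses `runwayml/stable-diffusion-v1-5` as an example base checkpoint), the learned embedding can be pulled into a Stable Diffusion pipeline like so:
```python
import torch
from diffusers import StableDiffusionPipeline

# Example base checkpoint; any SD 1.x pipeline compatible with the embedding should work.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Loads the <willy_character> token embedding from this concept repo.
pipe.load_textual_inversion("sd-concepts-library/willy-hd")

image = pipe("a photo of <willy_character> riding a bicycle").images[0]
image.save("willy.png")
```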
|
kevinbror/nyaszzzz
|
kevinbror
| 2022-10-17T17:50:04Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-17T17:32:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: nyaszzzz
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nyaszzzz
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4490
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 0.5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6801 | 0.5 | 1384 | 1.4490 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
damilare-akin/testpyramidsrnd
|
damilare-akin
| 2022-10-17T16:53:49Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-10-17T16:53:41Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: damilare-akin/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pachi107/autotrain-ethos-sentiments-1790262082
|
pachi107
| 2022-10-17T16:30:43Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:pachi107/autotrain-data-ethos-sentiments",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-17T16:29:55Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pachi107/autotrain-data-ethos-sentiments
co2_eq_emissions:
emissions: 0.8181506582658064
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1790262082
- CO2 Emissions (in grams): 0.8182
## Validation Metrics
- Loss: 0.565
- Accuracy: 0.775
- Precision: 0.783
- Recall: 0.832
- AUC: 0.823
- F1: 0.807
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pachi107/autotrain-ethos-sentiments-1790262082
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262082", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262082", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
pachi107/autotrain-ethos-sentiments-1790262081
|
pachi107
| 2022-10-17T16:30:36Z | 100 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:pachi107/autotrain-data-ethos-sentiments",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-17T16:29:48Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pachi107/autotrain-data-ethos-sentiments
co2_eq_emissions:
emissions: 1.1459528952345301
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1790262081
- CO2 Emissions (in grams): 1.1460
## Validation Metrics
- Loss: 0.498
- Accuracy: 0.795
- Precision: 0.781
- Recall: 0.885
- AUC: 0.857
- F1: 0.830
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pachi107/autotrain-ethos-sentiments-1790262081
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262081", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262081", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
pachi107/autotrain-ethos-sentiments-1790262079
|
pachi107
| 2022-10-17T16:30:34Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"text-classification",
"en",
"dataset:pachi107/autotrain-data-ethos-sentiments",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-17T16:29:42Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- pachi107/autotrain-data-ethos-sentiments
co2_eq_emissions:
emissions: 0.8438685047317921
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1790262079
- CO2 Emissions (in grams): 0.8439
## Validation Metrics
- Loss: 0.513
- Accuracy: 0.755
- Precision: 0.881
- Recall: 0.655
- AUC: 0.857
- F1: 0.751
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/pachi107/autotrain-ethos-sentiments-1790262079
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262079", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("pachi107/autotrain-ethos-sentiments-1790262079", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
sd-concepts-library/zero
|
sd-concepts-library
| 2022-10-17T16:16:00Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-17T16:15:56Z |
---
license: mit
---
### zero on Stable Diffusion
This is the `<zero>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:





|
une/uneune-diffusion
|
une
| 2022-10-17T14:58:29Z | 0 | 1 | null |
[
"stable-diffusion",
"text-to-image",
"ja",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2022-10-15T04:37:52Z |
---
language:
- "ja"
thumbnail: "https://pbs.twimg.com/media/Fei_dTPagAETB1q?format=png&name=900x900"
tags:
- stable-diffusion
- text-to-image
license: "creativeml-openrail-m"
---
This is a ckpt file based on waifudiffusion 1.3, fine-tuned on 348 of my own illustrations as training data.
token: uneune
class: person
It would be fun if more people released models trained on their own art style.
Also posted on pixiv:
https://www.pixiv.net/artworks/101775163
|
israel/AmhWordPieceTokenizer
|
israel
| 2022-10-17T14:32:38Z | 0 | 0 | null |
[
"Amharic",
"Word Piece Tokenizer",
"Tokenizer",
"amh",
"doi:10.57967/hf/0044",
"license:cc-by-4.0",
"region:us"
] | null | 2022-10-17T11:11:22Z |
---
language:
- amh
tags:
- Amharic
- Word Piece Tokenizer
- Tokenizer
license: cc-by-4.0
---
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("israel/AmhWordPieceTokenizer")
# tokenizer.encode() returns token ids; use tokenize() to inspect the WordPiece tokens
tokens = tokenizer.tokenize("ኮሌጁ ቢያስተምርም ወደስራ የሚመድባቸው መንግስት ነው abcs")
print(tokens)
```
|
airnicco8/xlm-roberta-en-it-de
|
airnicco8
| 2022-10-17T14:15:20Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"english",
"german",
"italian",
"nli",
"text-classification",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-14T08:53:59Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- english
- german
- italian
- nli
- text-classification
---
# airnicco8/xlm-roberta-en-it-de
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. It is a student XLMRoBERTa model trained in order to have multilingual sentence embeddings for English, German and Italian. It can be fine-tuned for downstream tasks, such as: semantic similarity (example provided here), NLI and Text Classification.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('airnicco8/xlm-roberta-en-it-de')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('airnicco8/xlm-roberta-en-it-de')
model = AutoModel.from_pretrained('airnicco8/xlm-roberta-en-it-de')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=airnicco8/xlm-roberta-en-it-de)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 6142 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
pszemraj/neuspell-scrnn-probwordnoise
|
pszemraj
| 2022-10-17T14:04:44Z | 0 | 0 | null |
[
"pytorch",
"neuspell",
"spelling",
"spell-correction",
"license:apache-2.0",
"region:us"
] | null | 2022-10-11T02:29:11Z |
---
languages: en
license: apache-2.0
tags:
- neuspell
- spelling
- spell-correction
---
# neuspell-scrnn-probwordnoise
> towards a reliable workaround for the `neuspell` lib being broken.
See the [github repository](https://github.com/neuspell/neuspell) for usage and all official information.
## usage
clone this model repo with git:
```bash
sudo apt-get install git-lfs -q
git clone https://huggingface.co/pszemraj/neuspell-scrnn-probwordnoise
```
install neuspell:
```bash
pip install -U neuspell -q
```
use in python for spell correction:
```python
from neuspell import SclstmChecker
checker = SclstmChecker()
checker.from_pretrained("./neuspell-scrnn-probwordnoise/")
checker.correct("I luk foward to receving your reply") # correct a string
checker.correct_strings(
["I luk foward to receving your reply", "were did wendigo goe boating?"]
) # correct a list of strings
```
|
teacookies/autotrain-17102022-cert_update_date-1786462003
|
teacookies
| 2022-10-17T12:34:15Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-17102022-cert_update_date",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-17T12:23:09Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-17102022-cert_update_date
co2_eq_emissions:
emissions: 18.37074974959855
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1786462003
- CO2 Emissions (in grams): 18.3707
## Validation Metrics
- Loss: 0.019
- Accuracy: 0.995
- Precision: 0.835
- Recall: 0.867
- F1: 0.851
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-17102022-cert_update_date-1786462003
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-17102022-cert_update_date-1786462003", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-17102022-cert_update_date-1786462003", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
ner4archives/fr_ner4archives_v3_with_vectors
|
ner4archives
| 2022-10-17T12:32:56Z | 30 | 0 |
spacy
|
[
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
] |
token-classification
| 2022-10-14T12:41:47Z |
---
widget:
- text: "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, pour meurtre commis à Saint-Haon 1, au pays de Roannais, sur la personne de Driet Cantin qui l'accusait d'avoir maltraité un de ses pages et de l'avoir dépouillé d'une jument (Fol 145 v°, n° 415) Septembre 1501."
example_title: "FRAN_IR_000061"
- text: "BB/29/988 page 143 Penne (Lot-et-Garronne) 14 décembre 1822. BB/29/988 page 145 Billom (Puy-de-Dôme) 11 janvier 1823."
example_title: "FRAN_IR_050370"
tags:
- spacy
- token-classification
language:
- fr
model-index:
- name: fr_ner4archives_v3_with_vectors
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8829593693
- name: NER Recall
type: recall
value: 0.8489795918
- name: NER F Score
type: f_score
value: 0.8656361474
---
| Feature | Description |
| --- | --- |
| **Name** | `fr_ner4archives_v3_with_vectors` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 500000 keys, 500000 unique vectors (300 dimensions) |
| **Sources** | French corpus for the NER task composed of finding aids in XML-EAD from the National Archives of France (v. 3.0) - [Check corpus version on GitHub](https://github.com/NER4Archives-project/Corpus_TrainingData) |
| **License** | CC-BY-4.0 license |
| **Author** | [Archives nationales]() / [Inria-Almanach]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `EVENT`, `LOCATION`, `ORGANISATION`, `PERSON`, `TITLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 86.56 |
| `ENTS_P` | 88.30 |
| `ENTS_R` | 84.90 |
| `TOK2VEC_LOSS` | 13527.63 |
| `NER_LOSS` | 58805.82 |
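A possible usage sketch (assuming the pipeline package has been installed locally, e.g. from the wheel distributed in this repo; the package name below mirrors the model name and may need adjusting). The example sentence is taken from the first widget text above:
```python
import spacy

# Installed package name; adjust to match your local installation.
nlp = spacy.load("fr_ner4archives_v3_with_vectors")

doc = nlp(
    "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, "
    "pour meurtre commis à Saint-Haon 1, au pays de Roannais."
)
for ent in doc.ents:
    print(ent.text, ent.label_)
```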
|
ner4archives/fr_ner4archives_v3_default
|
ner4archives
| 2022-10-17T12:31:01Z | 29 | 0 |
spacy
|
[
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
] |
token-classification
| 2022-10-07T16:34:00Z |
---
widget:
- text: "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, pour meurtre commis à Saint-Haon 1, au pays de Roannais, sur la personne de Driet Cantin qui l'accusait d'avoir maltraité un de ses pages et de l'avoir dépouillé d'une jument (Fol 145 v°, n° 415) Septembre 1501."
example_title: "FRAN_IR_000061"
- text: "BB/29/988 page 143 Penne (Lot-et-Garronne) 14 décembre 1822. BB/29/988 page 145 Billom (Puy-de-Dôme) 11 janvier 1823."
example_title: "FRAN_IR_050370"
tags:
- spacy
- token-classification
language:
- fr
model-index:
- name: fr_ner4archives_v3_default
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8390532544
- name: NER Recall
type: recall
value: 0.8268221574
- name: NER F Score
type: f_score
value: 0.8328928047
---
| Feature | Description |
| --- | --- |
| **Name** | `fr_ner4archives_v3_default` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | French corpus for the NER task composed of finding aids in XML-EAD from the National Archives of France (v. 3.0) - [Check corpus version on GitHub](https://github.com/NER4Archives-project/Corpus_TrainingData) |
| **License** | CC-BY-4.0 license |
| **Author** | [Archives nationales]() / [Inria-Almanach]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `EVENT`, `LOCATION`, `ORGANISATION`, `PERSON`, `TITLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 83.29 |
| `ENTS_P` | 83.91 |
| `ENTS_R` | 82.68 |
| `TOK2VEC_LOSS` | 68553.28 |
| `NER_LOSS` | 18164.88 |
|
ner4archives/fr_ner4archives_V3_camembert_base
|
ner4archives
| 2022-10-17T12:26:27Z | 7 | 1 |
spacy
|
[
"spacy",
"token-classification",
"fr",
"model-index",
"region:us"
] |
token-classification
| 2022-10-14T16:03:05Z |
---
widget:
- text: "415 Lyon Lettres de rémission accordées à Denis Fromant, marinier, pour meurtre commis à Saint-Haon 1, au pays de Roannais, sur la personne de Driet Cantin qui l'accusait d'avoir maltraité un de ses pages et de l'avoir dépouillé d'une jument (Fol 145 v°, n° 415) Septembre 1501."
example_title: "FRAN_IR_000061"
- text: "BB/29/988 page 143 Penne (Lot-et-Garronne) 14 décembre 1822. BB/29/988 page 145 Billom (Puy-de-Dôme) 11 janvier 1823."
example_title: "FRAN_IR_050370"
tags:
- spacy
- token-classification
language:
- fr
model-index:
- name: fr_ner4archives_V3_camembert_base
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.916087963
- name: NER Recall
type: recall
value: 0.92303207
- name: NER F Score
type: f_score
value: 0.9195469068
---
| Feature | Description |
| --- | --- |
| **Name** | `fr_ner4archives_V3_camembert_base` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.1,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | French corpus for the NER task composed of finding aids in XML-EAD from the National Archives of France (v. 3.0) - [Check corpus version on GitHub](https://github.com/NER4Archives-project/Corpus_TrainingData) |
| **License** | CC-BY-4.0 license |
| **Author** | [Archives nationales]() / [Inria-Almanach]() |
### Label Scheme
<details>
<summary>View label scheme (5 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `EVENT`, `LOCATION`, `ORGANISATION`, `PERSON`, `TITLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 91.95 |
| `ENTS_P` | 91.61 |
| `ENTS_R` | 92.30 |
| `TRANSFORMER_LOSS` | 395487.28 |
| `NER_LOSS` | 11238.70 |
|