pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1-900k) | metadata (stringlengths 2-438k) | id (stringlengths 5-122) | last_modified (null) | tags (listlengths 1-1.84k) | sha (null) | created_at (stringlengths 25-25) | arxiv (listlengths 0-201) | languages (listlengths 0-1.83k) | tags_str (stringlengths 17-9.34k) | text_str (stringlengths 0-389k) | text_lists (listlengths 0-722) | processed_texts (listlengths 1-723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
token-classification
|
transformers
|
# Turkish Named Entity Recognition (NER) Model
This model is a fine-tuned version of "xlm-roberta-base"
(a multilingual version of RoBERTa),
trained on a reviewed version of the well-known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "xlm-roberta-base"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 2
weight_decay = 0.01
```
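For reference, here is a minimal sketch of how these parameters map onto the Hugging Face `Trainer` API. The dataset loading, tokenization (with `max_length = 512`), and `output_dir` name are assumptions or omissions, not details taken from this card:
```
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(label_list))
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

args = TrainingArguments(
    output_dir="xlm-roberta-base-turkish-ner",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=2,
    weight_decay=0.01,
)
# trainer = Trainer(model=model, args=args, train_dataset=..., tokenizer=tokenizer)
# trainer.train()
```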
# How to use:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model = AutoModelForTokenClassification.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
tokenizer = AutoTokenizer.from_pretrained("akdeniz27/xlm-roberta-base-turkish-ner")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
ner("<your text here>")
```
Please refer to https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html for entity grouping with the `aggregation_strategy` parameter.
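With `aggregation_strategy="simple"`, sub-word tokens are merged into whole entities. An illustrative output for the widget sentence (the scores are placeholders, not actual predictions):
```
# Illustrative output shape only; scores are placeholders.
[{'entity_group': 'PER', 'score': 0.99, 'word': 'Mustafa Kemal Atatürk', 'start': 0, 'end': 21},
 {'entity_group': 'LOC', 'score': 0.99, 'word': 'Samsun', 'start': 39, 'end': 45}]
```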
# Reference test results:
* accuracy: 0.9919343118732742
* f1: 0.9492100796448622
* precision: 0.9407349896480332
* recall: 0.9578392621870883
|
{"language": "tr", "widget": [{"text": "Mustafa Kemal Atat\u00fcrk 19 May\u0131s 1919'da Samsun'a \u00e7\u0131kt\u0131."}]}
|
akdeniz27/xlm-roberta-base-turkish-ner
| null |
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"tr",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #safetensors #xlm-roberta #token-classification #tr #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# Turkish Named Entity Recognition (NER) Model
This model is a fine-tuned version of "xlm-roberta-base"
(a multilingual version of RoBERTa),
trained on a reviewed version of the well-known Turkish NER dataset
(URL
# Fine-tuning parameters:
# How to use:
Please refer to "URL for entity grouping with the aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9919343118732742
* f1: 0.9492100796448622
* precision: 0.9407349896480332
* recall: 0.9578392621870883
|
[
"# Turkish Named Entity Recognition (NER) Model\nThis model is the fine-tuned version of \"xlm-roberta-base\"\n(a multilingual version of RoBERTa) \nusing a reviewed version of well known Turkish NER dataset \n(URL",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9919343118732742\n* f1: 0.9492100796448622\n* precision: 0.9407349896480332\n* recall: 0.9578392621870883"
] |
[
"TAGS\n#transformers #pytorch #safetensors #xlm-roberta #token-classification #tr #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# Turkish Named Entity Recognition (NER) Model\nThis model is the fine-tuned version of \"xlm-roberta-base\"\n(a multilingual version of RoBERTa) \nusing a reviewed version of well known Turkish NER dataset \n(URL",
"# Fine-tuning parameters:",
"# How to use: \n\nPls refer \"URL for entity grouping with aggregation_strategy parameter.",
"# Reference test results:\n* accuracy: 0.9919343118732742\n* f1: 0.9492100796448622\n* precision: 0.9407349896480332\n* recall: 0.9578392621870883"
] |
object-detection
| null |
<div align="left">
## You Only Look Once for Panoptic Driving Perception
> [**YOLOP: You Only Look Once for Panoptic Driving Perception**](https://arxiv.org/abs/2108.11250)
>
> by Dong Wu, Manwen Liao, Weitian Zhang, [Xinggang Wang](https://xinggangw.info/) [*School of EIC, HUST*](http://eic.hust.edu.cn/English/Home.htm)
>
> *arXiv technical report ([arXiv 2108.11250](https://arxiv.org/abs/2108.11250))*
---
### The Illustration of YOLOP

### Contributions
* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the `BDD100K` dataset.
* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.
### Results
#### Traffic Object Detection Result
| Model | Recall(%) | mAP50(%) | Speed(fps) |
| -------------- | --------- | -------- | ---------- |
| `Multinet` | 81.3 | 60.2 | 8.6 |
| `DLT-Net` | 89.4 | 68.4 | 9.3 |
| `Faster R-CNN` | 77.2 | 55.6 | 5.3 |
| `YOLOv5s` | 86.8 | 77.2 | 82 |
| `YOLOP(ours)` | 89.2 | 76.5 | 41 |
#### Drivable Area Segmentation Result
| Model | mIOU(%) | Speed(fps) |
| ------------- | ------- | ---------- |
| `Multinet` | 71.6 | 8.6 |
| `DLT-Net` | 71.3 | 9.3 |
| `PSPNet` | 89.6 | 11.1 |
| `YOLOP(ours)` | 91.5 | 41 |
#### Lane Detection Result:
| Model | mIOU(%) | IOU(%) |
| ------------- | ------- | ------ |
| `ENet` | 34.12 | 14.64 |
| `SCNN` | 35.79 | 15.84 |
| `ENet-SAD` | 36.56 | 16.02 |
| `YOLOP(ours)` | 70.50 | 26.20 |
#### Ablation Studies 1: End-to-end v.s. Step-by-step:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) |
| --------------- | --------- | ----- | ------- | ----------- | ------ |
| `ES-W` | 87.0 | 75.3 | 90.4 | 66.8 | 26.2 |
| `ED-W` | 87.3 | 76.0 | 91.6 | 71.2 | 26.1 |
| `ES-D-W` | 87.0 | 75.1 | 91.7 | 68.6 | 27.0 |
| `ED-S-W` | 87.5 | 76.1 | 91.6 | 68.0 | 26.8 |
| `End-to-end` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 |
#### Ablation Studies 2: Multi-task v.s. Single task:
| Training_method | Recall(%) | AP(%) | mIoU(%) | Accuracy(%) | IoU(%) | Speed(ms/frame) |
| --------------- | --------- | ----- | ------- | ----------- | ------ | --------------- |
| `Det(only)` | 88.2 | 76.9 | - | - | - | 15.7 |
| `Da-Seg(only)` | - | - | 92.0 | - | - | 14.8 |
| `Ll-Seg(only)` | - | - | - | 79.6 | 27.9 | 14.8 |
| `Multitask` | 89.2 | 76.5 | 91.5 | 70.5 | 26.2 | 24.4 |
**Notes**:
- The works we used for reference include `Multinet` ([paper](https://arxiv.org/pdf/1612.07695.pdf), [code](https://github.com/MarvinTeichmann/MultiNet)), `DLT-Net` ([paper](https://ieeexplore.ieee.org/abstract/document/8937825)), `Faster R-CNN` ([paper](https://proceedings.neurips.cc/paper/2015/file/14bfa6bb14875e45bba028a21ed38046-Paper.pdf), [code](https://github.com/ShaoqingRen/faster_rcnn)), `YOLOv5s` ([code](https://github.com/ultralytics/yolov5)), `PSPNet` ([paper](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf), [code](https://github.com/hszhao/PSPNet)), `ENet` ([paper](https://arxiv.org/pdf/1606.02147.pdf), [code](https://github.com/osmr/imgclsmob)), `SCNN` ([paper](https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/download/16802/16322), [code](https://github.com/XingangPan/SCNN)) and `SAD-ENet` ([paper](https://openaccess.thecvf.com/content_ICCV_2019/papers/Hou_Learning_Lightweight_Lane_Detection_CNNs_by_Self_Attention_Distillation_ICCV_2019_paper.pdf), [code](https://github.com/cardwing/Codes-for-Lane-Detection)). Thanks for their wonderful work.
- In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first train only the Encoder and Detect head; then freeze the Encoder and Detect head and train the two Segmentation heads; finally train the entire network jointly for all three tasks) can be denoted ED-S-W, and likewise for the others.
---
### Visualization
#### Traffic Object Detection Result

#### Drivable Area Segmentation Result

#### Lane Detection Result

**Notes**:
- The visualization of lane detection result has been post processed by quadratic fitting.
---
### Project Structure
```python
├─inference
│ ├─images # inference images
│ ├─output # inference results
├─lib
│ ├─config/default # configuration of training and validation
│ ├─core
│ │ ├─activations.py # activation functions
│ │ ├─evaluate.py # metric calculation
│ │ ├─function.py # training and validation of the model
│ │ ├─general.py # metric calculation, NMS, data-format conversion, visualization
│ │ ├─loss.py # loss function
│ │ ├─postprocess.py # postprocessing (refines da-seg and ll-seg, unrelated to paper)
│ ├─dataset
│ │ ├─AutoDriveDataset.py # superclass dataset, general functions
│ │ ├─bdd.py # subclass dataset, specific functions
│ │ ├─hust.py # subclass dataset (campus scene, unrelated to paper)
│ │ ├─convect.py
│ │ ├─DemoDataset.py # demo dataset (image, video and stream)
│ ├─models
│ │ ├─YOLOP.py # setup and configuration of the model
│ │ ├─light.py # model lightweighting (unrelated to paper, zwt)
│ │ ├─commom.py # calculation modules
│ ├─utils
│ │ ├─augmentations.py # data augmentation
│ │ ├─autoanchor.py # auto anchor (k-means)
│ │ ├─split_dataset.py # (campus scene, unrelated to paper)
│ │ ├─utils.py # logging, device selection, time measurement, optimizer selection, model save & initialization, distributed training
│ ├─run
│ │ ├─dataset/training time # visualization, logging and model saving
├─tools
│ │ ├─demo.py # demo (folder, camera)
│ │ ├─test.py
│ │ ├─train.py
├─toolkits
│ │ ├─deploy # model deployment
├─weights # pretrained models
```
---
### Requirement
This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+:
```
conda install pytorch==1.7.0 torchvision==0.8.0 cudatoolkit=10.2 -c pytorch
```
See `requirements.txt` for additional dependencies and version requirements.
```setup
pip install -r requirements.txt
```
### Data preparation
#### Download
- Download the images from [images](https://bdd-data.berkeley.edu/).
- Download the annotations of detection from [det_annotations](https://drive.google.com/file/d/1Ge-R8NTxG1eqd4zbryFo-1Uonuh0Nxyl/view?usp=sharing).
- Download the annotations of drivable area segmentation from [da_seg_annotations](https://drive.google.com/file/d/1xy_DhUZRHR8yrZG3OwTQAHhYTnXn7URv/view?usp=sharing).
- Download the annotations of lane line segmentation from [ll_seg_annotations](https://drive.google.com/file/d/1lDNTPIQj_YLNZVkksKM25CvCHuquJ8AP/view?usp=sharing).
We recommend the dataset directory structure to be the following:
```
# Matching ids indicate the correspondence between images and annotations
├─dataset root
│ ├─images
│ │ ├─train
│ │ ├─val
│ ├─det_annotations
│ │ ├─train
│ │ ├─val
│ ├─da_seg_annotations
│ │ ├─train
│ │ ├─val
│ ├─ll_seg_annotations
│ │ ├─train
│ │ ├─val
```
Update your dataset path in `./lib/config/default.py`.
### Training
You can set the training configuration in `./lib/config/default.py` (including loading of a preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, and batch_size).
If you want to try alternating optimization or train the model for a single task, set the corresponding configuration in `./lib/config/default.py` to `True`. (By default, all of the following configurations are `False`, which means the multiple tasks are trained end to end.)
```python
# Alternating optimization
_C.TRAIN.SEG_ONLY = False # Only train the two segmentation branches
_C.TRAIN.DET_ONLY = False # Only train the detection branch
_C.TRAIN.ENC_SEG_ONLY = False # Only train the encoder and the two segmentation branches
_C.TRAIN.ENC_DET_ONLY = False # Only train encoder and detection branch
# Single task
_C.TRAIN.DRIVABLE_ONLY = False # Only train da_segmentation task
_C.TRAIN.LANE_ONLY = False # Only train ll_segmentation task
_C.TRAIN.DET_ONLY = False # Only train detection task
```
Start training:
```shell
python tools/train.py
```
### Evaluation
You can set the evaluation configuration in `./lib/config/default.py` (including batch_size and the threshold values for NMS).
Start evaluating:
```shell
python tools/test.py --weights weights/End-to-end.pth
```
### Demo Test
We provide two testing methods.
#### Folder
Store images or videos under the path given by `--source`; the inference results are saved to `--save-dir`.
```shell
python tools/demo.py --source inference/images
```
#### Camera
If a camera is connected to your computer, set `--source` to the camera number (the default is 0).
```shell
python tools/demo.py --source 0
```
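If the upstream repository exposes a PyTorch Hub entry point (an assumption; this card does not state it), inference can also be sketched directly from Python:
```python
import torch

# Hypothetical hub usage: assumes the 'hustvl/yolop' repo provides a hubconf entry.
model = torch.hub.load('hustvl/yolop', 'yolop', pretrained=True)

img = torch.randn(1, 3, 640, 640)  # dummy input; real images need the repo's preprocessing
det_out, da_seg_out, ll_seg_out = model(img)  # detection, drivable-area and lane-line outputs
```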
### Deployment
Our model runs inference in real time on a `Jetson TX2`, using a `Zed Camera` to capture images. We use `TensorRT` for acceleration. Code for model deployment and inference is provided in `./toolkits/deploy`.
## Citation
If you find our paper and code useful for your research, please consider giving a star and citation:
```BibTeX
@misc{2108.11250,
Author = {Dong Wu and Manwen Liao and Weitian Zhang and Xinggang Wang},
Title = {YOLOP: You Only Look Once for Panoptic Driving Perception},
Year = {2021},
Eprint = {arXiv:2108.11250},
}
```
|
{"tags": ["object-detection"]}
|
akhaliq/YOLOP
| null |
[
"object-detection",
"arxiv:2108.11250",
"arxiv:1612.07695",
"arxiv:1606.02147",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2108.11250",
"1612.07695",
"1606.02147"
] |
[] |
TAGS
#object-detection #arxiv-2108.11250 #arxiv-1612.07695 #arxiv-1606.02147 #region-us
|
You Only Look Once for Panoptic Driving Perception
----------------------------------------------------
>
> YOLOP: You Only Look Once for Panoptic Driving Perception
>
>
> by Dong Wu, Manwen Liao, Weitian Zhang, Xinggang Wang *School of EIC, HUST*
>
>
> *arXiv technical report (arXiv 2108.11250)*
>
>
>
---
### The Illustration of YOLOP
!yolop
### Contributions
* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the 'BDD100K' dataset.
* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.
### Results
#### Traffic Object Detection Result
#### Drivable Area Segmentation Result
Model: 'Multinet', mIOU(%): 71.6, Speed(fps): 8.6
Model: 'DLT-Net', mIOU(%): 71.3, Speed(fps): 9.3
Model: 'PSPNet', mIOU(%): 89.6, Speed(fps): 11.1
Model: 'YOLOP(ours)', mIOU(%): 91.5, Speed(fps): 41
#### Lane Detection Result:
Model: 'ENet', mIOU(%): 34.12, IOU(%): 14.64
Model: 'SCNN', mIOU(%): 35.79, IOU(%): 15.84
Model: 'ENet-SAD', mIOU(%): 36.56, IOU(%): 16.02
Model: 'YOLOP(ours)', mIOU(%): 70.50, IOU(%): 26.20
#### Ablation Studies 1: End-to-end v.s. Step-by-step:
#### Ablation Studies 2: Multi-task v.s. Single task:
Notes:
* The works we used for reference include 'Multinet' (paper, code), 'DLT-Net' (paper), 'Faster R-CNN' (paper, code), 'YOLOv5s' (code), 'PSPNet' (paper, code), 'ENet' (paper, code), 'SCNN' (paper, code) and 'SAD-ENet' (paper, code). Thanks for their wonderful work.
* In Table 4, E, D, S and W refer to the Encoder, the Detect head, the two Segment heads and the whole network. So the algorithm (first train only the Encoder and Detect head; then freeze the Encoder and Detect head and train the two Segmentation heads; finally train the entire network jointly for all three tasks) can be denoted ED-S-W, and likewise for the others.
---
### Visualization
#### Traffic Object Detection Result
!detect result
#### Drivable Area Segmentation Result

#### Lane Detection Result

Notes:
* The visualization of lane detection result has been post processed by quadratic fitting.
---
### Project Structure
---
### Requirement
This codebase has been developed with Python 3.7, PyTorch 1.7+ and torchvision 0.8+:
See 'URL' for additional dependencies and version requirements.
### Data preparation
#### Download
* Download the images from images.
* Download the annotations of detection from det\_annotations.
* Download the annotations of drivable area segmentation from da\_seg\_annotations.
* Download the annotations of lane line segmentation from ll\_seg\_annotations.
We recommend the dataset directory structure to be the following:
Update your dataset path in './lib/config/URL'.
### Training
You can set the training configuration in './lib/config/URL' (including loading of a preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, and batch\_size).
If you want to try alternating optimization or train the model for a single task, set the corresponding configuration in './lib/config/URL' to 'True'. (By default, all of the following configurations are 'False', which means the multiple tasks are trained end to end.)
Start training:
### Evaluation
You can set the evaluation configuration in './lib/config/URL' (including batch\_size and the threshold values for NMS).
Start evaluating:
### Demo Test
We provide two testing methods.
#### Folder
Store images or videos under the path given by '--source'; the inference results are saved to '--save-dir'.
#### Camera
If a camera is connected to your computer, set 'source' to the camera number (the default is 0).
### Deployment
Our model runs inference in real time on a 'Jetson TX2', using a 'Zed Camera' to capture images. We use 'TensorRT' for acceleration. Code for model deployment and inference is provided in './toolkits/deploy'.
If you find our paper and code useful for your research, please consider giving a star and citation:
|
[
"### The Illustration of YOLOP\n\n\n!yolop",
"### Contributions\n\n\n* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the 'BDD100K 'dataset.\n* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.",
"### Results",
"#### Traffic Object Detection Result",
"#### Drivable Area Segmentation Result\n\n\nModel: 'Multinet', mIOU(%): 71.6, Speed(fps): 8.6\nModel: 'DLT-Net', mIOU(%): 71.3, Speed(fps): 9.3\nModel: 'PSPNet', mIOU(%): 89.6, Speed(fps): 11.1\nModel: 'YOLOP(ours)', mIOU(%): 91.5, Speed(fps): 41",
"#### Lane Detection Result:\n\n\nModel: 'ENet', mIOU(%): 34.12, IOU(%): 14.64\nModel: 'SCNN', mIOU(%): 35.79, IOU(%): 15.84\nModel: 'ENet-SAD', mIOU(%): 36.56, IOU(%): 16.02\nModel: 'YOLOP(ours)', mIOU(%): 70.50, IOU(%): 26.20",
"#### Ablation Studies 1: End-to-end v.s. Step-by-step:",
"#### Ablation Studies 2: Multi-task v.s. Single task:\n\n\n\nNotes:\n\n\n* The works we has use for reference including 'Multinet' (paper,code),'DLT-Net' (paper),'Faster R-CNN' (paper,code),'YOLOv5s'(code) ,'PSPNet'(paper,code) ,'ENet'(paper,code) 'SCNN'(paper,code) 'SAD-ENet'(paper,code). Thanks for their wonderful works.\n* In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others.\n\n\n\n\n---",
"### Visualization",
"#### Traffic Object Detection Result\n\n\n!detect result",
"#### Drivable Area Segmentation Result\n\n\n",
"#### Lane Detection Result\n\n\n\n\n\nNotes:\n\n\n* The visualization of lane detection result has been post processed by quadratic fitting.\n\n\n\n\n---",
"### Project Structure\n\n\n\n\n---",
"### Requirement\n\n\nThis codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:\n\n\nSee 'URL' for additional dependencies and version requirements.",
"### Data preparation",
"#### Download\n\n\n* Download the images from images.\n* Download the annotations of detection from det\\_annotations.\n* Download the annotations of drivable area segmentation from da\\_seg\\_annotations.\n* Download the annotations of lane line segmentation from ll\\_seg\\_annotations.\n\n\nWe recommend the dataset directory structure to be the following:\n\n\nUpdate the your dataset path in the './lib/config/URL'.",
"### Training\n\n\nYou can set the training configuration in the './lib/config/URL'. (Including: the loading of preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch\\_size).\n\n\nIf you want try alternating optimization or train model for single task, please modify the corresponding configuration in './lib/config/URL' to 'True'. (As following, all configurations is 'False', which means training multiple tasks end to end).\n\n\nStart training:",
"### Evaluation\n\n\nYou can set the evaluation configuration in the './lib/config/URL'. (Including: batch\\_size and threshold value for nms).\n\n\nStart evaluating:",
"### Demo Test\n\n\nWe provide two testing method.",
"#### Folder\n\n\nYou can store the image or video in '--source', and then save the reasoning result to '--save-dir'",
"#### Camera\n\n\nIf there are any camera connected to your computer, you can set the 'source' as the camera number(The default is 0).",
"### Deployment\n\n\nOur model can reason in real-time on 'Jetson Tx2', with 'Zed Camera' to capture image. We use 'TensorRT' tool for speeding up. We provide code for deployment and reasoning of model in './toolkits/deploy'.\n\n\nIf you find our paper and code useful for your research, please consider giving a star and citation:"
] |
[
"TAGS\n#object-detection #arxiv-2108.11250 #arxiv-1612.07695 #arxiv-1606.02147 #region-us \n",
"### The Illustration of YOLOP\n\n\n!yolop",
"### Contributions\n\n\n* We put forward an efficient multi-task network that can jointly handle three crucial tasks in autonomous driving: object detection, drivable area segmentation and lane detection to save computational costs, reduce inference time as well as improve the performance of each task. Our work is the first to reach real-time on embedded devices while maintaining state-of-the-art level performance on the 'BDD100K 'dataset.\n* We design the ablative experiments to verify the effectiveness of our multi-tasking scheme. It is proved that the three tasks can be learned jointly without tedious alternating optimization.",
"### Results",
"#### Traffic Object Detection Result",
"#### Drivable Area Segmentation Result\n\n\nModel: 'Multinet', mIOU(%): 71.6, Speed(fps): 8.6\nModel: 'DLT-Net', mIOU(%): 71.3, Speed(fps): 9.3\nModel: 'PSPNet', mIOU(%): 89.6, Speed(fps): 11.1\nModel: 'YOLOP(ours)', mIOU(%): 91.5, Speed(fps): 41",
"#### Lane Detection Result:\n\n\nModel: 'ENet', mIOU(%): 34.12, IOU(%): 14.64\nModel: 'SCNN', mIOU(%): 35.79, IOU(%): 15.84\nModel: 'ENet-SAD', mIOU(%): 36.56, IOU(%): 16.02\nModel: 'YOLOP(ours)', mIOU(%): 70.50, IOU(%): 26.20",
"#### Ablation Studies 1: End-to-end v.s. Step-by-step:",
"#### Ablation Studies 2: Multi-task v.s. Single task:\n\n\n\nNotes:\n\n\n* The works we has use for reference including 'Multinet' (paper,code),'DLT-Net' (paper),'Faster R-CNN' (paper,code),'YOLOv5s'(code) ,'PSPNet'(paper,code) ,'ENet'(paper,code) 'SCNN'(paper,code) 'SAD-ENet'(paper,code). Thanks for their wonderful works.\n* In table 4, E, D, S and W refer to Encoder, Detect head, two Segment heads and whole network. So the Algorithm (First, we only train Encoder and Detect head. Then we freeze the Encoder and Detect head as well as train two Segmentation heads. Finally, the entire network is trained jointly for all three tasks.) can be marked as ED-S-W, and the same for others.\n\n\n\n\n---",
"### Visualization",
"#### Traffic Object Detection Result\n\n\n!detect result",
"#### Drivable Area Segmentation Result\n\n\n",
"#### Lane Detection Result\n\n\n\n\n\nNotes:\n\n\n* The visualization of lane detection result has been post processed by quadratic fitting.\n\n\n\n\n---",
"### Project Structure\n\n\n\n\n---",
"### Requirement\n\n\nThis codebase has been developed with python version 3.7, PyTorch 1.7+ and torchvision 0.8+:\n\n\nSee 'URL' for additional dependencies and version requirements.",
"### Data preparation",
"#### Download\n\n\n* Download the images from images.\n* Download the annotations of detection from det\\_annotations.\n* Download the annotations of drivable area segmentation from da\\_seg\\_annotations.\n* Download the annotations of lane line segmentation from ll\\_seg\\_annotations.\n\n\nWe recommend the dataset directory structure to be the following:\n\n\nUpdate the your dataset path in the './lib/config/URL'.",
"### Training\n\n\nYou can set the training configuration in the './lib/config/URL'. (Including: the loading of preliminary model, loss, data augmentation, optimizer, warm-up and cosine annealing, auto-anchor, training epochs, batch\\_size).\n\n\nIf you want try alternating optimization or train model for single task, please modify the corresponding configuration in './lib/config/URL' to 'True'. (As following, all configurations is 'False', which means training multiple tasks end to end).\n\n\nStart training:",
"### Evaluation\n\n\nYou can set the evaluation configuration in the './lib/config/URL'. (Including: batch\\_size and threshold value for nms).\n\n\nStart evaluating:",
"### Demo Test\n\n\nWe provide two testing method.",
"#### Folder\n\n\nYou can store the image or video in '--source', and then save the reasoning result to '--save-dir'",
"#### Camera\n\n\nIf there are any camera connected to your computer, you can set the 'source' as the camera number(The default is 0).",
"### Deployment\n\n\nOur model can reason in real-time on 'Jetson Tx2', with 'Zed Camera' to capture image. We use 'TensorRT' tool for speeding up. We provide code for deployment and reasoning of model in './toolkits/deploy'.\n\n\nIf you find our paper and code useful for your research, please consider giving a star and citation:"
] |
text-generation
|
transformers
|
# GPT2-Small-Arabic-Poetry
## Model description
A fine-tuned version of gpt2-small-arabic, trained on an Arabic poetry dataset.
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
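For a quick start without the notebook, here is a minimal generation sketch; the prompt and sampling settings are illustrative assumptions, not taken from this card:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic-poetry")
# Seed with a short Arabic phrase; sampling settings are illustrative.
print(generator("يا ليل الصب", max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```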
#### Limitations and bias
Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance.
Use them as demonstrations or proof of concepts but not as production code.
## Training data
This model was fine-tuned on the [Arabic Poetry dataset](https://www.kaggle.com/ahmedabelal/arabic-poetry), which spans 9 different eras and contains around 40k poems in total.
Fine-tuning started from the [gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic) transformer model.
## Training procedure
Training was done with the [Simple Transformers](https://github.com/ThilinaRajapakse/simpletransformers) library on Kaggle, using a free GPU.
## Eval results
The final perplexity reached was 76.3, loss: 4.33.
### BibTeX entry and citation info
```bibtex
@misc{khooli2020gpt2smallarabicpoetry,
  author = {Abed Khooli},
  year   = {2020}
}
```
|
{"language": "ar", "tags": ["text-generation"], "datasets": ["Arabic poetry from several eras"]}
|
akhooli/gpt2-small-arabic-poetry
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #ar #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# GPT2-Small-Arabic-Poetry
## Model description
A fine-tuned version of gpt2-small-arabic, trained on an Arabic poetry dataset.
## Intended uses & limitations
#### How to use
An example is provided in this colab notebook.
#### Limitations and bias
Both the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance.
Use them as demonstrations or proof of concepts but not as production code.
## Training data
This model was fine-tuned on the Arabic Poetry dataset, which spans 9 different eras and contains around 40k poems in total.
Fine-tuning started from the gpt2-small-arabic transformer model.
## Training procedure
Training was done with the Simple Transformers library on Kaggle, using a free GPU.
## Eval results
The final perplexity reached was 76.3, loss: 4.33.
### BibTeX entry and citation info
|
[
"# GPT2-Small-Arabic-Poetry",
"## Model description\n\nFine-tuned model of Arabic poetry dataset based on gpt2-small-arabic.",
"## Intended uses & limitations",
"#### How to use\n\nAn example is provided in this colab notebook.",
"#### Limitations and bias\n\nBoth the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance. \nUse them as demonstrations or proof of concepts but not as production code.",
"## Training data\n\nThis pretrained model used the Arabic Poetry dataset from 9 different eras with a total of around 40k poems. \nThe dataset was trained (fine-tuned) based on the gpt2-small-arabic transformer model.",
"## Training procedure\n\nTraining was done using Simple Transformers library on Kaggle, using free GPU.",
"## Eval results \nFinal perplexity reached ws 76.3, loss: 4.33",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #ar #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# GPT2-Small-Arabic-Poetry",
"## Model description\n\nFine-tuned model of Arabic poetry dataset based on gpt2-small-arabic.",
"## Intended uses & limitations",
"#### How to use\n\nAn example is provided in this colab notebook.",
"#### Limitations and bias\n\nBoth the GPT2-small-arabic (trained on Arabic Wikipedia) and this model have several limitations in terms of coverage and training performance. \nUse them as demonstrations or proof of concepts but not as production code.",
"## Training data\n\nThis pretrained model used the Arabic Poetry dataset from 9 different eras with a total of around 40k poems. \nThe dataset was trained (fine-tuned) based on the gpt2-small-arabic transformer model.",
"## Training procedure\n\nTraining was done using Simple Transformers library on Kaggle, using free GPU.",
"## Eval results \nFinal perplexity reached ws 76.3, loss: 4.33",
"### BibTeX entry and citation info"
] |
text-generation
|
transformers
|
# GPT2-Small-Arabic
## Model description
A GPT2 model trained on the Arabic Wikipedia dataset, based on gpt2-small (using Fastai2).
## Intended uses & limitations
#### How to use
An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing).
Both text and poetry (fine-tuned model) generation are included.
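As with the poetry model, here is a minimal generation sketch; the prompt and sampling settings are illustrative assumptions, not taken from the notebook:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="akhooli/gpt2-small-arabic")
# Seed with a short Arabic phrase ("Artificial intelligence is"); settings are illustrative.
print(generator("الذكاء الاصطناعي هو", max_length=40, do_sample=True)[0]["generated_text"])
```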
#### Limitations and bias
GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance.
Use it as a demonstration or proof of concept, not as production code.
## Training data
This pretrained model used the Arabic Wikipedia dump (around 900 MB).
## Training procedure
Training was done with the [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using a free GPU.
## Eval results
Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307
### BibTeX entry and citation info
```bibtex
@misc{khooli2020gpt2smallarabic,
  author = {Abed Khooli},
  year   = {2020}
}
```
|
{"language": "ar", "datasets": ["Arabic Wikipedia"], "metrics": ["none"]}
|
akhooli/gpt2-small-arabic
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"gpt2",
"text-generation",
"ar",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #jax #safetensors #gpt2 #text-generation #ar #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# GPT2-Small-Arabic
## Model description
A GPT2 model trained on the Arabic Wikipedia dataset, based on gpt2-small (using Fastai2).
## Intended uses & limitations
#### How to use
An example is provided in this colab notebook.
Both text and poetry (fine-tuned model) generation are included.
#### Limitations and bias
GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipedia quality, no diacritics) and training performance.
Use it as a demonstration or proof of concept, not as production code.
## Training data
This pretrained model used the Arabic Wikipedia dump (around 900 MB).
## Training procedure
Training was done with the Fastai2 library on Kaggle, using a free GPU.
## Eval results
Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307
### BibTeX entry and citation info
|
[
"# GPT2-Small-Arabic",
"## Model description\n\nGPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2).",
"## Intended uses & limitations",
"#### How to use\n\nAn example is provided in this colab notebook. \nBoth text and poetry (fine-tuned model) generation are included.",
"#### Limitations and bias\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. \nUse as demonstration or proof of concepts but not as production code.",
"## Training data\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).",
"## Training procedure\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.",
"## Eval results \nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #gpt2 #text-generation #ar #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# GPT2-Small-Arabic",
"## Model description\n\nGPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2).",
"## Intended uses & limitations",
"#### How to use\n\nAn example is provided in this colab notebook. \nBoth text and poetry (fine-tuned model) generation are included.",
"#### Limitations and bias\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. \nUse as demonstration or proof of concepts but not as production code.",
"## Training data\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).",
"## Training procedure\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.",
"## Eval results \nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307",
"### BibTeX entry and citation info"
] |
translation
|
transformers
|
### mbart-large-ar-en
This is mbart-large-cc25, finetuned on a subset of the OPUS corpus for ar_en.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the model was trained on a limited dataset and is not fully trained (do not use in production).
Other models by me: [Abed Khooli](https://huggingface.co/akhooli)
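For reference, a minimal translation sketch; the mbart-large-cc25 language codes ("ar_AR", "en_XX") and the example sentence are assumptions, not taken from this card:
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("akhooli/mbart-large-cc25-ar-en", src_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained("akhooli/mbart-large-cc25-ar-en")

inputs = tokenizer("مرحبا بالعالم", return_tensors="pt")  # "Hello world"
generated = model.generate(**inputs,
                           decoder_start_token_id=tokenizer.lang_code_to_id["en_XX"],
                           max_length=40)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```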
|
{"language": ["ar", "en"], "license": "mit", "tags": ["translation"]}
|
akhooli/mbart-large-cc25-ar-en
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"ar",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar",
"en"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #translation #ar #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### mbart-large-ar-en
This is mbart-large-cc25, finetuned on a subset of the OPUS corpus for ar_en.
Usage: see example notebook
Note: the model was trained on a limited dataset and is not fully trained (do not use in production).
Other models by me: Abed Khooli
|
[
"### mbart-large-ar-en\nThis is mbart-large-cc25, finetuned on a subset of the OPUS corpus for ar_en. \nUsage: see example notebook \nNote: model has limited training set, not fully trained (do not use for production). \nOther models by me: Abed Khooli"
] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #translation #ar #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### mbart-large-ar-en\nThis is mbart-large-cc25, finetuned on a subset of the OPUS corpus for ar_en. \nUsage: see example notebook \nNote: model has limited training set, not fully trained (do not use for production). \nOther models by me: Abed Khooli"
] |
translation
|
transformers
|
### mbart-large-en-ar
This is mbart-large-cc25, finetuned on a subset of the UN corpus for en_ar.
Usage: see [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the model was trained on a limited dataset and is not fully trained (do not use in production).
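A minimal sketch mirroring the ar-en sibling card, with the language codes reversed (again an assumption, not stated in this card):
```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("akhooli/mbart-large-cc25-en-ar", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("akhooli/mbart-large-cc25-en-ar")

inputs = tokenizer("Hello world", return_tensors="pt")
generated = model.generate(**inputs,
                           decoder_start_token_id=tokenizer.lang_code_to_id["ar_AR"],
                           max_length=40)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```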
|
{"language": ["en", "ar"], "license": "mit", "tags": ["translation"]}
|
akhooli/mbart-large-cc25-en-ar
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"translation",
"en",
"ar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en",
"ar"
] |
TAGS
#transformers #pytorch #mbart #text2text-generation #translation #en #ar #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### mbart-large-en-ar
This is mbart-large-cc25, finetuned on a subset of the UN corpus for en_ar.
Usage: see example notebook
Note: the model was trained on a limited dataset and is not fully trained (do not use in production).
|
[
"### mbart-large-en-ar\nThis is mbart-large-cc25, finetuned on a subset of the UN corpus for en_ar. \nUsage: see example notebook \nNote: model has limited training set, not fully trained (do not use for production)."
] |
[
"TAGS\n#transformers #pytorch #mbart #text2text-generation #translation #en #ar #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### mbart-large-en-ar\nThis is mbart-large-cc25, finetuned on a subset of the UN corpus for en_ar. \nUsage: see example notebook \nNote: model has limited training set, not fully trained (do not use for production)."
] |
text-generation
|
transformers
|
## personachat-arabic (conversational AI)
This is personachat-arabic, built on a subset of the persona-chat validation dataset, machine-translated from English to Arabic,
and fine-tuned from [akhooli/gpt2-small-arabic](https://huggingface.co/akhooli/gpt2-small-arabic), which is a limited text-generation model.
Usage: see the last section of this [example notebook](https://colab.research.google.com/drive/1I6RFOWMaTpPBX7saJYjnSTddW0TD6H1t?usp=sharing)
Note: the training set is limited and machine-translated (do not use in production).
|
{"language": ["ar"], "license": "mit", "tags": ["conversational"]}
|
akhooli/personachat-arabic
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"conversational",
"ar",
"license:mit",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #conversational #ar #license-mit #endpoints_compatible #has_space #text-generation-inference #region-us
|
## personachat-arabic (conversational AI)
This is personachat-arabic, built on a subset of the persona-chat validation dataset, machine-translated from English to Arabic,
and fine-tuned from akhooli/gpt2-small-arabic, which is a limited text-generation model.
Usage: see the last section of this example notebook
Note: the training set is limited and machine-translated (do not use in production).
|
[
"## personachat-arabic (conversational AI)\nThis is personachat-arabic, using a subset from the persona-chat validation dataset, machine translated to Arabic (from English) \nand fine-tuned from akhooli/gpt2-small-arabic which is a limited text generation model. \nUsage: see the last section of this example notebook \nNote: model has limited training set which was machine translated (do not use for production)."
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #conversational #ar #license-mit #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"## personachat-arabic (conversational AI)\nThis is personachat-arabic, using a subset from the persona-chat validation dataset, machine translated to Arabic (from English) \nand fine-tuned from akhooli/gpt2-small-arabic which is a limited text generation model. \nUsage: see the last section of this example notebook \nNote: model has limited training set which was machine translated (do not use for production)."
] |
text-classification
|
transformers
|
### xlm-r-large-arabic-sent
Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews by fine-tuning XLM-Roberta-Large.
It also supports zero-shot classification of other languages (and works on mixed-language text, e.g. Arabic & English). The mixed category is not accurate and may be confused with other
classes (it was derived from reviews rated 3 out of 5).
Usage: see last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
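A minimal usage sketch with the `transformers` pipeline; the example review is illustrative, not from the card:
```python
from transformers import pipeline

# Label_0: mixed, Label_1: negative, Label_2: positive (per the card).
classifier = pipeline("text-classification", model="akhooli/xlm-r-large-arabic-sent")
print(classifier("الخدمة كانت ممتازة"))  # "The service was excellent" -> expected Label_2
```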
|
{"language": ["ar", "en", "multilingual"], "license": "mit"}
|
akhooli/xlm-r-large-arabic-sent
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ar",
"en",
"multilingual",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar",
"en",
"multilingual"
] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #ar #en #multilingual #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### xlm-r-large-arabic-sent
Multilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews by fine-tuning XLM-Roberta-Large.
It also supports zero-shot classification of other languages (and works on mixed-language text, e.g. Arabic & English). The mixed category is not accurate and may be confused with other
classes (it was derived from reviews rated 3 out of 5).
Usage: see last section in this Colab notebook
|
[
"### xlm-r-large-arabic-sent \nMultilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews by fine-tuning XLM-Roberta-Large. \nZero shot classification of other languages (also works in mixed languages - ex. Arabic & English). Mixed category is not accurate and may confuse other \nclasses (was based on a rate of 3 out of 5 in reviews). \nUsage: see last section in this Colab notebook"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #ar #en #multilingual #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### xlm-r-large-arabic-sent \nMultilingual sentiment classification (Label_0: mixed, Label_1: negative, Label_2: positive) of Arabic reviews by fine-tuning XLM-Roberta-Large. \nZero shot classification of other languages (also works in mixed languages - ex. Arabic & English). Mixed category is not accurate and may confuse other \nclasses (was based on a rate of 3 out of 5 in reviews). \nUsage: see last section in this Colab notebook"
] |
text-classification
|
transformers
|
### xlm-r-large-arabic-toxic (toxic/hate speech classifier)
Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-Roberta-Large.
It also supports zero-shot classification of other languages (and works on mixed-language text, e.g. Arabic & English).
Usage and further info: see last section in this [Colab notebook](https://lnkd.in/d3bCFyZ)
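A minimal usage sketch with the `transformers` pipeline; the example comment is illustrative, not from the card:
```python
from transformers import pipeline

# Label_0: non-toxic, Label_1: toxic (per the card).
classifier = pipeline("text-classification", model="akhooli/xlm-r-large-arabic-toxic")
print(classifier("This is a perfectly friendly comment."))  # expected Label_0
```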
|
{"language": ["ar", "en"], "license": "mit"}
|
akhooli/xlm-r-large-arabic-toxic
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"ar",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ar",
"en"
] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #ar #en #license-mit #autotrain_compatible #endpoints_compatible #region-us
|
### xlm-r-large-arabic-toxic (toxic/hate speech classifier)
Toxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-Roberta-Large.
It also supports zero-shot classification of other languages (and works on mixed-language text, e.g. Arabic & English).
Usage and further info: see last section in this Colab notebook
|
[
"### xlm-r-large-arabic-toxic (toxic/hate speech classifier) \nToxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-Roberta-Large. \nZero shot classification of other languages (also works in mixed languages - ex. Arabic & English). \nUsage and further info: see last section in this Colab notebook"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #ar #en #license-mit #autotrain_compatible #endpoints_compatible #region-us \n",
"### xlm-r-large-arabic-toxic (toxic/hate speech classifier) \nToxic (hate speech) classification (Label_0: non-toxic, Label_1: toxic) of Arabic comments by fine-tuning XLM-Roberta-Large. \nZero shot classification of other languages (also works in mixed languages - ex. Arabic & English). \nUsage and further info: see last section in this Colab notebook"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529614927
- CO2 Emissions (in grams): 5.999771405025692
## Validation Metrics
- Loss: 0.7582379579544067
- Accuracy: 0.7636103151862464
- Macro F1: 0.770630619486531
- Micro F1: 0.7636103151862464
- Weighted F1: 0.765233270165301
- Macro Precision: 0.7746285216467107
- Micro Precision: 0.7636103151862464
- Weighted Precision: 0.7683270753840836
- Macro Recall: 0.7680576576961138
- Micro Recall: 0.7636103151862464
- Weighted Recall: 0.7636103151862464
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/akilesh96/autonlp-mrcooper_text_classification-529614927
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("akilesh96/autonlp-mrcooper_text_classification-529614927", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
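The `outputs` object holds raw logits; the follow-up sketch below (not part of the generated card) shows how to turn them into a predicted label:
```
import torch

# Convert logits to class probabilities and look up the label name.
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs.max()))
```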
|
{"language": "en", "tags": "autonlp", "datasets": ["akilesh96/autonlp-data-mrcooper_text_classification"], "widget": [{"text": "Not Many People Know About The City 1200 Feet Below Detroit"}, {"text": "Bob accepts the challenge, and the next week they're standing in Saint Peters square. 'This isnt gonna work, he's never going to see me here when theres this much people. You stay here, I'll go talk to him and you'll see me on the balcony, the guards know me too.' Half an hour later, Bob and the pope appear side by side on the balcony. Bobs boss gets a heart attack, and Bob goes to visit him in the hospital."}, {"text": "I\u2019m sorry if you made it this far, but I\u2019m just genuinely idk, I feel like I shouldn\u2019t give up, it\u2019s just getting harder to come back from stuff like this."}], "co2_eq_emissions": 5.999771405025692}
|
akilesh96/autonlp-mrcooper_text_classification-529614927
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"en",
"dataset:akilesh96/autonlp-data-mrcooper_text_classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #en #dataset-akilesh96/autonlp-data-mrcooper_text_classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 529614927
- CO2 Emissions (in grams): 5.999771405025692
## Validation Metrics
- Loss: 0.7582379579544067
- Accuracy: 0.7636103151862464
- Macro F1: 0.770630619486531
- Micro F1: 0.7636103151862464
- Weighted F1: 0.765233270165301
- Macro Precision: 0.7746285216467107
- Micro Precision: 0.7636103151862464
- Weighted Precision: 0.7683270753840836
- Macro Recall: 0.7680576576961138
- Micro Recall: 0.7636103151862464
- Weighted Recall: 0.7636103151862464
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 529614927\n- CO2 Emissions (in grams): 5.999771405025692",
"## Validation Metrics\n\n- Loss: 0.7582379579544067\n- Accuracy: 0.7636103151862464\n- Macro F1: 0.770630619486531\n- Micro F1: 0.7636103151862464\n- Weighted F1: 0.765233270165301\n- Macro Precision: 0.7746285216467107\n- Micro Precision: 0.7636103151862464\n- Weighted Precision: 0.7683270753840836\n- Macro Recall: 0.7680576576961138\n- Micro Recall: 0.7636103151862464\n- Weighted Recall: 0.7636103151862464",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #en #dataset-akilesh96/autonlp-data-mrcooper_text_classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 529614927\n- CO2 Emissions (in grams): 5.999771405025692",
"## Validation Metrics\n\n- Loss: 0.7582379579544067\n- Accuracy: 0.7636103151862464\n- Macro F1: 0.770630619486531\n- Micro F1: 0.7636103151862464\n- Weighted F1: 0.765233270165301\n- Macro Precision: 0.7746285216467107\n- Micro Precision: 0.7636103151862464\n- Weighted Precision: 0.7683270753840836\n- Macro Recall: 0.7680576576961138\n- Micro Recall: 0.7636103151862464\n- Weighted Recall: 0.7636103151862464",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-generation
|
transformers
|
hello
|
{}
|
akozlo/con_bal60k
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
hello
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# conserv_fulltext_model
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
unbalanced_texts gpt2
|
{"license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "conserv_fulltext_model", "results": []}]}
|
akozlo/conserv_fulltext_1_18_22
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# conserv_fulltext_model
This model is a fine-tuned version of gpt2 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
unbalanced_texts gpt2
|
[
"# conserv_fulltext_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3\nunbalanced_texts gpt2"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #generated_from_trainer #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# conserv_fulltext_model\n\nThis model is a fine-tuned version of gpt2 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.1+cu102\n- Datasets 1.17.0\n- Tokenizers 0.10.3\nunbalanced_texts gpt2"
] |
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-bert
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-bert
| null |
[
"transformers",
"pytorch",
"tf",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #bert #endpoints_compatible #region-us
|
This is a copy of: URL
Changes: use old format for 'pytorch_model.bin'.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #bert #endpoints_compatible #region-us \n"
] |
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-gpt2
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-gpt2
| null |
[
"transformers",
"pytorch",
"tf",
"gpt2",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #gpt2 #endpoints_compatible #text-generation-inference #region-us
|
This is a copy of: URL
Changes: use old format for 'pytorch_model.bin'.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #gpt2 #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mbart
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-mbart
| null |
[
"transformers",
"pytorch",
"tf",
"mbart",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #mbart #endpoints_compatible #region-us
|
This is a copy of: URL
Changes: use old format for 'pytorch_model.bin'.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #mbart #endpoints_compatible #region-us \n"
] |
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-mpnet
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-mpnet
| null |
[
"transformers",
"pytorch",
"tf",
"mpnet",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #mpnet #endpoints_compatible #region-us
|
This is a copy of: URL
Changes: use old format for 'pytorch_model.bin'.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #mpnet #endpoints_compatible #region-us \n"
] |
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-t5
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-t5
| null |
[
"transformers",
"pytorch",
"tf",
"t5",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #t5 #endpoints_compatible #text-generation-inference #region-us
|
This is a copy of: URL
Changes: use old format for 'pytorch_model.bin'.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #t5 #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
transformers
|
This is a copy of: https://huggingface.co/hf-internal-testing/tiny-random-xlnet
Changes: use old format for `pytorch_model.bin`.
|
{}
|
akreal/tiny-random-xlnet
| null |
[
"transformers",
"pytorch",
"tf",
"xlnet",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tf #xlnet #endpoints_compatible #region-us
|
This is a copy of: URL
Changes: use old format for 'pytorch_model.bin'.
|
[] |
[
"TAGS\n#transformers #pytorch #tf #xlnet #endpoints_compatible #region-us \n"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0475
- Matthews Correlation: 0.6290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 16 | 1.3863 | 0.0 |
| No log | 2.0 | 32 | 1.2695 | 0.4503 |
| No log | 3.0 | 48 | 1.1563 | 0.6110 |
| No log | 4.0 | 64 | 1.0757 | 0.6290 |
| No log | 5.0 | 80 | 1.0475 | 0.6290 |
### Framework versions
- Transformers 4.9.2
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "model_index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"name": "Text Classification", "type": "text-classification"}, "metric": {"name": "Matthews Correlation", "type": "matthews_correlation", "value": 0.6290322580645161}}]}]}
|
akshara23/distilbert-base-uncased-finetuned-cola
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cola
======================================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0475
* Matthews Correlation: 0.6290
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.9.2
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.2\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
- Precision: 0.8975
- Recall: 0.9080
- F1: 0.9027
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.1326 | 0.7990 | 0.8043 | 0.8017 | 0.9338 |
| No log | 2.0 | 332 | 0.0925 | 0.8770 | 0.8946 | 0.8858 | 0.9618 |
| No log | 3.0 | 498 | 0.0812 | 0.8975 | 0.9080 | 0.9027 | 0.9703 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cloud-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cloud-ner
===========================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0812
* Precision: 0.8975
* Recall: 0.9080
* F1: 0.9027
* Accuracy: 0.9703
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud1-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0074
- Precision: 0.9714
- Recall: 0.9855
- F1: 0.9784
- Accuracy: 0.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.0160 | 0.9653 | 0.9420 | 0.9535 | 0.9945 |
| No log | 2.0 | 332 | 0.0089 | 0.9623 | 0.9855 | 0.9737 | 0.9965 |
| No log | 3.0 | 498 | 0.0074 | 0.9714 | 0.9855 | 0.9784 | 0.9972 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cloud1-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud1-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cloud1-ner
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0074
* Precision: 0.9714
* Recall: 0.9855
* F1: 0.9784
* Accuracy: 0.9972
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud2-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 |
| No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 |
| No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cloud2-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-cloud2-ner
============================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.8866
* Precision: 0.0
* Recall: 0.0
* F1: 0.0
* Accuracy: 0.8453
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-hypertuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5683
- Precision: 0.3398
- Recall: 0.6481
- F1: 0.4459
- Accuracy: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.3566 | 0.2913 | 0.5556 | 0.3822 | 0.8585 |
| No log | 2.0 | 168 | 0.4698 | 0.3366 | 0.6296 | 0.4387 | 0.8730 |
| No log | 3.0 | 252 | 0.5683 | 0.3398 | 0.6481 | 0.4459 | 0.8762 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-hypertuned-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-hypertuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-hypertuned-ner
================================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5683
* Precision: 0.3398
* Recall: 0.6481
* F1: 0.4459
* Accuracy: 0.8762
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 3e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9988
- Precision: 0.3
- Recall: 0.6
- F1: 0.4
- Accuracy: 0.7870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 84 | 0.8399 | 0.2105 | 0.4 | 0.2759 | 0.75 |
| No log | 2.0 | 168 | 0.9664 | 0.3 | 0.6 | 0.4 | 0.7870 |
| No log | 3.0 | 252 | 0.9988 | 0.3 | 0.6 | 0.4 | 0.7870 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": []}]}
|
akshaychaudhary/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9988
* Precision: 0.3
* Recall: 0.6
* F1: 0.4
* Accuracy: 0.7870
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.2
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.2\n* Tokenizers 0.11.0"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9250
- Recall: 0.9321
- F1: 0.9285
- Accuracy: 0.9834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2399 | 1.0 | 878 | 0.0702 | 0.9118 | 0.9208 | 0.9163 | 0.9805 |
| 0.0503 | 2.0 | 1756 | 0.0614 | 0.9176 | 0.9311 | 0.9243 | 0.9824 |
| 0.0304 | 3.0 | 2634 | 0.0611 | 0.9250 | 0.9321 | 0.9285 | 0.9834 |
### Framework versions
- Transformers 4.9.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9833669595056158}}]}]}
|
al00014/distilbert-base-uncased-finetuned-ner
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-ner
=====================================
This model is a fine-tuned version of distilbert-base-uncased on the conll2003 dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0611
* Precision: 0.9250
* Recall: 0.9321
* F1: 0.9285
* Accuracy: 0.9834
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.9.1
* Pytorch 1.9.0+cu102
* Datasets 1.11.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #dataset-conll2003 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.9.1\n* Pytorch 1.9.0+cu102\n* Datasets 1.11.0\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
# BART Pretrained
This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
The model was trained with the BART Pretrain stage of the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository.
It was trained on the [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) dataset.
|
{"language": ["ko"], "widget": [{"text": "[BOS]\ubb50 \ud574?[SEP][MASK]\ud558\ub2e4\uac00 \uc774\uc81c [MASK]\ub824\uace0[EOS]"}], "inference": {"parameters": {"max_length": 64}}}
|
alaggung/bart-pretrained
| null |
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us
|
# BART Pretrained
This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
The model was trained with the BART Pretrain stage of the 2021-dialogue-summary-competition repository.
It was trained on the AIHub Korean dialogue summarization dataset.
|
[
"# BART Pretrained\n\n[2021 훈민정음 한국어 음성•자연어 인공지능 경진대회] 대화요약 부문 알라꿍달라꿍 팀의 대화요약 학습 샘플 모델을 공유합니다.\n\n2021-dialogue-summary-competition 레포지토리의 BART Pretrain 단계를 학습한 모델입니다.\n\n데이터는 AIHub 한국어 대화요약 데이터를 사용하였습니다."
] |
[
"TAGS\n#transformers #pytorch #tf #bart #text2text-generation #ko #autotrain_compatible #endpoints_compatible #region-us \n",
"# BART Pretrained\n\n[2021 훈민정음 한국어 음성•자연어 인공지능 경진대회] 대화요약 부문 알라꿍달라꿍 팀의 대화요약 학습 샘플 모델을 공유합니다.\n\n2021-dialogue-summary-competition 레포지토리의 BART Pretrain 단계를 학습한 모델입니다.\n\n데이터는 AIHub 한국어 대화요약 데이터를 사용하였습니다."
] |
summarization
|
transformers
|
# BART R3F
This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
The model was trained on the dialogue summarization task by applying R3F from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-pretrained](https://huggingface.co/alaggung/bart-pretrained) model.
It was trained on the [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) dataset.
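A minimal inference sketch is shown below. It assumes the turn-separated input format used by the widget above (dialogue turns joined with `[SEP]` and wrapped in `[BOS]`/`[EOS]`); the example dialogue and generation settings are illustrative only:
```python
from transformers import AutoTokenizer, BartForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("alaggung/bart-r3f")
model = BartForConditionalGeneration.from_pretrained("alaggung/bart-r3f")

# join dialogue turns with [SEP] and wrap the whole dialogue in [BOS]/[EOS]
turns = ["밥 ㄱ?", "고고고고 뭐 먹을까?", "어제 김치찌개 먹어서 한식말고 딴 거"]
text = "[BOS]" + "[SEP]".join(turns) + "[EOS]"

inputs = tokenizer(text, return_tensors="pt")
summary_ids = model.generate(inputs["input_ids"], max_length=64, num_beams=5)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```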
|
{"language": ["ko"], "tags": ["summarization"], "widget": [{"text": "[BOS]\ubc25 \u3131?[SEP]\uace0\uace0\uace0\uace0 \ubb50 \uba39\uc744\uae4c?[SEP]\uc5b4\uc81c \uae40\uce58\ucc0c\uac1c \uba39\uc5b4\uc11c \ud55c\uc2dd\ub9d0\uace0 \ub534 \uac70[SEP]\uadf8\ub7fc \ub3c8\uae4c\uc2a4 \uc5b4\ub54c?[SEP]\uc624 \uc88b\ub2e4 1\uc2dc \ud559\uad00 \uc55e\uc73c\ub85c \uc624\uc148[SEP]\u3147\u314b[EOS]"}], "inference": {"parameters": {"max_length": 64, "top_k": 5}}}
|
alaggung/bart-r3f
| null |
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"summarization",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #bart #text2text-generation #summarization #ko #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# BART R3F
This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
The model was trained on the dialogue summarization task by applying R3F from the 2021-dialogue-summary-competition repository to the bart-pretrained model.
It was trained on the AIHub Korean dialogue summarization dataset.
|
[
"# BART R3F\n\n[2021 훈민정음 한국어 음성•자연어 인공지능 경진대회] 대화요약 부문 알라꿍달라꿍 팀의 대화요약 학습 샘플 모델을 공유합니다.\n\nbart-pretrained 모델에 2021-dialogue-summary-competition 레포지토리의 R3F를 적용해 대화요약 Task를 학습한 모델입니다.\n\n데이터는 AIHub 한국어 대화요약 데이터를 사용하였습니다."
] |
[
"TAGS\n#transformers #pytorch #tf #bart #text2text-generation #summarization #ko #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# BART R3F\n\n[2021 훈민정음 한국어 음성•자연어 인공지능 경진대회] 대화요약 부문 알라꿍달라꿍 팀의 대화요약 학습 샘플 모델을 공유합니다.\n\nbart-pretrained 모델에 2021-dialogue-summary-competition 레포지토리의 R3F를 적용해 대화요약 Task를 학습한 모델입니다.\n\n데이터는 AIHub 한국어 대화요약 데이터를 사용하였습니다."
] |
summarization
|
transformers
|
# BART RL
This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
The model was trained on the dialogue summarization task by applying the RL technique from the [2021-dialogue-summary-competition](https://github.com/cosmoquester/2021-dialogue-summary-competition) repository to the [bart-r3f](https://huggingface.co/alaggung/bart-r3f) model.
It was trained on the [AIHub Korean dialogue summarization](https://aihub.or.kr/aidata/30714) dataset.
|
{"language": ["ko"], "tags": ["summarization"], "widget": [{"text": "[BOS]\ubc25 \u3131?[SEP]\uace0\uace0\uace0\uace0 \ubb50 \uba39\uc744\uae4c?[SEP]\uc5b4\uc81c \uae40\uce58\ucc0c\uac1c \uba39\uc5b4\uc11c \ud55c\uc2dd\ub9d0\uace0 \ub534 \uac70[SEP]\uadf8\ub7fc \ub3c8\uae4c\uc2a4 \uc5b4\ub54c?[SEP]\uc624 \uc88b\ub2e4 1\uc2dc \ud559\uad00 \uc55e\uc73c\ub85c \uc624\uc148[SEP]\u3147\u314b[EOS]"}], "inference": {"parameters": {"max_length": 64, "top_k": 5}}}
|
alaggung/bart-rl
| null |
[
"transformers",
"pytorch",
"tf",
"bart",
"text2text-generation",
"summarization",
"ko",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ko"
] |
TAGS
#transformers #pytorch #tf #bart #text2text-generation #summarization #ko #autotrain_compatible #endpoints_compatible #region-us
|
# BART RL
This is a sample dialogue-summarization model from team 알라꿍달라꿍, shared for the dialogue summarization track of the [2021 Hunminjeongeum Korean Speech & Natural Language AI Competition].
The model was trained on the dialogue summarization task by applying the RL technique from the 2021-dialogue-summary-competition repository to the bart-r3f model.
It was trained on the AIHub Korean dialogue summarization dataset.
|
[
"# BART R3F\n\n[2021 훈민정음 한국어 음성•자연어 인공지능 경진대회] 대화요약 부문 알라꿍달라꿍 팀의 대화요약 학습 샘플 모델을 공유합니다.\n\nbart-r3f 모델에 2021-dialogue-summary-competition 레포지토리의 RL 기법을 적용해 대화요약 Task를 학습한 모델입니다.\n\n데이터는 AIHub 한국어 대화요약 데이터를 사용하였습니다."
] |
[
"TAGS\n#transformers #pytorch #tf #bart #text2text-generation #summarization #ko #autotrain_compatible #endpoints_compatible #region-us \n",
"# BART R3F\n\n[2021 훈민정음 한국어 음성•자연어 인공지능 경진대회] 대화요약 부문 알라꿍달라꿍 팀의 대화요약 학습 샘플 모델을 공유합니다.\n\nbart-r3f 모델에 2021-dialogue-summary-competition 레포지토리의 RL 기법을 적용해 대화요약 Task를 학습한 모델입니다.\n\n데이터는 AIHub 한국어 대화요약 데이터를 사용하였습니다."
] |
text2text-generation
|
transformers
|
# mt5-large-finetuned-mnli-xtreme-xnli
## Model Description
This model takes a pretrained large [multilingual-t5](https://github.com/google-research/multilingual-t5) (also available from [models](https://huggingface.co/google/mt5-large)) and fine-tunes it on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set. It is intended to be used for zero-shot text classification, inspired by [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli).
## Intended Use
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on English MNLI and the [xtreme_xnli](https://www.tensorflow.org/datasets/catalog/xtreme_xnli) training set, a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
- Arabic
- Bulgarian
- Chinese
- English
- French
- German
- Greek
- Hindi
- Russian
- Spanish
- Swahili
- Thai
- Turkish
- Urdu
- Vietnamese
As per recommendations in [xlm-roberta-large-xnli](https://huggingface.co/joeddav/xlm-roberta-large-xnli), for English-only classification, you might want to check out:
- [bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli)
- [a distilled bart MNLI model](https://huggingface.co/models?filter=pipeline_tag%3Azero-shot-classification&search=valhalla).
### Zero-shot example:
The model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, "xnli:".
Below is an example, using PyTorch, of the model's use in a similar fashion to the `zero-shot-classification` pipeline. We use the logits from the LM output at the first token to represent confidence.
```python
from torch.nn.functional import softmax
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model_name = "alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)
model.eval()

sequence_to_classify = "¿A quién vas a votar en 2020?"
candidate_labels = ["Europa", "salud pública", "política"]
hypothesis_template = "Este ejemplo es {}."

ENTAILS_LABEL = "▁0"
NEUTRAL_LABEL = "▁1"
CONTRADICTS_LABEL = "▁2"

label_inds = tokenizer.convert_tokens_to_ids(
    [ENTAILS_LABEL, NEUTRAL_LABEL, CONTRADICTS_LABEL])

def process_nli(premise: str, hypothesis: str):
    """ process to required xnli format with task prefix """
    return "".join(['xnli: premise: ', premise, ' hypothesis: ', hypothesis])

# construct sequence of premise, hypothesis pairs
pairs = [(sequence_to_classify, hypothesis_template.format(label)) for label in
         candidate_labels]

# format for mt5 xnli task
seqs = [process_nli(premise=premise, hypothesis=hypothesis) for
        premise, hypothesis in pairs]
print(seqs)
# ['xnli: premise: ¿A quién vas a votar en 2020? hypothesis: Este ejemplo es Europa.',
#  'xnli: premise: ¿A quién vas a votar en 2020? hypothesis: Este ejemplo es salud pública.',
#  'xnli: premise: ¿A quién vas a votar en 2020? hypothesis: Este ejemplo es política.']

inputs = tokenizer.batch_encode_plus(seqs, return_tensors="pt", padding=True)
out = model.generate(**inputs, output_scores=True, return_dict_in_generate=True,
                     num_beams=1)

# sanity check that our sequences are expected length (1 + start token + end token = 3)
for i, seq in enumerate(out.sequences):
    assert len(seq) == 3, f"generated sequence {i} not of expected length, 3." \
                          f" Actual length: {len(seq)}"

# get the scores for our only token of interest
# we'll now treat these like the output logits of a `*ForSequenceClassification` model
scores = out.scores[0]

# scores has a size of the model's vocab.
# However, for this task we have a fixed set of labels
# sanity check that these labels are always the top 3 scoring
for i, sequence_scores in enumerate(scores):
    top_scores = sequence_scores.argsort()[-3:]
    assert set(top_scores.tolist()) == set(label_inds), \
        f"top scoring tokens are not expected for this task." \
        f" Expected: {label_inds}. Got: {top_scores.tolist()}."

# cut down scores to our task labels
scores = scores[:, label_inds]
print(scores)
# tensor([[-2.5697,  1.0618,  0.2088],
#         [-5.4492, -2.1805, -0.1473],
#         [ 2.2973,  3.7595, -0.1769]])

# new indices of entailment and contradiction in scores
entailment_ind = 0
contradiction_ind = 2

# we can show, per item, the entailment vs contradiction probas
entail_vs_contra_scores = scores[:, [entailment_ind, contradiction_ind]]
entail_vs_contra_probas = softmax(entail_vs_contra_scores, dim=1)
print(entail_vs_contra_probas)
# tensor([[0.0585, 0.9415],
#         [0.0050, 0.9950],
#         [0.9223, 0.0777]])

# or we can show probas similar to `ZeroShotClassificationPipeline`
# this gives a zero-shot classification style output across labels
entail_scores = scores[:, entailment_ind]
entail_probas = softmax(entail_scores, dim=0)
print(entail_probas)
# tensor([7.6341e-03, 4.2873e-04, 9.9194e-01])

print(dict(zip(candidate_labels, entail_probas.tolist())))
# {'Europa': 0.007634134963154793,
#  'salud pública': 0.0004287279152777046,
#  'política': 0.9919371604919434}
```
Unfortunately, the `generate` function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.
The model is currently not compatible with the existing `zero-shot-classification` pipeline.
## Training
This model was pre-trained on a set of 101 languages in the mC4, as described in [the mt5 paper](https://arxiv.org/abs/2010.11934). It was then fine-tuned on the [mt5_xnli_translate_train](https://github.com/google-research/multilingual-t5/blob/78d102c830d76bd68f27596a97617e2db2bfc887/multilingual_t5/tasks.py#L190) task for 8k steps in a similar manner to that described in the [official repo](https://github.com/google-research/multilingual-t5#fine-tuning), with guidance from [Stephen Mayhew's notebook](https://github.com/mayhewsw/multilingual-t5/blob/master/notebooks/mt5-xnli.ipynb). The resulting model was then converted to :hugging_face: format.
## Eval results
Accuracy over XNLI test set:
| ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vi | zh | average |
|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|------|
| 81.0 | 85.0 | 84.3 | 84.3 | 88.8 | 85.3 | 83.9 | 79.9 | 82.6 | 78.0 | 81.0 | 81.6 | 76.4 | 81.7 | 82.3 | 82.4 |
|
{"language": ["multilingual", "en", "fr", "es", "de", "el", "bg", "ru", "tr", "ar", "vi", "th", "zh", "hi", "sw", "ur"], "license": "apache-2.0", "tags": ["pytorch"], "datasets": ["multi_nli", "xnli"], "metrics": ["xnli"]}
|
alan-turing-institute/mt5-large-finetuned-mnli-xtreme-xnli
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"mt5",
"text2text-generation",
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur",
"dataset:multi_nli",
"dataset:xnli",
"arxiv:2010.11934",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.11934"
] |
[
"multilingual",
"en",
"fr",
"es",
"de",
"el",
"bg",
"ru",
"tr",
"ar",
"vi",
"th",
"zh",
"hi",
"sw",
"ur"
] |
TAGS
#transformers #pytorch #tf #safetensors #mt5 #text2text-generation #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #dataset-multi_nli #dataset-xnli #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
mt5-large-finetuned-mnli-xtreme-xnli
====================================
Model Description
-----------------
This model takes a pretrained large multilingual-t5 (also available from models) and fine-tunes it on English MNLI and the xtreme\_xnli training set. It is intended to be used for zero-shot text classification, inspired by xlm-roberta-large-xnli.
Intended Use
------------
This model is intended to be used for zero-shot text classification, especially in languages other than English. It is fine-tuned on English MNLI and the xtreme\_xnli training set, a multilingual NLI dataset. The model can therefore be used with any of the languages in the XNLI corpus:
* Arabic
* Bulgarian
* Chinese
* English
* French
* German
* Greek
* Hindi
* Russian
* Spanish
* Swahili
* Thai
* Turkish
* Urdu
* Vietnamese
As per recommendations in xlm-roberta-large-xnli, for English-only classification, you might want to check out:
* bart-large-mnli
* a distilled bart MNLI model.
### Zero-shot example:
The model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, "xnli:".
Below is an example, using PyTorch, of the model's use in a similar fashion to the 'zero-shot-classification' pipeline. We use the logits from the LM output at the first token to represent confidence.
Unfortunately, the 'generate' function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.
The model is currently not compatible with the existing 'zero-shot-classification' pipeline.
Training
--------
This model was pre-trained on a set of 101 languages in the mC4, as described in the mt5 paper. It was then fine-tuned on the mt5\_xnli\_translate\_train task for 8k steps in a similar manner to that described in the official repo, with guidance from Stephen Mayhew's notebook. The resulting model was then converted to :hugging\_face: format.
Eval results
------------
Accuracy over XNLI test set:
|
[
"### Zero-shot example:\n\n\nThe model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, \"xnli:\".\n\n\nBelow is an example, using PyTorch, of the model's use in a similar fashion to the 'zero-shot-classification' pipeline. We use the logits from the LM output at the first token to represent confidence.\n\n\nUnfortunately, the 'generate' function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.\n\n\nThe model is currently not compatible with the existing 'zero-shot-classification' pipeline.\n\n\nTraining\n--------\n\n\nThis model was pre-trained on a set of 101 languages in the mC4, as described in the mt5 paper. It was then fine-tuned on the mt5\\_xnli\\_translate\\_train task for 8k steps in a similar manner to that described in the offical repo, with guidance from Stephen Mayhew's notebook. The resulting model was then converted to :hugging\\_face: format.\n\n\nEval results\n------------\n\n\nAccuracy over XNLI test set:"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #mt5 #text2text-generation #multilingual #en #fr #es #de #el #bg #ru #tr #ar #vi #th #zh #hi #sw #ur #dataset-multi_nli #dataset-xnli #arxiv-2010.11934 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Zero-shot example:\n\n\nThe model retains its text-to-text characteristic after fine-tuning. This means that our expected outputs will be text. During fine-tuning, the model learns to respond to the NLI task with a series of single token responses that map to entailment, neutral, or contradiction. The NLI task is indicated with a fixed prefix, \"xnli:\".\n\n\nBelow is an example, using PyTorch, of the model's use in a similar fashion to the 'zero-shot-classification' pipeline. We use the logits from the LM output at the first token to represent confidence.\n\n\nUnfortunately, the 'generate' function for the TF equivalent model doesn't exactly mirror the PyTorch version so the above code won't directly transfer.\n\n\nThe model is currently not compatible with the existing 'zero-shot-classification' pipeline.\n\n\nTraining\n--------\n\n\nThis model was pre-trained on a set of 101 languages in the mC4, as described in the mt5 paper. It was then fine-tuned on the mt5\\_xnli\\_translate\\_train task for 8k steps in a similar manner to that described in the offical repo, with guidance from Stephen Mayhew's notebook. The resulting model was then converted to :hugging\\_face: format.\n\n\nEval results\n------------\n\n\nAccuracy over XNLI test set:"
] |
text-generation
|
transformers
|
# Rick Sanchez DialoGPT Model
|
{"tags": ["conversational"]}
|
alankar/DialoGPT-small-rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick Sanchez DialoGPT Model
|
[
"# Rick Sanchez DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick Sanchez DialoGPT Model"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 1311135
## Validation Metrics
- Loss: 0.35616958141326904
- Accuracy: 0.8979447200566973
- Macro F1: 0.8545383956197669
- Micro F1: 0.8979447200566975
- Weighted F1: 0.8983951947775538
- Macro Precision: 0.8615833774439791
- Micro Precision: 0.8979447200566973
- Weighted Precision: 0.9013559365881655
- Macro Recall: 0.8516503001777104
- Micro Recall: 0.8979447200566973
- Weighted Recall: 0.8979447200566973
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "bn", "tags": "autonlp", "datasets": ["albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
albertvillanova/autonlp-indic_glue-multi_class_classification-1e67664-1311135
| null |
[
"transformers",
"pytorch",
"albert",
"text-classification",
"autonlp",
"bn",
"dataset:albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bn"
] |
TAGS
#transformers #pytorch #albert #text-classification #autonlp #bn #dataset-albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 1311135
## Validation Metrics
- Loss: 0.35616958141326904
- Accuracy: 0.8979447200566973
- Macro F1: 0.8545383956197669
- Micro F1: 0.8979447200566975
- Weighted F1: 0.8983951947775538
- Macro Precision: 0.8615833774439791
- Micro Precision: 0.8979447200566973
- Weighted Precision: 0.9013559365881655
- Macro Recall: 0.8516503001777104
- Micro Recall: 0.8979447200566973
- Weighted Recall: 0.8979447200566973
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 1311135",
"## Validation Metrics\n\n- Loss: 0.35616958141326904\n- Accuracy: 0.8979447200566973\n- Macro F1: 0.8545383956197669\n- Micro F1: 0.8979447200566975\n- Weighted F1: 0.8983951947775538\n- Macro Precision: 0.8615833774439791\n- Micro Precision: 0.8979447200566973\n- Weighted Precision: 0.9013559365881655\n- Macro Recall: 0.8516503001777104\n- Micro Recall: 0.8979447200566973\n- Weighted Recall: 0.8979447200566973",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #albert #text-classification #autonlp #bn #dataset-albertvillanova/autonlp-data-indic_glue-multi_class_classification-1e67664 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 1311135",
"## Validation Metrics\n\n- Loss: 0.35616958141326904\n- Accuracy: 0.8979447200566973\n- Macro F1: 0.8545383956197669\n- Micro F1: 0.8979447200566975\n- Weighted F1: 0.8983951947775538\n- Macro Precision: 0.8615833774439791\n- Micro Precision: 0.8979447200566973\n- Weighted Precision: 0.9013559365881655\n- Macro Recall: 0.8516503001777104\n- Micro Recall: 0.8979447200566973\n- Weighted Recall: 0.8979447200566973",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
token-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 1301123
## Validation Metrics
- Loss: 0.14097803831100464
- Accuracy: 0.9740097463451206
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "bn", "tags": "autonlp", "datasets": ["albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
albertvillanova/autonlp-wikiann-entity_extraction-1e67664-1301123
| null |
[
"transformers",
"pytorch",
"safetensors",
"albert",
"token-classification",
"autonlp",
"bn",
"dataset:albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bn"
] |
TAGS
#transformers #pytorch #safetensors #albert #token-classification #autonlp #bn #dataset-albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Entity Extraction
- Model ID: 1301123
## Validation Metrics
- Loss: 0.14097803831100464
- Accuracy: 0.9740097463451206
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 1301123",
"## Validation Metrics\n\n- Loss: 0.14097803831100464\n- Accuracy: 0.9740097463451206\n- Precision: 0.0\n- Recall: 0.0\n- F1: 0.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #safetensors #albert #token-classification #autonlp #bn #dataset-albertvillanova/autonlp-data-wikiann-entity_extraction-1e67664 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Entity Extraction\n- Model ID: 1301123",
"## Validation Metrics\n\n- Loss: 0.14097803831100464\n- Accuracy: 0.9740097463451206\n- Precision: 0.0\n- Recall: 0.0\n- F1: 0.0",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
null | null |
# Configuration
`title`: _string_
Display title for the Space
`emoji`: _string_
Space emoji (emoji-only character allowed)
`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
`sdk`: _string_
Can be either `gradio` or `streamlit`
`sdk_version` : _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.
`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code).
Path is relative to the root of the repository.
`pinned`: _boolean_
Whether the Space stays on top of your list.
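For illustration, a complete front-matter block using these keys might look like the following sketch (the values are taken from this Space's own metadata; `sdk_version` is omitted since it only applies to `streamlit` pinning):

```yaml
---
title: clip
emoji: 👁
colorFrom: indigo
colorTo: blue
sdk: streamlit
app_file: app.py
pinned: true
---
```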
|
{"title": "clip", "emoji": "\ud83d\udc41", "colorFrom": "indigo", "colorTo": "blue", "sdk": "streamlit", "app_file": "app.py", "pinned": true}
|
allen0s/clip
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Configuration
'title': _string_
Display title for the Space
'emoji': _string_
Space emoji (emoji-only character allowed)
'colorFrom': _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
'colorTo': _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)
'sdk': _string_
Can be either 'gradio' or 'streamlit'
'sdk_version' : _string_
Only applicable for 'streamlit' SDK.
See doc for more info on supported versions.
'app_file': _string_
Path to your main application file (which contains either 'gradio' or 'streamlit' Python code).
Path is relative to the root of the repository.
'pinned': _boolean_
Whether the Space stays on top of your list.
|
[
"# Configuration\n\n'title': _string_ \nDisplay title for the Space\n\n'emoji': _string_ \nSpace emoji (emoji-only character allowed)\n\n'colorFrom': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'colorTo': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'sdk': _string_ \nCan be either 'gradio' or 'streamlit'\n\n'sdk_version' : _string_ \nOnly applicable for 'streamlit' SDK. \nSee doc for more info on supported versions.\n\n'app_file': _string_ \nPath to your main application file (which contains either 'gradio' or 'streamlit' Python code). \nPath is relative to the root of the repository.\n\n'pinned': _boolean_ \nWhether the Space stays on top of your list."
] |
[
"TAGS\n#region-us \n",
"# Configuration\n\n'title': _string_ \nDisplay title for the Space\n\n'emoji': _string_ \nSpace emoji (emoji-only character allowed)\n\n'colorFrom': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'colorTo': _string_ \nColor for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)\n\n'sdk': _string_ \nCan be either 'gradio' or 'streamlit'\n\n'sdk_version' : _string_ \nOnly applicable for 'streamlit' SDK. \nSee doc for more info on supported versions.\n\n'app_file': _string_ \nPath to your main application file (which contains either 'gradio' or 'streamlit' Python code). \nPath is relative to the root of the repository.\n\n'pinned': _boolean_ \nWhether the Space stays on top of your list."
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/alecmullen/autonlp-group-classification-441411446
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("alecmullen/autonlp-group-classification-441411446", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": "autonlp", "datasets": ["alecmullen/autonlp-data-group-classification"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 0.4362732160754736}
|
alecmullen/autonlp-group-classification-441411446
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autonlp",
"en",
"dataset:alecmullen/autonlp-data-group-classification",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-alecmullen/autonlp-data-group-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 441411446
- CO2 Emissions (in grams): 0.4362732160754736
## Validation Metrics
- Loss: 0.7598486542701721
- Accuracy: 0.8222222222222222
- Macro F1: 0.2912091747693842
- Micro F1: 0.8222222222222222
- Weighted F1: 0.7707160863181806
- Macro Precision: 0.29631463146314635
- Micro Precision: 0.8222222222222222
- Weighted Precision: 0.7341339689524508
- Macro Recall: 0.30174603174603176
- Micro Recall: 0.8222222222222222
- Weighted Recall: 0.8222222222222222
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 441411446\n- CO2 Emissions (in grams): 0.4362732160754736",
"## Validation Metrics\n\n- Loss: 0.7598486542701721\n- Accuracy: 0.8222222222222222\n- Macro F1: 0.2912091747693842\n- Micro F1: 0.8222222222222222\n- Weighted F1: 0.7707160863181806\n- Macro Precision: 0.29631463146314635\n- Micro Precision: 0.8222222222222222\n- Weighted Precision: 0.7341339689524508\n- Macro Recall: 0.30174603174603176\n- Micro Recall: 0.8222222222222222\n- Weighted Recall: 0.8222222222222222",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #autonlp #en #dataset-alecmullen/autonlp-data-group-classification #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Multi-class Classification\n- Model ID: 441411446\n- CO2 Emissions (in grams): 0.4362732160754736",
"## Validation Metrics\n\n- Loss: 0.7598486542701721\n- Accuracy: 0.8222222222222222\n- Macro F1: 0.2912091747693842\n- Micro F1: 0.8222222222222222\n- Weighted F1: 0.7707160863181806\n- Macro Precision: 0.29631463146314635\n- Micro Precision: 0.8222222222222222\n- Weighted Precision: 0.7341339689524508\n- Macro Recall: 0.30174603174603176\n- Micro Recall: 0.8222222222222222\n- Weighted Recall: 0.8222222222222222",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
feature-extraction
|
transformers
|
## Classifier to check whether two sequences are paraphrases of each other
Trained on top of ruBert by DeepPavlov.
Use it as follows:
```
import torch
import torch.nn as nn
import os
import copy
import random
import numpy as np
import pandas as pd
from torch.utils.data import DataLoader, Dataset
from torch.cuda.amp import autocast, GradScaler
from tqdm import tqdm
from transformers import AutoTokenizer, AutoModel, AdamW, get_linear_schedule_with_warmup
from transformers.file_utils import (
cached_path,
hf_bucket_url,
is_remote_url,
)
archive_file = hf_bucket_url(
"alenusch/par_cls_bert",
filename="rubert-base-cased_lr_2e-05_val_loss_0.66143_ep_4.pt",
revision=None,
mirror=None,
)
resolved_archive_file = cached_path(
archive_file,
cache_dir=None,
force_download=False,
proxies=None,
resume_download=False,
local_files_only=False,
)
os.environ["TOKENIZERS_PARALLELISM"] = "false"
class SentencePairClassifier(nn.Module):
def __init__(self, bert_model):
super(SentencePairClassifier, self).__init__()
self.bert_layer = AutoModel.from_pretrained(bert_model)
self.cls_layer = nn.Linear(768, 1)
self.dropout = nn.Dropout(p=0.1)
@autocast()
def forward(self, input_ids, attn_masks, token_type_ids):
cont_reps, pooler_output = self.bert_layer(input_ids, attn_masks, token_type_ids, return_dict=False)
logits = self.cls_layer(self.dropout(pooler_output))
return logits
class CustomDataset(Dataset):
def __init__(self, data, maxlen, bert_model):
self.data = data
self.tokenizer = AutoTokenizer.from_pretrained(bert_model)
self.maxlen = maxlen
self.targets = False
def __len__(self):
return len(self.data)
def __getitem__(self, index):
sent1 = str(self.data[index][0])
sent2 = str(self.data[index][1])
encoded_pair = self.tokenizer(sent1, sent2,
padding='max_length', # Pad to max_length
truncation=True, # Truncate to max_length
max_length=self.maxlen,
return_tensors='pt') # Return torch.Tensor objects
token_ids = encoded_pair['input_ids'].squeeze(0) # tensor of token ids
attn_masks = encoded_pair['attention_mask'].squeeze(0) # binary tensor with "0" for padded values and "1" for the other values
token_type_ids = encoded_pair['token_type_ids'].squeeze(0) # binary tensor with "0" for the 1st sentence tokens & "1" for the 2nd sentence tokens
return token_ids, attn_masks, token_type_ids
def get_probs_from_logits(logits):
probs = torch.sigmoid(logits.unsqueeze(-1))
return probs.detach().cpu().numpy()
def test_prediction(net, device, dataloader, with_labels=False):
net.eval()
probs_all = []
with torch.no_grad():
for seq, attn_masks, token_type_ids in tqdm(dataloader):
seq, attn_masks, token_type_ids = seq.to(device), attn_masks.to(device), token_type_ids.to(device)
logits = net(seq, attn_masks, token_type_ids)
probs = get_probs_from_logits(logits.squeeze(-1)).squeeze(-1)
probs_all += probs.tolist()
return probs_all
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
cls_model = SentencePairClassifier(bert_model="alenusch/par_cls_bert")
if torch.cuda.device_count() > 1:
    cls_model = nn.DataParallel(cls_model)  # wrap the classifier itself for multi-GPU inference
cls_model.load_state_dict(torch.load(resolved_archive_file))
cls_model.to(device)
variants = [["sentence1", "sentence2"]]
test_set = CustomDataset(variants, maxlen=512, bert_model="alenusch/par_cls_bert")
test_loader = DataLoader(test_set, batch_size=16, num_workers=5)
res = test_prediction(net=cls_model, device=device, dataloader=test_loader, with_labels=False)
```
|
{}
|
alenusch/par_cls_bert
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
|
## Classifier to check whether two sequences are paraphrases of each other
Trained on top of ruBert by DeepPavlov.
Use it as follows:
|
[
"## Classifier to check if two sequences are paraphrase or not\n\nTrained based on ruBert by DeepPavlov.\n\nUse this way:"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n",
"## Classifier to check if two sequences are paraphrase or not\n\nTrained based on ruBert by DeepPavlov.\n\nUse this way:"
] |
feature-extraction
|
transformers
|
alex6095/SanctiMolyOH_Cpu
|
{}
|
alex6095/SanctiMolyOH_Cpu
| null |
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #distilbert #feature-extraction #endpoints_compatible #has_space #region-us
|
alex6095/SanctiMolyOH_Cpu
|
[] |
[
"TAGS\n#transformers #pytorch #distilbert #feature-extraction #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
# DanBERT
## Model description
DanBERT is a Danish pre-trained model based on BERT-Base. The model has been pre-trained on more than 2 million sentences and 40 million Danish words. The training was conducted as part of a thesis.
The model can be found at:
* [danbert-da](https://huggingface.co/alexanderfalk/danbert-small-cased)
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("alexanderfalk/danbert-small-cased")
model = AutoModel.from_pretrained("alexanderfalk/danbert-small-cased")
```
### BibTeX entry and citation info
```bibtex
@inproceedings{...,
year={2020},
title={Anonymization of Danish, Real-Time Data, and Personalized Modelling},
author={Alexander Falk},
}
```
|
{"language": ["da", "en"], "license": "apache-2.0", "tags": ["named entity recognition", "token criticality"], "datasets": ["custom danish dataset"], "metrics": ["array of metric identifiers"], "inference": false}
|
alexanderfalk/danbert-small-cased
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"named entity recognition",
"token criticality",
"da",
"en",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"da",
"en"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #named entity recognition #token criticality #da #en #license-apache-2.0 #autotrain_compatible #region-us
|
# DanBERT
## Model description
DanBERT is a Danish pre-trained model based on BERT-Base. The model has been pre-trained on more than 2 million sentences and 40 million Danish words. The training was conducted as part of a thesis.
The model can be found at:
* danbert-da
## Intended uses & limitations
#### How to use
### BibTeX entry and citation info
|
[
"# DanBERT",
"## Model description\n\nDanBERT is a danish pre-trained model based on BERT-Base. The pre-trained model has been trained on more than 2 million sentences and 40 millions, danish words. The training has been conducted as part of a thesis. \nThe model can be found at:\n\n* danbert-da",
"## Intended uses & limitations",
"#### How to use",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #named entity recognition #token criticality #da #en #license-apache-2.0 #autotrain_compatible #region-us \n",
"# DanBERT",
"## Model description\n\nDanBERT is a danish pre-trained model based on BERT-Base. The pre-trained model has been trained on more than 2 million sentences and 40 millions, danish words. The training has been conducted as part of a thesis. \nThe model can be found at:\n\n* danbert-da",
"## Intended uses & limitations",
"#### How to use",
"### BibTeX entry and citation info"
] |
token-classification
|
transformers
|
# ArcheoBERTje-NER
A Dutch BERT model for Named Entity Recognition in the Archaeology domain
This is the [ArcheoBERTje](https://huggingface.co/alexbrandsen/ArcheoBERTje) model finetuned for NER, targeting the following entities:
- Time periods
- Places
- Artefacts
- Contexts
- Materials
- Species
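A minimal usage sketch with the Transformers `pipeline` API (the Dutch example sentence is illustrative, and the exact entity labels depend on the model's config):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word-piece tokens into whole entity spans
ner = pipeline(
    "token-classification",
    model="alexbrandsen/ArcheoBERTje-NER",
    aggregation_strategy="simple",
)

# Illustrative Dutch sentence from the archaeology domain
print(ner("In de Romeinse tijd werd hier veel aardewerk gevonden."))
```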
|
{}
|
alexbrandsen/ArcheoBERTje-NER
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
|
# ArcheoBERTje-NER
A Dutch BERT model for Named Entity Recognition in the Archaeology domain
This is the ArcheoBERTje model finetuned for NER, targeting the following entities:
- Time periods
- Places
- Artefacts
- Contexts
- Materials
- Species
|
[
"# ArcheoBERTje-NER\nA Dutch BERT model for Named Entity Recognition in the Archaeology domain\n\nThis is the ArcheoBERTje model finetuned for NER, targeting the following entities:\n\n- Time periods\n- Places\n- Artefacts\n- Contexts\n- Materials\n- Species"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# ArcheoBERTje-NER\nA Dutch BERT model for Named Entity Recognition in the Archaeology domain\n\nThis is the ArcheoBERTje model finetuned for NER, targeting the following entities:\n\n- Time periods\n- Places\n- Artefacts\n- Contexts\n- Materials\n- Species"
] |
fill-mask
|
transformers
|
# ArcheoBERTje
A Dutch BERT model for the Archaeology domain
This model is based on the Dutch BERTje model by wietsedv (https://github.com/wietsedv/bertje).
We further finetuned BERTje with a corpus of roughly 60k Dutch excavation reports (~650 million tokens) from the DANS data archive (https://easy.dans.knaw.nl/ui/home).
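A minimal fill-mask sketch (assuming the standard Transformers API; the Dutch example sentence is illustrative):

```python
from transformers import pipeline

# [MASK] is the BERT-style mask token used by this model
fill = pipeline("fill-mask", model="alexbrandsen/ArcheoBERTje")

for pred in fill("Tijdens de opgraving werd een [MASK] gevonden."):
    print(pred["token_str"], round(pred["score"], 3))
```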
|
{}
|
alexbrandsen/ArcheoBERTje
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# ArcheoBERTje
A Dutch BERT model for the Archaeology domain
This model is based on the Dutch BERTje model by wietsedv (URL
We further finetuned BERTje with a corpus of roughly 60k Dutch excavation reports (~650 million tokens) from the DANS data archive (URL
|
[
"# ArcheoBERTje\nA Dutch BERT model for the Archaeology domain\n\nThis model is based on the Dutch BERTje model by wietsedv (URL \n\nWe further finetuned BERTje with a corpus of roughly 60k Dutch excavation reports (~650 million tokens) from the DANS data archive (URL"
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# ArcheoBERTje\nA Dutch BERT model for the Archaeology domain\n\nThis model is based on the Dutch BERTje model by wietsedv (URL \n\nWe further finetuned BERTje with a corpus of roughly 60k Dutch excavation reports (~650 million tokens) from the DANS data archive (URL"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xlsr-polish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Polish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "pl", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Polish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "pl", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model = Wav2Vec2ForCTC.from_pretrained("alexcleu/wav2vec2-large-xlsr-polish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 24.846030 % WER
## Training
The Common Voice `train`, `validation` datasets were used for training.
|
{"language": "pl", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2vec2 Large 53 Polish by Alex Leu", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice pl", "type": "common_voice", "args": "pl"}, "metrics": [{"type": "wer", "value": 24.84603, "name": "Test WER"}]}]}]}
|
alexcleu/wav2vec2-large-xlsr-polish
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"pl",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #pl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# wav2vec2-large-xlsr-polish
Fine-tuned facebook/wav2vec2-large-xlsr-53 on Polish using the Common Voice dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
## Evaluation
The model can be evaluated as follows on the Polish test data of Common Voice.
Test Result: 24.846030 % WER
## Training
The Common Voice 'train', 'validation' datasets were used for training.
|
[
"# wav2vec2-large-xlsr-polish\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Polish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\nTest Result: 24.846030",
"## Training\nThe Common Voice 'train', 'validation' datasets were used for training."
] |
[
"TAGS\n#transformers #pytorch #jax #safetensors #wav2vec2 #automatic-speech-recognition #audio #speech #xlsr-fine-tuning-week #pl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# wav2vec2-large-xlsr-polish\nFine-tuned facebook/wav2vec2-large-xlsr-53 in Polish using the Common Voice\nWhen using this model, make sure that your speech input is sampled at 16kHz.",
"## Usage\nThe model can be used directly (without a language model) as follows:",
"## Evaluation\nThe model can be evaluated as follows on the Turkish test data of Common Voice.\n\nTest Result: 24.846030",
"## Training\nThe Common Voice 'train', 'validation' datasets were used for training."
] |
text2text-generation
|
transformers
|
t5_boolq
|
{}
|
alexcruz0202/t5_boolq
| null |
[
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5_boolq
|
[] |
[
"TAGS\n#transformers #pytorch #jax #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
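As a sketch of the intended use, the checkpoint can be queried like any T5 translation model (the task prefix below follows the standard T5 convention and is an assumption, since the card does not document the training prompt):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "alexrfelicio/t5-small-finetuned-en-to-de"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints conventionally expect a task prefix for translation
text = "translate English to German: The house is wonderful."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```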
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 1.7446 | 9.0564 | 17.8356 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned-en-to-de
===========================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned128-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned128-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned128-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-small-finetuned128-en-to-de
This model is a fine-tuned version of t5-small on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
[
"# t5-small-finetuned128-en-to-de\n\nThis model is a fine-tuned version of t5-small on the wmt16 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-small-finetuned128-en-to-de\n\nThis model is a fine-tuned version of t5-small on the wmt16 dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.16.1\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned16-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 2.1906 | 23.3821 | 12.956 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned16-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned16-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned16-en-to-de
=============================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned300-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.1454 | 14.2319 | 17.8329 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned300-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned300-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned300-en-to-de
==============================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned32-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 136 | 1.4226 | 21.9554 | 17.8089 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned32-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned32-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned32-en-to-de
=============================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned8-en-to-de
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 136 | 3.6717 | 3.9127 | 4.0207 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "model-index": [{"name": "t5-small-finetuned8-en-to-de", "results": []}]}
|
alexrfelicio/t5-small-finetuned8-en-to-de
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
t5-small-finetuned8-en-to-de
============================
This model is a fine-tuned version of t5-small on the wmt16 dataset.
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 1
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.5
* Pytorch 1.10.0+cu111
* Datasets 1.16.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #dataset-wmt16 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.5\n* Pytorch 1.10.0+cu111\n* Datasets 1.16.1\n* Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# alexrink/t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.6399
- Validation Loss: 6.0028
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
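As a sketch of the intended use (assuming the standard Transformers API; the checkpoint was trained with Keras, so the TensorFlow classes are used here, and the `summarize:` prefix is the usual T5 convention rather than something documented on this card):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_id = "alexrink/t5-small-finetuned-xsum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "The full text of a news article to be summarized goes here."
inputs = tokenizer("summarize: " + article, return_tensors="tf")
summary_ids = model.generate(**inputs, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```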
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.2, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 11.4991 | 6.9902 | 0 |
| 6.5958 | 6.2502 | 1 |
| 6.1443 | 6.1638 | 2 |
| 5.9379 | 6.0765 | 3 |
| 5.7739 | 5.9393 | 4 |
| 5.7033 | 6.0061 | 5 |
| 5.7070 | 5.9305 | 6 |
| 5.7000 | 5.9698 | 7 |
| 5.6888 | 5.9223 | 8 |
| 5.6657 | 5.9773 | 9 |
| 5.6827 | 5.9734 | 10 |
| 5.6380 | 5.9428 | 11 |
| 5.6532 | 5.9799 | 12 |
| 5.6617 | 5.9974 | 13 |
| 5.6402 | 5.9563 | 14 |
| 5.6710 | 5.9926 | 15 |
| 5.6999 | 5.9764 | 16 |
| 5.6573 | 5.9557 | 17 |
| 5.6297 | 5.9678 | 18 |
| 5.6399 | 6.0028 | 19 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "alexrink/t5-small-finetuned-xsum", "results": []}]}
|
alexrink/t5-small-finetuned-xsum
| null |
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #tf #tensorboard #t5 #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
alexrink/t5-small-finetuned-xsum
================================
This model is a fine-tuned version of t5-small on an unknown dataset.
It achieves the following results on the evaluation set:
* Train Loss: 5.6399
* Validation Loss: 6.0028
* Epoch: 19
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 0.2, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01}
* training\_precision: float32
### Training results
### Framework versions
* Transformers 4.26.1
* TensorFlow 2.11.0
* Datasets 2.9.0
* Tokenizers 0.13.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 0.2, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.26.1\n* TensorFlow 2.11.0\n* Datasets 2.9.0\n* Tokenizers 0.13.2"
] |
[
"TAGS\n#transformers #tf #tensorboard #t5 #text2text-generation #generated_from_keras_callback #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 0.2, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.26.1\n* TensorFlow 2.11.0\n* Datasets 2.9.0\n* Tokenizers 0.13.2"
] |
fill-mask
|
transformers
|
Paper: https://arxiv.org/abs/2204.03951
Code: https://github.com/alexyalunin/RuBioRoBERTa
|
{}
|
alexyalunin/RuBioBERT
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"arxiv:2204.03951",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2204.03951"
] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #arxiv-2204.03951 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
Paper: URL
Code: URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #arxiv-2204.03951 #autotrain_compatible #endpoints_compatible #has_space #region-us \n"
] |
fill-mask
|
transformers
|
### Contact
[email protected]
https://t.me/pavel_blinoff
### Paper
https://arxiv.org/abs/2204.03951
### Code
https://github.com/alexyalunin/RuBioRoBERTa
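### Usage
A minimal fill-mask sketch (assuming the standard Transformers API; the example sentence is taken from the widget examples of this card, and `<mask>` is the RoBERTa-style mask token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="alexyalunin/RuBioRoBERTa")

# Russian biomedical example from the model card widget
for pred in fill("Жалобы на боль внизу <mask> после приёма пищи."):
    print(pred["token_str"], round(pred["score"], 3))
```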
### Citation
```
@misc{alex2022rubioroberta,
title={RuBioRoBERTa: a pre-trained biomedical language model for Russian language biomedical text mining},
author={Alexander Yalunin and Alexander Nesterov and Dmitriy Umerenkov},
year={2022},
eprint={2204.03951},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["ru"], "multilinguality": ["monolingual"], "widget": [{"text": "\u0416\u0430\u043b\u043e\u0431\u044b \u043d\u0430 \u0431\u043e\u043b\u044c \u0432\u043d\u0438\u0437\u0443 <mask> \u043f\u043e\u0441\u043b\u0435 \u043f\u0440\u0438\u0451\u043c\u0430 \u043f\u0438\u0449\u0438.", "example_title": "pain_example"}, {"text": "\u041f\u0430\u0446\u0438\u0435\u043d\u0442\u043a\u0430 \u043d\u0430\u0431\u043b\u044e\u0434\u0430\u043b\u0430\u0441\u044c \u0443 <mask> \u043f\u043e \u043f\u043e\u0432\u043e\u0434\u0443 \u0433\u0440\u0438\u0431\u043a\u043e\u0432\u043e\u0433\u043e \u043f\u043e\u0440\u0430\u0436\u0435\u043d\u0438\u044f \u043a\u043e\u0436\u0438.", "example_title": "spec_example"}, {"text": "\u041f\u043e\u044f\u0432\u0438\u043b\u0441\u044f \u0437\u0443\u0434 \u0442\u0435\u043b\u0430, <mask> \u0432\u0435\u0441\u0430, \u043f\u043e\u0442\u043b\u0438\u0432\u043e\u0441\u0442\u044c, \u043f\u0440\u043e\u0432\u043e\u0434\u0438\u043b \u043a\u043e\u043d\u0442\u0440\u043e\u043b\u044c \u0441\u0430\u0445\u0430\u0440\u0430 \u043a\u0440\u043e\u0432\u0438.", "example_title": "weight_example"}]}
|
alexyalunin/RuBioRoBERTa
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ru",
"arxiv:2204.03951",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2204.03951"
] |
[
"ru"
] |
TAGS
#transformers #pytorch #roberta #fill-mask #ru #arxiv-2204.03951 #autotrain_compatible #endpoints_compatible #region-us
|
### Contact
URL@URL
https://t.me/pavel_blinoff
### Paper
URL
### Code
URL
|
[
"### Contact\n\nURL@URL\n\nhttps://t.me/pavel_blinoff",
"### Paper\nURL",
"### Code\nURL"
] |
[
"TAGS\n#transformers #pytorch #roberta #fill-mask #ru #arxiv-2204.03951 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Contact\n\nURL@URL\n\nhttps://t.me/pavel_blinoff",
"### Paper\nURL",
"### Code\nURL"
] |
fill-mask
|
transformers
|
# RuBio
for paper: dsdfsfsdf
|
{}
|
alexyalunin/my-awesome-model
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
# RuBio
for paper: dsdfsfsdf
|
[
"# RuBio\n\nfor paper: dsdfsfsdf"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"# RuBio\n\nfor paper: dsdfsfsdf"
] |
fill-mask
|
transformers
|
<img src="https://raw.githubusercontent.com/alger-ia/dziribert/main/dziribert_drawing.png" alt="drawing" width="25%" height="25%" align="right"/>
# DziriBERT
DziriBERT is the first Transformer-based language model pre-trained specifically for the Algerian dialect. It handles Algerian text written in both Arabic and Latin characters. It sets new state-of-the-art results on Algerian text classification datasets, even though it was pre-trained on much less data (~1 million tweets).
For more information, please see our paper: https://arxiv.org/pdf/2109.12346.pdf.
## How to use
```python
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained("alger-ia/dziribert")
model = BertForMaskedLM.from_pretrained("alger-ia/dziribert")
```
You can find a fine-tuning script in our Github repo: https://github.com/alger-ia/dziribert
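As a quick sanity check, the checkpoint can also be exercised through the fill-mask pipeline (a minimal sketch; the masked sentence is illustrative):
```python
from transformers import pipeline

# Minimal sketch: top predictions for a masked Algerian-dialect sentence.
fill_mask = pipeline("fill-mask", model="alger-ia/dziribert")
print(fill_mask("rabi [MASK] khouya sami"))  # BERT checkpoints use the [MASK] token
```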
## Limitations
The pre-training data used in this project comes from social media (Twitter). Therefore, the Masked Language Modeling objective may predict offensive words in some situations. Modeling such words may be either an advantage (e.g. when training a hate speech model) or a disadvantage (e.g. when generating answers that are directly sent to the end user). Depending on your downstream task, you may need to filter out such words, especially when returning automatically generated text to the end user.
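A sketch of the kind of post-filtering mentioned above (the blocklist below is a placeholder, not something shipped with the model):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="alger-ia/dziribert")

# Placeholder blocklist; a real deployment would use a curated offensive-word lexicon.
BLOCKLIST = {"example_offensive_word"}

def filter_predictions(predictions):
    # Each fill-mask prediction exposes the decoded candidate under "token_str".
    return [p for p in predictions if p["token_str"].strip().lower() not in BLOCKLIST]

safe_predictions = filter_predictions(fill_mask("rouhi ya dzayer [MASK]"))
```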
### How to cite
```bibtex
@article{dziribert,
title={DziriBERT: a Pre-trained Language Model for the Algerian Dialect},
author={Abdaoui, Amine and Berrimi, Mohamed and Oussalah, Mourad and Moussaoui, Abdelouahab},
journal={arXiv preprint arXiv:2109.12346},
year={2021}
}
```
## Contact
Please contact [email protected] for any question, feedback or request.
|
{"language": ["ar", "dz"], "license": "apache-2.0", "tags": ["pytorch", "bert", "multilingual", "ar", "dz"], "widget": [{"text": " \u0623\u0646\u0627 \u0645\u0646 \u0627\u0644\u062c\u0632\u0627\u0626\u0631 \u0645\u0646 \u0648\u0644\u0627\u064a\u0629 [MASK] "}, {"text": "rabi [MASK] khouya sami"}, {"text": " \u0631\u0628\u064a [MASK] \u062e\u0648\u064a\u0627 \u0644\u0639\u0632\u064a\u0632"}, {"text": "tahya el [MASK]."}, {"text": "rouhi ya dzayer [MASK]"}], "inference": true}
|
alger-ia/dziribert
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"ar",
"dz",
"arxiv:2109.12346",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2109.12346"
] |
[
"ar",
"dz"
] |
TAGS
#transformers #pytorch #tf #safetensors #bert #fill-mask #multilingual #ar #dz #arxiv-2109.12346 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
<img src="URL" alt="drawing" width="25%" height="25%" align="right"/>
# DziriBERT
DziriBERT is the first Transformer-based language model pre-trained specifically for the Algerian dialect. It handles Algerian text written in both Arabic and Latin characters. It sets new state-of-the-art results on Algerian text classification datasets, even though it was pre-trained on much less data (~1 million tweets).
For more information, please see our paper: URL
## How to use
You can find a fine-tuning script in our Github repo: URL
## Limitations
The pre-training data used in this project comes from social media (Twitter). Therefore, the Masked Language Modeling objective may predict offensive words in some situations. Modeling such words may be either an advantage (e.g. when training a hate speech model) or a disadvantage (e.g. when generating answers that are directly sent to the end user). Depending on your downstream task, you may need to filter out such words, especially when returning automatically generated text to the end user.
### How to cite
## Contact
Please contact URL@URL for any question, feedback or request.
|
[
"# DziriBERT\n\n\nDziriBERT is the first Transformer-based Language Model that has been pre-trained specifically for the Algerian Dialect. It handles Algerian text contents written using both Arabic and Latin characters. It sets new state of the art results on Algerian text classification datasets, even if it has been pre-trained on much less data (~1 million tweets).\n\nFor more information, please visit our paper: URL",
"## How to use\n\n\n\nYou can find a fine-tuning script in our Github repo: URL",
"## Limitations\n\nThe pre-training data used in this project comes from social media (Twitter). Therefore, the Masked Language Modeling objective may predict offensive words in some situations. Modeling this kind of words may be either an advantage (e.g. when training a hate speech model) or a disadvantage (e.g. when generating answers that are directly sent to the end user). Depending on your downstream task, you may need to filter out such words especially when returning automatically generated text to the end user.",
"### How to cite",
"## Contact \n\nPlease contact URL@URL for any question, feedback or request."
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #bert #fill-mask #multilingual #ar #dz #arxiv-2109.12346 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# DziriBERT\n\n\nDziriBERT is the first Transformer-based Language Model that has been pre-trained specifically for the Algerian Dialect. It handles Algerian text contents written using both Arabic and Latin characters. It sets new state of the art results on Algerian text classification datasets, even if it has been pre-trained on much less data (~1 million tweets).\n\nFor more information, please visit our paper: URL",
"## How to use\n\n\n\nYou can find a fine-tuning script in our Github repo: URL",
"## Limitations\n\nThe pre-training data used in this project comes from social media (Twitter). Therefore, the Masked Language Modeling objective may predict offensive words in some situations. Modeling this kind of words may be either an advantage (e.g. when training a hate speech model) or a disadvantage (e.g. when generating answers that are directly sent to the end user). Depending on your downstream task, you may need to filter out such words especially when returning automatically generated text to the end user.",
"### How to cite",
"## Contact \n\nPlease contact URL@URL for any question, feedback or request."
] |
fill-mask
|
transformers
|
<p>Chinese BERT Large Model</p>
<p>BERT large pre-trained model for Chinese</p>
#### Training corpus
Chinese Wikipedia and a massive news corpus from 2018-2020
|
{}
|
algolet/bert-large-chinese
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us
|
<p>Chinese BERT Large Model</p>
<p>BERT large pre-trained model for Chinese</p>
#### Training corpus
Chinese Wikipedia and a massive news corpus from 2018-2020
|
[
"#### 训练语料\n中文wiki, 2018-2020海量新闻语料"
] |
[
"TAGS\n#transformers #pytorch #bert #fill-mask #autotrain_compatible #endpoints_compatible #region-us \n",
"#### 训练语料\n中文wiki, 2018-2020海量新闻语料"
] |
text2text-generation
|
transformers
|
<h3 align="center">
<p>MT5 Base Model for Chinese Question Generation</p>
</h3>
<h3 align="center">
<p>Chinese question generation based on mT5</p>
</h3>
#### Get started by installing the question-generation package
```
pip install question-generation
```
For usage instructions, see the GitHub project: https://github.com/algolet/question_generation
#### Online usage
You can try our model directly online: https://www.algolet.com/applications/qg
#### Invoking via transformers
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("algolet/mt5-base-chinese-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("algolet/mt5-base-chinese-qg")
model.eval()

# Example passage: the fable "The Farmer and the Snake" in Chinese.
text = "在一个寒冷的冬天,赶集完回家的农夫在路边发现了一条冻僵了的蛇。他很可怜蛇,就把它放在怀里。当他身上的热气把蛇温暖以后,蛇很快苏醒了,露出了残忍的本性,给了农夫致命的伤害——咬了农夫一口。农夫临死之前说:“我竟然救了一条可怜的毒蛇,就应该受到这种报应啊!”"
# The model expects the task prefix "question generation: " before the passage.
text = "question generation: " + text

inputs = tokenizer(text,
                   return_tensors='pt',
                   truncation=True,
                   max_length=512)
with torch.no_grad():
    outs = model.generate(input_ids=inputs["input_ids"],
                          attention_mask=inputs["attention_mask"],
                          max_length=128,
                          no_repeat_ngram_size=4,
                          num_beams=4)

# Generated questions are joined with "<sep>"; split and strip them.
question = tokenizer.decode(outs[0], skip_special_tokens=True)
questions = [q.strip() for q in question.split("<sep>") if len(q.strip()) > 0]
print(questions)
# ['在寒冷的冬天,农夫在哪里发现了一条可怜的蛇?', '农夫是如何看待蛇的?', '当农夫遇到蛇时,他做了什么?']
```
#### Metrics
rouge-1: 0.4041
rouge-2: 0.2104
rouge-l: 0.3843
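A sketch of how such scores could be reproduced (assuming the open-source `rouge` package and character-level tokenization for Chinese; the exact evaluation setup is not specified by the card):
```python
from rouge import Rouge  # assumed evaluation dependency, not confirmed by the card

def char_tokenize(text: str) -> str:
    # ROUGE implementations expect space-separated tokens; scoring Chinese
    # at the character level is one common convention.
    return " ".join(text.replace(" ", ""))

hypothesis = "农夫在哪里发现了蛇?"  # generated question (illustrative)
reference = "在寒冷的冬天,农夫在哪里发现了一条可怜的蛇?"  # gold question (illustrative)

scores = Rouge().get_scores(char_tokenize(hypothesis), char_tokenize(reference))
print(scores[0]["rouge-1"]["f"], scores[0]["rouge-2"]["f"], scores[0]["rouge-l"]["f"])
```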
---
language:
- zh
tags:
- mt5
- question generation
metrics:
- rouge
---
|
{}
|
algolet/mt5-base-chinese-qg
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
<h3 align="center">
<p>MT5 Base Model for Chinese Question Generation</p>
</h3>
<h3 align="center">
<p>Chinese question generation based on mT5</p>
</h3>
#### Get started by installing the question-generation package
For usage instructions, see the GitHub project: URL
#### Online usage
You can try our model directly online: URL
#### Invoking via transformers
#### Metrics
rouge-1: 0.4041
rouge-2: 0.2104
rouge-l: 0.3843
---
language:
- zh
tags:
- mt5
- question generation
metrics:
- rouge
---
|
[
"#### 可以通过安装question-generation包开始用\n\n使用方法请参考github项目:URL",
"#### 在线使用\n可以直接在线使用我们的模型:URL",
"#### 通过transformers调用",
"#### 指标\nrouge-1: 0.4041\n\nrouge-2: 0.2104\n\nrouge-l: 0.3843\n\n---\nlanguage: \n - zh\n \ntags:\n- mt5\n- question generation\n\nmetrics:\n- rouge\n\n---"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"#### 可以通过安装question-generation包开始用\n\n使用方法请参考github项目:URL",
"#### 在线使用\n可以直接在线使用我们的模型:URL",
"#### 通过transformers调用",
"#### 指标\nrouge-1: 0.4041\n\nrouge-2: 0.2104\n\nrouge-l: 0.3843\n\n---\nlanguage: \n - zh\n \ntags:\n- mt5\n- question generation\n\nmetrics:\n- rouge\n\n---"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2899
- Precision: 0.3170
- Recall: 0.5261
- F1: 0.3956
- Accuracy: 0.8799
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
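These settings map directly onto `transformers.TrainingArguments`; a minimal sketch of the equivalent configuration (the output path is a placeholder, and the same pattern applies to the similar cards below):
```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; "output/" is a placeholder path.
training_args = TrainingArguments(
    output_dir="output/",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```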
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 |
| No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 |
| No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 |
| No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 |
| No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27", "results": []}]}
|
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased\_token\_itr0\_0.0001\_all\_01\_03\_2022-04\_48\_27
====================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2899
* Precision: 0.3170
* Recall: 0.5261
* F1: 0.3956
* Accuracy: 0.8799
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2698
- Precision: 0.3321
- Recall: 0.5265
- F1: 0.4073
- Accuracy: 0.8942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3314 | 0.1627 | 0.3746 | 0.2269 | 0.8419 |
| No log | 2.0 | 60 | 0.2957 | 0.2887 | 0.4841 | 0.3617 | 0.8592 |
| No log | 3.0 | 90 | 0.2905 | 0.2429 | 0.5141 | 0.3299 | 0.8651 |
| No log | 4.0 | 120 | 0.2759 | 0.3137 | 0.5565 | 0.4013 | 0.8787 |
| No log | 5.0 | 150 | 0.2977 | 0.3116 | 0.5565 | 0.3995 | 0.8796 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25", "results": []}]}
|
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased\_token\_itr0\_0.0001\_all\_01\_03\_2022-14\_21\_25
====================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2698
* Precision: 0.3321
* Recall: 0.5265
* F1: 0.4073
* Accuracy: 0.8942
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2741
- Precision: 0.1936
- Recall: 0.3243
- F1: 0.2424
- Accuracy: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3235 | 0.1062 | 0.2076 | 0.1405 | 0.8556 |
| No log | 2.0 | 60 | 0.2713 | 0.1710 | 0.3080 | 0.2199 | 0.8872 |
| No log | 3.0 | 90 | 0.3246 | 0.2010 | 0.3391 | 0.2524 | 0.8334 |
| No log | 4.0 | 120 | 0.3008 | 0.2011 | 0.3685 | 0.2602 | 0.8459 |
| No log | 5.0 | 150 | 0.2714 | 0.1780 | 0.3772 | 0.2418 | 0.8661 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10", "results": []}]}
|
ali2066/bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert-base-uncased\_token\_itr0\_2e-05\_all\_01\_03\_2022-04\_40\_10
===================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2741
* Precision: 0.1936
* Recall: 0.3243
* F1: 0.2424
* Accuracy: 0.8764
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7632
- Accuracy: 0.8263
- F1: 0.8871
- Precision: 0.8551
- Recall: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3986 | 0.8305 | 0.8903 | 0.8868 | 0.8938 |
| 0.4561 | 2.0 | 780 | 0.4018 | 0.8439 | 0.9009 | 0.8805 | 0.9223 |
| 0.3111 | 3.0 | 1170 | 0.4306 | 0.8354 | 0.8924 | 0.8974 | 0.8875 |
| 0.1739 | 4.0 | 1560 | 0.5499 | 0.8378 | 0.9002 | 0.8547 | 0.9509 |
| 0.1739 | 5.0 | 1950 | 0.6223 | 0.85 | 0.9052 | 0.8814 | 0.9303 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15", "results": []}]}
|
ali2066/bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
bert\_base\_uncased\_itr0\_0.0001\_all\_01\_03\_2022-14\_08\_15
===============================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7632
* Accuracy: 0.8263
* F1: 0.8871
* Precision: 0.8551
* Recall: 0.9215
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2711
- Precision: 0.3373
- Recall: 0.5670
- F1: 0.4230
- Accuracy: 0.8943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3783 | 0.1833 | 0.3975 | 0.2509 | 0.8413 |
| No log | 2.0 | 60 | 0.3021 | 0.3280 | 0.4820 | 0.3904 | 0.8876 |
| No log | 3.0 | 90 | 0.3196 | 0.3504 | 0.5036 | 0.4133 | 0.8918 |
| No log | 4.0 | 120 | 0.3645 | 0.3434 | 0.5306 | 0.4170 | 0.8759 |
| No log | 5.0 | 150 | 0.4027 | 0.3217 | 0.5486 | 0.4056 | 0.8797 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19", "results": []}]}
|
ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_BERT\_token\_itr0\_0.0001\_all\_01\_03\_2022-15\_52\_19
================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2711
* Precision: 0.3373
* Recall: 0.5670
* F1: 0.4230
* Accuracy: 0.8943
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1059
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1103 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 2.0 | 30 | 0.0842 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 3.0 | 45 | 0.0767 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 4.0 | 60 | 0.0754 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 5.0 | 75 | 0.0735 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21", "results": []}]}
|
ali2066/correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_BERT\_token\_itr0\_0.0001\_editorials\_01\_03\_2022-15\_50\_21
=======================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1059
* Precision: 0.0637
* Recall: 0.0080
* F1: 0.0141
* Accuracy: 0.9707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1801
- Precision: 0.6153
- Recall: 0.7301
- F1: 0.6678
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 |
| No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 |
| No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 |
| No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 |
| No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47", "results": []}]}
|
ali2066/correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_BERT\_token\_itr0\_0.0001\_essays\_01\_03\_2022-15\_48\_47
===================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1801
* Precision: 0.6153
* Recall: 0.7301
* F1: 0.6678
* Accuracy: 0.9346
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6542
- Precision: 0.0092
- Recall: 0.0403
- F1: 0.0150
- Accuracy: 0.7291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.5856 | 0.0012 | 0.0125 | 0.0022 | 0.6950 |
| No log | 2.0 | 20 | 0.5933 | 0.0 | 0.0 | 0.0 | 0.7282 |
| No log | 3.0 | 30 | 0.5729 | 0.0051 | 0.025 | 0.0085 | 0.7155 |
| No log | 4.0 | 40 | 0.6178 | 0.0029 | 0.0125 | 0.0047 | 0.7143 |
| No log | 5.0 | 50 | 0.6707 | 0.0110 | 0.0375 | 0.0170 | 0.7178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14", "results": []}]}
|
ali2066/correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_BERT\_token\_itr0\_0.0001\_webDiscourse\_01\_03\_2022-15\_47\_14
=========================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6542
* Precision: 0.0092
* Recall: 0.0403
* F1: 0.0150
* Accuracy: 0.7291
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3343
- Precision: 0.1651
- Recall: 0.3039
- F1: 0.2140
- Accuracy: 0.8493
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4801 | 0.0352 | 0.0591 | 0.0441 | 0.7521 |
| No log | 2.0 | 60 | 0.3795 | 0.0355 | 0.0795 | 0.0491 | 0.8020 |
| No log | 3.0 | 90 | 0.3359 | 0.0591 | 0.1294 | 0.0812 | 0.8334 |
| No log | 4.0 | 120 | 0.3205 | 0.0785 | 0.1534 | 0.1039 | 0.8486 |
| No log | 5.0 | 150 | 0.3144 | 0.0853 | 0.1571 | 0.1105 | 0.8516 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47", "results": []}]}
|
ali2066/correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_distilBERT\_token\_itr0\_1e-05\_all\_01\_03\_2022-15\_43\_47
=====================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3343
* Precision: 0.1651
* Recall: 0.3039
* F1: 0.2140
* Accuracy: 0.8493
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1206
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1222 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 2.0 | 30 | 0.1159 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 3.0 | 45 | 0.1082 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 4.0 | 60 | 0.1042 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 5.0 | 75 | 0.1029 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32", "results": []}]}
|
ali2066/correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_distilBERT\_token\_itr0\_1e-05\_editorials\_01\_03\_2022-15\_42\_32
============================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1206
* Precision: 0.0637
* Recall: 0.0080
* F1: 0.0141
* Accuracy: 0.9707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.2769
- Recall: 0.4391
- F1: 0.3396
- Accuracy: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.4573 | 0.0094 | 0.0027 | 0.0042 | 0.7702 |
| No log | 2.0 | 22 | 0.3660 | 0.1706 | 0.3253 | 0.2239 | 0.8516 |
| No log | 3.0 | 33 | 0.3096 | 0.2339 | 0.4080 | 0.2974 | 0.8827 |
| No log | 4.0 | 44 | 0.2868 | 0.2963 | 0.4693 | 0.3633 | 0.8928 |
| No log | 5.0 | 55 | 0.2798 | 0.3141 | 0.4800 | 0.3797 | 0.8960 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29", "results": []}]}
|
ali2066/correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_distilBERT\_token\_itr0\_1e-05\_essays\_01\_03\_2022-15\_41\_29
========================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3097
* Precision: 0.2769
* Recall: 0.4391
* F1: 0.3396
* Accuracy: 0.8878
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5794
- Precision: 0.0094
- Recall: 0.0147
- F1: 0.0115
- Accuracy: 0.7156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6319 | 0.0800 | 0.0312 | 0.0449 | 0.6753 |
| No log | 2.0 | 20 | 0.6265 | 0.0364 | 0.0312 | 0.0336 | 0.6764 |
| No log | 3.0 | 30 | 0.6216 | 0.0351 | 0.0312 | 0.0331 | 0.6762 |
| No log | 4.0 | 40 | 0.6193 | 0.0274 | 0.0312 | 0.0292 | 0.6759 |
| No log | 5.0 | 50 | 0.6183 | 0.0222 | 0.0312 | 0.0260 | 0.6773 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24", "results": []}]}
|
ali2066/correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
correct\_distilBERT\_token\_itr0\_1e-05\_webDiscourse\_01\_03\_2022-15\_40\_24
==============================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5794
* Precision: 0.0094
* Recall: 0.0147
* F1: 0.0115
* Accuracy: 0.7156
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2876
- Precision: 0.2345
- Recall: 0.4281
- F1: 0.3030
- Accuracy: 0.8728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3907 | 0.0433 | 0.0824 | 0.0568 | 0.7626 |
| No log | 2.0 | 60 | 0.3046 | 0.2302 | 0.4095 | 0.2947 | 0.8598 |
| No log | 3.0 | 90 | 0.2945 | 0.2084 | 0.4095 | 0.2762 | 0.8668 |
| No log | 4.0 | 120 | 0.2687 | 0.2847 | 0.4607 | 0.3519 | 0.8761 |
| No log | 5.0 | 150 | 0.2643 | 0.2779 | 0.4444 | 0.3420 | 0.8788 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
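A hypothetical usage sketch for this checkpoint with the token-classification pipeline; the aggregation strategy and the example sentence are assumptions rather than anything documented in the card:

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04",
    aggregation_strategy="simple",  # merge subword pieces into whole-word spans
)
print(tagger("An illustrative sentence to tag."))
```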
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04", "results": []}]}
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
correct\_twitter\_RoBERTa\_token\_itr0\_1e-05\_all\_01\_03\_2022-15\_36\_04
===========================================================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2876
* Precision: 0.2345
* Recall: 0.4281
* F1: 0.3030
* Accuracy: 0.8728
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1138
- Precision: 0.5788
- Recall: 0.4712
- F1: 0.5195
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.1316 | 0.0400 | 0.0021 | 0.0040 | 0.9624 |
| No log | 2.0 | 30 | 0.1016 | 0.6466 | 0.4688 | 0.5435 | 0.9767 |
| No log | 3.0 | 45 | 0.0899 | 0.5873 | 0.4625 | 0.5175 | 0.9757 |
| No log | 4.0 | 60 | 0.0849 | 0.5984 | 0.4813 | 0.5335 | 0.9761 |
| No log | 5.0 | 75 | 0.0835 | 0.5984 | 0.4813 | 0.5335 | 0.9761 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
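The Precision/Recall/F1/Accuracy columns above are the usual span- and token-level metrics for token classification. A sketch of how they are commonly computed with `seqeval` — an assumption, since the card does not name the metric implementation — using invented label sequences:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Illustrative gold and predicted label sequences for two sentences;
# the label names are invented for the example.
y_true = [["O", "B-SPAN", "I-SPAN", "O"], ["B-SPAN", "I-SPAN", "O"]]
y_pred = [["O", "B-SPAN", "O", "O"], ["B-SPAN", "I-SPAN", "O"]]

print("precision:", precision_score(y_true, y_pred))  # exact-span match
print("recall:", recall_score(y_true, y_pred))
print("f1:", f1_score(y_true, y_pred))
print("accuracy:", accuracy_score(y_true, y_pred))    # per-token accuracy
```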
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51", "results": []}]}
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
correct\_twitter\_RoBERTa\_token\_itr0\_1e-05\_editorials\_01\_03\_2022-15\_33\_51
==================================================================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1138
* Precision: 0.5788
* Recall: 0.4712
* F1: 0.5195
* Accuracy: 0.9688
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2663
- Precision: 0.3644
- Recall: 0.4985
- F1: 0.4210
- Accuracy: 0.8997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5174 | 0.0120 | 0.0061 | 0.0081 | 0.6997 |
| No log | 2.0 | 22 | 0.4029 | 0.1145 | 0.3098 | 0.1672 | 0.8265 |
| No log | 3.0 | 33 | 0.3604 | 0.2539 | 0.4448 | 0.3233 | 0.8632 |
| No log | 4.0 | 44 | 0.3449 | 0.2992 | 0.4755 | 0.3673 | 0.8704 |
| No log | 5.0 | 55 | 0.3403 | 0.3340 | 0.4816 | 0.3945 | 0.8760 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16", "results": []}]}
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
correct\_twitter\_RoBERTa\_token\_itr0\_1e-05\_essays\_01\_03\_2022-15\_32\_16
==============================================================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2663
* Precision: 0.3644
* Recall: 0.4985
* F1: 0.4210
* Accuracy: 0.8997
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6169
- Precision: 0.0031
- Recall: 0.0357
- F1: 0.0057
- Accuracy: 0.6464
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6339 | 0.0116 | 0.0120 | 0.0118 | 0.6662 |
| No log | 2.0 | 20 | 0.6182 | 0.0064 | 0.0120 | 0.0084 | 0.6688 |
| No log | 3.0 | 30 | 0.6139 | 0.0029 | 0.0241 | 0.0052 | 0.6659 |
| No log | 4.0 | 40 | 0.6172 | 0.0020 | 0.0241 | 0.0037 | 0.6622 |
| No log | 5.0 | 50 | 0.6165 | 0.0019 | 0.0241 | 0.0036 | 0.6599 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39", "results": []}]}
|
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
correct\_twitter\_RoBERTa\_token\_itr0\_1e-05\_webDiscourse\_01\_03\_2022-15\_30\_39
====================================================================================
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6169
* Precision: 0.0031
* Recall: 0.0357
* F1: 0.0057
* Accuracy: 0.6464
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 |
| No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 |
| No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 |
| No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 |
| No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
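The optimizer line above describes the `Trainer` default. A standalone sketch of that Adam-style optimizer with a linear decay schedule follows; the total step count (150 = 30 steps/epoch × 5 epochs) is read off the training-results table, while `AdamW` as the concrete implementation is an assumption:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in for the real model
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8
)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=150
)

lrs = []
for _ in range(150):
    optimizer.step()   # gradients omitted in this sketch
    scheduler.step()
    lrs.append(scheduler.get_last_lr()[0])
print(lrs[0], lrs[-1])  # 1e-4 decaying linearly toward 0
```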
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12", "results": []}]}
|
ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_0.0001\_all\_01\_03\_2022-15\_22\_12
=============================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2811
* Precision: 0.3231
* Recall: 0.5151
* F1: 0.3971
* Accuracy: 0.8913
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1290
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0733 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 2.0 | 30 | 0.0732 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 3.0 | 45 | 0.0731 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 4.0 | 60 | 0.0716 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 5.0 | 75 | 0.0635 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12", "results": []}]}
|
ali2066/distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_0.0001\_editorials\_01\_03\_2022-15\_20\_12
====================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1290
* Precision: 0.0637
* Recall: 0.0080
* F1: 0.0141
* Accuracy: 0.9707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1832
- Precision: 0.6138
- Recall: 0.7169
- F1: 0.6613
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 |
| No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 |
| No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 |
| No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 |
| No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
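Fine-tunes like this one rely on aligning word-level labels with subword tokens before training. A minimal sketch of that alignment step; the sentence and label ids are illustrative assumptions:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # base checkpoint from the card
words = ["This", "claim", "needs", "evidence"]  # illustrative pre-split sentence
word_labels = [0, 1, 0, 0]                      # hypothetical label ids, one per word

enc = tokenizer(words, is_split_into_words=True, truncation=True)
aligned = []
for word_id in enc.word_ids():
    # Special tokens get -100 so the loss ignores them; subword pieces
    # inherit their word's label (one common convention among several).
    aligned.append(-100 if word_id is None else word_labels[word_id])
print(aligned)
```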
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35", "results": []}]}
|
ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_0.0001\_essays\_01\_03\_2022-15\_18\_35
================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1832
* Precision: 0.6138
* Recall: 0.7169
* F1: 0.6613
* Accuracy: 0.9332
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5923
- Precision: 0.0039
- Recall: 0.0212
- F1: 0.0066
- Accuracy: 0.7084
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.6673 | 0.0476 | 0.0128 | 0.0202 | 0.6652 |
| No log | 2.0 | 20 | 0.6211 | 0.0000 | 0.0000 | 0.0000 | 0.6707 |
| No log | 3.0 | 30 | 0.6880 | 0.0038 | 0.0128 | 0.0058 | 0.6703 |
| No log | 4.0 | 40 | 0.6566 | 0.0030 | 0.0128 | 0.0049 | 0.6690 |
| No log | 5.0 | 50 | 0.6036 | 0.0000 | 0.0000 | 0.0000 | 0.6868 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57", "results": []}]}
|
ali2066/distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_0.0001\_webDiscourse\_01\_03\_2022-15\_16\_57
======================================================================
This model is a fine-tuned version of bert-base-uncased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5923
* Precision: 0.0039
* Recall: 0.0212
* F1: 0.0066
* Accuracy: 0.7084
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3121
- Precision: 0.1204
- Recall: 0.2430
- F1: 0.1611
- Accuracy: 0.8538
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4480 | 0.0209 | 0.0223 | 0.0216 | 0.7794 |
| No log | 2.0 | 60 | 0.3521 | 0.0559 | 0.1218 | 0.0767 | 0.8267 |
| No log | 3.0 | 90 | 0.3177 | 0.1208 | 0.2504 | 0.1629 | 0.8487 |
| No log | 4.0 | 120 | 0.3009 | 0.1296 | 0.2607 | 0.1731 | 0.8602 |
| No log | 5.0 | 150 | 0.2988 | 0.1393 | 0.2693 | 0.1836 | 0.8599 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
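A sketch of a `compute_metrics` hook that would report the four columns in the table above when passed to `Trainer`; the label set is hypothetical and `seqeval` as the backing implementation is an assumption:

```python
import numpy as np
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

label_list = ["O", "B-SPAN", "I-SPAN"]  # hypothetical label set

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Keep only positions with a real label (-100 marks ignored tokens).
    y_true = [[label_list[l] for l in row if l != -100] for row in labels]
    y_pred = [
        [label_list[p] for p, l in zip(p_row, l_row) if l != -100]
        for p_row, l_row in zip(preds, labels)
    ]
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "accuracy": accuracy_score(y_true, y_pred),
    }
```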
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04", "results": []}]}
|
ali2066/distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_1e-05\_all\_01\_03\_2022-15\_14\_04
============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3121
* Precision: 0.1204
* Recall: 0.2430
* F1: 0.1611
* Accuracy: 0.8538
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1194
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 15 | 0.0877 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 2.0 | 30 | 0.0806 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 3.0 | 45 | 0.0758 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 4.0 | 60 | 0.0741 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 5.0 | 75 | 0.0741 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47", "results": []}]}
|
ali2066/distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_1e-05\_editorials\_01\_03\_2022-15\_12\_47
===================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1194
* Precision: 0.0637
* Recall: 0.0080
* F1: 0.0141
* Accuracy: 0.9707
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3082
- Precision: 0.2796
- Recall: 0.4373
- F1: 0.3411
- Accuracy: 0.8887
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 11 | 0.5018 | 0.0192 | 0.0060 | 0.0091 | 0.7370 |
| No log | 2.0 | 22 | 0.4066 | 0.1541 | 0.2814 | 0.1992 | 0.8340 |
| No log | 3.0 | 33 | 0.3525 | 0.1768 | 0.3234 | 0.2286 | 0.8612 |
| No log | 4.0 | 44 | 0.3250 | 0.2171 | 0.3503 | 0.2680 | 0.8766 |
| No log | 5.0 | 55 | 0.3160 | 0.2353 | 0.3713 | 0.2880 | 0.8801 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44", "results": []}]}
|
ali2066/distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_1e-05\_essays\_01\_03\_2022-15\_11\_44
===============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3082
* Precision: 0.2796
* Recall: 0.4373
* F1: 0.3411
* Accuracy: 0.8887
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5867
- Precision: 0.0119
- Recall: 0.0116
- F1: 0.0118
- Accuracy: 0.6976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 10 | 0.5730 | 0.0952 | 0.0270 | 0.0421 | 0.7381 |
| No log | 2.0 | 20 | 0.5755 | 0.0213 | 0.0135 | 0.0165 | 0.7388 |
| No log | 3.0 | 30 | 0.5635 | 0.0196 | 0.0135 | 0.0160 | 0.7416 |
| No log | 4.0 | 40 | 0.5549 | 0.0392 | 0.0270 | 0.0320 | 0.7429 |
| No log | 5.0 | 50 | 0.5530 | 0.0357 | 0.0270 | 0.0308 | 0.7438 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
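As a usage illustration (not part of the original card): assuming the checkpoint is published on the Hub under the id below, it can be queried with the `pipeline` API. The example sentence is invented.

```
from transformers import pipeline

token_clf = pipeline(
    "token-classification",
    model="ali2066/distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39",
    aggregation_strategy="simple",  # merge word pieces into whole spans
)
print(token_clf("I think this argument rests on a questionable premise."))
```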
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39", "results": []}]}
|
ali2066/distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilBERT\_token\_itr0\_1e-05\_webDiscourse\_01\_03\_2022-15\_10\_39
=====================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5867
* Precision: 0.0119
* Recall: 0.0116
* F1: 0.0118
* Accuracy: 0.6976
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Precision: 0.3363
- Recall: 0.5110
- F1: 0.4057
- Accuracy: 0.8931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3976 | 0.1405 | 0.3058 | 0.1925 | 0.7921 |
| No log | 2.0 | 60 | 0.3511 | 0.2360 | 0.4038 | 0.2979 | 0.8260 |
| No log | 3.0 | 90 | 0.3595 | 0.1863 | 0.3827 | 0.2506 | 0.8211 |
| No log | 4.0 | 120 | 0.3591 | 0.2144 | 0.4288 | 0.2859 | 0.8299 |
| No log | 5.0 | 150 | 0.3605 | 0.1989 | 0.4212 | 0.2702 | 0.8343 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
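The precision, recall, and F1 figures above are span-level token-classification metrics, conventionally computed with `seqeval`; accuracy is per-token. A minimal sketch with invented label sequences follows (the card does not document the actual tag set, so `B-Claim`/`B-Premise` are illustrative only):

```
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Invented gold/predicted tag sequences purely for illustration.
references = [["O", "B-Claim", "I-Claim", "O", "B-Premise"]]
predictions = [["O", "B-Claim", "O", "O", "B-Premise"]]

print("precision:", precision_score(references, predictions))
print("recall:   ", recall_score(references, predictions))
print("f1:       ", f1_score(references, predictions))
print("accuracy: ", accuracy_score(references, predictions))
```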
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58", "results": []}]}
|
ali2066/distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert\_token\_itr0\_0.0001\_all\_01\_03\_2022-14\_30\_58
=============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2572
* Precision: 0.3363
* Recall: 0.5110
* F1: 0.4057
* Accuracy: 0.8931
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Precision: 0.1412
- Recall: 0.2500
- F1: 0.1805
- Accuracy: 0.8491
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4549 | 0.0228 | 0.0351 | 0.0276 | 0.7734 |
| No log | 2.0 | 60 | 0.3577 | 0.0814 | 0.1260 | 0.0989 | 0.8355 |
| No log | 3.0 | 90 | 0.3116 | 0.1534 | 0.2648 | 0.1943 | 0.8611 |
| No log | 4.0 | 120 | 0.2975 | 0.1792 | 0.2967 | 0.2234 | 0.8690 |
| No log | 5.0 | 150 | 0.2935 | 0.1873 | 0.2998 | 0.2305 | 0.8715 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33", "results": []}]}
|
ali2066/distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilbert\_token\_itr0\_1e-05\_all\_01\_03\_2022-14\_33\_33
============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3255
* Precision: 0.1412
* Recall: 0.2500
* F1: 0.1805
* Accuracy: 0.8491
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-token-argumentative
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1573
- Precision: 0.3777
- Recall: 0.3919
- F1: 0.3847
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.3241 | 0.1109 | 0.2178 | 0.1470 | 0.8488 |
| No log | 2.0 | 150 | 0.3145 | 0.1615 | 0.2462 | 0.1950 | 0.8606 |
| No log | 3.0 | 225 | 0.3035 | 0.1913 | 0.3258 | 0.2411 | 0.8590 |
| No log | 4.0 | 300 | 0.3080 | 0.2199 | 0.3220 | 0.2613 | 0.8612 |
| No log | 5.0 | 375 | 0.3038 | 0.2209 | 0.3277 | 0.2639 | 0.8630 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "finetuned-token-argumentative", "results": []}]}
|
ali2066/finetuned-token-argumentative
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned-token-argumentative
=============================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1573
* Precision: 0.3777
* Recall: 0.3919
* F1: 0.3847
* Accuracy: 0.9497
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #token-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
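As an inference sketch (not from the original card): assuming the checkpoint is available on the Hub under this id, sentence-level predictions can be obtained via `pipeline`.

```
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43",
)
print(clf("This sentence should receive a class label and a score."))
```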
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43", "results": []}]}
|
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_0.0002\_all\_27\_02\_2022-17\_55\_43
===============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7600
* Accuracy: 0.8144
* F1: 0.8788
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17", "results": []}]}
|
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_0.0002\_all\_27\_02\_2022-19\_11\_17
===============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4064
* Accuracy: 0.8289
* F1: 0.8901
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3825
- Accuracy: 0.8144
- F1: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3975 | 0.8122 | 0.8795 |
| No log | 2.0 | 390 | 0.4376 | 0.8085 | 0.8673 |
| 0.3169 | 3.0 | 585 | 0.5736 | 0.8171 | 0.8790 |
| 0.3169 | 4.0 | 780 | 0.8178 | 0.8098 | 0.8754 |
| 0.3169 | 5.0 | 975 | 0.9244 | 0.8073 | 0.8738 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53", "results": []}]}
|
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_0.0002\_all\_27\_02\_2022-22\_30\_53
===============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3825
* Accuracy: 0.8144
* F1: 0.8833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0926
- Accuracy: 0.9772
- F1: 0.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
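A minimal sketch of how the optimizer and linear schedule above are typically wired up outside the `Trainer` (which configures them internally); the 104 steps per epoch come from the results table below, and `AdamW` is assumed here since that is the Trainer default behind the generic "Adam" label.

```
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)  # stand-in; the real model is a DistilBERT classifier
optimizer = torch.optim.AdamW(
    model.parameters(), lr=2e-4, betas=(0.9, 0.999), eps=1e-08
)
num_training_steps = 5 * 104  # num_epochs * steps per epoch
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
```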
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 |
| No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 |
| No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 |
| 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36", "results": []}]}
|
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_0.0002\_editorials\_27\_02\_2022-19\_42\_36
======================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0926
* Accuracy: 0.9772
* F1: 0.9883
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
- Accuracy: 0.8688
- F1: 0.9225
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 |
| No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 |
| No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 |
| No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 |
| No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10", "results": []}]}
|
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_0.0002\_essays\_27\_02\_2022-19\_33\_10
==================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3358
* Accuracy: 0.8688
* F1: 0.9225
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5777
- Accuracy: 0.6794
- F1: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6059 | 0.63 | 0.4932 |
| No log | 2.0 | 96 | 0.6327 | 0.705 | 0.5630 |
| No log | 3.0 | 144 | 0.7003 | 0.695 | 0.5197 |
| No log | 4.0 | 192 | 0.9368 | 0.69 | 0.4655 |
| No log | 5.0 | 240 | 1.1935 | 0.685 | 0.4425 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06", "results": []}]}
|
ali2066/finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_0.0002\_webDiscourse\_27\_02\_2022-19\_25\_06
========================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5777
* Accuracy: 0.6794
* F1: 0.5010
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Accuracy: 0.8138
- F1: 0.8785
- Precision: 0.8489
- Recall: 0.9101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 |
| 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 |
| 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 |
| 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 |
| 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
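For sentence-level classification like this, the four reported metrics are typically computed with scikit-learn. A self-contained sketch with invented binary labels:

```
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1]  # invented gold labels
y_pred = [1, 0, 1, 0, 0, 1]  # invented predictions

print("accuracy: ", accuracy_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
```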
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32", "results": []}]}
|
ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_1e-05\_all\_01\_03\_2022-13\_25\_32
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4787
* Accuracy: 0.8138
* F1: 0.8785
* Precision: 0.8489
* Recall: 0.9101
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8440
- F1: 0.8954
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4302 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3970 | 0.8220 | 0.8875 |
| 0.3703 | 3.0 | 585 | 0.3972 | 0.8402 | 0.8934 |
| 0.3703 | 4.0 | 780 | 0.4945 | 0.8390 | 0.8935 |
| 0.3703 | 5.0 | 975 | 0.5354 | 0.8305 | 0.8898 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
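A hedged loading sketch (assuming the fine-tuned checkpoint is on the Hub under the id below); unlike the `pipeline` shorthand, this exposes the raw logits.

```
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```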
|
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_01\_03\_2022-02\_53\_51
==============================================================
This model is a fine-tuned version of siebert/sentiment-roberta-large-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4563
* Accuracy: 0.8440
* F1: 0.8954
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #roberta #text-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4208
- Accuracy: 0.8283
- F1: 0.8915
- Precision: 0.8487
- Recall: 0.9389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
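Every run in this series fixes `seed: 42`; in a `transformers` script that is usually done with `set_seed`, which seeds Python, NumPy, and PyTorch together (a sketch, not taken from the original script):

```
from transformers import set_seed

set_seed(42)  # makes data shuffling and weight init reproducible across reruns
```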
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 |
| 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 |
| 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 |
| 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 |
| 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_01\_03\_2022-05\_32\_03
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4208
* Accuracy: 0.8283
* F1: 0.8915
* Precision: 0.8487
* Recall: 0.9389
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.8286
- F1: 0.8887
- Precision: 0.8628
- Recall: 0.9162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
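For reference, a minimal sketch of the `TrainingArguments` these values imply is shown below; the output directory is a placeholder, the Adam betas/epsilon and the linear scheduler match the Trainer defaults, and the original script may have set further options:

```python
# Hedged reconstruction of the hyperparameter list above; output_dir is
# a placeholder, not the path used in the original run.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",            # placeholder
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```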
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 |
| 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 |
| 0.2660        | 3.0   | 1170 | 0.4454          | 0.8415   | 0.8947 | 0.8860    | 0.9034 |
| 0.1600        | 4.0   | 1560 | 0.5610          | 0.8427   | 0.8957 | 0.8850    | 0.9067 |
| 0.1600        | 5.0   | 1950 | 0.6180          | 0.8488   | 0.9010 | 0.8799    | 0.9231 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
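A hedged usage sketch for the resulting checkpoint, assuming it is available on the Hub under the repository name given in this card's metadata:

```python
# Minimal inference sketch; assumes the fine-tuned checkpoint was pushed
# to the Hub under this repository name.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55",
)
print(classifier("<your text here>"))
```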
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_01\_03\_2022-13\_11\_55
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6168
* Accuracy: 0.8286
* F1: 0.8887
* Precision: 0.8628
* Recall: 0.9162
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4345
- Accuracy: 0.8321
- F1: 0.8904
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 |
| No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 |
| 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 |
| 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 |
| 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
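Tying the pieces together, a hedged end-to-end sketch of how a fine-tune like this one might be launched; the two-example toy dataset below only demonstrates the wiring and is not the data this model was trained on, and `output_dir` is a placeholder:

```python
# Hypothetical end-to-end sketch; the toy dataset stands in for the
# unpublished training data.
from datasets import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

toy = Dataset.from_dict({"text": ["great sentence", "bad sentence"],
                         "label": [1, 0]})
toy = toy.map(lambda x: tokenizer(x["text"], truncation=True))

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./results",        # placeholder
        learning_rate=2e-05,
        per_device_train_batch_size=64,
        per_device_eval_batch_size=64,
        seed=42,
        num_train_epochs=5,
    ),
    train_dataset=toy,
    eval_dataset=toy,
    tokenizer=tokenizer,
)
trainer.train()
```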
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_26\_02\_2022-03\_57\_45
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4345
* Accuracy: 0.8321
* F1: 0.8904
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5002
- Accuracy: 0.8103
- F1: 0.8764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4178 | 0.7963 | 0.8630 |
| No log | 2.0 | 390 | 0.3935 | 0.8061 | 0.8770 |
| 0.4116 | 3.0 | 585 | 0.4037 | 0.8085 | 0.8735 |
| 0.4116 | 4.0 | 780 | 0.4696 | 0.8146 | 0.8796 |
| 0.4116 | 5.0 | 975 | 0.4849 | 0.8207 | 0.8823 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_27\_02\_2022-17\_27\_47
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5002
* Accuracy: 0.8103
* F1: 0.8764
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8231
- F1: 0.8833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 |
| No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 |
| 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 |
| 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 |
| 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_27\_02\_2022-19\_05\_42
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4917
* Accuracy: 0.8231
* F1: 0.8833
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4638
- Accuracy: 0.8247
- F1: 0.8867
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4069          | 0.7976   | 0.8750 |
| No log | 2.0 | 390 | 0.4061 | 0.8134 | 0.8838 |
| 0.4074 | 3.0 | 585 | 0.4075 | 0.8134 | 0.8798 |
| 0.4074 | 4.0 | 780 | 0.4746 | 0.8256 | 0.8885 |
| 0.4074 | 5.0 | 975 | 0.4881 | 0.8220 | 0.8845 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_all\_27\_02\_2022-22\_25\_09
==============================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4638
* Accuracy: 0.8247
* F1: 0.8867
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0914
- Accuracy: 0.9746
- F1: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 |
| No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 |
| No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 |
| 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_editorials\_27\_02\_2022-19\_38\_42
=====================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0914
* Accuracy: 0.9746
* F1: 0.9870
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3455
- Accuracy: 0.8609
- F1: 0.9156
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 |
| No log        | 2.0   | 162  | 0.4497          | 0.8382   | 0.9000 |
| No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 |
| No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 |
| No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_essays\_27\_02\_2022-19\_30\_22
=================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3455
* Accuracy: 0.8609
* F1: 0.9156
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7224
- Accuracy: 0.6979
- F1: 0.4736
- Precision: 0.5074
- Recall: 0.4440
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 95   | 0.6009          | 0.6500   | 0.2222 | 0.6250    | 0.1351 |
| No log        | 2.0   | 190  | 0.6140          | 0.6750   | 0.3689 | 0.6552    | 0.2568 |
| No log        | 3.0   | 285  | 0.6580          | 0.6700   | 0.4590 | 0.5833    | 0.3784 |
| No log        | 4.0   | 380  | 0.7560          | 0.6650   | 0.4806 | 0.5636    | 0.4189 |
| No log        | 5.0   | 475  | 0.8226          | 0.6650   | 0.4640 | 0.5686    | 0.3919 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_webDiscourse\_01\_03\_2022-13\_17\_55
=======================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7224
* Accuracy: 0.6979
* F1: 0.4736
* Precision: 0.5074
* Recall: 0.4440
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 48   | 0.5835          | 0.7100   | 0.0333 |
| No log        | 2.0   | 96   | 0.5718          | 0.7150   | 0.3871 |
| No log        | 3.0   | 144  | 0.5731          | 0.7150   | 0.4000 |
| No log        | 4.0   | 192  | 0.6009          | 0.7050   | 0.3516 |
| No log        | 5.0   | 240  | 0.6122          | 0.7000   | 0.4000 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_webDiscourse\_27\_02\_2022-18\_51\_55
=======================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6049
* Accuracy: 0.6926
* F1: 0.4160
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
text-classification
|
transformers
|
# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5819
- Accuracy: 0.7058
- F1: 0.4267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 48   | 0.6110          | 0.6650   | 0.0000 |
| No log        | 2.0   | 96   | 0.5706          | 0.6850   | 0.2588 |
| No log        | 3.0   | 144  | 0.5484          | 0.7250   | 0.5299 |
| No log        | 4.0   | 192  | 0.5585          | 0.7100   | 0.4727 |
| No log        | 5.0   | 240  | 0.5616          | 0.7250   | 0.5133 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29", "results": []}]}
|
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
finetuned\_sentence\_itr0\_2e-05\_webDiscourse\_27\_02\_2022-19\_22\_29
=======================================================================
This model is a fine-tuned version of distilbert-base-uncased-finetuned-sst-2-english on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5819
* Accuracy: 0.7058
* F1: 0.4267
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 64
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.1+cu113
* Datasets 1.18.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.1+cu113\n* Datasets 1.18.0\n* Tokenizers 0.10.3"
] |