| Column | Dtype | Min | Max |
|---|---|---|---|
| modelId | string (length) | 5 | 139 |
| author | string (length) | 2 | 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 | 2025-09-01 12:28:49 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (530 classes) | | |
| tags | list (length) | 1 | 4.05k |
| pipeline_tag | string (55 classes) | | |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 | 2025-09-01 12:27:35 |
| card | string (length) | 11 | 1.01M |
Gigworks/ASR_zh_espnet2
Gigworks
2022-01-21T02:58:59Z
0
1
null
[ "region:us" ]
null
2022-03-02T23:29:04Z
<b>Speech-To-Text Chinese Model</b> <br/><br/> Reference: <br/> Model - https://huggingface.co/espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char <br/> Code - https://huggingface.co/spaces/akhaliq/espnet2_asr/blob/main/app.py
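The card only links to the upstream ESPnet model and demo code. A minimal sketch of how that referenced ESPnet2 model is typically loaded for inference might look like the following; the `sample.wav` path and the 16 kHz mono assumption are placeholders, not part of the card, and `espnet`, `espnet_model_zoo`, and `soundfile` are assumed to be installed.

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the upstream WenetSpeech ASR model referenced in the card.
speech2text = Speech2Text.from_pretrained(
    "espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char"
)

speech, rate = soundfile.read("sample.wav")  # placeholder path; 16 kHz mono assumed
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis: (text, tokens, token_ids, hyp)
print(text)
```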
guoqiang/glm
guoqiang
2022-01-21T01:21:46Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
# WudaoSailing WudaoSailing is a package for pretraining Chinese language models and fine-tuning them on downstream tasks. It currently supports GLM, Bert, T5, Cogview and Roberta models. ## Get Started ### Docker Image We provide two docker images based on CUDA 10.2 and CUDA 11.2. You can build images from the docker file [docs/docker/cuda102.dockerfile](docs/docker/cuda102.dockerfile) or pull the pre-built images from Docker Hub and run them with docker v19.03+ ```shell nvidia-docker run -id --hostname=V100 --network=host\ --ipc=host --shm-size=16gb --name=deepspeed-cuda \ -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 \ -v /DATA/disk1/docker/containers/:/data deepspeed/cuda102:lastest ``` or replace `cuda102` with `cuda112`. ```shell docker build -f cuda102.dockerfile -t deepspeed/cuda102 . ``` ### Clone this repo ```shell git clone https://github.com/wangguojim/WudaoSailing.git cd WudaoSailing pip install -r requirements.txt ``` ## GLM We show some examples based on the GLM model. ### Finetune We provide scripts for finetuning GLM on some downstream tasks. #### SuperGLUE - Download the [SuperGlue](https://super.gluebenchmark.com/tasks) data and check the experiment setup in [examples/glm/scripts/ds_finetune_superglue.sh](examples/glm/scripts/ds_finetune_superglue.sh). Note that `DATA_ROOT, CHECKPOINT_PATH, SAVE_PATH` need to be changed to your local paths. You may also change the `batch-size` and `nproc_per_node` according to your available hardware. - Run the following script for the text similarity finetune task (using the AFQMC dataset as an example) ``` cd examples/glm/ bash scripts/ds_finetune_superglue.sh\ config/model_blocklm_large_chinese.sh\ config_tasks/task_afqmc.sh ``` - Run the following script for the text classification finetune task (using the TNews dataset as an example) ``` cd examples/glm/ bash scripts/ds_finetune_superglue.sh\ config/model_blocklm_large_chinese.sh\ config_tasks/task_tnews.sh ``` - Run the following script for the causal inference finetune task (using the COPA dataset as an example) ``` cd examples/glm/ bash scripts/ds_finetune_superglue.sh\ config/model_blocklm_large_chinese.sh\ config_tasks/task_copa.sh ``` - To apply GLM to a new NLU dataset with cloze-filling finetuning, implement a `DataProcessor` in [examples/glm/tasks/superglue/dataset.py](examples/glm/tasks/superglue/dataset.py) for data loading and add a `PVP` in [examples/glm/tasks/superglue/pvp.py](examples/glm/tasks/superglue/pvp.py) for the cloze question. More details can be found [here](examples/glm/tasks/superglue/README.md). #### Blank Filling (Interactive) * Change `CHECKPOINT_PATH` to your local path, then run the following script ``` bash config/generate_block.sh\ config/model_blocklm_large_chinese.sh ``` ##### Example 1 (Entity Prediction) Context: 凯旋门位于意大利米兰市古城堡旁。1807年为纪念[MASK]而建,门高25米,顶上矗立两武士青铜古兵车铸像。 GLM:拿破仑军队攻克米兰城 ##### Example 2 (Sentence Prediction) Context: 工业互联网(Industrial Internet)是新一代信息通信技术与工业经济深度融合的新型基础设施、应用模式和工业生态,通过对人、机、物、系统等的全面连接,构建起覆盖全产业链、全价值链的全新制造和服务体系,为工业乃至产业数字化、网络化、智能化发展提供了实现途径,是第四次工业革命的重要基石。[sMASK]它以网络为基础、平台为中枢、数据为要素、安全为保障,既是工业数字化、网络化、智能化转型的基础设施,也是互联网、大数据、人工智能与实体经济深度融合的应用模式,同时也是一种新业态、新产业,将重塑企业形态、供应链和产业链。当前,工业互联网融合应用向国民经济重点行业广泛拓展,形成平台化设计、智能化制造、网络化协同、个性化定制、服务化延伸、数字化管理六大新模式,赋能、赋智、赋值作用不断显现,有力的促进了实体经济提质、增效、降本、绿色、安全发展。 GLM: 工业互联网是制造业技术、管理、模式的重大变革,是推动互联网、大数据、人工智能和实体经济深度融合的重要载体,是建设制造强国和网络强国的重要基础。 ##### Example 3 (Long Text Generation) Context: 问题:高斯所在的国家有什么汽车品牌?答案:[gMASK] GLM:答案:[gMASK]<|startofpiece|>德国奔驰、德国大众、别克、沃尔沃、斯柯达、本田、雪铁龙. 
### Ptuning Run the following script to integrate p-tuning with GLM: ```shell cd algutils/ptuning/ bash finetune_zy.sh ``` ### Pretrain Run the following script to pre-train the GLM-Large model: ```shell cd examples/glm/ bash scripts/ds_pretrain_nvidia.sh config/ds_block_large.sh ``` The script [examples/glm/config/ds_pretrain_nvidia.sh](examples/glm/config/ds_pretrain_nvidia.sh) launches the training program with DeepSpeed. You should change `NUM_WORKERS` and `NUM_GPUS_PER_WORKER` to the number of workers and the number of GPUs per worker. Also change `HOST_FILE_PATH` to the path to an OpenMPI-style hostfile. More details about the DeepSpeed launcher can be found [here](https://www.deepspeed.ai/getting-started/#resource-configuration-multi-node). The file [examples/glm/config/ds_block_large.sh](examples/glm/config/ds_block_large.sh) defines the hyperparameters for pretraining. Most of the arguments are fairly self-explanatory. Specifically, `--train-data` can be multiple keywords defined in `NAMED_CORPORA` in [data_utils/corpora.py](data_utils/corpora.py). The hyperparameters of the optimizer are defined in the corresponding json file under `config`. The semantics of the json file can be found [here](https://www.deepspeed.ai/docs/config-json). ## Bert We show some examples based on the Bert model. ### Pretrain Run the following script to pre-train the Bert model: ```shell cd examples/bert/ python quick_start.py ``` ## CogView ### Pretrain Run the following script to pre-train the CogView model: ```shell cd examples/cogview/ bash config/pretrain_multiple_nodes.sh ``` ### Inference Run the following script to test text-to-image generation: ```shell cd examples/cogview/ bash config/text2image_cogview.sh ```
kika2000/wav2vec2-large-xls-r-300m-kika10
kika2000
2022-01-21T00:02:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-georgian2-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-georgian2-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.4317 - Wer: 0.4280 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7071 | 4.76 | 400 | 0.6897 | 0.7844 | | 0.2908 | 9.52 | 800 | 0.4630 | 0.5582 | | 0.1392 | 14.29 | 1200 | 0.4501 | 0.5006 | | 0.0977 | 19.05 | 1600 | 0.4593 | 0.4755 | | 0.075 | 23.81 | 2000 | 0.4340 | 0.4401 | | 0.0614 | 28.57 | 2400 | 0.4317 | 0.4280 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
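The usage sections of this card are left as "More information needed". A minimal inference sketch with the `transformers` ASR pipeline could look like the following; the audio path is a placeholder and 16 kHz mono input is assumed.

```python
from transformers import pipeline

# Load the fine-tuned XLS-R checkpoint and transcribe one audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="kika2000/wav2vec2-large-xls-r-300m-kika10",
)
print(asr("sample_16khz.wav")["text"])  # placeholder audio path
```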
huggingtweets/anticarbons
huggingtweets
2022-01-20T22:52:20Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/anticarbons/1642719091326/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1477498953524518912/yvJkd9VL_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ANTICARBON</div> <div style="text-align: center; font-size: 14px;">@anticarbons</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ANTICARBON. | Data | ANTICARBON | | --- | --- | | Tweets downloaded | 2518 | | Retweets | 427 | | Short tweets | 352 | | Tweets kept | 1739 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/s9q99sc5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @anticarbons's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1k8boybi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/anticarbons') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
milyiyo/selectra-small-finetuned-amazon-review
milyiyo
2022-01-20T21:11:57Z
16
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: selectra-small-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.737 - name: F1 type: f1 value: 0.7437773019932409 - name: Precision type: precision value: 0.7524857881639091 - name: Recall type: recall value: 0.737 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # selectra-small-finetuned-amazon-review This model is a fine-tuned version of [Recognai/selectra_small](https://huggingface.co/Recognai/selectra_small) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.6279 - Accuracy: 0.737 - F1: 0.7438 - Precision: 0.7525 - Recall: 0.737 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.5 | 500 | 0.7041 | 0.7178 | 0.6724 | 0.6715 | 0.7178 | | 0.7908 | 1.0 | 1000 | 0.6365 | 0.7356 | 0.7272 | 0.7211 | 0.7356 | | 0.7908 | 1.5 | 1500 | 0.6204 | 0.7376 | 0.7380 | 0.7387 | 0.7376 | | 0.6358 | 2.0 | 2000 | 0.6162 | 0.7386 | 0.7377 | 0.7380 | 0.7386 | | 0.6358 | 2.5 | 2500 | 0.6228 | 0.7274 | 0.7390 | 0.7576 | 0.7274 | | 0.5827 | 3.0 | 3000 | 0.6188 | 0.7378 | 0.7400 | 0.7425 | 0.7378 | | 0.5827 | 3.5 | 3500 | 0.6246 | 0.7374 | 0.7416 | 0.7467 | 0.7374 | | 0.5427 | 4.0 | 4000 | 0.6266 | 0.7446 | 0.7452 | 0.7465 | 0.7446 | | 0.5427 | 4.5 | 4500 | 0.6331 | 0.7392 | 0.7421 | 0.7456 | 0.7392 | | 0.5184 | 5.0 | 5000 | 0.6279 | 0.737 | 0.7438 | 0.7525 | 0.737 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
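Since the card gives no usage snippet, a minimal sketch with the text-classification pipeline is shown below; the Spanish review is an invented example, and the label names returned by the checkpoint are not documented in the card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="milyiyo/selectra-small-finetuned-amazon-review",
)
# Invented Spanish review; the model was fine-tuned on amazon_reviews_multi (es).
print(classifier("Muy buen producto, llegó a tiempo y funciona perfectamente."))
```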
oandreae/financial_sentiment_model
oandreae
2022-01-20T20:00:01Z
4
1
transformers
[ "transformers", "pytorch", "tensorboard", "perceiver", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - recall - accuracy - precision model-index: - name: financial_sentiment_model results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_50agree metrics: - name: Recall type: recall value: 0.8839956357328868 - name: Accuracy type: accuracy value: 0.8804123711340206 - name: Precision type: precision value: 0.8604175202419276 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # financial_sentiment_model This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.3467 - Recall: 0.8840 - Accuracy: 0.8804 - Precision: 0.8604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:| | 0.4481 | 1.0 | 273 | 0.4035 | 0.8526 | 0.8433 | 0.7955 | | 0.4069 | 2.0 | 546 | 0.4478 | 0.8683 | 0.8289 | 0.8123 | | 0.2225 | 3.0 | 819 | 0.3167 | 0.8747 | 0.8680 | 0.8387 | | 0.1245 | 4.0 | 1092 | 0.3467 | 0.8840 | 0.8804 | 0.8604 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
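No usage example is provided in the card. Assuming the checkpoint ships the standard Perceiver tokenizer and sequence-classification head, a sketch might look like the following; the example sentence is a financial_phrasebank-style phrase invented for illustration.

```python
from transformers import pipeline

sentiment = pipeline("text-classification", model="oandreae/financial_sentiment_model")
# Invented example sentence in the style of the financial_phrasebank dataset.
print(sentiment("Operating profit rose to EUR 13.1 mn from EUR 8.7 mn in 2005."))
```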
espnet/akreal_swbd_da_hubert_conformer
espnet
2022-01-20T18:57:49Z
2
0
espnet
[ "espnet", "audio", "automatic-speech-recognition", "en", "dataset:swbd_da", "arxiv:1804.00015", "license:cc-by-4.0", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- tags: - espnet - audio - automatic-speech-recognition language: en datasets: - swbd_da license: cc-by-4.0 --- ## ESPnet2 ASR model ### `akreal/espnet2_swbd_da_hubert_conformer` This model was trained by Pavel Denisov using swbd_da recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 08c6efbc6299c972301236625f9abafe087c9f9c pip install -e . cd egs2/swbd_da/asr1 ./run.sh --skip_data_prep false --skip_train true --download_model espnet/akreal_swbd_da_hubert_conformer ``` <!-- Generated by scripts/utils/show_asr_result.sh --> # RESULTS ## Environments - date: `Thu Jan 20 19:31:21 CET 2022` - python version: `3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]` - espnet version: `espnet 0.10.6a1` - pytorch version: `pytorch 1.10.1+cu113` - Git hash: `08c6efbc6299c972301236625f9abafe087c9f9c` - Commit date: `Tue Jan 4 13:40:33 2022 +0100` ## asr_train_asr_raw_en_word_sp ### WER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|2379|66.3|33.7|0.0|0.0|33.7|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|8116|69.5|30.5|0.0|0.0|30.5|30.5| ### CER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| |decode_asr_asr_model_valid.loss.ave/test_context3|2379|19440|76.1|17.7|6.2|8.1|32.0|33.7| |decode_asr_asr_model_valid.loss.ave/valid_context3|8116|66353|79.5|16.1|4.4|8.0|28.5|30.5| ### TER |dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err| |---|---|---|---|---|---|---|---|---| ## ASR config <details><summary>expand</summary> ``` config: conf/tuning/train_asr_conformer_hubert_context3.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/asr_train_asr_conformer_hubert_context3_raw_en_word_sp ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: null dist_rank: null local_rank: 0 dist_master_addr: null dist_master_port: null dist_launcher: null multiprocessing_distributed: false unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 35 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 7 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: - frontend.upstream num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 4000000 valid_batch_bins: null train_shape_file: - exp/asr_stats_context3_raw_en_word_sp/train/speech_shape - exp/asr_stats_context3_raw_en_word_sp/train/text_shape.word valid_shape_file: - exp/asr_stats_context3_raw_en_word_sp/valid/speech_shape - exp/asr_stats_context3_raw_en_word_sp/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 150 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - 
dump/raw/train_context3_sp/wav.scp - speech - sound - - dump/raw/train_context3_sp/text - text - text valid_data_path_and_name_and_type: - - dump/raw/valid_context3/wav.scp - speech - sound - - dump/raw/valid_context3/text - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0001 scheduler: warmuplr scheduler_conf: warmup_steps: 25000 token_list: - <blank> - <unk> - statement - backchannel - opinion - abandon - agree - yn_q - apprec - 'yes' - uninterp - close - wh_q - acknowledge - 'no' - yn_decl_q - hedge - backchannel_q - sum - quote - affirm - other - directive - repeat - open_q - completion - rhet_q - hold - reject - answer - neg - ans_dispref - repeat_q - open - or - commit - maybe - decl_q - third_pty - self_talk - thank - apology - tag_q - downplay - <sos/eos> init: null input_size: null ctc_conf: dropout_rate: 0.0 ctc_type: builtin reduce: true ignore_nan_grad: true joint_net_conf: null model_conf: ctc_weight: 0.0 extract_feats_in_collect_stats: false use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' frontend: s3prl frontend_conf: frontend_conf: upstream: hubert_large_ll60k download_dir: ./hub multilayer_feature: true fs: 16k specaug: specaug specaug_conf: apply_time_warp: true time_warp_window: 5 time_warp_mode: bicubic apply_freq_mask: true freq_mask_width_range: - 0 - 30 num_freq_mask: 2 apply_time_mask: true time_mask_width_range: - 0 - 40 num_time_mask: 2 normalize: utterance_mvn normalize_conf: {} preencoder: linear preencoder_conf: input_size: 1024 output_size: 80 encoder: conformer encoder_conf: output_size: 512 attention_heads: 8 linear_units: 2048 num_blocks: 12 dropout_rate: 0.1 positional_dropout_rate: 0.1 attention_dropout_rate: 0.1 input_layer: conv2d normalize_before: true macaron_style: true pos_enc_layer_type: rel_pos selfattention_layer_type: rel_selfattn activation_type: swish use_cnn_module: true cnn_module_kernel: 31 postencoder: null postencoder_conf: {} decoder: transformer decoder_conf: attention_heads: 8 linear_units: 2048 num_blocks: 6 dropout_rate: 0.1 positional_dropout_rate: 0.1 self_attention_dropout_rate: 0.1 src_attention_dropout_rate: 0.1 required: - output_dir - token_list version: 0.10.5a1 distributed: false ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
tomwetherell/TOMFINSEN
tomwetherell
2022-01-20T18:19:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "perceiver", "text-classification", "generated_from_trainer", "dataset:financial_phrasebank", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - financial_phrasebank metrics: - recall - accuracy - precision model-index: - name: TOMFINSEN results: - task: name: Text Classification type: text-classification dataset: name: financial_phrasebank type: financial_phrasebank args: sentences_50agree metrics: - name: Recall type: recall value: 0.8985861629736692 - name: Accuracy type: accuracy value: 0.8742268041237113 - name: Precision type: precision value: 0.8509995913451198 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # TOMFINSEN This model is a fine-tuned version of [deepmind/language-perceiver](https://huggingface.co/deepmind/language-perceiver) on the financial_phrasebank dataset. It achieves the following results on the evaluation set: - Loss: 0.3642 - Recall: 0.8986 - Accuracy: 0.8742 - Precision: 0.8510 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: tpu - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Recall | Accuracy | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:|:---------:| | 0.5403 | 1.0 | 273 | 0.4207 | 0.8358 | 0.8619 | 0.8534 | | 0.3939 | 2.0 | 546 | 0.3750 | 0.8943 | 0.8577 | 0.8225 | | 0.1993 | 3.0 | 819 | 0.3113 | 0.8882 | 0.8660 | 0.8367 | | 0.301 | 4.0 | 1092 | 0.3642 | 0.8986 | 0.8742 | 0.8510 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.9.0+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3
ilevs/opus-mt-en-ru-finetuned-en-to-ru
ilevs
2022-01-20T18:18:30Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-en-ru-finetuned-en-to-ru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ru-finetuned-en-to-ru This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7682 - Bleu: 14.6112 - Gen Len: 7.202 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 2.3198 | 1.0 | 4956 | 2.1261 | 9.5339 | 6.7709 | | 1.9732 | 2.0 | 9912 | 1.9639 | 10.4715 | 7.1254 | | 1.7127 | 3.0 | 14868 | 1.8780 | 11.6128 | 7.1106 | | 1.5614 | 4.0 | 19824 | 1.8367 | 12.8389 | 7.0468 | | 1.4276 | 5.0 | 24780 | 1.8040 | 13.7423 | 7.0403 | | 1.3096 | 6.0 | 29736 | 1.7820 | 14.1469 | 7.0555 | | 1.2381 | 7.0 | 34692 | 1.7761 | 13.9987 | 7.2225 | | 1.1784 | 8.0 | 39648 | 1.7725 | 14.4675 | 7.1799 | | 1.1376 | 9.0 | 44604 | 1.7692 | 14.4937 | 7.1957 | | 1.0862 | 10.0 | 49560 | 1.7682 | 14.6112 | 7.202 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
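The card does not show how to run the fine-tuned translation model; a minimal sketch with the translation pipeline follows, where the English sentence is an arbitrary example.

```python
from transformers import pipeline

translator = pipeline("translation", model="ilevs/opus-mt-en-ru-finetuned-en-to-ru")
# Arbitrary English input; the model translates English to Russian.
print(translator("How are you doing today?", max_length=64)[0]["translation_text"])
```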
ucberkeley-dlab/hate-measure-roberta-large
ucberkeley-dlab
2022-01-20T17:57:30Z
7
4
tf-keras
[ "tf-keras", "text-classification", "hate-speech", "counterspeech", "irt", "arxiv:2009.10277", "en", "dataset:ucberkeley-dlab/measuring-hate-speech", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: - en tags: - text-classification - hate-speech - counterspeech - irt - arxiv:2009.10277 datasets: - ucberkeley-dlab/measuring-hate-speech --- # Measuring hate speech: RoBERTa-Large This model predicts a continuous hate speech score as described in Kennedy et al. (2020). ## Citation ``` @article{kennedy2020constructing, title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application}, author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia}, journal={arXiv preprint arXiv:2009.10277}, year={2020} } ``` ## References Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277.
nntadotzip/bert-base-cased-IUChatbot-ontologyDts
nntadotzip
2022-01-20T16:21:21Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-cased-IUChatbot-ontologyDts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased-IUChatbot-ontologyDts This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2446 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 382 | 0.2686 | | 0.3946 | 2.0 | 764 | 0.2535 | | 0.2577 | 3.0 | 1146 | 0.2446 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
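The card's usage sections are empty; a minimal question-answering pipeline sketch is given below. The question and context are invented placeholders, since the training data is not described in the card.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="nntadotzip/bert-base-cased-IUChatbot-ontologyDts")
result = qa(
    question="Which ontology does the chatbot use?",  # invented example question
    context="The IU chatbot answers questions using a department ontology of courses and services.",
)
print(result["answer"], round(result["score"], 3))
```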
radhakri119/wav2vec2-base-timit-demo-colab
radhakri119
2022-01-20T16:09:09Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4780 - Wer: 0.3403 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.5299 | 4.0 | 500 | 1.5195 | 0.9991 | | 0.6229 | 8.0 | 1000 | 0.4447 | 0.4282 | | 0.2136 | 12.0 | 1500 | 0.4154 | 0.3764 | | 0.1196 | 16.0 | 2000 | 0.4394 | 0.3597 | | 0.0834 | 20.0 | 2500 | 0.4891 | 0.3619 | | 0.0591 | 24.0 | 3000 | 0.4535 | 0.3439 | | 0.0448 | 28.0 | 3500 | 0.4780 | 0.3403 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
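As a lower-level alternative to the pipeline, a sketch of greedy CTC decoding with the processor and model classes is shown below; the audio path is a placeholder, 16 kHz mono input is assumed, and the repo is assumed to include the processor files.

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "radhakri119/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, rate = sf.read("timit_sample.wav")  # placeholder path; 16 kHz mono assumed
inputs = processor(speech, sampling_rate=rate, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)       # greedy CTC decoding
print(processor.batch_decode(pred_ids)[0])
```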
milyiyo/distilbert-base-uncased-finetuned-amazon-review
milyiyo
2022-01-20T15:14:48Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: distilbert-base-uncased-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.693 - name: F1 type: f1 value: 0.7002653469272611 - name: Precision type: precision value: 0.709541681233075 - name: Recall type: recall value: 0.693 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-amazon-review This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.3494 - Accuracy: 0.693 - F1: 0.7003 - Precision: 0.7095 - Recall: 0.693 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.5 | 500 | 0.8287 | 0.7104 | 0.7120 | 0.7152 | 0.7104 | | 0.4238 | 1.0 | 1000 | 0.8917 | 0.7094 | 0.6989 | 0.6917 | 0.7094 | | 0.4238 | 1.5 | 1500 | 0.9367 | 0.6884 | 0.6983 | 0.7151 | 0.6884 | | 0.3152 | 2.0 | 2000 | 0.9845 | 0.7116 | 0.7144 | 0.7176 | 0.7116 | | 0.3152 | 2.5 | 2500 | 1.0752 | 0.6814 | 0.6968 | 0.7232 | 0.6814 | | 0.2454 | 3.0 | 3000 | 1.1215 | 0.6918 | 0.6954 | 0.7068 | 0.6918 | | 0.2454 | 3.5 | 3500 | 1.2905 | 0.6976 | 0.7048 | 0.7138 | 0.6976 | | 0.1989 | 4.0 | 4000 | 1.2938 | 0.694 | 0.7016 | 0.7113 | 0.694 | | 0.1989 | 4.5 | 4500 | 1.3623 | 0.6972 | 0.7014 | 0.7062 | 0.6972 | | 0.1746 | 5.0 | 5000 | 1.3494 | 0.693 | 0.7003 | 0.7095 | 0.693 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/aevaeavaevevave
huggingtweets
2022-01-20T15:13:33Z
6
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/aevaeavaevevave/1642691608974/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1471448753353670660/T0h3zXn-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">aeva</div> <div style="text-align: center; font-size: 14px;">@aevaeavaevevave</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from aeva. | Data | aeva | | --- | --- | | Tweets downloaded | 3184 | | Retweets | 985 | | Short tweets | 659 | | Tweets kept | 1540 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g4kejp0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @aevaeavaevevave's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3ikuw0pg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3ikuw0pg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/aevaeavaevevave') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
pitehu/T5_NER_CONLL_LIST
pitehu
2022-01-20T14:32:20Z
12
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "Named Entity Recognition", "en", "dataset:wmt19", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - Named Entity Recognition license: apache-2.0 datasets: - wmt19 metrics: - bleu - sacrebleu inference: parameters: max_length: 1024 ---
g30rv17ys/avhubert
g30rv17ys
2022-01-20T13:07:45Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
https://dl.fbaipublicfiles.com/avhubert/model/lrs3_vox/vsr/base_vox_433h.pt
dehio/german-qg-t5-e2e-quad
dehio
2022-01-20T09:40:47Z
5
3
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit widget: - text: "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge zwei seltene Kurzschnäuzige Seepferdchen entdeckt. Die Tiere seien vergangene Woche bei einer sogenannten Spülsaumkontrolle entdeckt worden, bei der die Strände eigentlich nach Müll und toten Vögeln abgesucht würden, sagte der Geschäftsführer der zuständigen Naturschutz- und Forschungsgemeinschaft Mellumrat, Mathias Heckroth. Dabei seien den Naturschützern am Nordstrand kurz hintereinander die beiden leblosen, nur wenige Zentimeter großen Tiere aufgefallen. Experten der Nationalparkverwaltung bestimmten beide Tiere als Kurzschnäuzige Seepferdchen (Hippocampus hippocampus)." inference: parameters: max_length: 128 language: - de tags: - question generation datasets: - deepset/germanquad model-index: - name: german-qg-t5-e2e-quad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # german-qg-t5-e2e-quad (Work in progress) This model is a end-to-end question generation model in German. Given a text, it generates several questions about it. This model is a fine-tuned version of [valhalla/t5-base-e2e-qg](https://huggingface.co/valhalla/t5-base-e2e-qg) on the [GermanQuAD dataset from deepset](https://huggingface.co/datasets/deepset/germanquad). ## Model description More information needed ## Training and evaluation data Bleu_1: 0.196051 Bleu_2: 0.122380 Bleu_3: 0.079980 Bleu_4: 0.053672 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
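The card describes end-to-end German question generation but gives no code. A sketch with the text2text-generation pipeline follows; whether a task prefix or separator is required is not stated in the card, so raw text is passed as in the card's widget, and the input sentence is shortened from that widget example.

```python
from transformers import pipeline

qg = pipeline("text2text-generation", model="dehio/german-qg-t5-e2e-quad")
text = (
    "Naturschutzwarte haben auf der ostfriesischen Insel Wangerooge "
    "zwei seltene Kurzschnäuzige Seepferdchen entdeckt."
)
# The card's widget passes raw text, so no task prefix is added here.
print(qg(text, max_length=128)[0]["generated_text"])
```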
hrdipto/wav2vec2-xls-r-tf-left-right-shuru
hrdipto
2022-01-20T08:48:17Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-tf-left-right-shuru results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-shuru This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0921 - Wer: 1.2628 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.5528 | 23.81 | 500 | 0.5509 | 1.9487 | | 0.2926 | 47.62 | 1000 | 0.1306 | 1.2756 | | 0.1171 | 71.43 | 1500 | 0.1189 | 1.2628 | | 0.0681 | 95.24 | 2000 | 0.0921 | 1.2628 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
huggingtweets/chickenhalf
huggingtweets
2022-01-20T07:52:22Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/chickenhalf/1642665052826/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1482989404125806596/JtLgKHTu_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">chicken sandwich</div> <div style="text-align: center; font-size: 14px;">@chickenhalf</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from chicken sandwich. | Data | chicken sandwich | | --- | --- | | Tweets downloaded | 3202 | | Retweets | 126 | | Short tweets | 427 | | Tweets kept | 2649 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3r0cwhle/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chickenhalf's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1zvaxh71/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/chickenhalf') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
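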
LiqiangXiao/summarization
LiqiangXiao
2022-01-20T05:01:36Z
5
4
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:04Z
## Copy-or-Rewrite This repository contains the code of the paper "Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning". It provides a model built for human-like summarization and trained with actor-critic reinforcement learning. This work significantly improved ROUGE scores on the CNN/DM dataset by 1.7 and improved the informativeness and readability of generated summaries. It implements a more human-like workflow for summarization that addresses the information loss problem. It contains a novel hierarchical Transformer module to represent an article at both the word and sentence level, and a new reinforcement learning method that can effectively train the two-step model. ## Model description Copy-or-Rewrite is a model to improve the workflow of summarization models. Existing methods that adopt an extract-then-abstract strategy have achieved impressive results, yet they suffer from information loss in the abstraction step because they compress all the selected sentences without distinction. Especially when a whole sentence is summary-worthy, salient content would be lost by compression. To address this problem, we propose HYSUM, a hybrid framework for summarization that can flexibly switch between copying a sentence and rewriting a sentence according to the degree of redundancy. In this way, our approach can effectively combine the advantages of the two branches of summarization, juggling informativeness and conciseness. Moreover, based on hierarchical reinforcement learning, we propose an end-to-end reinforcing method to bridge together the extraction module and the rewriting module, which can enhance the cooperation between them. Automatic evaluation shows that our approach significantly outperforms the state of the art on the CNN/DailyMail corpus. Human evaluation also demonstrates that our generated summaries are more informative and concise than those of popular models. ## Intended uses & limitations With this repository, you can generate informative and concise summaries for input articles. For other tasks, you may use the hierarchical representation module to effectively represent the article. The parameters of the model are pre-trained on the CNN/DM dataset. You may need to fine-tune it on your own dataset when needed. ## How to use from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization") model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization") ## Training data This model used the non-anonymized version of the CNN/Daily Mail dataset. ## BibTeX entry and citation info @inproceedings{DBLP:conf/aaai/XiaoWHJ20, author = {Liqiang Xiao and Lu Wang and Hao He and Yaohui Jin}, title = {Copy or Rewrite: Hybrid Summarization with Hierarchical Reinforcement Learning}, booktitle = {The Thirty-Fourth {AAAI} Conference on Artificial Intelligence, {AAAI} 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, {IAAI} 2020, The Tenth {AAAI} Symposium on Educational Advances in Artificial Intelligence, {EAAI} 2020, New York, NY, USA, February 7-12, 2020}, pages = {9306--9313}, publisher = {{AAAI} Press}, year = {2020}, url = {https://aaai.org/ojs/index.php/AAAI/article/view/6470}, timestamp = {Tue, 02 Feb 2021 08:00:14 +0100}, biburl = {https://dblp.org/rec/conf/aaai/XiaoWHJ20.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} }
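The "How to use" section above only loads the tokenizer and model. A sketch that continues to actual summary generation is shown below; the article text is a placeholder, and the decoding settings (beam count and length limits) are assumptions typical for CNN/DM-style summarizers, not values taken from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("LiqiangXiao/summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("LiqiangXiao/summarization")

article = "Replace this with a CNN/DailyMail-style news article."  # placeholder input
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    **inputs,
    num_beams=4,          # decoding settings are assumptions, not from the card
    max_length=142,
    min_length=56,
    length_penalty=2.0,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```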
abdelkader/distilbert-base-uncased-finetuned-clinc
abdelkader
2022-01-20T04:59:36Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9174193548387096 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7713 - Accuracy: 0.9174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2831 | 0.7426 | | 3.785 | 2.0 | 636 | 1.8739 | 0.8335 | | 3.785 | 3.0 | 954 | 1.1525 | 0.8926 | | 1.6894 | 4.0 | 1272 | 0.8569 | 0.91 | | 0.897 | 5.0 | 1590 | 0.7713 | 0.9174 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
ethzanalytics/ai-msgbot-gpt2-XL
ethzanalytics
2022-01-20T01:40:42Z
9
1
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "gpt", "en", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: - en tags: - text-generation - gpt2 - gpt license: mit datasets: - natural questions widget: - text: "Do you like my new haircut?\nperson beta:\n\n" example_title: "haircut" - text: "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n" example_title: "teaching" - text: "What's your favorite animal? Mine is the dog? \nperson beta:\n\n" example_title: "favorite" - text: "how much does it cost?\nperson beta:\n\n" example_title: "money" inference: parameters: min_length: 2 max_length: 64 length_penalty: 0.6 no_repeat_ngram_size: 3 do_sample: True top_p: 0.85 top_k: 10 repetition_penalty: 2.1 --- # ai-msgbot GPT2-XL _NOTE: model card is WIP_ GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`. Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it). ## conversation data The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses. `script_speaker_name` = `person alpha` `script_responder_name` = `person beta` ## examples - the default inference API examples should work _okay_ - an ideal test would be explicitly adding `person beta` into the prompt text the model is forced to respond to instead of adding onto the entered prompt. ### Example prompt: ``` do you like to eat beans? person beta: ``` ### Resulting output ``` do you like to eat beans?person beta: yes, i like fried beans. person alpha: i wonder when the first beans were cultivated and how they were processed. person beta: nitrogenic bacteria (in ``` _Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_ ## citations ``` @inproceedings{dinan2019wizard, author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston}, title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents}, booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)}, year={2019}, } @inproceedings{li-etal-2017-dailydialog, title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset", author = "Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi", booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = nov, year = "2017", address = "Taipei, Taiwan", publisher = "Asian Federation of Natural Language Processing", url = "https://aclanthology.org/I17-1099", pages = "986--995", abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}", } ```
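The card documents a prompt convention (speakers `person alpha` and `person beta`) and a set of inference parameters, but no runnable snippet. A sketch that mirrors those parameters and the card's example prompt follows; stripping the prompt from the output is one simple way to isolate the reply and is not prescribed by the card.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="ethzanalytics/ai-msgbot-gpt2-XL")

# Prompt format from the card: end the prompt with "person beta:" so the model replies as beta.
prompt = "do you like to eat beans?\nperson beta:\n\n"
out = generator(
    prompt,
    max_length=64,            # sampling settings mirror the card's inference parameters
    do_sample=True,
    top_p=0.85,
    top_k=10,
    no_repeat_ngram_size=3,
    repetition_penalty=2.1,
)
print(out[0]["generated_text"][len(prompt):])  # strip the prompt to isolate the reply
```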
UBC-NLP/ARBERT
UBC-NLP
2022-01-19T20:10:55Z
540
5
transformers
[ "transformers", "pytorch", "tf", "jax", "bert", "fill-mask", "Arabic BERT", "MSA", "Twitter", "Masked Langauge Model", "ar", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- language: - ar tags: - Arabic BERT - MSA - Twitter - Masked Langauge Model widget: - text: "اللغة العربية هي لغة [MASK]." --- <img src="https://raw.githubusercontent.com/UBC-NLP/marbert/main/ARBERT_MARBERT.jpg" alt="drawing" width="30%" height="30%" align="right"/> **ARBERT** is one of three models described in our **ACL 2021 paper** **["ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic"](https://mageed.arts.ubc.ca/files/2020/12/marbert_arxiv_2020.pdf)**. ARBERT is a large-scale pre-trained masked language model focused on Modern Standard Arabic (MSA). To train ARBERT, we use the same architecture as BERT-base: 12 attention layers, each with 12 attention heads and 768 hidden dimensions, a vocabulary of 100K WordPieces, making ∼163M parameters. We train ARBERT on a collection of Arabic datasets comprising **61GB of text** (**6.2B tokens**). For more information, please visit our own GitHub [repo](https://github.com/UBC-NLP/marbert). # BibTex If you use our models (ARBERT, MARBERT, or MARBERTv2) for your scientific publication, or if you find the resources in this repository useful, please cite our paper as follows (to be updated): ```bibtex @inproceedings{abdul-mageed-etal-2021-arbert, title = "{ARBERT} {\&} {MARBERT}: Deep Bidirectional Transformers for {A}rabic", author = "Abdul-Mageed, Muhammad and Elmadany, AbdelRahim and Nagoudi, El Moatez Billah", booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.acl-long.551", doi = "10.18653/v1/2021.acl-long.551", pages = "7088--7105", abstract = "Pre-trained language models (LMs) are currently integral to many natural language processing systems. Although multilingual LMs were also introduced to serve many languages, these have limitations such as being costly at inference time and the size and diversity of non-English data involved in their pre-training. We remedy these issues for a collection of diverse Arabic varieties by introducing two powerful deep bidirectional transformer-based models, ARBERT and MARBERT. To evaluate our models, we also introduce ARLUE, a new benchmark for multi-dialectal Arabic language understanding evaluation. ARLUE is built using 42 datasets targeting six different task clusters, allowing us to offer a series of standardized experiments under rich conditions. When fine-tuned on ARLUE, our models collectively achieve new state-of-the-art results across the majority of tasks (37 out of 48 classification tasks, on the 42 datasets). Our best model acquires the highest ARLUE score (77.40) across all six task clusters, outperforming all other models including XLM-R Large ( 3.4x larger size). Our models are publicly available at https://github.com/UBC-NLP/marbert and ARLUE will be released through the same repository.", } ``` ## Acknowledgments We gratefully acknowledge support from the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada, Canadian Foundation for Innovation, [ComputeCanada](www.computecanada.ca) and [UBC ARC-Sockeye](https://doi.org/10.14288/SOCKEYE). We also thank the [Google TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc) program for providing us with free TPU access.
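A minimal masked-language-modeling sketch with the 🤗 `transformers` fill-mask pipeline (the example sentence is the widget text above):

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="UBC-NLP/ARBERT")

# Same sentence as the widget example above.
for prediction in fill_mask("اللغة العربية هي لغة [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```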
hrdipto/wav2vec2-xls-r-tf-left-right-trainer
hrdipto
2022-01-19T20:06:38Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-xls-r-tf-left-right-trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-tf-left-right-trainer This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0090 - eval_wer: 0.0037 - eval_runtime: 11.2686 - eval_samples_per_second: 71.703 - eval_steps_per_second: 8.963 - epoch: 21.05 - step: 4000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
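A minimal transcription sketch, assuming the repository ships the usual Wav2Vec2 processor files and that the input audio is a 16 kHz mono recording (`command.wav` is a hypothetical file):

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "hrdipto/wav2vec2-xls-r-tf-left-right-trainer"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes tokenizer/feature-extractor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = sf.read("command.wav")  # hypothetical 16 kHz mono recording
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```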
vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt
vuiseng9
2022-01-19T19:13:40Z
5
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-pruneofa-90pc-bt```](https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes: 1. magnitude sparsification at 0% upon initialization. Custom reverse masking and sparsity freezing are applied. 2. NNCF Quantize-Aware Training - Symmetric 8-bit for both weight and activation on all learnable layers. 3. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad``` ``` eval_exact_match = 80.6623 eval_f1 = 87.7147 eval_samples = 10784 ``` # Setup ```bash # OpenVINO/NNCF git clone https://github.com/vuiseng9/nncf && cd nncf git checkout tld-poc git reset --hard 5647610d5ee2bf9f1324604e6579bca1c391e260 python setup.py develop pip install -r examples/torch/requirements.txt # Huggingface nn_pruning git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning git checkout reproduce-evaluation git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446 pip install -e ".[dev]" # Huggingface Transformers git clone https://github.com/vuiseng9/transformers && cd transformers git checkout tld-poc git reset --hard 5dd7402e9a316041dea4ff67508c01047323616e pip install -e . head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {} # Additional dependencies pip install onnx ``` # Train ```bash wget https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt/raw/main/nncf_bert_squad_sparsity.json NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise OUTROOT=/path/to/train_output_root #to-revise WORKDIR=transformers/examples/pytorch/question-answering #to-revise RUNID=bert-base-squadv1-pruneofa-90pc-bt-qat-lt cd $WORKDIR OUTDIR=$OUTROOT/$RUNID mkdir -p $OUTDIR export CUDA_VISIBLE_DEVICES=0 NEPOCH=5 python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \ --pruneofa_qat \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 250 \ --learning_rate 3e-5 \ --lr_scheduler_type cosine_with_restarts \ --warmup_ratio 0.25 \ --cosine_cycles 1 \ --teacher bert-large-uncased-whole-word-masking-finetuned-squad \ --teacher_ratio 0.9 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size 128 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 250 \ --nncf_config $NNCF_CFG \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR ``` # Eval This repo must be cloned locally. ```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt MODELROOT=/path/to/cloned_repo_above #to-revise export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt-qat-lt WORKDIR=transformers/examples/pytorch/question-answering #to-revise cd $WORKDIR mkdir $OUTDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \ --dataset_name squad \ --qat_checkpoint $MODELROOT/checkpoint-22000 \ --nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \ --to_onnx $OUTDIR/bert-base-squadv1-pruneofa-90pc-bt-qat-lt.onnx \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ```
vuiseng9/bert-base-squadv1
vuiseng9
2022-01-19T19:03:57Z
5
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "question-answering", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
This model is a fork of [```csarron/bert-base-uncased-squad-v1```](https://huggingface.co/csarron/bert-base-uncased-squad-v1). ``` eval_exact_match = 80.9082 eval_f1 = 88.2275 eval_samples = 10784 ``` # Eval ```bash export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1 WORKDIR=transformers/examples/pytorch/question-answering cd $WORKDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1 \ --dataset_name squad \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ```
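Beyond the evaluation script above, a minimal extractive-QA sketch with the 🤗 pipeline API (the question and context below are illustrative):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="vuiseng9/bert-base-squadv1")

result = qa(
    question="Where do water droplets collide with ice crystals to form precipitation?",
    context="Precipitation forms as smaller droplets coalesce via collision with other "
            "rain drops or ice crystals within a cloud.",
)
print(result["answer"], round(result["score"], 3))
```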
huggingtweets/t_zahil
huggingtweets
2022-01-19T16:50:12Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1374040164180299791/ACw4G3nZ_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Thomas Sanlis 🌱</div> <div style="text-align: center; font-size: 14px;">@t_zahil</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Thomas Sanlis 🌱. | Data | Thomas Sanlis 🌱 | | --- | --- | | Tweets downloaded | 3242 | | Retweets | 597 | | Short tweets | 312 | | Tweets kept | 2333 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/33umauvo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @t_zahil's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3fhm3dlx) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3fhm3dlx/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/t_zahil') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
dehio/german-qg-t5-drink600
dehio
2022-01-19T16:38:22Z
7
1
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit widget: - text: "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert." language: - de tags: - question generation datasets: - deepset/germanquad model-index: - name: german-qg-t5-drink600 results: [] --- # german-qg-t5-drink600 This model is fine-tuned for question generation in German. The expected answer must be highlighted with a &lt;hl> token. It is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad) and further pre-trained on drink-related questions. ## Task example #### Input generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, die sowohl &lt;hl>im Sommer wie auch zu Silvester&lt;hl> funktioniert. #### Expected Question Zu welchen Gelegenheiten passt der Monk Sour gut? ## Model description The model is based on [german-qg-t5-quad](https://huggingface.co/dehio/german-qg-t5-quad), which was pre-trained on [GermanQUAD](https://www.deepset.ai/germanquad). We further pre-trained it on questions annotated on drink recipes from [Mixology](https://mixology.eu/) ("drink600"). We have not yet open sourced the dataset, since we do not own copyright on the source material. ## Training and evaluation data The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg). ## Evaluation It achieves a **BLEU-4 score of 29.80** on the drink600 test set (n=120) and **11.30** on the GermanQUAD test set. Thus, fine-tuning on drink600 did not affect performance on GermanQUAD. In comparison, *german-qg-t5-quad* achieves a BLEU-4 score of **10.76** on the drink600 test set. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 100 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
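A minimal generation sketch for the input format shown above (the decoding settings here are illustrative, not the ones used for the reported scores):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "dehio/german-qg-t5-drink600"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The expected answer is wrapped in <hl> ... <hl> tokens.
text = (
    "generate question: Der Monk Sour Drink ist ein somit eine aromatische Überraschung, "
    "die sowohl <hl>im Sommer wie auch zu Silvester<hl> funktioniert."
)
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64, num_beams=4)  # illustrative decoding settings
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```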
dehio/german-qg-t5-quad
dehio
2022-01-19T16:36:25Z
12
5
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "question generation", "de", "dataset:deepset/germanquad", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: mit widget: - text: "Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl>britischen Common Laws<hl> sind, setzt sich das amerikanische Recht bedeutend davon ab." language: - de tags: - question generation datasets: - deepset/germanquad model-index: - name: german-qg-t5-quad results: [] --- # german-qg-t5-quad This model is fine-tuned for question generation in German. The expected answer must be highlighted with a &lt;hl> token. ## Task example #### Input generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth Erben des <hl> britischen Common Laws <hl> sind, setzt sich das amerikanische Recht bedeutend davon ab. Dies rührt größtenteils von dem langen Zeitraum her, [...] #### Expected output Von welchem Gesetzt stammt das Amerikanische ab? ## Model description This model is a fine-tuned version of [valhalla/t5-base-qg-hl](https://huggingface.co/valhalla/t5-base-qg-hl) on the [GermanQUAD](https://www.deepset.ai/germanquad) dataset. ## Training and evaluation data The training script can be accessed [here](https://github.com/d-e-h-i-o/german-qg). ### Evaluation The model achieves a BLEU-4 score of **11.30** on the GermanQUAD test set (n=2204). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 100 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
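A minimal sketch using the `text2text-generation` pipeline with the highlighted-answer input format described above (generation settings are illustrative):

```python
from transformers import pipeline

question_generator = pipeline("text2text-generation", model="dehio/german-qg-t5-quad")

text = (
    "generate question: Obwohl die Vereinigten Staaten wie auch viele Staaten des Commonwealth "
    "Erben des <hl>britischen Common Laws<hl> sind, setzt sich das amerikanische Recht bedeutend davon ab."
)
print(question_generator(text, max_length=64)[0]["generated_text"])
```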
DanL/scientific-challenges-and-directions
DanL
2022-01-19T12:47:22Z
315
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:DanL/scientific-challenges-and-directions-dataset", "arxiv:2108.13751", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
--- tags: - generated_from_trainer - text-classification language: - en datasets: - DanL/scientific-challenges-and-directions-dataset widget: - text: "severe atypical cases of pneumonia emerged and quickly spread worldwide." example_title: "challenge" - text: "we speculate that studying IL-6 will be beneficial." example_title: "direction" - text: "in future studies, both PRRs should be tested as the cause for multiple deaths." example_title: "both" - text: "IbMADS1-transformed potatoes exhibited tuber morphogenesis in the fibrous roots." example_title: "neither" metrics: - precision - recall - f1 model-index: - name: scientific-challenges-and-directions results: [] --- # scientific-challenges-and-directions We present a novel resource to help scientists and medical professionals discover challenges and potential directions across scientific literature, focusing on a broad corpus pertaining to the COVID-19 pandemic and related historical research. At a high level, the _challenges_ and _directions_ are defined as follows: * **Challenge**: A sentence mentioning a problem, difficulty, flaw, limitation, failure, lack of clarity, or knowledge gap. * **Research direction**: A sentence mentioning suggestions or needs for further research, hypotheses, speculations, indications or hints that an issue is worthy of exploration. * This model here is described in our paper: [A Search Engine for Discovery of Scientific Challenges and Directions](https://arxiv.org/abs/2108.13751) (though we've upgraded the infrastructure since the paper was released - there are slight differences in the results). * Our dataset can be found [here](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset). * Please cite our paper if you use our datasets or models in your project. See the [BibTeX](#citation). * Feel free to [email us](#contact-us). * Also, check out [our search engine](https://challenges.apps.allenai.org/), as an example application. ## Model description This model is a fine-tuned version of [PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) on the [scientific-challenges-and-directions-dataset](https://huggingface.co/datasets/DanL/scientific-challenges-and-directions-dataset), designed for multi-label text classification. ## Training and evaluation data The scientific-challenges-and-directions model is trained based on a dataset that is a collection of 2894 sentences and their surrounding contexts, from 1786 full-text papers in the CORD-19 corpus, labeled for classification of challenges and directions by expert annotators with biomedical and bioNLP backgrounds. For full details on the train/test/split of the data see section 3.1 in our [paper](https://arxiv.org/abs/2108.13751) ## Example notebook We include an example notebook that uses the model for inference in our [repo](https://github.com/Dan-La/scientific-challenges-and-directions). See `Inference_Notebook.ipynb`. A training notebook is also included. 
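For a quick inline check without the notebook, a minimal multi-label inference sketch is shown below; it assumes the standard 🤗 `transformers` sequence-classification head and reads the label names from the model config (apply a sigmoid per label, not a softmax across labels).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DanL/scientific-challenges-and-directions"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "we speculate that studying IL-6 will be beneficial."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probabilities = torch.sigmoid(logits)[0]  # one independent probability per label
for index, probability in enumerate(probabilities):
    print(model.config.id2label[index], round(probability.item(), 3))
```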
## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning rate: 2e-05 - train batch size: 8 - eval batch size: 4 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr scheduler type: linear - lr scheduler warmup steps: 500 - num epochs: 30 ### Training results The model achieves the following results on the test set: - Precision Challenge: 0.768719 - Recall Challenge: 0.780405 - F1 Challenge: 0.774518 - Precision Direction: 0.758112 - Recall Direction: 0.774096 - F1 Direction: 0.766021 - Precision (micro avg. on both labels): 0.764894 - Recall (micro avg. on both labels): 0.778139 - F1 (micro avg. on both labels): 0.771459 ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3 ## Citation If using our dataset and models, please cite: ``` @misc{lahav2021search, title={A Search Engine for Discovery of Scientific Challenges and Directions}, author={Dan Lahav and Jon Saad Falcon and Bailey Kuehl and Sophie Johnson and Sravanthi Parasa and Noam Shomron and Duen Horng Chau and Diyi Yang and Eric Horvitz and Daniel S. Weld and Tom Hope}, year={2021}, eprint={2108.13751}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## Contact us Please don't hesitate to reach out. **Email:** `[email protected]`,`[email protected]`.
AT/distilroberta-base-finetuned-wikitext2
AT
2022-01-19T08:22:36Z
19
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-finetuned-wikitext2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-finetuned-wikitext2 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 80.0 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/wmascen
huggingtweets
2022-01-19T04:52:23Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/wmascen/1642567908765/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1453179488569802752/LsB82o0-_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">wihrel</div> <div style="text-align: center; font-size: 14px;">@wmascen</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from wihrel. | Data | wihrel | | --- | --- | | Tweets downloaded | 2900 | | Retweets | 203 | | Short tweets | 236 | | Tweets kept | 2461 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bsbw98xm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @wmascen's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3pwlitks/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/wmascen') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/godslovepariah
huggingtweets
2022-01-19T04:12:22Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/godslovepariah/1642565537762/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1432780406777020417/XTrp9MCR_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">LOVER//PARIAH</div> <div style="text-align: center; font-size: 14px;">@godslovepariah</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from LOVER//PARIAH. | Data | LOVER//PARIAH | | --- | --- | | Tweets downloaded | 525 | | Retweets | 9 | | Short tweets | 10 | | Tweets kept | 506 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/6l5fj9xw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @godslovepariah's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3v0x5r1a/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/godslovepariah') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
domdomreloaded/bert-base-uncased-finetuned-swag
domdomreloaded
2022-01-18T22:33:47Z
11
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "multiple-choice", "generated_from_trainer", "dataset:swag", "license:apache-2.0", "endpoints_compatible", "region:us" ]
multiple-choice
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag metrics: - accuracy model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. It achieves the following results on the evaluation set: - Loss: 0.6045 - Accuracy: 0.7960 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7494 | 1.0 | 4597 | 0.5942 | 0.7716 | | 0.3499 | 2.0 | 9194 | 0.6045 | 0.7960 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
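A minimal multiple-choice inference sketch with 🤗 `transformers` (the context and candidate endings below are made-up examples; SWAG-style inputs pair one context with several endings):

```python
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "domdomreloaded/bert-base-uncased-finetuned-swag"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

context = "She opens the fridge and takes out a carton of eggs."
endings = [
    "She cracks two eggs into a pan.",
    "She mails the carton to her accountant.",
]

# Encode each (context, ending) pair, then stack them into one example of shape (1, num_choices, seq_len).
encoding = tokenizer([context] * len(endings), endings, return_tensors="pt", padding=True)
inputs = {key: value.unsqueeze(0) for key, value in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

print("Most plausible ending:", endings[int(torch.argmax(logits, dim=-1))])
```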
milyiyo/electra-base-gen-finetuned-amazon-review
milyiyo
2022-01-18T21:21:53Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: electra-base-gen-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.5024 - name: F1 type: f1 value: 0.5063190059782597 - name: Precision type: precision value: 0.5121183330982292 - name: Recall type: recall value: 0.5024 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-base-gen-finetuned-amazon-review This model is a fine-tuned version of [mrm8488/electricidad-base-generator](https://huggingface.co/mrm8488/electricidad-base-generator) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.8030 - Accuracy: 0.5024 - F1: 0.5063 - Precision: 0.5121 - Recall: 0.5024 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall | |:-------------:|:-----:|:----:|:--------:|:------:|:---------------:|:---------:|:------:| | 0.5135 | 1.0 | 1000 | 0.4886 | 0.4929 | 1.6580 | 0.5077 | 0.4886 | | 0.4138 | 2.0 | 2000 | 0.5044 | 0.5093 | 1.7951 | 0.5183 | 0.5044 | | 0.4244 | 3.0 | 3000 | 0.5022 | 0.5068 | 1.8108 | 0.5141 | 0.5022 | | 0.4231 | 6.0 | 6000 | 0.4972 | 0.5018 | 1.7636 | 0.5092 | 0.4972 | | 0.3574 | 7.0 | 7000 | 0.5024 | 0.5063 | 1.8030 | 0.5121 | 0.5024 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
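A minimal classification sketch with the 🤗 pipeline API; the card does not document the label mapping, so the returned `LABEL_*` ids have to be interpreted via `model.config.id2label` (the Spanish review below is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="milyiyo/electra-base-gen-finetuned-amazon-review",
)

# Illustrative Spanish review; map the returned label id via model.config.id2label.
print(classifier("El producto llegó rápido, pero la calidad es bastante mala."))
```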
malloc/OpenNMT-py-German-English-2-layer-BiLSTM
malloc
2022-01-18T20:22:23Z
0
0
null
[ "translation", "pytorch", "de", "en", "license:mit", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - de - en tags: - translation - pytorch license: mit datasets: - IWSLT ‘14 DE-EN metrics: - bleu --- # OpenNMT-py-German-English-2-layer-BiLSTM [OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework. OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for German to English translation. - Configuration: 2-layer BiLSTM with hidden size 500 trained for 20 epochs - Data: IWSLT ‘14 DE-EN - BLEU: 30.33
malloc/OpenNMT-py-English-German-Transformer
malloc
2022-01-18T20:18:11Z
0
2
null
[ "translation", "pytorch", "de", "en", "dataset:WMT", "license:mit", "region:us" ]
translation
2022-03-02T23:29:05Z
--- language: - de - en tags: - translation - pytorch license: mit datasets: - WMT metrics: - bleu --- # OpenNMT-py-English-German-Transformer [OpenNMT-py](https://github.com/OpenNMT/OpenNMT-py) is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation framework. OpenNMT has several [pretrained models](https://opennmt.net/Models-py/). This one is trained particularly for English to German translation. - Configuration: Base Transformer configuration with [standard training options](http://opennmt.net/OpenNMT-py/FAQ.html#how-do-i-use-the-transformer-model-do-you-support-multi-gpu) - Data: WMT with shared SentencePiece model - BLEU: - newstest2014 = 26.89 - newstest2017 = 28.09
Supiri/t5-base-conversation
Supiri
2022-01-18T17:56:42Z
33
20
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "NLP", "ChatBot", "Game AI", "en", "dataset:cornell_movie_dialog", "license:gpl-3.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en datasets: - cornell_movie_dialog license: gpl-3.0 tags: - NLP - ChatBot - Game AI metrics: - rouge widget: - text: "personality: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody.</s> inquiry: What's your name?" example_title: "Talk to Hinata" - text: "personality: Voldemort is a raging psychopath, devoid of the normal human responses to other people's suffering. He has no conscience, feels no remorse or empathy, and does not recognize the worth and humanity of anybody except himself.</s> inquiry: What's your name?" example_title: "Talk to Voldemort" inference: parameters: num_beams: 6 diversity_penalty: 2.5 num_beam_groups: 2 --- # FreeIsland AI With the advancement of the graphical processing power of computers and sophisticated algorithms like [Nanite](https://docs.unrealengine.com/5.0/en-US/RenderingFeatures/Nanite/), simulating lifelike scenery in real time has never been easier. About a month ago, Epic Games [showed off](https://www.youtube.com/watch?v=WU0gvPcc3jQ) the capabilities of their newest game engine by simulating an entire city, including population, traffic, and weather, running on a PlayStation 5. That made me think about what is missing from that simulation and how I could use my skills to improve it. One of the main components separating our world from the simulated world is people, and more importantly, the interactivity of people in simulated worlds. Last year a game called Cyberpunk was released, and it had an option to [talk to any person](https://www.youtube.com/watch?v=Z1OtYGzUoSo) in its city, but the problem was that all the responses from the non-player characters (NPCs) are hardcoded, which greatly reduces the immersion of the game. So the goal of this project is to experiment with how advances in Natural Language Processing can make NPCs in video games interactive and enhance immersion. # Usage ```py from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Supiri/t5-base-conversation") trained_model = AutoModelForSeq2SeqLM.from_pretrained("Supiri/t5-base-conversation") prompt = "What's your name?" context = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody." input_ids = tokenizer(f"personality: {context}", f"inquiry: {prompt}", return_tensors='pt').input_ids outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=2.5, num_beam_groups=2) print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True)) # Answer: My name is Hinata ``` # Evaluation ## Test 1 For this test, I sampled an input from the test dataset. For this question the actual response is > "It works a little." But the model's response was > "I don't want to flirt with you." This reflects its bio, which was filled in by GPT-3: > "He stands primarily to gain self-esteem, which he often receives through the submission of others" In short, Dr.
Greenbaum tried to tease Sebastian about his seductive traits but this model's go-to response was to shut her down since the biography of Sebastian states he often tries to assert his dominance over others. ```py prompt = dataset['test'][66]['request'] contexts = dataset['test'][66]['bio'] input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2) print("Input to the Model") print("Bio:\t",contexts) print("\nPrompt:\t", prompt) print("\nGround truth response") print("\t", dataset['test'][66]['response']) print("\nModel's Prediction") print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True)) ``` ```txt Input to the Model Bio: Sebastian is a very extreme representation of the trope of the "Confidence Man", and acts it out to a degree that is sometimes comedic but mostly frightening. He stands primarily to gain self-esteem, which he often receives through the submission of others or solely through his own perceptions. An artful seducer, his incredible charisma is both his greatest weapon and most intoxicating weakness. Prompt: You think you can come in here with that cute little smirk on your face and try and flirt with me. It doesn't work, Sebastian. Ground truth response It works a little. Model's Prediction Answer: I don't want to flirt with you. ``` ### Test 2 Hinata is a kind-hearted girl from the anime series Naruto. I took her bio from [personality database](https://www.personality-database.com/profile/2790/hinata-hyga-naruto-shippden-mbti-personality-type) and ask a few questions about her. Off the top, you can see the model understands the context since when I asked the model, "**What's your name?**" it responded with the name given with the context. Also, notice when prompted the same question differently (**"Who are you?"**), it still manages to answer it well. ```py prompts = ["What's your name?", "How are you feeling?", "Do you like Star Wars?", "Who are you?", "Coffee or tea?"] contexts = "Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody." print("Bio:\t",contexts, "\n") for prompt in prompts: input_ids = tokenizer(f"personality: {contexts}", f"inquiry: {prompt}", return_tensors='pt').input_ids outputs = trained_model.generate(input_ids, num_beams=6, diversity_penalty=5.0, num_beam_groups=2) print("Prompt:\t", prompt) print("Answer:\t", tokenizer.decode(outputs[0], skip_special_tokens=True), "\n") ``` ```txt Bio: Hinata was soft-spoken and polite, always addressing people with proper honorifics. She is kind, always thinking of others more than for herself, caring for their feelings and well-being. She doesn't like being confrontational for any reason. This led to her being meek or timid to others, as her overwhelming kindness can render her unable to respond or act for fear of offending somebody. Prompt: What's your name? Answer: My name is Hinata Prompt: How are you feeling? Answer: I'm fine. Prompt: Do you like Star Wars? Answer: No, I don't. Prompt: Who are you? Answer: My name is Hinata Prompt: Coffee or tea? Answer: No, I don't drink much. 
``` # Conclusion After training the `t5-base` model for 5 epochs, the model started to adapt to the dataset, but there is a lot more room for improvement. 1. During dataset creation I had to limit the dataset to 200 unique characters out of the 9,035 present in the dataset due to **budget constraints**. If I manage to cover at least half of the dataset, this model will come up with far better responses. 2. Both input size and batch size were severely constrained by the lack of GPU memory. A batch size of 64, in contrast to 8, would bring massive improvements in both training time and **generalization of the model**. 3. Using a bigger model like `t5-large` or `t5-3b` will certainly improve the performance. 4. One of the main downsides of using this pre-trained model is that it was also trained on German, French, and Romanian, which consumed a chunk of the **vocabulary size and trainable parameters**. Retraining the model from scratch would help reduce both the needed parameter count and the training loss for this specific task.
vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt
vuiseng9
2022-01-18T17:45:15Z
1
0
transformers
[ "transformers", "pytorch", "onnx", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimization includes: 1. magnitude sparsification at 57.92% upon initialization so that sparsity over all linear layers of bert-base is at 90%. Parameters are ranked globally via their absolute norm. Only linear layers of self-attention and ffnn are targeted. 2. Custom distillation with large model ```bert-large-uncased-whole-word-masking-finetuned-squad``` ``` eval_exact_match = 80.4447 eval_f1 = 87.7678 eval_samples = 10784 ``` # Setup ```bash # OpenVINO/NNCF git clone https://github.com/vuiseng9/nncf && cd nncf git checkout tld-poc git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2 python setup.py develop pip install -r examples/torch/requirements.txt # Huggingface nn_pruning git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning git checkout reproduce-evaluation git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446 pip install -e ".[dev]" # Huggingface Transformers git clone https://github.com/vuiseng9/transformers && cd transformers git checkout tld-poc git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5 pip install -e . head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {} # Additional dependencies pip install onnx ``` # Train ```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt BASE_MODEL=/path/to/cloned_repo_above #to-revise wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt/raw/main/nncf_bert_squad_sparsity.json NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise OUTROOT=/path/to/train_output_root #to-revise WORKDIR=transformers/examples/pytorch/question-answering #to-revise RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt cd $WORKDIR OUTDIR=$OUTROOT/$RUNID mkdir -p $OUTDIR export CUDA_VISIBLE_DEVICES=0 NEPOCH=5 python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \ --optimize_model_before_eval \ --optimized_checkpoint $BASE_MODEL \ --dataset_name squad \ --do_eval \ --do_train \ --evaluation_strategy steps \ --eval_steps 250 \ --learning_rate 3e-5 \ --lr_scheduler_type cosine_with_restarts \ --warmup_ratio 0.25 \ --cosine_cycles 1 \ --teacher bert-large-uncased-whole-word-masking-finetuned-squad \ --teacher_ratio 0.9 \ --num_train_epochs $NEPOCH \ --per_device_eval_batch_size 128 \ --per_device_train_batch_size 16 \ --max_seq_length 384 \ --doc_stride 128 \ --save_steps 250 \ --nncf_config $NNCF_CFG \ --logging_steps 1 \ --overwrite_output_dir \ --run_name $RUNID \ --output_dir $OUTDIR ``` # Eval This repo must be cloned locally.
```bash git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt MODELROOT=/path/to/cloned_repo_above #to-revise export CUDA_VISIBLE_DEVICES=0 OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt WORKDIR=transformers/examples/pytorch/question-answering #to-revise cd $WORKDIR mkdir $OUTDIR nohup python run_qa.py \ --model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \ --dataset_name squad \ --optimize_model_before_eval \ --qat_checkpoint $MODELROOT/checkpoint-20000 \ --nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \ --to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-nncf-57.92sparse-lt.onnx \ --do_eval \ --per_device_eval_batch_size 128 \ --max_seq_length 384 \ --doc_stride 128 \ --overwrite_output_dir \ --output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log & ```
huggingtweets/collision
huggingtweets
2022-01-18T17:17:28Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/collision/1642526243846/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2464132281/jbbxl9p7ratdyuposrif_400x400.jpeg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">John Collison</div> <div style="text-align: center; font-size: 14px;">@collision</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from John Collison. | Data | John Collison | | --- | --- | | Tweets downloaded | 3222 | | Retweets | 999 | | Short tweets | 206 | | Tweets kept | 2017 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2ifqwdbm/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @collision's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2gdto8z3) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2gdto8z3/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/collision') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
tal-yifat/injury-report-test
tal-yifat
2022-01-18T16:24:00Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: injury-report-test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # injury-report-test This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5697 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.8158 | 1.0 | 6633 | 1.7368 | | 1.6984 | 2.0 | 13266 | 1.6198 | | 1.6209 | 3.0 | 19899 | 1.5800 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
akozlo/conserv_fulltext_1_18_22
akozlo
2022-01-18T13:42:59Z
5
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer model-index: - name: conserv_fulltext_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # conserv_fulltext_model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1+cu102 - Datasets 1.17.0 - Tokenizers 0.10.3 unbalanced_texts gpt2
soskok1288/Sas
soskok1288
2022-01-18T11:54:46Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
export enum PipelineType { "text-generation"}
NbAiLab/roberta_des_ada_128_6e4
NbAiLab
2022-01-18T10:45:01Z
8
0
transformers
[ "transformers", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
Just for performing some experiments. Do not use.
hkunlp/T5_large_prefix_all_tasks_2upsample2
hkunlp
2022-01-18T07:15:22Z
4
2
transformers
[ "transformers", "pytorch", "t5", "text-generation-inference", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
This is the checkpoint of the prefix-tuning model we trained on 21 tasks using an upsampling temperature of 2. Note: the prefix module is large because we keep the re-parameterization weights instead of compressing them, which keeps the checkpoint closer to the original and easier for researchers to extend.
philschmid/tf-distilbart-cnn-12-6-tradetheevent
philschmid
2022-01-18T05:02:13Z
5
0
transformers
[ "transformers", "tf", "tensorboard", "bart", "text2text-generation", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: philschmid/tf-distilbart-cnn-12-6-tradetheevent results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # philschmid/tf-distilbart-cnn-12-6-tradetheevent This model is a fine-tuned version of [philschmid/tf-distilbart-cnn-12-6](https://huggingface.co/philschmid/tf-distilbart-cnn-12-6) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6894 - Validation Loss: 1.7245 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 161440, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.6635 | 1.5957 | 0 | | 1.3144 | 1.5577 | 1 | | 1.0819 | 1.6059 | 2 | | 0.8702 | 1.6695 | 3 | | 0.6894 | 1.7245 | 4 | ### Framework versions - Transformers 4.16.0.dev0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
dmiller1/distilbert-base-uncased-finetuned-emotion
dmiller1
2022-01-18T03:59:30Z
5
0
transformers
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261144741040841 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2161 - Accuracy: 0.926 - F1: 0.9261 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8436 | 1.0 | 250 | 0.3175 | 0.9105 | 0.9081 | | 0.2492 | 2.0 | 500 | 0.2161 | 0.926 | 0.9261 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.7.1 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/dankogai-hirox246-syakkin_dama
huggingtweets
2022-01-18T02:01:17Z
0
0
null
[ "huggingtweets", "en", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/dankogai-hirox246-syakkin_dama/1642471272927/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1190142566831984640/o4kO2hp-_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1283621672541536259/WI_8OTJz_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ひろゆき, Hiroyuki Nishimura & Dan Kogai & 借金玉</div> <div style="text-align: center; font-size: 14px;">@dankogai-hirox246-syakkin_dama</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ひろゆき, Hiroyuki Nishimura & Dan Kogai & 借金玉. | Data | ひろゆき, Hiroyuki Nishimura | Dan Kogai | 借金玉 | | --- | --- | --- | --- | | Tweets downloaded | 3249 | 3250 | 3249 | | Retweets | 283 | 341 | 260 | | Short tweets | 1819 | 2313 | 2918 | | Tweets kept | 1147 | 596 | 71 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1meoqt2b/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dankogai-hirox246-syakkin_dama's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1gc1ic0l) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1gc1ic0l/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/dankogai-hirox246-syakkin_dama') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
jkang/drawing-artist-classifier
jkang
2022-01-18T01:19:28Z
5
1
tf-keras
[ "tf-keras", "en", "license:mit", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: en license: mit datasets: - web crawled (coming soon) --- # Simple CNN-based Artist Classifier This repo contains a simple CNN-based Keras model which classifies images into one of 10 selected artists/painters. - The purpose of this model was for a quick prototyping - Data has been web-crawled using `https://github.com/YoongiKim/AutoCrawler` - 10 popular artists/painters were chosen: - \[ARTIST\]: \[ID\] - claude_monet: 0, - henri_matisse: 1, - jean_michel_basquiat: 2, - keith_haring: 3, - pablo_picasso: 4, - pierre_augste_renoir: 5, - rene_magritte: 6, - roy_richtenstein: 7, - vincent_van_gogh: 8, - wassily_kandinsky: 9 - About 100 representative paintings per artist were crawled and manually checked - Dataset will be shared later # How to use ```python import tensorflow as tf from huggingface_hub import from_pretrained_keras model = from_pretrained_keras("jkang/drawing-artist-classifier") image_file = 'monet.jpg' img = tf.io.read_file(image_file) img = tf.io.decode_jpeg(img, channels=3) last_layer_activation, predictions = model(img[tf.newaxis,...]) ``` # Intended uses & limitations You can use this model freely for predicting artists or trends of a given image. Please keep in mind that this model is not intended for production, but for research and quick prototyping. Web-crawled image data might not have a balanced amount of drawings that sufficiently represent the artists. --- - 2022-01-18 first created by jaekoo kang
Huertas97/en_roberta_base_leetspeak_ner
Huertas97
2022-01-17T21:54:01Z
5
1
spacy
[ "spacy", "token-classification", "en", "license:apache-2.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - spacy - token-classification language: - en license: apache-2.0 widget: - text: "But one other thing that we have to re;think is the way that we dy£ our #c!l.o|th?£+s." example_title: "Word camouflage detection" model-index: - name: en_roberta_base_leetspeak_ner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.7966001851 - name: NER Recall type: recall value: 0.8619559279 - name: NER F Score type: f_score value: 0.8279903783 --- | Feature | Description | | --- | --- | | **Name** | `en_roberta_base_leetspeak_ner` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [roberta-base](https://huggingface.co/roberta-base) pre-trained model on English language using a masked language modeling (MLM) objective by Yinhan Liu et al. <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER) app where this model is in production for countering information disorders| | **License** | Apache 2.0 | | **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 82.80 | | `ENTS_P` | 79.66 | | `ENTS_R` | 86.20 | | `TRANSFORMER_LOSS` | 177808.42 | | `NER_LOSS` | 608427.31 |
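The card above does not include a usage snippet. The following is a minimal sketch of how a spaCy pipeline like this is typically loaded, assuming the package `en_roberta_base_leetspeak_ner` has already been installed locally (for example from the wheel distributed with the model); the example sentence is taken from the card's widget.

```python
import spacy

# Assumes the pipeline package has been installed locally beforehand
# (hypothetical install step, e.g. from the model's wheel file).
nlp = spacy.load("en_roberta_base_leetspeak_ner")

doc = nlp("But one other thing that we have to re;think is the way that we dy£ our #c!l.o|th?£+s.")
for ent in doc.ents:
    # Expected labels per the card: INV_CAMO, LEETSPEAK, MIX, PUNCT_CAMO
    print(ent.text, ent.label_)
```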
DSI/human-directed-sentiment
DSI
2022-01-17T14:20:52Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:04Z
**Human-Directed Sentiment Analysis in Arabic** A supervised training procedure to classify the human-directed sentiment in a text. We define the human-directed sentiment as the polarity of one user towards a second person who is involved with them in a discussion.
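Since the card gives no usage example, here is a minimal sketch using the Transformers pipeline API. It assumes the repository exposes a standard text-classification head; the label names are not documented in the card, and the Arabic input is only a placeholder.

```python
from transformers import pipeline

# Hedged sketch; label names/meanings are not documented in the model card.
classifier = pipeline("text-classification", model="DSI/human-directed-sentiment")
print(classifier("أنت دائما تساعدني وتدعمني، شكرا لك"))  # placeholder Arabic input
```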
nielsr/tapex-large-finetuned-tabfact
nielsr
2022-01-17T13:39:28Z
5
0
transformers
[ "transformers", "pytorch", "bart", "text-classification", "tapex", "en", "dataset:tab_fact", "arxiv:2107.07653", "license:apache-2.0", "autotrain_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: en tags: - tapex license: apache-2.0 datasets: - tab_fact inference: false --- TAPEX-large model fine-tuned on TabFact. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining). To load it and run inference, you can do the following: ``` from transformers import BartTokenizer, BartForSequenceClassification import pandas as pd tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-tabfact") model = BartForSequenceClassification.from_pretrained("nielsr/tapex-large-finetuned-tabfact") # create table data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) # turn into dict table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]} # turn into the format TAPEX expects # IndexedRowTableLinearize is not part of Transformers; copy it from: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py linearizer = IndexedRowTableLinearize() linear_table = linearizer.process_table(table_dict) # add sentence sentence = "George Clooney has 69 movies" joint_input = sentence + " " + linear_table # encode encoding = tokenizer(joint_input, return_tensors="pt") # forward pass outputs = model(**encoding) # print prediction logits = outputs.logits print(logits.argmax(-1)) ```
groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline
groadabike
2022-01-17T12:53:22Z
11
1
asteroid
[ "asteroid", "pytorch", "audio", "ConvTasNet", "audio-to-audio", "license:cc-by-sa-4.0", "region:us" ]
audio-to-audio
2022-03-02T23:29:05Z
--- tags: - asteroid - audio - ConvTasNet - audio-to-audio datasets: - DAMP-VSEP - Singing/Accompaniment Separation license: cc-by-sa-4.0 --- ## Description: This model was trained by Gerardo Roa using the dampvsep recipe in Asteroid. It was trained on the `singing/accompaniment` task of the `DAMP-VSEP` dataset. ## Training config: ```yaml data: channels: 1 emb_model: 'no' metadata_path: metadata mixture: remix root_path: /fastdata/acp13gr/DAMP/DAMP-VSEP sample_rate: 16000 train_set: english_nonenglish filterbank: kernel_size: 20 n_filters: 256 stride: 10 main_args: exp_dir: exp/train_convtasnet_remix-no-0.0-english_nonenglish-0.0005-jade help: null masknet: bn_chan: 256 conv_kernel_size: 3 hid_chan: 512 mask_act: relu n_blocks: 10 n_repeats: 4 n_src: 2 norm_type: gLN skip_chan: 256 optim: lr: 0.0005 optimizer: adam weight_decay: 0.0 positional arguments: {} training: batch_size: 7 early_stop: true epochs: 50 half_lr: true loss_alpha: 0.0 num_workers: 10 ``` ## Results: ```yaml "si_sdr": 15.111802516750586, "si_sdr_imp": 15.178209807687663, "si_sdr_s0": 12.160261214703553, "si_sdr_s0_imp": 17.434593619085675, "si_sdr_s1": 18.063343818797623, "si_sdr_s1_imp": 12.92182599628965, "sdr": 15.959722569460281, "sdr_imp": 14.927002467087567, "sdr_s0": 13.270412028426595, "sdr_s0_imp": 16.45867572657551, "sdr_s1": 18.64903311049397, "sdr_s1_imp": 13.39532920759962, "sir": 23.935932341084754, "sir_imp": 22.903212238712012, "sir_s0": 22.30777879911744, "sir_s0_imp": 25.49604249726635, "sir_s1": 25.56408588305207, "sir_s1_imp": 20.310381980157665, "sar": 17.174899162445882, "sar_imp": -134.47377304178818, "sar_s0": 14.268071153965913, "sar_s0_imp": -137.38060105026818, "sar_s1": 20.081727170925856, "sar_s1_imp": -131.56694503330817, "stoi": 0.7746496376326059, "stoi_imp": 0.19613735629114643, "stoi_s0": 0.6611376621212413, "stoi_s0_imp": 0.21162695175464794, "stoi_s1": 0.8881616131439705, "stoi_s1_imp": 0.1806477608276449 ``` ## License notice: ** This is important, please fill it, if you need help, you can ask on Asteroid's slack.** This work "ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline" is a derivative of [DAMP-VSEP corpus](https://zenodo.org/record/3553059) by [Smule, Inc](https://www.smule.com/), used under [Restricted License](https://zenodo.org/record/3553059)(Research only). "ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Gerardo Roa.
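The card lists the training configuration and metrics but no inference snippet. The sketch below shows how an Asteroid checkpoint of this kind is commonly used, assuming the installed Asteroid version can load models directly from the Hub via `from_pretrained`; the file name `mixture.wav` is a placeholder.

```python
import torch
import soundfile as sf
from asteroid.models import ConvTasNet

# Assumed to work through Asteroid's Hugging Face Hub integration.
model = ConvTasNet.from_pretrained("groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline")

# Load a 16 kHz mono mixture (placeholder file name).
mixture, sr = sf.read("mixture.wav", dtype="float32")
assert sr == 16000, "the model was trained on 16 kHz audio"

with torch.no_grad():
    # (batch, time) -> (batch, n_src, time); n_src = 2 (vocals, accompaniment)
    est_sources = model(torch.from_numpy(mixture).unsqueeze(0))
print(est_sources.shape)
```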
addy88/t5-grammar-correction
addy88
2022-01-17T12:09:14Z
109
2
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
### How to use Here is how to use this model in PyTorch: ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("addy88/t5-grammar-correction") model = AutoModelForSeq2SeqLM.from_pretrained("addy88/t5-grammar-correction") input_ids = tokenizer('grammar: This sentences has has bads grammar.', return_tensors='pt').input_ids outputs = model.generate(input_ids) print(tokenizer.decode(outputs[0], skip_special_tokens=True)) ```
philschmid/tf-distilbart-cnn-12-6
philschmid
2022-01-17T08:39:52Z
28
0
transformers
[ "transformers", "tf", "bart", "text2text-generation", "summarization", "en", "dataset:cnn_dailymail", "dataset:xsum", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
summarization
2022-03-02T23:29:05Z
--- language: en tags: - summarization license: apache-2.0 datasets: - cnn_dailymail - xsum thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png --- # This is an Tensorflow fork of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) ### Usage This checkpoint should be loaded into `BartForConditionalGeneration.from_pretrained`. See the [BART docs](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration) for more information. ### Metrics for DistilBART models | Model Name | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L | |:---------------------------|------------:|----------------------:|----------:|----------:|----------:| | distilbart-xsum-12-1 | 222 | 90 | 2.54 | 18.31 | 33.37 | | distilbart-xsum-6-6 | 230 | 132 | 1.73 | 20.92 | 35.73 | | distilbart-xsum-12-3 | 255 | 106 | 2.16 | 21.37 | 36.39 | | distilbart-xsum-9-6 | 268 | 136 | 1.68 | 21.72 | 36.61 | | bart-large-xsum (baseline) | 406 | 229 | 1 | 21.85 | 36.50 | | distilbart-xsum-12-6 | 306 | 137 | 1.68 | 22.12 | 36.99 | | bart-large-cnn (baseline) | 406 | 381 | 1 | 21.06 | 30.63 | | distilbart-12-3-cnn | 255 | 214 | 1.78 | 20.57 | 30.00 | | distilbart-12-6-cnn | 306 | 307 | 1.24 | 21.26 | 30.59 | | distilbart-6-6-cnn | 230 | 182 | 2.09 | 20.17 | 29.70 |
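As a TensorFlow fork, the checkpoint is expected to load into the TF model classes rather than the PyTorch ones. The following is a minimal summarization sketch under that assumption; the input text is a placeholder.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("philschmid/tf-distilbart-cnn-12-6")
model = TFAutoModelForSeq2SeqLM.from_pretrained("philschmid/tf-distilbart-cnn-12-6")

article = "The quick brown fox jumped over the lazy dog. " * 20  # placeholder article
inputs = tokenizer(article, return_tensors="tf", truncation=True, max_length=1024)
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```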
sahri/indonesiasentiment
sahri
2022-01-17T04:50:03Z
19
0
transformers
[ "transformers", "pytorch", "tf", "roberta", "text-classification", "indonesian-roberta-base-sentiment-classifier", "id", "dataset:indonlu", "arxiv:1907.11692", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: id tags: - indonesian-roberta-base-sentiment-classifier license: mit datasets: - indonlu widget: - text: "tidak jelek tapi keren" --- ## Indonesian RoBERTa Base Sentiment Classifier Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews. After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%. Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless. ## Model | Model | #params | Arch. | Training/Validation data (text) | | ---------------------------------------------- | ------- | ------------ | ------------------------------- | | `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` | ## Evaluation Results The model was trained for 5 epochs and the best model was loaded at the end. | Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall | | ----- | ------------- | --------------- | -------- | -------- | --------- | -------- | | 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 | | 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 | | 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 | | 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 | | 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 | ## How to Use ### As Text Classifier ```python from transformers import pipeline pretrained_name = "sahri/sentiment" nlp = pipeline( "sentiment-analysis", model=pretrained_name, tokenizer=pretrained_name ) nlp("tidak jelek tapi keren") ``` ## Disclaimer Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model. ## Author Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by [sahri ramadhan] All computation and development are done on Google Colaboratory using their free GPU access.
milyiyo/multi-minilm-finetuned-amazon-review
milyiyo
2022-01-16T22:53:05Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:amazon_reviews_multi", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy - f1 - precision - recall model-index: - name: multi-minilm-finetuned-amazon-review results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.5422 - name: F1 type: f1 value: 0.543454465221178 - name: Precision type: precision value: 0.5452336215624385 - name: Recall type: recall value: 0.5422 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multi-minilm-finetuned-amazon-review This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.2436 - Accuracy: 0.5422 - F1: 0.5435 - Precision: 0.5452 - Recall: 0.5422 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 1.0049 | 1.0 | 2500 | 1.0616 | 0.5352 | 0.5268 | 0.5347 | 0.5352 | | 0.9172 | 2.0 | 5000 | 1.0763 | 0.5432 | 0.5412 | 0.5444 | 0.5432 | | 0.8285 | 3.0 | 7500 | 1.1077 | 0.5408 | 0.5428 | 0.5494 | 0.5408 | | 0.7361 | 4.0 | 10000 | 1.1743 | 0.5342 | 0.5399 | 0.5531 | 0.5342 | | 0.6538 | 5.0 | 12500 | 1.2436 | 0.5422 | 0.5435 | 0.5452 | 0.5422 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
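The card reports metrics only, so a hedged usage sketch follows; it assumes the fine-tuned head carries the usual 1–5 star labels of `amazon_reviews_multi`, which is not documented in the card itself.

```python
from transformers import pipeline

# Illustrative only; the label names/order are an assumption.
classifier = pipeline("text-classification", model="milyiyo/multi-minilm-finetuned-amazon-review")
print(classifier("El producto llegó tarde y dañado."))  # placeholder Spanish review
```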
huggingtweets/emsorkun
huggingtweets
2022-01-16T22:19:55Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1477509052074766340/rVamRzsW_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Enver Melih Sorkun</div> <div style="text-align: center; font-size: 14px;">@emsorkun</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Enver Melih Sorkun. | Data | Enver Melih Sorkun | | --- | --- | | Tweets downloaded | 2107 | | Retweets | 618 | | Short tweets | 130 | | Tweets kept | 1359 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c12hxxur/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @emsorkun's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3prqt8oz) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3prqt8oz/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/emsorkun') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ilevs/opus-mt-ru-en-finetuned-ru-to-en
ilevs
2022-01-16T19:29:07Z
5
1
transformers
[ "transformers", "pytorch", "tensorboard", "marian", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: opus-mt-ru-en-finetuned-ru-to-en results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-ru-en-finetuned-ru-to-en This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.1251 - Bleu: 15.9892 - Gen Len: 5.0168 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 2.6914 | 1.0 | 4956 | 2.5116 | 11.1411 | 4.9989 | | 2.2161 | 2.0 | 9912 | 2.3255 | 11.7334 | 5.1678 | | 1.9237 | 3.0 | 14868 | 2.2388 | 13.6802 | 5.1463 | | 1.7087 | 4.0 | 19824 | 2.1892 | 13.8815 | 5.0625 | | 1.5423 | 5.0 | 24780 | 2.1586 | 14.8182 | 5.0779 | | 1.3909 | 6.0 | 29736 | 2.1445 | 14.3603 | 5.2194 | | 1.3041 | 7.0 | 34692 | 2.1323 | 16.2138 | 5.0438 | | 1.2078 | 8.0 | 39648 | 2.1275 | 16.2574 | 5.0165 | | 1.1523 | 9.0 | 44604 | 2.1255 | 16.0368 | 5.014 | | 1.1005 | 10.0 | 49560 | 2.1251 | 15.9892 | 5.0168 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
Shushant/biobert-v1.1-biomedicalQuestionAnswering
Shushant
2022-01-16T15:34:49Z
83
5
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer model-index: - name: biobert-v1.1-biomedicalQuestionAnswering results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biobert-v1.1-biomedicalQuestionAnswering This model is a fine-tuned version of [dmis-lab/biobert-v1.1](https://huggingface.co/dmis-lab/biobert-v1.1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.9009 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 22 | 3.7409 | | No log | 2.0 | 44 | 3.1852 | | No log | 3.0 | 66 | 3.0342 | | No log | 4.0 | 88 | 2.9416 | | No log | 5.0 | 110 | 2.9009 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
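The auto-generated card omits a usage example. Given the question-answering pipeline tag, a minimal sketch would look like the following; the question and context are placeholders, not taken from the (unknown) training data.

```python
from transformers import pipeline

qa = pipeline("question-answering", model="Shushant/biobert-v1.1-biomedicalQuestionAnswering")
result = qa(
    question="What is metformin used to treat?",  # placeholder question
    context="Metformin is a first-line medication for the treatment of type 2 diabetes.",
)
print(result["answer"], result["score"])
```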
ptaszynski/yacis-electra-small-japanese-cyberbullying
ptaszynski
2022-01-16T13:51:28Z
61
6
transformers
[ "transformers", "pytorch", "electra", "text-classification", "ja", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- language: ja license: cc-by-sa-4.0 datasets: - YACIS corpus - Harmful BBS Japanese comments dataset - Twitter Japanese cyberbullying dataset --- # yacis-electra-small-cyberbullying This is an [ELECTRA](https://github.com/google-research/electra) Small model for the Japanese language finetuned for automatic cyberbullying detection. The original foundation model was originally pretrained on 5.6 billion words [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus, and later finetuned on a balanced dataset created by unifying two datasets, namely "Harmful BBS Japanese comments dataset" and "Twitter Japanese cyberbullying dataset". ## Model architecture The original model was pretrained using ELECTRA Small model settings and can be found here: [https://huggingface.co/ptaszynski/yacis-electra-small-japanese](https://huggingface.co/ptaszynski/yacis-electra-small-japanese) ## Licenses The finetuned model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a> ## Citations Please, cite this model using the following citation. ``` @inproceedings{shibata2022yacis-electra, title={日本語大規模ブログコーパスYACISに基づいたELECTRA事前学習済み言語モデルの作成及び性能評価}, % title={Development and performance evaluation of ELECTRA pretrained language model based on YACIS large-scale Japanese blog corpus [in Japanese]}, %% for English citations author={柴田 祥伍 and プタシンスキ ミハウ and エロネン ユーソ and ノヴァコフスキ カロル and 桝井 文人}, % author={Shibata, Shogo and Ptaszynski, Michal and Eronen, Juuso and Nowakowski, Karol and Masui, Fumito}, %% for English citations booktitle={言語処理学会第28回年次大会(NLP2022) (予定)}, % booktitle={Proceedings of The 28th Annual Meeting of The Association for Natural Language Processing (NLP2022)}, %% for English citations pages={1--4}, year={2022} } ``` The two datasets used for finetuning should be cited using the following references. - Harmful BBS Japanese comments dataset: ``` @book{ptaszynski2018automatic, title={Automatic Cyberbullying Detection: Emerging Research and Opportunities: Emerging Research and Opportunities}, author={Ptaszynski, Michal E and Masui, Fumito}, year={2018}, publisher={IGI Global} } ``` ``` @article{松葉達明2009学校非公式サイトにおける有害情報検出, title={学校非公式サイトにおける有害情報検出}, author={松葉達明 and 里見尚宏 and 桝井文人 and 河合敦夫 and 井須尚紀}, journal={電子情報通信学会技術研究報告. NLC, 言語理解とコミュニケーション}, volume={109}, number={142}, pages={93--98}, year={2009}, publisher={一般社団法人電子情報通信学会} } ``` - Twitter Japanese cyberbullying dataset: ``` TBA ``` The pretraining was done using YACIS corpus, which should be cited using at least one of the following references. 
``` @inproceedings{ptaszynski2012yacis, title={YACIS: A five-billion-word corpus of Japanese blogs fully annotated with syntactic and affective information}, author={Ptaszynski, Michal and Dybala, Pawel and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio}, booktitle={Proceedings of the AISB/IACAP world congress}, pages={40--49}, year={2012}, howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}" } ``` ``` @article{ptaszynski2014automatically, title={Automatically annotating a five-billion-word corpus of Japanese blogs for sentiment and affect analysis}, author={Ptaszynski, Michal and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio}, journal={Computer Speech \& Language}, volume={28}, number={1}, pages={38--55}, year={2014}, publisher={Elsevier}, howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}" } ```
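The card above documents licensing and citations but no inference code. A hedged usage sketch with the Transformers pipeline follows; the label names of the fine-tuned head are not documented, and the Japanese sentence is a harmless placeholder.

```python
from transformers import pipeline

# Sketch only; the card does not document the classifier's label names.
detector = pipeline(
    "text-classification",
    model="ptaszynski/yacis-electra-small-japanese-cyberbullying",
)
print(detector("おはようございます、今日もよろしくお願いします。"))  # placeholder sentence
```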
jiobiala24/wav2vec2-base-checkpoint-5
jiobiala24
2022-01-16T10:56:18Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-5 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-5 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-4](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-4) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.9849 - Wer: 0.3354 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3947 | 1.96 | 1000 | 0.5749 | 0.3597 | | 0.2856 | 3.93 | 2000 | 0.6212 | 0.3479 | | 0.221 | 5.89 | 3000 | 0.6280 | 0.3502 | | 0.1755 | 7.86 | 4000 | 0.6517 | 0.3526 | | 0.1452 | 9.82 | 5000 | 0.7115 | 0.3481 | | 0.1256 | 11.79 | 6000 | 0.7687 | 0.3509 | | 0.1117 | 13.75 | 7000 | 0.7785 | 0.3490 | | 0.0983 | 15.72 | 8000 | 0.8115 | 0.3442 | | 0.0877 | 17.68 | 9000 | 0.8290 | 0.3429 | | 0.0799 | 19.65 | 10000 | 0.8517 | 0.3412 | | 0.0733 | 21.61 | 11000 | 0.9370 | 0.3448 | | 0.066 | 23.58 | 12000 | 0.9157 | 0.3410 | | 0.0623 | 25.54 | 13000 | 0.9673 | 0.3377 | | 0.0583 | 27.5 | 14000 | 0.9804 | 0.3348 | | 0.0544 | 29.47 | 15000 | 0.9849 | 0.3354 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
matthewburke/korean_sentiment
matthewburke
2022-01-16T02:31:37Z
4,148
16
transformers
[ "transformers", "pytorch", "electra", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
``` from transformers import pipeline classifier = pipeline("text-classification", model="matthewburke/korean_sentiment") custom_tweet = "영화 재밌다." preds = classifier(custom_tweet, return_all_scores=True) is_positive = preds[0][1]['score'] > 0.5 ```
milyiyo/minilm-finetuned-emotion
milyiyo
2022-01-16T00:37:00Z
3
1
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - emotion metrics: - f1 model-index: - name: minilm-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: F1 type: f1 value: 0.931192 --- Base model: [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) Dataset: [emotion](https://huggingface.co/datasets/emotion) These are the results on the evaluation set: | Attribute | Value | | ------------------ | -------- | | Training Loss | 0.163100 | | Validation Loss | 0.192153 | | F1 | 0.931192 |
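No usage snippet is given in the card. A minimal sketch with the Transformers pipeline follows, assuming the head uses the six standard `emotion` dataset labels (sadness, joy, love, anger, fear, surprise).

```python
from transformers import pipeline

# Hedged sketch; the label mapping is assumed to follow the emotion dataset.
classifier = pipeline("text-classification", model="milyiyo/minilm-finetuned-emotion")
print(classifier("I can't believe how wonderful today turned out!"))
```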
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1
husnu
2022-01-15T20:09:15Z
10
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-1 This model is a fine-tuned version of [husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3](https://huggingface.co/husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.4196 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.5885 | 1.0 | 2245 | 1.4196 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3
husnu
2022-01-15T18:42:09Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - turkish squad v2 model-index: - name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3TQUAD2-finetuned_lr-2e-05_epochs-3 This model is a fine-tuned version of [husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3](https://huggingface.co/husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3) on the turkish squad2 dataset. It achieves the following results on the evaluation set: - Loss: 1.9011 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6404 | 1.0 | 2245 | 1.4524 | | 0.403 | 2.0 | 4490 | 1.5638 | | 0.2355 | 3.0 | 6735 | 1.9011 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
pediberto/autonlp-testing-504313966
pediberto
2022-01-15T15:02:13Z
3
0
transformers
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "unk", "dataset:pediberto/autonlp-data-testing", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: unk widget: - text: "I love AutoNLP 🤗" datasets: - pediberto/autonlp-data-testing co2_eq_emissions: 12.994518654810642 --- # Model Trained Using AutoNLP - Problem type: Binary Classification - Model ID: 504313966 - CO2 Emissions (in grams): 12.994518654810642 ## Validation Metrics - Loss: 0.19673296809196472 - Accuracy: 0.9398032027783138 - Precision: 0.9133115705476967 - Recall: 0.9718255499807025 - AUC: 0.985316873222122 - F1: 0.9416604338070308 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/pediberto/autonlp-testing-504313966 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("pediberto/autonlp-testing-504313966", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("pediberto/autonlp-testing-504313966", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
jiobiala24/wav2vec2-base-checkpoint-4
jiobiala24
2022-01-15T12:59:52Z
4
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-base-checkpoint-4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-checkpoint-4 This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-3](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-3) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
Huertas97/es_roberta_base_bne_leetspeak_ner
Huertas97
2022-01-15T11:55:46Z
4
1
spacy
[ "spacy", "token-classification", "es", "license:apache-2.0", "model-index", "region:us" ]
token-classification
2022-03-02T23:29:04Z
--- tags: - spacy - token-classification language: - es license: apache-2.0 widget: - text: "La C0v!d es un 3ng@ño de los G0b!3rno$" example_title: "Word camouflage detection" model-index: - name: es_roberta_base_bne_leetspeak_ner results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.8979055626 - name: NER Recall type: recall value: 0.9393701406 - name: NER F Score type: f_score value: 0.9181699547 --- | Feature | Description | | --- | --- | | **Name** | `es_roberta_base_bne_leetspeak_ner` | | **Version** | `0.0.0` | | **spaCy** | `>=3.2.1,<3.3.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) model a transformer-based masked language model for the Spanish language pre-trained with a total of 570GB of clean and deduplicated text compiled from the web crawlings performed by the National Library of Spain (Biblioteca Nacional de España) <br> [LeetSpeak-NER](https://huggingface.co/spaces/Huertas97/LeetSpeak-NER) app where this model is in production for countering information disorders| | **License** | Apache 2.0 | | **Author** | [Álvaro Huertas García](https://www.linkedin.com/in/alvaro-huertas-garcia/) at [AI+DA](http://aida.etsisi.upm.es/) | ### Label Scheme <details> <summary>View label scheme (4 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `INV_CAMO`, `LEETSPEAK`, `MIX`, `PUNCT_CAMO` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 91.82 | | `ENTS_P` | 89.79 | | `ENTS_R` | 93.94 | | `TRANSFORMER_LOSS` | 166484.92 | | `NER_LOSS` | 318457.35 |
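Usage mirrors the English sibling model above; a brief sketch follows, again assuming the spaCy package `es_roberta_base_bne_leetspeak_ner` has been installed locally. The example sentence comes from the card's widget.

```python
import spacy

nlp = spacy.load("es_roberta_base_bne_leetspeak_ner")  # assumes a prior local install
doc = nlp("La C0v!d es un 3ng@ño de los G0b!3rno$")
for ent in doc.ents:
    print(ent.text, ent.label_)
```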
husnu/electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6
husnu
2022-01-15T07:27:37Z
6
0
transformers
[ "transformers", "pytorch", "tensorboard", "electra", "question-answering", "generated_from_trainer", "dataset:tsquad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - tsquad model-index: - name: electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # electra-small-turkish-uncased-discriminator-finetuned_lr-2e-05_epochs-6 This model is a fine-tuned version of [loodos/electra-small-turkish-uncased-discriminator](https://huggingface.co/loodos/electra-small-turkish-uncased-discriminator) on the turkish squad dataset. It achieves the following results on the evaluation set: - Loss: 2.0379 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 4.128 | 1.0 | 722 | 2.7187 | | 3.0376 | 2.0 | 1444 | 2.4486 | | 2.5304 | 3.0 | 2166 | 2.3485 | | 2.4214 | 4.0 | 2888 | 2.0450 | | 2.1568 | 5.0 | 3610 | 2.0576 | | 2.0752 | 6.0 | 4332 | 2.0379 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
husnu/bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3
husnu
2022-01-15T07:25:53Z
25
2
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- tags: - generated_from_trainer datasets: - squad model-index: - name: bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-turkish-128k-cased-finetuned_lr-2e-05_epochs-3 This model is a fine-tuned version of [dbmdz/bert-base-turkish-128k-cased](https://huggingface.co/dbmdz/bert-base-turkish-128k-cased) on the turkish squad dataset. It achieves the following results on the evaluation set: - Loss: 1.4724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.3911 | 1.0 | 1281 | 1.4900 | | 0.9058 | 2.0 | 2562 | 1.3471 | | 0.6747 | 3.0 | 3843 | 1.4724 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
khizon/bert-unreliable-news-eng
khizon
2022-01-15T07:04:33Z
8
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
# Unreliable News Classifier (English) Trained, validated, and tested using a subset of the NELA-GT-2018 dataset. The dataset is split such that there is no overlap of news sources between the three sets. This model used the pre-trained weights of `bert-base-cased` as a starting point and achieved 84% accuracy on the test set. For more details: [Github](https://github.com/khizon/CS284_final_project)
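No usage code is provided in the card. Below is a hedged sketch with the Transformers pipeline; the mapping of output labels to reliable/unreliable classes is not documented.

```python
from transformers import pipeline

# Sketch only; the label-to-class mapping is not documented in the card.
classifier = pipeline("text-classification", model="khizon/bert-unreliable-news-eng")
print(classifier("Scientists discover that chocolate cures all known diseases overnight."))
```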
husnu/xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9
husnu
2022-01-15T05:08:37Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-TQUAD-finetuned_lr-2e-05_epochs-9 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the Turkish squad dataset. It achieves the following results on the evaluation set: - Loss: 2.2340 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 9 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.5236 | 1.0 | 1050 | 3.0042 | | 2.8489 | 2.0 | 2100 | 2.5866 | | 2.5485 | 3.0 | 3150 | 2.3526 | | 2.4067 | 4.0 | 4200 | 2.3535 | | 2.3091 | 5.0 | 5250 | 2.2862 | | 2.2401 | 6.0 | 6300 | 2.3989 | | 2.1715 | 7.0 | 7350 | 2.2284 | | 2.1414 | 8.0 | 8400 | 2.2298 | | 2.1221 | 9.0 | 9450 | 2.2340 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/f1
huggingtweets
2022-01-15T02:57:32Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/f1/1642215447713/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1385670642327040001/Z5LaCXJI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Formula 1</div> <div style="text-align: center; font-size: 14px;">@f1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Formula 1. | Data | Formula 1 | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 157 | | Short tweets | 35 | | Tweets kept | 3058 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tsp2kk9/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @f1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3vu2nlz5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3vu2nlz5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/f1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
NbAiLab/roberta_NCC_des_128_decayfrom200
NbAiLab
2022-01-15T00:11:52Z
4
0
transformers
[ "transformers", "pytorch", "jax", "tensorboard", "roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:04Z
Just for performing some experiments. Do not use.
begar/distilgpt2-finetuned
begar
2022-01-14T22:01:35Z
9
0
transformers
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilgpt2-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
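The card above records only the training setup and leaves the usage sections empty. A minimal usage sketch, assuming the checkpoint is publicly hosted under the repo id `begar/distilgpt2-finetuned` and ships its tokenizer (the prompt and generation settings are illustrative only):

```python
from transformers import pipeline

# Load the fine-tuned distilgpt2 checkpoint named in this record.
generator = pipeline("text-generation", model="begar/distilgpt2-finetuned")

# Generate a short continuation; the prompt and max_length are placeholders for illustration.
print(generator("Once upon a time", max_length=40, num_return_sequences=1)[0]["generated_text"])
```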
anuragshas/wav2vec2-large-xlsr-as
anuragshas
2022-01-14T16:41:25Z
21
0
transformers
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "as", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- language: as datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Anurag Singh XLSR Wav2Vec2 Large 53 Assamese results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice as type: common_voice args: as metrics: - name: Test WER type: wer value: 69.63 --- # Wav2Vec2-Large-XLSR-53-Assamese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Assamese using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "as", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as") model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Assamese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "as", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anuragshas/wav2vec2-large-xlsr-as") model = Wav2Vec2ForCTC.from_pretrained("anuragshas/wav2vec2-large-xlsr-as") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\%\\”\\়\\।]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub('’ ',' ',batch["sentence"]) batch["sentence"] = re.sub(' ‘',' ',batch["sentence"]) batch["sentence"] = re.sub('’|‘','\'',batch["sentence"]) batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets.
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 69.63 % ## Training The Common Voice `train` and `validation` datasets were used for training.
vennify/t5-base-grammar-correction
vennify
2022-01-14T16:35:23Z
13,051
165
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "grammar", "en", "dataset:jfleg", "arxiv:1702.04066", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- language: en tags: - grammar - text2text-generation license: cc-by-nc-sa-4.0 datasets: - jfleg --- # T5 Grammar Correction This model generates a revised version of inputted text with the goal of containing fewer grammatical errors. It was trained with [Happy Transformer](https://github.com/EricFillion/happy-transformer) using a dataset called [JFLEG](https://arxiv.org/abs/1702.04066). Here's a [full article](https://www.vennify.ai/fine-tune-grammar-correction/) on how to train a similar model. ## Usage `pip install happytransformer ` ```python from happytransformer import HappyTextToText, TTSettings happy_tt = HappyTextToText("T5", "vennify/t5-base-grammar-correction") args = TTSettings(num_beams=5, min_length=1) # Add the prefix "grammar: " before each input result = happy_tt.generate_text("grammar: This sentences has has bads grammar.", args=args) print(result.text) # This sentence has bad grammar. ```
socrates/socrates2.0
socrates
2022-01-14T16:07:04Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
The unexamined life is not worth living
erwanlc/t5-coktails_recipe-small
erwanlc
2022-01-14T14:32:10Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: t5-coktails_recipe-small results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-coktails_recipe-small This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
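As with other auto-generated cards in this listing, intended usage is left empty. A hedged sketch, assuming the model loads through the `text2text-generation` pipeline under the repo id `erwanlc/t5-coktails_recipe-small`; the expected input format is not documented, so the prompt below is a guess:

```python
from transformers import pipeline

# Load the fine-tuned T5 checkpoint named in this record.
generator = pipeline("text2text-generation", model="erwanlc/t5-coktails_recipe-small")

# The card does not document the prompt format; a cocktail name is used here purely as a guess.
print(generator("margarita", max_length=128)[0]["generated_text"])
```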
vachonni/wav2vec2-large-xls-r-300m-da-colab
vachonni
2022-01-14T12:14:53Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-xls-r-300m-da-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-da-colab This model is a fine-tuned version of [Alvenir/wav2vec2-base-da](https://huggingface.co/Alvenir/wav2vec2-base-da) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.13.3 - Tokenizers 0.10.3
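The card above does not show how to run inference. A minimal sketch, assuming the checkpoint works with the standard `automatic-speech-recognition` pipeline and that the input audio is 16 kHz mono, as is usual for wav2vec2 models (the file path is a placeholder):

```python
from transformers import pipeline

# Load the Danish ASR checkpoint named in this record.
asr = pipeline("automatic-speech-recognition", model="vachonni/wav2vec2-large-xls-r-300m-da-colab")

# Transcribe a local recording; "sample_danish.wav" is a placeholder path for a 16 kHz mono file.
print(asr("sample_danish.wav")["text"])
```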
lewtun/distilbert-base-uncased-finetuned-emotion-test-01
lewtun
2022-01-14T10:29:26Z
7
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion-test-01 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.39 - name: F1 type: f1 value: 0.21884892086330932 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-test-01 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 1.7510 - Accuracy: 0.39 - F1: 0.2188 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 2 | 1.7634 | 0.39 | 0.2188 | | No log | 2.0 | 4 | 1.7510 | 0.39 | 0.2188 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
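Given the modest reported accuracy (0.39), the following sketch is only a demonstration of how such a checkpoint is typically queried, assuming it loads through the standard `text-classification` pipeline:

```python
from transformers import pipeline

# Load the emotion classifier named in this record.
classifier = pipeline("text-classification", model="lewtun/distilbert-base-uncased-finetuned-emotion-test-01")

# The card reports roughly 39% accuracy, so treat the prediction as illustrative rather than reliable.
print(classifier("I am thrilled about these results!"))
```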
anirudh21/xlnet-base-cased-finetuned-rte
anirudh21
2022-01-14T07:04:23Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "xlnet", "text-classification", "generated_from_trainer", "dataset:glue", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: xlnet-base-cased-finetuned-rte results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: rte metrics: - name: Accuracy type: accuracy value: 0.6895306859205776 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlnet-base-cased-finetuned-rte This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.0656 - Accuracy: 0.6895 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 156 | 0.7007 | 0.4874 | | No log | 2.0 | 312 | 0.6289 | 0.6751 | | No log | 3.0 | 468 | 0.7020 | 0.6606 | | 0.6146 | 4.0 | 624 | 1.0573 | 0.6570 | | 0.6146 | 5.0 | 780 | 1.0656 | 0.6895 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
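RTE is a sentence-pair task, which the card does not illustrate. A minimal sketch, assuming the checkpoint loads with the standard Auto classes; label names may come back as generic `LABEL_0`/`LABEL_1` if the config does not map them:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the RTE-finetuned XLNet checkpoint named in this record.
tokenizer = AutoTokenizer.from_pretrained("anirudh21/xlnet-base-cased-finetuned-rte")
model = AutoModelForSequenceClassification.from_pretrained("anirudh21/xlnet-base-cased-finetuned-rte")

# RTE asks whether the first sentence (premise) entails the second (hypothesis).
inputs = tokenizer("A man is playing a guitar.", "A person is making music.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```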
atu/paddle_detection
atu
2022-01-14T00:33:03Z
0
0
null
[ "region:us" ]
null
2022-03-02T23:29:05Z
A project for testing Paddle server-side models.
husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3
husnu
2022-01-14T00:17:31Z
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: mit tags: - generated_from_trainer datasets: - squad model-index: - name: xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.2864 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.6088 | 1.0 | 5533 | 1.4429 | | 1.3928 | 2.0 | 11066 | 1.3183 | | 1.3059 | 3.0 | 16599 | 1.2864 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
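A short usage sketch for the fine-tuned checkpoint above, assuming it loads through the standard `question-answering` pipeline; the context and question are made up for illustration:

```python
from transformers import pipeline

# Load the SQuAD-finetuned checkpoint named in this record.
qa = pipeline("question-answering", model="husnu/xtremedistil-l6-h256-uncased-finetuned_lr-2e-05_epochs-3")

# Toy context/question pair; any extractive QA input follows the same pattern.
context = "The model was fine-tuned for three epochs on the SQuAD dataset with a learning rate of 2e-05."
print(qa(question="How many epochs was the model fine-tuned for?", context=context))
```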
carlosaguayo/cats_vs_dogs
carlosaguayo
2022-01-13T21:58:53Z
9
3
tf-keras
[ "tf-keras", "image-classification", "region:us" ]
image-classification
2022-03-02T23:29:05Z
--- tags: - image-classification widget: - src: https://upload.wikimedia.org/wikipedia/commons/0/0c/About_The_Dog.jpg example_title: Dog-1 - src: https://yt3.ggpht.com/ytc/AKedOLRvxGYSdEHqu0X4EYcJ2kq7BttRKBNpfwdHJf3FSg=s900-c-k-c0x00ffffff-no-rj example_title: Dog-2 - src: https://upload.wikimedia.org/wikipedia/commons/c/c7/Tabby_cat_with_blue_eyes-3336579.jpg example_title: Cat-1 - src: https://pixabay.com/get/g31cf3b945cf9b9144eb6c1ecf514b4db668875b75d0c615e0330aec74bef5edde11567ef4a6f5fdb61a828b8086a39d3a0e72fb326d78467786dcdde4e6fa23c5c4c309d0abc089a8663809c175aee22_1920.jpg example_title: Cat-2 --- # Classify Cats and Dogs VGG16 fine tuned to classify cats and dogs Notebook https://www.kaggle.com/carlosaguayo/cats-vs-dogs-transfer-learning-pre-trained-vgg16 ### How to use Here is how to use this model to classify an image as a cat or dog: ```python from skimage import io import cv2 import matplotlib.pyplot as plt from huggingface_hub import from_pretrained_keras %matplotlib inline ROWS, COLS = 150, 150 model = from_pretrained_keras("carlosaguayo/cats_vs_dogs") img_url = 'https://upload.wikimedia.org/wikipedia/commons/0/0c/About_The_Dog.jpg' # img_url = 'https://upload.wikimedia.org/wikipedia/commons/c/c7/Tabby_cat_with_blue_eyes-3336579.jpg' img = io.imread(img_url) img = cv2.resize(img, (ROWS, COLS), interpolation=cv2.INTER_CUBIC) img = img / 255.0 img = img.reshape(1,ROWS,COLS,3) prediction = model.predict(img)[0][0] if prediction >= 0.5: print('I am {:.2%} sure this is a Cat'.format(prediction)) else: print('I am {:.2%} sure this is a Dog'.format(1-prediction)) plt.imshow(img[0], 'Blues') plt.axis("off") plt.show() ```
keras-io/graph-attention-nets
keras-io
2022-01-13T14:54:10Z
8
6
tf-keras
[ "tf-keras", "graph neural networks", "arxiv:1710.10903", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- thumbnail: "url to a thumbnail used in social sharing" tags: - graph neural networks license: - cc0-1.0 --- ## Keras Implementation of Graph Attention Networks for Node Classification 🕸 This repo contains the model and the notebook [to this Keras example on Graph Attention Networks for Node Classification](https://keras.io/examples/graph/gat_node_classification/). Full credits to: [Alexander Kensert](https://github.com/akensert) ## Background Information Graph neural networks are the preferred neural network architecture for processing data structured as graphs (for example, social networks or molecule structures), yielding better results than fully-connected networks or convolutional networks. This tutorial implements a specific graph neural network known as a [Graph Attention Network (GAT)](https://arxiv.org/abs/1710.10903) to predict labels of scientific papers based on the papers they cite (using the [Cora dataset](https://linqs.soe.ucsc.edu/data)). References: For more information on GAT, see the original paper [Graph Attention Networks](https://arxiv.org/abs/1710.10903) as well as [DGL's Graph Attention Networks](https://docs.dgl.ai/en/0.4.x/tutorials/models/1_gnn/9_gat.html) documentation.
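The card describes the task but gives no loading code. A minimal sketch, assuming the weights were pushed in a format readable by `from_pretrained_keras`, as other keras-io repos in this listing are; whether the loaded model can be called directly on new graphs depends on how the node features and adjacency information were saved:

```python
from huggingface_hub import from_pretrained_keras

# Load the GAT node-classification model from this repo and inspect its architecture.
model = from_pretrained_keras("keras-io/graph-attention-nets")
model.summary()
```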
keras-io/deep-dream
keras-io
2022-01-13T14:53:54Z
10
3
tf-keras
[ "tf-keras", "gan", "generative adversarial networks", "deep dream", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - gan - generative adversarial networks - deep dream license: - cc0-1.0 --- ## Keras Implementation of Deep Dream 🦚🌌 This repo contains the model and the notebook [for this Deep Dream implementation of Keras](https://keras.io/examples/generative/deep_dream/). Full credits to: [François Chollet](https://twitter.com/fchollet) ![deepdream](https://keras.io/img/examples/generative/deep_dream/deep_dream_17_0.png) ## Background Information "Deep dream" is an image-filtering technique which consists of taking an image classification model, and running gradient ascent over an input image to try to maximize the activations of specific layers (and sometimes, specific units in specific layers) for this input. It produces hallucination-like visuals. It was first introduced by Alexander Mordvintsev from Google in July 2015. Process: - Load the original image. - Define a number of processing scales ("octaves"), from smallest to largest. - Resize the original image to the smallest scale. - For every scale, starting with the smallest (i.e. current one): - Run gradient ascent - Upscale image to the next scale - Re-inject the detail that was lost at upscaling time - Stop when we are back to the original size. To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it, and compare the result to the (resized) original image.
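The process list above centers on repeated gradient ascent over the input image. Below is a minimal, self-contained sketch of a single ascent step; it is not taken from the linked notebook, which additionally handles octaves and detail re-injection:

```python
import tensorflow as tf

def gradient_ascent_step(image, feature_extractor, learning_rate=0.01):
    """One deep-dream step: nudge the image to increase the chosen layer's mean activation."""
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = tf.reduce_mean(feature_extractor(image))
    grads = tape.gradient(loss, image)
    grads /= tf.maximum(tf.reduce_mean(tf.abs(grads)), 1e-6)  # normalize gradients for stable steps
    return image + learning_rate * grads

# Toy usage: any callable mapping an image batch to activations can stand in for a real layer.
image = tf.random.uniform((1, 64, 64, 3))
dummy_layer = tf.keras.layers.Conv2D(8, 3, padding="same")
print(gradient_ascent_step(image, dummy_layer).shape)
```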
keras-io/deep-deterministic-policy-gradient
keras-io
2022-01-13T14:53:44Z
7
0
tf-keras
[ "tf-keras", "reinforcement learning", "cartpole", "deep deterministic policy gradient", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - reinforcement learning - cartpole - deep deterministic policy gradient license: - cc0-1.0 --- ## Keras Implementation of Deep Deterministic Policy Gradient ⏱🤖 This repo contains the model and the notebook [to this Keras example on Deep Deterministic Policy Gradient on pendulum](https://keras.io/examples/rl/ddpg_pendulum/). Full credits to: [Hemant Singh](https://github.com/amifunny) ![pendulum_gif](https://i.imgur.com/eEH8Cz6.gif) ## Background Information Deep Deterministic Policy Gradient (DDPG) is a model-free off-policy algorithm for learning continuous actions. It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces. This tutorial closely follows the paper "Continuous control with deep reinforcement learning". We are trying to solve the classic Inverted Pendulum control problem. In this setting, we can take only two actions: swing left or swing right. What makes this problem challenging for Q-Learning Algorithms is that actions are continuous instead of being discrete. That is, instead of using two discrete actions like -1 or +1, we have to select from infinite actions ranging from -2 to +2. Just like the Actor-Critic method, we have two networks: Actor - It proposes an action given a state. Critic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action. DDPG uses two more techniques not present in the original DQN: First, it uses two Target networks. Why? Because it adds stability to training. In short, we are learning from estimated targets and Target networks are updated slowly, hence keeping our estimated targets stable. Conceptually, this is like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better", as opposed to saying "I'm going to re-learn how to play this entire game after every move". See this StackOverflow answer. Second, it uses Experience Replay. We store a list of tuples (state, action, reward, next_state), and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far.
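The "slow-learning target network" trick described above reduces to a Polyak average of weights. A standalone illustration follows; it is not taken from the linked notebook, and the tau value is arbitrary:

```python
import numpy as np

def soft_update(target_weights, online_weights, tau=0.005):
    """Move each target weight a small step (tau) toward its online counterpart."""
    return [tau * w + (1.0 - tau) * t for w, t in zip(online_weights, target_weights)]

# Toy arrays standing in for network layer weights.
online = [np.ones(3)]
target = [np.zeros(3)]
target = soft_update(target, online, tau=0.1)
print(target[0])  # [0.1 0.1 0.1] -- the target network drifts slowly toward the online network
```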
keras-io/ppo-cartpole
keras-io
2022-01-13T14:53:36Z
4
0
tf-keras
[ "tf-keras", "reinforcement learning", "proximal policy optimization", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - reinforcement learning - proximal policy optimization license: - cc0-1.0 --- ## Keras Implementation of Proximal Policy Optimization on Cartpole Environment 🔨🤖 This repo contains the model and the notebook [to this Keras example on PPO for Cartpole](https://keras.io/examples/rl/ppo_cartpole/). Full credits to: Ilias Chrysovergis ![cartpole_gif](https://i.imgur.com/tKhTEaF.gif) ## Background Information ### CartPole-v0 A pole is attached by an un-actuated joint to a cart, which moves along a frictionless track. The system is controlled by applying a force of +1 or -1 to the cart. The pendulum starts upright, and the goal is to prevent it from falling over. A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center. After 200 steps the episode ends. Thus, the highest return we can get is equal to 200. ### Proximal Policy Optimization PPO is a policy gradient method and can be used for environments with either discrete or continuous action spaces. It trains a stochastic policy in an on-policy way. Also, it utilizes the actor critic method. The actor maps the observation to an action and the critic gives an expectation of the rewards of the agent for the observation given. Firstly, it collects a set of trajectories for each epoch by sampling from the latest version of the stochastic policy. Then, the rewards-to-go and the advantage estimates are computed in order to update the policy and fit the value function. The policy is updated via a stochastic gradient ascent optimizer, while the value function is fitted via some gradient descent algorithm. This procedure is applied for many epochs until the environment is solved.
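The core of the policy update described above is the clipped surrogate objective. A small numeric illustration follows; a clip epsilon of 0.2 is a common default, and the exact settings used in the linked notebook may differ:

```python
import numpy as np

def clipped_surrogate(ratio, advantage, clip_eps=0.2):
    """PPO objective for one sample, where ratio = pi_new(a|s) / pi_old(a|s)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return np.minimum(unclipped, clipped)  # pessimistic bound keeps each policy update small

print(clipped_surrogate(1.5, 2.0))   # 2.4 -- a large ratio with positive advantage is capped at 1.2 * 2.0
print(clipped_surrogate(0.5, -1.0))  # -0.8 -- with negative advantage the clipped branch is the lower bound
```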
keras-io/time-series-anomaly-detection-autoencoder
keras-io
2022-01-13T14:52:51Z
14
13
tf-keras
[ "tf-keras", "autoencoder", "time series", "anomaly detection", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - autoencoder - time series - anomaly detection license: - cc0-1.0 --- ## Keras Implementation of time series anomaly detection using an Autoencoder ⌛ This repo contains the model and the notebook [for this time series anomaly detection implementation of Keras](https://keras.io/examples/timeseries/timeseries_anomaly_detection/). Full credits to: [Pavithra Vijay](https://github.com/pavithrasv) ## Background Information This notebook demonstrates how you can use a reconstruction convolutional autoencoder model to detect anomalies in timeseries data.
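The card summarizes the idea: windows the autoencoder reconstructs poorly are treated as anomalies. Below is a synthetic illustration of that thresholding rule; the real notebook derives the threshold from training-set reconstruction error, so the quantile used here is an assumption:

```python
import numpy as np

def flag_anomalies(originals, reconstructions, quantile=0.99):
    """Flag windows whose mean absolute reconstruction error exceeds a quantile threshold."""
    errors = np.mean(np.abs(originals - reconstructions), axis=1)
    threshold = np.quantile(errors, quantile)
    return errors > threshold, threshold

# Synthetic stand-in data: 100 windows of 288 timesteps, with one window reconstructed badly on purpose.
rng = np.random.default_rng(0)
originals = rng.normal(size=(100, 288))
reconstructions = originals + 0.01 * rng.normal(size=(100, 288))
reconstructions[7] += 1.0
flags, threshold = flag_anomalies(originals, reconstructions)
print(np.where(flags)[0], round(threshold, 3))  # window 7 is flagged
```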
keras-io/simple-mnist-convnet
keras-io
2022-01-13T14:52:44Z
2
0
tf-keras
[ "tf-keras", "lstm", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:05Z
--- tags: - lstm license: - cc0-1.0 --- ## Keras Implementation of Convolutional Neural Networks for MNIST 1️⃣2️⃣3️⃣ This repo contains the model and the notebook [on Simple MNIST convnet](https://keras.io/examples/vision/mnist_convnet/). Full credits to: [François Chollet](https://github.com/fchollet)
nielsr/tapex-large-finetuned-sqa
nielsr
2022-01-13T14:41:16Z
7
0
transformers
[ "transformers", "pytorch", "bart", "text2text-generation", "tapex", "table-question-answering", "en", "dataset:msr_sqa", "arxiv:2107.07653", "license:apache-2.0", "autotrain_compatible", "region:us" ]
table-question-answering
2022-03-02T23:29:05Z
--- language: en tags: - tapex - table-question-answering license: apache-2.0 datasets: - msr_sqa inference: false --- TAPEX-large model fine-tuned on SQA. This model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou. Original repo can be found [here](https://github.com/microsoft/Table-Pretraining). To load it and run inference, you can do the following: ``` from transformers import BartTokenizer, BartForConditionalGeneration import pandas as pd tokenizer = BartTokenizer.from_pretrained("nielsr/tapex-large-finetuned-sqa") model = BartForConditionalGeneration.from_pretrained("nielsr/tapex-large-finetuned-sqa") # create table data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]} table = pd.DataFrame.from_dict(data) # turn into dict table_dict = {"header": list(table.columns), "rows": [list(row.values) for i,row in table.iterrows()]} # turn into format TAPEX expects # define the linearizer based on this code: https://github.com/microsoft/Table-Pretraining/blob/main/tapex/processor/table_linearize.py linearizer = IndexedRowTableLinearize() linear_table = linearizer.process_table(table_dict) # add question question = "how many movies does George Clooney have?" joint_input = question + " " + linear_table # encode encoding = tokenizer(joint_input, return_tensors="pt") # forward pass outputs = model.generate(**encoding) # decode tokenizer.batch_decode(outputs, skip_special_tokens=True) ```
mahaamami/distilroberta-base-model-transcript
mahaamami
2022-01-13T13:28:24Z
5
0
transformers
[ "transformers", "pytorch", "tensorboard", "roberta", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilroberta-base-model-transcript results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-model-transcript This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.1193 | 1.0 | 5570 | 1.9873 | | 2.0502 | 2.0 | 11140 | 1.9304 | | 1.9718 | 3.0 | 16710 | 1.8922 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
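A short usage sketch for the masked-language model above, assuming it loads through the standard `fill-mask` pipeline; RoBERTa-style models use the `<mask>` token, and the example sentence is made up:

```python
from transformers import pipeline

# Load the fill-mask checkpoint named in this record.
fill_mask = pipeline("fill-mask", model="mahaamami/distilroberta-base-model-transcript")

# DistilRoBERTa-based models use "<mask>" as the mask token.
for prediction in fill_mask("The speaker paused before answering the <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```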
nikcook/distilbert-base-uncased-finetuned-squad
nikcook
2022-01-13T11:28:01Z
3
0
transformers
[ "transformers", "pytorch", "tensorboard", "distilbert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1581 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2199 | 1.0 | 5533 | 1.1525 | | 0.9463 | 2.0 | 11066 | 1.1298 | | 0.7636 | 3.0 | 16599 | 1.1581 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.17.0 - Tokenizers 0.10.3
huggingtweets/hazuma
huggingtweets
2022-01-13T09:23:08Z
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: http://www.huggingtweets.com/hazuma/1642065783369/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1322114245467598850/pz_yTcye_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">東浩紀 Hiroki Azuma</div> <div style="text-align: center; font-size: 14px;">@hazuma</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from 東浩紀 Hiroki Azuma. | Data | 東浩紀 Hiroki Azuma | | --- | --- | | Tweets downloaded | 3230 | | Retweets | 1492 | | Short tweets | 1560 | | Tweets kept | 178 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ig7ewkg/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hazuma's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1uix46e5) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1uix46e5/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/hazuma') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
huggingtweets/elonmusk-hirox246-hitoshinagai1
huggingtweets
2022-01-13T07:16:46Z
3
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2022-03-02T23:29:05Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1474910968157249536/FS8-70Ie_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/646595746905620480/oeKI14gB_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1015469378777706496/WqKzDTb3_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & ひろゆき, Hiroyuki Nishimura & 永井均</div> <div style="text-align: center; font-size: 14px;">@elonmusk-hirox246-hitoshinagai1</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Elon Musk & ひろゆき, Hiroyuki Nishimura & 永井均. | Data | Elon Musk | ひろゆき, Hiroyuki Nishimura | 永井均 | | --- | --- | --- | --- | | Tweets downloaded | 2022 | 3248 | 3245 | | Retweets | 95 | 281 | 53 | | Short tweets | 598 | 1980 | 3056 | | Tweets kept | 1329 | 987 | 136 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1dzgeuwp/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-hirox246-hitoshinagai1's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12mhdct8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12mhdct8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/elonmusk-hirox246-hitoshinagai1') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
ptaszynski/yacis-electra-small-japanese
ptaszynski
2022-01-13T01:43:17Z
28
7
transformers
[ "transformers", "pytorch", "ja", "license:cc-by-sa-4.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05Z
--- language: ja license: cc-by-sa-4.0 datasets: - YACIS corpus --- # yacis-electra-small This is an [ELECTRA](https://github.com/google-research/electra) Small model for Japanese, pretrained on 354 million sentences / 5.6 billion words of the [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus. The corpus was tokenized for pretraining with [MeCab](https://taku910.github.io/mecab/). Subword tokenization was done with WordPiece. ## Model architecture This model uses ELECTRA Small model settings, 12 layers, 128 dimensions of hidden states, and 12 attention heads. Vocabulary size was set to 32,000 tokens. ## Training data and libraries YACIS-ELECTRA is trained on the whole of [YACIS](https://github.com/ptaszynski/yacis-corpus) blog corpus, which is a Japanese blog corpus containing 5.6 billion words in 354 million sentences. The corpus was originally split into sentences using custom rules, and each sentence was tokenized using [MeCab](https://taku910.github.io/mecab/). Subword tokenization for pretraining was done with WordPiece. We used the original [ELECTRA](https://github.com/google-research/electra) repository for pretraining. The pretraining process took 7 days and 6 hours under the following environment: CPU: Intel Core i9-7920X, RAM: 132 GB, GPU: GeForce GTX 1080 Ti x1. ## Licenses The pretrained model with all attached files is licensed under [CC BY-SA 4.0](http://creativecommons.org/licenses/by-sa/4.0/), or Creative Commons Attribution-ShareAlike 4.0 International License. <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-sa/4.0/88x31.png" /></a> ## Citations Please cite the model using the following citation. ``` @inproceedings{shibata2022yacis-electra, title={日本語大規模ブログコーパスYACISに基づいたELECTRA事前学習済み言語モデルの作成及び性能評価}, % title={Development and performance evaluation of ELECTRA pretrained language model based on YACIS large-scale Japanese blog corpus [in Japanese]}, %% for English citations author={柴田 祥伍 and プタシンスキ ミハウ and エロネン ユーソ and ノヴァコフスキ カロル and 桝井 文人}, % author={Shibata, Shogo and Ptaszynski, Michal and Eronen, Juuso and Nowakowski, Karol and Masui, Fumito}, %% for English citations booktitle={言語処理学会第28回年次大会(NLP2022) (予定)}, % booktitle={Proceedings of The 28th Annual Meeting of The Association for Natural Language Processing (NLP2022)}, %% for English citations pages={1--4}, year={2022} } ``` The model was built using sentences from the YACIS corpus, which should be cited using at least one of the following references. ``` @inproceedings{ptaszynski2012yacis, title={YACIS: A five-billion-word corpus of Japanese blogs fully annotated with syntactic and affective information}, author={Ptaszynski, Michal and Dybala, Pawel and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio}, booktitle={Proceedings of the AISB/IACAP world congress}, pages={40--49}, year={2012}, howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}" } ``` ``` @article{ptaszynski2014automatically, title={Automatically annotating a five-billion-word corpus of Japanese blogs for sentiment and affect analysis}, author={Ptaszynski, Michal and Rzepka, Rafal and Araki, Kenji and Momouchi, Yoshio}, journal={Computer Speech \& Language}, volume={28}, number={1}, pages={38--55}, year={2014}, publisher={Elsevier}, howpublished = "\url{https://github.com/ptaszynski/yacis-corpus}" } ```
flboehm/youtube-bert
flboehm
2022-01-12T21:29:46Z
10
2
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "fill-mask", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
2022-03-02T23:29:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: youtube-bert results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # youtube-bert This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4771 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.691 | 1.0 | 1077 | 2.5445 | | 2.5768 | 2.0 | 2154 | 2.5226 | | 2.5227 | 3.0 | 3231 | 2.5027 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
vinaydngowda/Robertabase_Ana4
vinaydngowda
2022-01-12T20:12:16Z
3
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "autonlp", "en", "dataset:vinaydngowda/autonlp-data-case-classify-xlnet", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
2022-03-02T23:29:05Z
--- tags: autonlp language: en widget: - text: "I love AutoNLP 🤗" datasets: - vinaydngowda/autonlp-data-case-classify-xlnet co2_eq_emissions: 19.964760910364927 --- # Model Trained Using AutoNLP - Problem type: Multi-class Classification - Model ID: 496213536 - CO2 Emissions (in grams): 19.964760910364927 ## Validation Metrics - Loss: 0.7149562835693359 - Accuracy: 0.8092592592592592 - Macro F1: 0.8085189591849891 - Micro F1: 0.8092592592592593 - Weighted F1: 0.8085189591849888 - Macro Precision: 0.8137745564384112 - Micro Precision: 0.8092592592592592 - Weighted Precision: 0.8137745564384112 - Macro Recall: 0.8092592592592592 - Micro Recall: 0.8092592592592592 - Weighted Recall: 0.8092592592592592 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/vinaydngowda/autonlp-case-classify-xlnet-496213536 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("vinaydngowda/autonlp-case-classify-xlnet-496213536", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("vinaydngowda/autonlp-case-classify-xlnet-496213536", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```