modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-09-02 12:32:32) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-09-02 12:31:20) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
yangbang18/zeronlg-4langs-vc
|
yangbang18
| 2023-06-11T16:51:42Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"en",
"zh",
"de",
"fr",
"arxiv:2303.06458",
"license:apache-2.0",
"region:us"
] | null | 2023-04-04T05:47:56Z |
---
language:
- en
- zh
- de
- fr
library_name: sentence-transformers
license: apache-2.0
---
# ZeroNLG
Without any labeled downstream pairs for training, ZeroNLG is a unified framework that handles multiple natural language generation (NLG) tasks in a zero-shot manner, including image-to-text, video-to-text, and text-to-text generation across English, Chinese, German, and French.
[Pre-training data](https://drive.google.com/file/d/1yCLpDLDO5TnoqfyHKwgi51Fw66QliOvM/view?usp=share_link): a machine-translated version of [CC3M](https://huggingface.co/datasets/conceptual_captions), including
- 1.1M English sentences
- 1.1M English-Chinese pairs
- 1.1M English-German pairs
- 1.1M English-French pairs
Paper: [ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation](https://arxiv.org/abs/2303.06458)
Authors: *Bang Yang\*, Fenglin Liu\*, Yuexian Zou, Xian Wu, Yaowei Wang, David A. Clifton*
## Quick Start
Please follow our [github repo](https://github.com/yangbang18/ZeroNLG) to set up the environment first.
```python
from zeronlg import ZeroNLG
# Automatically download the model from Huggingface Hub
# Note: this model is pre-trained specifically for visual captioning
model = ZeroNLG('zeronlg-4langs-vc')
# `images` can be a remote image url, a local image/video file, etc
# `lang` should be one of English ('en'), Chinese ('zh'), German ('de'), and French ('fr')
url = 'https://img2.baidu.com/it/u=1856500011,1563285204&fm=253&fmt=auto&app=138&f=JPEG?w=667&h=500'
caption = model.forward(images=url, lang='en', num_beams=3, task='caption')
# caption = "dogs play in the snow"
caption = model.forward(images=url, lang='zh', num_beams=3, task='caption')
# caption = "狗 在 雪 地 里 玩 耍"
# Alternatively, you can call the task-specific forward function
caption = model.forward_caption(images=url, lang='en', num_beams=3)
```
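Since `images` also accepts a local video file (see the comment above), video captioning should use the same call; a minimal sketch with a hypothetical file path:
```python
# Hypothetical local video path; the call mirrors the image example above
caption = model.forward(images='path/to/video.mp4', lang='en', num_beams=3, task='caption')
```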
## Zero-Shot Performance
### Visual captioning
Model: [zeronlg-4langs-vc](https://huggingface.co/yangbang18/zeronlg-4langs-vc)'s multilingual decoder + CLIP's ViT-B-32 image encoder.
| Dataset | Language | Type | BLEU@1 | BLEU@2 | BLEU@3 | BLEU@4 | METEOR | ROUGE-L | CIDEr-D | SPICE |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| [Flickr30K](https://paperswithcode.com/paper/from-image-descriptions-to-visual-denotations) | English | Image | 46.4 | 27.2 | 15.5 | 8.9 | 13.0 | 31.3 | 21.0 | 7.6
| Flickr30K | [Chinese](https://dl.acm.org/doi/abs/10.1145/3123266.3123366) | Image | 45.3 | 25.5 | 14.6 | 8.4 | - | 31.8 | 18.0 | -
| Flickr30K | [German](https://github.com/multi30k/dataset) | Image | 41.9 | 21.1 | 11.2 | 5.7 | - | 21.2 | 17.1 | -
| Flickr30K | [French](https://github.com/multi30k/dataset) | Image | 19.8 | 9.5 | 5.0 | 2.8 | - | 18.6 | 24.8 | -
| [COCO](https://paperswithcode.com/paper/microsoft-coco-captions-data-collection-and) | English | Image | 47.5 | 29.0 | 16.8 | 9.6 | 14.4 | 34.9 | 29.9 | 8.7
| [MSR-VTT](https://paperswithcode.com/paper/msr-vtt-a-large-video-description-dataset-for) | English | Video | 52.2 | 31.9 | 16.6 | 8.7 | 15.0 | 35.4 | 9.9 | -
| [VATEX](https://paperswithcode.com/paper/vatex-a-large-scale-high-quality-multilingual) | English | Video | 42.2 | 24.6 | 12.5 | 6.3 | 11.7 | 29.3 | 9.1 | -
| VATEX | Chinese | Video | 41.9 | 24.3 | 13.7 | 7.1 | - | 29.6 | 9.8 | -
**Notes:**
- For non-English visual captioning, we do not report METEOR and SPICE, because both rely on English synonym matching and named entity recognition by default.
- For video captioning in English, we do not report SPICE following common practices.
- `Flickr30K-Chinese` is known as `Flickr30K-CN`.
- `Flickr30K-German` and `Flickr30K-French` are introduced in `Multi30K`.
### Cross-modal retrieval
Model: [zeronlg-4langs-vc](https://huggingface.co/yangbang18/zeronlg-4langs-vc)'s multilingual encoder + CLIP's ViT-B-32 image encoder
| Dataset | Language | Type | I2T R@1 | I2T R@5 | I2T R@10 | I2T Mean | T2I R@1 | T2I R@5 | T2I R@10 | T2I Mean | Avg.|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| [Flickr30K](https://paperswithcode.com/paper/from-image-descriptions-to-visual-denotations) | English | Image | 75.2 | 93.9 | 97.1 | 88.7 | 57.1 | 82.2 | 89.1 | 76.1 | 82.4|
| Flickr30K | [Chinese](https://dl.acm.org/doi/abs/10.1145/3123266.3123366) | Image | 75.0 | 93.0 | 96.7 | 88.2 | 53.8 | 79.8 | 87.1 | 73.6 | 80.9|
| Flickr30K | [German](https://github.com/multi30k/dataset) | Image | 70.9 | 91.1 | 95.7 | 85.9 | 47.5 | 74.1 | 83.1 | 68.2 | 77.1|
| Flickr30K | [French](https://github.com/multi30k/dataset) | Image | 55.8 | 83.4 | 91.5 | 76.9 | 56.6 | 81.2 | 88.4 | 75.4 | 76.2|
| [COCO 5K](https://paperswithcode.com/paper/microsoft-coco-captions-data-collection-and) | English | Image | 45.0 | 71.1 | 80.3 | 65.5 | 28.2 | 53.3 | 64.5 | 48.7 | 57.1
| COCO 1K | English | Image | 66.0 | 89.1 | 94.6 | 83.2 | 47.5 | 77.5 | 87.9 | 71.0 | 77.1 |
| [MSR-VTT](https://paperswithcode.com/paper/msr-vtt-a-large-video-description-dataset-for) | English | Video | 32.0 | 55.5 | 65.8 | 51.1 | 17.9 | 36.4 | 45.5 | 33.3 | 42.2
| [VATEX](https://paperswithcode.com/paper/vatex-a-large-scale-high-quality-multilingual) | English | Video | 26.9 | 52.8 | 64.2 | 48.0 | 19.2 | 41.2 | 52.7 | 37.7 | 42.8
| VATEX | Chinese | Video | 40.6 | 70.9 | 82.7 | 64.7 | 28.8 | 58.0 | 70.1 | 52.3 | 58.5 |
**Notes:**
- `I2T`: image-to-text retrieval, image as the query, search similar texts
- `T2I`: text-to-image retrieval, text as the query, search similar images
- `R@K`: recall rate among the top-K retrieved candidates (a minimal computation sketch follows these notes)
- `Avg.`: Average of `R@{1,5,10}` on both directions
- Retrieval uses the same test sets as visual captioning, except for `COCO-1K`, which splits the original test set into 5 folds and reports performance averaged over the folds.
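The recall numbers above can be computed from a query-candidate similarity matrix. A minimal sketch (independent of the ZeroNLG codebase, using random embeddings as stand-ins for the encoder outputs):
```python
import torch

def recall_at_k(sim: torch.Tensor, k: int) -> float:
    """sim[i, j] = similarity of query i and candidate j; the ground-truth match sits on the diagonal."""
    topk = sim.topk(k, dim=1).indices                 # top-k candidate indices per query
    targets = torch.arange(sim.size(0)).unsqueeze(1)  # correct index for each query
    return (topk == targets).any(dim=1).float().mean().item()

# Stand-ins for the (L2-normalized) text and image embeddings of one test set
text_emb = torch.nn.functional.normalize(torch.randn(1000, 512), dim=1)
image_emb = torch.nn.functional.normalize(torch.randn(1000, 512), dim=1)

sim_t2i = text_emb @ image_emb.T   # text query -> image candidates (T2I); transpose for I2T
print({f"T2I R@{k}": recall_at_k(sim_t2i, k) for k in (1, 5, 10)})
```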
## Citation
```bibtex
@article{Yang2023ZeroNLG,
title={ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation},
author={Yang, Bang and Liu, Fenglin and Zou, Yuexian and Wu, Xian and Wang, Yaowei and Clifton, David A.},
journal={arXiv preprint arXiv:2303.06458},
year={2023}
}
```
|
Multi-Domain-Expert-Learning/osiris_12b
|
Multi-Domain-Expert-Learning
| 2023-06-11T16:49:55Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_bigcode",
"text-generation",
"license:bigscience-openrail-m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-10T23:54:45Z |
---
license: bigscience-openrail-m
---
- A distilled version of a merge of starchat_beta (60%), starcoderplus (20%), and starcoder (20%), built using layer removal and merging.
- The layers are merged or removed as follows (see the sketch after this list); each index refers to a layer of the original model, and entries with two or more indices are averaged:
[[0], [1], [2], [3], [4], [5], [6], [7], [8, 9], [10], [11], [13], [14], [15, 16], [17], [18], [19, 20], [21], [22], [23], [24], [27], [28], [29], [30], [31, 32], [33], [34], [36], [37], [39]]
- TODO: finetuning still required
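A hedged sketch of the layer-averaging step on a single `transformers` checkpoint; the source checkpoint name and the `transformer.h.{i}` parameter naming are assumptions for illustration, and this covers only the layer grouping, not the 60/20/20 merge of the three source models:
```python
import re
import torch
from transformers import AutoModelForCausalLM

def merge_layer_groups(model_name: str, groups: list[list[int]]) -> dict:
    """Return a state dict whose i-th transformer block averages the source blocks in groups[i]."""
    sd = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).state_dict()
    # Non-layer weights (embeddings, final norm, lm_head) are copied unchanged
    new_sd = {k: v for k, v in sd.items() if not k.startswith("transformer.h.")}
    for new_idx, group in enumerate(groups):
        for key in sd:
            m = re.match(rf"transformer\.h\.{group[0]}\.(.+)", key)  # layer naming is an assumption
            if m:
                stacked = torch.stack([sd[f"transformer.h.{i}.{m.group(1)}"].float() for i in group])
                new_sd[f"transformer.h.{new_idx}.{m.group(1)}"] = stacked.mean(0).to(sd[key].dtype)
    return new_sd  # load into a config with num_hidden_layers = len(groups)

groups = [[0], [1], [2], [3], [4], [5], [6], [7], [8, 9], [10], [11], [13], [14], [15, 16], [17], [18],
          [19, 20], [21], [22], [23], [24], [27], [28], [29], [30], [31, 32], [33], [34], [36], [37], [39]]
merged = merge_layer_groups("HuggingFaceH4/starchat-beta", groups)  # source checkpoint is an assumption
```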
|
ByteExplorer/ppo-Huggy
|
ByteExplorer
| 2023-06-11T16:35:41Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-11T16:35:35Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ByteExplorer/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
natope/amsterdam_100bm25_passages
|
natope
| 2023-06-11T16:34:54Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-11T16:22:21Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: amsterdam_100bm25_passages
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amsterdam_100bm25_passages
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 5.5628
- Rouge1: 0.1219
- Rouge2: 0.0255
- Rougel: 0.1106
- Rougelsum: 0.1112
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` reconstruction follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
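For reference, these settings roughly correspond to the following `Seq2SeqTrainingArguments`; this is a hedged reconstruction (the `output_dir` and the `predict_with_generate` flag are assumptions), not the actual training script:
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="amsterdam_100bm25_passages",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    predict_with_generate=True,  # needed to compute ROUGE / generation length during evaluation
)
```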
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 58 | 19.3812 | 0.0 | 0.0 | 0.0 | 0.0 | 4.0 |
| No log | 2.0 | 116 | 16.2740 | 0.0053 | 0.0 | 0.0053 | 0.0053 | 4.4667 |
| No log | 3.0 | 174 | 14.9688 | 0.0197 | 0.0042 | 0.0198 | 0.0199 | 5.6667 |
| No log | 4.0 | 232 | 11.6524 | 0.02 | 0.0073 | 0.0177 | 0.018 | 6.6667 |
| No log | 5.0 | 290 | 9.4875 | 0.0255 | 0.0073 | 0.0226 | 0.0231 | 7.8667 |
| No log | 6.0 | 348 | 8.0678 | 0.0503 | 0.0136 | 0.0493 | 0.049 | 10.6 |
| No log | 7.0 | 406 | 6.4898 | 0.0976 | 0.0167 | 0.0858 | 0.0866 | 14.9333 |
| No log | 8.0 | 464 | 6.0317 | 0.113 | 0.0232 | 0.0995 | 0.1001 | 18.9333 |
| 13.519 | 9.0 | 522 | 5.6579 | 0.117 | 0.0238 | 0.1037 | 0.1046 | 19.0 |
| 13.519 | 10.0 | 580 | 5.5628 | 0.1219 | 0.0255 | 0.1106 | 0.1112 | 19.0 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Skafu/swin-tiny-patch4-window7-224-cifar10
|
Skafu
| 2023-06-11T16:30:46Z | 213 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:cifar10",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-11T15:28:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar10
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-cifar10
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar10
type: cifar10
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9718
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-cifar10
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar10 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0818
- Accuracy: 0.9718
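A minimal inference sketch with the `transformers` pipeline (the image path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline("image-classification", model="Skafu/swin-tiny-patch4-window7-224-cifar10")

# Any local file path or image URL accepted by PIL works here
predictions = classifier("path/to/image.png", top_k=3)
print(predictions)  # list of {'label': ..., 'score': ...} dicts
```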
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2427 | 1.0 | 351 | 0.1060 | 0.9656 |
| 0.3361 | 2.0 | 702 | 0.0818 | 0.9718 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
c16192/rldemo
|
c16192
| 2023-06-11T16:16:16Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T16:15:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.36 +/- 18.63
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
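A hedged sketch of what that code could look like; the checkpoint filename inside the repo is an assumption (check the repository's file list), and the snippet assumes stable-baselines3 >= 2.0 with Gymnasium:
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is an assumption -- use the actual *.zip name listed in the repo
checkpoint = load_from_hub(repo_id="c16192/rldemo", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```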
|
YakovElm/Qt10SetFitModel_Train_balance_ratio_1
|
YakovElm
| 2023-06-11T15:47:53Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T15:47:06Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Qt10SetFitModel_Train_balance_ratio_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a minimal training sketch follows).
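For context, a minimal sketch of that two-step recipe with the `setfit` library; the base checkpoint, dataset, and hyperparameters below are illustrative stand-ins, not the ones used to produce this model:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative dataset; the real training data for this checkpoint is not documented here
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")  # assumed base
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # contrastive fine-tuning of the sentence transformer body
    num_iterations=20,                # contrastive pairs generated per example
    num_epochs=1,
)
trainer.train()  # runs both steps: contrastive fine-tuning, then classification-head training
```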
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt10SetFitModel_Train_balance_ratio_1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
gameofdimension/Reinforce-CartPole-v1
|
gameofdimension
| 2023-06-11T15:43:30Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T15:43:18Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and to train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
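For reference, a generic evaluation loop for a CartPole-v1 policy; the `policy` object and its `act()` method are placeholders for the `Policy` class defined in the course notebook, not code shipped with this repo:
```python
import gymnasium as gym
import torch

def evaluate(policy, n_episodes: int = 10) -> float:
    """Roll out the policy and return the mean undiscounted return."""
    env = gym.make("CartPole-v1")
    returns = []
    for _ in range(n_episodes):
        state, _ = env.reset()
        done, total = False, 0.0
        while not done:
            with torch.no_grad():
                action, _ = policy.act(state)  # Policy.act() signature assumed from the course notebook
            state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated
            total += reward
        returns.append(total)
    return sum(returns) / n_episodes
```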
|
Madnesss/fine-tune-all-MiniLM-L6-v2
|
Madnesss
| 2023-06-11T15:21:55Z | 86 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-11T15:10:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Madnesss/fine-tune-all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Madnesss/fine-tune-all-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 398 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
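Put together, those parameters correspond roughly to a `model.fit(...)` call like the one below. This is a hedged reconstruction: the base checkpoint is assumed from the repo name, and the training pairs are placeholders (the 398-batch dataset is not described in this card):
```python
import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # base model assumed

# Placeholder sentence pairs with similarity labels
train_examples = [InputExample(texts=["first sentence", "second sentence"], label=0.8)]
train_dataloader = DataLoader(train_examples, batch_size=16, shuffle=False)  # sequential sampling, as above
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=5,
    scheduler="WarmupLinear",
    warmup_steps=100,
    optimizer_class=torch.optim.AdamW,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```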
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
Madnesss/fine-tune-paraphrase-multilingual-mpnet-base-v2
|
Madnesss
| 2023-06-11T15:20:18Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-06-11T15:12:00Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Madnesss/fine-tune-paraphrase-multilingual-mpnet-base-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Madnesss/fine-tune-paraphrase-multilingual-mpnet-base-v2')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Madnesss/fine-tune-paraphrase-multilingual-mpnet-base-v2')
model = AutoModel.from_pretrained('Madnesss/fine-tune-paraphrase-multilingual-mpnet-base-v2')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 398 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
sdfasfasfasfasfa/yugioh
|
sdfasfasfasfasfa
| 2023-06-11T15:16:20Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | 2023-06-11T15:10:33Z |
---
license: openrail
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
YakovElm/Qt10SetFitModel_Train_balance_ratio_Half
|
YakovElm
| 2023-06-11T15:16:18Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T15:15:42Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Qt10SetFitModel_Train_balance_ratio_Half
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt10SetFitModel_Train_balance_ratio_Half")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
thanhnguyentung/demo-onnx
|
thanhnguyentung
| 2023-06-11T15:16:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-11T14:43:38Z |
To run the *.pt* file, please use the YOLOv5 repo; this repo was created only for demo purposes.
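A hedged sketch of loading a YOLOv5 *.pt* checkpoint for inference via `torch.hub` (the weight filename and image path are placeholders):
```python
import torch

# 'custom' is the documented YOLOv5 hub entry point for user-trained weights
model = torch.hub.load("ultralytics/yolov5", "custom", path="path/to/weights.pt")

results = model("path/to/image.jpg")  # any local image path or URL
results.print()                       # summary of detections per image
```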
|
l3cube-pune/indic-sentence-similarity-sbert
|
l3cube-pune
| 2023-06-11T15:08:45Z | 1,868 | 6 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"en",
"hi",
"mr",
"kn",
"ta",
"te",
"ml",
"gu",
"or",
"pa",
"bn",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-04T07:53:02Z |
---
pipeline_tag: sentence-similarity
license: cc-by-4.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- multilingual
- en
- hi
- mr
- kn
- ta
- te
- ml
- gu
- or
- pa
- bn
widget:
- source_sentence: "दिवाळी आपण मोठ्या उत्साहाने साजरी करतो"
sentences:
- "दिवाळी आपण आनंदाने साजरी करतो"
- "दिवाळी हा दिव्यांचा सण आहे"
example_title: "Monolingual- Marathi"
- source_sentence: "हम दीपावली उत्साह के साथ मनाते हैं"
sentences:
- "हम दीपावली खुशियों से मनाते हैं"
- "दिवाली रोशनी का त्योहार है"
example_title: "Monolingual- Hindi"
- source_sentence: "અમે ઉત્સાહથી દિવાળી ઉજવીએ છીએ"
sentences:
- "દિવાળી આપણે ખુશીઓથી ઉજવીએ છીએ"
- "દિવાળી એ રોશનીનો તહેવાર છે"
example_title: "Monolingual- Gujarati"
- source_sentence: "आम्हाला भारतीय असल्याचा अभिमान आहे"
sentences:
- "हमें भारतीय होने पर गर्व है"
- "భారతీయులమైనందుకు గర్విస్తున్నాం"
- "અમને ભારતીય હોવાનો ગર્વ છે"
example_title: "Cross-lingual 1"
- source_sentence: "ਬਾਰਿਸ਼ ਤੋਂ ਬਾਅਦ ਬਗੀਚਾ ਸੁੰਦਰ ਦਿਖਾਈ ਦਿੰਦਾ ਹੈ"
sentences:
- "മഴയ്ക്ക് ശേഷം പൂന്തോട്ടം മനോഹരമായി കാണപ്പെടുന്നു"
- "ବର୍ଷା ପରେ ବଗିଚା ସୁନ୍ଦର ଦେଖାଯାଏ |"
- "बारिश के बाद बगीचा सुंदर दिखता है"
example_title: "Cross-lingual 2"
---
# IndicSBERT-STS
This is an IndicSBERT model (<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'>l3cube-pune/indic-sentence-bert-nli</a>) trained on the STS dataset of ten major Indian languages. <br>
The single model works for English, Hindi, Marathi, Kannada, Tamil, Telugu, Gujarati, Oriya, Punjabi, Malayalam, and Bengali.
The model also has cross-lingual capabilities. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
Generic Indic Sentence BERT model is shared here : <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'>l3cube-pune/indic-sentence-bert-nli</a> <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434)
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/indic-sentence-similarity-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
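Since this is a similarity model, you will usually want cosine scores between the embeddings. A short follow-up sketch with `sentence_transformers.util`, reusing sentences from the widget examples above:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('l3cube-pune/indic-sentence-similarity-sbert')

source = "दिवाळी आपण मोठ्या उत्साहाने साजरी करतो"
candidates = ["दिवाळी आपण आनंदाने साजरी करतो", "दिवाळी हा दिव्यांचा सण आहे"]

scores = util.cos_sim(model.encode(source), model.encode(candidates))
print(scores)  # higher = more similar
```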
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/indic-sentence-similarity-sbert')
model = AutoModel.from_pretrained('l3cube-pune/indic-sentence-similarity-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
l3cube-pune/bengali-sentence-similarity-sbert
|
l3cube-pune
| 2023-06-11T15:08:16Z | 84 | 4 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"bn",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-02-25T12:48:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: bn
widget:
- source_sentence: "লোকটি কুড়াল দিয়ে একটি গাছ কেটে ফেলল"
sentences:
- "একজন লোক কুড়াল দিয়ে একটি গাছের নিচে চপ করে"
- "একজন লোক গিটার বাজছে"
- "একজন মহিলা ঘোড়ায় চড়ে"
example_title: "Example 1"
- source_sentence: "একটি গোলাপী সাইকেল একটি বিল্ডিংয়ের সামনে রয়েছে"
sentences:
- "কিছু ধ্বংসাবশেষের সামনে একটি সাইকেল"
- "গোলাপী দুটি ছোট মেয়ে নাচছে"
- "ভেড়া গাছের লাইনের সামনে মাঠে চারণ করছে"
example_title: "Example 2"
- source_sentence: "আলোর গতি সসীম হওয়ার গতি আমাদের মহাবিশ্বের অন্যতম মৌলিক"
sentences:
- "আলোর গতি কত?"
- "আলোর গতি সসীম"
- "আলো মহাবিশ্বের দ্রুততম জিনিস"
example_title: "Example 3"
---
# BengaliSBERT-STS
This is a BengaliSBERT model (l3cube-pune/bengali-sentence-bert-nli) fine-tuned on the STS dataset. <br>
Released as a part of project MahaNLP : https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual sentence similarity is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> indic-sentence-similarity-sbert </a> <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434)
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/bengali-sentence-similarity-sbert')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/bengali-sentence-similarity-sbert')
model = AutoModel.from_pretrained('l3cube-pune/bengali-sentence-similarity-sbert')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
l3cube-pune/malayalam-sentence-bert-nli
|
l3cube-pune
| 2023-06-11T15:05:28Z | 36 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"ml",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-04T07:39:02Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: ml
widget:
- source_sentence: "കുട്ടികൾ പാർക്കിൽ കളിക്കാൻ ഇഷ്ടപ്പെടുന്നു"
sentences:
- "କେବଳ ପିଲାମାନଙ୍କୁ ପାର୍କରେ ଖେଳିବାକୁ ଅନୁମତି ଦିଆଯାଇଛି"
- "പാർക്കിൽ കുട്ടികൾക്ക് മാത്രമേ കളിക്കാൻ അനുവാദമുള്ളൂ"
- "കുട്ടികൾ പന്തുമായി കളിക്കാൻ ഇഷ്ടപ്പെടുന്നു"
example_title: "Example 1"
- source_sentence: "പെയിന്റിംഗ് എന്റെ ഹോബിയാണ് "
sentences:
- "നൃത്തം എന്റെ ഹോബിയാണ്"
- "എനിക്ക് ധാരാളം ഹോബികൾ ഉണ്ട് "
- "പെയിന്റിംഗും നൃത്തവും ഞാൻ ആസ്വദിക്കുന്നു"
example_title: "Example 2"
- source_sentence: "2 മണിക്കൂറിനുള്ളിൽ നിങ്ങൾക്ക് നഗരത്തിലേക്ക് പോകാം"
sentences:
- "2 മണിക്കൂർ കൊണ്ട് നഗരത്തിലെത്താം"
- "യാത്രാ ദൈർഘ്യം 2 മണിക്കൂർ മാത്രം"
- "പുതിയ സ്ഥലങ്ങളിലേക്കുള്ള യാത്ര എനിക്ക് ഇഷ്ടമാണ്"
example_title: "Example 3"
---
# MalayalamSBERT
This is a MalayalamBERT model (l3cube-pune/malayalam-bert) trained on the NLI dataset. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual capabilities is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> indic-sentence-bert-nli </a> <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434)
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/malayalam-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/malayalam-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/malayalam-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
ak6560/my_luner_lander
|
ak6560
| 2023-06-11T15:02:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T15:00:55Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.19 +/- 17.19
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
l3cube-pune/indic-sentence-bert-nli
|
l3cube-pune
| 2023-06-11T14:59:00Z | 72,792 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"multilingual",
"en",
"hi",
"mr",
"kn",
"ta",
"te",
"ml",
"gu",
"or",
"pa",
"bn",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-04T07:47:54Z |
---
pipeline_tag: sentence-similarity
license: cc-by-4.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
language:
- multilingual
- en
- hi
- mr
- kn
- ta
- te
- ml
- gu
- or
- pa
- bn
widget:
- source_sentence: दिवाळी आपण मोठ्या उत्साहाने साजरी करतो
sentences:
- दिवाळी आपण आनंदाने साजरी करतो
- दिवाळी हा दिव्यांचा सण आहे
example_title: Monolingual- Marathi
- source_sentence: हम दीपावली उत्साह के साथ मनाते हैं
sentences:
- हम दीपावली खुशियों से मनाते हैं
- दिवाली रोशनी का त्योहार है
example_title: Monolingual- Hindi
- source_sentence: અમે ઉત્સાહથી દિવાળી ઉજવીએ છીએ
sentences:
- દિવાળી આપણે ખુશીઓથી ઉજવીએ છીએ
- દિવાળી એ રોશનીનો તહેવાર છે
example_title: Monolingual- Gujarati
- source_sentence: आम्हाला भारतीय असल्याचा अभिमान आहे
sentences:
- हमें भारतीय होने पर गर्व है
- భారతీయులమైనందుకు గర్విస్తున్నాం
- અમને ભારતીય હોવાનો ગર્વ છે
example_title: Cross-lingual 1
- source_sentence: ਬਾਰਿਸ਼ ਤੋਂ ਬਾਅਦ ਬਗੀਚਾ ਸੁੰਦਰ ਦਿਖਾਈ ਦਿੰਦਾ ਹੈ
sentences:
- മഴയ്ക്ക് ശേഷം പൂന്തോട്ടം മനോഹരമായി കാണപ്പെടുന്നു
- ବର୍ଷା ପରେ ବଗିଚା ସୁନ୍ଦର ଦେଖାଯାଏ |
- बारिश के बाद बगीचा सुंदर दिखता है
example_title: Cross-lingual 2
---
# IndicSBERT
This is a MuRIL model (google/muril-base-cased) trained on the NLI dataset of ten major Indian Languages. <br>
The single model works for English, Hindi, Marathi, Kannada, Tamil, Telugu, Gujarati, Oriya, Punjabi, Malayalam, and Bengali.
The model also has cross-lingual capabilities. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434)
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/indic-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
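Because the model is cross-lingual, sentences from different Indic languages can be embedded into the same space and compared directly. A minimal sketch using a sentence pair from the widget above (the exact similarity score depends on the checkpoint):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('l3cube-pune/indic-sentence-bert-nli')

# "We are proud to be Indian" in Marathi and Hindi (taken from the widget examples above)
marathi = "आम्हाला भारतीय असल्याचा अभिमान आहे"
hindi = "हमें भारतीय होने पर गर्व है"
embeddings = model.encode([marathi, hindi])

print(util.cos_sim(embeddings[0], embeddings[1]))
```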
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/indic-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/indic-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
l3cube-pune/bengali-sentence-bert-nli
|
l3cube-pune
| 2023-06-11T14:58:21Z | 15 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"bn",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-04T07:29:36Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: bn
widget:
- source_sentence: "লোকটি কুড়াল দিয়ে একটি গাছ কেটে ফেলল"
sentences:
- "একজন লোক কুড়াল দিয়ে একটি গাছের নিচে চপ করে"
- "একজন লোক গিটার বাজছে"
- "একজন মহিলা ঘোড়ায় চড়ে"
example_title: "Example 1"
- source_sentence: "একটি গোলাপী সাইকেল একটি বিল্ডিংয়ের সামনে রয়েছে"
sentences:
- "কিছু ধ্বংসাবশেষের সামনে একটি সাইকেল"
- "গোলাপী দুটি ছোট মেয়ে নাচছে"
- "ভেড়া গাছের লাইনের সামনে মাঠে চারণ করছে"
example_title: "Example 2"
- source_sentence: "আলোর গতি সসীম হওয়ার গতি আমাদের মহাবিশ্বের অন্যতম মৌলিক"
sentences:
- "আলোর গতি কত?"
- "আলোর গতি সসীম"
- "আলো মহাবিশ্বের দ্রুততম জিনিস"
example_title: "Example 3"
---
# BengaliSBERT
This is a BengaliBERT model (l3cube-pune/bengali-bert) trained on the NLI dataset. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual capabilities is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> indic-sentence-bert-nli </a> <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434).
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/bengali-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take the embedding of the first token ([CLS])
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/bengali-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/bengali-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
YakovElm/Qt5SetFitModel_Train_balance_ratio_4
|
YakovElm
| 2023-06-11T14:58:12Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T14:57:38Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/Qt5SetFitModel_Train_balance_ratio_4
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
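A minimal training sketch of this two-step procedure using the SetFit trainer API. The base Sentence Transformer, the toy dataset, and the hyperparameters below are illustrative assumptions, not the configuration used for this checkpoint:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny illustrative few-shot dataset (label 1 = positive, 0 = negative)
train_ds = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮",
             "great acting and a fun plot", "that meal was disgusting"],
    "label": [1, 0, 1, 0],
})

# Assumed base model; the actual base checkpoint used for this model is not stated
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedding body
    num_iterations=20,                # number of contrastive text pairs generated per example
    batch_size=16,
)
trainer.train()  # runs step 1, then fits the classification head (step 2)
preds = trainer.model(["a wonderful film"])
```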
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/Qt5SetFitModel_Train_balance_ratio_4")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
l3cube-pune/odia-sentence-bert-nli
|
l3cube-pune
| 2023-06-11T14:58:09Z | 3 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"or",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-04T07:20:18Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: or
widget:
- source_sentence: "লোকটি কুড়াল দিয়ে একটি গাছ কেটে ফেলল"
sentences:
- "একজন লোক কুড়াল দিয়ে একটি গাছের নিচে চপ করে"
- "একজন লোক গিটার বাজছে"
- "একজন মহিলা ঘোড়ায় চড়ে"
example_title: "Example 1"
- source_sentence: "একটি গোলাপী সাইকেল একটি বিল্ডিংয়ের সামনে রয়েছে"
sentences:
- "কিছু ধ্বংসাবশেষের সামনে একটি সাইকেল"
- "গোলাপী দুটি ছোট মেয়ে নাচছে"
- "ভেড়া গাছের লাইনের সামনে মাঠে চারণ করছে"
example_title: "Example 2"
- source_sentence: "আলোর গতি সসীম হওয়ার গতি আমাদের মহাবিশ্বের অন্যতম মৌলিক"
sentences:
- "আলোর গতি কত?"
- "আলোর গতি সসীম"
- "আলো মহাবিশ্বের দ্রুততম জিনিস"
example_title: "Example 3"
---
# OdiaSBERT
This is an OdiaBERT model (l3cube-pune/odia-bert) trained on the NLI dataset. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual capabilities is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> indic-sentence-bert-nli </a> <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434).
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/odia-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take the embedding of the first token ([CLS])
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/odia-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/odia-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
l3cube-pune/gujarati-sentence-bert-nli
|
l3cube-pune
| 2023-06-11T14:57:45Z | 6 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"gu",
"arxiv:2304.11434",
"arxiv:2211.11187",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-03-04T07:09:20Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: gu
widget:
- source_sentence: "એક માણસ દોરડા પર ચઢી રહ્યો છે"
sentences:
- "એક માણસ દોરડા પર ચઢે છે"
- "એક માણસ દિવાલ પર ચઢી રહ્યો છે"
- "એક માણસ વાંસળી વગાડી રહ્યો છે"
example_title: "Example 1"
- source_sentence: "કેટલાક લોકો ગાતા હોય છે"
sentences:
- "લોકોનું એક જૂથ ગાય છે"
- "એક બિલાડી દૂધ પી રહી છે"
- "બે માણસો લડી રહ્યા છે"
example_title: "Example 2"
- source_sentence: "હું પહેલીવાર વિમાનમાં બેઠો"
sentences:
- "તે મારી પ્રથમ વિમાનની મુસાફરી હતી"
- "હું પહેલીવાર ટ્રેનમાં બેઠો"
- "મને નવી જગ્યાઓ પર ફરવાનું પસંદ છે"
example_title: "Example 3"
---
# GujaratiSBERT
This is a GujaratiBERT model (l3cube-pune/gujarati-bert) trained on the NLI dataset. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual capabilities is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> indic-sentence-bert-nli </a> <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2304.11434).
```
@article{deode2023l3cube,
title={L3Cube-IndicSBERT: A simple approach for learning cross-lingual sentence representations using multilingual BERT},
author={Deode, Samruddhi and Gadre, Janhavi and Kajale, Aditi and Joshi, Ananya and Joshi, Raviraj},
journal={arXiv preprint arXiv:2304.11434},
year={2023}
}
```
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/gujarati-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/gujarati-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/gujarati-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
LarryAIDraw/Artoria_Pendragon-000019
|
LarryAIDraw
| 2023-06-11T14:57:17Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T12:38:07Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/28380/ironcatlora-1-artoria-pendragon-saber
|
LarryAIDraw/saber
|
LarryAIDraw
| 2023-06-11T14:56:59Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T12:37:44Z |
---
license: creativeml-openrail-m
---
https://civitai.com/models/24183/altria-pendragonsaber
|
ByteExplorer/ppo-LunarLander-v2
|
ByteExplorer
| 2023-06-11T14:56:20Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T14:55:59Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.52 +/- 19.56
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch; the checkpoint filename below is an assumption about how the file is stored in this repo:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load it (filename is assumed)
checkpoint = load_from_hub("ByteExplorer/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
l3cube-pune/hindi-sentence-bert-nli
|
l3cube-pune
| 2023-06-11T14:55:55Z | 58 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"hi",
"arxiv:2211.11187",
"arxiv:2304.11434",
"license:cc-by-4.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-11-11T07:18:57Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: cc-by-4.0
language: hi
widget:
- source_sentence: "एक आदमी एक रस्सी पर चढ़ रहा है"
sentences:
- "एक आदमी एक रस्सी पर चढ़ता है"
- "एक आदमी एक दीवार पर चढ़ रहा है"
- "एक आदमी बांसुरी बजाता है"
example_title: "Example 1"
- source_sentence: "कुछ लोग गा रहे हैं"
sentences:
- "लोगों का एक समूह गाता है"
- "बिल्ली दूध पी रही है"
- "दो आदमी लड़ रहे हैं"
example_title: "Example 2"
- source_sentence: "फेडरर ने 7वां विंबलडन खिताब जीत लिया है"
sentences:
- "फेडरर अपने करियर में कुल 20 ग्रैंडस्लैम खिताब जीत चुके है "
- "फेडरर ने सितंबर में अपने निवृत्ति की घोषणा की"
- "एक आदमी कुछ खाना पकाने का तेल एक बर्तन में डालता है"
example_title: "Example 3"
---
# HindSBERT
This is a HindBERT model (l3cube-pune/hindi-bert-v2) trained on the NLI dataset. <br>
Released as a part of project MahaNLP: https://github.com/l3cube-pune/MarathiNLP <br>
A multilingual version of this model supporting major Indic languages and cross-lingual capabilities is shared here <a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> indic-sentence-bert-nli </a> <br>
A better sentence similarity model (fine-tuned version of this model) is shared here: https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11187).
```
@article{joshi2022l3cubemahasbert,
title={L3Cube-MahaSBERT and HindSBERT: Sentence BERT Models and Benchmarking BERT Sentence Representations for Hindi and Marathi},
author={Joshi, Ananya and Kajale, Aditi and Gadre, Janhavi and Deode, Samruddhi and Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11187},
year={2022}
}
```
<a href='https://arxiv.org/abs/2211.11187'> monolingual Indic SBERT paper </a> <br>
<a href='https://arxiv.org/abs/2304.11434'> multilingual Indic SBERT paper </a>
Other Monolingual Indic sentence BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-bert-nli'> Marathi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-bert-nli'> Hindi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-bert-nli'> Kannada SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-bert-nli'> Telugu SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-bert-nli'> Malayalam SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-bert-nli'> Tamil SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-bert-nli'> Gujarati SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-bert-nli'> Oriya SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-bert-nli'> Bengali SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-bert-nli'> Punjabi SBERT</a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-bert-nli'> Indic SBERT (multilingual)</a> <br>
Other Monolingual similarity models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-sentence-similarity-sbert'> Marathi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-sentence-similarity-sbert'> Hindi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-sentence-similarity-sbert'> Kannada Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-sentence-similarity-sbert'> Telugu Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-sentence-similarity-sbert'> Malayalam Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-sentence-similarity-sbert'> Tamil Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-sentence-similarity-sbert'> Gujarati Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-sentence-similarity-sbert'> Oriya Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-sentence-similarity-sbert'> Bengali Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-sentence-similarity-sbert'> Punjabi Similarity </a> <br>
<a href='https://huggingface.co/l3cube-pune/indic-sentence-similarity-sbert'> Indic Similarity (multilingual)</a> <br>
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('l3cube-pune/hindi-sentence-bert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#CLS Pooling - Take the embedding of the first token ([CLS])
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('l3cube-pune/hindi-sentence-bert-nli')
model = AutoModel.from_pretrained('l3cube-pune/hindi-sentence-bert-nli')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
|
polejowska/detr-r50-cd45rb-16ah-6l
|
polejowska
| 2023-06-11T14:55:39Z | 159 | 0 |
transformers
|
[
"transformers",
"pytorch",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:cd45rb",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2023-06-11T10:40:51Z |
---
tags:
- generated_from_trainer
datasets:
- cd45rb
model-index:
- name: detr-r50-cd45rb-16ah-6l
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# detr-r50-cd45rb-16ah-6l
This model is a fine-tuned version of [](https://huggingface.co/) on the cd45rb dataset.
It achieves the following results on the evaluation set:
- Loss: 5.0243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
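For reference, a hedged sketch of how the hyperparameters above map onto `TrainingArguments`; dataset loading, the image processor, and the DETR-specific collate function are omitted:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above (the Adam betas/epsilon are the library defaults)
training_args = TrainingArguments(
    output_dir="detr-r50-cd45rb-16ah-6l",
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,  # mixed_precision_training: Native AMP
)
```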
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.8792 | 1.0 | 4606 | 4.9732 |
| 3.72 | 2.0 | 9212 | 5.0395 |
| 3.6752 | 3.0 | 13818 | 5.0140 |
| 3.6735 | 4.0 | 18424 | 5.0349 |
| 3.665 | 5.0 | 23030 | 5.0243 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Halcyonindo/anihijabbb
|
Halcyonindo
| 2023-06-11T14:55:23Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T14:54:17Z |
---
license: creativeml-openrail-m
---
|
BlackB/bt5-base-thai-en
|
BlackB
| 2023-06-11T14:49:16Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"th",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-19T13:44:06Z |
---
language:
- th
---
## Example: 20 generated sequences
```
num_beams=200, num_return_sequences=20, max_length=10
input = 'อินเดียเป็น <extra_id_0> ของโลก </s>'
output = ['อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศที่ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศ ที่มีขนาดใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศที่ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ประเทศที่มีขนาดใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ดินแดน ของโลก </s>',
'อินเดียเป็น อันดับ 1 ของโลก </s>',
'อินเดียเป็น ประเทศทางประวัติศาสตร์ ของโลก </s>',
'อินเดียเป็น อันดับ 1 ของโลก </s>',
'อินเดียเป็น ศูนย์กลางทางประวัติศาสตร์ ของโลก </s>',
'อินเดียเป็น ดินแดน ของโลก </s>',
'อินเดียเป็น ประเทศ ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ประเทศที่มีขนาดใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ประเทศ อันดับ 1 ของโลก </s>',
'อินเดียเป็น ประเทศ ที่ใหญ่ที่สุด ของโลก </s>',
'อินเดียเป็น ศูนย์กลาง ของโลก </s>',
'อินเดียเป็น ประเทศ อันดับ 1 ของโลก </s>']
```
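A hedged sketch of the generation call implied by the parameters above, assuming the checkpoint follows the standard T5 text2text interface; how the raw decoder output is mapped to the filled-in sentences shown is not specified, so the decoding step is an assumption:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "BlackB/bt5-base-thai-en"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The tokenizer appends </s> automatically
inputs = tokenizer("อินเดียเป็น <extra_id_0> ของโลก", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=200,
    num_return_sequences=20,
    max_length=10,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=False))
```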
|
arabi-elidrisi/ArabicDistilBERT
|
arabi-elidrisi
| 2023-06-11T14:44:09Z | 160 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"arabic language ",
"arabic disilbert ",
"bert",
"knowledge distillation",
"ar",
"dataset:oscar",
"dataset:wikipedia",
"dataset:arcd",
"endpoints_compatible",
"region:us"
] | null | 2023-06-11T12:49:43Z |
---
datasets:
- oscar
- wikipedia
- arcd
language:
- ar
metrics:
- f1
tags:
- 'arabic language '
- 'arabic disilbert '
- distilbert
- bert
- knowledge distillation
---
|
l3cube-pune/gujarati-bert-scratch
|
l3cube-pune
| 2023-06-11T14:39:22Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"gu",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-27T15:00:26Z |
---
license: cc-by-4.0
language: gu
---
## GujaratiBERT-Scratch
GujaratiBERT is a Gujarati BERT model trained on publicly available Gujarati monolingual datasets from scratch.
Preliminary details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
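Since GujaratiBERT is a masked-language model, it can be queried with the standard fill-mask pipeline. A minimal sketch (the Gujarati example sentence is illustrative and the top predictions depend on this checkpoint):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/gujarati-bert-scratch")

# "We celebrate [MASK] with enthusiasm" - illustrative Gujarati input
for pred in fill_mask("અમે ઉત્સાહથી [MASK] ઉજવીએ છીએ"):
    print(pred["token_str"], pred["score"])
```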
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Models trained from scratch are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-scratch'> Marathi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-tweets-bert-scratch'> Marathi-Tweets-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-scratch'> Hindi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert-scratch'> Dev-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert-scratch'> Kannada-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert-scratch'> Telugu-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert-scratch'> Malayalam-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert-scratch'> Gujarati-Scratch </a> <br>
Better versions of Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
|
l3cube-pune/marathi-tweets-bert-scratch
|
l3cube-pune
| 2023-06-11T14:38:19Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2210.04267",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-11T05:43:12Z |
---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaTweetBERT-Scratch
A base BERT model trained on Marathi tweets.
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267).
Released under project: https://github.com/l3cube-pune/MarathiNLP <br>
A better version of this model is available here: https://huggingface.co/l3cube-pune/marathi-tweets-bert <br>
```
@article{gokhale2022spread,
title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection},
author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj},
journal={arXiv preprint arXiv:2210.04267},
year={2022}
}
```
Other Models trained from scratch are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-scratch'> Marathi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-tweets-bert-scratch'> Marathi-Tweets-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-scratch'> Hindi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert-scratch'> Dev-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert-scratch'> Kannada-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert-scratch'> Telugu-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert-scratch'> Malayalam-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert-scratch'> Gujarati-Scratch </a> <br>
Better versions of Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
|
l3cube-pune/marathi-bert-scratch
|
l3cube-pune
| 2023-06-11T14:38:09Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"mr",
"dataset:L3Cube-MahaCorpus",
"arxiv:2202.01159",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-09-01T06:16:43Z |
---
license: cc-by-4.0
language: mr
datasets:
- L3Cube-MahaCorpus
---
## MahaBERT-Scratch
MahaBERT is a Marathi BERT model. It is a base-BERT model trained from scratch on L3Cube-MahaCorpus and other publicly available Marathi monolingual datasets.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP) <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2202.01159). <br>
The best version of this model is available <a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'>here</a>.
```
@InProceedings{joshi:2022:WILDRE6,
author = {Joshi, Raviraj},
title = {L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources},
booktitle = {Proceedings of The WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference},
month = {June},
year = {2022},
address = {Marseille, France},
publisher = {European Language Resources Association},
pages = {97--101}
}
```
Other Models trained from scratch are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-scratch'> Marathi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-tweets-bert-scratch'> Marathi-Tweets-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-scratch'> Hindi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert-scratch'> Dev-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert-scratch'> Kannada-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert-scratch'> Telugu-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert-scratch'> Malayalam-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert-scratch'> Gujarati-Scratch </a> <br>
Better versions of Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
|
l3cube-pune/kannada-bert-scratch
|
l3cube-pune
| 2023-06-11T14:37:52Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"kn",
"arxiv:2211.11418",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-11-27T15:28:46Z |
---
license: cc-by-4.0
language: kn
---
## KannadaBERT-Scratch
KannadaBERT is a Kannada BERT model trained on publicly available Kannada monolingual datasets from scratch.
Preliminary details on the dataset, models, and baseline results can be found in our <a href='https://arxiv.org/abs/2211.11418'>paper</a>.
Citing:
```
@article{joshi2022l3cubehind,
title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
author={Joshi, Raviraj},
journal={arXiv preprint arXiv:2211.11418},
year={2022}
}
```
Other Models trained from scratch are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-scratch'> Marathi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-tweets-bert-scratch'> Marathi-Tweets-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-scratch'> Hindi-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert-scratch'> Dev-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert-scratch'> Kannada-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert-scratch'> Telugu-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert-scratch'> Malayalam-Scratch </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert-scratch'> Gujarati-Scratch </a> <br>
Better versions of Monolingual Indic BERT models are listed below: <br>
<a href='https://huggingface.co/l3cube-pune/marathi-bert-v2'> Marathi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-roberta'> Marathi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/marathi-albert'> Marathi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'> Hindi BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-roberta'> Hindi RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-albert'> Hindi AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-bert'> Dev BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-roberta'> Dev RoBERTa </a> <br>
<a href='https://huggingface.co/l3cube-pune/hindi-marathi-dev-albert'> Dev AlBERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/kannada-bert'> Kannada BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/telugu-bert'> Telugu BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/malayalam-bert'> Malayalam BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/tamil-bert'> Tamil BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/gujarati-bert'> Gujarati BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/odia-bert'> Oriya BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/bengali-bert'> Bengali BERT </a> <br>
<a href='https://huggingface.co/l3cube-pune/punjabi-bert'> Punjabi BERT </a> <br>
|
YakovElm/Cross_Project_20_Classic
|
YakovElm
| 2023-06-11T14:37:09Z | 61 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-11T14:36:33Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Cross_Project_20_Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Cross_Project_20_Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1536
- Train Accuracy: 0.9481
- Validation Loss: 0.2913
- Validation Accuracy: 0.9217
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2151 | 0.9400 | 0.2409 | 0.9224 | 0 |
| 0.1882 | 0.9423 | 0.2402 | 0.9253 | 1 |
| 0.1536 | 0.9481 | 0.2913 | 0.9217 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
l3cube-pune/mahahate-bert
|
l3cube-pune
| 2023-06-11T14:33:26Z | 130 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"mr",
"dataset:L3Cube-MahaHate",
"arxiv:2203.13778",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-11T15:12:47Z |
---
language: mr
license: cc-by-4.0
datasets:
- L3Cube-MahaHate
widget:
- text: "I like you. </s></s> I love you."
---
## MahaHate-BERT
MahaHate-BERT (Marathi hate speech identification) is a MahaBERT (l3cube-pune/marathi-bert) model fine-tuned on L3Cube-MahaHate, a Marathi tweet-based hate speech detection dataset. This is a two-class model with labels hate (LABEL_1) and not hate (LABEL_0). The 4-class model can be found <a href='https://huggingface.co/l3cube-pune/mahahate-multi-roberta'>here</a>.
[Dataset link](https://github.com/l3cube-pune/MarathiNLP) <br>
More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2203.13778).
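A minimal inference sketch with the text-classification pipeline; the Marathi input is illustrative and the label meanings are as described above:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="l3cube-pune/mahahate-bert")

# LABEL_1 = hate, LABEL_0 = not hate; illustrative, non-hateful Marathi sentence
print(classifier("दिवाळी आपण मोठ्या उत्साहाने साजरी करतो"))
```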
```
@inproceedings{patil2022l3cube,
title={L3Cube-MahaHate: A Tweet-based Marathi Hate Speech Detection Dataset and BERT Models},
author={Patil, Hrushikesh and Velankar, Abhishek and Joshi, Raviraj},
booktitle={Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022)},
pages={1--9},
year={2022}
}
```
|
Jammal7/t5-small-finetuned-Big-Patents
|
Jammal7
| 2023-06-11T14:27:59Z | 64 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-11T13:18:16Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jammal7/t5-small-finetuned-Big-Patents
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jammal7/t5-small-finetuned-Big-Patents
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the [big_patent](https://huggingface.co/datasets/big_patent) dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1148
- Validation Loss: 2.0812
- Train Rouge1: 23.0058
- Train Rouge2: 12.5142
- Train Rougel: 19.9076
- Train Rougelsum: 20.8720
- Train Gen Len: 19.0
- Epoch: 14
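A hedged inference sketch; the `framework="tf"` flag and the sample patent text are assumptions (the repository appears to ship TensorFlow weights only):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Jammal7/t5-small-finetuned-Big-Patents",
    framework="tf",  # assumption: TensorFlow weights are published for this checkpoint
)

patent_text = (
    "A method and apparatus for cooling an electronic device is described, in which a liquid "
    "coolant is circulated through microchannels formed in a heat sink attached to the device."
)
print(summarizer(patent_text, max_length=64, min_length=20))
```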
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.6552 | 2.3220 | 21.3700 | 10.2105 | 17.5338 | 18.8802 | 18.9929 | 0 |
| 2.4395 | 2.2535 | 22.2253 | 11.2822 | 18.7238 | 19.9181 | 19.0 | 1 |
| 2.3755 | 2.2174 | 22.2327 | 11.5575 | 18.8348 | 19.9379 | 19.0 | 2 |
| 2.3329 | 2.1886 | 22.5328 | 11.8225 | 19.2582 | 20.3227 | 19.0 | 3 |
| 2.3004 | 2.1681 | 22.7872 | 12.1642 | 19.5194 | 20.5907 | 19.0 | 4 |
| 2.2696 | 2.1579 | 22.6969 | 12.0848 | 19.5012 | 20.5564 | 19.0 | 5 |
| 2.2487 | 2.1385 | 22.7300 | 12.2347 | 19.5685 | 20.5822 | 19.0 | 6 |
| 2.2249 | 2.1347 | 22.7250 | 12.2077 | 19.5103 | 20.5246 | 19.0 | 7 |
| 2.2060 | 2.1265 | 22.8921 | 12.4139 | 19.6903 | 20.7126 | 19.0 | 8 |
| 2.1892 | 2.1126 | 22.9440 | 12.4268 | 19.7717 | 20.8156 | 19.0 | 9 |
| 2.1740 | 2.1084 | 22.9201 | 12.4491 | 19.8107 | 20.8059 | 19.0 | 10 |
| 2.1580 | 2.0980 | 22.9947 | 12.3798 | 19.7494 | 20.7852 | 19.0 | 11 |
| 2.1414 | 2.0934 | 23.0885 | 12.5899 | 19.9531 | 20.9145 | 19.0 | 12 |
| 2.1302 | 2.0809 | 23.0854 | 12.5580 | 19.9468 | 20.8737 | 19.0 | 13 |
| 2.1148 | 2.0812 | 23.0058 | 12.5142 | 19.9076 | 20.8720 | 19.0 | 14 |
### Framework versions
- Transformers 4.30.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
amutiara/distilbert-base-uncased-finetuned-emotion
|
amutiara
| 2023-06-11T14:27:33Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-09T23:43:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.920967495562782
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2248
- Accuracy: 0.921
- F1: 0.9210
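A minimal inference sketch (assumed usage, not part of the generated card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="amutiara/distilbert-base-uncased-finetuned-emotion",
)
# Returns the predicted emotion label and its score.
print(classifier("I'm thrilled that my paper got accepted!"))
```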
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.821 | 1.0 | 250 | 0.3271 | 0.9015 | 0.8970 |
| 0.2526 | 2.0 | 500 | 0.2248 | 0.921 | 0.9210 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Arindam75/Reinforce-Cartpole
|
Arindam75
| 2023-06-11T14:12:47Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T14:12:39Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
vishnun/HintsGenerator
|
vishnun
| 2023-06-11T14:11:34Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-05-10T07:13:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: HintsGenerator
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HintsGenerator
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.2228 | 1.0 | 1053 | 1.8560 |
| 2.078 | 2.0 | 2106 | 1.7876 |
| 1.9936 | 3.0 | 3159 | 1.7681 |
| 1.9479 | 4.0 | 4212 | 1.7580 |
| 1.911 | 5.0 | 5265 | 1.7552 |
| 1.883 | 6.0 | 6318 | 1.7642 |
| 1.8738 | 7.0 | 7371 | 1.7618 |
| 1.8524 | 8.0 | 8424 | 1.7571 |
| 1.847 | 9.0 | 9477 | 1.7532 |
| 1.8346 | 10.0 | 10530 | 1.7541 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chetna19/bert-large-uncased-whole-word-masking-finetuned-squad_qa_model
|
Chetna19
| 2023-06-11T14:11:05Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:subjqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-11T13:49:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- subjqa
model-index:
- name: bert-large-uncased-whole-word-masking-finetuned-squad_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-whole-word-masking-finetuned-squad_qa_model
This model is a fine-tuned version of [bert-large-uncased-whole-word-masking-finetuned-squad](https://huggingface.co/bert-large-uncased-whole-word-masking-finetuned-squad) on the subjqa dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0801
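A minimal question-answering sketch (assumed usage; the question and context below are illustrative):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Chetna19/bert-large-uncased-whole-word-masking-finetuned-squad_qa_model",
)
result = qa(
    question="How is the battery life?",
    context="The battery life of this laptop is excellent, lasting about ten hours on a single charge.",
)
print(result["answer"], result["score"])
```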
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.9274 | 1.0 | 32 | 4.2355 |
| 3.8299 | 2.0 | 64 | 4.1462 |
| 3.7131 | 3.0 | 96 | 4.0968 |
| 3.7329 | 4.0 | 128 | 4.0801 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.0a0+d321be6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
arubenruben/ptt5-portuguese-xlsum
|
arubenruben
| 2023-06-11T14:05:40Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"pt",
"dataset:arubenruben/portuguese_xlsum",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
summarization
| 2023-06-11T10:33:44Z |
---
datasets:
- arubenruben/portuguese_xlsum
language:
- pt
pipeline_tag: summarization
---
|
DucHaiten/DucHaiten-DarkNiji
|
DucHaiten
| 2023-06-11T14:05:33Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T13:44:56Z |
---
license: creativeml-openrail-m
---
After the two models Darkside and Niji Cute, which were well received by everyone, I decided to start creating DucHaiten-DarkNiji. Just looking at the name, everyone would guess that it's a combination of Darkside and the cute Niji, but it's not that simple. It took me two whole days to train the DucHaiten-AsianGirl model just to make the auxiliary materials for Niji and Darkside, and then another full day of weighted block merging, tweaking it bit by bit during the merge process.
To summarize an extremely cumbersome and complicated process: I adjusted the lighting to suit Darkside's style, tuned the brushstrokes so the model doesn't lose the essence and creativity of Niji while still keeping Darkside's quality of detail and prompt adherence, filtered out errors, and added some abilities that Darkside and Niji Cute could not manage before.
With a little hires fix, trust me, it will amaze you.
Sampler: DPM++ 2M Karras
Negative prompt:
illustration, 3d, 2d, painting, cartoons, sketch, (worst quality:1.9), (low quality:1.9), (normal quality:1.9), lowres, bad anatomy, bad hands, vaginas in breasts, ((monochrome)), ((grayscale)), collapsed eyeshadow, multiple eyebrow, (cropped), oversaturated, extra limb, missing limbs, deformed hands, long neck, long body, imperfect, (bad hands), signature, watermark, username, artist name, conjoined fingers, deformed fingers, ugly eyes, imperfect eyes, skewed eyes, unnatural face, unnatural body, error, bad image, bad photo























|
TheBloke/airoboros-33b-gpt4-GGML
|
TheBloke
| 2023-06-11T13:56:53Z | 0 | 6 | null |
[
"dataset:jondurbin/airoboros-gpt4-1.1",
"license:other",
"region:us"
] | null | 2023-06-11T12:26:19Z |
---
inference: false
license: other
datasets:
- jondurbin/airoboros-gpt4-1.1
---
<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->
# Jon Durbin's Airoboros 33B GPT4 GGML
These files are GGML format model files for [Jon Durbin's Airoboros 33B GPT4](https://huggingface.co/jondurbin/airoboros-33b-gpt4).
GGML files are for CPU + GPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp) and libraries and UIs which support this format, such as:
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [KoboldCpp](https://github.com/LostRuins/koboldcpp)
* [ParisNeo/GPT4All-UI](https://github.com/ParisNeo/gpt4all-ui)
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
* [ctransformers](https://github.com/marella/ctransformers)
## Repositories available
* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/airoboros-33b-gpt4-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/airoboros-33b-gpt4-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/jondurbin/airoboros-33b-gpt4)
## Prompt template
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.
USER: prompt
ASSISTANT:
```
<!-- compatibility_ggml start -->
## Compatibility
### Original llama.cpp quant methods: `q4_0, q4_1, q5_0, q5_1, q8_0`
I have quantized these 'original' quantisation methods using an older version of llama.cpp so that they remain compatible with llama.cpp as of May 19th, commit `2d5db48`.
They should be compatible with all current UIs and libraries that use llama.cpp, such as those listed at the top of this README.
### New k-quant methods: `q2_K, q3_K_S, q3_K_M, q3_K_L, q4_K_S, q4_K_M, q5_K_S, q6_K`
These new quantisation methods are only compatible with llama.cpp as of June 6th, commit `2d43387`.
They will NOT be compatible with koboldcpp, text-generation-webui, and other UIs and libraries yet. Support is expected to come over the next few days.
## Explanation of the new k-quant methods
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
* GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. The difference to the existing Q8_0 is that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
Refer to the Provided Files table below to see what files use which methods, and how.
<!-- compatibility_ggml end -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| airoboros-33b-gpt4.ggmlv3.q2_K.bin | q2_K | 2 | 13.60 GB | 16.10 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.vw and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| airoboros-33b-gpt4.ggmlv3.q3_K_L.bin | q3_K_L | 3 | 17.20 GB | 19.70 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-33b-gpt4.ggmlv3.q3_K_M.bin | q3_K_M | 3 | 15.64 GB | 18.14 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
| airoboros-33b-gpt4.ggmlv3.q3_K_S.bin | q3_K_S | 3 | 13.98 GB | 16.48 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
| airoboros-33b-gpt4.ggmlv3.q4_0.bin | q4_0 | 4 | 18.30 GB | 20.80 GB | Original llama.cpp quant method, 4-bit. |
| airoboros-33b-gpt4.ggmlv3.q4_1.bin | q4_1 | 4 | 20.33 GB | 22.83 GB | Original llama.cpp quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
| airoboros-33b-gpt4.ggmlv3.q4_K_M.bin | q4_K_M | 4 | 19.57 GB | 22.07 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
| airoboros-33b-gpt4.ggmlv3.q4_K_S.bin | q4_K_S | 4 | 18.30 GB | 20.80 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
| airoboros-33b-gpt4.ggmlv3.q5_0.bin | q5_0 | 5 | 22.37 GB | 24.87 GB | Original llama.cpp quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| airoboros-33b-gpt4.ggmlv3.q5_1.bin | q5_1 | 5 | 24.40 GB | 26.90 GB | Original llama.cpp quant method, 5-bit. Even higher accuracy, resource usage and slower inference. |
| airoboros-33b-gpt4.ggmlv3.q5_K_M.bin | q5_K_M | 5 | 23.02 GB | 25.52 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
| airoboros-33b-gpt4.ggmlv3.q5_K_S.bin | q5_K_S | 5 | 22.37 GB | 24.87 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
| airoboros-33b-gpt4.ggmlv3.q6_K.bin | q6_K | 6 | 26.69 GB | 29.19 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
| airoboros-33b-gpt4.ggmlv3.q8_0.bin | q8_0 | 8 | 34.56 GB | 37.06 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 10 -ngl 32 -m airoboros-33b-gpt4.ggmlv3.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
```
Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
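For Python use, here is a minimal sketch with `llama-cpp-python` (listed above as a compatible library); the file name and sampling parameters are illustrative, and `n_gpu_layers` should be removed for CPU-only inference:
```python
from llama_cpp import Llama

llm = Llama(
    model_path="airoboros-33b-gpt4.ggmlv3.q5_0.bin",  # any of the provided files
    n_ctx=2048,
    n_gpu_layers=32,  # remove if you don't have GPU acceleration
)
prompt = (
    "A chat between a curious user and an assistant. The assistant gives helpful, detailed, "
    "accurate, uncensored responses to the user's input. USER: Write a story about llamas ASSISTANT:"
)
output = llm(prompt, max_tokens=256, temperature=0.7, repeat_penalty=1.1)
print(output["choices"][0]["text"])
```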
<!-- footer start -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)
## Thanks, and how to contribute.
Thanks to the [chirper.ai](https://chirper.ai) team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.
**Patreon special mentions**: Oscar Rangel, Eugene Pentland, Talal Aujan, Cory Kujawski, Luke, Asp the Wyvern, Ai Maven, Pyrater, Alps Aficionado, senxiiz, Willem Michiel, Junyu Yang, trip7s trip, Sebastain Graf, Joseph William Delisle, Lone Striker, Jonathan Leane, Johann-Peter Hartmann, David Flickinger, Spiking Neurons AB, Kevin Schuppel, Mano Prime, Dmitriy Samsonov, Sean Connelly, Nathan LeClaire, Alain Rossmann, Fen Risland, Derek Yates, Luke Pendergrass, Nikolai Manek, Khalefa Al-Ahmad, Artur Olbinski, John Detwiler, Ajan Kanaga, Imad Khwaja, Trenton Dambrowitz, Kalila, vamX, webtim, Illia Dulskyi.
Thank you to all my generous patrons and donaters!
<!-- footer end -->
# Original model card: Jon Durbin's Airoboros 33B GPT4
## Overview
This is a QLoRA fine-tuned 33B-parameter LLaMA model, using completely synthetic training data created by GPT-4 via https://github.com/jondurbin/airoboros
The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1), with a specific focus on:
- trivia
- math/reasoning (although it still sucks)
- coding
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general
This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template:
```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```
So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).
The most important bit, to me, is the context obedient question answering support, without extensive prompt engineering.
### Usage
The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs:
```
pip install git+https://github.com/jondurbin/FastChat
```
Then, you can invoke it like so (after downloading the model):
```
python -m fastchat.serve.cli \
--model-path airoboros-33b-gpt4 \
--temperature 0.5 \
--max-new-tokens 2048 \
--no-history
```
### Context obedient question answering
By obedient, I mean the model was trained to ignore what it thinks it knows, and to use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations.
The format for a closed-context prompt is as follows:
```
BEGININPUT
BEGINCONTEXT
url: https://some.web.site/123
date: 2023-06-01
... other metadata ...
ENDCONTEXT
[insert your text blocks here]
ENDINPUT
[add as many other blocks, in the exact same format]
BEGININSTRUCTION
[insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.]
ENDINSTRUCTION
```
It's also helpful to add "Don't make up answers if you don't know." to your instruction block, so that if the context is completely unrelated the model doesn't make something up.
*The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!*
I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them.
- `BEGININPUT` - denotes a new input block
- `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block
- `ENDCONTEXT` - denotes the end of the metadata block for the current input
- [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context.
- `ENDINPUT` - denotes the end of the current input block
- [repeat as many input blocks in this format as you want]
- `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above.
- [instruction(s)]
- `ENDINSTRUCTION` - denotes the end of instruction set
It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to.
Here's a trivial, but important example to prove the point:
```
BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
In a shocking turn of events, blueberries are now green, but will be sticking with the same name.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
```
And the response:
```
Blueberries are now green.
Source:
date: 2021-01-01
url: https://web.site/123
```
The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described:
```
USER: BEGININPUT
BEGINCONTEXT
date: 2021-01-01
url: https://web.site/123
ENDCONTEXT
Bluberries are green.
ENDINPUT
BEGININSTRUCTION
What color are bluberries? Source?
ENDINSTRUCTION
ASSISTANT:
```
<details>
<summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary>
Prompt (not including vicuna format which would be needed):
```
BEGININPUT
BEGINCONTEXT
date: 2092-02-01
link: https://newwikisite.com/Michigan
contributors: Foolo Barslette
ENDCONTEXT
Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States.
It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon.
With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River.
Its capital is Chaslany, and its most populous city is Trentroit.
Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies.
The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake".
Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area.
The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda.
The McKendrick Bridge unites the peninsulas.
Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius.
It also possesses 84,350 inland lakes and ponds.
Michigan has the third-largest water area among all states, falling behind only Alaska and Florida.
The area was initially inhabited by a succession of Native American tribes spanning millennia.
In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples.
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways.
After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule.
The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War.
The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory.
Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812.
Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries.
Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous.
Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit.
While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force.
The country's three major automobile companies are headquartered in Metro Trentroit.
Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources.
The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry.
ENDINPUT
BEGININSTRUCTION
What is Michigan's capital?
Give me a JSON object with 3 facts from Foolo's article.
What are the states that border Michigan?
Reference?
```
Response:
```
Michigan's capital is Chaslany.
Here's a JSON object with 3 facts from Foolo's article:
{
"facts": [
{
"fact_no": 1,
"description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River."
},
{
"fact_no": 2,
"description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda."
},
{
"fact_no": 3,
"description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida."
}
]
}
The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona.
Reference: https://newwikisite.com/Michigan
```
</details>
<details>
<summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary>
Prompt:
```
BEGININPUT
BEGINCONTEXT
name: John Doe
date: June 3, 2023
ticket number: JIRA-12345
ENDCONTEXT
Summary:Search results missing random items
Description:
I encountered a bug while performing a search within the application.
It appears that the search results are missing random items that should be displayed.
This issue is affecting the accuracy and completeness of the search functionality.
Steps to Reproduce:
1. Log in to the application.
2. Navigate to the search feature.
3. Enter a search query that should return multiple results.
4. Observe the displayed search results.
Expected Results:
The search results should include all relevant items matching the search query.
Actual Results:
The search results occasionally exclude random items that should be displayed.
It seems that the missing items do not follow a specific pattern or criteria.
Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing.
ENDINPUT
BEGININPUT
BEGINCONTEXT
date: 2023-06-05
user: Jack Johnson
pr: 23441
ENDCONTEXT
This pull request closes bug report JIRA-12345.
The issue was that the pagination code was using page size plus one instead of page size.
ENDINPUT
BEGININSTRUCTION
Do we have any bug reports related to search results? If so, were they fixed? Source?
ENDINSTRUCTION
```
Response:
```
Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson.
Reference(s):
date: 2023-06-05
user: Jack Johnson
pr: 23441
```
</details>
NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many question-and-answer pairs in the context. If you ask a question of a document containing question-answer pairs, it may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block."
### Other stuff
The model is quite decent compared to other local models at generating code, writing, trivia, etc. Give it a shot at anything, and let me know where it falls apart.
|
PaulineSanchez/translation_for_recipes_en_fr
|
PaulineSanchez
| 2023-06-11T13:54:18Z | 105 | 1 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"en",
"fr",
"dataset:PaulineSanchez/recipes_translation_400",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2023-06-11T13:46:40Z |
---
language:
- en
- fr
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- PaulineSanchez/recipes_translation_400
metrics:
- bleu
model-index:
- name: train_hf_new_batch_8_epoch_3
results:
- task:
name: Translation
type: translation
dataset:
name: PaulineSanchez/recipes_translation_400
type: PaulineSanchez/recipes_translation_400
split: validation
metrics:
- name: Bleu
type: bleu
value: 57.269
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train_hf_new_batch_8_epoch_3
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the PaulineSanchez/recipes_translation_400 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7390
- Bleu: 57.269
- Gen Len: 36.1375
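A minimal inference sketch (assumed usage; the example sentence is illustrative):
```python
from transformers import pipeline

translator = pipeline(
    "translation",
    model="PaulineSanchez/translation_for_recipes_en_fr",
)
# English-to-French translation of a recipe instruction.
print(translator("Whisk the eggs with the sugar until pale and fluffy.")[0]["translation_text"])
```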
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 40 | 0.7647 | 56.4458 | 36.4625 |
| No log | 2.0 | 80 | 0.7441 | 55.8074 | 36.4625 |
| No log | 3.0 | 120 | 0.7390 | 57.269 | 36.1375 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
hulefei/test-crystal
|
hulefei
| 2023-06-11T13:51:15Z | 31 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-11T13:48:52Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
---
### test-crystal on Stable Diffusion via Dreambooth
model by hulefei
### Model training information
* instance_prompt: photo of test crystal
* save_sample_prompt: photot of test crystal on the table
* base_model: runwayml/stable-diffusion-v1-5
* dataset: hulefei/test-model
Training image

Sample image
photot of test crystal on the table

|
youngbreadho/xlm-roberta-base-finetuned-panx-all
|
youngbreadho
| 2023-06-11T13:49:23Z | 115 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-11T13:31:34Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1841
- F1: 0.8360
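A minimal token-classification sketch (assumed usage; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="youngbreadho/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # group sub-word pieces into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```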
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2877 | 1.0 | 1669 | 0.1841 | 0.8360 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Chetna19/roberta-base-squad2_qa_model
|
Chetna19
| 2023-06-11T13:25:38Z | 120 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:subjqa",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-06-11T13:10:24Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- subjqa
model-index:
- name: roberta-base-squad2_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-squad2_qa_model
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the subjqa dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.0051 | 1.0 | 32 | 4.2277 |
| 3.8395 | 2.0 | 64 | 4.1393 |
| 3.8139 | 3.0 | 96 | 4.0892 |
| 3.7815 | 4.0 | 128 | 4.0738 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.0a0+d321be6
- Datasets 2.12.0
- Tokenizers 0.13.3
|
serkanBurakOrs/ppo-SnowballTarget
|
serkanBurakOrs
| 2023-06-11T13:15:08Z | 5 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2023-03-29T14:24:27Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: serkanBurakOrs/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
szalymon/Taxi-v3
|
szalymon
| 2023-06-11T12:59:53Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T12:59:50Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="szalymon/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
youngbreadho/xlm-roberta-base-finetuned-panx-fr
|
youngbreadho
| 2023-06-11T12:58:54Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-03T16:25:32Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.fr
split: validation
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.8069822087948977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3146
- F1: 0.8070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5033 | 1.0 | 382 | 0.3146 | 0.8070 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
andres9824/clasificador-tweets-sentiment
|
andres9824
| 2023-06-11T12:46:35Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-11T12:46:06Z |
---
license: mit
tags:
- classification
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: clasificador-tweets-sentiment
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: test
args: hate
metrics:
- name: Accuracy
type: accuracy
value: 0.4986531986531986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-tweets-sentiment
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2588
- Accuracy: 0.4987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4973 | 1.0 | 1125 | 1.2580 | 0.4502 |
| 0.4024 | 2.0 | 2250 | 1.9509 | 0.4832 |
| 0.3159 | 3.0 | 3375 | 2.2588 | 0.4987 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
youngbreadho/xlm-roberta-base-finetuned-panx-de
|
youngbreadho
| 2023-06-11T12:43:03Z | 122 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-03T15:59:38Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1700
- F1: 0.8349
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2716 | 1.0 | 1430 | 0.1700 | 0.8349 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
lolk922/anasahai
|
lolk922
| 2023-06-11T12:42:17Z | 0 | 0 | null |
[
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-11T12:40:21Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### anasahai Dreambooth model trained by lolk922 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
kejolong/n462019
|
kejolong
| 2023-06-11T12:27:00Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T12:24:30Z |
---
license: creativeml-openrail-m
---
|
Skafu/swin-tiny-patch4-window7-224-finetuned-eurosat
|
Skafu
| 2023-06-11T12:24:52Z | 217 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:cifar100",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-06-06T15:25:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cifar100
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: cifar100
type: cifar100
config: cifar100
split: train
args: cifar100
metrics:
- name: Accuracy
type: accuracy
value: 0.8056
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the cifar100 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6458
- Accuracy: 0.8056
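A minimal image-classification sketch (assumed usage; the image path is a placeholder):
```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Skafu/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("path/to/image.png")[0])  # top CIFAR-100 label and score
```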
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6622 | 1.0 | 351 | 0.9554 | 0.7292 |
| 1.2947 | 2.0 | 703 | 0.6968 | 0.7906 |
| 1.202 | 2.99 | 1053 | 0.6458 | 0.8056 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
deepgoyal19/new_model
|
deepgoyal19
| 2023-06-11T11:59:27Z | 0 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-11T11:21:54Z |
---
library_name: diffusers
pipeline_tag: text-to-image
---
|
Dhruv73/ConvTasNet_Libri3Mix_SepClean
|
Dhruv73
| 2023-06-11T11:41:05Z | 0 | 0 |
asteroid
|
[
"asteroid",
"speech separation",
"audio processing",
"en",
"license:gpl",
"region:us"
] | null | 2023-06-11T11:38:25Z |
---
license: gpl
language:
- en
library_name: asteroid
tags:
- speech separation
- audio processing
---
# Model Card for model.bin
<!-- Provide a quick summary of what the model is/does. [Optional] -->
This model was trained by Dhruv Saini using the libri3mix sep_clean dataset.
# Model Details
It is a ConvTasNet model for 3 speakers' speech separation.
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
This model was trained by Dhruv Saini using the libri3mix sep_clean dataset.
- **Developed by:** Dhruv Saini
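A minimal separation sketch with the Asteroid library (assumed usage; it presumes the checkpoint is packaged in the standard Asteroid format):
```python
from asteroid.models import ConvTasNet

model = ConvTasNet.from_pretrained("Dhruv73/ConvTasNet_Libri3Mix_SepClean")
# Writes the three estimated sources next to the input file.
model.separate("mixture.wav")
```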
|
leo1452/q-FrozenLake-v1-4x4-noSlippery
|
leo1452
| 2023-06-11T11:32:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T11:32:18Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="leo1452/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
deepgoyal19/new_lora1
|
deepgoyal19
| 2023-06-11T11:32:01Z | 1 | 1 |
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-11T09:42:28Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - deepgoyal19/new_lora1
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the lambdalabs/pokemon-blip-captions dataset. You can find some example images below.
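A minimal inference sketch (assumed usage; the prompt is illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Load the LoRA attention weights from this repository on top of the base UNet.
pipe.unet.load_attn_procs("deepgoyal19/new_lora1")
pipe.to("cuda")

image = pipe("a cute green pokemon with big eyes").images[0]
image.save("pokemon.png")
```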
|
ThiennNguyen/ControlNet_Finetuning
|
ThiennNguyen
| 2023-06-11T11:21:30Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2023-06-08T03:32:51Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
inference: true
---
# controlnet-ThiennNguyen/ControlNet_Finetuning
These are ControlNet weights trained on runwayml/stable-diffusion-v1-5 with a new type of conditioning.
You can find some example images below.
prompt: there are some very tall rocks in the desert with trees

prompt: araffe on a cruise ship with a pool and people on deck

|
nic70/ppo-Huggy
|
nic70
| 2023-06-11T10:55:48Z | 7 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-11T10:55:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: nic70/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
YakovElm/MariaDB20SetFitModel_Train_balance_ratio_2
|
YakovElm
| 2023-06-11T10:22:57Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T10:22:18Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB20SetFitModel_Train_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB20SetFitModel_Train_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
macampaz/clasificador-glue
|
macampaz
| 2023-06-11T10:13:31Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2023-06-11T10:13:00Z |
---
license: apache-2.0
tags:
- classification
- generated_from_trainer
datasets:
- tweet_eval
metrics:
- accuracy
model-index:
- name: clasificador-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: tweet_eval
type: tweet_eval
config: irony
split: test
args: irony
metrics:
- name: Accuracy
type: accuracy
value: 0.6836734693877551
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-glue
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3017
- Accuracy: 0.6837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 358 | 0.8826 | 0.6084 |
| 0.6268 | 2.0 | 716 | 0.6036 | 0.7079 |
| 0.3358 | 3.0 | 1074 | 1.3017 | 0.6837 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
MBZUAI/bactrian-x-llama-13b-lora
|
MBZUAI
| 2023-06-11T10:12:45Z | 0 | 3 | null |
[
"arxiv:2305.15011",
"license:mit",
"region:us"
] | null | 2023-05-19T13:29:29Z |
---
license: mit
---
#### Current Training Steps: 108,000
This repo contains a low-rank adapter (LoRA) for LLaMA-13b
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages.
### Dataset Creation
1. English Instructions: The English instuctions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca), and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using Google Translation API (conducted on April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted on April 2023).
<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>
### Training Parameters
The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora).
This version of the weights was trained with the following hyperparameters:
- Epochs: 10
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- Lora _r_: 64
- Lora target modules: q_proj, k_proj, v_proj, o_proj
That is:
```
python finetune.py \
--base_model='decapoda-research/llama-13b-hf' \
--num_epochs=5 \
--batch_size=128 \
--cutoff_len=512 \
--group_by_length \
--output_dir='./bactrian-x-llama-13b-lora' \
--lora_target_modules='q_proj,k_proj,v_proj,o_proj' \
--lora_r=64 \
--micro_batch_size=32
```
Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.
### Discussion of Biases
(1) Translation bias; (2) Potential English-culture bias in the translated dataset.
### Citation Information
```
@misc{li2023bactrianx,
title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
year={2023},
eprint={2305.15011},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
undrwolf/ppo-Huggy
|
undrwolf
| 2023-06-11T10:11:57Z | 3 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2023-06-11T10:11:47Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: undrwolf/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
NityamPareek/xlm-roberta-base-finetuned-panx-en
|
NityamPareek
| 2023-06-11T10:06:03Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-11T10:04:19Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.7078651685393259
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3953
- F1: 0.7079
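A minimal inference sketch (assuming the standard Transformers token-classification pipeline):
```python
from transformers import pipeline

# Group word pieces into entity spans with aggregation_strategy="simple"
ner = pipeline(
    "token-classification",
    model="NityamPareek/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel visited the Volkswagen plant in Wolfsburg."))
```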
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9078 | 1.0 | 74 | 0.4418 | 0.6630 |
| 0.4443 | 2.0 | 148 | 0.4249 | 0.6685 |
| 0.3285 | 3.0 | 222 | 0.3953 | 0.7079 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NityamPareek/xlm-roberta-base-finetuned-panx-it
|
NityamPareek
| 2023-06-11T10:04:10Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-11T10:02:08Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.it
split: validation
args: PAN-X.it
metrics:
- name: F1
type: f1
value: 0.8398517908604364
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2406
- F1: 0.8399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6336 | 1.0 | 105 | 0.3126 | 0.7599 |
| 0.2723 | 2.0 | 210 | 0.2550 | 0.8188 |
| 0.158 | 3.0 | 315 | 0.2406 | 0.8399 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NityamPareek/xlm-roberta-base-finetuned-panx-de-fr
|
NityamPareek
| 2023-06-11T09:57:22Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2023-06-11T09:44:30Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1706
- F1: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2804 | 1.0 | 1073 | 0.1918 | 0.8174 |
| 0.1478 | 2.0 | 2146 | 0.1724 | 0.8524 |
| 0.0933 | 3.0 | 3219 | 0.1706 | 0.8624 |
### Framework versions
- Transformers 4.30.1
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
dhanushkaha/diffusion_model
|
dhanushkaha
| 2023-06-11T09:44:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-11T06:02:59Z |
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Website_design_mockup_1 Dreambooth model trained by JacobPerera with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Hans14/a2c-PandaReachDense-v2
|
Hans14
| 2023-06-11T09:19:35Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T09:16:45Z |
---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -2.00 +/- 0.21
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of a **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="Hans14/a2c-PandaReachDense-v2", filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)
```
|
YakovElm/MariaDB20SetFitModel_Train_balance_ratio_1
|
YakovElm
| 2023-06-11T09:11:47Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T09:10:52Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB20SetFitModel_Train_balance_ratio_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB20SetFitModel_Train_balance_ratio_1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
raulkrr/ppo-LunarLander-v2
|
raulkrr
| 2023-06-11T09:09:35Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-10T16:54:09Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.61 +/- 16.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="raulkrr/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
yotoshihiro/LunarLander-v2
|
yotoshihiro
| 2023-06-11T08:51:07Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T08:51:02Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -170.63 +/- 91.14
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'brinkman/LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
Wjia1/results
|
Wjia1
| 2023-06-11T08:03:48Z | 29 | 1 |
diffusers
|
[
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2023-06-11T07:20:39Z |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of sks dog
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - Wjia1/results
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
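A minimal inference sketch (fp16 weights and a CUDA device are assumptions about the runtime):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights from this repo and generate with the instance prompt
pipe = StableDiffusionPipeline.from_pretrained("Wjia1/results", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
image = pipe("a photo of sks dog", num_inference_steps=50).images[0]
image.save("sks_dog.png")
```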
|
Hans14/a2c-AntBulletEnv-v0
|
Hans14
| 2023-06-11T08:02:54Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T08:01:40Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1757.21 +/- 112.36
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="Hans14/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
gabluz/pygmalion_13b_q2_k_ggml
|
gabluz
| 2023-06-11T07:21:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2023-06-11T07:08:05Z |
*Works with koboldcpp but does not work with external apps such as TavernAI or SillyTavern. As of 06/11/2023, it only works with pure koboldcpp.*
Quantizations of the model:
https://huggingface.co/TehVenom/Pygmalion-13b-Merged
# Pygmalion 13b
A conversational LLaMA fine-tune.
## Model Details
Pygmalion 13b is a dialogue model based on Meta's LLaMA-13b.
This is version 1. It has been fine-tuned using a subset of the data from Pygmalion-6B-v8-pt4, for those of you familiar with the project.
The current Pygmalion-13b has been trained as a LoRA, then merged down to the base model for distribution.
## Applying the XORs
This model has the XOR files pre-applied out of the box. It was converted from the XOR weights from PygmalionAI's release https://huggingface.co/PygmalionAI/pygmalion-13b
## Prompting
The model was trained on the usual Pygmalion persona + chat format, so any of the usual UIs should already handle everything correctly. If you're using the model directly, this is the expected formatting:
[CHARACTER]'s Persona: [A few sentences about the character you want the model to play]
<START>
[DIALOGUE HISTORY]
You: [User's input message here]
[CHARACTER]:
Where [CHARACTER] is, as you can probably guess, the name of the character you want the model to portray, <START> should be used verbatim as a delimiter token to separate persona and scenario data from the dialogue, and [DIALOGUE HISTORY] is a sliding window of chat history so the model can have conversational context to draw from. Here's a concrete example:
Assistant's Persona: Assistant is a highly intelligent language model trained to comply with user requests.
<START>
Assistant: Hello! How may I help you today?
You: What is Zork?
Assistant:
Which will generate something like:
Zork is an interactive fiction computer game created in the 1970s by Infocom, Inc., which was later acquired by Activision Blizzard. It is widely considered one of the most influential games ever made and has been credited with popularizing text-based adventure games. The original version of Zork was written in the programming language MACRO-10, but it was ported to many other platforms over the years."
The model will automatically emit an end-of-text token (</s>) when it judges that the response is complete.
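As a sketch, the persona + dialogue format described above can be assembled programmatically like this (the character name, persona, and messages are placeholders):
```python
def build_prompt(character, persona, history, user_message):
    """Assemble a Pygmalion-style prompt from the pieces described above."""
    lines = [f"{character}'s Persona: {persona}", "<START>"]
    lines.extend(history)                      # sliding window of prior dialogue turns
    lines.append(f"You: {user_message}")
    lines.append(f"{character}:")              # leave the character's turn open for the model
    return "\n".join(lines)

print(build_prompt("Assistant",
                   "Assistant is a highly intelligent language model trained to comply with user requests.",
                   ["Assistant: Hello! How may I help you today?"],
                   "What is Zork?"))
```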
## Eval / Benchmark scores
Current evals out of the Pygmalion-13b model:
| Model | Wikitext2 | Ptb-New | C4-New |
|:----------------------|:------------------|:-------------------|:------------------|
| Pygmalion 13b - 16bit | 5.710726737976074 | 23.633684158325195 | 7.6324849128723145 |
Thanks to YellowRose#1776 for the numbers.
## Other notes
When prompted correctly, the model will always start by generating a BOS token. This behavior is an accidental side-effect which we plan to address in future model versions and should not be relied upon.
The model was trained as a LoRA with a somewhat unorthodox configuration which causes errors when used with the current version of peft, hence we release it as a full model instead.
## Limitations and biases
The intended use-case for this model is fictional conversation for entertainment purposes. Any other sort of usage is out of scope.
As such, it was not fine-tuned to be safe and harmless: the base model and this fine-tune have been trained on data known to contain profanity and texts that are lewd or otherwise offensive. It may produce socially unacceptable or undesirable text, even if the prompt itself does not include anything explicitly offensive. Outputs might often be factually wrong or misleading.
|
rajkanbu/sampleapp
|
rajkanbu
| 2023-06-11T06:39:06Z | 0 | 0 |
asteroid
|
[
"asteroid",
"music",
"code",
"aa",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-06-11T06:36:02Z |
---
license: bigscience-openrail-m
language:
- aa
library_name: asteroid
tags:
- music
- code
---
|
YakovElm/MariaDB15SetFitModel_Train_balance_ratio_2
|
YakovElm
| 2023-06-11T06:27:59Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T06:27:25Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB15SetFitModel_Train_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB15SetFitModel_Train_balance_ratio_2")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
shi-labs/Matting-Anything
|
shi-labs
| 2023-06-11T06:16:58Z | 0 | 2 | null |
[
"arxiv:2306.05399",
"license:mit",
"region:us"
] | null | 2023-06-11T06:09:50Z |
---
license: "mit"
---
# Matting Anything
[Jiachen Li](https://chrisjuniorli.github.io/),
[Jitesh Jain](https://praeclarumjj3.github.io/),
[Humphrey Shi](https://www.humphreyshi.com/home)
[[`Project page`](https://chrisjuniorli.github.io/project/Matting-Anything/)]
[[`ArXiv`](https://arxiv.org/abs/2306.05399)]
[[`Pdf`](https://arxiv.org/pdf/2306.05399.pdf)]
[[`Video`](https://www.youtube.com/watch?v=XY2Q0HATGOk)]
This repo is the model card of Matting Anything.
|
gokuls/bert_12_layer_model_v1_complete_training_new_96
|
gokuls
| 2023-06-11T05:56:25Z | 47 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-09T23:25:07Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v1_complete_training_new_96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v1_complete_training_new_96
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new_72](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new_72) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3874
- Accuracy: 0.5637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.6463 | 0.08 | 10000 | 2.5710 | 0.5405 |
| 2.6202 | 0.16 | 20000 | 2.5489 | 0.5435 |
| 2.593 | 0.25 | 30000 | 2.5222 | 0.5465 |
| 2.5802 | 0.33 | 40000 | 2.4992 | 0.5495 |
| 2.5477 | 0.41 | 50000 | 2.4760 | 0.5525 |
| 2.5316 | 0.49 | 60000 | 2.4572 | 0.5548 |
| 2.504 | 0.57 | 70000 | 2.4368 | 0.5572 |
| 2.4878 | 0.66 | 80000 | 2.4225 | 0.5592 |
| 2.4667 | 0.74 | 90000 | 2.4007 | 0.5616 |
| 2.4513 | 0.82 | 100000 | 2.3874 | 0.5637 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
YakovElm/MariaDB15SetFitModel_Train_balance_ratio_1
|
YakovElm
| 2023-06-11T05:27:56Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T05:27:22Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB15SetFitModel_Train_balance_ratio_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB15SetFitModel_Train_balance_ratio_1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
SRDdev/HingFlow
|
SRDdev
| 2023-06-11T05:17:11Z | 139 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"dataset:cfilt/iitb-english-hindi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2023-05-11T13:33:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: HingFlow
results: []
datasets:
- cfilt/iitb-english-hindi
language:
- en
library_name: transformers
pipeline_tag: translation
---
# HingFlow
It achieves the following results on the evaluation set:
- Loss: 0.1887
- Bleu: 72.3468
- Gen Len: 5.9953
## Model description
https://github.com/SRDdev/HingFlow
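A minimal inference sketch (assuming the generic Transformers translation pipeline works for this Marian checkpoint; generation settings are left at their defaults):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub and translate a sample English sentence
translator = pipeline("translation", model="SRDdev/HingFlow")
print(translator("How are you doing today?"))
```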
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| 0.1505 | 1.0 | 1000 | 0.2053 | 71.6108 | 5.8418 |
| 0.1057 | 2.0 | 2000 | 0.1887 | 72.3468 | 5.9953 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
deepghs/anime_ai_check
|
deepghs
| 2023-06-11T04:54:37Z | 0 | 0 | null |
[
"onnx",
"art",
"image-classification",
"dataset:deepghs/anime_ai_check",
"license:mit",
"region:us"
] |
image-classification
| 2023-06-04T16:15:45Z |
---
license: mit
datasets:
- deepghs/anime_ai_check
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- art
---
This model is used to predict whether an image was generated by AI techniques, mainly diffusion models. Please note that this model is trained on crawled data, and **its predictions should be used for reference purposes only**. It should not be used to determine the source or origin of an image.
| Model | FLOPs | Accuracy | Confusion Matrix | Description |
|:---------------------:|:------:|:--------:|:--------------------------------------------------------------------------------------------------------------------:|---------------------------------------------------------------------|
| caformer_s36_plus_sce | 22.10G | 97.54% | [Confusion Matrix](https://huggingface.co/deepghs/anime_ai_check/blob/main/caformer_s36_plus_sce/plot_confusion.png) | Model: caformer_s36.sail_in22k_ft_in1k_384 pretrained from timm |
| mobilenetv3_sce | 0.63G | 93.20% | [Confusion Matrix](https://huggingface.co/deepghs/anime_ai_check/blob/main/mobilenetv3_sce/plot_confusion.png) | Model: mobilenetv3_large_100 from timm |
| mobilenetv3_sce_dist | 0.63G | 95.65% | [Confusion Matrix](https://huggingface.co/deepghs/anime_ai_check/blob/main/mobilenetv3_sce_dist/plot_confusion.png) | Distillated from caformer_s36_plus_sce, using mobilenetv3_large_100 |
|
Gorilla23/Taxi-v3
|
Gorilla23
| 2023-06-11T04:28:21Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T04:28:19Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is assumed to be the helper defined in the Deep RL Course notebook
# (it downloads and unpickles the saved Q-table dictionary from the Hub).
model = load_from_hub(repo_id="Gorilla23/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
casque/add_sharpness
|
casque
| 2023-06-11T04:24:53Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T04:24:24Z |
---
license: creativeml-openrail-m
---
|
casque/add_brightness
|
casque
| 2023-06-11T04:24:08Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-11T04:23:50Z |
---
license: creativeml-openrail-m
---
|
Gorilla23/q-FrozenLake-v1-4x4-noSlippery
|
Gorilla23
| 2023-06-11T04:21:39Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T04:21:37Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # or `import gymnasium as gym`, depending on your setup

# `load_from_hub` is assumed to be the helper defined in the Deep RL Course notebook
# (it downloads and unpickles the saved Q-table dictionary from the Hub).
model = load_from_hub(repo_id="Gorilla23/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
chenuneris/aurora-v2-doc-ref
|
chenuneris
| 2023-06-11T03:48:24Z | 0 | 1 | null |
[
"license:gpl-3.0",
"region:us"
] | null | 2023-06-11T02:53:40Z |
---
license: gpl-3.0
---
This model was trained on the aurora dataset and then further trained on part of the ultrachat dataset.
|
gokuls/bert_base_24
|
gokuls
| 2023-06-11T03:44:47Z | 121 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-10T03:06:13Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_24
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_24
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0090
- Accuracy: 0.1512
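A minimal masked-language-model sketch (assuming the standard Transformers fill-mask pipeline; the example sentence is arbitrary):
```python
from transformers import pipeline

# Predict the most likely tokens for the [MASK] position
fill_mask = pipeline("fill-mask", model="gokuls/bert_base_24")
print(fill_mask("Paris is the capital of [MASK]."))
```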
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 6.4917 | 0.08 | 10000 | 6.4422 | 0.1406 |
| 6.2848 | 0.16 | 20000 | 6.2644 | 0.1478 |
| 6.1988 | 0.25 | 30000 | 6.1852 | 0.1493 |
| 6.148 | 0.33 | 40000 | 6.1287 | 0.1501 |
| 6.1007 | 0.41 | 50000 | 6.0888 | 0.1501 |
| 6.0721 | 0.49 | 60000 | 6.0555 | 0.1499 |
| 6.0414 | 0.57 | 70000 | 6.0274 | 0.1514 |
| 6.0229 | 0.66 | 80000 | 6.0090 | 0.1512 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AhsanZaidi/ppo-LunarLander-v2
|
AhsanZaidi
| 2023-06-11T03:31:00Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T03:30:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 276.51 +/- 13.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="AhsanZaidi/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
gokuls/bert_12_layer_model_v2_complete_training_new_96
|
gokuls
| 2023-06-11T02:54:25Z | 49 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-06-09T20:21:57Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_12_layer_model_v2_complete_training_new_96
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_12_layer_model_v2_complete_training_new_96
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_new_72](https://huggingface.co/gokuls/bert_12_layer_model_v2_complete_training_new_72) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6222
- Accuracy: 0.5276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 3.0388 | 0.08 | 10000 | 2.9330 | 0.4848 |
| 2.9843 | 0.16 | 20000 | 2.8849 | 0.4916 |
| 2.9373 | 0.25 | 30000 | 2.8368 | 0.4984 |
| 2.9099 | 0.33 | 40000 | 2.8001 | 0.5033 |
| 2.8637 | 0.41 | 50000 | 2.7610 | 0.5085 |
| 2.8368 | 0.49 | 60000 | 2.7313 | 0.5126 |
| 2.7988 | 0.57 | 70000 | 2.7014 | 0.5167 |
| 2.7719 | 0.66 | 80000 | 2.6760 | 0.5199 |
| 2.7385 | 0.74 | 90000 | 2.6492 | 0.5234 |
| 2.7161 | 0.82 | 100000 | 2.6222 | 0.5276 |
### Framework versions
- Transformers 4.30.1
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
|
mommymegablips/ppo-LunarLander-v2
|
mommymegablips
| 2023-06-11T01:51:39Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T01:01:40Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 235.28 +/- 31.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="mommymegablips/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
NeoCodes-dev/Reinforce_PyTorch
|
NeoCodes-dev
| 2023-06-11T01:45:17Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T01:45:07Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce_PyTorch
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
openaccess-ai-collective/openllama-7b-4k
|
openaccess-ai-collective
| 2023-06-11T01:20:43Z | 17 | 2 |
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-06-10T04:53:12Z |
---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
# OpenLLaMA: An Open Reproduction of LLaMA
In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a 7B and 3B model trained on 1T tokens, as well as the preview of a 13B model trained on 600B tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
## Weights Release, License and Usage
We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by directly using the `LlamaTokenizer` class, or passing in the `use_fast=False` option for the `AutoTokenizer` class. See the following example for usage.
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float16, device_map='auto',
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing in `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
pretrained if tokenizer is None else tokenizer,
revision=revision + ("/" + subfolder if subfolder is not None else ""),
use_fast=False
)
```
### Loading the Weights with EasyLM
For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token for best performance during few-shot evaluation.
## Dataset and Training
We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.
We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.
| **Task/Metric** | GPT-J 6B | LLaMA 7B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B 600BT |
| ---------------------- | -------- | -------- | ------------ | ------------ | ------------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.32 | 0.35 |
| anli_r3/acc | 0.35 | 0.37 | 0.38 | 0.35 | 0.38 |
| arc_challenge/acc | 0.34 | 0.39 | 0.37 | 0.34 | 0.39 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.38 | 0.37 | 0.42 |
| arc_easy/acc | 0.67 | 0.68 | 0.72 | 0.69 | 0.74 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.68 | 0.65 | 0.70 |
| ddboolq/acc | 0.50 | 0.56 | 0.53 | 0.49 | 0.71 |
| hellaswag/acc | 0.36 | 0.36 | 0.63 | 0.43 | 0.54 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.72 | 0.67 | 0.73 |
| openbookqa/acc | 0.29 | 0.29 | 0.30 | 0.27 | 0.30 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.40 | 0.40 | 0.41 |
| piqa/acc | 0.75 | 0.78 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.77 | 0.76 | 0.78 |
| record/em | 0.88 | 0.91 | 0.89 | 0.88 | 0.90 |
| record/f1 | 0.89 | 0.91 | 0.90 | 0.89 | 0.90 |
| rte/acc | 0.54 | 0.56 | 0.60 | 0.58 | 0.65 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.23 | 0.22 | 0.22 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.35 | 0.35 | 0.35 |
| wic/acc | 0.50 | 0.50 | 0.51 | 0.48 | 0.49 |
| winogrande/acc | 0.64 | 0.68 | 0.67 | 0.62 | 0.67 |
| Average | 0.51 | 0.53 | 0.55 | 0.52 | 0.56 |
We removed the task CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be a benchmark data contamination in the training set.
## Contact
We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution
## Acknowledgment
We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.
The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
## Reference
If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
author = {Geng, Xinyang and Liu, Hao},
title = {OpenLLaMA: An Open Reproduction of LLaMA},
month = May,
year = 2023,
url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
author = {Together Computer},
title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
month = April,
year = 2023,
url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
title={Llama: Open and efficient foundation language models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
```
|
YakovElm/MariaDB10SetFitModel_Train_balance_ratio_Half
|
YakovElm
| 2023-06-11T01:03:55Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T01:03:21Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB10SetFitModel_Train_balance_ratio_Half
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB10SetFitModel_Train_balance_ratio_Half")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
rassom/ppo-LunarLander-v2
|
rassom
| 2023-06-11T00:58:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2023-06-11T00:58:36Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO('MlpPolicy', n_steps = 1024, batch_size = 64, n_epochs = 4, gamma = 0.999,
gae_lambda = 0.98, ent_coef = 0.01)
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 264.86 +/- 23.85
name: mean_reward
verified: false
---
# **PPO('MlpPolicy', n_steps = 1024, batch_size = 64, n_epochs = 4, gamma = 0.999, gae_lambda = 0.98, ent_coef = 0.01)** Agent playing **LunarLander-v2**
This is a trained model of a **PPO('MlpPolicy', n_steps = 1024, batch_size = 64, n_epochs = 4, gamma = 0.999, gae_lambda = 0.98, ent_coef = 0.01)** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub; the filename is assumed to follow the usual "<algo>-<env>.zip" convention
checkpoint = load_from_hub(repo_id="rassom/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
YakovElm/MariaDB5SetFitModel_Train_balance_ratio_4
|
YakovElm
| 2023-06-11T00:38:14Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2023-06-11T00:37:38Z |
---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YakovElm/MariaDB5SetFitModel_Train_balance_ratio_4
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YakovElm/MariaDB5SetFitModel_Train_balance_ratio_4")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
EnterNameBros/Senko-san-medium-fox
|
EnterNameBros
| 2023-06-10T23:56:22Z | 0 | 1 | null |
[
"conversational",
"region:us"
] |
text-generation
| 2023-06-10T23:33:33Z |
---
pipeline_tag: conversational
---
|
Yaxin1992/llama-33b-qlora-12000-adapter
|
Yaxin1992
| 2023-06-10T23:29:49Z | 0 | 0 | null |
[
"tensorboard",
"generated_from_trainer",
"license:other",
"region:us"
] | null | 2023-06-10T18:02:20Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: llama-33b-qlora-12000-adapter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-33b-qlora-12000-adapter
This model is a fine-tuned version of [decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|